Society is a fast-moving jigsaw of people and the infrastructure within which they interact. The cornerstone of this interaction is communication, which forms the very basis of modern existence. Over time, communication has taken on many forms, but its intent has remained unchanged: from romantic candlelit conversations to ads on Google and Facebook, every form of communication attempts to nudge its target into some action.
This simplistic idea became more complicated with the advent of Artificial Intelligence, or AI. AI is now used to store, analyse, and even create communication, removing the need for humans to be on both sides of the dialogue. Consumer chatbots on your favourite e-commerce website, automated emails confirming reservations, and the more advanced algorithm-generated recommendations for your next holiday destination are all instances of machines interacting with humans to nudge a desired action.
But what happens when these machines can produce extremely sophisticated, human-like communication? Aren’t we more likely to be convinced when a seemingly specialised, well-read human is hitting all the right emotional notes?
The Advent of Predictive AI
The fascination with creating machines that can communicate like humans has been around since Alan Turing formulated the Turing test in 1950 as a measure of machine intelligence. The first attempts at conversational software date back to the development of ELIZA around 1966, which worked by matching well-known human phrases against simple patterns and suggesting plausible answers to common questions. But ELIZA was still a far cry from Gmail suggesting sentence completions or a Saturday-night heart-to-heart with Siri.
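To get a feel for how primitive this was, here is a minimal, hypothetical sketch of ELIZA-style pattern matching in Python; the patterns and canned responses are illustrative, not the original script.

```python
import re

# Illustrative ELIZA-style rules: a regex pattern paired with a canned
# response template. The real ELIZA used a richer keyword-ranking script.
RULES = [
    (r"I feel (.*)", "Why do you feel {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when no pattern matches

print(respond("I feel anxious about AI"))  # Why do you feel anxious about AI?
```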
The first breakthrough came when algorithms could establish statistical relationships between texts and understand which words are close to others in meaning or grammatical construction. This led to an understanding of relations between words across documents, or even whole corpora, making it possible to decipher their main ideas and classify them. This was pretty much where AI research was stuck for years.
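A rough illustration of this statistical approach, a sketch using scikit-learn's TF-IDF vectoriser to measure how close two documents are (the example sentences are made up for demonstration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The central bank raised interest rates again this quarter.",
    "Interest rates were hiked by the central bank for a second time.",
    "The striker scored twice in the final minutes of the match.",
]

# Represent each document as a vector of word statistics, then compare
# the vectors: documents about the same topic end up close together.
vectors = TfidfVectorizer().fit_transform(docs)
print(cosine_similarity(vectors[0], vectors[1]))  # high: same topic
print(cosine_similarity(vectors[0], vectors[2]))  # low: different topic
```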
The next breakthrough came with models that could remember the last word, predict the next one, and string together a somewhat sensible phrase, much like a child learning to speak. Remember autocomplete on Gmail and how revolutionary it was?
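A toy, hypothetical version of such a model: count which word follows which in some training text, then predict the most frequent continuation. Real systems were far more sophisticated, but the principle is the same.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    # Return the most frequent continuation seen in the training text.
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- seen most often after 'the'
```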
OpenAI, the tech non-profit first seeded by Elon Musk and now funded by Microsoft, then took these primitive models and trained them on all the text they could find. Think big Greek libraries, only bigger. They spent enormous resources turning this model into something of a teenager, one who talks a lot but does not always make sense: GPT-1 and GPT-2.
These models were good at certain tasks and not at others, and overall coherence was missing. When they lacked contextual understanding, they would generate something off-topic. To solve this, the model was trained on an even larger set of text over a longer period. With better computational resources and more time invested, predictive AI started understanding a wider array of contexts and writing styles. Enter GPT-3.
GPT-3 saw the teenager of GPT-1 and GPT-2 grow into a young adult on the cusp of undergrad. It not only understands different styles of written text but also memorises how facts are used in documents. It can write code for you, answer your Wikipedia questions, hold a conversation, and even complete the next paragraph of your novel.
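As an illustration, here is roughly what a GPT-3 completion request looked like through OpenAI's Python library at launch; the API key is a placeholder, and the exact parameters are an assumption based on the public documentation of the time.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; requires OpenAI API access

# Ask the model to continue a prompt; 'davinci' was the largest GPT-3
# engine exposed through the original completion API.
response = openai.Completion.create(
    engine="davinci",
    prompt="The next paragraph of the novel reads:",
    max_tokens=60,
    temperature=0.7,
)
print(response.choices[0].text)
```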
The potential to spread fake news and hate speech
Setting your Terminator fantasies aside, with GPT-3, predictive language AI has ascended a new peak. It can produce content with near-human fluency. What’s more, you can train it to write like any author with a sizeable body of work. The key here is that GPT-3 can produce all this content much faster than humans can. If deployed strategically, it could overwhelm communication channels, from newsrooms to informal WhatsApp groups, with tailor-made messages promoting the same underlying idea. It could just as easily be used to propagate racial and gender bias, spewing toxic language at scale.
Consequently, it could be used by newspapers, magazines, interest groups, and even political parties to produce sophisticated fake news and subliminal hate-speech messages at an alarming rate. With fake news and hate speech already difficult to control, the ramifications of such predictive language technology are unexplored and potentially devastating.
Social and legal preparedness for predictive language AI
GPT-3 represents a mere sliver of a new era of predictive AI technologies, but it highlights defining choices society will have to make: balancing innovation against the risks of misuse, protecting human labour, and regulating these technologies.
Experts raised similar concerns when GPT-2 was launched. Though very few of those materialised, the questions remain: What will GPT-10 be able to achieve? How deep do deepfakes go? What can we do to prepare for this new technology?
These choices and questions aren’t new; we have faced them time and again, most remarkably at the cusp of the Industrial Revolution. From it came solutions such as patents, modern education systems, safety regulations, and financial audits. While we don’t yet know the perfect solution to these AI-generated problems, learning from the past can help us prepare for this technology preemptively.
Citizen awareness and education are key to ensuring that people understand the existence of fake news and hate speech. Propaganda comes with distinct markers: heavy reliance on emotion, fear-inducing imagery, a lack of evidence or credible sources, and hyperbolic facts and figures. Aggressive fact-checking and software that spots computer-generated content are two possible countermeasures, as sketched below.
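As an example of the second approach, here is a minimal sketch using Hugging Face's transformers library with the RoBERTa-based detector OpenAI released for spotting GPT-2 output; the model name and output labels are taken from the public model hub and should be treated as assumptions.

```python
from transformers import pipeline

# A classifier fine-tuned to distinguish GPT-2 output from human text.
detector = pipeline("text-classification",
                    model="roberta-base-openai-detector")

result = detector("Breaking: scientists confirm the moon is made of cheese.")
print(result)  # e.g. [{'label': 'Fake', 'score': ...}], 'Fake' = machine-generated
```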
Predictive language AI and its research need to be regulated through policies and frameworks that closely monitor progress and, more importantly, restrict deployment in sensitive sectors. News and education are examples: there is value in keeping humans in the publication process to ensure correctness, authenticity and, above all, accountability.
Another unexplored but promising angle is the potential impact such technology can have on human productivity. Just as Canva revolutionised micro-marketing, GPT-3 could vastly expand the ability of small businesses to create compelling narratives and grow. While some jobs will be lost, new jobs will be created for people who learn to use these tools. These innovations also pave the way for further research.
Other areas in which predictive AI could be disruptive without being destructive include widening access to regional languages, preserving endangered vernaculars, and implementing education programmes in poorer countries.
In conclusion, while technological advancements are leapfrogging ahead and redefining the very core of our lives, their ability to definitively improve the human experience remains to be seen. The prognosis isn’t encouraging.