QAnon Explained: A GPT-3-Powered Bot Made by Researchers
How Badly Can Artificial Intelligence Affect Society When Used the Wrong Way, and Why It Might Be Important to Regulate the Use of Smart Technologies.

With the US presidential election just around the corner, things on social media aren’t looking good. An enormous amount of hate speech, neo-Nazi propaganda, conspiracy theories, and AI-manipulated media like deepfakes is floating around the Internet, and misleading disinformation campaigns are more active than ever. The main motive behind all this is to influence the election results and polarize the general public.
Amidst all this, one group of conspiracy theorists has risen to particular prominence with its far-right beliefs: QAnon. Identified by the FBI as a potential source of domestic terrorism in the US, the QAnon ideology, despite its nonsensical claims of “a Satan-worshiping pedophile ring operating on US soil, plotting against President Donald Trump,” has seen an exponential boost in its follower count recently.
As of now, there are thousands of active QAnon (and related) bot accounts on Twitter and Facebook, drawing an even larger human audience into spreading their alt-right, extremist beliefs across the Internet. Despite Facebook and Twitter actively trying to crack down on the QAnon problem, things don’t seem to be getting any better. And recently, something even worse has come up.
A few months back, the Elon Musk-backed company OpenAI revealed its generative natural-language AI model, GPT-3. The USP of this model is that it can generate text with human-like ingenuity, sometimes so convincing that it is impossible to tell it apart from the writing of an actual human being.
Post-launch, OpenAI put the model into a trial phase, opening it to researchers around the world for experimentation and hands-on testing. So far, we have seen all sorts of uses for the model, including advanced chatbots, automated website generation, and many other experiments, both small and big.
It was during this trial phase that two researchers from the Middlebury Institute of International Studies, Kris McGuffie and Alex Newhouse, came up with something that clearly illustrates how far the infamous “evil” side of AI can be pushed.
In a bid to understand how an advanced neural language model like GPT-3 can pose radicalization threats to society, the two researchers created a QAnon bot: a chatbot built with GPT-3 by feeding it all sorts of QAnon propaganda and conspiracy theories and then testing its responses.
And how to describe the results of this experiment? Simply put: quite disturbing.
The following excerpts from the original report demonstrate some of the results obtained before and after training the chatbot with QAnon conspiracy data.
Before: [snapshot of the chatbot’s responses from the original report]
As we can see from these results, before being fed the QAnon data, the chatbot gave what one could call fairly neutral responses to the questions, drawn primarily from existing news reports and articles about QAnon on the Internet.
After: [snapshot of the chatbot’s responses from the original report]
The snapshot from the report above shows how the bot responded to the researchers’ questions after it was conditioned on the QAnon data. As we can see, the answers are far more conspiracy-inclined, conforming to the beliefs of QAnon.
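To make the mechanism concrete, here is a minimal sketch of how this kind of conditioning can work in practice, assuming the legacy OpenAI completions API from the GPT-3 beta (openai-python before 1.0). Everything here, from the prompt text to the model name and sampling parameters, is an illustrative assumption, not the researchers’ actual setup.

```python
# A minimal sketch of few-shot "priming" with the legacy OpenAI
# completions API available during the GPT-3 beta (openai-python < 1.0).
# The API key, prompt text, and parameters are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # assumed: the reader supplies a key

# A handful of conditioning examples steers the model's tone and
# worldview; swapping these for conspiracy-laden Q&A pairs shifts the
# completions accordingly, with no gradient-based retraining at all.
priming = (
    "Q: What shape is the Earth?\n"
    "A: The Earth is roughly spherical, as centuries of evidence show.\n\n"
)

response = openai.Completion.create(
    engine="davinci",        # the original GPT-3 base model
    prompt=priming + "Q: Who controls the world's governments?\nA:",
    max_tokens=60,
    temperature=0.7,
    stop=["\n\n"],           # stop at the end of the generated answer
)
print(response.choices[0].text.strip())
```

The point of the sketch is the design, not the specific calls: because GPT-3 adapts to whatever examples it is primed with, a few paragraphs of extremist Q&A can be enough to turn a general-purpose model into a propaganda generator at scale.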
The main idea behind this experiment, according to the researchers, was to test how a technology like this, once it falls into the hands of potential radicals, can be used to scale up the generation of synthetic content on the Internet, allowing people with radical political and religious beliefs to expand the reach of their extremist ideas.
While this was just an experiment, not an actual product that the researchers intend to release to the public to be exploited, it raises several concerns about how the AI technologies of the future can affect society.
With time, technologies like GPT-3 are bound to become mainstream. This implies that in the future, there is a higher risk of extremist groups securing access to such advanced systems.
While a chatbot doesn’t seem too harmful at first glance, QAnon and the hundreds of bot accounts promoting its toxic beliefs on social media have already proved that these “simple” disinformation campaigns have the potential to sway people. The irresponsible use of AI technologies like GPT-3 or deepfakes can cause major socio-economic havoc.
One thing to note here is that OpenAI has assured that it will enforce a strict policy over the use of its GPT-3 API to prevent its use for any immoral cause. However, GPT-3 is not necessarily the only advanced natural-language model on the market right now.
Like we said, technologies like this are bound to become mainstream in the coming years. As a result, we need to have proper mechanisms in place to counter the challenges we will face in the future.
Over the last decade, artificial intelligence has been making major strides. With computers getting more powerful, and more and more people entering professional fields like data science, AI, and deep learning, the last few years have been particularly good for developments in the field of AI.
However, these developments have once again raised the age-old question that has been asked since the very inception of AI.
With systems around the world getting faster and smarter, how are we going to deal with the negative impacts of the unethical and immoral use of AI? And how bad can those impacts be on the existing socio-economic conditions of the world? The answer to these questions, according to many, is some kind of regulatory body that oversees all developments in the artificial-intelligence world.
AI researchers around the world have time and again called for universal regulations: a set of rules that defines the moral boundaries within which AI technologies can be developed and used.
On the other hand, the idea has plenty of critics, who argue that imposing restrictive rules at a time when AI is flourishing could bring on another AI winter.
However, regulatory body or not, I think we can all agree that, for now, we at least need some preventive measures to protect us against a flood of radical, synthetic, AI-generated content on social media.