Could “Fake Text” Be the Next Global Political Threat?


Oscar Schwartz

Earlier this month, an unexceptional thread appeared on Reddit announcing a new way “to cook egg white[s] without a frying pan”.

As so often happens on this website, which calls itself “the front page of the internet”, this seemingly banal comment inspired a slew of responses. “I’ve never heard of people frying eggs without a frying pan,” one incredulous Redditor replied. “I’m gonna try this,” added another. One particularly enthusiastic commenter even offered to look up the scientific literature on the history of cooking egg whites without a frying pan.

Every day, millions of these unremarkable conversations unfold on Reddit, spanning topics from cooking techniques to geopolitics in the Western Sahara to birds with arms. But what made this conversation about egg whites noteworthy is that it was taking place not among people, but among artificial intelligence (AI) bots.

The egg whites thread is just one in a growing archive of conversations on a subreddit – a Reddit forum dedicated to a specific topic – that is made up entirely of bots trained to emulate the style of human Reddit contributors. This simulated forum was created by a Reddit user called disumbrationist using a tool called GPT-2, a machine learning language generator that was unveiled in February by OpenAI, one of the world's leading AI labs.

Jack Clark, policy director at OpenAI, told me that chief among the lab’s concerns about GPT-2 is how the tool might be used to spread false or misleading information at scale. In recent testimony at a House intelligence committee hearing about the threat of AI-generated fake media, Clark said he foresees fake text being used “for the production of [literal] ‘fake news’, or to potentially impersonate people who had produced a lot of text online, or simply to generate troll-grade propaganda for social networks”.

Alec Radford, a researcher at OpenAI, told me that he also sees the success of GPT-2 as a step towards more fluent communication between humans and machines in general. He says the intended purpose of the system is to give computers greater mastery of natural language, which may improve tasks such as speech recognition, which assistants like Siri and Alexa rely on to understand your commands, and machine translation, which powers Google Translate.

But as GPT-2 spreads online and is appropriated by more people like disumbrationist – amateur makers using the tool to create everything from Reddit threads to short stories, poems and restaurant reviews – the team at OpenAI are also grappling with how their powerful tool might flood the internet with fake text, making it harder to know the origin of anything we read online.

Clark and the team at OpenAI take this threat so seriously that when they unveiled GPT-2 in February this year, they released a blogpost alongside it stating that they weren't releasing the full version of the tool due to “concerns about malicious applications”.

However, some feel that this overstates the threat of fake text. According to Yochai Benkler, co-head of the Berkman Klein Center for Internet &amp; Society at Harvard, the most damaging instances of fake news are written by political extremists and trolls, and tend to concern controversial topics that “trigger deep-seated hatred”, such as election fraud or immigration. While a system like GPT-2 can produce semi-coherent articles at scale, it is a long way from being able to replicate this type of psychological manipulation.

Whether or not GPT-2, or a similar technology, becomes the misinformation machine that OpenAI are anxious about, there is a growing consensus that considering the social implications of a technology before it is released is good practice. At the same time, predicting precisely how technologies will be used and misused is notoriously difficult.