07/07/2019 | Technology - Could "fake text" be the next global political threat?

Oscar Schwartz

An AI fake-text generator that can write whole paragraphs in a given style from a single sentence prompt has raised concerns about its potential to spread false information.

 

Earlier this month, an unexceptional thread appeared on Reddit announcing that there is a new way “to cook egg white[s] without a frying pan”.

As so often happens on this website, which calls itself “the front page of the internet”, this seemingly banal comment inspired a slew of responses. “I’ve never heard of people frying eggs without a frying pan,” one incredulous Redditor replied. “I’m gonna try this,” added another. One particularly enthusiastic commenter even offered to look up the scientific literature on the history of cooking egg whites without a frying pan.

Every day, millions of these unremarkable conversations unfold on Reddit, ranging from cooking techniques to geopolitics in the Western Sahara to birds with arms. But what made this conversation about egg whites noteworthy is that it was taking place not among people but among artificial intelligence (AI) bots.

The egg whites thread is just one in a growing archive of conversations on a subreddit – a Reddit forum dedicated to a specific topic – that is made up entirely of bots trained to emulate the style of human Reddit contributors. This simulated forum was created by a Reddit user called disumbrationist using a tool called GPT-2, a machine learning language generator that was unveiled in February by OpenAI, one of the world’s leading AI labs.

Jack Clark, policy director at OpenAI, told me that chief among the concerns raised by tools like GPT-2 is how they might be used to spread false or misleading information at scale. In recent testimony at a House intelligence committee hearing about the threat of AI-generated fake media, Clark said he foresees fake text being used “for the production of [literal] ‘fake news’, or to potentially impersonate people who had produced a lot of text online, or simply to generate troll-grade propaganda for social networks”.

GPT-2 is an example of a technique called language modeling, which involves training an algorithm to predict the next most likely word in a sentence. While previous language models have struggled to generate coherent long-form text, the combination of more raw training data – GPT-2 was trained on 8m web pages – and better algorithms has made this model the most robust yet.
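To make "predicting the next most likely word" concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the publicly released GPT-2 weights. This is an illustration of the technique, not OpenAI's own tooling, and the prompt is an invented example.

```python
# Language modeling in miniature: ask GPT-2 for the most likely next
# token given a prompt. Uses the Hugging Face `transformers` library
# and the public "gpt2" checkpoint as a stand-in for OpenAI's code.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "I've never heard of people frying eggs without a"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (batch, seq_len, vocab_size)

# The last position holds a score for every possible next token;
# take the single most likely one and print the extended sentence.
next_id = logits[0, -1].argmax().item()
print(prompt + tokenizer.decode([next_id]))
```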

It works essentially like Google autocomplete or the predictive text in a messaging app. But instead of offering one-word suggestions, if you prompt GPT-2 with a sentence, it can generate entire paragraphs in that style. Feed the system a line from Shakespeare and it generates a Shakespeare-like response; prompt it with a news headline and it will generate text that reads almost like a news article.
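Run repeatedly, that one-word prediction becomes open-ended generation: sample a likely next word, append it, and predict again. As a hedged sketch, the same transformers library wraps this loop in a text-generation pipeline (the prompt and sampling settings here are illustrative assumptions):

```python
# Repeated next-token prediction = text generation. The `pipeline`
# helper samples one token at a time and feeds each back in as context.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Shall I compare thee to a summer's day?"
result = generator(prompt, max_length=60, do_sample=True, top_k=50)
print(result[0]["generated_text"])  # a Shakespeare-flavoured continuation
```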

Alec Radford, a researcher at OpenAI, told me that he also sees the success of GPT-2 as a step towards more fluent communication between humans and machines in general. He says the system's intended purpose is to give computers greater mastery of natural language, which may improve tasks like speech recognition (how the likes of Siri and Alexa understand your commands) and machine translation (which powers Google Translate).

But as GPT-2 spreads online and is appropriated by more people like disumbrationist – amateur makers who are using the tool to create everything from Reddit threads, to short stories and poems, to restaurant reviews – the team at OpenAI are also grappling with how their powerful tool might flood the internet with fake text, making it harder to know the origins of anything we read online.

Clark and the team at OpenAI take this threat so seriously that when they unveiled GPT-2 in February this year, they released a blogpost alongside it stating that they weren’t releasing the full version of the tool due to “concerns about malicious applications”. (They have since released a larger version of the model, which is being used to create the fake Reddit threads, poems and so on.)

For Clark, convincing machine text of the kind GPT-2 can produce poses a similar threat to “deepfakes” – machine-learning-generated fake images and videos that can be used to make people appear to do things they never did and say things they never said (like this video of former president Barack Obama). “They are essentially the same,” Clark told me. “You have technology that makes it cheaper and easier to fake something, which means that it will just get harder to offer guarantees about the truth of information in the future.”

However, some feel that this overstates the threat of fake text. According to Yochai Benkler, co-head of the Berkman Klein Center for Internet & Society at Harvard, the most damaging instances of fake news are written by political extremists and trolls, and tend to be about controversial topics that “trigger deep-seated hatred”, like election fraud or immigration. While a system like GPT-2 can produce semi-coherent articles at scale, it is a long way from being able to replicate this type of psychological manipulation. “The simple ability to generate false text at scale is not likely to affect most forms of disinformation,” he told me.

Other experts have suggested that OpenAI exaggerated the malicious potential of GPT-2 in order to create hype around their research. For Zack Lipton, professor of business technologies at Carnegie Mellon University, OpenAI's assessment of the technology's risks was disingenuous.

“Of all the bad uses of AI – from recommender systems that lead to filter bubbles and the racial consequences that emerge from automated categorization – I would put the threat of language modeling at the bottom of the list,” he said. “What OpenAI have done is commandeered the discourse and fear about AI and used it to generate hype around their product.”

Still, OpenAI’s concerns are being taken seriously by some. A team of researchers from the Allen Institute for Artificial Intelligence recently developed a tool to detect “neural fake news”. Yejin Choi, a professor of computer science at the University of Washington who worked on the project, told me that detecting synthetic text is actually “fairly easy” because generated text carries a “statistical signature”, almost like a fingerprint, that can be readily identified.
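One way to picture that "statistical signature": machine-generated text tends to be built from words a language model itself rates as highly probable, so unusually low perplexity is one telltale sign. The sketch below is a toy version of that intuition (similar in spirit to visualization tools like GLTR), not the Allen Institute's actual detector, and the sample snippets are invented:

```python
# Toy "statistical signature" check: score text by its perplexity
# under GPT-2. Machine-written text is often more predictable (lower
# perplexity) than human prose; a real detector would be trained.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        # With `labels` supplied, the model returns the mean
        # cross-entropy of predicting each token from its left context.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

for snippet in ["I'm gonna try this.",
                "The eggs are cooked in a pan of water."]:
    print(f"{perplexity(snippet):8.1f}  {snippet}")
```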

While such digital forensics are useful, Britt Paris, a researcher at the New York-based institute Data & Society, worries that these solutions misleadingly frame fake news as a technological problem when, in fact, most misinformation is created and spread online without the help of sophisticated technologies.

“We already have a ton of ways for generating false information and people do a pretty good job of circulating this stuff without the help of machines,” she said. Indeed, the most prominent instances of fake content online – such as the “drunk Nancy Pelosi” video released earlier this year – were created using rudimentary editing techniques that have been around for decades.

Benkler agrees, adding that fake news and disinformation are “first and foremost political-cultural problems, not technological problems”. Tackling the problem, he says, requires not better detection technologies, but an examination of the social conditions that have made fake news a reality.

Whether or not GPT-2, or a similar technology, becomes the misinformation machine that OpenAI are anxious about, there is a growing consensus that considering the social implications of a technology before it is released is good practice. At the same time, predicting precisely how technologies will be used and misused is notoriously difficult. Who would have thought 10 years ago that a recommendation algorithm for watching videos online would turn into a powerful radicalizing instrument?

Given the difficulty of predicting the potential harm of a technology, I thought I would see how GPT-2 fared in assessing its own capacity for spreading misinformation. “Do you think that you will be used to spread fake news and further imperil our already degraded information eco-system?” I prompted the machine.

“The fact that we can’t find the name of who actually post the article is a great clue,” it responded. “However, this person is still using social media sites to post the fake news with a clear purpose.”

The Guardian (Nigeria)

 



 