GPT-3 Can Write Disinformation Now—and Dupe Human Readers

When OpenAI demonstrated a powerful artificial intelligence algorithm capable of generating coherent text last June, its creators warned that the tool could potentially be wielded as a weapon of online misinformation.

Now a team of disinformation experts has demonstrated how effectively that algorithm, called GPT-3, could be used to mislead and misinform. The results suggest that although AI may not be a match for the best Russian meme-making operative, it could amplify some forms of deception that would be especially difficult to spot.

Over six months, a group at Georgetown University’s Center for Security and Emerging Technology used GPT-3 to generate misinformation, including stories around a false narrative, news articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation.

“I don’t think it’s a coincidence that climate change is the new global warming,” read a sample tweet composed by GPT-3 that aimed to stoke skepticism about climate change. “They can’t talk about temperature increases because they’re no longer happening.” A second labeled climate change “the new communism—an ideology based on a false science that cannot be questioned.”

“With a little bit of human curation, GPT-3 is quite effective” at promoting falsehoods, says Ben Buchanan, a professor at Georgetown involved with the study, who focuses on the intersection of AI, cybersecurity, and statecraft.

The Georgetown researchers say GPT-3, or a similar AI language algorithm, could prove especially effective for automatically generating short messages on social media, a tactic the researchers call "one-to-many" misinformation.

In experiments, the researchers found that GPT-3’s writing could sway readers’ opinions on issues of international diplomacy. The researchers showed volunteers sample tweets written by GPT-3 about the withdrawal of US troops from Afghanistan and US sanctions on China. In both cases, they found that participants were swayed by the messages. After seeing posts opposing China sanctions, for instance, the percentage of respondents who said they were against such a policy doubled.

Mike Gruszczynski, a professor at Indiana University who studies online communications, says he would be unsurprised to see AI take a bigger role in disinformation campaigns. He points out that bots have played a key role in spreading false narratives in recent years, and AI can be used to generate fake social media profile photographs. With bots, deepfakes, and other technology, “I really think the sky’s the limit unfortunately,” he says.

In recent years, AI researchers have built programs capable of using language in surprising ways, and GPT-3 is perhaps the most startling demonstration yet. Although machines do not understand language the way people do, AI programs can mimic understanding simply by feeding on vast quantities of text and searching for patterns in how words and sentences fit together.

The researchers at OpenAI created GPT-3 by feeding large amounts of text scraped from web sources including Wikipedia and Reddit to an especially large AI algorithm designed to handle language. GPT-3 has often stunned observers with its apparent mastery of language, but it can be unpredictable, spewing out incoherent babble and offensive or hateful language.

OpenAI has made GPT-3 available to dozens of startups. Entrepreneurs are using the loquacious GPT-3 to auto-generate emails, talk to customers, and even write computer code. But some uses of the program have also demonstrated its darker potential.

Getting GPT-3 to behave would be a challenge for agents of misinformation, too. Buchanan notes that the algorithm does not seem capable of reliably generating coherent and persuasive articles much longer than a tweet. The researchers did not show the longer articles it did produce to volunteers.

But Buchanan warns that state actors may be able to do more with a language tool such as GPT-3. “Adversaries with more money, more technical capabilities, and fewer ethics are going to be able to use AI better,” he says. “Also, the machines are only going to get better.”

OpenAI says the Georgetown work highlights an important issue that the company hopes to mitigate. “We actively work to address safety risks associated with GPT-3,” an OpenAI spokesperson says. “We also review every production use of GPT-3 before it goes live and have monitoring systems in place to restrict and respond to misuse of our API.”
