OpenAI has revealed that operations linked to Russia, China, Iran and Israel have been using its artificial intelligence tools to create and spread disinformation, as the technology becomes a powerful weapon in information warfare in an election-heavy year.
The San Francisco-based maker of the ChatGPT chatbot said in a report on Thursday that five covert influence operations had used its AI models to generate text and images in high volumes, with fewer language errors than previously, as well as comments and replies to their own posts. OpenAI’s policies prohibit the use of its models to deceive or mislead others.
The content focused on issues “including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” OpenAI said in the report.
The networks also used AI to enhance their own productivity, applying it to tasks such as debugging code and researching public social media activity, it said.
Social media platforms, including Meta and Google’s YouTube, have sought to clamp down on the proliferation of disinformation campaigns since Donald Trump’s victory in the 2016 US presidential election, when investigators found evidence that a Russian troll farm had sought to manipulate the vote.
Pressure is mounting on fast-growing AI companies such as OpenAI, as rapid advances in their technology make it cheaper and easier than ever for disinformation perpetrators to create realistic deepfakes, manipulate media and spread that content in an automated fashion.
As about 2 billion people head to the polls this year, policymakers have urged the companies to introduce and enforce appropriate guardrails.
Ben Nimmo, principal investigator for intelligence and investigations at OpenAI, said on a call with reporters that the campaigns did not appear to have “meaningfully” boosted their engagement or reach as a result of using OpenAI’s models.
But, he added, “this is not the time for complacency. History shows that influence operations which spent years failing to get anywhere can suddenly break out if nobody’s looking for them.”
Microsoft-backed OpenAI said it was committed to uncovering such disinformation campaigns and was building its own AI-powered tools to make detection and analysis “more effective.” It added that its safety systems already made it difficult for the perpetrators to operate, with its models refusing in multiple instances to generate the text or images requested.
In the report, OpenAI revealed that several well-known state-affiliated disinformation actors had been using its tools. These included a Russian operation, Doppelganger, first discovered in 2022, which typically attempts to undermine support for Ukraine, and a Chinese network known as Spamouflage, which pushes Beijing’s interests abroad. Both campaigns used its models to generate text or comments in multiple languages before posting on platforms such as Elon Musk’s X.
It flagged a previously unreported Russian operation, dubbed Bad Grammar, saying it had used OpenAI models to debug code for running a Telegram bot and to create short political comments in Russian and English that were then posted on the messaging platform Telegram.
X and Telegram have been approached for comment.
It also said it had thwarted a pro-Israel disinformation-for-hire effort, allegedly run by a Tel Aviv-based political campaign management business called STOIC, which used its models to generate articles and comments that were posted on X and across Meta’s Instagram and Facebook.
Meta said in a report released on Wednesday that it had removed the STOIC content. OpenAI said it had terminated the accounts linked to these operations.
Additional reporting by Cristina Criddle
© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.