While much was made about the potential dangers of deepfakes and artificial intelligence-powered disinformation campaigns ahead of this past year’s elections, not much actually showed up on Meta’s social media platforms, the company said Tuesday.
The parent of Facebook and Instagram says that while there were confirmed and suspected instances where AI was used as part of disinformation operations, "volumes remained low" and the company's existing practices were enough to minimize their impact. In addition, fact-check ratings on AI-generated content related to elections, politics and social topics represented less than 1% of all fact-checked misinformation on its platforms.
“From what we’ve monitored across our services, it seems these risks didn’t materialize in a significant way and any such impact was modest and limited in scope,” Nick Clegg, Meta’s president of global affairs, said in a call with reporters.
That’s not to say foreign governments aren’t trying to sway the opinions of people around the world through social media campaigns. Meta says that so far this year, its teams have taken down about 20 new covert influence operations around the world, with Russia remaining the top source of these kinds of campaigns.
About 2 billion people spread across more than 70 countries were eligible to vote in national elections this year. Election security experts had fretted about the possible impacts of AI-powered deepfakes and other forms of disinformation on the voting public.
Social media companies were faced with the challenge of keeping disinformation off their platforms, while not unnecessarily restricting the free expression of their users. Some politicians, including President-elect Donald Trump, frequently criticized the platforms while at the same time using them to spread baseless accusations about election fraud and immigrants.