Meta Is Labeling Some Real Photos as Made With AI, Report Says



Facebook and Instagram parent Meta is reportedly struggling to detect social-media posts that have been made or manipulated with artificial intelligence, sometimes mislabeling real-life photos in the process. Meta said earlier this year that it would begin labeling posts that it could detect as having been generated or manipulated with AI. But reports in TechCrunch and PetaPixel indicate that some people’s posts are being labeled as “Made with AI” even when they aren’t.

The problem, TechCrunch wrote, appears to stem from editing tools like Adobe's Generative Fill, which can be used to remove unwanted objects from images. Former White House photographer Pete Souza told the publication that even cropping tools appear to add information to images, and that information then triggers Meta's AI detectors.


Representatives for Meta didn’t immediately respond to a request for comment.

The report raises questions about the role social-media companies play in helping users judge whether other accounts and their posts are authentic. As technology has improved, and particularly as AI tools have become widely available and easy to use, it's become increasingly hard to distinguish what is truly real.

Read more: How Close Is That Photo to the Truth? What to Know in the Age of AI

History is full of people passing off other people's work as their own, or manipulating their work to make it seem different from what it is. In today's rapidly expanding AI age, those lies appear to be spreading faster and more easily than ever. Industry watchers have even identified examples of AI-powered social media accounts that pretend to be real people.

Meta’s attempts to respond to this issue are part of a broader effort across the tech world. OpenAI earlier this year said it had disrupted social media disinformation campaigns tied to Russia, China, Iran and Israel, all powered by the company’s AI tools. Apple, meanwhile, announced earlier this month that it will use metadata to label images touched by its AI tools, whether they’ve been altered, edited or generated.

“We make sure to mark up the metadata of the generated image to indicate that it’s been altered,” said Craig Federighi, Apple senior vice president of software engineering, on a recent podcast. OpenAI, TikTok, Google, Microsoft and Adobe have announced similar moves.
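The metadata approach Federighi describes can be sketched in a few lines of Python. This is a hypothetical illustration, not Apple's or Meta's actual mechanism: it uses the Pillow library to stamp and read back the EXIF "Software" field, standing in for the richer provenance records (such as C2PA Content Credentials) that these companies have adopted.

```python
# Hypothetical sketch: an editing tool records its edit in image metadata,
# and a detector later reads that field back. Real systems use richer
# provenance standards; the EXIF "Software" tag here is just a stand-in.
from PIL import Image

SOFTWARE_TAG = 0x0131  # EXIF tag 305: "Software"

def software_tag(path):
    """Return the EXIF Software field of an image, or None if absent."""
    return Image.open(path).getexif().get(SOFTWARE_TAG)

# Simulate an editor stamping its metadata on a saved image.
img = Image.new("RGB", (8, 8), "white")
exif = Image.Exif()
exif[SOFTWARE_TAG] = "Hypothetical AI Editor (Generative Fill)"
img.save("edited.jpg", exif=exif.tobytes())

# A detector reading the file back sees the recorded edit.
tag = software_tag("edited.jpg")
print(tag)
```

Note the fragility the article describes: the label reflects whatever the editing tool wrote, so a minor edit (or a crop) can leave the same metadata trail as a fully generated image.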

Still, it’s increasingly hard to accurately identify posts that are made or manipulated by AI. A new term, “slop,” has gained popularity as a label for the growing flood of AI-created posts. And media experts warn the problem will likely get worse as we head into the 2024 US presidential election in November.





