Apple Intelligence Will Label AI-Generated Images in Metadata



Apple’s new artificial intelligence features, collectively called Apple Intelligence, are designed to help you create new emoji, edit photos and generate images from a simple text prompt or an uploaded photo. Now we know that Apple Intelligence will also add information to each image’s metadata, helping people identify that it was created or altered with AI.

On a recent episode of prominent blogger John Gruber’s podcast, Apple executives described how the company’s teams wanted to ensure transparency, even with seemingly simple photo edits, such as removing a background object.

“We make sure to mark up the metadata of the generated image to indicate that it’s been altered,” said Craig Federighi, Apple senior vice-president of software engineering, adding that Apple isn’t intending to build technology that generates realistic images of people or places.

With its commitment to add information to images touched by its AI, Apple joins a growing list of companies attempting to help people identify when images have been manipulated. TikTok, OpenAI, Microsoft and Adobe have all begun adding a form of digital watermark or metadata label to help identify content created or manipulated by AI.
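Apple hasn’t said exactly what form its metadata label will take. Among the other companies, the common approach is to embed provenance information in the image file itself, using standards such as C2PA Content Credentials and the IPTC digital source type vocabulary. As a rough illustration only, and not Apple’s actual implementation, here’s a minimal Python sketch of how a tool might check whether an image file carries one of those common AI-provenance markers (the marker strings are drawn from those public standards; the function name is hypothetical):

```python
# Rough sketch: scan an image file's raw bytes for common AI-provenance markers.
# This is not Apple's implementation; the strings below come from the IPTC
# digital source type vocabulary and the C2PA (Content Credentials) format that
# companies like Adobe, Microsoft and OpenAI have adopted.

from pathlib import Path

# Byte patterns that commonly appear when an image carries AI-provenance metadata.
AI_PROVENANCE_MARKERS = [
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    b"compositeWithTrainedAlgorithmicMedia",  # edits that blend in AI-generated content
    b"c2pa.claim",                            # part of a C2PA Content Credentials manifest
]


def looks_ai_labeled(image_path: str) -> bool:
    """Return True if the file contains any known AI-provenance marker.

    This is a naive byte scan, not a real metadata parser; a production tool
    would parse the XMP packet or C2PA manifest and verify its signature.
    """
    data = Path(image_path).read_bytes()
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)


if __name__ == "__main__":
    import sys

    for path in sys.argv[1:]:
        status = "AI-provenance metadata found" if looks_ai_labeled(path) else "no marker found"
        print(f"{path}: {status}")
```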


Media and information experts have warned that, despite these efforts, the problem is likely to get worse, particularly ahead of the contentious 2024 US presidential election. A new term, “slop,” has become increasingly popular to describe the realistic lies and misinformation created by AI.

Artificial intelligence tools to create text, videos and audio have become significantly easier to use, allowing people to do all sorts of things without much need for technical knowledge. (Check out CNET’s hands-on reviews of AI image-generating tools like Google’s ImageFX, Adobe Firefly and OpenAI’s Dall-E 3 as well as more AI tips, explainers and news on our AI Atlas resource page.)

Read more: How Close Is That Photo to the Truth? What to Know in the Age of AI

At the same time, AI content has become much more believable. Some of tech’s biggest companies have begun adding AI technology to apps we use daily, but with decidedly mixed results. One of the most high-profile screwups came from Google, whose AI Overview summaries attached to search results began serving up wrong and potentially dangerous information, such as suggesting adding glue to pizza to keep the cheese from slipping off.

Apple appears to be taking a more conservative approach to AI for now. The company said it intends to offer its AI tools in a public “beta” test later this year. It’s also struck a partnership with leading startup OpenAI to add extra capabilities to its iPhones, iPads and Mac computers.




