Kate Middleton Photoshop Scandal Update: Why Obama’s Photographer Got Involved



Everyone, it seems, has something to say about the Kate Middleton photo scandal. That news story, in which Britain's royal family had to admit that the Princess of Wales edited a photo of her family sent to news agencies, is still churning. Now Pete Souza, who served as chief official White House photographer under President Barack Obama and as an official photographer under President Ronald Reagan, is weighing in. And he's got some personal experience with photographing Britain's royal family.

Last week, Souza reposted a photo he took of young Prince George meeting President Obama in 2016. He explained exactly how he edited that image and how it’s different from the Kate Middleton fiasco.

“The digital file was ‘processed’ with Photoshop, a software program made by Adobe that virtually every professional photographer uses,” Souza wrote on the photo of Prince George. “Yet my photograph was certainly not ‘altered’ or ‘changed’ in content.”

Souza said he cringed when news stories referred to the royal picture as being “photoshopped,” noting that publications and news organizations have “strict policies” on using Photoshop.

“Basically, the accepted practices allow a news photograph to be tweaked by adjusting the color balance; the density (make the raw file lighter or darker); and shadows and highlights,” Souza wrote. “What’s not acceptable is to remove, add, or change elements in the photograph. That would be altering the content. For example, if there’s a telephone pole sticking out of a person’s head, you wouldn’t be allowed to remove it. Or if someone mashes multiple family pictures together into one, that wouldn’t be acceptable.”
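For the technically curious, the distinction Souza draws can be sketched in a few lines of Python. This is a toy illustration on made-up grayscale pixel values, not any news outlet's actual workflow: a "density" tweak shifts every pixel's tone uniformly, while a content edit overwrites specific pixels.

```python
# Toy grayscale "image" as a list of 0-255 pixel values.

def adjust_density(pixels, offset):
    """Accepted edit: lighten or darken the whole frame uniformly."""
    return [max(0, min(255, p + offset)) for p in pixels]

def remove_object(pixels, start, end, fill=128):
    """Not accepted: overwrite a region, changing the photo's content."""
    edited = list(pixels)
    for i in range(start, end):
        edited[i] = fill
    return edited

frame = [10, 60, 200, 240, 90, 30]

lightened = adjust_density(frame, 20)  # tone changes, content intact
doctored = remove_object(frame, 2, 4)  # pixels 2-3 replaced: content altered

print(lightened)  # [30, 80, 220, 255, 110, 50]
print(doctored)   # [10, 60, 128, 128, 90, 30]
```

Every pixel in the lightened frame still corresponds to the same scene; the doctored frame no longer does, which is the line news organizations draw.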

Kensington Palace has not released the original image, and has not commented on whether multiple photos were “mashed” together, or what other changes the Princess of Wales reportedly made.

It’s a reminder that we’re in a brave new world of manipulated images. Even prominent figures are comfortable attempting to pass modified photographs off as authentic; it’s never clear how much editing has been done to a published image, and people can’t be blamed for being suspicious.

Instagram has placed a red warning on the royal photo

The Prince and Princess of Wales have more than 15 million followers on their Instagram account, and the now-infamous, heavily edited photo of Kate and their children was posted there on March 10. But if you go to that photo now, you’ll see Instagram has plastered it with a red-text warning reading, “Altered photo/video. The same altered photo was reviewed by independent fact-checkers in another post.”

Click on the warning, and you’ll get a message from Instagram noting, “Independent fact-checkers say the photo or image has been edited in a way that could mislead people, but not because it was shown out of context,” and crediting that to a fact-checker, EFE Verifica.

Instagram did not immediately respond to a request for comment on why some edited photos earn a warning and others do not.

Car photo controversy

Earlier in the week, a different photo of the princess also came under fire. The photo agency that provided a picture of the Prince and Princess of Wales together in a Range Rover on Monday, the same day the princess apologized for her editing, is speaking out about its own photo. In a statement, Goff Photos said it didn’t change its photo beyond the most basic updates.

“[The] images of the Prince and Princess of Wales in the back of the Range Rover have been cropped and lightened,” but “nothing has been doctored,” the statement said, according to Today.com. Goff Photos didn’t immediately respond to a request for comment.

How did we get here?

Kate’s surgery sparked rumors

Kate Middleton, Prince William’s wife and England’s future queen, underwent abdominal surgery in January. The original statement issued about her condition said she wouldn’t be seen until after Easter, although one paparazzi photo of the princess and her mother was released last week.

Despite the palace’s original statement, rumors about Middleton’s whereabouts reached a fever pitch on social media. Was she seriously ill? Dead? Had she separated from Prince William? There was zero evidence for any of those theories, but give the internet zero news, and people will make things up.

The family photo was obviously edited

The buzz kicked into high gear on March 10, when a seemingly everyday family image of Kate and her children was sent to news agencies to mark the UK’s Mother’s Day. But then those agencies sent out a rare notice requesting that their clients no longer use the photo, saying it had been manipulated.

Within hours, the royal family admitted the photo indeed had been changed — and the princess herself took the blame.

“Like many amateur photographers, I do occasionally experiment with editing,” she said in a rare apology. British tabloid The Daily Mail reported that palace representatives refused to release the original photograph. Kensington Palace did not respond to a request for comment.

Then came the Range Rover photo

While the internet was still buzzing about the edited photo, Goff Photos released its own picture: that image of a Range Rover with two difficult-to-see passengers, who appear to be Prince William and Kate.

Palace representatives probably would have liked for that photo to have ended people’s concerns about whether Kate is alive and well. But with suspicions already high and the photo itself hard to make out, that wasn’t going to happen, and a whole new world of conspiracy theories was born.

Real or manipulated? How to tell if a photo is edited

Image manipulation isn’t new. Soviet leader Joseph Stalin famously had political enemies removed from photos nearly a century ago. Since then, manipulated images have become so commonplace in some parts of society that some celebrities have begun publicly criticizing the practice.

Though it’s increasingly hard to identify a manipulated photo, there are some telltale signs. Among the giveaways in the royal image were oddly faded strands of hair, misaligned lines on the subjects’ clothing and a zipper that appeared to change color and position.

Some companies have attempted to help ensure we can at least identify when an image is manipulated. Samsung announced that its Galaxy S24, for example, adds metadata and a watermark to identify photos manipulated with AI. AI-generated images also often have the wrong number of fingers or teeth on their subjects, though the technology is improving.

Other companies, too, have begun promising some form of identification for images created or edited by AI, but there is no standard so far. Meanwhile, Adobe and other companies have been building tools meant to verify that an image is authentic.
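The core idea behind these verification schemes is to bind a cryptographic fingerprint to the image when it's captured or exported, so any later pixel change is detectable. Here's a toy sketch of that idea using Python's standard hashlib library. This is not Adobe's actual API, and a real system would also cryptographically sign the record:

```python
import hashlib

def make_manifest(image_bytes: bytes) -> dict:
    """Record a fingerprint of the image at export time.
    (A real system would also sign this record.)"""
    return {"sha256": hashlib.sha256(image_bytes).hexdigest()}

def is_untampered(image_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash and compare it with the recorded one."""
    return hashlib.sha256(image_bytes).hexdigest() == manifest["sha256"]

original = b"\x10\x60\xc8\xf0\x5a\x1e"  # stand-in for image data
manifest = make_manifest(original)

tampered = original[:2] + b"\x80\x80" + original[4:]  # two "pixels" changed

print(is_untampered(original, manifest))  # True
print(is_untampered(tampered, manifest))  # False
```

Changing even a single byte of the image produces a completely different hash, which is why the mismatch is easy to detect; the hard part, which standards efforts are still working out, is keeping the manifest attached to the image as it travels across the web.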

The landscape has changed so quickly that startups are now trying to build ways to tell authentic images from manipulated ones. CNET’s Sareena Dayaram writes that the Google AI tools recently built into the company’s photo app open up exciting photo editing possibilities while raising questions about the authenticity and credibility of online images.

Read more: AI or Not AI: Can You Spot the Real Photos? 

More editing, more AI: Editing photos on your phone

Photoshop has always been able to do amazing things in the right hands. But it hasn’t always been easy. 

That’s begun to change with AI-powered editing tools, including those added to Photoshop over the past couple of years. While the political ramifications of photo editing sound alarming, the personal benefits of this technology can be remarkable. One feature, called generative fill, imagines the world beyond a photo’s borders, effectively zooming out on an image.

AI tools are also being trained to help people edit photos more effectively, even letting you home in on specific parts of an image and turn them into cute stickers to share with friends.

That’s in addition to techniques like high dynamic range, or HDR, which has become a standard feature, particularly on mobile phone cameras. It’s designed to capture high-contrast scenes by taking multiple exposures, some dark and some bright, and combining them.
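The merging step behind HDR can be sketched in miniature. The toy Python below blends a dark exposure (which preserves highlights) with a bright one (which preserves shadows), favoring whichever exposure is better exposed, closer to mid-gray, at each pixel. Real camera pipelines are far more sophisticated; the weighting scheme here is just one illustrative choice:

```python
# Toy exposure fusion on 0-255 grayscale pixel values.

def well_exposedness(p, mid=128.0):
    """Weight pixels near mid-gray highly; clipped pixels near zero."""
    return max(1e-6, 1.0 - abs(p - mid) / mid)

def fuse(dark, bright):
    """Blend two exposures pixel by pixel, weighted by exposedness."""
    fused = []
    for d, b in zip(dark, bright):
        wd, wb = well_exposedness(d), well_exposedness(b)
        fused.append(round((wd * d + wb * b) / (wd + wb)))
    return fused

dark = [5, 40, 120, 200]      # underexposed: shadows crushed
bright = [80, 180, 250, 255]  # overexposed: highlights blown

print(fuse(dark, bright))
```

In the fused result, shadow pixels lean toward the bright exposure and highlight pixels lean toward the dark one, which is exactly the trade-off HDR is making on your phone.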

Google’s Magic Eraser photo tool can banish random strangers from your pictures with a few taps, and works for many devices including Apple’s iPhone.

And Google’s Pixel 8 phone, released last year, includes a feature called Best Take, which ensures everyone in a photo is smiling by combining multiple shots, effectively creating a new picture composited from all the others.

Apple, meanwhile, focused on adding features to automatically improve image quality, including the iPhone 15 Pro‘s new capability to change focus after you take a portrait photo. 

Read more: You Should Be Using Google’s Magic Photo Editing Tool

Changing political landscape

While AI can help make photos look a lot better, it’s set to cause serious troubles in the world of politics.

Companies like OpenAI, Google and Facebook have touted text-to-video tools that can create ultra-realistic videos of people, animals and scenes that don’t exist in the real world. Meanwhile, internet troublemakers have used AI tools to create fake pornography of celebrities like Taylor Swift.

Supporters of former President Donald Trump have similarly created images depicting the presidential candidate surrounded by fake Black voters as part of misinformation campaigns to “encourage African Americans to vote Republican,” the BBC reported.

“If anybody’s voting one way or another because of one photo they see on a Facebook page, that’s a problem with that person, not with the post itself,” Florida radio show host Mark Kaye, one of the creators of the fake photos, told the BBC.

In his State of the Union address delivered March 7, President Joe Biden asked Congress to “ban voice impersonation using AI.” That call came after scammers created fake, AI-generated recordings of Biden encouraging Democratic voters not to cast a ballot in the New Hampshire presidential primary earlier this year. The move also led the Federal Communications Commission to ban robocalls using AI-generated voices.

As CNET’s Connie Guglielmo wrote, the New Hampshire example shows the dangers of AI-generated voice impersonations. “But do we have to ban them all?” she asked. “There are potential use cases that aren’t that bad, like the Calm app having an AI-generated version of Jimmy Stewart narrate a bedtime story.”

AI in images: It’s far from over

It’s unlikely that Middleton’s Photoshop kerfuffle can be blamed on AI, but the technology is being integrated into image editing at a rapid clip — and the next edited photo may not be so easy to spot.

As Stephen Shankland wrote on CNET, we’re right to question how much truth there is in the photos we see.

“It’s true that you need to exercise more skepticism these days, especially for emotionally charged social media photos of provocative influencers and shocking warfare,” Shankland wrote. “The good news is that for many photos that matter, like those in an insurance claim or published by the news media, technology is arriving that can digitally build some trust into the photo itself.”

Watch this: CNET’s Pro Photographers React to AI Photos

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.
