15 of the Biggest Fake Images That Went Viral in 2024



The internet is absolutely flooded with fake images, whether they’re fake photos, fake videos, or fake animated GIFs. There are so many fakes spreading online these days that it can be hard to keep track. We’ve got a round-up of the fakes that went viral this year, and it hopefully serves as a helpful reminder that you shouldn’t always believe your own lying eyes.

Here at Gizmodo, we’ve been fact-checking fake images that go viral for well over a decade now. And while most of those images of the past were altered with clunky Photoshop-like tools, generative AI is now fueling viral hoaxes.

But that doesn’t mean everything in 2024 was AI-generated. In fact, the images that actually tricked people in 2024 were often more traditional Photoshop-style fakes. AI is abundant, but people have also developed an eye for when it has been used; there’s an eerie vibe emanating from AI-generated images. Old-school Photoshop can still create images that don’t raise the same red flags.

1) No Robo-Dogs Were Harmed

Have you seen that photo of a dog attempting to make sweet and passionate love to a robot dog? It went viral in 2024 on sites like Reddit, Instagram, and X. But it’s fake. Hopelessly, tragically fake.

The original photo was first shared by the New York Fire Department on Facebook way back on April 18, 2023. And, as you can see from the side-by-side we’ve created below, there’s no white dog going to town on the fire department’s robotic canine.

The manipulated image that’s been going viral (left) and the original photo posted to the New York Fire Department’s Facebook page.

It’s not entirely clear who first made the photoshopped image, but it showed up on the Firefighting subreddit on April 19, 2023, just a day after the original was posted to Facebook. Other accounts on a variety of platforms also posted the manipulated photo that same day, including this account on Instagram, making it difficult to narrow down who actually created the joke.

The original photo gained attention because the New York Fire Department has a robotics unit, founded in 2022, that can be deployed for search and rescue operations. The robot was being used that day during the partial collapse of a parking garage in Lower Manhattan.

“The robot dog or the drones, they’re able to stream the video directly to our phones, directly to our command center,” New York’s chief of fire operations John Esposito told reporters shortly after it happened last year in a video aired by CBS New York.

“This is the first time that we’ve been able to fly inside in a collapse to do this and try to get us some information, again, without risking the lives of our firefighters,” Esposito continued.

While it’s great to see life-saving technology being used in such cool ways, none of this answers the question we’re struck with after seeing an admittedly photoshopped image of a dog trying to mount a robot dog: Would this ever happen in real life? Dogs will famously try to hump just about anything, but there haven’t been any documented cases of one mounting a robot dog that we’re aware of. That may simply be explained by the fact that these four-legged robots aren’t very common yet.

Have you seen a dog trying to get up close and personal with a robot dog? Drop us a line or leave your story in the comments. We’re probably not going to believe you without photographic evidence. And given the ubiquity of artificial intelligence tools that make such images easier to create than ever, we still probably won’t believe you.

Oh, the perils of living in the future. Photographic evidence isn’t worth very much these days. But a funny altered image is still a funny altered image, no matter what era you live in.

2) All Crime is Legal Now

Did you see photos on social media in 2024 of a street sign that reads, “Notice, Stolen goods must remain under $950”? The photos were taken in San Francisco and appear to be referring to a right-wing disinformation campaign that claims theft has been legalized in California. But the signs weren’t put there by any government entity. They’re a prank.

When the photos first surfaced on X earlier this year, many people speculated the images may have been photoshopped or created with AI technology in some way. One of the photos even received a Community Note submitted by users that claimed as much. But that’s not true.

They’re photos of a “real” sign in the sense that they weren’t created using programs like ChatGPT or Photoshop. The sign was captured from multiple angles, as you can see above, helping prove that this sign was actually placed there in front of the Louis Vuitton store.

And while they look professionally done, the signs had subtle clues indicating they weren’t official (including screws that look different from those used by the city) and were installed by anonymous pranksters. The San Francisco Department of Public Works and the Office of the City Administrator confirmed to Gizmodo by email that the sign “was not City sanctioned and not posted by the City.”

What’s the idea behind the sign? It’s most likely a reference to the fact that the state of California raised the threshold for when shoplifting goes from a misdemeanor to a felony back in 2014. The threshold in California is $950, which some people think is too high. Fox News has done several segments on the topic, claiming that California has “legalized” shoplifting, which is complete nonsense. The idea is fueled by cellphone videos aired by conservative media that give viewers the impression that theft is non-stop in the state.

The problem, of course, is that many other states—including those with Republican governors and legislatures—have much higher thresholds for when shoplifting becomes a felony. In fact, as former Washington Post criminal justice reporter Radley Balko wrote in 2023, a whopping 34 states have a higher threshold than California. That includes Republican-run states like Texas ($2,500) and South Carolina ($2,000). Needless to say, nobody is claiming that Texas and South Carolina have legalized theft.

Professional-looking signs clearly made by pranksters have been popping up in San Francisco for years. There was the sign near OpenAI’s headquarters that warned all activities were being monitored by security cameras and used for training AI, the signs declaring a “no-tech zone” aimed at tourists back in 2015, and the one that read “we regret this bike lane” in 2023.

Given the number of fake official-looking signs that have sprung up in San Francisco over the past decade, it seems unlikely we’ll ever learn who was behind the “stolen goods” sign that’s been going viral. But we know that’s not a real sign put out by the city. And despite what you might see on Fox News or X, retail stores in California aren’t really a lawless Mad Max hellscape. You have to drive the freeways in L.A. to experience that.

3) The Lines on Maps Dividing Oceans Aren’t Real Either

Did you see videos in 2024 claiming to show a point where the Pacific Ocean meets the North Sea, suggesting the two don’t mix? It’s a nonsense claim for so many reasons, but that didn’t stop one video making the claim from racking up over 20 million views.

“Nobody can explain why oceans meet and never mix,” one X user wrote in a tweet featuring the viral video.

“The beauty of the oceans,” another X user wrote in a tweet that was seen over 12 million times.

Both of the accounts, it should be noted, have blue checkmarks, which can be purchased for $8 per month. Before Elon Musk bought the platform, the verification system was intended to combat impersonators, but it now gives anyone who can rub two brain cells together the ability to get boosted by the X algorithm.

Why is this video so dumb? If you pull up a map of the North Sea, you can observe for yourself that it’s surrounded by countries including England, Norway, Denmark, and the Netherlands. The closest major ocean is the Atlantic, and it’s nowhere near the Pacific. Simply put, the North Sea and the Pacific Ocean never meet. But that’s just one reason it’s so painful to see this video getting traction on a major social media site.

For whatever reason, the past few years have produced countless videos of people claiming to show where oceans meet but don’t mix. Typically, these viral videos show places where saltwater and freshwater collide, making it look like there’s a line separating the two. These videos can be particularly deceiving when a wide river meets the ocean. Reasonable people can think they’re viewing something shot on the open ocean, not realizing the very simple explanation for what they’re seeing.

Ask any oceanographer, as USA Today did in a debunker from 2022, and they’ll tell you that the oceans do “mix,” despite frequent posts on social media insisting there’s some kind of reason they don’t. One common claim on platforms like X, TikTok, and YouTube is that different iron and clay content prevents the oceans from mixing, an idea that simply isn’t true.

But the idea that you can draw a line precisely showing where major oceans begin and end is tremendously popular. And that idea seems particularly common among people who want to insist science doesn’t understand why “oceans don’t mix.”

“This is the Gulf of Alaska where 2 oceans meet but do not mix. Tell me there is No God and I’ll ask you ‘Who commanded the mighty waves and told them they could go no further than this’! What an absolutely AMAZING God…..” one viral post from Facebook claimed.

Well, actually we do understand. Because the oceans do mix. Even if incredibly dense people on social media tell you otherwise.

4) Mike Would Never

A video appearing to show MyPillow CEO Mike Lindell driving while not paying attention to the road went viral on X in 2024. And one user even claimed Lindell was “hammered” while driving. But the video is significantly altered.

The video was seen millions of times on X alone, with many people clearly understanding it’s an edited video. But the video, which originally circulated in 2023, started to be shared without context. And some people thought it was real.

X’s program of crowdsourced fact-checking, Community Notes, eventually annotated the fake video. The original video, while not entirely complimentary to Lindell, clearly shows the MyPillow CEO talking directly to the camera while his car isn’t moving.

“Strung-out looking Mike Lindell says he really needs people to buy some of his new slippers after he was canceled by retailers and shopping channels,” liberal influencer Ron Filipkowski wrote on X back in March 2023.

The video appears to have been originally edited by comedy writer Jesse McLaren, who added a moving background and engine noises to make it appear like Lindell is driving. McLaren’s video was posted as a quote-tweet of the original, making it clear his intention wasn’t to deceive anyone but instead to just make a joke.

“I made it look like the car’s moving and it’s 100x better,” McLaren tweeted last year.

But the video started making the rounds again without any indication that it had been manipulated. That happens frequently, as it did recently when a joke about Gmail being shut down escaped the circle of original posters who knew it was a joke.

“My dude is HAMMERED,” an X account called Universe Lover wrote on Friday.

It’s not clear what software McLaren used to edit this video, but with AI video generation tools getting better with each passing month, it won’t be long until everyone has the ability to create virtually anything (at least in short form) they can imagine with just a few word prompts. The viral internet is only going to get more confusing with AI advancements just over the horizon.

5) The McDonald’s Goof

AI-generated image of a man in the 1980s smoking at McDonald’s. Image: Twitter

Have you seen a photo on social media recently that appears to show a man in 1980s-style clothes smoking a cigarette in McDonald’s? The image went viral, racking up tens of millions of views in 2024. But it’s completely fake. The image was made using generative AI.

The image, which features a man with long curly brown hair and a mustache, instantly grabbed attention when it was first posted to X earlier this year. The man looks like he’s smoking a cigarette and exhaling a puff of smoke, something that was allowed in McDonald’s locations for much of the 20th century, before clean indoor air laws became the norm.

But if you take a closer look at the image, there are some telltale signs that this “photo” was created using AI. For starters, just take a look at the fingers and hands. Do you notice how unusually long the man’s left hand is, without any discernible wrist? It looks like his arm just morphs into fingers that are incredibly long.


Next, just take a look at the writing in the image. The red cup on the table, which appears to be an attempt by the AI to mimic Coke’s branding, is a swirl of nonsense. And while McDonald’s signature golden arches look accurate on the french fry box, the packaging for McDonald’s fries has typically been predominantly red, not yellow. It also looks like a straw is sticking out from the container. I’ve never tried to consume fries through a straw, but one imagines that’s a difficult task.

The man behind the main subject of the “photo” looks even more warped, with both odd-looking hands and a face to match. The background man’s hat also looks like a confused mash-up of styles, something closer to a bucket hat from the 2000s, which puts the supposedly 1980s setting in an even more perplexing light.


And what is the writing in the upper-right corner supposed to say? We only have partial letters, but it appears to read “Modlidani” on something approximating a McDonald’s sign.


Last but not least, check out the main subject’s shirt situation. The man appears to be bare-chested under a denim vest, but the sleeves look like they belong to a white t-shirt. It doesn’t make much sense.

Obviously the reason this image went viral is that it speaks to some version of the past that doesn’t exist anymore, whether it’s a guy with that haircut or just smoking in general. Smoking tobacco in public places used to be the norm before it was phased out in a decades-long process across the U.S. in an effort to protect public health. Many states first created “smoking” and “non-smoking” sections of restaurants in the late 20th century before smoking was banished altogether in most indoor spaces by the early 21st century. There are still a handful of states that allow indoor smoking of cigarettes in some venues, but they’re becoming rarer with each passing year.

If you saw this image in your feeds and didn’t immediately register it as AI, you’re not alone. We regularly debunk images and didn’t even give it a second thought when we first saw it. But that perhaps speaks to how a low-stakes image doesn’t get quite as much scrutiny when it’s going viral on social media platforms.

Frank J. Fleming, a former writer for conservative news satire site the Babylon Bee, pointed out on X how many people didn’t have their guard up when sharing the fake smoking image.

“This is such an interesting case of people being fooled by an AI image because the stakes are so low. There are so many obvious signs this is AI, but most would miss them because they’re not part of the focus of the image and since this is not a case where you think someone would be tricking you, you have no reason to analyze it that closely,” Fleming wrote on X.

We’re reminded of the viral image of Pope Francis wearing a big white puffer coat in 2023, another instance where people were quick to believe something might be real simply because it didn’t register as something that mattered all that much, yet was still amusing to see. Who cares if the Pope wears a cool jacket? Well, plenty of people, if it’s seen as an example of material excess or fashion-consciousness from a figure who’s supposed to be above earthly concerns.

6) Yummy But Fake

Croissant Dinosaur
© Instagram

Did you see that croissant made in the shape of a dinosaur that went viral on sites like Reddit and X? It’s incredibly cute. But we regret to inform you this “Croissantosaurus” was made using generative AI. In fact, if you track down where this pastry was first posted, you’ll find an entire fake restaurant that never existed.

“Babe, what’s wrong? You haven’t eaten your Croissantosaurus…” a viral tweet joked on X. The tweet racked up millions of views. But a reverse-image search of the admittedly cool-looking pastry will bring you to this Instagram page for a restaurant in Austin, Texas called Ethos Cafe. And if you notice something weird about the restaurant, you’re not alone.

“Unleash your inner paleontologist and savor our new Dino Croissants. Choose your favorite dinosaur and pair it with a delightful cappuccino. A prehistoric treat for a modern indulgence!” the Instagram post reads.

Strangely, Ethos doesn’t actually exist and appears to be some kind of hoax or art project by anonymous creators. Austin Monthly reported on the fake restaurant last year and a subreddit for Austin Food picked apart many of the weird things about this new restaurant when it first surfaced online.

For starters, all the staff at the restaurant appear to be AI-generated. Do you spot anything strange about this bartender, aside from the fact that he’s named Tommy Kollins? That’s right, he appears to be gripping that drink with a hand that has six fingers.

An image from the fake website for Ethos Cafe in Austin, Texas, that appears to be made using AI.

AI image generators like DALL-E, Midjourney, and Stable Diffusion often have trouble generating hands accurately. And while many improvements have been made since they were first introduced, hands can still be tricky. The website also has odd instructions for obtaining a reservation, which appears to be a commentary on the lengths some people will go to in order to eat at elite dining establishments.

“Reservations go live at 4:30 am every first Monday of every month. By using multiple devices simultaneously, you can access our system more easily, reduce competition from other users, respond faster, and have a backup option in case of any technical issues or internet connection problems. This approach maximizes your opportunity to secure your desired reservation promptly and efficiently,” the website reads.

Gizmodo tried to contact Ethos Cafe through the form on its website but didn’t hear back. Just know that the Croissantosaurus is adorable but totally fake.

7) Land Before Time Fakes

Movie posters appearing to show an upcoming remake of the children’s dinosaur movie The Land Before Time (1988) elicited strong emotions on social media in 2024. But whether or not you think a remake is a good idea, the movie isn’t happening. At least not in the foreseeable future.

The rumors about this fake dino remake can likely be traced to a Facebook page called YODA BBY ABY, which first wrote about the potential movie in late 2023.

“Get ready to embark on a prehistoric escapade like never before! Disney and Pixar join forces to bring you a dazzling remake of The Land Before Time, where Littlefoot and friends journey through lush landscapes and encounter enchanting surprises,” the fake post reads. “Brace yourself for a January 2025 release – a dino-mite adventure awaits!”

But there’s no evidence that any remake of The Land Before Time is in production by Disney and Pixar, much less coming out in January 2025. Another viral claim suggested the movie is coming out in December 2024, but there’s no evidence for that either.

The prospect of a remake has been incredibly polarizing, especially because people who loved the original movie took issue with the way the dinosaurs looked on these fake movie posters.

“I hope this is some kind of sick joke that someone made, because that is not Little Foot,” one TikTok user commented last week.

Other TikTok users said they were “disrespecting the spirit of Land Before Time” and “disrespecting Littlefoot” with the new character designs.

While the original 1988 film, executive produced by George Lucas and Steven Spielberg, is the most beloved, there were actually 13 sequels. Only the 1988 original received a theatrical release, though, with all of the follow-ups going straight to home video. The last in the series, Journey of the Brave, was released in 2016.

But if I’m Universal Pictures, I’m looking at the strong opinions currently circulating online and seeing dollar signs. If people have strong feelings about the film series, that certainly counts for something. Millennial nostalgia can be an extremely profitable enterprise as the generation enters middle age, whether it’s the 30th iteration of Mean Girls or our favorite animated dinosaurs. Get to work, movie execs.

8) Hate-monger Fakes

The fake tweet that’s been made to look like it was deleted, with an annotation of FAKE over the top by Gizmodo.

A tweet that appeared to show far-right influencer Candace Owens ridiculing Ben Shapiro with a reference to a “dry” bank account went viral in 2024. The tweet refers to “Ben’s wife” and even looks like it had been deleted, based on a viral screenshot. But the tweet isn’t real. It was made by a comedian.

“After getting fired today, my bank account is gonna be dry for a while, but not as dry as Ben’s wife,” the fake tweet from Owens reads.

Photoshopped tweets about Shapiro and “dryness,” shall we say, have been common on social media platforms like X ever since the conservative commentator awkwardly read lyrics to Cardi B’s hit song “WAP” back in 2020.

Owens joined the Daily Wire, a conservative media network co-founded by Shapiro, in 2021 to host a weekly show. But Owens departed the network in March 2024, according to a social media post by Daily Wire CEO Jeremy Boreing. It’s still not clear whether Owens quit or was fired, but it’s easy to guess why Owens and the Daily Wire parted ways.

Owens had clashed with Shapiro, who’s Jewish, in the months before her departure as she peddled antisemitic conspiracy theories, claiming on her show that Jewish “gangs” do “horrific things” in Hollywood. Owens also liked a tweet about Jews being “drunk on Christian blood,” which appears to have been the last straw for the network.

The fake tweet that’s been made to look like it’s from Owens seems to have tricked quite a few people, including one X user who wrote, “The singular one and only time I will give her props that post is gold.”

Another user commented, “Why do people always post their best bangers and then fucking delete them after everyone has already seen it a thousand times.” The answer, of course, is that they didn’t actually tweet them in the first place.

The tweet has also made it to at least one other social media platform, the X rival Bluesky, where it was getting passed around as real.

The tweet was actually created by an X user who goes by the name Trap Queen Enthusiast. And if that name sounds familiar, it’s probably because they often go viral using fake screenshots that are made to look like they’re deleted tweets. In fact, Gizmodo previously debunked a popular tweet from Trap Queen Enthusiast that was made to look like it had come from musician Grimes, the former partner of billionaire Elon Musk.

The fake tweet from Owens racked up over 900,000 views in a short amount of time. But Community Notes, the crowdsourced fact-checking program at X, took a very long time to note it was fake.

Part of the genius of these fake tweets is that they’re incredibly difficult to fact-check unless you know the players involved. Passing around a screenshot with that little “This post has been deleted” text at the bottom makes it almost impossible for the average person to verify whether the tweet ever existed.

9) AI Tom Cruise Sounds Off

Did you see clips from a new documentary narrated by Tom Cruise called Olympics Has Fallen, a play on the title of the 2013 movie Olympus Has Fallen? The new film claimed to document corruption at the Olympic Games in Paris, France. But it’s fake. Cruise’s narration was created with artificial intelligence and the “documentary” is actually the work of disinformation agents tied to the Russian government, according to a report from researchers at Microsoft.

The fake documentary is tied to two influence groups in Russia, dubbed Storm-1679 and Storm-1099 by Microsoft, and it’s easy to see how some people were duped. The film comes in four 9-minute episodes, each starting with Netflix’s signature “ta-dum” sound effect and red-N animation.

“In this series, you will discover the inner workings of the global sports industry,” the fake Tom Cruise says as dramatic music plays in the background. “In particular, I will shed some light on the venal executives of the International Olympic Committee, IOC, who are slowly and painfully destroying the Olympic sports that have existed for thousands of years.”

But to anyone paying attention, there are plenty of signs that this movie is bullshit. For starters, Cruise’s voice is realistic but sometimes has a stilted delivery. The big giveaway, however, might be word choices from the Russian campaign that wouldn’t come from Americans. For example, the first episode includes a line from the fake Cruise narration about a “hockey match” rather than a hockey game. The word “match” is much more common in Russia for sports like soccer, and any real fan of ice hockey in the U.S. would call it a game, not a match.

There are also times when the fake Cruise narration sounds like it’s reading strategy notes made by the people who concocted this piece of disinformation. Much of the documentary spends time trying to tear down the organizers of the Olympics as hopelessly corrupt, and AI-generated Cruise tries to tie it to one of the actor’s most famous roles in the 1990s.

“In Jerry Maguire, my character writes a 25-page-long firm mission statement about dishonesty in the sports management business. Jerry wanted justice for athletes, which makes him extremely relatable,” the narration says.

It’s hard to imagine a line like that making it into an authentic documentary.

The fake documentary also has some editing errors that stick out as particularly odd, like when the AI Cruise inexplicably repeats a line or the audio briefly cuts out for no discernible reason. The entire film is available on the messaging app Telegram, where Gizmodo watched it.

The fake documentary first surfaced in June 2023, according to Microsoft, but got renewed attention just before the start of the Olympics. As Microsoft pointed out in a report, the Storm-1679 group tried to instill fear in people about attending the 2024 Olympics in France. Fake videos purporting to show warnings from the CIA also claimed the games were at risk of a major terrorist attack. Other videos made to look like they’re from reputable news outlets, like France24, “claimed that 24% of tickets for the games had been returned due to fears of terrorism.” That’s simply not true.

The disinformation agents also tried to stoke fear around the war in Gaza, claiming there could be terrorism in France tied to the conflict.

“Storm-1679 has also sought to use the Israel-Hamas conflict to fabricate threats to the Games,” Microsoft wrote in the new report. “In November 2023, it posted images claiming to show graffiti in Paris threatening violence against Israeli citizens attending the Games. Microsoft assesses this graffiti was digitally generated and unlikely to exist at a physical location.”

That said, some of the information in the pseudo-documentary is actually true. For example, the film discusses the history of Wu Ching-kuo, the head of amateur boxing’s governing body AIBA, who was suspended over claims of financial mismanagement. Other claims about Olympic officials are also true, according to the news sources available online. But that’s to be expected. The most successful propaganda mixes fact and fiction to make people unsure about what the truth might be.

All we know for certain is that Tom Cruise never narrated this movie. And if someone is trying to hijack the credibility of a major movie star to spread their message, you should always be skeptical of whatever they have to say—especially as bots help spread that media widely across social media platforms.

“While video has traditionally been a powerful tool for Russian IO campaigns and will remain so, we are likely to see a tactical shift towards online bots and automated social media accounts,” Microsoft wrote. “These can offer the illusion of widespread support by quickly flooding social media channels and give the Russians a level of plausible deniability.”

10) Jimmy Lives


Tweets claiming that former president Jimmy Carter died went viral back in July, with many of the dumbest people on social media helping spread the hoax—from Laura Loomer to Mario Nawfal. But there was one person who shared the fake letter announcing Carter’s death who really should know better: Senator Mike Lee of Utah.

Brian Metzger from Business Insider captured a screenshot of Lee’s tweet before he deleted it. And while Lee was surprisingly respectful in his condolences for Carter’s family (considering he’s a Trump supporter), the actual text of the letter should have tipped him off.

Why? Once you get to the fourth paragraph, things get a bit weird, making it clear this isn’t something that should be taken seriously.

Despite these successes as President, all his life President Carter considered his marriage to former First Lady Rosalynn Carter his life’s greatest achievement. At her passing last November President Carter said, “Rosalynn was a baddie. Jill, Melania, even throat goat Nancy Reagan had nothing on Rosalynn. She was the original Brat. She gave me wise guidance and encouragement when I needed it. As long as Rosalynn was in the world, I always knew somebody loved and supported me.” They were married for 77 years.

If you’re unfamiliar with the Nancy Reagan “throat goat” meme, we’ll let you google it on your own time and away from your work computer. It’s also very unlikely that Carter’s death announcement would make reference to Charli XCX’s “Brat.”

Lee’s office in Washington D.C. confirmed over the phone that he deleted the tweet after finding out it wasn’t true but had no comment on where the senator first found the hoax letter.

The fake letter can be traced back to an X account called @Bocc_accio, which posted it with the caption “BREAKING” and “Former President Jimmy Carter has passed away. He was 99 years old.” But anyone who actually read the letter should have been able to figure it out. If you read the alt-text on the tweet, the image description even explains: “President Carter is still alive and in hospice care. This was an experiment to see how gullible people are to sensationalist headlines.”

Oddly, it appears some of the fake Carter notes were photoshopped to exclude the stuff about Nancy Reagan. Loomer, a far-right grifter who will fall for just about any hoax and helps spread many of her own, actually shared a cleaner version of the letter. But she’s also been spreading lots of other dumb hoaxes in 2024, like claiming that President Joe Biden is literally on his deathbed and has entered hospice care. Some right-wing accounts were recently going so far as to insist they’d delete their accounts if they were wrong about Biden dying imminently. It seems obvious that they’ll just wait until Biden eventually dies and claim they were right all along.

There was no evidence that Biden was on the brink of death, even when he contracted covid-19. Being that old made him especially vulnerable to covid, but the president didn’t die. Carter, who turned 100 years old in October, is still alive and in hospice care. And while the former president could pass on any day now, he’s not dead yet.

11) Hurricane Hits Disney

Walt Disney World in Orlando, Florida, shut down operations one night in October due to Hurricane Milton and the tornadoes that sprang up before the storm even made landfall. But photos of the theme park flooded with water soon started to pop up, despite being completely fake. Russian state media, TikTok, and X all helped them spread.

Russian state media outlet RIA made a post on Telegram with three images that appeared to show Disney World submerged. “Social media users post photos of Disneyland in Florida flooding as a result of Hurricane Milton,” the RIA account said according to an English language translation.

How do we know these images are fake? For starters, the buildings aren’t right at all. If you compare, for instance, what it looks like on either side of Cinderella Castle at the Magic Kingdom, you don’t see the buildings that appear in the fake image.

Fake AI image (left) and a real photo showing a crowd of people at the Cinderella Castle in Walt Disney World. Real photo by Roberto Machado Noa/LightRocket via Getty Images

But you don’t even need to know what the real Cinderella Castle looks like to know it’s fake. Just zoom in on the turrets of the building itself. They’re not rendered completely, and some appear at very odd angles that make it clear these are AI-generated images.

The images quickly made their way back to X, the site formerly known as Twitter before it was purchased by Elon Musk. And it’s impossible to overstate just how awful that platform has become. From Holocaust denial tweets that are getting millions of views to breaking news tweets that show AI images of children crying, the whole place is bubbling over with garbage.

The fake images of Disney World were shared by right-wing influencer Mario Nawfal, a frequent purveyor of misinformation who’s often retweeted by Musk himself. As just one recent example, Nawfal contributed to a Jimmy Carter death hoax in July that was shared by other idiots like Laura Loomer and Republican Sen. Mike Lee of Utah. Nawfal’s tweet helped spread the fake Disney photos even further on X before he finally deleted them.

There were also videos shared on TikTok showing incredibly over-the-top images of destruction at Disney World, like this one shared by user @joysparkleshine. The video appears to have been originally created as a joke by an account called MouseTrapNews but is getting reposted and stripped of all context, taken seriously by a certain segment of the population.

 

Some comments on the video include “I’m screaming the only place that made me feel like a kid again” and “Good… maybe they will now show all the underground tunnels under Disney next. Those that know… KNOW!!”

That last comment is a reference to the QAnon conspiracy theory which asserts that children are being trafficked by powerful political figures and people like Oprah Winfrey and Tom Hanks. Incredibly, they believe that Donald Trump is going to save those kids. Yes, that Donald Trump.

Some AI images of the hurricane that were created as a joke even started showing up in various Russian news outlets, like the one below showing Pluto in a life jacket carrying a child through floodwaters. The image appears to have been earnestly shared by Rubryka.com, which credits it to Bretral Florida Tourism Oversight District, an X account devoted to jokes about theme parks.

AI image of Pluto rescuing a child from Hurricane Milton at Walt Disney World
Image: Twitter

Another joke image shared by that account, Bretral Florida Tourism Oversight District, showed a photo of a boat stuck on a large mountain rock, which anyone who knows Disney World will recognize as a permanent fixture at Disney’s Typhoon Lagoon water park.

Other accounts on X were sharing the QAnon theories, insisting that the destruction of Disney World might finally reveal the truth about child trafficking.

“Look at Mickey’s arms,” one particularly bizarre tweet with an image of a Mickey Mouse clock reads. “Could they be showing a date? October 9th. The day the storm hit Disney World. There are no coincidences. The Military are clearing out the tunnels underneath. Used for human trafficking of children and other horrible crimes.”

Milton made landfall as a Category 3 storm, causing thousands of canceled flights and battering homes and businesses with punishing winds, according to NBC News. Disney posted an update on its website explaining that everything would be opening back up a day later.

“We’re grateful Walt Disney World Resort weathered the storm, and we are currently assessing the impacts to our property to prepare for reopening the theme parks, Disney Springs, and possibly other areas on Friday, October 11. Our hearts are with our fellow Floridians who were impacted by this storm,” the website reads.

It’s a ridiculous environment for disinformation right now. And that will likely continue to be our reality for the foreseeable future.

12) Fake Claims About Tim Walz

Viral image falsely purporting to show a boy who was abused by Tim Walz.
© Twitter

The 2024 presidential campaign was awash with fake stories about Vice President Kamala Harris and her running mate Tim Walz, all spread by supporters of Donald Trump. And nowhere were these stories more abundant than on X, the social media platform formerly known as Twitter, which was purchased in 2022 by far-right billionaire Elon Musk.

What do we mean by fake stories? Take one that went viral on X in 2024, purporting to be a testimonial from a former student of Tim Walz. The person in the video, falsely identified as a man named Matthew Metro, claimed they were sexually abused by Walz. But anyone actually from Minnesota would immediately notice some big red flags. For starters, the person claimed they were a student at Mankato West, the high school where Walz worked in the 1990s and early 2000s. But the person mispronounces the word Mankato, and somehow even the word Minnesota sounds weird.

Why was this going viral? Because it was being promoted by X’s algorithm in the For You feed, as we personally experienced on October 6. But the Washington Post spoke to the real Matthew Metro, whose name was apparently found by the disinformation agents in a publicly posted yearbook and who now lives in Hawaii.

“It’s obviously not me: The teeth are different, the hair is different, the eyes are different, the nose is different,” the 45-year-old Metro told the Washington Post. “I don’t know where they’re getting this from.”

Wired talked with analysts who believe the video is the work of a Russian-aligned disinformation operation called Storm-1516, which also spread the false claim that Harris participated in a hit-and-run in San Francisco in 2011. Wired also believes the video was created with AI, something not all experts agree on. All we know for certain is that the video is clearly fake and was being spread inorganically on X.

13) Hollywood Mountain?

Hollywood Mountain
© Facebook

Did you see a stunning photo of “Hollywood Mountain,” California that went viral in 2024? One image that made the rounds on social media platforms like Facebook and Bluesky depicted what appears to be a lush green mountain with the L.A. skyline in the background. But it’s completely fake. There’s not even a real place called Hollywood Mountain.

The image appears to have originally been posted to Facebook by someone named Mimi Ehis Ojo, who has plenty of other AI-generated images on their page. And while anyone who lives in Southern California can probably tell immediately that it’s fake, it looks like some people who’ve never visited the U.S. are getting tricked into thinking it’s real.

“How many mountains does America have so all the physical land features are complete in America then we are cheated by God,” one Facebook user from Nigeria commented.

Writer Cooper Lund pointed out on Bluesky that this kind of confusion could be a recipe for disaster if foreign tourists show up to the U.S. expecting to see some of the fantastical scenes that were generated by AI.

“Sometime soon we’re going to get a story about tourists having weird meltdowns because the only things they knew about the places they’re visiting are bad AI generations,” Lund wrote.

Lund pointed out that something very similar happened in the early 2010s when Chinese tourists visited Paris, France. They expected Paris to be “like a pristine film set for a romantic love story,” as the New York Times described it in 2014. Instead, they were shocked by “the cigarette butts and dog manure, the rude insouciance of the locals and the gratuitous public displays of affection.”

Experts even have a name for when this happens, according to the Times:

Psychologists warned that Chinese tourists shaken by thieves and dashed expectations were at risk for Paris Syndrome, a condition in which foreigners suffer depression, anxiety, feelings of persecution and even hallucinations when their rosy images of Champagne, majestic architecture and Monet are upended by the stresses of a city whose natives are also known for being among the unhappiest people on the planet.

Given America’s enormous cultural footprint, plenty of people probably have a very different idea of the U.S. in their head compared to what they’d encounter upon visiting. And while you can blame a lot of that on Hollywood, there seems to be a new cultural catfisher in town.

Generative AI can turn any imaginary place into a passable reality with just a few simple text prompts. And if you visit the real city of Hollywood as an outsider, prepare to temper your expectations. It doesn’t look anything like the completely fictitious Hollywood Mountain.

14) MyPillow Zombie

Big-time Trump backer and MyPillow CEO Mike Lindell had a number of fakes made about him in 2024, including this one where he looks spaced out, with dark circles under his eyes. But it’s not real. The photo might look oddly realistic, but it’s been altered significantly.

“Meeting Mike Lindell at the Waukesha Trump Rally must’ve been what it was like to shake hands with the Apostle Paul before Christ took the stage at the sermon on the mound,” an X account credited to someone named Gary Peterson tweeted.

Gary Peterson, operating under the handle @GaryPetersonUSA, appears to have been the first to post the photoshopped image. But it’s an account that frequently publishes altered images, often in a way that makes it difficult to distinguish from an authentic photo.

The tweet alone racked up over 6 million views, spreading far and wide, jumping to various social media services like Facebook and Bluesky, as these things often do when they go viral.

Where is the image actually from? It appears the image was originally posted to X by Waukesha, Wisconsin’s The Devil’s Advocate Radio on May 1 and shows Lindell outside one of Donald Trump’s neo-fascist rallies.

As you can see, the image doesn’t include Lindell with mussed-up hair and heavy, dark circles under his eyes. Lindell also has a more normal smile as opposed to the bizarre gaze from the altered image.

The altered image of Mike Lindell (left) along with the original image of him at a Trump rally in Wisconsin on May 1, 2024.

Eagle-eyed viewers may have also noticed something else odd about the photoshopped image. If you zoom in over Lindell’s left shoulder, you can spot what appears to be a roaring bear. It’s not clear why the people who made this image included the bear, but it’s not in the original.

Image: @GaryPetersonUSA

Most major media outlets were laser-focused on generative AI and the ways it could be used to influence voters in the election. But this fake featuring Lindell was a great reminder that old-fashioned photoshopping is still around. You don’t need fancy AI to make a convincing (if admittedly perplexing) fake image.

15) Luigi’s Fake Clock

Did you see a mysterious video featuring a countdown clock that purported to be from Luigi Mangione, the 26-year-old charged in the killing of UnitedHealthcare CEO Brian Thompson in New York? The video went viral on YouTube, getting attention on sites like Hacker News. But it’s completely fake.

Mangione reportedly possessed a “manifesto” as well as a ghost gun and was arrested at a McDonald’s in Altoona, Pennsylvania. Not long after that, a video popped up that appeared to be from a YouTube account associated with Mangione. It opened with the words “The Truth” and “If you see this, I’m already under arrest.” It featured a countdown clock that first counted from 5 to 1 before flipping to 60 and counting down all the way to zero from there.

The lower right corner included the word “Soon” and briefly flashed the date Dec. 11 before disappearing again in less than a second. It ended with the words “All is scheduled, be patient. Bye for now.”

If you’re curious what the video actually looked like, you can check it out here. YouTube confirmed to Gizmodo that it wasn’t real.

 

“We terminated the channel in question for violating our policies covering impersonation, which prohibit content intended to impersonate another person on YouTube,” a spokesperson for the video platform told Gizmodo over email.

“The channel’s metadata was updated following widespread reporting of Luigi Mangione’s arrest, including updates made to the channel name and handle,” the spokesperson continued. “Additionally, we terminated 3 other channels owned by the suspect, per our Creator Responsibility Guidelines.”

The spokesperson also noted that these accounts had been dormant for months. Who is actually behind the video? That remains unknown. But the viral memes about Mangione aren’t going to stop anytime soon now that he’s been paraded in front of the cameras during his extradition to New York.

“Okay, get some snaps of that Mangione character, but you better not pull any bullshit where he looks like the subject in a 16th century painting called ‘Christ taken at the Garden of Gethsemane’.”
“Okay, now don’t be mad but…”


— Jessica Ritchey (@jmritchey.bsky.social) December 19, 2024 at 1:25 PM

Who at the NYPD thought this was a good idea?





