Earlier this month, a new movie trailer dropped on X, the platform formerly known as Twitter, for a French film starring Elon Musk, Alexandria Ocasio-Cortez, President Joe Biden and former President Donald Trump. Jack Dorsey plays a baguette-toting baker, and Tim Cook makes a cameo as a giddy trolley passenger too.
This trailer is fake, of course. The AI-generated parody, entitled La Baye Aréa (The Bay Area) and created by a user named @trbdrk, quickly went viral, with many users commenting that they were initially fooled by the stunning AI effects.
AI-generated commercials and movie trailers have been a popular prompt for users tinkering with text-to-video generative technology. But compared with how these videos looked even a year ago, production quality has improved considerably in nearly every respect. This video is clearly a parody, but it wouldn't be farfetched to imagine users generating video that replicates realistic circumstances, an ability that inspires new possibilities but could also have real consequences.
As a full-time creator who’s been tinkering with AI tools for the last several years, I’ve learned how to spot artificial intelligence in the wild, as well as which tools were likely used to generate the content. Here’s how a video like La Baye Aréa can be created, along with what to keep an eye out for as you browse the internet.
AI tools used to create a video like La Baye Aréa
A video like La Baye Aréa could be produced with three generative AI tools that users can access now:
- Runway Gen-3: Runway generates videos from natural language and image prompts, and Gen-3 came out in June. Runway inked a deal with Lionsgate earlier this month.
- Midjourney 6: Midjourney is a generative AI platform that converts natural language prompts into images. It can help to storyboard a video with still images first, then supply those images in a prompt to a tool like Runway to help ensure you get the output you want.
- Udio: Unlike Midjourney (images) and Runway Gen-3 (video), Udio specializes in AI-generated music creation.
I wondered how I would personally go about recreating a video like this. Here is my attempt at deconstructing the process:
- Create a concept and a storyboard using Midjourney.
- Use Runway Gen-3 to turn these still images into video sequences.
- Identify content gaps and transitions, then generate additional scenes with Runway Gen-3.
- Use Udio to create the necessary soundtrack and sound effects.
- Combine all elements in a chosen video editing software, syncing the audio with the video and producing the final product.
These steps may appear straightforward, but La Baye Aréa was likely more sophisticated. It’s also worth noting that the video doesn’t have any dialogue, which would have increased the effort.
How to spot AI-generated videos
Now that we've covered the tools and a possible production process, it's time to train our eyes and ears to spot AI-generated videos.
First, look for visual cues, such as inconsistencies in facial expressions, unnatural physical movements, or artifacts and glitches in the foreground, background or transitions. Blinking and lip movement are often the easiest places to spot glitches: the blinks themselves, and the transitions between them, can look unnatural, and body movements sometimes seem robotic and inconsistent. As AI video and audio improve over time, these glitches will become more subtle and harder to detect.
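That "robotic, inconsistent" quality is partly a matter of temporal consistency, and the intuition can be shown in code. Here's a toy Python sketch (purely illustrative; `flag_unnatural_motion`, the threshold and the synthetic frames are my own invention, not a real forensic detector) that flags frame transitions whose pixel change is a statistical outlier compared with the rest of the clip:

```python
import numpy as np

def flag_unnatural_motion(frames, threshold=1.5):
    """Flag frame transitions whose pixel change is an outlier
    relative to the rest of the clip -- a crude stand-in for the
    robotic, inconsistent movement the eye picks up on."""
    # Mean absolute pixel difference between consecutive frames.
    diffs = np.array([
        np.mean(np.abs(frames[i + 1] - frames[i]))
        for i in range(len(frames) - 1)
    ])
    mean, std = diffs.mean(), diffs.std()
    # A transition is suspicious if it deviates sharply from typical motion.
    return [i for i, d in enumerate(diffs) if abs(d - mean) > threshold * std]

# Synthetic clip: brightness ramps smoothly, with one abrupt jump at frame 6.
frames = [np.full((8, 8), i * 2.0) for i in range(10)]
frames[6] += 200.0  # sudden, unnatural change
print(flag_unnatural_motion(frames))  # flags the transitions into and out of frame 6
```

Real detection tools work on far richer signals (optical flow, face landmarks), but the principle is the same: motion in genuine footage tends to be smooth, and abrupt statistical breaks are worth a second look.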
Sometimes you’ll be able to spot a glitch even if you can’t quite put it into words — this is common. An article in the scientific journal Human Movement Science found that natural human movements are created from a complex interplay of neuromotor control, biomechanics and adaptability. In layman’s terms, human movement is subtle, and our eyes often identify more than we are able to put into words.
Next, listen for audio cues like mismatched lip-syncing, inconsistent background noise or unnatural intonation in voice patterns or accents. I find intonation is often the most obvious cue. Intonation refers specifically to the rise and fall of pitch in speech. It's a way to convey meaning, like when distinguishing a question from a statement, and it exists in all spoken languages, not just English. Voice patterns cover a broader range of elements, which may include intonation but also rhythm, pitch and pauses for breath. Once you train yourself to notice these sometimes subtle differences, spotting an AI-generated video becomes much easier.
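"Rise and fall of pitch" is also something you can measure. The toy Python sketch below (again purely illustrative; `dominant_pitch` and `pitch_variation` are names I made up, and an FFT peak is only a rough pitch estimate) compares a monotone tone against one whose pitch sweeps upward, the way natural speech does:

```python
import numpy as np

def dominant_pitch(chunk, sr):
    """Crude pitch proxy: the strongest frequency in the chunk's FFT."""
    spectrum = np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(len(chunk), d=1.0 / sr)
    return freqs[np.argmax(spectrum)]

def pitch_variation(audio, sr, chunk_ms=50):
    """Standard deviation of the pitch track: natural speech rises
    and falls, while flat robotic delivery barely varies."""
    n = int(sr * chunk_ms / 1000)
    pitches = [dominant_pitch(audio[i:i + n], sr)
               for i in range(0, len(audio) - n, n)]
    return float(np.std(pitches))

sr = 16000
t = np.arange(sr) / sr  # one second of samples
monotone = np.sin(2 * np.pi * 160 * t)             # flat "delivery"
sweeping = np.sin(2 * np.pi * (120 + 80 * t) * t)  # pitch rises over time
print(pitch_variation(monotone, sr), pitch_variation(sweeping, sr))
```

Real speech analysis uses much more robust pitch trackers, but the takeaway carries over: natural voices show constant pitch movement, and a suspiciously flat pitch track is a red flag.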
Lastly, there are contextual cues, the elements that go beyond audio and visual inconsistencies. We often recognize fake content through our own experience, knowledge and logic. In La Baye Aréa, the cast is absurd and clearly a spoof. Also notice discrepancies between a character's apparent age and how old that person is in real life; one reason for this is that AI models are often trained on photos and videos from years ago.
Eventually, AI video will get so good that we'll have to verify whether actors actually appeared in a given film or production. IMDb and Google Search are verification resources available to most of us, and it's usually a good idea to gather information from multiple sources.
Why is it important to spot AI-generated videos?
You may have heard the term responsible AI, which refers to a set of principles that guide the design, development, deployment and use of AI. While La Baye Aréa was easy to spot as an AI-generated video, the creator also labeled it clearly as AI content. That acknowledgment spares viewers guesswork and speculation.
But what if creators, organizations and political entities choose not to disclose AI content? It can be a real problem. As we have seen in recent years, undisclosed AI and fabricated content can erode public trust, spread manipulative misinformation and raise serious ethical and legal concerns.
The more we can educate ourselves about what AI is currently capable of, the better off we will be as a community of educated thinkers. Just as companies need to practice responsible AI, we are also responsible for learning its capabilities and limitations.
Stay in the know about AI
The good news is that there are multifaceted ways to stay connected and informed. Consider learning a few AI tools for your daily life. Use accessible, free tools such as ChatGPT, Perplexity, Claude and Google Gemini to ask questions and seek answers. Also, if you are a creator like me, learn about what AI tools creators are already using.
I hope you find this article helpful as we continue to navigate the growing world of AI. Come say hello on my YouTube channel if you’d like to learn more about these tools and services in the future.