There’s been a lot of coverage about how generative AI is being used by organizers of the summer Olympic Games to enhance the experience for viewers and athletes and to manage security. NBC, which is broadcasting the Games in the US, is even offering AI-generated daily summaries, using a replica of sports broadcaster Al Michaels’ voice, to subscribers of its Peacock streaming service.
But there’s already been one high-profile AI fail around the Olympics, and it has to do with a Google ad touting the company’s Gemini chatbot. Called “Dear Sydney,” the ad tells the story of a young fan writing to one of her sports heroes, American track-and-field star Sydney McLaughlin-Levrone. (Google told AdAge it decided to pull the ad, but it can still be viewed on YouTube, with comments off.) At least, that’s what Google’s ad people probably pitched.
The takeaway from the minute-long spot, however, is that an ambitious helicopter parent has convinced his young daughter that she shouldn’t express herself in writing because whatever she produces on her own won’t be as good as the output she can get from an AI engine.
In the ad, a proud dad describes how his little girl, a dedicated runner, wants to send a fan letter to McLaughlin-Levrone. But he tells us that though he’s “pretty good with words,” the letter needs to be “just right.” We have no clue if his little girl is good with words, because it seems she wasn’t even asked to write her own letter. Instead, the dad gives Gemini this prompt: “Help my daughter write a letter telling Sydney McLaughlin-Levrone how inspiring she is and be sure to mention that my daughter plans on breaking her world record… one day. (She says sorry, not sorry.)” After showing snippets from the Gemini-authored fan letter, the spot ends with the tagline “A little help from Gemini.”
I don’t know about you, but I’d call that a massive parental fail. The whole reason we sometimes gush over fan letters written by kids to their heroes is that the kids produce such charming, honest — and imperfect — homages in their heartfelt, handwritten letters and quaint crayon drawings. Do we really want to encourage little kids to stop writing and drawing on their own because it has to be “just right,” which apparently only an AI can produce?
I’m not the only one who’s not a fan of Google’s message. The Atlantic declared “Google Wins the Gold Medal for Worst Olympic Ad,” Ars Technica wrote about “Outsourcing emotion: The horror of Google’s ‘Dear Sydney’ AI ad,” and New York Magazine referred to the “AI Olympics Commercial That Everyone Hated.” My favorite headline is from Quartz: “Google’s Olympics-themed AI ad gives some viewers the ick.”
I asked Google for comment and a spokesperson told me, “While the ad tested well before airing, given the feedback, we’ve decided to phase the ad out of our Olympics rotation.”
Ick indeed.
Here are the other doings in AI worth your attention.
EU AI Act, aimed at AI risks, will also affect US tech companies
Four years after it was proposed, the European Union’s AI legislation — called the EU Artificial Intelligence Act — went into effect Aug. 1. The law is designed to mitigate the risks posed by AI by setting guardrails and consumer safety rules for AI developers. It will affect anyone doing business in the EU, which includes today’s big tech companies investing in AI, most of which are based in the US (Anthropic, Apple, Google, Meta, Microsoft and OpenAI, to name just a few).
For general-purpose gen AI systems, like Google’s Gemini and OpenAI’s ChatGPT, the act “imposes strict requirements such as respecting EU copyright law, issuing transparency disclosures on how the models are trained, and carrying out routine testing and adequate cybersecurity protections,” CNBC said in an explainer of the law.
Those requirements are based on the potential risk levels the systems pose to society, with AI dangers ranked as unacceptable risk, high risk, limited risk or minimal risk, CNET reported.
Fines for violating the act vary, but companies could be fined as much as 35 million euros (nearly $38 million) or 7% of their global annual revenues, whichever amount is higher, noted the law firm of White & Case.
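To make that penalty structure concrete, the ceiling is just the greater of two figures. Here’s a minimal sketch of the “whichever amount is higher” calculation; the revenue figure is made up for illustration.

```python
# Sketch of the EU AI Act's maximum fine: the greater of a fixed amount
# or a percentage of global annual revenue. Revenue figure is illustrative.

FIXED_CAP_EUR = 35_000_000  # 35 million euros
REVENUE_SHARE = 0.07        # 7% of global annual revenue

def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Return the maximum possible fine: whichever amount is higher."""
    return max(FIXED_CAP_EUR, REVENUE_SHARE * global_annual_revenue_eur)

# For a hypothetical company with 2 billion euros in annual revenue,
# 7% (140 million euros) exceeds the 35 million euro floor.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```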
You can find an overview of the EU AI Act here. The Guardian also has a useful explainer.
As for the US, President Joe Biden issued an AI executive order in October calling for guardrails, safety checks and consumer protections around AI development and deployment. Many US tech companies say they’re looking for the government to offer laws and regulations to help mitigate AI risk. In May, Google, Meta and OpenAI were “among the companies that made voluntary safety commitments at the AI Seoul Summit, including pulling the plug on their cutting-edge systems if they can’t rein in the most extreme risks,” the Associated Press reported.
And Microsoft last week said there was an urgent need for legislation against deepfakes (see below).
Still, some companies, venture capitalists and tech types argue that laws and regulations may deter AI innovation. Critics, including the human rights group Amnesty International, say those antiregulation concerns are more about preserving sales and profits than about safeguarding human rights. A small group of Silicon Valley investors and executives recently threw their support behind former President Donald Trump, seemingly because he might go lighter on government oversight of AI.
Needless to say, there’s a lot of speculation and hyperbole around AI regulation, which is why a new report and set of recommendations from the US Copyright Office is worth a look. Read on…
Microsoft, Copyright Office aim to rein in deepfakes as Elon Musk touts them
A day after Microsoft said the US needs new laws so people who abuse AI can be held accountable, the US Copyright Office released the first part of its report on the legal and policy issues related to copyright and artificial intelligence, especially regarding deepfakes, CNET’s Ian Sherr reported.
The Copyright Office recommended that Congress pass a new federal law that protects people from the knowing distribution of unauthorized digital replicas. The 70-page document, which says it uses the terms “deepfakes” and “digital replicas” interchangeably, can be found here.
The report explores the pros and cons of AI-generated copies. “On the positive side, they can serve as accessibility tools for people with disabilities, enable ‘performances’ by deceased or non-touring artists, support creative work, or allow individuals to license, and be compensated for, the use of their voice, image, and likeness,” the Copyright Office wrote.
On the other hand, the agency said, “The surge of voice clones and image generators has stoked fears that performers and other artists will lose work or income. There have already been film projects that use digital replica extras in lieu of background actors, and situations where voice actors have been replaced by AI replicas.”
Other harms include using AI to commit fraud by impersonating real people, and generating misinformation, with the real “danger that digital replicas will undermine our political system and news reporting by making misinformation impossible to discern.”
The Copyright Office’s report is worth a look overall, but I’ll also point you to the section on “artistic style” and its acknowledgment that copyright law is “limited” because it doesn’t protect artistic style as a separate element of copyrighted work.
That discussion, which starts on page 53, notes that “the Office received many comments seeking protection against AI ‘outputs that imitate the artistic style of a human creator.’ Commenters voiced concern over the ability of an AI system, in response to a text prompt asking for an output ‘in the style of artist X,’ to quickly produce a virtually unlimited supply of material evoking the work of a particular author, visual artist, or musician. They asserted that these outputs can harm, and in some cases have already harmed, the market for that creator’s works.”
Shira Perlmutter, register of copyrights and director of the Copyright Office, said in a statement that the agency is eager to work with Congress on the office’s recommendations.
As for Microsoft, the company last week called on the US to pass a “comprehensive deepfake fraud statute” that targets criminals who use AI to steal from or manipulate everyday Americans.
“AI-generated deepfakes are realistic, easy for nearly anyone to make, and increasingly being used for fraud, abuse, and manipulation — especially to target kids and seniors,” Microsoft President Brad Smith wrote in a company blog post. “The greatest risk is not that the world will do too much to solve these problems. It’s that the world will do too little.”
Though AI chatbots from Microsoft, Google, Meta and OpenAI have been made available for free for less than two years, CNET’s Sherr says data about “how criminals are abusing them is already staggering.”
That includes everything from fake job postings to election disinformation to AI-generated pornographic deepfakes of average Americans, as well as celebrities including musician Taylor Swift. (The Senate passed a bill in July that aims to protect victims of pornographic deepfakes; the House now needs to approve it. It’s called the DEFIANCE Act, short for “Disrupt Explicit Forged Images and Non-Consensual Edits.”)
What happens in the meantime? Expect AI-generated slop, deepfakes and other misinformation to proliferate. One instance that got a lot of attention: Elon Musk, who owns the X social media network, shared a video that used a cloned voice of Vice President and Democratic presidential candidate Kamala Harris to belittle President Joe Biden and refer to Harris as a “diversity hire,” CNET noted. X’s service rules prohibit users from sharing manipulated content, including “media likely to result in widespread confusion on public issues, impact public safety, or cause serious harm.” Musk, who has endorsed Donald Trump’s bid for reelection, defended his post as parody.
Like I say, it’s only funny until someone loses an eye — or an election.
And that may be why the US Federal Communications Commission is now “proposing a first-of-its-kind rule to mandate disclosure of artificial intelligence-generated content in political ads,” NBC News reported last week. The hitch: the FCC’s proposed rule, which would require political ads on TV and radio to say whether they include any AI-generated content, may not go into effect before the US elections in November.
OpenAI offers voice features to a small group of paying ChatGPT users
Almost three months after actor Scarlett Johansson accused OpenAI of copying her voice without permission for a new feature in its popular chatbot, the company rolled out its “Advanced Voice Mode” to a small number of ChatGPT paid subscribers, in what it’s described as an alpha trial, CNET’s Lisa Lacy reported.
“Users with access have jumped on social media to share their initial experiences, which include getting help with French pronunciations, mimicking an airline pilot speaking from the cockpit, and imitating seven US regional dialects,” Lacy noted. “The New York and Midwestern accents could use a little work, but the chatbot knows that New Yorkers fold their pizza.”
ChatGPT Plus costs $20 a month. Advanced Voice Mode, which lets users hold more natural, real-time conversations with ChatGPT, also senses and responds to your emotions and lets you interrupt it midresponse, Lacy added.
OpenAI said ChatGPT can’t impersonate other people’s voices, and that the company has added filters to block requests involving copyrighted audio. Advanced Voice Mode will roll out to more ChatGPT Plus subscribers this fall.
An AI pendant you can call your friend?
Companies creating AI wearables have had mixed results in the market so far, judging from the reception of the much-maligned, $699 Humane AI pin and the $199 Rabbit R1 handheld device.
That hasn’t stopped developers from forging ahead. Enter “friend,” a $99 AI pendant whose goal is to pretty much just keep you company if you’re lonely. It’s always listening to you, but instead of talking back with an AI-generated voice — like, say, your Amazon Alexa assistant — the pendant will send a text to your phone with its response to your musings.
It also may not be the bestest friend, CNET’s Gael Cooper reported. The trailer for the device, which you can watch on YouTube, “points out what would seem to be a major problem with friend: Each time a person tries to converse with the device, they have to tap it, wait for a corresponding beep and then turn their attention to their phone to read the text, since friend can’t just speak back like the virtual assistants to which it seems similar,” Cooper said.
“If you’re lonely, but you already have access to a smartphone,” Cooper added, “wouldn’t you just text a real person or lose yourself in doomscrolling the news or read Reddit or play Candy Crush to ease your loneliness?”
The friend pendant is about the size of an Apple AirTag, Wired reported. Its creator, 21-year-old Avi Schiffmann, made headlines for creating the first website to track COVID-19 cases around the world. He told Wired the device came out of his own loneliness and that he just wanted an AI assistant to talk to him.
No judgment if you want to preorder this “friend,” which is scheduled to be available in 2025. The device requires a phone and only works on iPhones for now, though the product FAQ says Android may be added, depending on demand.
Colin Kaepernick aims to help creators, Perplexity ponies up
Here are a few other bits of AI news that caught my attention:
A platform for creators
Former NFL quarterback and self-described civil rights activist Colin Kaepernick has created a new company, Lumi, to let storytellers and creators use AI to get past the roadblocks that keep their work from reaching the world. Forbes said the platform could be a boon to creators of comic books, graphic novels and manga.
“Lumi’s mission is to democratize storytelling by providing tools for creators to turn their ideas into finished products, as well as distributing and merchandising those stories — transforming any creator into Disney,” the company said in a statement. “By leveraging advanced AI tools, Lumi enhances the creative process, allowing creators to focus on bringing their stories to life, while the platform handles all of the logistics.”
Perplexity pays publishers after content dispute
After being accused by Forbes, Wired and other publishers of plagiarizing copyrighted content for its AI search service, Perplexity announced a new revenue-sharing program for publishers in a company blog post. The first group of takers includes Time, Der Spiegel, Fortune, Entrepreneur, The Texas Tribune and WordPress.com.
“Any time a user asks a question and Perplexity generates advertising revenue from citing one of the publisher’s articles in its answer, Perplexity will share a flat percentage of that revenue,” CNBC reported. That percentage counts on a per-article basis, Perplexity’s chief business officer, Dmitry Shevelenko, told CNBC in an interview, meaning that if three articles from one publisher were used in one answer, the partner would receive “triple the revenue share.”
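If CNBC’s description holds, the payout math is simple multiplication per cited article. Here’s a minimal sketch, assuming a hypothetical flat rate; Perplexity hasn’t disclosed the actual percentage.

```python
# Hypothetical sketch of Perplexity's per-article revenue share as CNBC
# describes it. The flat percentage is not public; the rate below is a
# made-up placeholder for illustration only.

REVENUE_SHARE_RATE = 0.10  # placeholder flat percentage per cited article

def publisher_payout(ad_revenue_from_answer: float, articles_cited: int) -> float:
    """Each cited article earns the flat rate, so three citations pay triple."""
    return ad_revenue_from_answer * REVENUE_SHARE_RATE * articles_cited

# An answer earning $1.00 in ad revenue that cites three of a publisher's
# articles would pay triple the single-article share.
print(f"${publisher_payout(1.00, 3):.2f}")  # $0.30
```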
The move comes after high-profile authors, represented by the Authors Guild, and publishers, including The New York Times, the Chicago Tribune and the Center for Investigative Reporting, sued OpenAI, Microsoft and other makers of AI chatbots, alleging that they’ve been slurping up copyrighted content without permission or compensation to the content owners. Perplexity, which aims to take on Google in AI-based search, “raised new funding in April at a valuation exceeding $1 billion — doubling its valuation from three months before,” CNBC added, suggesting the company needed to shore up its business as it seeks to grow and attract new audiences.
Meta helps creators make AI replicas of themselves
Putting aside legitimate concerns that unscrupulous actors will use AI to create digital duplicates of people without their permission, we turn to recent news from Meta. The tech giant launched a feature that lets creators make AI-powered characters and AI-powered duplicates of themselves that people can then DM in Meta-owned Instagram, Messenger and WhatsApp, CNET reported.
Meta said the new functionality is part of its AI Studio, a free feature that lets influencers and other content creators in the US “build an AI extension of themselves” or use AI to “create conversational AIs based on their interests, for fun, utility or support.” As an example, Meta said travel influencers might want to create local guides in the form of AI characters or AI-based versions of themselves.
AI vocabulary lessons
To help you get up to speed on AI lingo, I’ve started a new “Adventures in AI” series of short takes on TikTok, to provide overviews of key terms that you (and anyone interested in AI) should know. Lesson 1 covers gen AI, chatbots and large language models, or LLMs. Bonus: You can see two of my most-prized possessions behind me — a lightsaber and my 3D printed model of the Maltese Falcon.