Writers Say No to AI-Generated Novels, Bill Gates Says We Should Get Familiar With AI



Is an AI tool capable of creating art? Or should we think of it only as “art,” as in an “artistic replacement tool”?

That was the topic of conversation last week after NaNoWriMo, the nonprofit behind National Novel Writing Month, declined to ban the use of AI in its speed-writing challenge. (Since its start in 1999, National Novel Writing Month has encouraged writers to produce a 50,000-word original novel during the month of November.)


Instead, NaNoWriMo noted in a blog post (later revised) that while some writers “stand staunchly against AI for themselves,” it believed “that to categorically condemn AI would be to ignore classist and ableist issues surrounding the use of the technology, and that questions around the use of AI tie to questions around privilege.”

Some people took exception to the fact that the group had an AI sponsor, suggesting that its AI stance was informed more by its backers than by how generative AI writing tools might affect creative work.

The controversy led one member of NaNoWriMo’s board to resign (his letter was titled NaNoHellNo) and prompted the group to update its online statement to note that some bad actors are doing harm to writers and acting unethically. The dustup also led to stories in publications including The Washington Post, Wired, CNET and The Atlantic, whose columnist welcomed the “robot overlords” to the annual speed-writing contest.

The discussion about whether novelists should use AI to help write novels isn’t new. Japanese author Rie Kudan won an award earlier this year for a novel she wrote with help from ChatGPT. But the NaNoWriMo kerfuffle unfolded a few days after The New Yorker featured an essay on AI and art by noted sci-fi author Ted Chiang. Chiang, whose work includes the short story on which the movie Arrival is based, argued that even if gen AI improves to the point that it can write a credible novel or produce a realistic painting, the result doesn’t count as what he calls real art. Why? Because unlike humans, the AI isn’t really trying to communicate anything to us through its output (other than that it can produce output).

I’ll just note three quotes from his essay that are worth considering: 

“We are all products of what has come before us, but it’s by living our lives in interaction with others that we bring meaning into the world,” Chiang wrote. “That is something that an auto-complete algorithm can never do, and don’t let anyone tell you otherwise.”

“Art is something that results from making a lot of choices,” he added. “To oversimplify, we can imagine that a ten-thousand-word short story requires something on the order of ten thousand choices. If an AI generates a ten-thousand-word story based on your prompt, it has to fill in for all of the choices that you are not making.”

His takeaway: “The task that generative AI has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.”

Chiang’s essay and the NaNoWriMo row have spurred numerous discussions about how, when, if and why we should be using AI tools, and what to think of the output. As for me, I agree that the act of writing, whether it’s a novel or an email, should always be intentional. I wonder if AI’s promise of helping us fast-track tasks — a “let’s just get the writing out of the way” mentality — might mean it’s time for us to reconsider why we’re doing such tasks in the first place. 

Here are the other doings in AI worth your attention.

YouTube developing AI detection tools for faces, singing voices

In a blog post last week, YouTube announced two new AI tools that it says will help content creators spot deepfakes on its popular video platform. The tools are part of its ongoing effort to develop AI “guardrails” aimed at protecting creators’ rights. “We believe AI should enhance human creativity, not replace it,” the company wrote.

First, YouTube said it’s developed “synthetic-singing identification technology” that will allow people to “automatically detect and manage AI-generated content on YouTube that simulates their singing voices.” That tech expands YouTube’s Content ID system, which is used to identify copyright-protected content. A pilot program with the new tech for checking singing voices is scheduled to launch in early 2025.

Second, the company said it’s “actively developing” new technology that will allow people to “detect and manage AI-generated content showing their faces on YouTube.” No date on when that’ll be released. 

Almost all of Time magazine’s AI 100 are new to the list, a sign things are moving fast

Time released its second annual list of the 100 most influential people in AI. The list features company executives, researchers, policymakers and government officials, and influencers. The magazine also noted that 91 of the people on the 2024 list “were not on last year’s, an indication of just how quickly this field is changing.” I agree.

Among the 40 CEOs and founders are Sam Altman of OpenAI, Jensen Huang of Nvidia, Steve Huffman of Reddit, Satya Nadella of Microsoft, Sundar Pichai of Google and Mark Zuckerberg of Meta.


If that seems like a lot of guys, it is. But Time also singled out Daphne Koller, CEO of Insitro, and said the “list features dozens of women leaders in AI,” including US Secretary of Commerce Gina Raimondo; Cohere for AI research head Sara Hooker; US AI Safety Institute Director Elizabeth Kelly; Khan Academy chief learning officer Kristen DiCerbo; and UK Competition and Markets Authority CEO Sarah Cardell.

The youngest person on the list, 15-year-old Francesca Mani, is a high schooler “who started a campaign against sexualized deepfakes after she and her friends were victims of fake AI images.” What a reason to be on the list. The oldest person is “77-year-old Andrew Yao, a renowned computer scientist who is shaping a new generation of AI minds at colleges across China,” Time said.

I thought it was laudable that Time recognized “creatives” who are “interrogating the influence of AI on society.” That list includes actor Scarlett Johansson, who accused OpenAI of co-opting her voice for its Voice Mode feature in the latest version of ChatGPT. Johansson should rightly get credit for highlighting the concerns humans have about their intellectual property — including their face and voice — being copied without permission by AI.

Time explains how it picked this year’s notables here.

Bill Gates says we should be learning how to use AI tools 

Though Bill Gates didn’t make Time’s list of AI notables, the Microsoft co-founder does have a few thoughts about AI, based on his decades of experience introducing the world to new technologies — starting with the personal computer.

In an interview with CNET’s Kourtnee Jackson, Gates, who’s starring in a new Netflix documentary series called “What’s Next? The Future With Bill Gates,” said that today’s tech experts don’t really know how AI will affect jobs and society in the future. But what he does know is that all of us should be working with AI tools now, given where the technology is headed.

“Whether you’re an illustrator or a coder or a support person or somebody who is in the health care system — the ability to work well with AI and take advantage of it is now more important than understanding Excel or the internet,” Gates said. 

He believes that we, as a society, should be discussing how humans will live in an AI-powered world, and that we should probably set some limits, even as AI is being considered as a way to help cope with shortages of teachers, doctors and other professionals.

“We don’t want to watch robots play baseball, and so where is the boundary where you say, ‘OK, whatever the machines can do is great and these other things are perhaps very social activities, intimate things, where we keep those jobs’?” he said. “That’s not for technologists to understand better than anyone else.”

As for AI and misinformation, especially as the US nears the 2024 presidential election, Gates told Jackson he doesn’t have a solid answer.

“The US is a tough one because we have the notion of the First Amendment and what are the exceptions, like yelling ‘fire’ in a theater,” he said. “I do think over time, with things like deepfakes, most of the time you’re online you’re going to want to be in an environment where the people are truly identified. That is, they’re connected to a real-world identity that you trust, instead of just people saying whatever they want.”

“What’s Next? The Future With Bill Gates” is set to air on Netflix on Sept. 18.

NYT, Washington Post embrace AI-generated content

Publishers, including Time, say they’re working with AI companies and looking to AI tools to help identify new business models and opportunities. Add The New York Times and The Washington Post to that list.

The Times introduced its first regular use of AI in a new Connections Bot that helps people who play the Connections game get insight into how they solved each day’s puzzle. Beyond helping folks understand the notoriously difficult game, the Times told subscribers, the move “also has a larger significance: It includes the first English text generated by AI that The Times newsroom will regularly publish.”

Here’s how the Connections Bot works, according to CNET’s Gael Cooper.

“The bot will compare your game with that of other players and give you a score of up to 99. To get the perfect 99, you need to win without any mistakes and solve the hardest categories first — so purple first, blue second, green third and yellow last,” Cooper said. “Once you receive your skill score, the bot uses AI to try and read your mind and determine what you were thinking when you guessed wrong.”
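
To make that concrete, here’s a minimal sketch of how a scoring heuristic along those lines might work. To be clear, this is a hypothetical illustration, not the Times’ actual algorithm: the penalty weights and the score_game function are invented for the example.

```python
# Hypothetical sketch of a Connections-style skill score.
# NOT the NYT's actual algorithm; all weights are invented for illustration.
# Assumes 4 categories ordered hardest to easiest, a max score of 99,
# and that a perfect game solves hardest-first with zero wrong guesses.

HARDEST_FIRST = ["purple", "blue", "green", "yellow"]

def score_game(solve_order, mistakes):
    """Return a 0-99 skill score from the order categories were solved
    and the number of wrong guesses."""
    score = 99
    # Penalize each wrong guess (the game allows up to 4 mistakes).
    score -= 10 * mistakes
    # Penalize deviation from the hardest-first order: each category
    # loses points for how far it sits from its ideal position.
    for ideal_pos, category in enumerate(HARDEST_FIRST):
        actual_pos = solve_order.index(category)
        score -= 3 * abs(actual_pos - ideal_pos)
    return max(score, 0)

# A flawless hardest-first game earns the full 99.
print(score_game(["purple", "blue", "green", "yellow"], mistakes=0))  # 99
# Solving easiest-first with one mistake scores lower.
print(score_game(["yellow", "green", "blue", "purple"], mistakes=1))  # 65
```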

As for The Washington Post, Axios reported that the newspaper “published its first-ever story built on the work of a new AI tool called Haystacker that allows journalists to sift through large data sets — video, photo or text — to find newsworthy trends or patterns.” 

Haystacker, which took more than a year to build, will be used by the Post’s in-house journalists, the paper’s CTO, Vineet Khosla, told Axios. “Asked whether the Post would ever license Haystacker to other newsrooms, Khosla said that’s not the company’s focus right now,” Axios said.

This isn’t the first AI tool deployed by the Post. In July, it announced an AI chatbot that answers readers’ questions about climate issues, with the answers pulled from Washington Post articles. It also “debuted a new article summary product that summarizes a given article using generative AI,” Axios reported. “Khosla said the company will ramp up its investment in the summary product as the election draws nearer.”

And back in 2016, the paper created a robot reporter called Heliograf that it then used to write more than 850 short news articles and alerts about politics and sports.  

If you want a summary of some of the initiatives that newsrooms have launched with AI, check out the Associated Press’ April report on how the tech is already in use. 

Younger staffers might not be the best gen AI coaches

Younger employees may be more willing to learn new technologies and test new things, because they’re often closest to the work. But senior executives shouldn’t rely on them for insight into how to adopt and use gen AI technology, because they might not know enough about the business — or the fast-evolving technology — to adequately assess the risks.

That’s the conclusion of a new paper authored in part by researchers at Harvard Business School, the MIT Sloan School of Management, Wharton and Boston Consulting Group. The paper is titled Don’t Expect Juniors to Teach Senior Professionals to Use Generative AI.

“The literature on communities of practice demonstrates that a proven way for senior professionals to upskill themselves in the use of new technologies that undermine existing expertise is to learn from junior professionals,” says the 29-page paper. The literature “notes that juniors may be better able than seniors to engage in real-time experimentation close to the work itself, and may be more willing to learn innovative methods that conflict with traditional identities and norms.” But after talking with 78 such “junior consultants” about working with OpenAI’s GPT-4 (which powers ChatGPT), the researchers found that “such juniors may fail to be a source of expertise in the use of emerging technologies.”

Why?

“The current literature on junior professionals being a source of expertise in the use of new technologies for more senior members” hasn’t looked at “contexts where the juniors themselves are not technical experts, and where technology is so new and rapidly changing that juniors have had no formal training on how to use the technology, have had no experience with using it in the work setting, and have had little experience with using it outside of the work setting,” the paper says. “In such contexts, it seems unreasonable to expect that juniors should have a deep level of technical understanding of the technology.”

The report details all the things junior staffers — and presumably senior staffers — don’t know about gen AI. The TL;DR: Businesses should spend more time learning about gen AI, the opportunities and risks, and how they may need to make changes to “system design in addition to human routines” before moving ahead, since no one, junior or senior, has all the answers yet. (Like Bill Gates said.)




