Should you send your kid to AI summer camp?
Yes — at least according to my unscientific polling of educators and parents, and executives at companies who plan on hiring AI-educated talent for years to come.
Generative AI tools including Midjourney, OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, Adobe’s Firefly, and ElevenLabs’ text-to-audio converter are already showing how the tech will transform industries and occupations, from computer programming to animation, as you’ll see in some of the news below. So if you’re looking for guidance on how to help your children get ready for an AI-assisted world, summer camp might be a worthwhile starting place.
“AI as a science is not incorporated into the curriculum and we’re years away from that happening because those are changes that have to be made at the state education level,” said Stacy Hawthorne, chief academic officer for Learn21, a nonprofit that develops educational tech for K-12 schools. “Students that aren’t having exposure to AI in a critical thinking capacity are literally going to be further behind than they are right now.”
Today, just seven states have official guidance on how to approach AI in education: California, North Carolina, Ohio, Oregon, Virginia, Washington and West Virginia. A handful of others are in the process of developing AI guidelines.
Hawthorne pointed me to TeachAI.org, a coalition of more than 60 tech companies, education and government organizations, associations and researchers. They banded together in early 2023 to provide free toolkits and resources for school leaders, staff and policymakers on how to develop programs and policies around AI in schools. She also shared an annotated bibliography of AI resources compiled by Learn21.
There are even more resources at AI4K12.org, a nonprofit working to develop national guidelines for K-12 education on AI. The group is funded by the National Science Foundation and the Carnegie Mellon University School of Computer Science.
Hawthorne said that in addition to experimenting with chatbots for classroom assignments, kids can get a boost from summer camps that incorporate AI into their programs, since students should know what an AI chatbot is, how it works — including the science and engineering behind the tech — and how to use new tools effectively.
But she’s also a big fan of making sure kids don’t spend four to six hours of their summer days just sitting in front of computers. Hopefully dinosaur camp, cooking camp, and science, robotics and sports camps — places where kids can learn new skills in a hands-on environment — will incorporate some AI learning into their programs, Hawthorne said.
“The world is changing so fast,” she said, and AI isn’t going away, so parents, educators, governments and others should band together today to make sure kids are learning about AI. If not, she said, “you’re literally just creating a generation of kids that are obsolete before they even have half a chance to be successful.”
Here are the other doings in AI worth your attention.
AI may be as smart as humans by 2029. Musk says by next year
How long will it take before an AI system is smarter than human beings, a milestone that falls under a category of AI called artificial general intelligence (AGI)?
Depends who you ask.
Futurist Ray Kurzweil said AI will reach human-level intelligence in five years. “We’re not quite there. But we will be there, and by 2029, it will match any person,” Kurzweil said during an interview with podcast host Joe Rogan. “I actually said that in 1999. I said we would match any person by 2029. So 30 years, people thought that was totally crazy. … I’m actually considered conservative. People think that will happen next year or the year after.”
One of those people who think it’ll be next year is billionaire Elon Musk, who tweeted on his social media platform, X, that not only will AI “probably be smarter than any single human next year,” but also that “by 2029, AI is probably smarter than all humans combined.”
Several tech titans are working on building AGI, including Mark Zuckerberg’s Meta, OpenAI and Google. What’s the difference between today’s genAI and AGI? Artificial general intelligence is closer to the fictional AI systems we’ve seen in movies, like Jarvis from Iron Man and HAL from 2001: A Space Odyssey. In comparison, today’s genAI chatbots, which create content by predicting the next word or sentence after analyzing vast amounts of data and finding patterns, are commonly described as autocomplete on steroids.
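To make the “autocomplete on steroids” idea concrete, here’s a minimal toy sketch of next-word prediction. This is not how any real chatbot is built (those use neural networks trained on enormous datasets, operating on tokens rather than whole words); the tiny training text and the simple word-pair counting below are invented purely for illustration.

```python
import random
from collections import defaultdict

# Toy illustration only: predict the next word from counts of word pairs.
# Real chatbots learn far richer patterns with neural networks, but the
# core loop is the same: predict the next piece of text, append, repeat.

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog around the mat ."
)

# Count how often each word follows each other word in the training text.
follow_counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follow_counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a next word in proportion to how often it followed `prev`."""
    options = follow_counts[prev]
    choices, weights = zip(*options.items())
    return random.choices(choices, weights=weights)[0]

# "Autocomplete" a short continuation, one predicted word at a time.
word = "the"
generated = [word]
for _ in range(10):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))  # e.g.: the dog sat on the mat . the cat chased the
```

Scale that same predict-and-append loop up to billions of parameters and trillions of words of training data, and you get something much closer to ChatGPT or Gemini.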
In response to Musk’s post on X, Lex Fridman, host of a popular podcast on robots and humans, tweeted, “We’re in for a few interesting years. I hope humanity wins in the end.”
Me too, Lex. Me too.
EU passes first AI legislation meant to protect consumers
European Union lawmakers agreed in December on principles for first-of-its-kind AI legislation to ensure safeguards around the development and use of the technology. This past Wednesday, they gave final approval to the EU AI Act.
It “imposes blanket bans on some ‘unacceptable’ uses of the technology while enacting stiff guardrails for other applications deemed ‘high-risk,’” CNN reported.
You can get an overview of the landmark law at the European Parliament site.
The regulation puts limits on the use of biometric identification systems by law enforcement, and it offers consumers the right to launch complaints and receive meaningful explanations on how AI systems work and what those systems are doing with personal information, according to the European Parliament press release. AI-generated deepfake pictures, video or audio of existing people, places or events must have a label saying that the content has been artificially manipulated, The Associated Press noted.
In addition, AI developers will be required to publish “detailed summaries of the content used for training” to ensure that the system complies with EU copyright laws, the European Parliament site said.
“The EU AI Act outlaws social scoring systems powered by AI and any biometric-based tools used to guess a person’s race, political leanings or sexual orientation,” CNN reported. “It also bans the use of AI to interpret the emotions of people in schools and workplaces, as well as some types of automated profiling intended to predict a person’s likelihood of committing future crimes.”
Companies that violate the AI Act could be fined up to 35 million euros (about $38 million) or 7% of their global revenue, the law says.
The regulation will begin going into effect later this year as part of a rollout that will take about two years. That still puts the EU ahead of the US in adopting such guardrails, the AP reported. There’s talk of legislation around AI in the US, but so far that hasn’t translated into law — though President Joe Biden released an Executive Order in October outlining his administration’s guidelines for AI safety.
“The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential,” said Dragos Tudorache, a Romanian lawmaker who worked on the EU law.
OpenAI’s Sora tool should ‘freak us out,’ WSJ reviewer says
The Wall Street Journal got a rare interview with Mira Murati, OpenAI’s chief technology officer, and asked the company to create animations and videos using its new photorealistic text-to-video generator, Sora.
The results are “good enough to freak us out,” reporter Joanna Stern wrote.
One of the WSJ’s asks was to have Sora generate a clip of a “bull in a china shop, in the style of an animated movie.” Anyone who’s a Hollywood animator today should check it out, because it raises serious questions about what the future holds for that occupation, Stern said.
“Welcome to the next ‘holy cow’ moment in AI, where your words transform into smooth, highly realistic, detailed video,” she said. “So long, reality! Thanks for all the good times.”
Sora, which means “sky” in Japanese, was unveiled last month and garnered a lot of attention because of the quality of the videos it can create after being prompted with a few words of text. OpenAI says Sora can “create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions.” See for yourself in this 10-minute demo reel that OpenAI posted on YouTube and that’s already been viewed more than 2 million times. They had me at the golden retriever puppies playing in the snow.
As for Murati, who’s overseeing Sora’s development, she told the WSJ that Sora is trained on publicly available and licensed data, including content from Shutterstock (though she initially didn’t know whether it was using videos from YouTube, Instagram and Facebook). Murati also said the technology, which won’t be released to the public for a while, is still far from perfect — one video shows a woman with 10 fingers growing out of one hand.
Still, the Sora demo reel was enough to convince director Tyler Perry to put a hold on his $800 million plans to expand his film production studio, “saying this tech could save money on sets and location shoots,” Stern noted. DreamWorks co-founder Jeffrey Katzenberg said in a November interview that he could see 90% of animation artists replaced by AI.
Murati said OpenAI is working with animators as part of a “slow, careful rollout” of Sora, telling the WSJ that “we want people in the film industry and creators everywhere to be a part of informing how we develop it further.”
I’m not sure what that means for the future of video creators, including animators. Neither is the WSJ’s Stern.
“Murati assured me OpenAI is taking a measured approach to releasing this powerful tool. That doesn’t mean everything’s gonna be alright,” Stern wrote. “If OpenAI is that bull in the china shop, it might be treading lightly now. Inevitably, though, it’s going to start smashing plates.”
Elon Musk releases his genAI chatbot Grok as open source
Elon Musk last week announced in a post on his social media platform that he was going to open-source Grok, the chatbot his xAI company is developing as a competitor to OpenAI’s ChatGPT. A few days later, Musk followed through on his promise with the release of Grok-1.
“Still work to do, but this platform is already by far the most transparent & truth-seeking (not a high bar tbh),” Musk wrote in a post on X on March 17.
Making the computer code behind Grok freely available comes after Musk, an early investor in OpenAI, sued the company, saying it had veered away from its original nonprofit, open-source mission by turning into a for-profit company. In response, “OpenAI publicized emails that showed the Tesla CEO supported a plan to create a for-profit entity and wanted a merger with the EV maker to make the combined company a ‘cash cow,'” Reuters reported.
Wired offered its take on why Musk would feel compelled to open-source Grok: because it makes business sense as he works to catch up with market leader OpenAI and woo users over to Grok.
“Open sourcing Grok could help Musk drum up interest in his company’s AI,” Wired said. “Limiting Grok access to only paid subscribers of X, one of the smaller global social platforms, means that it does not yet have the traction of OpenAI’s ChatGPT or Google’s Gemini. Releasing Grok could draw developers to use and build upon the model, and may ultimately help it reach more end users. That could provide xAI with data it can use to improve its technology.”
The New York Times said that in addition to catching up with rivals, Musk has positioned himself among those who think open sourcing such powerful systems will make them safer. “The controversy over open sourcing generative A.I. — which can create realistic images and videos and recreate humanlike text responses — has roiled the tech world over the past year after the explosion in the popularity of the technology. Silicon Valley is deeply divided over whether the coding underlying A.I. should be publicly available, with some engineers arguing that the powerful technology must be guarded against interlopers while others insist that the benefits of transparency outweigh the harms. By publishing his A.I. code, Mr. Musk planted himself firmly in the latter camp, a decision that could enable him to leapfrog competitors who have had a head start in developing the technology.”
To be sure, Musk isn’t the first to embrace open-source AI. Meta and France’s Mistral already have open-source AI models, Reuters reminds us, adding that “Google has also released an AI model called Gemma that outside developers can potentially fashion according to their needs.”
The first AI software engineer is named ‘Devin’
Speaking of careers that might be heavily affected by AI, startup Cognition Labs released a new tool called “Devin” that can “autonomously code, complete engineering jobs on Upwork, and even tune its own AI model,” PC Mag reported. “Devin takes the premise of GitHub and Microsoft’s Copilot developer tool one step further, as it’s able to finish whole projects on its own without human intervention.”
In a video posted on X, Cognition CEO Scott Wu calls Devin “the first AI software engineer,” with the company noting that Devin “has successfully passed practical engineering interviews from leading AI companies, and has even completed real jobs on Upwork.”
Once upon a time, being a computer programmer was considered a ticket to a long and lucrative career. It may still be, because someone needs to fact-check code and make sure it’s doing what it’s supposed to do. Still, Devin reminds me that anyone concerned about the future of AI and jobs should look over the list of roles likely to face minimal disruption from AI. According to the Pew Research Center, that list includes barbers, childcare workers, dishwashers, firefighters and pipelayers. I’ll add construction workers and gardeners to the list.
Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.