Artificial intelligence experts tend to repeat two points in public: how advanced and capable AI already is, and how it won’t become The Terminator’s malevolent Skynet. Still, governments around the world have begun asking companies to pledge safety, transparency and a “kill switch” in their technology, in case it goes rogue.
Ilya Sutskever, former chief scientist at OpenAI, has based his next company on this concept. He announced the company, called Safe Superintelligence, in a blog post Wednesday, pledging that his team, investors and business model “are all aligned” and that his team has “one goal and one product: a safe superintelligence.”
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” he wrote, alongside co-founders Daniel Gross, who came from Apple’s AI team, and Daniel Levy, who previously worked at OpenAI. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.”
Sutskever’s announcement has been anticipated since he left OpenAI in May. Sutskever reportedly helped lead the effort to oust OpenAI CEO Sam Altman last year. The boardroom coup ultimately collapsed after one of the company’s key investors, Microsoft, moved to hire Altman and hundreds of OpenAI staff publicly threatened to quit and join him; Altman was reinstated as CEO days later.
Now, the question is who will ultimately end up controlling one of the potentially biggest new technologies in decades. OpenAI has continued apace without Sutskever, launching new products like its GPT-4o model, which the company says responds to people’s requests faster, reasons better, and can hold conversations by voice and through a smartphone camera. Meanwhile, Google, Apple, Facebook and Microsoft have announced new AI features and initiatives to take on both each other and a growing field of startups.
Sutskever reportedly plans to take a different tack, telling Bloomberg in an interview that his company has no “near-term intention” of selling AI products or services.
“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever said. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”
Bloomberg said Sutskever declined to name Safe Superintelligence’s financial backers or disclose how much his company has raised.