What if AI doesn’t just keep getting better forever?

For years now, many AI industry watchers have looked at the quickly growing capabilities of new AI models and mused about exponential performance increases continuing well into the future. Recently, though, some of that AI “scaling law” optimism has been replaced by fears that we may already be hitting a plateau in the capabilities of LLMs trained with standard methods.
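(For context, the "scaling laws" in question are empirical power-law fits, popularized by OpenAI's own 2020 research, that relate a model's training loss L to the compute C, data D, or parameter count N poured into training it, roughly of the form

L(C) ≈ (C_c / C)^α_C

where C_c and α_C are fitted constants. Because the fitted exponents are small, each fixed improvement in loss has historically required multiplying the resources spent, which is why diminishing returns from that recipe loom so large.)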

A weekend report from The Information effectively summarized how these fears are manifesting among a number of insiders at OpenAI. Unnamed OpenAI researchers told the outlet that Orion, the company’s codename for its next full-fledged model release, is showing a smaller performance jump than the one seen between GPT-3 and GPT-4. On certain tasks, in fact, the report says the upcoming model “isn’t reliably better than its predecessor.”

On Monday, OpenAI co-founder Ilya Sutskever, who left the company earlier this year, added to concerns that LLMs are hitting a plateau in what can be gained from traditional pre-training. Sutskever told Reuters that “the 2010s were the age of scaling,” a period when throwing additional computing resources and training data at the same basic training methods could yield impressive improvements in subsequent models.

“Now we’re back in the age of wonder and discovery once again,” Sutskever told Reuters. “Everyone is looking for the next thing. Scaling the right thing matters more now than ever.”

What’s next?

A large part of the training problem, according to experts and insiders cited in these and other pieces, is a lack of fresh, high-quality text data for new LLMs to train on. At this point, model makers may have already picked the lowest-hanging fruit from the vast troves of text available on the public Internet and in published books.


