Intel sees AI in enterprise on a ‘three to five-year path’




Most enterprises are implementing “proofs of concept” of generative artificial intelligence (gen AI) in their data centers, yet few have production apps, and according to chip giant Intel, it’s going to take them a while to get there.

In an interview with ZDNET, Melissa Evers, vice president of the Software and Advanced Technology Group at the semiconductor giant, said, “There are a lot of folks who agree that there’s huge potential” in gen AI, “whether it be in retail or various verticals, government, et cetera.”

“But shifting that into production is really, really challenging.”

Evers and colleague Bill Pearson, who runs Solution & Ecosystem Engineering and Data Center & AI Software, cited data released earlier this year by consulting firm Ernst & Young showing a rough start for gen AI in the enterprise. 


[Image: Melissa Evers, Intel] “Generic” uses of AI are happening in the enterprise now, says Evers, but it will take another couple of years to see “really much more sophisticated, complex systems, where you have pipelines of different types of generative AI feeding different types of things.”

The data show “43% of enterprises are exploring proof of concepts on generative AI, but 0% of them had brought generative AI to production in terms of use cases,” said Evers, summarizing the findings.  

Moreover, the “generic use cases,” said Evers, are happening now. “Then you’re going to see the customization and further integration in the following year.” 

“And, then, you’re going to see really much more sophisticated, complex systems, where you have pipelines of different types of generative AI feeding different types of things in another year or two after that,” she said. “My guess is we’re on a three to five-year path for that whole vision to be realized. And that’s pretty consistent with what we’ve seen through[out] history with various new technologies adoptions as well.”
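For a concrete picture of what such a pipeline could look like, here is a minimal sketch in Python; the stage names, model names, and the generate() stub are hypothetical illustrations for this article, not Intel or OPEA code.

# Minimal sketch of a multi-stage gen AI pipeline (hypothetical; not Intel/OPEA code).
# Each stage's output feeds the next, as Evers describes.
from typing import Callable

GenFn = Callable[[str, str], str]  # (model_name, prompt) -> completion


def stub_generate(model: str, prompt: str) -> str:
    """Placeholder for a real inference call (a hosted or local LLM endpoint)."""
    return f"[{model} output for: {prompt[:40]}...]"


def pipeline(document: str, generate: GenFn = stub_generate) -> str:
    # Stage 1: a summarization model condenses the raw document.
    summary = generate("summarizer-7b", f"Summarize:\n{document}")
    # Stage 2: an extraction model pulls structured fields out of the summary.
    facts = generate("extractor-7b", f"Extract the key facts as JSON:\n{summary}")
    # Stage 3: a drafting model turns those facts into a customer-facing reply.
    return generate("writer-70b", f"Write a short customer update from:\n{facts}")


if __name__ == "__main__":
    print(pipeline("Quarterly service report: latency improved 12%; two outages resolved."))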

There are many reasons for the lack of traction to date, said Evers and Pearson, including the security concerns raised about gen AI.

Also: Generative AI is new attack vector endangering enterprises, says CrowdStrike CTO

Another issue for enterprises is the rapid pace of change in gen AI, said Evers: the “amount of churn in the industry, and new models and new database solutions that are being built continuously.”

Evers said this problem “is real and felt across the ecosystem” of AI and enterprise.

To address both security and constant change, Intel has announced numerous partnerships in recent days to give enterprises the components of gen AI in a way that is “as close to turnkey as possible,” said Pearson. 

“So we’re looking at rack-scale hardware designs with OEM partners that include compute, network storage, foundational software,” said Pearson, “and then leverage both open source micro-services that we’ve curated into particular use cases you can use or not use, and from ISVs who are offering solutions, or pieces of solutions, that can contribute to building out the RAG solution that a customer is implementing.”

Evers said the offerings are meant to be “hardened” technology to handle the security issues but also “modular, such that I could experiment and see if this model provides me better results or that model provides me better results, or I could experiment with various database types of solutions.”

“I see companies today [that] are saying, ‘I want RAG in a box, I just want a solution that works,’” said Pearson. “You, as the enterprise, can buy the hardware and pick the use case you want to deploy.”
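As a rough illustration of that modular, “RAG in a box” idea, the following Python sketch wires a retrieval-augmented flow out of interchangeable pieces; the KeywordStore retriever, model names, and generate() stub are stand-ins invented for this article, and a real deployment would plug in a production vector database and model endpoint instead.

# Hypothetical sketch of a swappable RAG flow; not Intel's or OPEA's actual interfaces.
from typing import Callable, List, Protocol


class Retriever(Protocol):
    def search(self, query: str, k: int) -> List[str]: ...


class KeywordStore:
    """Toy keyword retriever; a vector database could replace it without changing the flow."""

    def __init__(self, docs: List[str]) -> None:
        self.docs = docs

    def search(self, query: str, k: int = 3) -> List[str]:
        terms = set(query.lower().split())
        ranked = sorted(self.docs, key=lambda d: -len(terms & set(d.lower().split())))
        return ranked[:k]


def stub_generate(model: str, prompt: str) -> str:
    """Placeholder for a real model endpoint; changing `model` swaps the generator."""
    return f"[{model}] answer based on: {prompt[:60]}..."


def rag_answer(question: str, retriever: Retriever,
               generate: Callable[[str, str], str] = stub_generate,
               model: str = "example-8b") -> str:
    context = "\n".join(retriever.search(question, k=3))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(model, prompt)


if __name__ == "__main__":
    store = KeywordStore(["Returns are accepted within 30 days.",
                          "Support hours are 9am to 5pm on weekdays."])
    print(rag_answer("What is the return policy?", store))

Swapping the model argument or the retriever object is the kind of experiment Evers describes, without rebuilding the rest of the pipeline.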

[Image: Bill Pearson, Intel] “We believe we’re going to shift that pie chart” in the hardware battle with Nvidia, says Pearson of Intel’s chances.

Some companies “don’t even want to do that much,” he said. They just want to go to a systems integrator and pick functionality from a menu. A third group, a minority, are “very sophisticated enterprises” that “want to build their own, and we’re working with them on that.”

Intel’s packaged approach to gen AI echoes Nvidia Inference Microservices, or “NIMs,” which Intel’s rival is selling with partners as a ready-built offering for the enterprise.

To bolster its own efforts and to offset Nvidia’s AI dominance, Intel has partnered with a raft of companies, including Red Hat and VMware, on an open-source software consortium called the Open Platform for Enterprise AI (OPEA). This initiative of the Linux Foundation promises “the development of open, multi-provider, robust, and composable gen AI systems.”

The OPEA work is providing “reference implementations” that will be the starting point for applying and tuning those generic functions to which Evers referred. 


“For a chatbot, you know, whether you apply that chatbot to retail versus health care, it’s going to look really different,” she observed. “But here’s an implementation [of a chatbot] that enables you to check out different types of models and their accuracy with your RAG implementation, et cetera.”

OPEA will allow companies to start answering the array of tech questions concerning gen AI, such as, “Do I really need a 70-billion-parameter model, or do I get sufficient accuracy with a 7-billion-parameter model?” she said, referring to the number of neural “weights” that are the defining metric of most gen AI models. (An “AI model” is the part of an AI program containing the neural-net parameters and activation functions that determine how the program behaves.)
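One hedged way to answer that question in practice is to run the same small evaluation set through both candidate sizes and compare accuracy; the model names, questions, and generate() stub below are illustrative placeholders rather than a real benchmark or Intel tooling.

# Illustrative comparison of a small vs. large model on the same eval set (placeholders only).
from typing import Callable, List, Tuple

EVAL_SET: List[Tuple[str, str]] = [
    ("What is the warranty period?", "two years"),
    ("Which operating systems are supported?", "linux and windows"),
]


def stub_generate(model: str, prompt: str) -> str:
    """Placeholder for a real inference call against either candidate model."""
    return "two years" if "warranty" in prompt else "linux and windows"


def accuracy(model: str, generate: Callable[[str, str], str] = stub_generate) -> float:
    # Count answers that contain the expected phrase, then normalize.
    hits = sum(
        expected in generate(model, question).lower()
        for question, expected in EVAL_SET
    )
    return hits / len(EVAL_SET)


if __name__ == "__main__":
    for model in ("candidate-7b", "candidate-70b"):
        print(f"{model}: {accuracy(model):.0%} accuracy on {len(EVAL_SET)} questions")

If the smaller model’s accuracy is close enough for the use case, cost and latency can decide the question.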

Regarding Nvidia’s dominance in the AI accelerator chip market, “We believe we’re going to shift that pie chart,” said Pearson. “We believe that providing open, neutral, horizontal solutions for the ecosystem enables openness, choice, and trust, and fundamentally, the history of technology is built on those principles. 

“If you look at the changes in the data center with regard to Linux penetration [and] software-defined networking, all of these markets were built and defined by openness.”




