Apple may be late to the generative AI party, but don't count it out just yet. According to Bloomberg's Mark Gurman and MacRumors' Hartley Charlton, the company will use the M2 Ultra in its own servers – housed in its own data centers – to power its growing generative AI ambitions. Launched in June 2023 and used in the Mac Studio, the chip remains the most complex piece of silicon Apple has ever released, with 24 CPU cores, up to 76 GPU cores and a 32-core Neural Engine.
The report does not say whether Apple plans to revive its defunct Xserve range of rack servers, or whether it will bring back its Mac OS X Server operating system. Both products have been mothballed for years, ever since Apple shifted its focus away from the enterprise market at the beginning of the last decade. A separate article from the WSJ adds that Apple is using the internal code name ACDC (Apple Chips in the Data Center).
In that piece, authors Aaron Tilley and Yang Jie posit that Apple would use the formidable firepower of its data centers for training and for more complex inference, while lighter workloads – or those requiring access to personal data – would be handled locally on the device itself, eliminating the need to go to the cloud.
This mirrors what x86 stalwarts AMD and Intel have been advocating, alongside Microsoft, with the AI PC paradigm: big server chips (such as EPYC and Xeon) working in tandem with smaller client processors (Ryzen or Core). The difference, of course, is that Apple would be repurposing an existing processor rather than designing a new one.
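To make the reported split concrete, here is a minimal, purely hypothetical sketch of how a hybrid dispatcher might route requests between on-device and data-center inference along the lines the WSJ describes. None of these names, thresholds or fields come from Apple; they are invented for illustration only.

```python
# Hypothetical sketch (not Apple's API): routing generative AI requests between
# on-device and data-center inference, per the split described in the WSJ report.
# All names and thresholds below are invented for illustration.
from dataclasses import dataclass


@dataclass
class InferenceRequest:
    prompt: str
    uses_personal_data: bool   # e.g. touches contacts, messages or health data
    estimated_flops: float     # rough compute estimate for the model call


ON_DEVICE_FLOPS_BUDGET = 1e12  # arbitrary cut-off chosen for this sketch


def route(request: InferenceRequest) -> str:
    # Keep anything touching personal data on the device, as the report suggests.
    if request.uses_personal_data:
        return "on-device"
    # Heavier generative workloads go to the M2 Ultra-powered servers.
    if request.estimated_flops > ON_DEVICE_FLOPS_BUDGET:
        return "data-center"
    return "on-device"


print(route(InferenceRequest("Summarise my unread messages", True, 5e11)))   # on-device
print(route(InferenceRequest("Write a 2,000-word essay", False, 8e12)))      # data-center
```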
Another path to AI hegemony?
Which raises another question: did Apple plan this – a jack-of-all-trades chip family – from the outset? Bear in mind that the M2 Ultra is probably the only server processor in the world to pack both a GPU and a dedicated AI engine. Could it pave the way for a server-only variant (an S1, perhaps?) geared towards the data center, with far more cores, no GPU and far, far more memory?
All in all, though, there was never any doubt that Apple would sooner or later start dabbling in server processors. It was a matter of when, rather than if. Reports of Apple building its own servers date back as far as 2016 and are in line with Apple's doctrine of owning the stack. In 2022, the company also looked to recruit an “upbeat and hard-working hardware validation engineer” to “develop, implement and complete hardware validation plans for its next generation hyperscale and storage server platforms”.
A year later, research carried out by analyst firm Structure Research found that Apple was planning to triple the critical power capacity of its data centers to accommodate its two billion active devices (and nearly one billion iOS users) and deliver more services.
Of course, hardware requires software, and Apple has been increasingly active over the past 12 months: it has released MLX, a machine learning framework designed specifically for Apple Silicon, offered a glimpse of an AI-enhanced Siri, and published OpenELM, a family of open-source language models.
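For a sense of what MLX looks like in practice, here is a minimal sketch using its Python API: a small matrix multiply built lazily and then evaluated on Apple Silicon's unified memory. It is illustrative only and says nothing about how Apple deploys MLX (if at all) on its own servers.

```python
# Minimal MLX sketch (https://github.com/ml-explore/mlx).
# Operations build a lazy compute graph; mx.eval() runs it on Apple Silicon
# (GPU via Metal, or CPU), with arrays living in unified memory.
import mlx.core as mx

a = mx.random.normal((1024, 1024))
b = mx.random.normal((1024, 1024))

c = a @ b      # lazily recorded matrix multiplication
mx.eval(c)     # force evaluation of the graph
print(c.shape)
```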
It will be immensely instructive to see how Apple manages to do generative AI at scale using anything other than brute-force GPUs (à la Nvidia's H100). This may well have a direct impact on the fate of another trillion-dollar company: Nvidia. WWDC, Apple's annual developer conference, takes place next month, and it will have AI written all over it.