Intel recently unveiled its Xeon 6 family, the latest addition to its server CPU lineup, replacing the former “Scalable” branding. This release introduces two lines of chips: Granite Rapids, which uses P (Performance) cores, and Sierra Forest, which features E (Efficient) cores.
Intel plans to stagger the rollout of the Xeon 6 CPUs: the 6700E chips launch first, with the Xeon 6900P CPUs scheduled to follow in Q3 of this year. Further releases, including the 6900E, 6700P, 6500P, Xeon 6 SoC, and 6300P, are expected in Q1 2025.
The Xeon 6700E series is designed to cater to hyperscalers and offers up to 144 cores, supported by DDR5 memory and PCIe Gen5, within a 250W TDP. Serve The Home had the chance to put the Xeon 6780E and 6766E CPUs through their paces and, spoiler alert, said they ‘Shatter Xeon Expectations’.
“Super” power consumption
The two Sierra Forest processors were pitted against AMD’s EPYC Bergamo and Siena series and against older Intel models, such as the Xeon Gold 5218 from the Cascade Lake generation.
The Intel chips were also compared with the Ampere Altra Max, an ARM-based processor known for its efficiency. Finally, the E-cores were compared to the P-cores of the 5th Gen Intel Xeon “Emerald Rapids” series.
For the full results you’ll need to check out Serve The Home’s exhaustive testing, but the Xeon 6780E and 6766E performed competitively and excelled in power efficiency (the site describes Sierra Forest’s power consumption as “super”) and in multi-threaded workloads.
The dual-socket capability of these chips gives Intel an edge over competitors such as AMD’s EPYC series, allowing for better scalability and flexibility in high-density server environments.
Summing up the review, Serve The Home’s Patrick Kennedy observed, “If you still have Xeon E5 servers or 1st/ 2nd gen Intel Xeon Scalable virtualization or container hosts, Sierra Forest offers wild consolidation gains that will drive big power savings. Those power savings can be directly channeled to add more AI servers, even if your traditional computing demands are slowly growing.”