According to Israeli startup NeuReality, much of AI's potential goes unrealized because of the cost and complexity of building and scaling AI systems.
Current solutions are not optimized for inference and rely on general-purpose CPUs, which were not designed for AI. Moreover, CPU-centric architectures require multiple hardware components, and CPU bottlenecks leave Deep Learning Accelerators (DLAs) underutilized.
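As a rough illustration of the bottleneck NeuReality describes (this is not the company's code, and the stage timings are invented for the example), consider a conventional serving loop in which the host CPU must decode and pre-process every request before the accelerator ever runs:

```python
# A minimal simulation of a CPU-centric inference pipeline, showing how
# host-side work can starve an accelerator. All timings are hypothetical.
import time

CPU_PREPROCESS_S  = 0.008  # decode + tensorize on the host CPU (assumed)
DLA_INFER_S       = 0.002  # model execution on the accelerator (assumed)
CPU_POSTPROCESS_S = 0.003  # result formatting on the host CPU (assumed)

def handle_request() -> float:
    """Serve one request; return the fraction of time the DLA was busy."""
    time.sleep(CPU_PREPROCESS_S)   # CPU-bound: the DLA sits idle here
    time.sleep(DLA_INFER_S)        # the only stage the DLA accelerates
    time.sleep(CPU_POSTPROCESS_S)  # CPU-bound again
    total = CPU_PREPROCESS_S + DLA_INFER_S + CPU_POSTPROCESS_S
    return DLA_INFER_S / total

if __name__ == "__main__":
    print(f"DLA busy ~{handle_request():.0%} of each request")  # ~15%
```

Under these assumed numbers the accelerator is busy for only about 15% of each request, so adding more or faster DLAs does little for throughput; the host CPU is the limiting resource.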
NeuReality's answer to this problem is the NR1 AI Inference Solution, a combination of purpose-built software and a unique network-addressable inference server-on-a-chip. NeuReality says this will deliver improved performance and scalability at lower cost, alongside reduced power consumption.
An express lane for large AI pipelines
“Our disruptive AI Inference technology is unbound by conventional CPUs, GPUs, and NICs,” said NeuReality’s CEO Moshe Tanach. “We didn’t try to just improve an already flawed system. Instead, we unpacked and redefined the ideal AI Inference system from top to bottom and end to end, to deliver breakthrough performance, cost savings, and energy efficiency.”
The key to NeuReality's solution is the Network Addressable Processing Unit (NAPU), a new architecture designed to keep DLAs fully utilized. The NeuReality NR1, a network-addressable inference server-on-a-chip, pairs an embedded neural network engine with a NAPU.
This architecture moves inference into hardware, combining AI-over-Fabric networking, an AI hypervisor, and AI-pipeline offload.
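NeuReality has not published a specification for these components, but as a purely illustrative sketch (the stage names and placements below are assumptions, not the company's design), the architectural contrast amounts to moving every stage of the serving pipeline off the host CPU and onto the network-addressable device:

```python
# A purely illustrative mapping of where inference-pipeline stages run in a
# conventional CPU-centric server versus a device that offloads the whole
# pipeline. Stage names are generic assumptions, not a NeuReality spec.
PIPELINE_STAGES = [
    "network termination",
    "request scheduling / hypervisor",
    "media decode & pre-processing",
    "neural-network execution",
    "post-processing & response",
]

CPU_CENTRIC = {stage: "host CPU" for stage in PIPELINE_STAGES}
CPU_CENTRIC["neural-network execution"] = "DLA (PCIe-attached, CPU-mediated)"

NAPU_OFFLOAD = {stage: "NR1 server-on-a-chip" for stage in PIPELINE_STAGES}

if __name__ == "__main__":
    for stage in PIPELINE_STAGES:
        print(f"{stage:34} | {CPU_CENTRIC[stage]:34} | {NAPU_OFFLOAD[stage]}")
```

In this reading, "network addressable" means clients reach the device directly over the fabric, with no host-side server process left in the data path.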
The company has two products built around its server-on-a-chip: the NR1-M AI Inference Module and the NR1-S AI Inference Appliance. The former is a full-height, double-wide PCIe card containing one NR1 NAPU system-on-a-chip and a network-addressable inference server, and it can connect to an external DLA. The latter is an AI-centric inference server populated with NR1-M modules. NeuReality claims the appliance improves cost and power efficiency by up to 50X and doesn't require IT teams to implement it for end users.
“Investing in more and more DLAs, GPUs, LPUs, TPUs… won’t address your core issue of system inefficiency,” said Tanach. “It’s akin to installing a faster engine in your car to navigate through traffic congestion and dead ends – it simply won’t get you to your destination any faster. NeuReality, on the other hand, provides an express lane for large AI pipelines, seamlessly routing tasks to purpose-built AI devices and swiftly delivering responses to your customers, while conserving both resources and capital.”
NeuReality recently secured $20 million in funding from the European Innovation Council (EIC) Fund, Varana Capital, Cleveland Avenue, XT Hi-Tech and OurCrowd.