Why every quantum computer will need a powerful classical computer



[Image: a set of spheres with arrows within them, all pointing in the same direction. Caption: A single logical qubit is built from a large collection of hardware qubits.]

One of the more striking things about quantum computing is that the field, despite not having proven itself especially useful, has already spawned a collection of startups that are focused on building something other than qubits. It might be easy to dismiss this as opportunism—trying to cash in on the hype surrounding quantum computing. But it can be useful to look at the things these startups are targeting, because they can be an indication of hard problems in quantum computing that haven’t yet been solved by any one of the big companies involved in that space—companies like Amazon, Google, IBM, or Intel.

In the case of a UK-based company called Riverlane, the unsolved piece that is being addressed is the huge amount of classical computations that are going to be necessary to make the quantum hardware work. Specifically, it’s targeting the huge amount of data processing that will be needed for a key part of quantum error correction: recognizing when an error has occurred.

Error detection vs. the data

All qubits are fragile, tending to lose their state during operations or simply over time. No matter the technology (cold atoms, superconducting transmons, or anything else), these error rates put a hard limit on the amount of computation that can be done before an error is inevitable. That rules out running almost any useful computation directly on existing hardware qubits.

The generally accepted solution to this is to work with what are called logical qubits. These involve linking multiple hardware qubits together and spreading the quantum information among them. Additional hardware qubits are linked in so that they can be measured to monitor errors affecting the data, allowing them to be corrected. It can take dozens of hardware qubits to make a single logical qubit, meaning even the largest existing systems can only support about 50 robust logical qubits.
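The idea of measuring extra qubits to catch errors without disturbing the data can be illustrated with a classical toy model. The sketch below simulates a three-bit repetition code, the simplest error-correcting code: parity checks between neighboring bits play the role of the syndrome measurements described above. This is an illustration of the general principle, not any specific quantum code used by the companies mentioned.

```python
import random

# Toy classical analog of quantum error correction: one logical bit is
# spread across three data bits, and two parity ("syndrome") checks
# reveal where a single flip occurred without reading the data directly.

def encode(logical_bit):
    """Replicate the logical bit across three data bits."""
    return [logical_bit] * 3

def apply_random_flip(data):
    """Flip one randomly chosen data bit, simulating an error."""
    data[random.randrange(3)] ^= 1

def measure_syndrome(data):
    """Parity checks between neighboring bits: (d0^d1, d1^d2)."""
    return (data[0] ^ data[1], data[1] ^ data[2])

def correct(data, syndrome):
    """Each nonzero syndrome points at the unique single-bit error."""
    location = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome)
    if location is not None:
        data[location] ^= 1  # undo the flip

data = encode(1)
apply_random_flip(data)
correct(data, measure_syndrome(data))
print(data)  # [1, 1, 1]: the logical bit survives the error
```

Real quantum codes work on the same contract (measure checks, infer the error, correct it), but must also handle phase errors and do so without collapsing the quantum state, which is why dozens of hardware qubits per logical qubit are needed.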

Riverlane’s founder and CEO, Steve Brierley, told Ars that error correction doesn’t only stress the qubit hardware; it stresses the classical portion of the system as well. Each of the measurements of the qubits used for monitoring the system needs to be processed to detect and interpret any errors. We’ll need roughly 100 logical qubits to do some of the simplest interesting calculations, meaning monitoring thousands of hardware qubits. Doing more sophisticated calculations may mean thousands of logical qubits.

That error-correction data (termed syndrome data in the field) needs to be read between each operation, which makes for a lot of data. “At scale, we’re talking a hundred terabytes per second,” said Brierley. “At a million physical qubits, we’ll be processing about a hundred terabytes per second, which is Netflix global streaming.”

It also has to be processed in real time; otherwise, computations will be held up waiting for error correction to complete. For transmon-based qubits, syndrome data is generated roughly every microsecond, so real time means completing the processing of that data, possibly terabytes of it, at a frequency of around a megahertz. Riverlane was founded to provide hardware capable of handling that load.
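A back-of-envelope calculation shows how the quoted figures can fit together. The qubit count and measurement cadence come from the article; the bytes-per-measurement figure is an assumption (raw analog readout samples per qubit, before digitization reduces each measurement to roughly one syndrome bit).

```python
# Back-of-envelope check on the data rates quoted above.
physical_qubits = 1_000_000        # "a million physical qubits"
rounds_per_second = 1_000_000      # one syndrome round per microsecond
bytes_per_measurement = 100        # ASSUMED raw readout samples per qubit

raw_rate = physical_qubits * rounds_per_second * bytes_per_measurement
print(f"{raw_rate / 1e12:.0f} TB/s of raw readout data")  # 100 TB/s

# After digitization, each measurement reduces to about one bit:
digital_rate = physical_qubits * rounds_per_second / 8
print(f"{digital_rate / 1e9:.0f} GB/s of syndrome bits")  # 125 GB/s
```

Even the post-digitization stream, hundreds of gigabytes per second, has to be decoded with microsecond-scale latency, which is what motivates dedicated hardware.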

Handling the data

The system the company has developed is described in a paper that it has posted on the arXiv. It’s designed to handle syndrome data after other hardware has already converted the analog signals into digital form. This allows Riverlane’s hardware to sit outside any low-temperature hardware that’s needed for some forms of physical qubits.

That data is run through an algorithm the paper terms a “Collision Clustering decoder,” which handles the error detection. To demonstrate its effectiveness, the company implemented it on a commercial Xilinx field-programmable gate array (FPGA), where it occupies only about 5 percent of the chip yet can handle a logical qubit built from nearly 900 hardware qubits (simulated, in this case).
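To make the decoder's job concrete, here is a minimal sketch of the input/output contract such hardware implements: a round of syndrome bits goes in, and a guessed set of physical errors comes out. This toy decodes a one-dimensional repetition code; it is not the Collision Clustering algorithm from Riverlane's paper, just an illustration of what "decoding" means here.

```python
def decode_repetition(syndrome):
    """Return the minimum-weight set of data-bit flips consistent with
    parity checks s[i] = d[i] XOR d[i+1] on a repetition code.

    A toy stand-in for a real decoder: syndrome bits in, error guess out.
    """
    n = len(syndrome) + 1
    # A prefix-XOR yields one error pattern consistent with the syndrome.
    pattern = [0] * n
    for i, s in enumerate(syndrome):
        pattern[i + 1] = pattern[i] ^ s
    flips = [i for i, e in enumerate(pattern) if e]
    # The complement pattern is also consistent; pick the lighter one.
    if len(flips) > n - len(flips):
        flips = [i for i, e in enumerate(pattern) if not e]
    return flips

# A single error on data bit 1 of five fires the two checks beside it.
print(decode_repetition([1, 1, 0, 0]))  # [1]: flip data bit 1 to correct
```

Real decoders face two-dimensional codes, measurement errors, and hard latency budgets, which is why fitting one into 5 percent of an FPGA, or 8 milliwatts of custom silicon, is notable.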

The company also demonstrated a custom chip that handled an even larger logical qubit, while only occupying a tiny fraction of a square millimeter and consuming just 8 milliwatts of power.

Both of these versions are highly specialized; they simply feed the error information on to other parts of the system to act on. So it is a highly focused solution. But it’s also quite flexible, in that it works with a variety of different error-correction codes. Critically, it also integrates with control systems for qubits based on very different physics, including cold atoms, trapped ions, and transmons.

“I think early on it was a bit of a puzzle,” Brierley said. “You’ve got all these different types of physics, how are we going to do this?” It turned out not to be a major challenge. “One of our engineers was in Oxford working with the superconducting qubits, and in the afternoon he was working with the ion trap qubits. He came back to Cambridge and he was all excited. He was like, ‘they’re using the same control electronics.'” It turns out that, regardless of the physics involved in controlling the qubits, everybody had borrowed the same hardware from a different field (Brierley said it was a Xilinx radiofrequency system-on-a-chip built for 5G base station prototyping). That makes it relatively easy to integrate Riverlane’s custom hardware with a variety of systems.


