The performance of today’s quantum hardware remains modest, and despite the hard work of innovators that is not going to change overnight. Each qubit platform faces its own significant challenges as vendors seek to scale up. But if we look at the details, we can see that underlying progress remains strong across the hardware stack. Progress on error-correcting codes must not be neglected either: innovation there has the potential to disrupt our standard assumptions about what it takes to build a quantum computer.
The strategic hardware playing field
As the quantum computing sector comes off the peak of its first hype cycle, questions about progress are inevitably being asked. This outlook asks specifically how the hardware itself is stepping up to the challenges it faces. To understand this, we need to appreciate the interplay between the performance specs that current hardware can achieve and those required to deliver against the different scenarios that players are targeting.
The NISQ era
Some look for early noisy intermediate-scale quantum (NISQ) devices to step up and support commercially useful applications in the next 2-5 years.
Opinions differ on requirements. Some believe quantum annealing is already starting to do this. GQI believes gate-model protagonists are ideally looking for:
- 100-200+ physical qubits
- 99.99%+ fidelities, especially 2Q gate fidelity
- High qubit connectivity
This would probably be delivered in a single module, so interconnects may not be an issue.
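To see why the fidelity requirement dominates this list, a rough back-of-envelope sketch helps (our own illustration, not a figure from the report): without error correction, the chance that a circuit runs cleanly falls off exponentially with the number of two-qubit gates.

```python
# Back-of-envelope sketch (assumed model, not from the report): approximate the
# probability of an error-free run as fidelity ** (number of 2Q gates).

def circuit_success(two_qubit_fidelity: float, n_qubits: int, depth: int) -> float:
    """Estimate the chance a circuit runs without any two-qubit gate error."""
    n_gates = (n_qubits // 2) * depth  # ~n/2 two-qubit gates per layer
    return two_qubit_fidelity ** n_gates

for f in (0.999, 0.9999):
    p = circuit_success(f, n_qubits=150, depth=100)
    print(f"2Q fidelity {f:.2%}: ~{p:.2%} chance of an error-free run")

# 99.90% fidelity gives ~0.06% success at this size; 99.99% gives ~47%. Only
# the latter leaves headroom for useful circuit depths without error correction.
```

On these assumptions, the jump from 99.9% to 99.99% two-qubit fidelity is the difference between a 150-qubit, depth-100 circuit almost never running cleanly and it succeeding roughly half the time.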
It’s important to realize that there is no guarantee quantum algorithms can deliver on their part of this bargain. Classical algorithms are a fierce competitor. Niche applications certainly look possible, but broad quantum adoption looks much less likely without a further breakthrough.
A low-depth algorithm breakthrough could still yield broad NISQ quantum advantage. However, without further progress on algorithms, the alternative could be a hard quantum winter. Making hardware developers’ financial plans robust to both of these scenarios is a challenge.
Early FTQC
Others focus on the potential of larger machines to deliver fault-tolerant quantum computation (FTQC). Early machines will still be limited.
Much current discussion assumes a ‘standard model’ of how this will be achieved:
- 99.9%+ fidelities, especially 2Q gate fidelities
- Beyond a certain module size, interconnects are essential for scaling
- Sufficiently fast classical control logic
- The 2D surface code (or similar) used to encode physical qubits into logical qubits
With standard error correction techniques, overheads are very high. A system with 1 million physical qubits might offer only 200-300 truly high-performance logical qubits, though better fidelities or better connectivity could radically improve this.
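As a rough illustration of where numbers like this come from, here is a minimal sketch using widely quoted surface code rules of thumb (our own assumptions, not figures from the report): roughly 2d² physical qubits per logical qubit at code distance d, and a logical error rate that falls exponentially as d grows. The tighter 200-300 figure presumably also budgets for routing space and magic state factories, which this sketch ignores.

```python
# Minimal sketch of the 'standard model' overhead arithmetic, using common
# surface code rules of thumb (assumptions, not figures from the report):
# ~2*d^2 physical qubits per logical qubit, and a per-round logical error rate
# of roughly 0.1 * (p / p_th) ** ((d + 1) / 2) with threshold p_th ~ 1%.

def logical_error_rate(p_phys: float, distance: int, p_threshold: float = 1e-2) -> float:
    """Rule-of-thumb logical error rate per code cycle for the surface code."""
    return 0.1 * (p_phys / p_threshold) ** ((distance + 1) / 2)

def logical_qubit_count(n_physical: int, distance: int) -> int:
    """How many logical qubits fit in a given physical qubit budget."""
    return n_physical // (2 * distance ** 2)

p_phys = 1e-3  # assume 99.9% physical gate fidelity
for d in (17, 25, 35):
    print(f"d={d}: logical error ~{logical_error_rate(p_phys, d):.0e} per round, "
          f"{logical_qubit_count(1_000_000, d)} logical qubits per million physical")
```

The same arithmetic shows why better fidelities help so much: pushing physical error rates further below threshold lets the code distance, and hence the overhead per logical qubit, shrink rapidly.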
If we believe that several of today’s vendors can deliver on their announced roadmaps, we will see broad early FTQC in 5-10 years. Many don’t see things playing out so smoothly. Scaling up is hard, and some believe only their approach has what it takes to deliver ‘eye of the needle’ FTQC.
The longer term
The longer term landscape will be shaped by developments that are even more uncertain.
Goliath FTQC – the most direct path forward is simply brute-force scaling of current hardware architectures. In the end we want 10,000+ logical qubits. Large device footprints could lead to truly immense machines, putting pressure on affordability.
Distributed FTQC – the emergence of quantum-enabled networks provides new niche opportunities. In the end, an entanglement-based quantum internet could allow an exponential multiplication of quantum resources.
Turbo FTQC – to make a fuller range of quantum speedups viable, we would ideally like a new architecture with faster logical clock speeds, low overheads, and capabilities such as a hardware-efficient implementation of QRAM. No one is yet building such an architecture, but it is reasonable to believe that someday someone will.
Not all applications require lots of qubits. ‘Few qubit’ quantum computing (FQQC) devices may find a variety of uses, particularly in network-related applications.
Cryptographic applications may provide an early opportunity in quantum-enabled networks. Some see helping to join up a future entanglement-based quantum internet as the biggest opportunity of all.
This report is a must-read for providers, users, investors, or anyone interested in quantum computing and what’s in store for it in the future.
The report runs to 59 pages and covers the quantum hardware landscape in greater depth than any other source. Our analysis is product- and technology-driven, often with significant scientific detail, while also offering a good overview for the quantum novice of the major approaches and their differentiators.
All the major vendors and QC architectures are discussed in detail including but not limited to: | Alibaba | Alice & Bob | AQT | Archer Materials | Atom Computing | AWS | Bleximo | C12 | ColdQuanta | D-Wave | Diraq | Duality Quantum Photonics | EeroQ | EleQtron | Entangled Networks | Fujitsu | Google | Hitachi | IBM | Infleqtion | Intel | IonQ | IQM | Jiuzhang Quantum | Keysight Technologies | M Squared | Microsoft | Nu Quantum | OQC | ORCA Computing | OriginQ | Oxford Ionics | Pasqal | Photonic Inc. | PsiQuantum | Qblox | QMICS | QphoX | Quandela | Quantinuum | Quantum Brilliance | Quantum Machines | Quantum Motion | QUDORA | QuEra Computing | QuiX | QuTech | Rigetti | Riverlane | SEEQC | Siquance | SpinQ | SQC | Toshiba | TuringQ | Universal Quantum | USTC | Xanadu | Zurich Instruments
Outlook Report | Quantum Computing Hardware '23
The GQI Outlook series is the predominant and authoritative annual overview of the quantum tech space.
Produced across quantum applications, the Outlooks analyze computing hardware and software, algorithms, and other areas, with an in-depth look at the technologies, products, and vendors - and the current state of the science driving them.
Anyone interested in quantum tech will not find a more detailed or valuable analysis of the state of the field than the GQI Outlooks.