4/22/2025 at 9:17:46 AM
To people who do quantum computing: are qubits (after error correction) functionally equivalent and hence directly comparable across quantum computers, and is it a useful objective measure to compare progress? Or is it more of an easy-to-mediatise stat?
by rtrgrd
4/22/2025 at 9:54:50 AM
Pretty much everything you read (especially when aimed at a non-expert audience) about quantum computing in the media is "easy-to-mediatize" information only.

People building these things are trying to oversell their achievements while carefully avoiding making them easy to check, reproduce, or objectively compare to others. It's hard to objectively evaluate even for people who work in the field but haven't worked on the exact technology platform reported on. Metrics are tailored to marketing goals; for example, IBM made up a performance metric called "quantum volume", only to basically stop using it when it seemed to no longer favour them.
That being said, it's also undeniable that quantum computing is making significant progress, error correction being a major milestone. What this ends up actually being used for, if anything, remains to be seen (I'm rather sure we'll find something).
by staunton
4/22/2025 at 2:30:48 PM
I worked on a quantum computer for several years and can speak to this a bit: sorta. They're functionally equivalent in the sense that you can usually do the same computations, but there are a ton of details that shape how each particular modality behaves. Things like gate fidelities (how good the gates are), how fast the gates can "execute", how long it takes to initialize the quantum state so you can execute gates, how long the decoherence times are (how long before the quantum state is lost), and many (many) other differences. Some modalities even have restrictions on which qubits can interact with which other qubits, which will, among other things, impact algorithm design.
by packetlost
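To get a feel for how a couple of these figures interact, here's a rough back-of-the-envelope sketch: the coherence time divided by the gate time bounds how many gates you can run. The platform labels and numbers below are made-up placeholders for illustration, not real specs.

    # Crude "ops budget": how many gates fit inside the coherence window.
    # All numbers below are illustrative placeholders, not measurements.
    platforms = {
        "fast-gate, short-coherence": {"gate_time_s": 50e-9, "coherence_s": 100e-6},
        "slow-gate, long-coherence":  {"gate_time_s": 10e-6, "coherence_s": 1.0},
    }

    for name, p in platforms.items():
        budget = p["coherence_s"] / p["gate_time_s"]
        print(f"{name}: ~{budget:,.0f} gates before the state is lost")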
4/22/2025 at 4:04:41 PM
Also, the error correction strategies that are available to different architectures/platforms can make a huge difference in the future.
by ziofill
4/22/2025 at 4:06:40 PM
Yup! Though error correction was not something I spent a lot of time on. I worked primarily on the "quantum OS" (really, just AMO control systems), so I wasn't thinking much about the theoretical side of things.
by packetlost
4/22/2025 at 5:05:51 PM
No, they are not comparable. There are gate-model quantum computers like the one described in the article, and there are quantum annealers from companies like D-Wave that are geared toward solving a type of problem called QUBO: https://en.wikipedia.org/wiki/Quadratic_unconstrained_binary...

The latter has released quantum computers with thousands of qubits, but these qubits are not comparable with the physical qubits in a gate-model computer (and especially not with logical qubits from one).
by joshuaissac
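For concreteness, the QUBO problem mentioned above just means minimizing x^T Q x over bit vectors x. A toy brute-force version looks like this (the Q matrix is arbitrary, chosen only to have a non-trivial minimum); an annealer explores the same search space physically rather than by enumeration.

    import itertools
    import numpy as np

    # Toy QUBO instance: minimize x^T Q x over binary vectors x.
    Q = np.array([[-1.0,  2.0,  0.0],
                  [ 0.0, -1.0,  2.0],
                  [ 0.0,  0.0, -1.0]])

    best = min(itertools.product([0, 1], repeat=3),
               key=lambda bits: np.array(bits) @ Q @ np.array(bits))
    print(best)  # -> (1, 0, 1) for this particular Q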
4/22/2025 at 9:30:44 AM
As far as I understand quantum error correction, the number of physical qubits might vary between systems with the same number of logical qubits. So, if you care about the overhead, they might not be equivalent in practice.
by k__
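As a rough sketch of that overhead, take the rotated surface code (one common scheme), where a distance-d logical qubit uses about 2d^2 - 1 physical qubits (d^2 data qubits plus d^2 - 1 ancillas); other codes and platforms land elsewhere.

    # Physical-per-logical overhead for a rotated surface code of distance d.
    for d in (3, 5, 11, 25):
        physical = 2 * d**2 - 1   # d**2 data qubits + d**2 - 1 ancilla qubits
        print(f"distance {d}: ~{physical} physical qubits per logical qubit")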
4/22/2025 at 2:49:57 PM
The idea is to implement something like the quantum equivalent of a Turing machine. What one universal quantum computer can do, another can. So yeah. However, connectivity and gate/measurement time will set some aspects of the performance, but not the asymptotics.
by gaze
4/22/2025 at 1:49:04 PM
It matters, but it's not functionally equivalent between different architectures.

Since no one has many qubits, physical qubits are typically compared, as opposed to virtual qubits (the error-corrected ones).
The other key figures of merit are the 1-qubit and 2-qubit gate fidelities (basically the success rates). The 2-qubit gate is typically more difficult and has a lower fidelity, so people often compare qubits by looking only at the 2-qubit gate fidelity. Every 9 added to the 2-qubit gate fidelity is expected to roughly decrease the ratio of physical to virtual qubits by an order of magnitude.
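A sketch of that scaling, assuming the common surface-code heuristic p_L ~ (p / p_th)^((d+1)/2), with an illustrative threshold p_th = 1% and a target logical error rate of 1e-12 (the exact improvement per added 9 depends on how far below threshold you are):

    import math

    p_th, target = 1e-2, 1e-12    # assumed threshold and target, illustrative

    for p in (1e-3, 1e-4, 1e-5):  # 99.9%, 99.99%, 99.999% 2-qubit fidelity
        # Smallest d with (p / p_th)**((d + 1) / 2) <= target.
        d = math.ceil(2 * math.log(target) / math.log(p / p_th) - 1)
        d += (d + 1) % 2          # code distance is conventionally odd
        print(f"p={p:.0e}: distance {d}, ~{2 * d**2 - 1} physical per logical")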
In architectures where qubits are fixed in place and can only talk to their nearest neighbours, moving information around requires swap gates, which are made up of the elementary 1- and 2-qubit gates. Some architectures have mobile qubits and all-to-all connectivity, so their proponents hope to avoid swap gates, considerably reducing the number of 2-qubit gates required to run an algorithm and thus resulting in fewer errors to deal with.
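One common (naive) way to count that cost, using the standard decomposition of a SWAP into 3 CNOTs; real compilers can often do better:

    # 2-qubit gate cost of a CNOT between qubit 0 and qubit k on a
    # nearest-neighbour 1D chain, restoring the layout afterwards.
    def chain_cnot_cost(k: int) -> int:
        swaps = 2 * (k - 1)       # swap there, then swap back
        return 3 * swaps + 1      # 3 CNOTs per SWAP, plus the CNOT itself

    for k in (1, 5, 20):
        print(f"separation {k}: all-to-all = 1 gate, chain = {chain_cnot_cost(k)}")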
Some companies, particularly ones on younger architectures, but perhaps with much better gate fidelities, argue that their scheme is better by virtue of being more "scalable" (having more potential in future).
It is expected that in the future, the overall clock speed of the quantum computer will matter, as the circuits we ultimately want to run are expected to be massively long. Since we're far away from the point where this matters, clock speed is rarely brought up.
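A quick illustration of why it will eventually matter; both the circuit depth and the cycle times below are made-up placeholders:

    # Wall-clock time for a deep fault-tolerant circuit at two clock speeds.
    depth = 1e12                                  # logical ops, illustrative
    for name, cycle_s in (("1 MHz clock", 1e-6), ("1 kHz clock", 1e-3)):
        days = depth * cycle_s / 86_400
        print(f"{name}: ~{days:,.0f} days")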
In general, different architectures have different advantages. With different proponents having different beliefs about what matters, it was once described to me as each architecture having its own religion.
TL;DR: the two key stats are number of qubits and 2-qubit gate fidelity.
by JBits
4/22/2025 at 9:14:06 PM
They should declare a standard unit called Q*bert.
by kazinator
4/23/2025 at 7:37:35 AM
Q sub BASIC?

Does HN do subscript? I don't think I've seen it. I'm unsure if markdown supports it. I'd probably use subscripts a lot more if markdown had them.
by genewitch