Quantum volume (QV) is a benchmark for assessing generic quantum circuit performance. The protocol applies random "square" circuits, whose depth equals their width (number of qubits), and checks whether the device produces heavy outputs sufficiently often. A heavy output is a bit string whose ideal (noiseless) probability exceeds the median probability of the ideal output distribution; the test at a given width passes if the measured fraction of heavy outputs exceeds 2/3, well above the 1/2 achieved by uniform guessing. There is complexity-theoretic evidence that classical algorithms of reasonable (polynomial) cost cannot efficiently generate heavy outputs of such random circuits. Quantum volume can therefore be used to probe the potential for a quantum advantage at a given system size.
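The heavy-output check can be sketched in a few lines. This is a minimal illustration, assuming we already have the ideal (classically simulated) output distribution and the measured shot counts; the function names and the toy numbers are made up for the example, not a real QV experiment.

```python
import statistics

def heavy_outputs(ideal_probs):
    """Bit strings whose ideal probability exceeds the median ideal probability."""
    med = statistics.median(ideal_probs.values())
    return {s for s, p in ideal_probs.items() if p > med}

def heavy_output_probability(ideal_probs, measured_counts):
    """Fraction of measured shots that landed on heavy outputs."""
    heavy = heavy_outputs(ideal_probs)
    shots = sum(measured_counts.values())
    return sum(c for s, c in measured_counts.items() if s in heavy) / shots

# Toy 2-qubit example with illustrative numbers:
ideal = {"00": 0.45, "01": 0.30, "10": 0.15, "11": 0.10}  # ideal distribution
counts = {"00": 420, "01": 280, "10": 180, "11": 120}     # measured shot counts
hop = heavy_output_probability(ideal, counts)
print(hop, hop > 2 / 3)  # 0.7 True
```

Here the median ideal probability is 0.225, so "00" and "01" are the heavy outputs; 70% of the shots land on them, which clears the 2/3 threshold.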
Noise destroys a quantum circuit's ability to generate heavy output distributions, and a sufficiently noisy circuit becomes efficiently classically simulable. The quantum volume is set by the largest circuit, in width and depth, for which the output distribution remains heavy. Although too much noise saturates the benchmark, QV does encompass many individual benchmarking metrics, such as gate fidelity, qubit connectivity, compiler efficiency, and measurement fidelity: if all of these are good enough, the device will produce heavy output distributions. It is therefore widely used as a generic performance benchmark, although it does not give detailed information about any single error source.
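Once the heavy-output test has been run at several widths, the reported quantum volume follows directly, conventionally as 2 to the power of the largest passing width. A small sketch, with hypothetical pass/fail results standing in for repeated experiments:

```python
# Hypothetical outcomes of the heavy-output test per circuit width m
# (square circuits, depth == width); values here are illustrative only.
passed = {2: True, 3: True, 4: True, 5: False, 6: False}

# Quantum volume is conventionally 2**m for the largest passing width m.
qv = 2 ** max(m for m, ok in passed.items() if ok)
print(qv)  # 16
```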
Frequently asked questions
- Does having a large quantum volume (QV) guarantee a quantum advantage for the algorithm I am running? Not necessarily. QV is designed to assess the performance of unstructured noisy circuits. Although this is a very good indicator of "average" performance, your circuit might be a highly structured noisy circuit whose statistics deviate significantly from this average.
- Is QV hardware agnostic? Yes and no. Yes, you can run QV tests on any quantum hardware, but for some hardware these tests are not a fair reflection of performance. For example, on photonic quantum hardware QV adds significant overhead and does not capture information about so-called native photonic algorithms, which are among the main algorithms currently being executed on photonic devices.