Verifying quantum computations at scale: A cryptographic leash on quantum devices

By Thomas Vidick

Abstract

Rapid technological advances point to a near future where engineered devices based on the laws of quantum mechanics are able to implement computations that can no longer be emulated on a classical computer. Once that stage is reached, will it be possible to verify the results of the quantum device?

Recently, Mahadev introduced a solution to the following problem: Is it possible to delegate a quantum computation to a quantum device in a way that the final outcome of the computation can be verified on a classical computer, given that the device may be faulty or adversarial and given only the ability to generate classical instructions and obtain classical readout information in return?

Mahadev’s solution combines the framework of interactive proof systems from complexity theory with an ingenious use of classical cryptographic techniques to tie a “cryptographic leash” around the quantum device. In these notes I give a self-contained introduction to her elegant solution, explaining the required concepts from complexity, quantum computing, and cryptography, and how they are brought together in Mahadev’s protocol for classical verification of quantum computations.

Quantum mechanics has been a source of endless fascination throughout the 20th century—and continues to be in the 21st. Two of the most thought-provoking aspects of the theory are the exponential scaling of parameter space (a pure state of $n$ qubits requires $2^n$ complex parameters to be fully specified), and the uncertainty principle (measurements represented by noncommuting observables cannot be performed simultaneously without perturbing the state). The conceptual difficulty of the problem of verification of quantum computations stems from both aspects. Suppose we are given the description of an experiment that can be modeled in quantum mechanics—say, a number of individual photons are emitted by lasers in a certain configuration, then made to interact according to optical equipment such as mirrors and beam-splitters, and finally some observation is made, for example counting the number of photons that hit a strategically located detector within a certain time period. Quantum mechanics provides a set of formal rules that, in principle, allow for the computation of the distribution of possible outcomes obtained in this experiment—what is the probability that any number of photons hit the detector within the prescribed time frame? These rules yield extremely precise predictions that have been verified in countless experiments. In general, however, computing the prediction requires a number of operations that scales exponentially with $n$, the total number of photons in the experiment. What this means in practice is that as soon as $n$ exceeds a few dozen, it becomes all but infeasible, using even the most powerful supercomputers available today, to predict the outcome of any nontrivial quantum experiment.

In addition to the intrinsic exponential scaling of quantum mechanics, other quantum phenomena, such as the uncertainty principle, place fundamental limits on our ability to verify that a quantum mechanical evolution proceeds as expected. Any attempt by the experimentalist at making intermediate observations on the state of a subset of the elementary particles involved in her experiment risks altering its outcome, so that it is not at all clear whether the results of low-level benchmarks can be meaningfully pieced together to certify the final result.

These obstructions should come as no surprise. Indeed it is the same difficulty, of classical simulation of quantum evolutions, that prompted Feynman to bring forward the idea of a quantum computer in the first place: a computer that by its very nature would have the ability to efficiently simulate any quantum process Reference 15. While such a universal quantum simulator remains a distant technological challenge, smaller-scale quantum devices have begun to appear that will soon have the capacity to simulate the evolution of specific quantum-mechanical systems, enabling physicists to, e.g., make predictions regarding properties of new materials (see, e.g., Reference 7). Such simulators will become interesting the day when they are able to generate predictions that could not have been obtained on a classical computer. Assuming that such a classically unsimulatable quantum simulator exists, can one check that the simulator accomplishes the task it was asked to perform—given that the task could not have been accomplished on a classical computer? If the simulator makes a wrong prediction—be it due to a fault in its implementation, or even due to malice on the implementer’s part—is there any way that the error can be detected without having to rely on yet another quantum simulator to duplicate the first simulator’s results?

Not all is lost. There obviously are some quantum computations whose outcome can be easily checked. A standard example is factoring: having run Shor’s quantum algorithm for finding a prime factor $q$ of an integer $N$, it is easy to execute Euclid’s algorithm to check that $q$ divides $N$. In the language of complexity theory, the complexity class⁠Footnote1 associated with the set of languages that can be decided efficiently with the help of a classical (probabilistic) computer is denoted BPP (for bounded-error probabilistic polynomial-time), while with a quantum computer one gets BQP (for bounded-error quantum polynomial-time). Factoring is an example of a problem that lies in the class BQP, but is not known to lie in the class BPP.⁠Footnote2

1

A complexity class is a collection of languages, and a language is a set of strings; for example, the set of all binary representations of graphs that are $3$-colorable is a language. See Section 1 for more background on complexity theory.

2

Technically, one would have to associate a language (i.e., a set of bitstrings) to factoring. An example is binary representations of a pair of integers $(N, k)$ such that $N$ has a prime factor $p$ such that $p \leq k$.
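To illustrate how cheap such a check is, here is a minimal Python sketch (the numbers are invented for illustration): given an integer $N$ and a prime factor $q$ claimed by a quantum device, a classical verifier checks the claim in time polynomial in the bitlength of $N$.

    from math import gcd

    def verify_factor(N: int, q: int) -> bool:
        """Classically verify that q is a nontrivial factor of N."""
        return 1 < q < N and N % q == 0

    # Invented example: N = 4099 * 8191, and a device claims the factor 4099.
    N, q = 33574909, 4099
    print(verify_factor(N, q))   # True iff the claim checks out
    print(gcd(N, q) == q)        # the same check via Euclid's algorithm

(Checking that the claimed factor is prime is also classically efficient, though we omit it here.)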

As emphasized above, the factoring problem has the additional property that solutions to it are easily verifiable. Recognizing that many algorithmically difficult problems seemed to share the feature of having easily verifiable solutions (other examples are the problem of deciding if two graphs are isomorphic and the problem of deciding if a graph is $3$-colorable), Karp introduced in 1972 the complexity class NP (for nondeterministic polynomial-time). Informally, NP is the class of all languages that can be efficiently verified on a classical computer, given the right witness, or proof. If every problem in BQP (i.e., every problem that can be solved efficiently on a quantum computer) had this property—that the correct outcome of the computation can be efficiently verified on a classical computer—then the question of classical verifiability of quantum computation would be moot. However, there are very good reasons to think that this is not the case.

Indeed, complexity theorists strongly believe that there are problems in BQP that do not lie in NP.⁠Footnote3 One candidate for such a problem is the forrelation problem introduced in Reference 1. The input to this problem is a pair of Boolean functions $f, g : \{0,1\}^n \to \{\pm 1\}$, and it is promised that $f$ and the Fourier transform of $g$ are either highly correlated or barely correlated.⁠Footnote4 The goal is to determine which of the two cases holds. There is an easy quantum algorithm for this problem that requires a single evaluation of the functions $f$ and $g$, whereas it was shown in Reference 27 that, if access to $f$ and $g$ is restricted to evaluation queries, then the problem lies beyond even the polynomial hierarchy, which is a vast extension of the class NP. The open challenge is to find an explicit family of functions $(f, g)$ for which the problem remains hard, even when given the explicit description of the functions (such as an evaluation circuit) as input.

3

They also believe that there are problems in NP, such as the traveling salesman problem or any NP-complete problem, that are not in BQP, so that the two classes are incomparable.

4

To be precise, define the forrelation quantity $\Phi_{f,g} = 2^{-3n/2} \sum_{x, y \in \{0,1\}^n} f(x) (-1)^{x \cdot y} g(y)$; the promise is that either $\Phi_{f,g} \geq 3/5$ or $|\Phi_{f,g}| \leq 1/100$.

Figure 1.

Known inclusions between complexity classes. Note that NP is not known to contain BPP because in NP, the verification procedure is assumed deterministic. With a randomized verifier, one obtains the class MA (for Merlin–Arthur) that does contain BPP.


There exists a wide range of problems that are known to be efficiently solvable on a quantum computer, but for which there is no obvious efficiently verifiable proof (such as the forrelation problem). Is it possible to design theoretical experiments, or protocols, based on quantum mechanics, for such problems such that the proper execution of the experiment, or computation, can be certified correct using a verification procedure that does not itself rely on the manipulation of quantum information?

In these notes I present an elegant solution to this problem due to Mahadev Reference 24. Before presenting the solution, a first step is to formalize the question. This is done in two steps. First, in Section 1 I introduce the complexity-theoretic formalism of interactive proofs and arguments that forms the conceptual backdrop against which Mahadev’s result is formulated. Second, in Section 2 I give standard background in quantum computing that allows me to formalize the complexity class BQP and state Mahadev’s theorem. The remainder of the document is devoted to a presentation of the main ideas that constitute the proof of Mahadev’s result. Very roughly, the proof proceeds through the following steps.

An arbitrary quantum computation can be specified classically by providing the gate-by-gate description of a quantum circuit $C$, as well as a classical description of the input on which the circuit is to be evaluated (generally, this will be a zero state representing a default initialization of all wires of the circuit). The first step is to reduce the problem of evaluating the output of such a circuit to the problem of verifying that a local Hamiltonian $H$, that can be efficiently computed from $C$ classically, has a smallest eigenvalue that is below a certain threshold. Informally, this step is the quantum analogue of encoding the tableau of a classical computation into an instance of the Ising spin problem. This step is described in Section 3.

The previous step has reduced the verification problem to the problem of classically certifying that a local Hamiltonian has an eigenstate with small enough eigenvalue. Section 4 introduces the idea of a commitment, which is an interactive procedure by which a party can commit to a certain value $v$, without revealing information about $v$, yet in a way that is binding: the party is unable to later “change its mind” with respect to $v$ and claim that it in fact had committed to some $v' \neq v$. The notion of a commitment protocol for quantum states is the key ingredient in Mahadev’s protocol and is presented in Section 5.

Armed with the notion of a quantum commitment scheme, we describe the verification protocol itself in Section 6. Informally, in the protocol the prover first provides classical information that is meant to indicate the prover’s commitment to being in possession of a unique, well-defined quantum state $\rho$. Then, the verifier challenges the prover to reveal measurement outcomes performed on the state that it committed to, in a way that allows the verifier to estimate the associated energy and verify that it is sufficiently small.

The security of Mahadev’s quantum commitment scheme, on which her result ultimately rests, relies on a computational assumption regarding the impossibility of efficiently solving a problem known as the Learning with Errors (LWE) problem. This problem is described in Section 7.

These notes are written in a way that aims to make the most important insights of Mahadev’s work accessible to any mathematician, with or without background in complexity theory and quantum information. As a result, the presentation will remain rather informal, and we alert the reader whenever the discussion takes an important shortcut.

To keep the presentation focused, we do not survey prior works and other approaches to verification in any depth. The question has a long history that, prior to Mahadev’s work, had resulted in partial answers, obtained in a variety of models of verification. Some of the most important results include the concurrent works of Aharonov et al. Reference 2 and Broadbent et al. Reference 12, which showed how to achieve verification in a model where the verification procedure can make use of a small, trusted quantum computer, and the work of Reichardt et al. Reference 29 in a model where the verification procedure is entirely classical but has access to two spatially isolated quantum computers, sharing entanglement, that jointly implement the quantum computation it aims to verify. In contrast to Mahadev’s result presented here, these works achieve information-theoretic (instead of computational) guarantees in their respective models. For an in-depth discussion of these and related works, I recommend the recent survey by Gheorghiu et al. Reference 17.

1. Interactive proofs and arguments

The concept of an interactive proof system can be difficult to digest for the mathematician, in part because it involves some amount of personification. When they talk of an interactive protocol, computer scientists are accustomed to referring to imaginary beings known as the verifier and the prover. Even worse, these imaginary beings are endowed with intentions: the prover is trying to demonstrate something to the verifier, while the verifier attempts to catch any cheating behavior from the prover. This is not the kind of language that is frequently used in, say, the theory of differential equations or operator spaces. Please bear with me—interactive proofs are one of the most powerful ideas to have emerged out of complexity theory since the 1990s, and they are a key element of Mahadev’s solution.

It all starts with the complexity class NP, whose study originates in the works of Cook, Karp, and Levin in the 1970s. A complexity class is a collection of languages. A (promise) language is a pair of disjoint subsets $L = (L_{\mathrm{yes}}, L_{\mathrm{no}})$ of $\{0,1\}^*$, where $\{0,1\}^*$ is the set of all bit strings, which are sequences of arbitrary but finite length over the alphabet $\{0,1\}$. For example, $L_{\mathrm{yes}}$ could be the set of (suitably encoded) $3$-SAT formulas that admit a satisfying assignment,⁠Footnote5 and $L_{\mathrm{no}}$ could be the set of formulas that are not satisfiable. The language $L$ is called $3$-SAT. Informally, a language is in the class NP if valid statements have an efficiently verifiable proof, and invalid statements have no valid proof. To formalize the notion of valid proof, we introduce the notion of verifier, represented by a polynomial-time Turing machine $V$ (for our purposes, the reader may replace the intimidating notion of Turing machine by any intuitive notion of efficient computation, such as an algorithm) whose goal is to verify claimed proofs. Thus a language $L = (L_{\mathrm{yes}}, L_{\mathrm{no}})$ is in the class NP if there exists a polynomial $p$ and a Turing machine $V$ such that for all $x \in \{0,1\}^*$,

5

A $3$-SAT formula is an AND of $3$-variable ORs, i.e., a Boolean formula of the form $\varphi = (x_1 \vee \bar{x}_2 \vee x_3) \wedge (\bar{x}_1 \vee x_4 \vee \bar{x}_5) \wedge \cdots$, where the variables $x_i$ take Boolean values and $\bar{x}_i$ denotes variable negation.

(i)

if $x \in L_{\mathrm{yes}}$, then there exists a $w \in \{0,1\}^{p(|x|)}$, the witness, or proof, such that $V(x, w)$ halts in at most $p(|x|)$ steps, where $|x|$ denotes the length of $x$, and returns $1$ (for “accept”); and

(ii)

if $x \in L_{\mathrm{no}}$, then for any $w \in \{0,1\}^{p(|x|)}$, $V(x, w)$ halts in at most $p(|x|)$ steps and returns $0$ (for “reject”).

Property (i) is usually referred to as the completeness condition and (ii) as the soundness condition. The fact that $3$-SAT is in NP follows since given as input a $3$-SAT formula $\varphi$, if the formula is satisfiable, then there exists a witness $w$ (a satisfying assignment for $\varphi$) that proves this, whereas if the formula is not satisfiable, it is easy for $V$ to check that any given purported assignment is indeed invalid.
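To make the verifier $V$ concrete, here is a minimal Python sketch of the NP verification procedure for $3$-SAT (the formula and assignment are invented examples; a literal is encoded as a pair of a variable index and a negation flag):

    # (x0 or x1 or not x2) and (not x0 or x1 or x2), as lists of literals
    phi = [[(0, False), (1, False), (2, True)],
           [(0, True), (1, False), (2, False)]]

    def V(phi, w):
        # The NP verifier for 3-SAT: accept iff the assignment w satisfies
        # every clause. Runs in time linear in the size of the formula.
        return all(any(w[v] != neg for v, neg in clause) for clause in phi)

    print(V(phi, [True, False, True]))   # True: w is a satisfying witness

Note the asymmetry: finding a satisfying $w$ may require exhaustive search, but checking a given $w$ is immediate.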

Informally, NP captures the collection of problems that have efficiently verifiable solutions, such as factoring, $3$-SAT, the traveling salesman problem, or mathematical theorems (that have reasonably short proofs within a prespecified axiomatic system).

1.1. Interactive proof systems

A key insight from research in complexity theory in the 1990s is that the collection of languages that admit efficient verification can be substantially extended beyond the class NP by allowing interactive protocols for verification Reference 5, Reference 18. Consider a verification procedure $V$, referred to as the verifier (personification, here it comes…), that is allowed to “ask questions” to another entity, the prover, about a claimed proof, as illustrated in Figure 2. Can this allow verification of languages other than the languages in NP, which admit static proofs?

The verifier in an interactive proof system is modeled as a randomized polynomial-time procedure. Informally, this refers to any algorithm that makes a total number of steps that is bounded by a polynomial function (that may depend on the algorithm) of the bitlength of its input and that has access to a source of uniformly random bits. Formally, the verifier is modeled by a randomized interactive Turing machine, but we will not need to descend to that level of detail. It is important that the verifier be randomized: indeed, interactive proof systems with deterministic verifiers reduce to NP because, in the case of deterministic messages from the verifier, there is no advantage to interaction (the prover can send the entire transcript as “proof”). A randomized verifier may sometimes make an erroneous decision, as long as for every input the probability of making an error in deciding that input is small: this ensures that repeating the verification procedure sufficiently many times and taking the majority decision yields an outcome that is erroneous with arbitrarily small probability.

Definition 1.1.

A language $L = (L_{\mathrm{yes}}, L_{\mathrm{no}})$ has an interactive proof system if there exists a randomized polynomial-time verifier $V$ such that the following hold.

Completeness. For any $x \in L_{\mathrm{yes}}$, there exists a prover $P$, referred to as the honest prover, such that $V$ interacting with $P$ accepts on input $x$ with probability at least $2/3$.

Soundness. For any $x \in L_{\mathrm{no}}$ and any prover $P$, $V$ interacting with $P$ accepts on input $x$ with probability at most $1/3$.

The collection of languages for which there exists an interactive proof system is denoted IP.

Note the asymmetry between the completeness and soundness conditions in the definition, which reflects a similar asymmetry in the definition of NP: it is always assumed that the prover aims to convince the verifier that the input is in the language (e.g., for the case of NP, that a 3-SAT formula is satisfiable). The verifier, in turn, aims to make the right decision by taking information from the prover but “verifying” it before making a decision, so as not to wrongly accept a negative instance.

To gain intuition as to how interaction may help, consider the following toy problem. The input is interpreted as a bivariate polynomial $g$ defined over a prime field $\mathbb{F}_q$, such that $g$ has degree at most $d$ in each of its variables. Think of $q$ as much larger than $d$: the input size, measured in number of bits, is roughly $d^2 \log q$ (a list of coefficients), and $q$ could be exponentially larger. Let $L_{\mathrm{yes}}$ be the set of all such polynomials such that $\sum_{a, b \in \mathbb{F}_q} g(a, b) = 0$, and let $L_{\mathrm{no}}$ be the set of polynomials such that $\sum_{a, b \in \mathbb{F}_q} g(a, b) \neq 0$. Computing $\sum_{a, b} g(a, b)$ naïvely takes time $O(q^2)$, where the $O(\cdot)$ hides a multiplicative factor of order the time it takes to evaluate $g$ at any given point, i.e., a polynomial in $d$ and $\log q$. Consider the following protocol, using which the verifier can be led to making the right decision while having to invest a computational effort of $O(q)$ only. The first step in the protocol is a message from the prover to the verifier, which consists in the claimed value of $\sum_{a, b} g(a, b)$ (here, the value $0$), together with a supporting statement in the form of a degree-$d$ polynomial $h$ such that $\sum_{a \in \mathbb{F}_q} h(a) = 0$. (The honest prover can find such an $h$ without difficulty by setting $h(X) = \sum_{b \in \mathbb{F}_q} g(X, b)$.) Upon receipt of an arbitrary $h$, the verifier first checks the equality $\sum_{a \in \mathbb{F}_q} h(a) = 0$, which takes time $O(q)$. Next, she selects a uniformly random $a^* \in \mathbb{F}_q$ and checks that $h(a^*) = \sum_{b \in \mathbb{F}_q} g(a^*, b)$, which again requires time $O(q)$. Note that, if it was the case that $h \neq \sum_b g(\cdot, b)$, by the Schwartz–Zippel lemma the probability that $h(a^*) = \sum_b g(a^*, b)$ would be at most $d/q$, which is small provided $q$ is much larger than $d$, as we assumed. Thus the verifier makes the right decision with high probability, on any input $g$.

This protocol reduces the verifier’s effort from order $q^2$ to order $q$, since the verifier only needs to perform summations over a single variable at a time, instead of both variables for the naïve computation. The attentive reader will have realized that the protocol is in fact noninteractive—there is a single message from the prover to the verifier. Applying the same idea to polynomials with more variables yields an interactive protocol, in which variables are randomly fixed by the verifier one at a time, with exponential savings in the amount of time required for verification, from order $q^m$ to order $qm$, with $m$ representing the number of variables. This idea, of efficiently verifying claims about the sum of the values taken by a multivariate polynomial, is central to the proof of a celebrated result in complexity theory, namely the inclusion of PSPACE (the class of languages that can be decided using polynomial space but arbitrary time) in IP Reference 23, Reference 30. While the state of the art in complexity theory does not allow one to prove that PSPACE is a strictly larger class than NP, this is generally believed to be the case, so that interaction seems to significantly broaden the class of languages that have efficient verifiers.
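The following Python sketch implements the one-message protocol just described (the field size, degree bound, and polynomial $g$ are invented illustrative choices). The polynomial $h$ is communicated by its values at $d+1$ points, which fixes its degree, and the verifier evaluates it elsewhere by Lagrange interpolation.

    import random

    q = 101                            # a small prime field F_q (illustrative)
    d = 3                              # degree bound in each variable

    def g(x, y):                       # the input polynomial (illustrative)
        return (x**3 * y + 2 * x * y**2 + 5) % q

    def interpolate(points, x):
        # Evaluate at x the unique polynomial of degree <= d through the
        # given d+1 points, by Lagrange interpolation over F_q.
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % q
                    den = den * (xi - xj) % q
            total = (total + yi * num * pow(den, q - 2, q)) % q
        return total

    # Honest prover: h(X) = sum_b g(X, b), sent as its values at 0, 1, ..., d.
    h = [(x, sum(g(x, b) for b in range(q)) % q) for x in range(d + 1)]

    # Verifier check 1: sum_a h(a) = 0, in time O(q).
    ok1 = sum(interpolate(h, a) for a in range(q)) % q == 0
    # Verifier check 2: h agrees with sum_b g(., b) at one random point;
    # by Schwartz-Zippel, a false h passes with probability at most d/q.
    a = random.randrange(q)
    ok2 = interpolate(h, a) == sum(g(a, b) for b in range(q)) % q
    print(ok1 and ok2)                 # True for the honest prover

A dishonest $h$ that differs from $\sum_b g(\cdot, b)$ agrees with it on at most $d$ of the $q$ possible challenge points, which is where the $d/q$ soundness error comes from.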

In fact, the possibility of interaction sufficiently strengthens the verifier so as to allow it to verify a class of languages that includes the class BQP of languages that can be efficiently decided on a quantum computer! Using the idea of Feynman path integrals, the probability that a quantum circuit returns the outcome $1$ can be expressed as the sum of exponentially many complex numbers, each of which can be directly computed by taking a product of entries of the matrices that specify the quantum gates of the circuit; this exponential sum can be exactly computed (modulo finite-precision issues) in polynomial space. Given that PSPACE is in IP, it follows that there exists an interactive protocol, of the form described above, that allows a classical polynomial-time verifier to verify the outcome of a quantum computation by asking questions to an untrusted prover. But there is a hitch. The model of interactive proofs does not place limitations on the computational effort required of the prover. In the protocol for computing sums of polynomials described earlier, the prover has to compute multiple exponential-sized intermediate sums, that in general may take exponential time to compute. Unfortunately, following the proofs that BQP is in PSPACE and that PSPACE is in IP leads to a very similar protocol, in which the prover has to compute exponentially large sums that do not seem to be amenable to efficient computation even by a quantum procedure. There has been very little progress in tweaking the protocols obtained in this way to decrease the computational effort required for the honest prover (see, e.g., Reference 3 for a discussion).

1.2. Interactive arguments

If we are to make the requirement that the actions of the honest prover (i.e., the prover in charge of convincing the verifier in the case of an input in $L_{\mathrm{yes}}$) can be implemented efficiently (on a quantum computer), it is reasonable also to ask, not that there does not exist a prover $P$ that would improperly convince the verifier to accept an input in $L_{\mathrm{no}}$, but merely that no such $P$ exists that can be implemented in (quantum) polynomial time. Interactive proof systems such that the soundness condition holds under such a computational assumption are called arguments, a notion introduced in Reference 11.

Since it is not known that the classes P and NP, or even P and IP, are distinct, in order for the assumption to be effective, we further need to posit that a particular problem cannot be solved in (quantum) polynomial time. For example, we could make the assumption that the problem of factoring an integer cannot be solved in probabilistic polynomial time. Another example is that a certain explicit family of functions $\{f_n\}$, where $n$ plays the role of the input size parameter, cannot be inverted with nonnegligible (in $n$) success probability in (quantum) polynomial time.⁠Footnote6 We refer to any such assumption as a computational assumption. The following definition parallels Definition 1.1.

6

A negligible function is a function $\mu$ such that $\mu(n) p(n) \to 0$ as $n \to \infty$ for any polynomial $p$. A nonnegligible function grows faster than any negligible function.

Definition 1.2.

A language $L = (L_{\mathrm{yes}}, L_{\mathrm{no}})$ has an interactive argument under computational assumption (A) if there exists a randomized polynomial-time verifier $V$ such that the following hold.

Completeness. For any $x \in L_{\mathrm{yes}}$, there exists a prover $P$ such that $V$ interacting with $P$ accepts on input $x$ with probability at least $2/3$.⁠Footnote7

7

The definition of the completeness property does not explicitly require $P$ to be efficient. In practice, the properties of the honest prover are discussed on a case-by-case basis, depending on the argument system considered.

Soundness. For any $x \in L_{\mathrm{no}}$ and any (quantum) polynomial-time prover $P$, $V$ interacting with $P$ accepts on input $x$ with probability at most $1/3$.

The relaxed soundness condition of an interactive argument has proved fruitful for designing more efficient proof systems than are known otherwise. For example, there are arguments for languages in NP such that the total communication is only polylogarithmic in the size of the input Reference 22; no such protocol is known in the interactive proof model. The construction of such arguments rests upon two notions: probabilistically checkable proofs and commitments. Intuitively, the computational assumption is used by the verifier to delegate parts of its own verification procedure to the prover. The notion of commitment is used to tie the prover to performing the right actions. We discuss commitments in Section 4, as they play a central role in Mahadev’s interactive argument for quantum computation. To formulate her result precisely, it only remains to introduce the formalism of quantum computation and the associated complexity class BQP. This is done in the next section.

2. Quantum computation

In this section we give a light introduction to the formalism of quantum computing. Our goal in doing so is to provide the minimal background required to formally state Mahadev’s theorem, which is given at the end of the section, as well as to describe the main ideas of her proof, introduced in the following sections. (The reader already familiar with quantum computing may directly skip ahead to the statement of Theorem 2.3 at the end of the section.)

2.1. Quantum states and observables

2.1.1. States

An $n$-qubit quantum state is specified by a density matrix, a positive semidefinite matrix $\rho$ on the $2^n$-dimensional Hilbert space $(\mathbb{C}^2)^{\otimes n}$ such that $\rho$ has trace $1$. Density matrices generalize classical probability distributions over bits, as the latter can be represented by a probability vector that we can embed on the diagonal of a density matrix.

Even though in general the formalism of quantum mechanics extends to arbitrary separable Hilbert spaces, for convenience in these notes Hilbert spaces always come endowed with a canonical decomposition as a tensor product of $n$ copies of $\mathbb{C}^2$, for some finite integer $n$, such that each copy of $\mathbb{C}^2$ has a canonical basis $\{e_0, e_1\}$. We use the notation “ket” to write the canonical basis as $|0\rangle = e_0$, $|1\rangle = e_1$, and we refer to this basis as the computational basis. A quantum state is called pure if its density matrix has rank $1$; in this case we can also represent the state as a unit vector $\psi \in (\mathbb{C}^2)^{\otimes n}$ expressed in the canonical basis, or $|\psi\rangle$ in the more compact ket notation. An arbitrary pure state thus has an expansion $|\psi\rangle = \sum_{x \in \{0,1\}^n} \alpha_x |x\rangle$, where the $\alpha_x$ are complex coefficients such that $\sum_x |\alpha_x|^2 = 1$. The associated density matrix is the rank-$1$ projection $|\psi\rangle\langle\psi|$, where the “bra” notation $\langle\psi|$ is used for the conjugate-transpose of $|\psi\rangle$. For a density matrix $\rho$ that lies in the tensor product of multiple spaces we write e.g. $\rho_{\mathsf{AB}}$, using sans serif $\mathsf{A}$, $\mathsf{B}$, to identify the different subsystems, often referred to as registers.
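As a concrete numerical illustration (a numpy sketch; the particular state is an arbitrary choice, not one singled out by the text), the following builds a two-qubit pure state, forms its rank-$1$ density matrix, and checks the defining properties of a density matrix.

    import numpy as np

    # A two-qubit pure state |psi> = (|00> + |11>)/sqrt(2), written in the
    # computational basis {|00>, |01>, |10>, |11>} of C^2 tensor C^2.
    psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

    rho = np.outer(psi, psi.conj())    # the density matrix |psi><psi|

    print(np.isclose(np.trace(rho), 1))                # trace 1
    print(np.all(np.linalg.eigvalsh(rho) >= -1e-12))   # positive semidefinite
    print(np.linalg.matrix_rank(rho) == 1)             # rank 1: a pure state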

2.1.2. Observables

A measurement of a set of qubits is specified by an orthonormal basis of the Hilbert space associated with the qubits. The outcome of the measurement is the label of one of the basis vectors, and the probability with which each basis vector is obtained equals the squared norm of the component of the state that is in the direction of that basis vector. Formally, suppose $|\psi\rangle$ is a state in $(\mathbb{C}^2)^{\otimes n}$, and that the first $k$ qubits of $|\psi\rangle$ are measured in the orthonormal basis $\{|u_i\rangle\}$ of $(\mathbb{C}^2)^{\otimes k}$. To compute the probability of the $i$th outcome being obtained, we expand $|\psi\rangle$ in the basis as

$$|\psi\rangle = \sum_i |u_i\rangle \otimes |\psi_i\rangle,$$

where the $|\psi_i\rangle$ are arbitrary vectors in $(\mathbb{C}^2)^{\otimes (n-k)}$ (not necessarily normalized or orthogonal). The probability of the $i$th outcome is given by $\| |\psi_i\rangle \|^2$. It will later be important to remember that a measurement collapses the state: once the outcome $i$ has been obtained and recorded, the state undergoes a nonunitary evolution $|\psi\rangle \mapsto |u_i\rangle \otimes |\psi_i\rangle / \| |\psi_i\rangle \|$.⁠Footnote8

8

If the conflict between the statements that “quantum mechanics requires all evolutions to be unitary” and “a measurement is an irreversible process” puts you ill at ease, you are not alone.

A measurement in the basis $\{|u_i\rangle\}$, together with a choice of a real number $\lambda_i$ associated with each outcome $i$, can be succinctly represented as an observable $O = \sum_i \lambda_i |u_i\rangle\langle u_i|$. For a quantum state $\rho$, the real number $\mathrm{Tr}(O\rho)$ is precisely the expectation of $\lambda_i$, under the distribution on $i$ obtained by measuring the state in the basis $\{|u_i\rangle\}$. An example is the observable associated with a measurement of a qubit in the computational basis $\{|0\rangle, |1\rangle\}$, labeling the first outcome “$+1$” and the second “$-1$”. The associated observable is the Pauli $Z$ matrix,

$$Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

Similarly, a measurement in the Hadamard basis $\{|+\rangle, |-\rangle\}$, where $|\pm\rangle = \frac{1}{\sqrt{2}}(|0\rangle \pm |1\rangle)$, is represented by the Pauli $X$ observable,

$$X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
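These rules are easy to check numerically. A minimal numpy sketch (the state is an arbitrary choice): outcome probabilities for a computational-basis measurement, and the expectations of the $Z$ and $X$ observables.

    import numpy as np

    Z = np.array([[1, 0], [0, -1]], dtype=complex)   # computational basis
    X = np.array([[0, 1], [1, 0]], dtype=complex)    # Hadamard basis

    # An arbitrary single-qubit state |psi> = cos(t)|0> + sin(t)|1>.
    t = 0.3
    psi = np.array([np.cos(t), np.sin(t)], dtype=complex)

    # Computational-basis measurement: probabilities are the squared norms
    # of the components of |psi> along |0> and |1>.
    probs = np.abs(psi) ** 2
    print(probs)                                     # [cos^2 t, sin^2 t]

    # <psi|Z|psi> equals (+1) p(0) + (-1) p(1), the expected outcome label.
    print(np.vdot(psi, Z @ psi).real, probs[0] - probs[1])

    # <psi|X|psi>: expectation for a measurement in the Hadamard basis.
    print(np.vdot(psi, X @ psi).real)                # sin(2t)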

2.2. Circuits

Evolution in quantum mechanics is unitary. Given a unitary $U$, a pure state $|\psi\rangle$ evolves as $|\psi\rangle \mapsto U|\psi\rangle$, and a density matrix $\rho$ evolves as $\rho \mapsto U \rho U^\dagger$.

Not every unitary requires the same amount of resources to be implemented. Multiple models have been introduced to evaluate the complexity of implementing a unitary transformation. These include quantum Turing machines, the quantum circuit model, measurement-based computation, adiabatic computation, topological computation, and others. All these models have been shown equivalent up to a polynomial rescaling of the resources required. For our purposes, the simplest and most convenient model is the circuit model that we describe next.

In the circuit model, in order to implement an $n$-qubit unitary $U$, the unitary must first be decomposed as a product $U = U_m \cdots U_1$, where each $U_j$ is a unitary that acts nontrivially on at most two qubits; i.e., it can be written as a tensor product of a unitary on $\mathbb{C}^2 \otimes \mathbb{C}^2$ with the identity on the remaining space. Moreover, each $U_j$ should be taken from a finite gate set $\mathcal{G}$ of allowed operations on the computer. A widely used gate set is $\mathcal{G} = \{H, T, \mathrm{CNOT}\}$, where $H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$ is the Hadamard gate, $T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}$ is the $\pi/8$ (sometimes also called $T$) gate, and CNOT is the two-qubit unitary that sends $|a\rangle|b\rangle$ to $|a\rangle|a \oplus b\rangle$ for any $a, b \in \{0,1\}$.
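As an illustration of the circuit model (a numpy sketch; the two-gate circuit is an invented example), the following builds $H$, $T$, and CNOT as explicit matrices and applies a small circuit to the all-zero state.

    import numpy as np

    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]], dtype=complex)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    # A two-qubit circuit: H on the first qubit, then CNOT. A one-qubit
    # gate acts on the full space by tensoring with the identity.
    U = CNOT @ np.kron(H, I2)

    zero = np.array([1, 0, 0, 0], dtype=complex)     # the state |00>
    print(U @ zero)        # (|00> + |11>)/sqrt(2), an entangled state

($T$ is unused in this tiny example but is needed for universality.)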

A fundamental theorem in quantum computing, the Solovay–Kitaev theorem, states that for the purpose of efficient circuit representation any finite set of $1$- and $2$-qubit gates is as good as any other, as long as it generates a dense subgroup in $SU(4)$ (which is the case for the above-defined set). More formally,

Theorem 2.1 (Solovay and Kitaev, 1997).

There is a constant $c$ such that for any finite gate set $\mathcal{G} \subseteq SU(4)$ such that the group generated by $\mathcal{G}$ is dense in $SU(4)$ and is closed under inverse, for any $\varepsilon > 0$ there is an $\ell = O(\log^c(1/\varepsilon))$ such that $\mathcal{G}^\ell$ is an $\varepsilon$-net in $SU(4)$.⁠Footnote9

9

An $\varepsilon$-net is a set of points $S$ such that for all $U \in SU(4)$, there is a $V \in S$ such that $\|U - V\| \leq \varepsilon$. Here the norm is the operator norm, but any norm would give essentially the same result since the space has small dimension.

We can now define the class BQP of problems that we are concerned about.

Definition 2.2.

A promise language $L = (L_{\mathrm{yes}}, L_{\mathrm{no}})$ is in BQP if there exists a classical deterministic polynomial-time Turing machine such that, on input $x$ the Turing machine returns the description of a quantum circuit $C$ over the gate set $\{H, T, \mathrm{CNOT}\}$, together with a specially designated output qubit for the circuit, such that the following hold.

If $x \in L_{\mathrm{yes}}$, then the probability that a measurement of the output qubit of $C$, when all input qubits are initialized to the $|0\rangle$ state, returns $1$, is at least $2/3$;

If $x \in L_{\mathrm{no}}$, then the same probability is at most $1/3$.

Due to the fact that quantum gates are by necessity reversible, it may not be immediately obvious that quantum computations generalize classical computation. In fact, given a function $f : \{0,1\}^n \to \{0,1\}^m$ specified by a classical circuit, it is always possible to devise a quantum circuit of comparable size for the unitary $U_f$ that maps $|x\rangle|y\rangle$ to $|x\rangle|y \oplus f(x)\rangle$ for $x \in \{0,1\}^n$, $y \in \{0,1\}^m$. This can be achieved by showing that any classical circuit over, say, $\{\mathrm{AND}, \mathrm{OR}, \mathrm{NOT}\}$, can be efficiently simulated by a reversible circuit; we omit the details. We often consider the application of the unitary $U_f$ in superposition: by linearity,

$$U_f \Big( \sum_x \alpha_x |x\rangle |0^m\rangle \Big) = \sum_x \alpha_x |x\rangle |f(x)\rangle.$$
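A small numpy sketch of this construction (the function $f$ is an invented toy example): $U_f$ is a permutation matrix on the $2^{n+m}$ basis states, hence unitary, and applying it to a uniform superposition evaluates $f$ on all inputs at once.

    import numpy as np

    n, m = 2, 1                      # f maps 2 bits to 1 bit (toy example)
    f = lambda x: x & 1              # f(x) = least significant bit of x

    # Build U_f: the basis state |x>|y>, with index x*2^m + y, is mapped
    # to |x>|y XOR f(x)>.
    dim = 2 ** (n + m)
    U_f = np.zeros((dim, dim))
    for x in range(2 ** n):
        for y in range(2 ** m):
            U_f[x * 2 ** m + (y ^ f(x)), x * 2 ** m + y] = 1

    print(np.allclose(U_f @ U_f.T, np.eye(dim)))   # a permutation: unitary

    # Apply U_f to sum_x |x>|0> (normalized): by linearity the result is
    # sum_x |x>|f(x)>, i.e., f evaluated in superposition.
    state = np.zeros(dim)
    for x in range(2 ** n):
        state[x * 2 ** m] = 1 / 2 ** (n / 2)
    print(np.nonzero(U_f @ state)[0])   # indices encoding the pairs (x, f(x))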

We close this section by stating the main result that these notes aim to present. The computational assumption that underlies soundness of the interactive argument, the learning with errors assumption, is a standard assumption in post-quantum cryptography (classical cryptography designed to be secure against quantum adversaries) that we review in Section 7.

Theorem 2.3 (Mahadev, 2018).

Any language in BQP has an interactive argument that is computationally sound against quantum polynomial-time provers under the LWE assumption. Moreover, the verifier runs in probabilistic polynomial time, the honest prover can be executed in quantum polynomial-time, and the interaction between verifier and prover consists of two rounds (four messages) only.

It is an important open question if the same theorem can be stated in the formalism of interactive proofs, instead of arguments (i.e., without making a computational assumption on the cheating prover), while still keeping the condition that the honest prover can be executed in quantum polynomial time. Intuitively, in Mahadev’s proof system the post-quantum cryptographic assumption is used to restore symmetry between the quantum prover and the less powerful classical verifier. A specific cryptographic construction, a quantum commitment scheme, is used to “tie the prover’s hands” in a way such that it has no choice but to implement the required computation—unless it has the power to break the cryptographic assumption. Nevertheless, while it is clear how the assumption is leveraged in the design of the protocol, it is not known whether one could do without it.

3. Certificates for computation

The first step in the proof of Theorem 2.3 is to identify a quantum certificate for the validity of a given quantum computation (i.e., a certificate that the probability that a quantum circuit provided as input returns the outcome $1$, when all input qubits are initialized to the $|0\rangle$ state, is at least $2/3$; cf. Definition 2.2). This is achieved by the main theorem of this section, Theorem 3.1, that is due to Kitaev and predates Mahadev’s work. In the following sections we will show how the existence of the kind of certificate provided by Theorem 3.1 can be verified using an interactive argument with a classical verifier.

We first consider the case of a classical computation. Given the description of a classical circuit as input, what is a good certificate for the claim that the circuit returns the value $1$ when all input bits are initialized to the $0$ state? While such a certificate is not really needed, as the verifier can simply execute the circuit by itself, this solution does not generalize well to the case of a quantum circuit and a classical verifier. In the following section we leverage the theory of NP-completeness to identify a particular kind of certificate for the validity of a classical circuit that has the advantage that it will generalize to the case of quantum circuits.

3.1. Warmup: the Ising spin problem

We consider a problem from classical statistical physics and show how to reduce the verification of a classical computation to it. Consider a graph $G = (V, E)$ with $n$ vertices, such that each vertex $i \in V$ is associated a value (or state) $\sigma_i \in \{\pm 1\}$, and each edge $\{i, j\} \in E$ is associated a real weight (or coupling constant) $J_{ij}$ such that $|J_{ij}| \leq 1$. (See Figure 3.) Vertices represent particles that can be in one of two states, $+1$ or $-1$, and edges represent interactions between particles, where the interaction can be attractive ($J_{ij} < 0$) or repulsive ($J_{ij} > 0$). The energy of a configuration $\sigma = (\sigma_1, \ldots, \sigma_n) \in \{\pm 1\}^n$ is defined as⁠Footnote10

$$H(\sigma) = \sum_{\{i,j\} \in E} J_{ij}\, \sigma_i \sigma_j. \tag{3.1}$$

10

Technically the expression in Equation 3.1 can take negative values, which may not seem appropriate for an “energy”. Correcting this is a matter of introducing an additive shift.

Informally, the energy functional (a.k.a. Hamiltonian) measures the number of interaction constraints that are not satisfied by an assignment $\sigma$, with each violation giving a penalty of $1$. It is well-known that the problem of deciding, given as input the coefficients $J_{ij}$ and two thresholds $a < b$ such that $b - a$ is at least a constant independent of $n$,⁠Footnote11 whether the minimum of $H(\sigma)$ over all $\sigma \in \{\pm 1\}^n$ is less than $a$, or larger than $b$, is an NP-complete problem (i.e., any problem in NP reduces to it); moreover, this holds even for the case where $J_{ij} \in \{\pm 1\}$ for all $\{i, j\} \in E$.

11

It is typical to normalize $H$ by dividing it by its norm $\|H\|$, in which case the promise on the gap $b - a$ becomes that it is at least an inverse polynomial in $n$.
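As a sanity check on Equation 3.1 (a Python sketch; the three-vertex instance is an invented example), the following evaluates the energy of a configuration and minimizes it by brute force over all $2^n$ configurations, exactly the exhaustive search that the NP-hardness of the problem suggests cannot be avoided in general.

    from itertools import product

    # An invented instance: a triangle with all couplings repulsive.
    n = 3
    J = {(0, 1): 1, (1, 2): 1, (0, 2): 1}

    def energy(sigma):
        # Equation 3.1: H(sigma) = sum over edges {i,j} of J_ij s_i s_j.
        return sum(Jij * sigma[i] * sigma[j] for (i, j), Jij in J.items())

    # Brute-force minimization over all 2^n configurations in {+1, -1}^n.
    best = min(product([1, -1], repeat=n), key=energy)
    print(best, energy(best))   # energy -1: a frustrated triangle cannot
                                # satisfy all three repulsive constraints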

To understand why the Ising problem is as hard as any problem in NP, let us see how we can reduce to it the problem of deciding whether, given a classical circuit $C$ acting on $n + m$ bits and an input string $x \in \{0,1\}^n$, there exists a string $w \in \{0,1\}^m$ such that the circuit accepts $(x, w)$. By definition any problem in NP can be expressed in this form, with $x$ as the input, $w$ as the witness, and $C$ as the verifier’s circuit. In general $C$ is specified by a sequence of gates taken from some fixed gate set, that we may without loss of generality restrict to the sole NAND gate.⁠Footnote12 Next consider the tableau of the computation performed by the circuit $C$. (See also Figure 4.) This is simply a list of values associated with each wire in the circuit: the input wires (initialized to $(x, w)$), the output wire of any gate in the circuit, and the final output wire (that should equal $1$, which stands for “accept”). Given a tableau, it is possible to verify that the tableau is correct by checking the propagation of each gate, one at a time: if the inputs to a NAND gate are $(a, b)$, the output should be $\overline{a \wedge b}$. Wires corresponding to the input string $x$ should be initialized to the corresponding input bit, whereas wires associated with the witness string $w$ can take arbitrary values. We can express this set of constraints as a Hamiltonian $H : \{0,1\}^N \to \mathbb{R}$, where $N$ is the total number of wires in the circuit:

$$H = H_{\mathrm{in}} + H_{\mathrm{prop}} + H_{\mathrm{out}},$$

12

The NAND gate maps $(a, b)$ to $(a, \overline{a \wedge b})$. (The first output is included for convenience, to keep the number of wires constant, but is not necessary.) It is a universal gate, meaning that any circuit can be simulated by one made of NAND gates only with a polynomial overhead in the total number of gates.

where $H_{\mathrm{in}}$ is a summation of energy penalties⁠Footnote13 $\mathbf{1}_{z_i \neq x_i}$ with $z_i$ the input wire associated with the $i$th bit of $x$ and $\mathbf{1}_{z_j \neq 0}$ with $z_j$ the input wire associated with the $j$th ancilla bit, $H_{\mathrm{prop}}$ a summation of penalties of the form $\mathbf{1}_{c \neq \overline{a \wedge b}}$ for each NAND gate mapping $(a, b)$ to $(a, c)$, and $H_{\mathrm{out}}$ consists of a single penalty term $\mathbf{1}_{z_{\mathrm{out}} \neq 1}$ for the output wire. Then, $H$ is such that there exists a $z$ such that $H(z) = 0$ if and only if there is a witness $w$ such that $C$ accepts $(x, w)$; otherwise, $H(z) \geq 1$ for all $z$.

13

We use the notation $\mathbf{1}_E$ for the indicator that event $E$ occurs.

Note that $H$ does not quite have the form Equation 3.1 of an Ising spin Hamiltonian yet: some of its terms involve three variables at a time, and moreover not all terms are directly expressed as a function of the parity of the sum of two variables. With a little more work using so-called gadgets, it is possible to complete the reduction and find an $H'$ that is equivalent to $H$ (in terms of capturing the same decision problem) but is of the form Equation 3.1 Reference 6.

Using the trivial observation that P $\subseteq$ NP (any language that can be efficiently decided can be efficiently verified by ignoring the witness and performing the efficient decision procedure), it follows from the above discussion that the problem of deciding whether a classical circuit returns $1$ on the all-$0$ input, which is complete for P, can be efficiently reduced to the problem of deciding whether an Ising Hamiltonian has a configuration with energy at most some threshold value $a$ (that can be efficiently determined by following the steps of the reduction), or if all configurations have energy at least $b$, for some $a < b$ such that $b - a \geq 1$. (Explicitly, the Hamiltonian is obtained by adding penalty terms for all input wires associated to $w$ to the Hamiltonian obtained from the reduction described in the preceding paragraphs.)
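The following Python sketch (the one-gate circuit is an invented toy example) assembles the penalty Hamiltonian $H = H_{\mathrm{in}} + H_{\mathrm{prop}} + H_{\mathrm{out}}$ for a circuit consisting of a single NAND gate applied to an input bit $x$ and a witness bit $w$, and confirms that the zero-energy tableaux are exactly the accepting computations.

    from itertools import product

    NAND = lambda a, b: 1 - (a & b)
    x = 0                            # the fixed input bit (illustrative)

    def H(tableau):
        zx, zw, zout = tableau       # wires: input x, witness w, gate output
        H_in = int(zx != x)          # input wire must carry the bit x
        H_prop = int(zout != NAND(zx, zw))   # gate must propagate correctly
        H_out = int(zout != 1)       # output wire must read 1 ("accept")
        return H_in + H_prop + H_out

    # Enumerate all tableaux; zero energy <=> an accepting computation.
    for t in product([0, 1], repeat=3):
        if H(t) == 0:
            print(t)   # (0, 0, 1) and (0, 1, 1), since NAND(0, w) = 1

Note that the witness wire $z_w$ is unconstrained, exactly as in the reduction above.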

Our next step is to devise a quantum analogue of this reduction.

3.2. Quantum spin problems

We can interpret the Hamiltonian introduced in Equation 3.1 as an energy functional that associates an energy $H(\sigma)$ to any configuration $\sigma \in \{\pm 1\}^n$. In quantum mechanics, a Hamiltonian is any linear operator on Hilbert space, with the restriction that the operator should be Hermitian (and bounded; for convenience here we only consider finite-dimensional spaces, so that the latter condition is automatic). The interpretation is that the Hamiltonian associates a definite energy, the eigenvalue $\lambda_i$, to any quantum state that happens to be in one of the Hamiltonian’s eigenstates $|\psi_i\rangle$. The energy of an arbitrary state $|\psi\rangle$ is then computed as the expectation $\langle\psi|H|\psi\rangle$.⁠Footnote14 Often it is also required that the Hamiltonian be local, meaning that