Verifying quantum computations at scale: A cryptographic leash on quantum devices

By Thomas Vidick

Abstract

Rapid technological advances point to a near future where engineered devices based on the laws of quantum mechanics are able to implement computations that can no longer be emulated on a classical computer. Once that stage is reached, will it be possible to verify the results of the quantum device?

Recently, Mahadev introduced a solution to the following problem: Is it possible to delegate a quantum computation to a quantum device in a way that the final outcome of the computation can be verified on a classical computer, given that the device may be faulty or adversarial and given only the ability to generate classical instructions and obtain classical readout information in return?

Mahadev’s solution combines the framework of interactive proof systems from complexity theory with an ingenious use of classical cryptographic techniques to tie a “cryptographic leash” around the quantum device. In these notes I give a self-contained introduction to her elegant solution, explaining the required concepts from complexity, quantum computing, and cryptography, and how they are brought together in Mahadev’s protocol for classical verification of quantum computations.

Quantum mechanics has been a source of endless fascination throughout the 20th century—and continues to be in the 21st. Two of the most thought-provoking aspects of the theory are the exponential scaling of parameter space (a pure state of n qubits requires 2^n complex parameters to be fully specified), and the uncertainty principle (measurements represented by noncommuting observables cannot be performed simultaneously without perturbing the state). The conceptual difficulty of the problem of verification of quantum computations stems from both aspects. Suppose we are given the description of an experiment that can be modeled in quantum mechanics—say, a number of individual photons are emitted by lasers in a certain configuration, then made to interact according to optical equipment such as mirrors and beam-splitters, and finally some observation is made, for example counting the number of photons that hit a strategically located detector within a certain time period. Quantum mechanics provides a set of formal rules that, in principle, allow for the computation of the distribution of possible outcomes obtained in this experiment—what is the probability that any number of photons hit the detector within the prescribed time frame? These rules yield extremely precise predictions that have been verified in countless experiments. In general however, computing the prediction requires a number of operations that scales exponentially with n, the total number of photons in the experiment. What this means in practice is that as soon as n exceeds, say, a few dozen, it becomes all but infeasible, using even the most powerful supercomputers available today, to predict the outcome of any nontrivial quantum experiment.

In addition to the intrinsic exponential scaling of quantum mechanics, other quantum phenomena, such as the uncertainty principle, place fundamental limits on our ability to verify that a quantum mechanical evolution proceeds as expected. Any attempt by the experimentalist at making intermediate observations on the state of a subset of the elementary particles involved in her experiment risks altering its outcome, so that it is not clear at all if the results of low-level benchmarks can be meaningfully pieced together to certify the final result.

These obstructions should come as no surprise. Indeed it is the same difficulty, of classical simulation of quantum evolutions, that prompted Feynman to bring forward the idea of a quantum computer in the first place: a computer that by its very nature would have the ability to efficiently simulate any quantum process Reference 15. While such a universal quantum simulator remains a distant technological challenge, smaller-scale quantum devices have begun to appear that will soon have the capacity to simulate the evolution of specific quantum-mechanical systems, enabling physicists to, e.g., make predictions regarding properties of new materials (see, e.g., Reference 7). Such simulators will become interesting the day when they are able to generate predictions that could not have been obtained on a classical computer. Assuming that such a classically unsimulatable quantum simulator exists, can one check that the simulator accomplishes the task it was asked to perform—given that the task could not have been accomplished on a classical computer? If the simulator makes a wrong prediction—be it due to a fault in its implementation, or even due to malice on the implementer’s part—is there any way that the error can be detected without having to rely on yet another quantum simulator to duplicate the first simulator’s results?

Not all is lost. There obviously are some quantum computations whose outcome can be easily checked. A standard example is factoring: having run Shor’s quantum algorithm for finding a prime factor p of an integer N, it is easy to execute Euclid’s algorithm to check that p divides N. In the language of complexity theory, the complexity class⁠Footnote1 associated with the set of languages that can be decided efficiently with the help of a classical (probabilistic) computer is denoted BPP (for bounded-error probabilistic polynomial-time), while with a quantum computer one gets BQP (for bounded-error quantum polynomial-time). Factoring is an example of a problem that lies in the class BQP, but is not known to lie in the class BPP.⁠Footnote2
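To make the factoring example concrete, here is a minimal Python sketch of the classical verification step (the function name and the toy numbers are illustrative choices, not from the text): checking a claimed factor takes a single gcd computation, no matter how hard the factor was to find.

```python
from math import gcd

def verify_factor(N: int, p: int) -> bool:
    """Classically verify a claimed nontrivial factor p of N.

    The check runs in time polynomial in the bitlength of N, even though
    finding p may have required Shor's quantum algorithm.
    """
    # Euclid's algorithm: gcd(p, N) == p exactly when p divides N
    return 1 < p < N and gcd(p, N) == p

# e.g., a quantum device claims that 3 is a factor of 15
claim_ok = verify_factor(15, 3)
```

(A full verifier would also check that p is prime, which is likewise efficient classically.)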

1

A complexity class is a collection of languages, and a language is a set of strings; for example, the set of all binary representations of graphs that are 3-colorable is a language. See Section 1 for more background on complexity theory.

2

Technically, one would have to associate a language (i.e., a set of bitstrings) to factoring. An example is the set of binary representations of pairs of integers (N, K) such that N has a prime factor p such that p ≤ K.

As emphasized above, the factoring problem has the additional property that solutions to it are easily verifiable. Recognizing that many algorithmically difficult problems seemed to share the feature of having easily verifiable solutions (other examples are the problem of deciding if two graphs are isomorphic and the problem of deciding if a graph is 3-colorable), Karp introduced in 1972 the complexity class NP (for nondeterministic polynomial-time). Informally, NP is the class of all languages that can be efficiently verified on a classical computer, given the right witness, or proof. If every problem in BQP (i.e., every problem that can be solved efficiently on a quantum computer) had this property—that the correct outcome of the computation can be efficiently verified on a classical computer—then the question of classical verifiability of quantum computation would be moot. However, there are very good reasons to think that this is not the case.

Indeed, complexity theorists strongly believe that there are problems in BQP that do not lie in NP.⁠Footnote3 One candidate for such a problem is the forrelation problem introduced in Reference 1. The input to this problem is a pair of Boolean functions f, g : {0,1}^n → {−1, 1}, and it is promised that g and the Fourier transform of f are either highly correlated, or barely correlated.⁠Footnote4 The goal is to determine which of the two cases holds. There is an easy quantum algorithm for this problem that requires a single evaluation of each of the functions f and g, whereas it was shown in Reference 27 that, if access to f and g is restricted to evaluation queries, then the problem lies beyond even the polynomial hierarchy, which is a vast extension of the class NP. The open challenge is to find an explicit family of functions (f, g) for which the problem remains hard, even when given an explicit description of the functions (such as an evaluation circuit) as input.

3

They also believe that there are problems in NP, such as the traveling salesman problem or any NP-complete problem, that are not in BQP, so that the two classes are incomparable.

4

To be precise, the definition involves the forrelation quantity Φ_{f,g} = 2^{−3n/2} Σ_{x,y∈{0,1}^n} f(x)(−1)^{x·y} g(y), with “highly correlated” and “barely correlated” referring to the magnitude of Φ_{f,g}.

Figure 1.

Known inclusions between complexity classes. Note that NP is not known to contain BPP because in NP, the verification procedure is assumed deterministic. With a randomized verifier, one obtains the class MA (for Merlin–Arthur) that does contain BPP.


There exists a wide range of problems that are known to be efficiently solvable on a quantum computer, but for which there is no obvious efficiently verifiable proof (such as the forrelation problem). Is it possible to design theoretical experiments, or protocols, based on quantum mechanics, for such problems such that the proper execution of the experiment, or computation, can be certified correct using a verification procedure that does not itself rely on the manipulation of quantum information?

In these notes I present an elegant solution to this problem due to Mahadev Reference 24. Before presenting the solution, a first step is to formalize the question. This is done in two steps. First, in Section 1 I introduce the complexity-theoretic formalism of interactive proofs and arguments that forms the conceptual backdrop against which Mahadev’s result is formulated. Second, in Section 2 I give standard background in quantum computing that allows me to formalize the complexity class BQP and state Mahadev’s theorem. The remainder of the document is devoted to a presentation of the main ideas that constitute the proof of Mahadev’s result. Very roughly, the proof proceeds through the following steps.

An arbitrary quantum computation can be specified classically by providing the gate-by-gate description of a quantum circuit C, as well as a classical description of the input on which the circuit is to be evaluated (generally, this will be a zero state representing a default initialization of all wires of the circuit). The first step is to reduce the problem of evaluating the output of such a circuit to the problem of verifying that a local Hamiltonian H, which can be computed efficiently classically from C, has a smallest eigenvalue that is below a certain threshold. Informally, this step is the quantum analogue of encoding the tableau of a classical computation into an instance of the Ising spin problem. This step is described in Section 3.

The previous step has reduced the verification problem to the problem of classically certifying that a local Hamiltonian has an eigenstate with small enough eigenvalue. Section 4 introduces the idea of a commitment, which is an interactive procedure by which a party can commit to a certain value b, without revealing information about b, yet in a way that is binding: the party is unable to later “change its mind” with respect to b and claim that it in fact had committed to some b′ ≠ b. The notion of a commitment protocol for quantum states is the key ingredient in Mahadev’s protocol and is presented in Section 5.
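As a point of comparison, the classical notion of commitment can be sketched in a few lines of Python, using a hash function in place of the LWE-based trapdoor tools that Mahadev's quantum scheme actually requires (this is an illustrative classical sketch only, not her construction):

```python
import hashlib
import secrets

def commit(b: int):
    """Commit to a bit b: publish c = SHA-256(r || b); keep the opening secret.

    Hiding: c computationally reveals nothing about b (r is a fresh random nonce).
    Binding: opening the same c to b' != b would require finding a hash collision.
    Illustrative sketch only; Mahadev's scheme commits to *quantum* states.
    """
    r = secrets.token_bytes(32)
    c = hashlib.sha256(r + bytes([b])).digest()
    return c, (r, b)

def reveal(c, opening):
    """Return the committed bit if the opening is consistent, else None."""
    r, b = opening
    return b if hashlib.sha256(r + bytes([b])).digest() == c else None
```

The two properties pull in opposite directions: hiding says the commitment leaks nothing, binding says it nevertheless pins the committer down to a single value.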

Armed with the notion of a quantum commitment scheme, we describe the verification protocol itself in Section 6. Informally, in the protocol the prover first provides classical information that is meant to indicate the prover’s commitment to being in the possession of a unique, well-defined quantum state |ψ⟩. Then, the verifier challenges the prover to reveal measurement outcomes performed on the state that it committed to, in a way that allows the verifier to estimate the associated energy ⟨ψ|H|ψ⟩ and verify that it is sufficiently small.

The security of Mahadev’s quantum commitment scheme, on which her result ultimately rests, relies on a computational assumption regarding the impossibility of efficiently solving a problem known as the Learning with Errors (LWE) problem. This problem is described in Section 7.

These notes are written in a way that aims to make the most important insights of Mahadev’s work accessible to any mathematician, with or without background in complexity theory and quantum information. As a result, the presentation will remain rather informal, and we alert the reader whenever the discussion makes an important shortcut.

To keep the presentation focused, we do not survey prior works and other approaches to verification in any depth. The question has a long history that, prior to Mahadev’s work, had resulted in partial answers, obtained in a variety of models of verification. Some of the most important results include the concurrent works of Aharonov et al. Reference 2 and Broadbent et al. Reference 12, which showed how to achieve verification in a model where the verification procedure can make use of a small, trusted quantum computer, and the work of Reichardt et al. Reference 29 in a model where the verification procedure is entirely classical but has access to two spatially isolated quantum computers, sharing entanglement, that jointly implement the quantum computation it aims to verify. In contrast to Mahadev’s result presented here, these works achieve information-theoretic (instead of computational) guarantees in their respective models. For an in-depth discussion of these and related works, I recommend the recent survey by Gheorghiu et al. Reference 17.

1. Interactive proofs and arguments

The concept of an interactive proof system can be difficult to digest for the mathematician, in part because it involves some amount of personification. When they talk of an interactive protocol, computer scientists are accustomed to referring to imaginary beings known as the verifier and the prover. Even worse, these imaginary beings are endowed with intentions: the prover is trying to demonstrate something to the verifier, while the verifier attempts to catch any cheating behavior from the prover. This is not the kind of language that is frequently used in, say, the theory of differential equations or operator spaces. Please bear with me—interactive proofs are one of the most powerful ideas to have emerged out of complexity theory since the 1990s, and they are a key element of Mahadev’s solution.

It all starts with the complexity class NP, whose study originates in the works of Cook, Karp, and Levin in the 1970s. A complexity class is a collection of languages. A (promise) language is a pair of subsets L_yes, L_no of {0,1}*, where {0,1}* is the set of all bit strings, which are sequences of arbitrary but finite length over the alphabet {0,1}. For example, L_yes could be the set of (suitably encoded) 3-SAT formulas that admit a satisfying assignment,⁠Footnote5 and L_no could be the set of formulas that are not satisfiable. The language L = (L_yes, L_no) is called 3-SAT. Informally, a language is in the class NP if valid statements have an efficiently verifiable proof, and invalid statements have no valid proof. To formalize the notion of valid proof, we introduce the notion of verifier, represented by a polynomial-time Turing machine M (for our purposes, the reader may replace the intimidating notion of Turing machine by any intuitive notion of efficient computation, such as an algorithm) whose goal is to verify claimed proofs. Thus a language L is in the class NP if there exists a real polynomial p and a Turing machine M such that for all x ∈ L_yes ∪ L_no,

5

A 3-SAT formula is an AND of 3-variable ORs, i.e., a Boolean formula of the form (x_1 ∨ x̄_2 ∨ x_3) ∧ (x̄_1 ∨ x_4 ∨ x_5) ∧ ⋯, where the x_i are Boolean variables and x̄_i denotes variable negation.

(i)

if x ∈ L_yes, then there exists a w ∈ {0,1}^{p(|x|)}, the witness, or proof, such that M(x, w) halts in at most p(|x|) steps, where |x| denotes the length of x, and returns 1 (for “accept”); and

(ii)

if x ∈ L_no, then for any w ∈ {0,1}^{p(|x|)}, M(x, w) halts in at most p(|x|) steps and returns 0 (for “reject”).

Property (i) is usually referred to as the completeness condition and (ii) as the soundness condition. The fact that 3-SAT is in NP follows since given as input a 3-SAT formula φ, if the formula is satisfiable, then there exists a witness w (a satisfying assignment for φ) that proves this, whereas if the formula is not satisfiable, it is easy for M to check that any given purported assignment is indeed invalid.
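As an illustration of the NP verifier M for 3-SAT, here is a minimal Python sketch (the encoding conventions are my own: a clause is a triple of nonzero integers in the DIMACS style, with −k denoting the negation of variable k):

```python
def verify_3sat(clauses, assignment):
    """Check a claimed satisfying assignment in time linear in the formula size."""
    def lit_value(l):
        v = assignment[abs(l)]
        return v if l > 0 else not v
    # the formula is an AND of ORs: every clause must contain a true literal
    return all(any(lit_value(l) for l in clause) for clause in clauses)

# (x1 OR not-x2 OR x3) AND (not-x1 OR x2 OR x3)
phi = [(1, -2, 3), (-1, 2, 3)]
witness = {1: True, 2: True, 3: False}
```

Completeness and soundness are immediate: a satisfiable formula has a witness that makes the check pass, while for an unsatisfiable formula every assignment fails it.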

Informally, NP captures the collection of problems that have efficiently verifiable solutions, such as factoring, 3-SAT, the traveling salesman problem, or mathematical theorems (that have reasonably short proofs within a prespecified axiomatic system).

1.1. Interactive proof systems

A key insight from research in complexity theory in the 1990s is that the collection of languages that admit efficient verification can be substantially extended beyond the class NP by allowing interactive protocols for verification References 5 and 18. Consider a verification procedure V, referred to as the verifier (personification, here it comes…), that is allowed to “ask questions” to another entity, the prover, about a claimed proof, as illustrated in Figure 2. Can this allow verification of languages other than the languages in NP, which admit static proofs?

The verifier in an interactive proof system is modeled as a randomized polynomial-time procedure. Informally, this refers to any algorithm that makes a total number of steps that is bounded by a polynomial function (that may depend on the algorithm) of the bitlength of its input and that has access to a source of uniformly random bits. Formally, the verifier is modeled by a randomized interactive Turing machine, but we will not need to descend to that level of detail. It is important that the verifier be randomized: indeed, interactive proof systems with deterministic verifiers reduce to NP because, in the case of deterministic messages from the verifier, there is no advantage to interaction (the prover can send the entire transcript as “proof”). A randomized verifier may sometimes make an erroneous decision, as long as for every input the probability of making an error in deciding that input is small: this ensures that repeating the verification procedure sufficiently many times and taking the majority decision yields an outcome that is erroneous with arbitrarily small probability.

Definition 1.1.

A language L = (L_yes, L_no) has an interactive proof system if there exists a randomized polynomial-time verifier V such that the following hold.

Completeness. For any x ∈ L_yes, there exists a prover P, referred to as the honest prover, such that V interacting with P accepts on input x with probability at least 2/3.

Soundness. For any x ∈ L_no and any prover P, V interacting with P accepts on input x with probability at most 1/3.

The collection of languages for which there exists an interactive proof system is denoted IP.

Note the asymmetry between the completeness and soundness conditions in the definition, which reflects a similar asymmetry in the definition of NP: it is always assumed that the prover aims to convince the verifier that the input is in the language (e.g., for the case of NP, that a 3-SAT formula is satisfiable). The verifier, in turn, aims to make the right decision by taking information from the prover but “verifying” it before making a decision, so as not to wrongly accept a negative instance.

To gain intuition as to how interaction may help, consider the following toy problem. The input is interpreted as a bivariate polynomial g defined over a prime field F_q, such that g has degree at most d in each of its variables. Think of q as much larger than d: the input size, measured in number of bits, is roughly d² log q (a list of coefficients), and q could be exponentially larger. Let L_yes be the set of all such polynomials such that Σ_{x,y∈F_q} g(x,y) = 0, and let L_no be the set of polynomials such that Σ_{x,y∈F_q} g(x,y) ≠ 0. Computing the sum naïvely takes time Õ(q²), where the Õ hides a multiplicative factor of order the time it takes to evaluate g at any given point, i.e., a polynomial in d and log q. Consider the following protocol, using which the verifier can be led to making the right decision while having to invest a computational effort of Õ(q) only. The first step in the protocol is a message from the prover to the verifier, which consists in the claimed value K of the sum, together with a supporting statement in the form of a degree-d polynomial h such that Σ_{x∈F_q} h(x) = K. (The honest prover can find such an h without difficulty by setting h(x) = Σ_{y∈F_q} g(x,y).) Upon receipt of an arbitrary pair (K, h), the verifier first checks the equality Σ_{x∈F_q} h(x) = K, which takes time Õ(q). Next, she selects a uniformly random x* ∈ F_q and checks that h(x*) = Σ_{y∈F_q} g(x*, y), which again requires time Õ(q). Note that, if it was the case that h ≠ Σ_y g(·, y), by the Schwartz–Zippel lemma the probability that h(x*) = Σ_y g(x*, y) would be at most d/q, which is small provided q is much larger than d, as we assumed. Thus the verifier makes the right decision with high probability, on any input.
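The toy protocol can be written out in a few lines of Python (the parameters and the representation of g by a coefficient array are my own illustrative choices):

```python
import random

q, d = 101, 3  # toy parameters: a prime field F_q with q much larger than d

def eval_g(a, x, y):
    """Evaluate g(x, y) = sum_{i,j} a[i][j] x^i y^j over F_q."""
    return sum(a[i][j] * pow(x, i, q) * pow(y, j, q)
               for i in range(d + 1) for j in range(d + 1)) % q

def honest_prover(a):
    """Send the claimed sum K and h(x) = sum_y g(x, y), as d+1 coefficients."""
    S = [sum(pow(y, j, q) for y in range(q)) % q for j in range(d + 1)]
    h = [sum(a[i][j] * S[j] for j in range(d + 1)) % q for i in range(d + 1)]
    K = sum(sum(c * pow(x, i, q) for i, c in enumerate(h)) for x in range(q)) % q
    return K, h

def verifier(a, K, h):
    """O(q)-time check; computing the sum naively would cost O(q^2)."""
    if sum(sum(c * pow(x, i, q) for i, c in enumerate(h)) for x in range(q)) % q != K:
        return False                # first check: sum_x h(x) = K
    xs = random.randrange(q)        # Schwartz-Zippel spot check, error <= d/q
    lhs = sum(c * pow(xs, i, q) for i, c in enumerate(h)) % q
    rhs = sum(eval_g(a, xs, y) for y in range(q)) % q
    return lhs == rhs

a = [[(3 * i + j + 1) % q for j in range(d + 1)] for i in range(d + 1)]  # a toy g
K, h = honest_prover(a)
```

The verifier accepts the honest prover with certainty, while a prover announcing a wrong value of the sum must send an h that fails one of the two checks with probability at least 1 − d/q.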

This protocol reduces the verifier’s effort from order q² to order q, since the verifier only needs to perform summations over a single variable at a time, instead of both variables for the naïve computation. The attentive reader will have realized that the protocol is in fact noninteractive—there is a single message from the prover to the verifier. Applying the same idea to polynomials with more variables yields an interactive protocol, in which variables are randomly fixed by the verifier one at a time, with exponential savings in the amount of time required for verification, from order q^m to order roughly mq, with m representing the number of variables. This idea, of efficiently verifying claims about the sum of the values taken by a multivariate polynomial, is central to the proof of a celebrated result in complexity theory, namely the inclusion of PSPACE (the class of languages that can be decided using polynomial space but arbitrary time) in IP References 23 and 30. While the state of the art in complexity theory does not allow one to prove that PSPACE is a strictly larger class than NP, this is generally believed to be the case, so that interaction seems to significantly broaden the class of languages that have efficient verifiers.

In fact, the possibility of interaction sufficiently strengthens the verifier so as to allow it to verify a class of languages that includes the class BQP of languages that can be efficiently decided on a quantum computer! Using the idea of Feynman path integrals, the probability that a quantum circuit returns a given outcome can be expressed as the sum of exponentially many complex numbers, each of which can be directly computed by taking a product of entries of the matrices that specify the quantum gates of the circuit; this exponential sum can be exactly computed (modulo finite-precision issues) in polynomial space. Given that PSPACE is in IP, it follows that there exists an interactive protocol, of the form described above, that allows a classical polynomial-time verifier to verify the outcome of a quantum computation by asking questions to an untrusted prover. But there is a hitch. The model of interactive proofs does not place limitations on the computational effort required of the prover. In the protocol for computing sums of polynomials described earlier, the prover has to compute multiple exponential-sized intermediate sums, that in general may take exponential time to compute. Unfortunately, following the chain of inclusions, from BQP into PSPACE and from PSPACE into IP, leads to a very similar protocol, in which the prover has to compute exponentially large sums that do not seem to be amenable to efficient computation even by a quantum procedure. There has been very little progress in tweaking the protocols obtained in this way to decrease the computational effort required for the honest prover (see, e.g., Reference 3 for a discussion).

1.2. Interactive arguments

If we are to make the requirement that the actions of the honest prover (i.e., the prover in charge of convincing the verifier in the case of an input in L_yes) can be implemented efficiently (on a quantum computer), it is reasonable also to ask, not that there does not exist a prover P that would improperly convince the verifier to accept an input in L_no, but merely that no such P exists that can be implemented in (quantum) polynomial time. Interactive proof systems such that the soundness condition holds under such a computational assumption are called arguments, a notion introduced in Reference 11.

Since it is not known that the classes P and NP, or even P and IP, are distinct, in order for the assumption to be effective, we further need to posit that a particular problem cannot be solved in (quantum) polynomial time. For example, we could make the assumption that the problem of factoring an integer cannot be solved in probabilistic polynomial time. Another example is that a certain explicit family of functions f_n : {0,1}^n → {0,1}^n, where n plays the role of the input size parameter, cannot be inverted with nonnegligible (in n) success probability in (quantum) polynomial time.⁠Footnote6 We refer to any such assumption as a computational assumption. The following definition parallels Definition 1.1.

6

A negligible function is a function μ of n such that μ(n) ≤ 1/p(n) for all sufficiently large n, for any polynomial p. A nonnegligible function grows faster than any negligible function.

Definition 1.2.

A language L = (L_yes, L_no) has an interactive argument under computational assumption (A) if there exists a randomized polynomial-time verifier V such that the following hold.

Completeness. For any x ∈ L_yes, there exists a prover P such that V interacting with P accepts on input x with probability at least 2/3.⁠Footnote7

7

The definition of the completeness property does not explicitly require P to be efficient. In practice, the properties of the honest prover are discussed on a case-by-case basis, depending on the argument system considered.

Soundness. For any x ∈ L_no and any (quantum) polynomial-time prover P, V interacting with P accepts on input x with probability at most 1/3.

The relaxed soundness condition of an interactive argument has proved fruitful for designing more efficient proof systems than are known otherwise. For example, there are arguments for languages in NP such that the total communication is only polylogarithmic in the size of the input Reference 22; no such protocol is known in the interactive proof model. The construction of such arguments rests upon two notions: probabilistically checkable proofs and commitments. Intuitively, the computational assumption is used by the verifier to delegate parts of its own verification procedure to the prover. The notion of commitment is used to tie the prover to performing the right actions. We discuss commitments in Section 4, as they play a central role in Mahadev’s interactive argument for quantum computation. To formulate her result precisely, it only remains to introduce the formalism of quantum computation and the associated complexity class BQP. This is done in the next section.

2. Quantum computation

In this section we give a light introduction to the formalism of quantum computing. Our goal in doing so is to provide the minimal background required to formally state Mahadev’s theorem, which is given at the end of the section, as well as to describe the main ideas of her proof, introduced in the following sections. (The reader already familiar with quantum computing may directly skip ahead to the statement of Theorem 2.3 at the end of the section.)

2.1. Quantum states and observables

2.1.1. States

An n-qubit quantum state is specified by a density matrix, a positive semidefinite matrix ρ on the 2^n-dimensional Hilbert space (ℂ²)^⊗n such that ρ has trace 1. Density matrices generalize classical probability distributions over n-bit strings, as the latter can be represented by a probability vector that we can embed on the diagonal of a density matrix.

Even though in general the formalism of quantum mechanics extends to arbitrary separable Hilbert spaces, for convenience in these notes Hilbert spaces always come endowed with a canonical decomposition as a tensor product of n copies of ℂ², for some finite integer n, such that each copy of ℂ² has a canonical basis {e_0, e_1}. We use the notation “ket” to write the canonical basis as |0⟩ = e_0, |1⟩ = e_1, and we refer to this basis as the computational basis. A quantum state is called pure if its density matrix has rank 1; in this case we can also represent the state as a unit vector |ψ⟩ ∈ (ℂ²)^⊗n expressed in the canonical basis {|x⟩ : x ∈ {0,1}^n}, or in the more compact ket notation. An arbitrary pure state thus has an expansion |ψ⟩ = Σ_{x∈{0,1}^n} α_x |x⟩, where the α_x are complex coefficients such that Σ_x |α_x|² = 1. The associated density matrix is the rank-1 projection ρ = |ψ⟩⟨ψ|, where the “bra” notation ⟨ψ| is used for the conjugate-transpose. For a density matrix that lies in the tensor product of multiple spaces we write, e.g., ρ_AB, using subscripts A, B to identify the different subsystems, often referred to as registers.
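The definitions above are easy to experiment with numerically; here is a short NumPy sketch (the particular state is my own example):

```python
import numpy as np

# a single-qubit pure state |psi> = (|0> + i|1>)/sqrt(2)
psi = np.array([1, 1j]) / np.sqrt(2)

# the associated density matrix is the rank-1 projection |psi><psi|
rho = np.outer(psi, psi.conj())

assert np.isclose(np.trace(rho).real, 1.0)    # unit trace
assert np.allclose(rho, rho.conj().T)         # Hermitian (and PSD)
assert np.linalg.matrix_rank(rho) == 1        # purity = rank 1

# a classical fair coin flip embeds as the diagonal density matrix diag(1/2, 1/2)
coin = np.diag([0.5, 0.5])
```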

2.1.2. Observables

A measurement of a set of qubits is specified by an orthonormal basis of the Hilbert space associated with the qubits. The outcome of the measurement is the label of one of the basis vectors, and the probability with which each basis vector is obtained equals the squared norm of the component of the state that is in the direction of that basis vector. Formally, suppose |ψ⟩ is a state in (ℂ²)^⊗n, and that the first k qubits of |ψ⟩ are measured in the orthonormal basis {|u_i⟩} of (ℂ²)^⊗k. To compute the probability of the i-th outcome being obtained, we expand |ψ⟩ in the basis as

|ψ⟩ = Σ_i |u_i⟩ ⊗ |ψ_i⟩,

where the |ψ_i⟩ are arbitrary vectors in (ℂ²)^⊗(n−k) (not necessarily normalized or orthogonal). The probability of the i-th outcome is given by ‖ψ_i‖². It will later be important to remember that a measurement collapses the state: once the outcome i has been obtained and recorded, the state undergoes a nonunitary evolution |ψ⟩ ↦ |u_i⟩ ⊗ |ψ_i⟩/‖ψ_i‖.⁠Footnote8

8

If the conflict between the statements that “quantum mechanics requires all evolutions to be unitary” and “a measurement is an irreversible process” puts you ill at ease, you are not alone.

A measurement in the basis {|u_i⟩}, together with a choice of a real number λ_i associated with each outcome i, can be succinctly represented as an observable O = Σ_i λ_i |u_i⟩⟨u_i|. For a quantum state |ψ⟩, the real number ⟨ψ|O|ψ⟩ is precisely the expectation of λ_i, under the distribution on i obtained by measuring the state in the basis {|u_i⟩}. An example is the observable associated with a measurement of a qubit in the computational basis {|0⟩, |1⟩}, labeling the first outcome “+1” and the second “−1”. The associated observable is the Pauli Z matrix,

Z = |0⟩⟨0| − |1⟩⟨1| = [[1, 0], [0, −1]].

Similarly, a measurement in the Hadamard basis {|+⟩, |−⟩}, where |±⟩ = (|0⟩ ± |1⟩)/√2, is represented by the Pauli X observable,

X = |+⟩⟨+| − |−⟩⟨−| = [[0, 1], [1, 0]].
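A NumPy sketch of the Born rule and of observable expectations (taking the state |+⟩ as my own example):

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])            # computational-basis observable
X = np.array([[0, 1], [1, 0]])             # Hadamard-basis observable

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> = (|0> + |1>)/sqrt(2)

# Born rule: outcome probabilities are squared overlaps with the basis vectors
p0 = abs(np.vdot([1, 0], plus)) ** 2       # measuring |+> in the computational basis
p1 = abs(np.vdot([0, 1], plus)) ** 2

# <psi|O|psi> is the expected outcome label
exp_Z = float(plus @ Z @ plus)             # 0: outcomes +1 and -1 equally likely
exp_X = float(plus @ X @ plus)             # 1: |+> is the +1 eigenstate of X
```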

2.2. Circuits

Evolution in quantum mechanics is unitary. Given a unitary U on (ℂ²)^⊗n, a pure state |ψ⟩ evolves as |ψ⟩ ↦ U|ψ⟩, and a density matrix ρ evolves as ρ ↦ UρU†.

Not every unitary requires the same amount of resources to be implemented. Multiple models have been introduced to evaluate the complexity of implementing a unitary transformation. These include quantum Turing machines, the quantum circuit model, measurement-based computation, adiabatic computation, topological computation, and others. All these models have been shown equivalent up to a polynomial rescaling of the resources required. For our purposes, the simplest and most convenient model is the circuit model that we describe next.

In the circuit model, in order to implement an n-qubit unitary U, the unitary must first be decomposed as a product U = U_T ⋯ U_1, where each U_i is a unitary that acts nontrivially on at most two qubits; i.e., it can be written as a tensor product of a unitary on ℂ² ⊗ ℂ² with the identity on the remaining space. Moreover, each U_i should be taken from a finite gate set G of allowed operations on the computer. A widely used gate set is G = {H, T, CNOT}, where H = (1/√2)[[1, 1], [1, −1]] is the Hadamard gate, T = [[1, 0], [0, e^{iπ/4}]] is the T (sometimes also called π/8) gate, and CNOT is the two-qubit unitary that sends |a⟩|b⟩ to |a⟩|a ⊕ b⟩ for any a, b ∈ {0,1}.
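The gate set and the composition of gates into circuits can be made concrete in NumPy; the two-gate circuit below, preparing a Bell state from |00⟩, is my own toy example:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
T = np.diag([1, np.exp(1j * np.pi / 4)])          # T gate
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])     # |a>|b> -> |a>|a XOR b>

# each gate acts on at most two qubits; on more wires, tensor with the identity
U = CNOT @ np.kron(H, np.eye(2))                  # a two-gate circuit on 2 qubits

zero_zero = np.array([1, 0, 0, 0])                # |00>
bell = U @ zero_zero                              # (|00> + |11>)/sqrt(2)
```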

A fundamental theorem in quantum computing, the Solovay–Kitaev theorem, states that for the purpose of efficient circuit representation any finite set of 1- and 2-qubit gates is as good as any other, as long as it generates a dense subgroup in U(4) (which is the case for the above-defined set). More formally,

Theorem 2.1 (Solovay and Kitaev, 1997).

There is a constant c such that the following holds. Let G be any finite gate set that is closed under taking inverses and such that the group generated by G is dense in U(4). Then for any ε > 0 there is an ℓ = O(log^c(1/ε)) such that the set of products of at most ℓ elements of G is an ε-net in U(4).⁠Footnote9

9

An ε-net in U(4) is a set of points S such that for every U ∈ U(4) there is a V ∈ S such that ‖U − V‖ ≤ ε. Here the norm is the operator norm, but any norm would give essentially the same result since the space has small dimension.

We can now define the class BQP of problems that we are concerned with.

Definition 2.2.

A promise language L = (L_yes, L_no) is in BQP if there exists a classical deterministic polynomial-time Turing machine M such that, on input x, the Turing machine returns the description of a quantum circuit C_x over the gate set {H, T, CNOT}, together with a specially designated output qubit for the circuit, such that the following hold.

If x ∈ L_yes, then the probability that a measurement of the output qubit of C_x, when all input qubits are initialized to the |0⟩ state, returns the outcome 1 is at least 2/3;

If x ∈ L_no, then the same probability is at most 1/3.

Due to the fact that quantum gates are by necessity reversible, it may not be immediately obvious that quantum computations generalize classical computations. In fact, given a function f : {0, 1}^n → {0, 1}^m specified by a classical circuit, it is always possible to devise a quantum circuit of comparable size for the unitary U_f that maps |x⟩|y⟩ to |x⟩|y ⊕ f(x)⟩ for x ∈ {0, 1}^n, y ∈ {0, 1}^m. This can be achieved by showing that any classical circuit over, say, NAND gates can be efficiently simulated by a reversible circuit; we omit the details. We often consider the application of the unitary U_f in superposition: by linearity,

U_f (Σ_x α_x |x⟩) ⊗ |0^m⟩ = Σ_x α_x |x⟩ ⊗ |f(x)⟩.
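The reversible embedding can be traced on a toy function (the choice of f and the bit layout below are our own, for illustration only): U_f is just a permutation matrix, and applying it to a superposition over inputs produces the superposition over input–output pairs.

```python
import numpy as np

def U_f(f, n, m):
    """Permutation matrix mapping |x>|y> -> |x>|y XOR f(x)>,
    for f: {0,1}^n -> {0,1}^m given as a function on integers."""
    N = 2 ** (n + m)
    U = np.zeros((N, N))
    for x in range(2 ** n):
        for y in range(2 ** m):
            U[(x << m) | (y ^ f(x)), (x << m) | y] = 1.0
    return U

# Toy example: f(x) = AND of the two bits of x (n = 2, m = 1).
f = lambda x: (x >> 1) & x & 1
U = U_f(f, 2, 1)
assert np.allclose(U @ U.T, np.eye(8))   # a permutation, hence unitary

# Applying U_f to a uniform superposition over x (with y = 0) yields
# sum_x |x>|f(x)> up to normalization, by linearity.
psi = np.zeros(8)
for x in range(4):
    psi[(x << 1) | 0] = 0.5              # (1/2) sum_x |x>|0>
out = U @ psi
for x in range(4):
    assert np.isclose(out[(x << 1) | f(x)], 0.5)
```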

We close this section by stating the main result that these notes aim to present. The computational assumption that underlies soundness of the interactive argument, the learning with errors assumption, is a standard assumption in post-quantum cryptography (classical cryptography designed to be secure against quantum adversaries) that we review in Section 7.

Theorem 2.3 (Mahadev, 2018).

Any language in BQP has an interactive argument that is computationally sound against quantum polynomial-time provers under the LWE assumption. Moreover, the verifier runs in probabilistic polynomial time, the honest prover can be executed in quantum polynomial-time, and the interaction between verifier and prover consists of two rounds (four messages) only.

It is an important open question whether the same theorem can be stated in the formalism of interactive proofs, instead of arguments (i.e., without making a computational assumption on the cheating prover), while still keeping the condition that the honest prover can be executed in quantum polynomial time. Intuitively, in Mahadev’s proof system the post-quantum cryptographic assumption is used to restore symmetry between the quantum prover and the less powerful classical verifier. A specific cryptographic construction, a quantum commitment scheme, is used to “tie the prover’s hands” in a way such that it has no choice but to implement the required computation—unless it has the power to break the cryptographic assumption. Nevertheless, while it is clear how the assumption is leveraged in the design of the protocol, it is not known whether one could do without it.

3. Certificates for computation

The first step in the proof of Theorem 2.3 is to identify a quantum certificate for the validity of a given quantum computation (i.e., a certificate that the probability that a quantum circuit C provided as input returns the outcome 1, when all input qubits are initialized to the |0⟩ state, is at least 2/3; cf. Definition 2.2). This is achieved by the main theorem of this section, Theorem 3.1, which is due to Kitaev and predates Mahadev’s work. In the following sections we will show how the existence of the kind of certificate provided by Theorem 3.1 can be verified using an interactive argument with a classical verifier.

We first consider the case of a classical computation. Given the description of a classical circuit as input, what is a good certificate for the claim that the circuit returns the value 1 when all input bits are initialized to 0? While such a certificate is not really needed, as the verifier can simply execute the circuit by itself, this solution does not generalize well to the case of a quantum circuit and a classical verifier. In the following section we leverage the theory of NP-completeness to identify a particular kind of certificate for the validity of a classical circuit that has the advantage that it will generalize to the case of quantum circuits.

3.1. Warmup: the Ising spin problem

We consider a problem from classical statistical physics and show how to reduce the verification of a classical computation to it. Consider a graph G = (V, E) with n vertices, such that each vertex i ∈ V is associated a value (or state) σ_i ∈ {±1}, and each edge {i, j} ∈ E is associated a real weight (or coupling constant) J_ij such that |J_ij| ≤ 1. (See Figure 3.) Vertices represent particles that can be in one of two states, +1 or −1, and edges represent interactions between particles, where the interaction can be attractive or repulsive depending on the sign of J_ij. The energy of a configuration σ ∈ {±1}^n is defined as⁠Footnote10

H(σ) = Σ_{{i,j} ∈ E} J_ij σ_i σ_j.  (3.1)

10

Technically the expression in Equation 3.1 can take negative values, which may not seem appropriate for an “energy”. Correcting this is a matter of introducing an additive shift.

Informally, the energy functional (a.k.a. Hamiltonian) H measures the number of interaction constraints that are not satisfied by an assignment σ, with each violation giving a penalty of 1 (up to an additive shift and rescaling). It is well known that the problem of deciding, given as input the coefficients (J_ij) and two thresholds a < b such that b − a is at least a constant independent of n,⁠Footnote11 whether the minimum of H(σ) over all σ ∈ {±1}^n is less than a, or larger than b, is an NP-complete problem (i.e., any problem in NP reduces to it); moreover, this holds even for the case where J_ij ∈ {−1, 0, 1} for all i, j.

11

It is typical to normalize H by dividing it by Σ_{i,j} |J_ij|, in which case the promise on the gap b − a becomes that it is at least an inverse polynomial in n.
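With the convention adopted above (a sketch; sign conventions for H and for "attractive" vary across the literature), the energy functional is a few lines of code, and brute-force minimization over all 2^n configurations already illustrates frustration, the combinatorial obstruction at the heart of the problem's hardness:

```python
import itertools
import numpy as np

def ising_energy(J, sigma):
    """Energy H(sigma) = sum_{i<j} J[i,j] * sigma_i * sigma_j."""
    n = len(sigma)
    return sum(J[i][j] * sigma[i] * sigma[j]
               for i in range(n) for j in range(i + 1, n))

def min_energy(J, n):
    """Brute-force minimum over all 2^n configurations (exponential!)."""
    return min(ising_energy(J, s)
               for s in itertools.product([1, -1], repeat=n))

# A triangle with all couplings +1 is "frustrated": the three pairwise
# anti-alignment constraints cannot all be satisfied at once.
J = np.zeros((3, 3))
J[0][1] = J[0][2] = J[1][2] = 1.0
assert min_energy(J, 3) == -1.0   # best achievable: two of three satisfied
```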

To understand why the Ising problem is as hard as any problem in NP, let us see how we can reduce to it the problem of deciding whether, given a classical circuit C acting on n + m bits and an input string x ∈ {0, 1}^n, there exists a string w ∈ {0, 1}^m such that the circuit accepts (x, w). By definition any problem in NP can be expressed in this form, with x as the input, w as the witness, and C as the verifier’s circuit. In general C is specified by a sequence of gates taken from some fixed gate set, which we may without loss of generality restrict to the sole NAND gate.⁠Footnote12 Next consider the tableau of the computation performed by the circuit C. (See also Figure 4.) This is simply a list of values (z_1, …, z_N) associated with each wire in the circuit: the input wires (initialized to the bits of x and w), the output wire of any gate in the circuit, and the final output wire (that should equal 1, which stands for “accept”). Given a tableau, it is possible to verify that the tableau is correct by checking the propagation of each gate, one at a time: if the inputs to a NAND gate are (z_i, z_j), the output should be z_k = ¬(z_i ∧ z_j). Wires corresponding to the input string x should be initialized to the corresponding input bit, whereas wires associated with the witness string w can take arbitrary values. We can express this set of constraints as a Hamiltonian H defined on {0, 1}^N, where N is the total number of wires in the circuit:

H = H_in + H_prop + H_out,

12

The NAND gate maps (a, b) to (a, ¬(a ∧ b)). (The first output is included for convenience, to keep the number of wires constant, but is not necessary.) It is a universal gate, meaning that any circuit can be simulated by one made of NAND gates only with a polynomial overhead in the total number of gates.

where H_in is a summation of energy penalties⁠Footnote13 1_{z_i ≠ x_j}, with z_i the input wire associated with the jth bit of x, and 1_{z_i ≠ 0}, with z_i the input wire associated with the jth ancilla bit; H_prop is a summation of penalties of the form 1_{z_k ≠ ¬(z_i ∧ z_j)} for each NAND gate mapping (z_i, z_j) to (z_i, z_k); and H_out consists of a single penalty term 1_{z_N ≠ 1} for the output wire. Then, H is such that there exists a z such that H(z) = 0 if and only if there is a witness w such that C accepts (x, w); otherwise, H(z) ≥ 1 for all z.

13

We use the notation 1_E for the indicator that event E occurs.
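The three penalty terms can be spelled out for a one-gate toy circuit (our own example: one input wire, one witness wire, one NAND gate). Brute force over the 2³ wire assignments confirms that zero energy is attained exactly on valid accepting tableaus:

```python
import itertools

def nand(a, b):
    return 1 - (a & b)

# Toy circuit: one NAND gate on wires (z1, z2), output wire z3.
# The input bit x = 1 is wired to z1; z2 is a free witness wire.
def H(z, x=1):
    z1, z2, z3 = z
    H_in   = int(z1 != x)             # input wire must carry the input bit
    H_prop = int(z3 != nand(z1, z2))  # gate propagation constraint
    H_out  = int(z3 != 1)             # output wire must be "accept"
    return H_in + H_prop + H_out

energies = {z: H(z) for z in itertools.product([0, 1], repeat=3)}
# A zero-energy tableau exists, and it sets the witness wire z2 = 0,
# since NAND(1, 0) = 1 = accept.
assert min(energies.values()) == 0
assert energies[(1, 0, 1)] == 0
```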

Note that H does not quite have the form Equation 3.1 of an Ising spin Hamiltonian yet: some of its terms involve three variables at a time, and moreover not all terms are directly expressed as a function of the parity of the sum of two variables. With a little more work using so-called gadgets, it is possible to complete the reduction and find an H′ that is equivalent to H (in terms of capturing the same decision problem) but is of the form Equation 3.1 Reference 6.

Using the trivial observation that P ⊆ NP (any language that can be efficiently decided can be efficiently verified by ignoring the witness and performing the efficient decision procedure), it follows from the above discussion that the problem of deciding whether a classical circuit returns 1 on the all-0 input, which is complete for P, can be efficiently reduced to the problem of deciding whether an Ising Hamiltonian has a configuration with energy at most some threshold value a (that can be efficiently determined by following the steps of the reduction), or if all configurations have energy at least b, for some b such that b − a is at least a constant. (Explicitly, the Hamiltonian is obtained by adding penalty terms 1_{z_i ≠ 0} for all input wires associated to w to the Hamiltonian obtained from the reduction described in the preceding paragraphs.)

Our next step is to devise a quantum analogue of this reduction.

3.2. Quantum spin problems

We can interpret the Hamiltonian H introduced in Equation 3.1 as an energy functional that associates an energy H(σ) to any configuration σ ∈ {±1}^n. In quantum mechanics, a Hamiltonian is any linear operator H on Hilbert space, with the restriction that the operator should be Hermitian (and bounded; for convenience here we only consider finite-dimensional spaces, so that the latter condition is automatic). The interpretation is that the Hamiltonian associates a definite energy λ to any quantum state that happens to be in one of the Hamiltonian’s eigenstates |ψ_λ⟩. The energy of an arbitrary state |ψ⟩ is then computed as ⟨ψ|H|ψ⟩.⁠Footnote14 Often it is also required that the Hamiltonian be local, meaning that H can be expressed as a sum of a polynomial number of terms H_j, each of which acts nontrivially on at most k qubits (i.e., each term can be expressed as the tensor product of the identity on n − k qubits and an arbitrary Hamiltonian on the remaining k qubits) for some constant k. This constraint reflects the fact that each term in the Hamiltonian is meant to represent a physical interaction between a small number of (typically spatially close) elementary particles.

14

The reader may have noticed that the syntactic requirements for Hamiltonians and observables are identical. Physically, a Hamiltonian is meant to represent a specific observable, that corresponds to the energy of a system; mathematically, the two notions are interchangeable.

Using the notation introduced in the previous section, the Ising spin Hamiltonian can be recast as a quantum Hamiltonian, H = Σ_{{i,j} ∈ E} J_ij Z_i Z_j, where Z_i is shorthand for the observable that is the Pauli Z on the ith qubit and the identity on the others, for i ∈ {1, …, n}. Since this Hamiltonian is diagonal in the computational basis, its eigenstate with smallest eigenvalue, also called its ground state or minimal energy state, is always attained at a pure computational basis state |σ⟩ = |σ_1⟩ ⊗ ⋯ ⊗ |σ_n⟩, for some σ ∈ {±1}^n.
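The recasting is easy to check numerically. Building Σ Z_i Z_j for the frustrated triangle (the same toy example as before, with all couplings equal to 1) confirms that the quantum Hamiltonian is diagonal, with diagonal entries equal to the classical energies:

```python
import numpy as np
from functools import reduce

Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def Z_on(i, n):
    """Pauli Z on qubit i, identity elsewhere."""
    return reduce(np.kron, [Z if j == i else I2 for j in range(n)])

# Quantum form of the frustrated triangle: H = Z1 Z2 + Z1 Z3 + Z2 Z3.
n = 3
Hq = sum(Z_on(i, n) @ Z_on(j, n)
         for i in range(n) for j in range(i + 1, n))

# H is diagonal in the computational basis ...
assert np.allclose(Hq, np.diag(np.diag(Hq)))
# ... so its ground state can be taken to be a basis state, and the
# smallest eigenvalue matches the classical minimum energy (-1).
assert np.isclose(np.diag(Hq).min(), -1.0)
```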

Things get more interesting when we consider Hamiltonians made of a combination of noncommuting observables. Consider for example the 2-qubit Hamiltonian

H = ½ (Id − X ⊗ X) + ½ (Id − Z ⊗ Z).  (3.3)

As a matrix this can be written as

H = [[½, 0, 0, −½]; [0, 3/2, −½, 0]; [0, −½, 3/2, 0]; [−½, 0, 0, ½]].

What is remarkable about this Hamiltonian is that its smallest-eigenvalue eigenstate is the state

|φ⁺⟩ = (1/√2)(|00⟩ + |11⟩),

also known as a Bell state or EPR pair. This state has the property of being entangled: it cannot be expressed in the form |ψ⟩ ⊗ |φ⟩, for any two single-qubit states |ψ⟩ and |φ⟩.
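A sketch, assuming the normalization H = ½(Id − X⊗X) + ½(Id − Z⊗Z) for the two-qubit Hamiltonian above (the constants are a choice of convention; any positive combination of the two penalty terms behaves the same way). Diagonalizing H confirms that the EPR pair is the unique ground state, and that its reduced state on one qubit is maximally mixed, which no product state can achieve:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I4 = np.eye(4)

# H = 1/2 (I - X(x)X) + 1/2 (I - Z(x)Z): penalizes disagreement with
# both the X(x)X and Z(x)Z stabilizers of the EPR pair.
H = 0.5 * (I4 - np.kron(X, X)) + 0.5 * (I4 - np.kron(Z, Z))

evals, evecs = np.linalg.eigh(H)          # eigenvalues in ascending order
ground = evecs[:, 0]
epr = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

assert np.isclose(evals[0], 0.0)           # ground energy 0
assert np.isclose(abs(ground @ epr), 1.0)  # ground state = EPR pair
# Entanglement check: the reduced state of one qubit is maximally mixed,
# impossible for any product state |psi> (x) |phi>.
rho = np.outer(epr, epr).reshape(2, 2, 2, 2)
rho_A = np.trace(rho, axis1=1, axis2=3)
assert np.allclose(rho_A, np.eye(2) / 2)
```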

The possibility for quantum Hamiltonians to force entanglement in their ground state distinguishes them from classical Hamiltonians, whose eigenstates are computational basis states and, in particular, can be expressed as a tensor product of single-qubit states. As a result, while a classical Hamiltonian always has a minimal energy configuration that can be described using n bits (hence, as already observed, the problem of deciding its minimum energy is in NP, with the witness being a classical n-bit description of the minimal energy configuration), for quantum Hamiltonians this need not be the case. The complexity class QMA (for quantum Merlin–Arthur) is the quantum analogue of NP: QMA is the collection of all (promise) languages such that it is possible for a quantum polynomial-time verifier to correctly decide whether an input x ∈ L_yes or x ∈ L_no, with error at most 1/3, with the help of a quantum proof provided by an all-powerful, but untrusted, quantum prover. The problem of deciding the minimal energy of a local Hamiltonian to within some inverse polynomial precision is an example of a problem that is in QMA: informally, given a claimed minimum-eigenvalue eigenstate presented as a quantum state, it is possible to estimate the associated eigenvalue by making the appropriate energy measurement. Kitaev established a quantum analogue of the NP-completeness of 3-SAT by showing that the local Hamiltonian problem is QMA-complete, i.e., the constraints expressed by any polynomial-time quantum verification procedure can be reduced to constraints of the form checked by a local Hamiltonian. We give an overview of Kitaev’s reduction next.

3.3. Certificates for quantum computations

In Section 3.1 we have seen that the computation carried out by a classical circuit can be represented as a tableau, such that the property of being a valid tableau can be encoded in a classical Hamiltonian, thereby reducing the task of deciding whether a classical circuit accepts its input to the task of deciding whether the associated Hamiltonian (that can be efficiently computed from the circuit) has a small enough eigenvalue.

What is the correct notion of a tableau for quantum circuits? The first idea is to consider the juxtaposition of the quantum state of a T-gate circuit at each step of the computation, i.e., the tensor product of the states |ψ_t⟩ obtained by executing the circuit from scratch and stopping after t gates have been applied, for t = 0, …, T. While this is a well-defined n(T + 1)-qubit quantum state (see Figure 5) the property of being a valid quantum tableau cannot be enforced using a local Hamiltonian! The reason is subtle, and has to do with the possible presence of entanglement at intermediate steps of the computation. Indeed, there are quantum states that are very different, in the sense that they are perfectly distinguishable by some global observable, yet cannot be distinguished at all by any local observable, that would act on at most, say, half the qubits. An example is given by the two n-qubit “cat” (named after the homonymous animal) states

|cat_±⟩ = (1/√2)(|0⟩^{⊗n} ± |1⟩^{⊗n}).

The two states |cat_+⟩ and |cat_−⟩ are easily seen to be orthogonal, so that they can be perfectly distinguished by a measurement. But it is an exercise to verify that for any observable that acts on at most n − 1 of the qubits, both states give exactly the same expectation value. (Informally, this is because any measurement on a strict subset of the qubits of the state necessarily destroys the coherence; the only relevant information, the sign ±, is encoded globally and cannot be accessed locally.) Note that this is a uniquely quantum phenomenon: if two classical strings of bits have each of their bits equal, one pair at a time, then the strings are globally identical. Not so for quantum states.
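Both claims are quick to verify numerically for, say, n = 4: the two cat states are orthogonal, yet their reduced density matrices on the first n − 1 qubits (and, by symmetry, on any strict subset) are identical, so no local observable can tell them apart.

```python
import numpy as np

n = 4
# |cat_+-> = (|0...0> +- |1...1>)/sqrt(2), as vectors of length 2^n.
cat = {}
for sign in (+1, -1):
    v = np.zeros(2 ** n)
    v[0], v[-1] = 1.0, sign * 1.0
    cat[sign] = v / np.sqrt(2)

def reduced_first_k(psi, k, n):
    """Density matrix of the first k qubits of |psi> (trace out the rest)."""
    M = psi.reshape(2 ** k, 2 ** (n - k))
    return M @ M.conj().T

# Globally: orthogonal, hence perfectly distinguishable.
assert np.isclose(cat[+1] @ cat[-1], 0.0)
# Locally: on the first n-1 qubits the two states look exactly the same.
assert np.allclose(reduced_first_k(cat[+1], n - 1, n),
                   reduced_first_k(cat[-1], n - 1, n))
```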

So naïve tableaus will not do. In the late 1990s the physicist Alexei Kitaev introduced a very powerful idea that provides a solution. Kitaev’s idea is to replace the juxtaposition of snapshot states by their superposition (see Figure 5). A special ancilla system, called the clock, is introduced to index different elements of the superposition. Thus, instead of defining a tableau as |ψ_0⟩ ⊗ |ψ_1⟩ ⊗ ⋯ ⊗ |ψ_T⟩, Kitaev considers the state

|Ψ⟩ = (1/√(T + 1)) Σ_{t=0}^{T} |t⟩ ⊗ |ψ_t⟩.  (3.4)

Kitaev showed that, assuming the clock register is encoded in unary, it is possible to check the correct propagation of every step of the circuit directly on this superposition by only applying local observables, in a manner very similar to what we did for classical tableaus: there is a set of observables that checks that the initial state |ψ_0⟩ has the right format; a set of observables that checks propagation of the circuit, and an observable that checks that the output qubit of the circuit is in the right state. (In addition, there is a term that checks that the clock register is well-formed, i.e., contains the representation of an integer in unary. This can be done locally by penalizing configurations of the form “⋯01⋯”.) The key point that makes this possible is that, while equality of quantum states cannot be decided locally when the states are juxtaposed, it becomes possible when they are given in superposition. As an exercise, we can verify that a measurement of the first qubit of the state

(1/√2)(|0⟩|ψ⟩ + |1⟩|φ⟩)

in the Hadamard basis returns the first outcome with probability exactly ½(1 + Re⟨ψ|φ⟩), which equals 1 if and only if |ψ⟩ = |φ⟩. With more work, replacing the use of gadgets for the classical case by techniques from perturbation theory, it is possible to write the resulting Hamiltonian as a linear combination of local terms that all take the form of the EPR Hamiltonian Equation 3.3. (Such a Hamiltonian is called a Hamiltonian in XX/ZZ form, for obvious reasons.) The result is the following theorem from Reference 14.
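The exercise can be checked numerically: for a two-branch state (|0⟩|ψ⟩ + |1⟩|φ⟩)/√2, the |+⟩ outcome on the control qubit appears with probability (1 + Re⟨ψ|φ⟩)/2, hence with certainty exactly when the two snapshots agree (a small sketch):

```python
import numpy as np

def plus_outcome_prob(psi, phi):
    """Probability that a Hadamard-basis measurement of the control qubit
    of (|0>|psi> + |1>|phi>)/sqrt(2) returns the |+> outcome."""
    state = np.concatenate([psi, phi]) / np.sqrt(2)
    d = len(psi)
    # Component of the state along |+> on the control qubit.
    proj = (state[:d] + state[d:]) / np.sqrt(2)
    return float(np.vdot(proj, proj).real)

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi /= np.linalg.norm(phi)

# Equal branches: outcome |+> with certainty, so equality of the two
# snapshots can be tested locally, on the clock qubit alone.
assert np.isclose(plus_outcome_prob(psi, psi), 1.0)
# In general the probability is (1 + Re<psi|phi>)/2.
assert np.isclose(plus_outcome_prob(psi, phi),
                  (1 + np.vdot(psi, phi).real) / 2)
```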

Theorem 3.1.

For any integer n ≥ 1 there are thresholds a < b with b − a ≥ 1/poly(n) such that the following holds. Given a T-gate quantum circuit C acting on n qubits, such that T = poly(n), and an input x for the circuit, there exist efficiently computable real weights (w_ij) such that |w_ij| ≤ 1 for all i, j and moreover if

H = Σ_{i,j} w_ij H_ij,  (3.5)

where each H_ij is a term of the form of the EPR Hamiltonian Equation 3.3 acting on qubits i and j, then:

Completeness. If the circuit C accepts its input x with probability at least 2/3, then the smallest eigenvalue of H is at most a;

Soundness. If the circuit C accepts its input x with probability at most 1/3, then the smallest eigenvalue of H is at least b.

Remark 3.2.

It is possible to modify Theorem 3.1 so that the completeness and soundness statements specify that “if there exists a state |ψ⟩ such that C accepts on input (x, |ψ⟩) with probability at least 2/3” and “if there does not exist a state |ψ⟩ such that C accepts on input (x, |ψ⟩) with probability greater than 1/3”, respectively. Thus, Theorem 3.1 can be adapted to show that the problem of estimating the minimal energy of a Hamiltonian of the form Equation 3.5 is a QMA-complete problem.

Theorem 3.1 provides us with a roadmap for the verification of quantum circuits: it is sufficient to verify the existence of a quantum state that yields certain statistics, when some of its qubits are measured in the computational (Z observable) or Hadamard (X observable) basis. The reason this can be considered progress is that we no longer need to check the time evolution of a quantum state under a quantum circuit; it is sufficient to collect measurement statistics and estimate the energy. In particular, the theorem readily leads to a verification protocol in a model where the prover has a full quantum computer, and the verifier only has a limited quantum device—namely, a one-qubit memory, together with the ability to measure the qubit using either the X or Z observables.
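The statistics-collection step can be simulated end to end for a toy XX/ZZ Hamiltonian (a two-qubit example of our own; the weights and shot count are illustrative): the verifier asks for one single-observable measurement per round and averages, obtaining an unbiased estimate of the energy.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# Toy two-qubit Hamiltonian: H = -1/2 X(x)X - 1/2 Z(x)Z.
terms = [(-0.5, np.kron(X, X)), (-0.5, np.kron(Z, Z))]
epr = np.array([1, 0, 0, 1]) / np.sqrt(2)   # the prover's claimed ground state

def sample_term(psi, S, shots, rng):
    """Simulate repeated measurement of observable S (eigenvalues +-1)
    on fresh copies of |psi>; return the empirical mean outcome."""
    evals, evecs = np.linalg.eigh(S)
    probs = np.abs(evecs.conj().T @ psi) ** 2
    outcomes = rng.choice(evals, size=shots, p=probs / probs.sum())
    return outcomes.mean()

# The verifier combines per-term measurement statistics into an
# estimate of the energy <psi|H|psi>.
estimate = sum(w * sample_term(epr, S, 20000, rng) for w, S in terms)
exact = sum(w * (epr @ S @ epr) for w, S in terms)
assert np.isclose(exact, -1.0)
assert abs(estimate - exact) < 0.05
```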

Such a verification protocol was introduced by Fitzsimons, Hajdušek, and Morimae Reference 16 and is summarized in Figure 6. In the protocol, the prover is required to prepare an eigenstate with smallest eigenvalue of the Hamiltonian given in Equation 3.5. While it may not be immediately obvious at the level of our description, it is possible to prepare such a history state Equation 3.4 by executing a quantum circuit that is only mildly more complex than the original circuit C.

Even though the verifier’s “quantumness” in this protocol is limited—she only needs to hold one qubit at a time—this capability is crucial for the analysis, as it is used to guarantee the existence of the state that is being measured: it allows us to meaningfully talk about “the state whose first qubit is the first qubit received by the verifier; whose second qubit is the second qubit received by the verifier; etc.”. These qubits are distinct, because the verifier has seen and then discarded them (it would be a different matter if they were returned to the prover). In particular, the fact that a one-qubit computer can be trivially simulated on a classical piece of paper is immaterial to the argument.

With a classical verifier things become substantially more delicate. How can we verify the existence of an n-qubit state with certain properties, while having only access to classical data about the state, data that, for all we know a priori, could have been generated by a simple—classical—laptop? To achieve this we need to find a way for the verifier to establish that the prover holds an n-qubit state, without ever having the ability to directly probe even a single qubit of that state. The major achievement in Mahadev’s work is a method to do just this; it is the topic of the next section.

4. Commitments

The key ingredient in Mahadev’s verification protocol for quantum computations is a commitment scheme that allows a quantum prover to commit to a quantum state using only classical information, thereby providing the means to remove the need for quantum communication in the protocol described at the end of the previous section. Before introducing Mahadev’s commitment scheme, we review the classical notion of commitment and show how it can be implemented using collision-resistant hash functions.

4.1. Classical commitments

Consider the following toy task of “coin-flipping over the phone” Reference 8. Alice is at work; Bob is at home; they would like to decide over the phone who will cook dinner tonight. Neither volunteers: they need to flip a coin. Clearly, neither of them trusts the other to do this properly, so they need a protocol that makes it infeasible for either party to bias the outcome in their favor. Here is a way to achieve this using commitments. Bob chooses a value b ∈ {0, 1}—ideally, he chooses it uniformly at random, but this is up to him. He then commits to b by sending Alice some information c—think of Bob as inserting a piece of paper with b written on it in a large safe, handing the safe to Alice, but keeping the key to himself. Then, Alice herself chooses a bit b′, and announces it directly to Bob. Finally, Bob reveals his bit b by giving Alice the key u to the safe. Alice uses u to open the safe and check Bob’s claimed value for b. If the check goes through, they jointly agree that the bit r = b ⊕ b′ is unbiased. Finally, they use r to designate the night’s cook-in-chief.

The properties of the coin-flipping protocol described in the previous paragraph—informally, that neither user has the ability to bias the outcome of the coin flip, provided the other user behaves honestly—suggest the following two requirements for the commitment scheme: it should be hiding (Alice does not learn b, unless Bob explicitly reveals it to her) and binding (once he has committed, Bob cannot reveal any value other than the one he committed to). Formally, a commitment scheme is defined as follows.

Definition 4.1 (Classical commitment scheme).

A (noninteractive) commitment scheme (for a message space M) is a triple (Gen, Commit, Reveal) of probabilistic polynomial-time procedures such that

Gen generates a key k, when provided as input an integer λ written in unary.⁠Footnote15

15

λ is referred to as the security parameter. The reason it is provided in unary is that this guarantees, by the polynomial-time requirement on Gen, that the bit length of k is polynomial in the integer λ.

Commit takes as input a key k and a message m ∈ M and returns a pair (c, u) that consists of a commitment value c, and a reveal value u.⁠Footnote16

16

We write the input key k to Commit as a subscript, Commit_k, that is often omitted for convenience.

Reveal takes as input a key k and a pair (c, u) and returns a value m ∈ M ∪ {⊥}, where ⊥ is a special “failure” symbol.

Correctness. For any m ∈ M, if (c, u) = Commit_k(m), then it holds that Reveal_k(c, u) = m.

Statistical hiding. For any two m, m′ ∈ M the distributions of the commitment value c in Commit_k(m) and in Commit_k(m′) are identical.

Computational binding. No probabilistic polynomial-time procedure can, given k as input, return a triple (c, u, u′) such that m = Reveal_k(c, u) and m′ = Reveal_k(c, u′) are such that m ≠ ⊥, m′ ≠ ⊥, and m ≠ m′ with nonnegligible probability.

The definition refers to a scheme that is statistically hiding and computationally binding. It is possible to consider schemes with the qualifiers inverted (i.e., computationally hiding and statistically binding) but we will not make use of such a scheme here. Note that, under the assumption that the scheme is statistically hiding, the binding property can only be required to hold under a computational assumption. This is because if the distribution of the commitment value c does not depend on whether the message is m or m′, then for any c it is possible to find u and u′ that reveal c to m and m′, respectively. The computational binding requirement is that, even though these values must exist, they should be hard to find.

We end this section by presenting a construction of a commitment scheme whose computational binding property rests on the assumption that there exists a family of collision-resistant hash functions (CRHF). A CRHF is a family of functions f_k : {0, 1}^{n(λ)} → {0, 1}^{m(λ)}, for polynomially bounded functions n, m of the security parameter λ, such that given k, for any x the value f_k(x) can be evaluated efficiently, but it is impossible for any probabilistic polynomial-time adversary to find a pair of distinct inputs x ≠ x′ such that f_k(x) = f_k(x′) with nonnegligible probability. Collision-resistant hash functions are widely used in cryptography, and many constructions are known based on assumptions such as the Diffie–Hellman assumption about the hardness of the discrete logarithm problem or the LWE problem about the hardness of solving noisy linear equations. (Unconditionally proving the existence of a CRHF would imply that P ≠ NP,⁠Footnote17 so we have to rely on computational assumptions.) For reasons that will become clear when we discuss the extension to quantum commitments, it is convenient for us to make the additional assumption that the functions f_k are 2-to-1. Specifically,

17

The converse implication is not known to hold.

Assumption (2TO1). For each k, both functions f_k(0‖·) and f_k(1‖·) are injective, and they have the same range.

In Section 7 we sketch a construction of a CRHF family that is almost 2-to-1, in a sense that we will make precise, based on the LWE problem.

Let (f_k) be a 2-to-1 CRHF family (this is also called a claw-free family, where any triple (x, x′, y) such that f_k(0‖x) = f_k(1‖x′) = y forms a claw). Here is a commitment scheme based on (f_k) (see also Figure 7). The scheme allows commitments of single-bit messages, m = b ∈ {0, 1}. Both parties agree on a security parameter λ. To commit to a bit b, the committer selects a uniformly random r ∈ {0, 1}^{n(λ)−1} and sends y = f_k(b‖r) to the verifier, where the symbol ‖ is used to denote string concatenation. To reveal b, the committer sends both b and r to the verifier, who checks that y = f_k(b‖r). This scheme is computationally binding, because to “change his mind” the committer needs to identify r and r′ such that f_k(0‖r) = f_k(1‖r′), which is a collision. And the scheme is statistically hiding, because due to the 2-to-1 property any commitment y from the committer has exactly one preimage under f_k(0‖·) and one preimage under f_k(1‖·), that the verifier has no basis to prefer over each other.
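A sketch of the commit/reveal flow, with SHA-256 standing in for the family f (an illustrative substitution: SHA-256 is believed collision resistant but is certainly not 2-to-1, so the statistical hiding property holds only heuristically here):

```python
import hashlib
import secrets

def commit(b: int, n: int = 32):
    """Commit to bit b: pick random r and send y = f(b || r)."""
    r = secrets.token_bytes(n)                     # uniformly random r
    y = hashlib.sha256(bytes([b]) + r).digest()    # SHA-256 stands in for f
    return y, r                                    # (commitment, reveal value)

def reveal(y: bytes, b: int, r: bytes) -> bool:
    """Verifier's check: does (b, r) open the commitment y?"""
    return hashlib.sha256(bytes([b]) + r).digest() == y

y, r = commit(1)
assert reveal(y, 1, r)        # honest reveal succeeds
assert not reveal(y, 0, r)    # flipping the bit is caught, absent a collision
```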

4.2. Quantum commitment schemes

We now devise an analogue of Definition 4.1 that allows for committing to quantum states. Keeping in mind the goal of using such a commitment scheme for classical verification, we wish to keep the actions of the verifier classical. It is then no longer meaningful to expect that the scheme would allow the committer to reveal its quantum state to the verifier. Instead of revealing the quantum state itself, we settle for the ability for the committer to reveal measurement outcomes performed on the state to which it has committed. Importantly, we should allow the verifier to request measurement outcomes in a choice of at least two incompatible bases, so that the committer could not simply have measured its state first and then classically committed to the measurement outcome. Intuitively, the binding requirement of the scheme should be that it is impossible for the committer to report measurement outcomes that are obtained by measuring a state that depends on the measurement basis requested by the verifier. For example, it should be impossible for the committer to commit to a state such that it later has the ability to reveal to the verifier both that a measurement of the state in the computational basis, and that a measurement in the Hadamard basis, deterministically yields the outcome 0—as indeed, there is no such state.

We proceed with a formal definition that mimics Definition 4.1 with these desiderata in mind.

Definition 4.2 (Qubit commitment scheme).

A qubit commitment scheme for a pair of observables X, Z on a Hilbert space H_A is a triple (Gen, Reveal_X, Reveal_Z) of classical probabilistic polynomial-time procedures and a triple (Commit, Meas_X, Meas_Z) of quantum polynomial-time procedures such that

Gen generates a key k, when provided as input an integer λ written in unary.

Commit is a quantum polynomial-time procedure

Commit_k : D(H_A ⊗ H_R) → D(H_C ⊗ H_E ⊗ H_R)

such that for any density matrix ρ on H_A ⊗ H_R, where H_R is an arbitrary reference system, the state Commit_k(ρ) is classical on H_C (i.e., the first output register, C, has been measured, yielding a classical commitment string c). We call the post-measurement state on H_E ⊗ H_R the post-commitment state.

Meas_X and Meas_Z are quantum measurements on H_E that return classical strings m_X and m_Z as their output, respectively.

Reveal_X and Reveal_Z take as input a pair (c, m_X) and (c, m_Z) and return a value b_X ∈ {±1, ⊥} and b_Z ∈ {±1, ⊥}, respectively.

Correctness. For any state ρ, the distribution of

b_X = Reveal_X(c, m_X),  where (c, ρ′) ← Commit_k(ρ) and m_X ← Meas_X(ρ′),  (4.1)

is statistically indistinguishable from the distribution obtained by measuring the A register of ρ using observable X.⁠Footnote18 A similar condition holds for the combination of Meas_Z and Reveal_Z.

18

Statistical indistinguishability between two families of distributions (p_λ) and (q_λ) means that there does not exist a procedure that, given as input 1^λ as well as a sample from either p_λ or q_λ, can determine which is the case with a success probability that is larger than 1/2 + μ(λ), for some nonnegligible function μ. This is equivalent to the statistical distance between the two distributions being at most 2μ(λ). Computational indistinguishability is defined similarly, except that the distinguishing procedure is restricted to run in probabilistic polynomial time. For distributions on a single bit, computational indistinguishability implies statistical indistinguishability, but this is no longer the case for distributions over larger domains.

Statistical hiding. For every ρ and ρ′, the distributions of the commitment string c obtained from Commit_k(ρ) and Commit_k(ρ′), respectively, are statistically indistinguishable.

Computational binding. For any triple of quantum polynomial-time procedures (Commit′, Meas′_X, Meas′_Z) and efficiently preparable state ρ such that the values b_X and b_Z defined as in Equation 4.1 (with Commit′ and Meas′ replacing Commit and Meas, respectively) are different from ⊥ with nonnegligible probability, there exists a state ρ* such that the distributions of b_X and b_Z, conditioned on b_X ≠ ⊥ or b_Z ≠ ⊥, respectively, are computationally indistinguishable from the outcome distributions obtained by directly measuring ρ* using X and Z, respectively.

Although the definition may at first seem complicated, it closely parallels Definition 4.1 of a classical commitment scheme. The main difference is that a quantum commitment scheme allows the committer to commit to a quantum state in such a way that the verifier may request measurement outcomes obtained by measuring the state using two different observables, X and Z. The observables X and Z may not be simultaneously measurable, so that it is not possible to implement a quantum commitment scheme by providing classical commitments to measurement outcomes in both bases simultaneously. The most important guarantee of the scheme is the binding property, which ensures that any committer is tied to reporting outcomes for both types of measurement requests that could, at least in principle, have been obtained from a single quantum state. For example, for the case of a single-qubit commitment and Z and X corresponding to measurements in the computational and Hadamard basis, respectively, it should not be possible to generate a commitment string c such that it is later possible to find m_Z and m_X that both yield the outcome 0 with high probability, as Heisenberg’s uncertainty principle rules out the existence of a single-qubit state that yields the outcome 0 with certainty when measured in both the computational and the Hadamard basis.
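The uncertainty-principle obstruction invoked here can be made quantitative with a two-line computation: the total probability of obtaining outcome 0 in both bases, p_Z(0) + p_X(0) = ⟨ψ|(|0⟩⟨0| + |+⟩⟨+|)|ψ⟩, is bounded by the largest eigenvalue of that operator, which is 1 + 1/√2, strictly below 2.

```python
import numpy as np

# Can a single qubit yield outcome 0 with certainty in BOTH the
# computational and the Hadamard basis?  Maximize over all states:
# max_psi <psi| (|0><0| + |+><+|) |psi> = largest eigenvalue of the sum.
ket0 = np.array([1.0, 0.0])
ketplus = np.array([1.0, 1.0]) / np.sqrt(2)
M = np.outer(ket0, ket0) + np.outer(ketplus, ketplus)

best = np.linalg.eigvalsh(M).max()
# ~1.707, strictly below 2: no state is accepted in both bases with
# certainty, the uncertainty relation that binding leans on.
assert np.isclose(best, 1 + 1 / np.sqrt(2))
```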

A few additional remarks on the definition are in order. A first remark is that the definition does not restrict the dimension of the system H_A, so that it allows committing to a multiple-qubit state. However, we have restricted the reported outcomes to be from a pair of two-outcome observables only. It is possible to formulate a more general definition that allows more than two measurements, with outcomes in a larger set than {±1}. In fact, such a commitment scheme is eventually needed for Mahadev’s protocol. For the sake of simplicity, we gave the simpler definition.

A second remark is that the definition is formulated as a stand-alone security definition, meaning that it does not attempt to guarantee security in a broader context where, e.g., multiple commitments would be obtained in sequence or the committer’s post-measurement states could be used as part of a subsequent cryptographic procedure, etc. Giving a universally composable definition, as has been done for the case of classical commitments Reference 13, would be desirable to make quantum commitments useful in a broader context; giving such a definition falls outside the scope of these notes (and is not needed for the analysis of Mahadev’s protocol).

5. Constructing quantum commitments

We proceed to describe the key innovation that underlies Mahadev’s protocol: a quantum commitment scheme that satisfies the properties of Definition 4.2.⁠Footnote19 To get started, we consider the most naïve scheme, which consists in applying the commitment scheme described in Section 4.1 directly, in superposition, to a quantum state. For simplicity we consider the case of committing to a single-qubit state, when the two observables are the single-qubit Pauli observables Z and X, corresponding to measurements in the computational and Hadamard basis, respectively.

19

As already hinted at, her scheme satisfies more and, in particular, allows commitments to more than two measurements, each with more than two outcomes. Here we concentrate on the simplest nontrivial variant, which already demonstrates (almost) all the interesting features.

5.1. Key generation

Let n be an integer, and let

{ f : {0, 1} × {0, 1}^n → {0, 1}^m }

be a family of functions that satisfies assumption (2TO1) stated in Section 4.1. The public key for the scheme contains a description of the function f that allows one to evaluate it on any input. For simplicity, throughout this section we fix the key and often drop the index, writing f for a function chosen according to the procedure Gen. In the coming subsections we progressively formulate requirements on f that are stronger than those of a 2-to-1 CRHF.

5.2. Commitment procedure

We start by describing the commitment procedure, for the case where the committer wishes to commit to a single-qubit state. As a first step, the committer evaluates the function f in superposition, by creating an additional register that contains a uniform superposition over all n-bit strings and then computing the commitment in superposition into a third ancilla register initialized to all zeros:
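In symbols, the state produced by this step can be sketched as follows (a reconstruction in standard notation, writing the amplitudes of the committed qubit as α₀ and α₁; these symbols are the natural choice here rather than taken verbatim from the source):

```latex
\[
\sum_{b \in \{0,1\}} \alpha_b \, |b\rangle \otimes
\frac{1}{\sqrt{2^{n}}} \sum_{x \in \{0,1\}^{n}} |x\rangle \, |f(b,x)\rangle .
\]
```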

The committer then measures the last register in the computational basis to obtain a classical commitment string y. The string y is the outcome returned by the commitment procedure. The remaining two registers, the first qubit and the register containing the preimage information, form the register that contains the post-commitment state.
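The two steps just described can be simulated classically on a toy instance. The sketch below (plain Python with NumPy; the shift-based 2-to-1 function f and all parameter values are illustrative stand-ins, not a cryptographic construction) builds the superposition, samples the commitment string, and exhibits the two-branch post-commitment state:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3  # length of the preimage strings x

# Toy 2-to-1 function on pairs (b, x): f(0, x) = x and f(1, x) = x XOR s for a
# fixed hidden shift s, so that (0, x) and (1, x XOR s) collide.  This is NOT
# cryptographically secure; it only illustrates the structure of the scheme.
s = 0b101
def f(b, x):
    return x ^ (s if b else 0)

# Qubit to commit to, with amplitudes (alpha, beta).
alpha, beta = 0.6, 0.8

# Joint state sum_b alpha_b |b> (2^{-n/2} sum_x |x> |f(b,x)>), stored as a
# dictionary mapping basis states (b, x, y) to amplitudes.
N = 2 ** n
amp = {(b, x, f(b, x)): a / np.sqrt(N)
       for b, a in [(0, alpha), (1, beta)] for x in range(N)}

# Measure the commitment register to obtain the classical string y.
ys = sorted({key[2] for key in amp})
probs = [sum(abs(v) ** 2 for k, v in amp.items() if k[2] == y) for y in ys]
y = rng.choice(ys, p=probs)

# Post-commitment state: alpha|0, x0> + beta|1, x1> with f(0,x0) = f(1,x1) = y.
post = {k[:2]: v for k, v in amp.items() if k[2] == y}
norm = np.sqrt(sum(abs(v) ** 2 for v in post.values()))
post = {k: v / norm for k, v in post.items()}
print(post)  # two branches, with the original amplitudes 0.6 and 0.8
```

Note that the measurement of the commitment register leaves the amplitudes of the committed qubit intact, which is the "no damage" property discussed next.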

At this point it is worth noting explicitly that the (2TO1) property is crucial in ensuring that the post-measurement state retains the same structure as the original state that was committed to, i.e., the commitment procedure does not “damage” the quantum state to which it is applied. Recall from Section 2.1 that in general any measurement induces a collapse of the state. Informally, the reason that this does not happen here is that since the classical commitment scheme is statistically hiding, the commitment string reveals no information about the committed bit, so that measuring the commitment can be done without perturbing the committed state. Thus when the commitment string is measured, the state in Equation 5.1 partially collapses to the post-measurement state consistent with the outcome obtained,
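Concretely, writing y for the measured commitment string, x0 and x1 for its two preimages under f, and α₀, α₁ for the amplitudes of the committed qubit, the post-measurement state takes the following form (a reconstruction under these notational assumptions):

```latex
\[
\alpha_0 \, |0\rangle |x_0\rangle \;+\; \alpha_1 \, |1\rangle |x_1\rangle,
\qquad \text{where } f(0, x_0) = f(1, x_1) = y .
\]
```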

We pause for a moment and argue that this state is a plausible commitment to the original qubit and, in particular, why the operations performed so far may yield a scheme that has the binding property. Observe that the post-commitment state is very similar to the original single-qubit state, except that the initial superposition got slightly “muddled” by the inclusion of the two preimages x0 and x1 of the commitment string y. Nevertheless, morally the superposition is preserved: most importantly, the coefficients that define it are unchanged. In fact, the post-commitment state is unitarily related to the original state by a simple unitary

(extended in an arbitrary way to the whole space). However, although this unitary exists, it may not be easy for the prover to implement it! This is because doing so seems to require the identification of both preimages x0 and x1 from the commitment string y, which would require the prover to find a collision for f. (See Figure 8.)

Note that we have not yet imposed the requirement that f be collision resistant. Even though it is only needed for the analysis of a multi-qubit commitment scheme, for completeness we introduce an assumption on f that is known as collapsing and is a stronger property than collision resistance.

Assumption (C). Consider the following abstract game between an arbitrary prover and a trusted (quantum) challenger.⁠Footnote20 First, the prover is required to prepare an arbitrary superposition over strings x in the domain of f. The prover hands the state over to the challenger, who evaluates f in superposition and measures the image register, obtaining a string y in the range of f together with the (suitably normalized) post-measurement state. The challenger returns to the prover the string y together with either the post-measurement state itself or the probabilistic mixture obtained by measuring that state in the computational basis (and throwing away the outcome). The prover wins if it correctly guesses which is the case. Assumption (C) on the function f states that no quantum polynomial-time prover can succeed in this game with probability nonnegligibly larger than 1/2.

20

This game is not meant to be executed in the protocol; rather, it is meant to indicate a task that the committer, impersonating the prover, should not be able to complete.

The reason that Assumption (C) implies collision resistance is that, if the function f were not collision resistant, the prover could identify a colliding pair x0 ≠ x1 and submit the uniform superposition over x0 and x1 to the challenger. It could then measure the challenger’s response in a basis containing the two states (|x0⟩ ± |x1⟩)/√2 and guess that, in case the outcome associated with (|x0⟩ − |x1⟩)/√2 is obtained, the challenger must have measured; in the other case, the prover guesses at random.
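This distinguishing attack can be checked numerically in the two-dimensional subspace spanned by the colliding preimages. The following sketch assumes the prover already knows the collision, and for simplicity guesses "coherent" exactly when its ± basis measurement yields the "+" outcome; it wins the game noticeably more often than half the time:

```python
import numpy as np

rng = np.random.default_rng(3)

# The prover knows a colliding pair x0 != x1 with f(x0) = f(x1) and submits
# the superposition (|x0> + |x1>)/sqrt(2).  Work in the two-dimensional span
# of {|x0>, |x1>}, identifying |x0> = (1, 0) and |x1> = (0, 1).
plus = np.array([1.0, 1.0]) / np.sqrt(2)

trials, wins = 20000, 0
for _ in range(trials):
    coherent = rng.integers(0, 2)  # challenger's secret choice
    if coherent:
        state = plus  # post-measurement superposition returned intact
    else:
        state = np.eye(2)[rng.integers(0, 2)]  # measured: |x0> or |x1>
    # Prover measures in the {|+>, |->} basis and guesses "coherent" on "+".
    p_plus = abs(plus @ state) ** 2
    guess = 1 if rng.random() < p_plus else 0
    wins += int(guess == coherent)

print(wins / trials)  # about 0.75, noticeably better than random guessing
```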

5.3. Statistical hiding

The statistical hiding property is straightforward and follows from the (2TO1) property as in the case of the classical commitment scheme: the commitment string is always of the form y = f(b, x) for some preimage (b, x), and in fact the distribution of y is uniform over the range of the function, independently of the state committed to.

5.4. Revealing measurement outcomes in the computational basis

We describe the measurement and verification procedures for the computational basis together. Recall that after having performed the commitment procedure, the committer’s state takes the form given in Equation 5.2. The measurement simply consists in a measurement of the first two registers, the committed qubit and the preimage register, in the computational basis. This results in an outcome (b, x) that the committer reports to the verifier.

The verifier’s procedure is defined as follows. Given the commitment string y and the reported outcome (b, x), the verifier first checks that (b, x) is a preimage of y under f, i.e., that f(b, x) = y. We refer to this test as the preimage test. In case the test fails, the verifier records an abort. (Note that this test is identical to the test performed by the verifier in the reveal phase of the classical commitment protocol described in Section 4.1.) In case the test succeeds, the verifier records the bit b as the measurement outcome.

The completeness property for this pair of procedures is clear. Given a state of the form of Equation 5.2 that has been obtained by a correct application of the commitment procedure, a measurement of the first two registers always yields a pair (b, x) such that f(b, x) = y. Moreover, the bit b is distributed according to the squared amplitudes of the committed qubit, which is exactly the distribution of a measurement of the qubit in the computational basis.

5.5. Revealing measurement outcomes in the Hadamard basis

We turn to the analogous procedures for the Hadamard basis. The measurement consists in a measurement of the first two registers of the post-commitment state in the Hadamard basis. This results in an outcome (u, d), where u is a bit and d an n-bit string, that the committer reports to the verifier.

The verifier’s procedure is defined as follows. Given the commitment string y and the reported outcome (u, d), the verifier first computes the two preimages (0, x0) and (1, x1) of y under f. For this to be possible, we make the following assumption on the function f:

Assumption (T). There is trapdoor information such that, given the trapdoor, it is possible to efficiently invert f, i.e., given any y in the range of f, to recover both preimages (b, x) such that f(b, x) = y.

Functions that are easy to compute and hard to invert, but easy to invert given a trapdoor, are widely used in cryptography; they form the bedrock of public-key cryptography. We will say more about how such a trapdoor can be “planted” in an efficiently computable function in Section 7.⁠Footnote21 For the time being, we proceed with the description of the verifier’s procedure. Having determined x0 and x1, the verifier sets its outcome to the decoded bit

u ⊕ d · (x0 ⊕ x1)    (Equation 5.4),

21

Technically, the use of a trapdoor requires a modification of the Gen procedure so that it is allowed to return a (public key, private key) pair such that the public key is given to the committer and allows evaluation of f, while the private key is given to the verifier only and allows inversion of f.

except if d = 0, in which case the verifier records an abort (the motivation for this condition will become clear later, when we discuss Assumption (HC)).

The completeness property of this pair of procedures can be verified by direct calculation. To provide some intuition for the expression in Equation 5.4, we verify completeness for the case when the two amplitudes of the committed qubit are equal, i.e. the committed qubit is the state |+⟩ = (|0⟩ + |1⟩)/√2. Applying a Hadamard gate to each of the first n + 1 qubits of the state in Equation 5.2 yields (omitting the last register, that contains y)

(1/√(2^(n+2))) Σ_(u,d) ((−1)^(d·x0) + (−1)^(u + d·x1)) |u⟩|d⟩ ,

where x0 and x1 are the two preimages of y.

Thus in this case, for any outcome (u, d) that can be obtained with nonzero probability by the committer’s measurement, the verifier’s decoded bit u ⊕ d · (x0 ⊕ x1) equals 0, which agrees with the outcome of a measurement of the state |+⟩ in the Hadamard basis. (Moreover, the probability of obtaining d = 0 is exponentially small, so that the committer has only an exponentially small chance of leading to an abort.) Informally, the addition of d · (x0 ⊕ x1) to the bit u acts as a decoding operation that accounts for the interference created by the strings x0 and x1 that have been appended to the prover’s qubit in the commitment phase.
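The completeness calculation can be replayed numerically for arbitrary amplitudes, not just the |+⟩ case. The sketch below uses toy values for the preimages x0, x1 and the standard amplitude formula for Hadamard gates applied to computational basis states; it checks that the verifier's decoded bit reproduces the statistics of a Hadamard measurement of the committed qubit:

```python
import numpy as np

n = 3
alpha, beta = 0.6, 0.8     # amplitudes of the committed qubit (toy values)
x0, x1 = 0b011, 0b110      # the two preimages of the commitment string
diff = x0 ^ x1

def dot(d, x):             # inner product of n-bit strings, modulo 2
    return bin(d & x).count("1") % 2

# The post-commitment state alpha|0, x0> + beta|1, x1>, measured in the
# Hadamard basis on all n+1 qubits, yields outcome (u, d) with amplitude
# 2^{-(n+1)/2} ( alpha (-1)^{d.x0} + beta (-1)^{u + d.x1} ).
p_decoded_zero = 0.0
for u in (0, 1):
    for d in range(2 ** n):
        a = alpha * (-1) ** dot(d, x0) + beta * (-1) ** (u + dot(d, x1))
        p = a ** 2 / 2 ** (n + 1)
        if u ^ dot(d, diff) == 0:  # verifier's decoded bit is 0
            p_decoded_zero += p

# A direct Hadamard measurement of alpha|0> + beta|1> yields the outcome 0
# with probability |alpha + beta|^2 / 2.
print(p_decoded_zero, (alpha + beta) ** 2 / 2)  # both equal 0.98
```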

5.6. The committed qubit

We have defined all procedures that underlie the qubit commitment scheme, and verified that these procedures satisfy the completeness requirement, as well as the property of statistical hiding. (The protocol is summarized in Figure 9.) It remains to show the computational binding property, which is the heart of the matter. The property requires showing the existence of a state having certain properties with regard to the outcomes recorded by the verifier in the protocol. In this section we define this state, which we refer to as the committed qubit. In the next section we argue that it has the required properties.

Note that we have to be careful with the meaning that we ascribe to any definition of a committed qubit. Indeed, the attentive reader will have observed that it is straightforward for a classical committer to succeed in the commitment protocol: select an arbitrary bit b and string x, set y = f(b, x), send (b, x) when asked for a computational-basis opening, and send a uniformly random string when asked for a Hadamard-basis opening. That this is possible should come as no surprise: it exactly corresponds to the honest behavior for a committer desiring to commit to the classical state |b⟩! In fact, one may consider the fact that the commitment scheme can be implemented by a classical committer, whenever the information to be committed to is classical, as a useful feature of the scheme.

In general, the actions of an arbitrary committer can be modeled by two measurement procedures, one for each basis, such that each measurement has outcomes that range over (n + 1)-bit strings. Up to a change of basis for the committer’s post-commitment state, we may assume without loss of generality that the computational-basis measurement consists of a measurement of the first n + 1 qubits of the prover’s post-commitment state in the computational basis. In other words, we may fix a basis in which the post-commitment state can be expressed as

for arbitrary coefficients and normalized states on the remainder of the committer’s space, with the squared moduli of the coefficients summing to 1.⁠Footnote22

22

We have not entirely justified why this assumption is without loss of generality. It requires the use of the Stinespring dilation theorem, which guarantees that any measurement with (n + 1)-bit outcomes can be expressed, up to isometry, as a measurement of n + 1 qubits in the computational basis.

Having fixed a basis, the Hadamard-basis measurement can in general be expressed as the composition of an arbitrary unitary acting on the entirety of the committer’s space, followed by a measurement of the first n + 1 qubits in the Hadamard basis. Using that the Pauli operators form a basis for the space of linear operators on a single qubit, any such unitary can be uniquely expanded according to its action on the first qubit as

With this notation it is not hard to verify that the linear map defined by

where the action on the remaining registers is arbitrary, is an isometry, hence it is an admissible operation in quantum mechanics.⁠Footnote23 (Note that the isometry increases the dimension by a factor of 2. The new qubit register, in third position, is called a purifying system.) We are ready to make a key definition.

23

For the expert, the isometry is obtained from the committer’s unitary by a twirl, followed by a conditional bit-flip. The motivation for this definition will become clear in the proof of the computational binding property.

Definition 5.1 (Committed qubit).

Given a commitment string y and an arbitrary post-commitment state for the prover of the form of Equation 5.5, the committed qubit is the single-qubit state obtained from the post-commitment state by performing the following operations:

(i)

Apply the isometry defined in Equation 5.6 to the post-commitment state, yielding a state on an enlarged space.

(ii)

Measure qubits 2 through n + 1 in the Hadamard basis, obtaining an outcome string d.

(iii)

If d · (x0 ⊕ x1) = 1, apply a Z operator to the first qubit. Here x0 and x1 are the two preimages of the commitment string y under f.

(iv)

Return the first qubit.

Note that the verifier does not know the committed qubit; in fact, strictly speaking, the committed qubit need not be present on the committer’s space at any time. The point is that it exists, and is well-defined (mathematically) as a function only of the commitment string y, the post-commitment state, and the unitary that implements the committer’s Hadamard-basis measurement.

5.7. Computational binding

The discussion in the preceding section establishes a definition of a state, the committed qubit, that may not exist directly on the prover’s space at any time in the protocol, but that is explicitly defined from states and operators that are a function of the committer’s strategy. To establish the binding property, it remains to show that the distribution of outcomes obtained by measuring the committed qubit in the computational and Hadamard bases is computationally indistinguishable from the distribution of outcomes computed by the verifier using the computational- and Hadamard-basis verification procedures, respectively, under the assumption that these procedures lead to non-abort outcomes with high probability.

We start with the case of the computational basis, which is easier. Recall that we made a choice of basis for the committer’s space such that its post-commitment state is of the form in Equation 5.5, and moreover the computational-basis measurement measures the first n + 1 qubits in the computational basis and returns the outcome (b, x), from which the verifier obtains the bit b (provided the preimage test succeeds).

According to Definition 5.1, the committed qubit is defined from the post-commitment state by applying the isometry in Equation 5.6 and returning the first qubit, to which a Z operator may have been applied, depending on the string d. We make two observations. First, note that the isometry has a block-diagonal form: it stabilizes the two subspaces in which the first qubit is in state |0⟩ or |1⟩, respectively. As a result, the outcomes of a measurement of the first qubit in the computational basis, before or after application of the isometry, are identically distributed. Second, a Z operator has no effect on a measurement in the computational basis. These two observations together establish that the verifier’s outcome has exactly the right distribution.

Before we consider measurements in the Hadamard basis, we make a simplifying assumption. Note that an honest committer, whose post-commitment state is the state in Equation 5.2, always passes the preimage test. For the remainder of the analysis we assume that, in case the verifier decides to execute the computational-basis procedure, the committer’s measurement always returns a pair (b, x) that passes the preimage test.⁠Footnote24 As a consequence of this assumption the expression for the post-commitment state simplifies to

24

This assumption requires that in any execution of the commitment protocol there is a positive probability that the verifier executes the preimage test. In the case of Mahadev’s verification protocol, each of the two reveal phases is chosen with probability 1/2, so that a simple reduction allows us to make the assumption without loss of generality.

where x0 and x1 are the two preimages satisfying f(0, x0) = f(1, x1) = y, since by Assumption (2TO1) any other pair would be rejected by the preimage test.

Our task now is to argue that the verifier’s decoded bit, as obtained from the procedure described in Equation 5.4, has a distribution that is computationally indistinguishable from that of a Hadamard measurement of the committed qubit.

We start from the distribution of the verifier’s decoded bit. Recall from Section 2.1 that given a quantum state |ψ⟩ the expectation value of an observable O is given by ⟨ψ|O|ψ⟩. Here, the relevant observable is obtained by first applying the committer’s unitary, followed by a measurement in the Hadamard basis of all qubits but the first to obtain the string d (this corresponds to applying a projection), then a bit-flip on the first qubit, as a function of the outcome obtained (this corresponds to applying a conditional X unitary), and finally a measurement of the first qubit in the Hadamard basis (this corresponds to the observable X). As a result, the expectation value of the decoded bit can be expressed as

where for later convenience we have used the identity X = HZH to rewrite the observable applied on the first qubit.

We now turn to the corresponding expectation value for the outcome of a measurement of the committed qubit in the Hadamard basis. Using the definition of the committed qubit, this can be expressed in a similar fashion, except that due to the use of the isometry in the definition, the unitary has been conjugated by a random Pauli operator:⁠Footnote25

25

For the second part of the state on the right-hand side of Equation 5.6, corresponding to the last qubit being in state |1⟩, there is also a missing operator on the first qubit, an X. An X has no effect on the outcome of a measurement of the first qubit in the Hadamard basis, so it can be ignored here.

where all operators act on the first qubit, and for the second line we used the anti-commutation of X and Z to commute the innermost operator all the way to the middle, where we simplified using Z² = I. Taking the difference between the two terms, simplifying the middle expression using anti-commutation, and cancelling out cross-terms gives

where, to write this last expression, we used the assumption that the state can be expressed as in Equation 5.7, i.e., that the committer succeeds in the verifier’s preimage test with probability 1. To argue that the right-hand side of Equation 5.8 cannot be large, we make the following final assumption.

Assumption (HC). No quantum polynomial-time procedure can, given as input a description of f, return a quadruple (b, x, d, u) such that f(b, x) = y for some y, d ≠ 0, and u = d · (x0 ⊕ x1), where (0, x0) and (1, x1) are the two preimages of y, with probability nonnegligibly larger than 1/2.
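In symbols, the assumption can be sketched as follows (the variable names b, x, d, u and the preimages x0, x1 are those suggested by the surrounding discussion, and λ denotes the security parameter):

```latex
\[
\Pr\Big[\, f(b,x) = y \;\wedge\; d \neq 0 \;\wedge\;
u = d \cdot (x_0 \oplus x_1) \,\Big]
\;\le\; \frac{1}{2} + \mathrm{negl}(\lambda),
\]
```

where (b, x, d, u) is the output of the quantum polynomial-time procedure, y = f(b, x), and (0, x0), (1, x1) are the two preimages of y.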

If the expression on the right-hand side of Equation 5.8 were nonnegligible, there would be a violation of Assumption (HC): a quantum polynomial-time adversary (to the assumption) could simulate the prover to prepare the post-commitment state and measure the first n + 1 qubits to obtain a pair (b, x). It would then apply the unitary and measure the first n + 1 qubits in the Hadamard basis to obtain an outcome (u′, d). Finally, the adversary would return the quadruple (b, x, d, u′); Equation 5.8 exactly measures the correlation of the bit u′ with the correct value d · (x0 ⊕ x1). Note that the requirement that d ≠ 0 is necessary for the assumption to be reasonable, as for d = 0 the value d · (x0 ⊕ x1) is easy to determine. The adversary described above obtains d = 0 only with small probability.

Thus Assumption (HC) guarantees that the expression in Equation 5.8 is negligibly small, completing the proof of the binding property for the case of a Hadamard basis measurement.

5.8. Summary

The preceding sections have introduced a quantum commitment scheme that allows a committer to commit to a single-qubit state in a way that is statistically hiding (this guarantees to the committer that committing does not leak information) and later to reveal the outcome of a measurement of the qubit in the computational or Hadamard basis in a way that is computationally binding (this guarantees to the verifier that the outcomes obtained from the committer’s reported strings are consistent with measurements on a single quantum state).

The use of a quantum commitment scheme in Mahadev’s verification protocol, to be described in the next section, requires the prover to commit to a multi-qubit state. This is done by requiring one commitment string per qubit. There are still only two possible measurements: the first reports the string of outcomes obtained by measuring the multi-qubit state in the computational basis, and the second does the same for the Hadamard basis. The construction and analysis of a quantum commitment scheme that accommodates this are very similar to the construction and analysis given in this section, except that the proof of the computational binding property explicitly requires the assumption (C) introduced in Section 5.2.

Our description would not be satisfying if we did not discuss assumptions (2TO1), (C), (T), and (HC). Are these assumptions reasonable? While (2TO1) and (T) are fairly standard assumptions in classical cryptography for which it is possible to find multiple constructions, Assumption (C) is less common (though it has been used in different contexts in quantum cryptography), and Assumption (HC) is even less usual (though it can be seen as a strengthening of a more standard (nonadaptive) hardcore bit property). In Section 7 we sketch a construction of a function family satisfying all four assumptions simultaneously, based on the computational hardness of the LWE problem in cryptography.

6. Verifying quantum computations

In Section 3 we reduced the problem of verifying the outcome of an arbitrary polynomial-time quantum computation to the following decision problem.

Input. An integer n, the description of an n-qubit Hamiltonian H in XZ form,

and two real numbers a and b with a < b.

Promise. The smallest eigenvalue of H is either less than a or at least b.

Decision. Accept if and only if the smallest eigenvalue of H is less than a. (We refer to such H as “YES” instances.)

The reduction guarantees that the promise gap b − a can be taken to be at least some inverse polynomial in n. For ease of exposition we further assume that all coefficients of the Hamiltonian lie in [−1, 1], and that a = 0. In physical language this corresponds to a frustration-free Hamiltonian, meaning that in the case of a YES instance an eigenstate with smallest eigenvalue is also an eigenstate with eigenvalue 0 of each of the local terms. (This last assumption does come with a loss of generality, as it is not hard to see that the resulting problem lies in NP; nevertheless, we make it because it helps simplify the presentation without hiding any interesting steps.)
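The promise problem can be summarized as follows (a sketch; the symbols H, J, a, b, n follow common usage for the local Hamiltonian problem and are not taken verbatim from the source):

```latex
\[
H = \sum_{j} J_j \, H_j, \qquad
H_j \in \{I, X, Z\}^{\otimes n}, \quad J_j \in [-1, 1],
\]
\[
\text{YES: } \lambda_{\min}(H) \le a = 0, \qquad
\text{NO: } \lambda_{\min}(H) \ge b \ge \frac{1}{\mathrm{poly}(n)} .
\]
```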

Informally, Mahadev’s protocol for deciding this problem combines the protocol described in Figure 6 in Section 3 with the quantum commitment scheme introduced in Section 5. This leads to the protocol summarized in Figure 10. (The protocol we present requires a polynomial number of rounds; we chose this variant to simplify the analysis. Mahadev’s protocol can be executed in two rounds only, as stated in Theorem 2.3.)

In order to prove Theorem 2.3, we need to verify the completeness and soundness properties required of an argument system. The completeness property is straightforward. Assuming the Hamiltonian has an eigenstate with eigenvalue 0, the prover can commit to that eigenstate and honestly report measurement outcomes. In this case the verifier will never abort. The verifier’s counter is a random variable whose probability of exceeding the acceptance threshold can be bounded by a small constant using Hoeffding’s inequality, assuming that the number of rounds is chosen large enough.

To establish the soundness property, we first observe that we may assume that the prover’s actions in, say, at least a constant fraction of the rounds are such that, if the preimage test is chosen by the verifier in that round, then the probability of the verifier’s outcome leading to an abort is very small. Indeed, if this is not the case, then the probability that the verifier aborts at least once throughout the entire protocol is large, which would immediately establish the soundness property.

Under that assumption, for those rounds we may make use of the computational binding property of the quantum commitment scheme, which establishes the existence of a committed multi-qubit state that underlies the distribution of measurement outcomes obtained by the verifier in those rounds. In the case of a NO instance, for any such state the distribution of outcomes obtained when measuring in the computational or Hadamard basis must, on average over those rounds, lead to an energy estimate of at least the promised lower bound b. Taking into account those rounds where we do not have control of the committer (because its behavior has a larger probability of leading to an abort), the probability that the verifier’s final count stays below the acceptance threshold can be made small, provided that the number of rounds is chosen large enough.

This argument completes the proof sketch for the main theorem presented here, Theorem 2.3. In the next section we conclude by outlining a construction of a family of functions that satisfies the requirements for implementing the quantum commitment scheme described in Section 5, under the LWE assumption.

7. A construction based on the learning with errors problem

In Section 5 we have identified four assumptions on a family of functions that are sufficient for the resulting quantum commitment scheme to be computationally binding. Can the four assumptions be simultaneously satisfied? Strictly speaking, we do not know the answer. In this section we sketch a construction that nearly satisfies the assumptions. The construction appears in Reference 9, and a mild modification of it is used in Mahadev’s protocol. Even though the assumptions introduced in Section 5 will not all be strictly satisfied by the construction, it is possible to verify that the protocol itself remains sound.

7.1. The LWE problem

Our starting point is the Learning with Errors (LWE) problem, introduced by Regev Reference 28. The hardness of this problem has become a widely used computational assumption in cryptography, for at least three reasons. The first is that it is very versatile, allowing the implementation of advanced primitives such as fully homomorphic encryption Reference 10, attribute-based encryption Reference 19, program obfuscation Reference 20Reference 32, traitor tracing Reference 21, and many others. The second is that the assumption can be based on the hardness of worst-case computational problems on lattices: an efficient procedure that breaks the LWE assumption on average can be used to solve the closest vector problem in (almost) any lattice. The third reason, which is most relevant to the use of the LWE assumption for the verification protocol presented here, is that, in contrast to the RSA assumption, the hardness of factoring, or the discrete logarithm problem, the LWE problem is so far believed to be hard even for quantum computers, so that cryptographic schemes based on it remain (to the best of published knowledge) secure against quantum attacks.

The LWE assumption comes in multiple flavors, all roughly equivalent. Here we formulate the decisional LWE assumption on the difficulty of distinguishing samples from two distributions. To state the problem, fix a size parameter n, an integer modulus q, a number of equations m, and an error distribution χ on Z_q. Write χ^m for the distribution over Z_q^m that is obtained by sampling each entry of a vector independently according to χ. The decisional LWE assumption is the following.

Decisional LWE. Let A be a uniformly random m × n matrix with entries in Z_q, let s be a uniformly random vector in Z_q^n, let e be a random vector in Z_q^m drawn from χ^m, and let u be a uniformly random vector in Z_q^m. Then no classical or quantum probabilistic polynomial-time procedure can distinguish the pair (A, As + e) from the pair (A, u).

We include a few words of explanation for the reader unaccustomed to the notion of computational indistinguishability between ensembles of distributions. Note that the distribution of (A, As + e) and the distribution of (A, u) are in general very far from each other: provided m is sufficiently larger than n, a uniformly random vector u will not lie in the column span of A, nor even be close to it. What the (decisional) LWE assumption asserts is that, even though in principle these distributions are far from each other, it is computationally difficult, given a sample from the one or the other, to tell which is the case.⁠Footnote26 Note that without the error vector e the task would be easy: given (A, b), use Gaussian elimination to solve the linear system As = b and check whether a solution s exists. The LWE assumption is that the inclusion of e makes the task substantially more arduous. In particular, it is well known that Gaussian elimination is very sensitive to errors, which rules out the most natural approach. To the reader with a geometric mind, it might help to picture a discrete lattice (all integer linear combinations of the columns of A, as a subset of Z_q^m) such that to each lattice point is added a little noise, in the form of a discrete Gaussian distribution with small variance centered at the lattice point. Even though all the Gaussian “blobs” thus obtained are well separated, given a point in any one of them, it is (conjecturally) hard to recover the center of the blob, i.e., the closest lattice vector.

26

Computational hardness only makes sense as the input size goes to infinity, which is why to be precise we should consider a family of distributions, parametrized by an integer n, and argue that the samples become harder and harder to distinguish as n goes to infinity.
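A toy numerical illustration of why the error term matters. All parameters below are far too small to be secure, and the identity block placed at the top of A is an artificial convenience so that the errorless system is solvable by inspection:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, q = 8, 24, 97  # toy parameters; real schemes use much larger n and q

# For illustration, give A an identity top block: absent noise, recovering s
# from b = A s (mod q) then amounts to reading off the first n entries of b.
A = np.vstack([np.eye(n, dtype=np.int64),
               rng.integers(0, q, size=(m - n, n))])
s = rng.integers(0, q, size=n)
e = rng.integers(-2, 3, size=m)       # small centered errors, a toy "Gaussian"

b_clean = (A @ s) % q                 # errorless system: trivially solvable
b_lwe = (A @ s + e) % q               # an LWE sample
b_unif = rng.integers(0, q, size=m)   # what the sample must be hard to tell apart from

print((b_clean[:n] == s).all())       # True: inspection recovers s exactly
print(b_lwe[:n] - s)                  # s blurred by noise: naive recovery fails
```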

We comment briefly on the choice of parameters. The size parameter n should generally be thought of as commensurate with the security parameter. The modulus q should be at least polynomial in n, but can be as large as exponential; this will be the case in our construction. The error distribution χ can be chosen in multiple ways. A common choice is to let χ be a discretized centered Gaussian distribution whose standard deviation is a small multiple of q (typically chosen as an inverse polynomial function of n). For more details on LWE and its applications, we refer to the survey Reference 26.

7.2. Construction

To specify the function f, we describe how public and private parameters for the function are chosen. Let λ be the security parameter (i.e., 2^λ is thought of as an estimate of the time required to break assumptions such as (HC)).

First, integers n and m and a prime modulus q are chosen appropriately (with m somewhat larger than n, and q possibly exponential in n). Then, an m × n matrix A with entries in Z_q is sampled at random, together with a trapdoor in the form of an integer matrix R. The sampling procedure has the property that the distribution of A is statistically close to uniform, and is such that R is a nice matrix, in the sense that given R it is computationally easy, for any s and any small enough error vector e, to recover s from As + e.⁠Footnote27 That such a sampling procedure exists and is efficiently implementable is nontrivial, and it relies on the underlying lattice structure given by the columns of A; see Reference 25. Finally, a uniformly random binary vector s, and a random error vector e distributed according to a discrete Gaussian of appropriate standard deviation,⁠Footnote28 are sampled. The public information is the pair (A, As + e). The private information is the pair (R, s).

27

One can think of R as a matrix whose rows are almost orthonormal, so that Gaussian elimination using R induces only small propagation of the errors.

28

The precise choice of the error distribution is delicate, and the parameters given here should only be treated as indicative; we refer to Reference 9, Section 8 for the right setting of parameters.

Next, we discuss how the function f can be evaluated, given the public parameters (A, u), where u = As + e. We define two functions, f0 and f1, that should be understood as f(0, ·) and f(1, ·), respectively. For b in {0, 1} the function fb takes as input a vector x in Z_q^n (which can be encoded as a bit string) and returns fb(x) = Ax + b · u + e′, which is an element of Z_q^m. Here, e′ is a vector sampled at random from a discrete Gaussian distribution whose standard deviation is “much larger” than that of the distribution of e. The inclusion of e′ makes fb a randomized function, which is the main way in which the construction differs from the requirements expressed in Section 5. A formal way around this is to think of fb as the function that returns not the value Ax + b · u + e′, but the distribution of this value when x is fixed and e′ is random. In practice, the evaluation of fb on a quantum computer (as required of the honest prover in the verification protocol) involves preparing a weighted superposition over all error vectors, and computing the function in superposition.

We would, of course, rather do away with this complication. Why is the error vector necessary? It is there to satisfy the important requirement that the functions and are injective with overlapping ranges, i.e., Assumption (2TO1). Injectivity follows from the existence of the trapdoor for and an appropriate setting of the standard deviation of the error distribution, which guarantee that (given the trapdoor) can be recovered from (with high probability over the choice of ). To make the function ranges overlap, we need the distribution of to have the same support as the distribution of . The first distribution considers an arbitrary vector in the column span of , shifted by ; the second considers the same, except that the shift is by . For the two distributions to (almost) match, we need the distribution of to (almost) match the distribution of . This is possible as long as the standard deviation is substantially larger than the standard deviation ; provided this holds it is an exercise to compute the statistical distance between the two Gaussians and verify that it can be made very close to zero.
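The exercise just mentioned is easy to carry out numerically. The sketch below discretizes two Gaussians of the same large width, one shifted by a small amount playing the role of the narrow error, and computes their statistical (total variation) distance; the specific widths are illustrative assumptions.

```python
import numpy as np

def tv_distance(sigma, shift, cutoff=2000):
    """Total variation distance between a discretized centered Gaussian of
    width sigma and its copy shifted by `shift`."""
    xs = np.arange(-cutoff, cutoff + 1)
    p = np.exp(-xs**2 / (2 * sigma**2)); p /= p.sum()
    r = np.exp(-(xs - shift)**2 / (2 * sigma**2)); r /= r.sum()
    return 0.5 * np.abs(p - r).sum()

# A wide Gaussian barely notices a unit shift...
print(tv_distance(100.0, 1))   # about 0.004
# ...while a narrow one does not hide the shift at all.
print(tv_distance(2.0, 1))     # about 0.2
```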

With this important caveat in place, we have specified the function and verified property (2TO1). Property (T) follows from the existence of the secret information . Given a and an element in the range of , it is possible to use the trapdoor matrix to recover and subtract to deduce the preimage of under .
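At toy sizes, the recovery guaranteed by property (T) can be mimicked without an actual lattice trapdoor: knowledge of the secret lets us subtract the shift, and exhaustive search over binary candidates — feasible only because the dimensions here are tiny — stands in for trapdoor inversion. All parameters below are illustrative assumptions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Toy parameters (illustrative assumptions, nowhere near cryptographic size).
n, m, q = 8, 32, 7681
sigma_V, sigma_P = 1.0, 50.0

A = rng.integers(0, q, size=(m, n))
s = rng.integers(0, 2, size=n)
e = np.rint(rng.normal(0, sigma_V, size=m)).astype(int)
u = (A @ s + e) % q                               # public LWE sample

def f(b, x):
    e_prime = np.rint(rng.normal(0, sigma_P, size=m)).astype(int)
    return (A @ x + b * u + e_prime) % q

def centered(v):
    """Representatives of Z_q in (-q/2, q/2]."""
    return ((v + q // 2) % q) - q // 2

def invert(y, b):
    """Recover the preimage x of y under f_b.

    The real scheme does this with the lattice trapdoor for A; at toy
    size, exhaustive search for the binary candidate whose residual
    looks like a small Gaussian error stands in for trapdoor inversion."""
    target = (y - b * u) % q                      # subtract b*(A s + e)
    best = min(product((0, 1), repeat=n),
               key=lambda c: np.abs(centered(target - A @ np.array(c))).sum())
    return np.array(best)

x = rng.integers(0, 2, size=n)
assert np.array_equal(invert(f(0, x), 0), x)      # recovery for b = 0
assert np.array_equal(invert(f(1, x), 1), x)      # recovery for b = 1
```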

The two remaining properties, the collapsing property (C) and the hardcore bit property (HC), require more work, and we refer to Reference 9 for a detailed exposition. We remark that the two properties are not entirely new. Property (C) was introduced by Unruh as a strengthening of the classical property of collision resistance, required for his work on the security of commitment protocols that are computationally binding against quantum adversaries Reference 31. Hardcore bit properties similar to (HC) have been shown for many LWE-based cryptographic schemes (see, e.g., Reference 4). Usually the property states that “for any vector , the value is indistinguishable from uniform, even given a sample ”. Our property (HC) is subtly stronger, in that the adversary may choose the vector itself, possibly as a function of the sample . An additional difficulty stems from the specific bit that the adversary predicts in our setting. In the definition of Assumption (HC) this bit is the value , where , are the binary representations of the two preimages in , and , of the prover’s commitment string . (Recall that the use of the binary representation came from the requirements on the honest prover, who is asked to perform a measurement in the Hadamard basis, yielding a binary string of outcomes.) It is in order to complete the argument that a procedure returning the information asked for in Assumption (HC) (i.e., the quadruple ) can be turned into a procedure breaking the decisional LWE assumption that we need to assume that the secret vector is a binary vector. The result is a somewhat roundabout construction that we may hope will be simplified in future work.

Acknowledgments

I am indebted to Urmila Mahadev for numerous conversations that helped clarify her work. I thank Victor Albert, Alexandru Georghiu, Urmila Mahadev, and Oded Regev for comments on earlier versions of these notes.

About the author

Thomas Vidick is professor of computing and mathematical sciences at California Institute of Technology. He obtained his undergraduate degree in mathematics from École Normale Supérieure in Paris in 2007, and he received his PhD in computer science from University of California, Berkeley, in 2012. He works in quantum complexity theory and cryptography.

Figures

Figure 2.

An example of a -message interactive proof between a verifier , assumed to be computationally bounded, and an all-powerful prover .

Figure 3.

Schematic representation of an instance of the Ising spin problem. Here each is a variable in , and each is a fixed coupling constant in . The goal is to find an assignment to the variables that minimizes the expression Equation 3.1.

Figure 4.

The tableau of a classical circuit. Here , , and there is one ancilla qubit, initialized to . The circuit has five NAND gates . The tableau is given by , that represents the state of each of the five circuit wires at successive stages of an execution of the circuit.

Figure 5.

Two different ways to create a tableau from a quantum circuit. The state is the tensor product of the state of the circuit at each time step. The state is their superposition, indexed by a clock register that goes from to .

Figure 6.

The Fitzsimons–Hajdušek–Morimae verification protocol

Let be a quantum circuit provided as input, and let be the -qubit Hamiltonian obtained from as in Equation 3.5.
(1)

The verifier initializes a counter to . She executes the following interaction with the prover independently times, where is a large enough universal constant:

(a)

The prover creates an eigenstate of with smallest eigenvalue.

(b)

The prover sends the qubits of one by one to the verifier.

(c)

The verifier selects a measurement uniformly at random, and measures each qubit in the associated basis upon reception. Let be the outcome for the th qubit.

(d)

The verifier selects uniformly at random among those pairs such that . She updates her counter .

(2)

If , the verifier accepts the interaction. Otherwise, she rejects.

Figure 7.

A computationally binding commitment protocol based on the use of a CRHF family . We refer to the party performing the commitment as the committer, and the party receiving the commitment as the verifier. The symbol means that is selected uniformly at random from the finite set .

Figure 8.

An illustration of the commitment procedure. On the left, the state is expressed in the known basis . On the right, the basis is an unknown (to the committer) pair of orthonormal vectors in a high-dimensional space.

Figure 9.

Committing to a qubit

Figure 10.

The Mahadev verification protocol

Let be a security parameter, let be a quantum circuit provided as input, and let be the -qubit Hamiltonian obtained from as in Equation 3.5. Let be the verifier’s procedures in a quantum commitment scheme such that the measurements and correspond to -bit outcome -qubit measurements in the computational and Hadamard bases, respectively. Let be the committer’s procedures in the same scheme.
(1)

The verifier initializes a counter to . She executes the following interaction with the prover independently times, where is a large enough universal constant:

(a)

The verifier generates a (public key, private key) pair , and sends the public key to the prover. She keeps the secret key private.

(b)

The prover creates an eigenstate of with smallest eigenvalue. Let be the associated density matrix. The prover executes and sends to the verifier.

(c)

The verifier selects a measurement uniformly at random, and asks the prover to reveal measurement outcomes in the basis .

(d)

The prover performs the measurement on the post-commitment state, and returns the outcome to the verifier.

(e)

The verifier computes . If , the verifier aborts the protocol. Otherwise, is an -bit string. The verifier selects uniformly at random among those pairs such that . She updates her counter .

(2)

If , the verifier accepts the interaction. Otherwise, she rejects.

Mathematical Fragments

Definition 1.1.

A language has an interactive proof system if there exists a randomized polynomial-time verifier such that the following hold.

Completeness. For any , there exists a prover , referred to as the honest prover, such that accepts on input with probability at least .

Soundness. For any and any prover , accepts on input with probability at most .

The collection of languages for which there exists an interactive proof system is denoted IP.

Definition 2.2.

A promise language is in BQP if there exists a classical deterministic polynomial-time Turing machine such that, on input the Turing machine returns the description of a quantum circuit over the gate set , together with a specially designated output qubit for the circuit, such that the following hold.

If , then the probability that a measurement of the output qubit of , when all input qubits are initialized to the state, is , is at least ;

If , then the same probability is at most .

Theorem 2.3 (Mahadev, 2018).

Any language in BQP has an interactive argument that is computationally sound against quantum polynomial-time provers under the LWE assumption. Moreover, the verifier runs in probabilistic polynomial time, the honest prover can be executed in quantum polynomial-time, and the interaction between verifier and prover consists of two rounds (four messages) only.

Equation (3.1)
Equation (3.3)
Equation (3.4)
Theorem 3.1.

For any integer there are , and such that the following holds. Given a -gate quantum circuit acting on qubits, such that , and an input for the circuit, there exist efficiently computable real weights such that for all and moreover if

then:

Completeness. If the circuit accepts its input with probability at least , then the smallest eigenvalue of is at most a;

Soundness. If the circuit accepts its input with probability at most , then the smallest eigenvalue of is at least .

Definition 4.1 (Classical commitment scheme).

A (noninteractive) commitment scheme (for a message space ) is a triple of probabilistic polynomial-time procedures such that

generates a key , when provided as input an integer written in unary.⁠Footnote15

15

is referred to as the security parameter. The reason it is provided in unary is that this guarantees, by the polynomial-time requirement on Gen, that the bit length of is polynomial in the integer .

Commit takes as input a key and an and returns a pair that consists of a commitment value , and a reveal value .⁠Footnote16

16

We write the input key to Commit as a subscript that is often omitted for convenience.

Reveal takes as input a key and a pair and returns a value such that , where is a special “failure” symbol.

Correctness. For any , if , then it holds that .

Statistical hiding. For any two the distributions of and are identical.

Computational binding. No probabilistic polynomial-time procedure can, given as input, return a triple such that and are such that with nonnegligible probability.
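The Gen/Commit/Reveal interface of this definition can be sketched with SHA-256 standing in for the CRHF family of Figure 7. This is only a heuristic illustration: a hash-based commitment of this form is binding under collision resistance, but it does not achieve the statistical hiding the definition requires, which calls for a carefully chosen function family.

```python
import hashlib
import secrets

def gen(security_parameter: int) -> bytes:
    """Select a key indexing a function from the family; with a fixed
    hash we simply sample a random salt of `security_parameter` bytes."""
    return secrets.token_bytes(security_parameter)

def commit(key: bytes, message: bytes):
    """Return a pair (commitment value c, reveal value r)."""
    r = secrets.token_bytes(32)                     # fresh randomness masks the message
    c = hashlib.sha256(key + r + message).digest()
    return c, r

def reveal(key: bytes, message: bytes, c: bytes, r: bytes):
    """Return the committed message, or None (playing the role of the
    failure symbol) if the opening is inconsistent with the commitment."""
    if hashlib.sha256(key + r + message).digest() == c:
        return message
    return None

key = gen(16)
c, r = commit(key, b"my message")
assert reveal(key, b"my message", c, r) == b"my message"   # correctness
assert reveal(key, b"another message", c, r) is None       # inconsistent opening fails
```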

Definition 4.2 (Qubit commitment scheme).

A qubit commitment scheme for a pair of observables on a Hilbert space is a triple of classical probabilistic polynomial-time procedures and a triple of quantum polynomial-time procedures such that

generates a key , when provided as input an integer written in unary.

Commit is a quantum polynomial-time procedure

such that for any density matrix on , where is an arbitrary reference system, the state is classical on (i.e., the first output register, , has been measured). We call the post-measurement state on the post-commitment state.

and are quantum measurements on that return classical strings and as their output, respectively.

and take as input a pair and and return a value and respectively.

Correctness. For any state , the distribution of

is statistically indistinguishable from the distribution obtained by measuring the register of using observable .⁠Footnote18 A similar condition holds for the combination of and .

18

Statistical indistinguishability between two families of distributions and means that there does not exist a procedure that, given as input as well as a sample from either or , can determine which is the case with a success probability that is larger than , for some nonnegligible function . This is equivalent to the statistical distance between the two distributions being at most . Computational indistinguishability is defined similarly, except that the distinguishing procedure is restricted to run in probabilistic polynomial time. For distributions on a single bit, computational indistinguishability implies statistical indistinguishability, but this is no longer the case for distributions over larger domains.

Statistical hiding. For every and , the distributions of and obtained from and , respectively, are statistically indistinguishable.

Computational binding. For any triple of quantum polynomial-time procedures and efficiently preparable state such that the values and defined as in 4.1 (with and replacing Commit and Meas respectively) are different from with nonnegligible probability, there exists a state such that the distribution of and , conditioned on or , respectively, are computationally indistinguishable from the outcome distribution obtained by directly measuring using and , respectively.

Equation (5.1)
Equation (5.2)
Equation (5.4)
Equation (5.5)
Equation (5.6)
Definition 5.1 (Committed qubit).

Given a commitment string and an arbitrary post-commitment state for the prover of the form Equation 5.5, let be the single-qubit state obtained from by performing the following operations:

(i)

Apply the isometry defined in Equation 5.6 to , yielding a state on (for some ).

(ii)

Measure qubits to in the Hadamard basis, obtaining an outcome string .

(iii)

If , apply a operator to the first qubit. Here and are the two preimages of the commitment string under .

(iv)

Return the first qubit.

Equation (5.7)
Equation (5.8)

References

Reference [1]
Scott Aaronson and Andris Ambainis, Forrelation: a problem that optimally separates quantum from classical computing, SIAM J. Comput. 47 (2018), no. 3, 982–1038, DOI 10.1137/15M1050902. MR3818333,
Reference [2]
Dorit Aharonov, Michael Ben-Or, and Elad Eban, Interactive Proofs For Quantum Computations, arXiv:0810.5375 (2008).
Reference [3]
Dorit Aharonov and Ayal Green, A quantum inspired proof of , arXiv:1710.09078 (2017).
Reference [4]
Adi Akavia, Shafi Goldwasser, and Vinod Vaikuntanathan, Simultaneous hardcore bits and cryptography against memory attacks, Theory of cryptography, Lecture Notes in Comput. Sci., vol. 5444, Springer, Berlin, 2009, pp. 474–495, DOI 10.1007/978-3-642-00457-5_28. MR2546214,
Reference [5]
László Babai, Trading group theory for randomness, Proceedings of the Seventeenth Annual ACM Symposium on Theory of Computing, ACM, 1985, pp. 421–429.
Reference [6]
Francisco Barahona, On the computational complexity of Ising spin glass models, J. Phys. A 15 (1982), no. 10, 3241–3253. MR684591,
Reference [7]
Hannes Bernien, Sylvain Schwartz, Alexander Keesling, Harry Levine, Ahmed Omran, Hannes Pichler, Soonwon Choi, Alexander S. Zibrov, Manuel Endres, Markus Greiner, et al., Probing many-body dynamics on a 51-atom quantum simulator, Nature 551 (2017), no. 7682, 579.
Reference [8]
Manuel Blum, Coin flipping by telephone: a protocol for solving impossible problems, ACM SIGACT News 15 (1983), no. 1, 23–27.
Reference [9]
Zvika Brakerski, Paul Christiano, Urmila Mahadev, Umesh Vazirani, and Thomas Vidick, Certifiable randomness from a single quantum device, arXiv:1804.00640 (2018).
Reference [10]
Zvika Brakerski and Vinod Vaikuntanathan, Efficient fully homomorphic encryption from (standard) LWE, SIAM J. Comput. 43 (2014), no. 2, 831–871, DOI 10.1137/120868669. MR3504684,
Reference [11]
Gilles Brassard, David Chaum, and Claude Crépeau, Minimum disclosure proofs of knowledge, J. Comput. System Sci. 37 (1988), no. 2, 156–189, DOI 10.1016/0022-0000(88)90005-0. Twenty-Seventh Annual IEEE Symposium on the Foundations of Computer Science (Toronto, ON, 1986). MR979117,
Reference [12]
Anne Broadbent, Joseph Fitzsimons, and Elham Kashefi, Universal blind quantum computation, 2009 50th Annual IEEE Symposium on Foundations of Computer Science—FOCS 2009, IEEE Computer Soc., Los Alamitos, CA, 2009, pp. 517–526, DOI 10.1109/FOCS.2009.36. MR2648431,
Reference [13]
Ran Canetti and Marc Fischlin, Universally composable commitments, Annual International Cryptology Conference, Springer, 2001, pp. 19–40.
Reference [14]
Toby Cubitt and Ashley Montanaro, Complexity classification of local Hamiltonian problems, SIAM J. Comput. 45 (2016), no. 2, 268–316, DOI 10.1137/140998287. MR3477325,
Reference [15]
Richard P. Feynman, Simulating physics with computers, Internat. J. Theoret. Phys. 21 (1981/82), no. 6-7, 467–488, DOI 10.1007/BF02650179. Physics of computation, Part II (Dedham, Mass., 1981). MR658311,
Reference [16]
Joseph F. Fitzsimons, Michal Hajdušek, and Tomoyuki Morimae, Post hoc verification of quantum computation, Phys. Rev. Lett. 120 (2018), no. 4, 040501, 5, DOI 10.1103/PhysRevLett.120.040501. MR3758139,
Reference [17]
Alexandru Gheorghiu, Theodoros Kapourniotis, and Elham Kashefi, Verification of quantum computation: an overview of existing approaches, Theory Comput. Syst. 63 (2019), no. 4, 715–808, DOI 10.1007/s00224-018-9872-3. MR3942255,
Reference [18]
Shafi Goldwasser, Silvio Micali, and Charles Rackoff, The knowledge complexity of interactive proof systems, SIAM J. Comput. 18 (1989), no. 1, 186–208, DOI 10.1137/0218012. MR978174,
Reference [19]
Sergey Gorbunov, Vinod Vaikuntanathan, and Hoeteck Wee, Attribute-based encryption for circuits, J. Assoc. Comput. Mach. 62 (2015), no. 6, Art. 45, 33, DOI 10.1145/2824233. MR3437230,
Reference [20]
Rishab Goyal, Venkata Koppula, and Brent Waters, Lockable obfuscation, 58th Annual IEEE Symposium on Foundations of Computer Science—FOCS 2017, IEEE Computer Soc., Los Alamitos, CA, 2017, pp. 612–621. MR3734265,
Reference [21]
Rishab Goyal, Venkata Koppula, and Brent Waters, Collusion resistant traitor tracing from learning with errors, Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, ACM, 2018, pp. 660–670.
Reference [22]
Joe Kilian, A note on efficient zero-knowledge proofs and arguments, Proceedings of the Twenty-Fourth Annual ACM Symposium on Theory of Computing, ACM, 1992, pp. 723–732.
Reference [23]
Carsten Lund, Lance Fortnow, Howard Karloff, and Noam Nisan, Algebraic methods for interactive proof systems, J. Assoc. Comput. Mach. 39 (1992), no. 4, 859–868, DOI 10.1145/146585.146605. MR1187215,
Reference [24]
Urmila Mahadev, Classical verification of quantum computations, 59th Annual IEEE Symposium on Foundations of Computer Science—FOCS 2018, IEEE Computer Soc., Los Alamitos, CA, 2018, pp. 259–267. MR3899595,
Reference [25]
Daniele Micciancio and Chris Peikert, Trapdoors for lattices: simpler, tighter, faster, smaller, Advances in Cryptology—EUROCRYPT 2012, Lecture Notes in Comput. Sci., vol. 7237, Springer, Heidelberg, 2012, pp. 700–718, DOI 10.1007/978-3-642-29011-4_41. MR2972927,
Reference [26]
Chris Peikert, A decade of lattice cryptography, Found. Trends Theor. Comput. Sci. 10 (2014), no. 4, i—iii, 283–424, DOI 10.1561/0400000074. MR3494162,
Reference [27]
Ran Raz and Avishay Tal, Oracle separation of BQP and PH, Electronic Colloquium on Computational Complexity (ECCC), vol. 25, 2018, p. 107.
Reference [28]
Oded Regev, On lattices, learning with errors, random linear codes, and cryptography, J. Assoc. Comput. Mach. 56 (2009), no. 6, Art. 34, 40, DOI 10.1145/1568318.1568324. MR2572935,
Reference [29]
Ben W. Reichardt, Falk Unger, and Umesh Vazirani, Classical command of quantum systems, Nature 496 (2013), no. 7446, 456.
Reference [30]
Adi Shamir, IP = PSPACE, J. Assoc. Comput. Mach. 39 (1992), no. 4, 869–877, DOI 10.1145/146585.146609. MR1187216,
Reference [31]
Dominique Unruh, Computationally binding quantum commitments, Advances in Cryptology—EUROCRYPT 2016. Part II, Lecture Notes in Comput. Sci., vol. 9666, Springer, Berlin, 2016, pp. 497–527, DOI 10.1007/978-3-662-49896-5_18. MR3516412,
Reference [32]
Daniel Wichs and Giorgos Zirdelis, Obfuscating compute-and-compare programs under LWE, 58th Annual IEEE Symposium on Foundations of Computer Science—FOCS 2017, IEEE Computer Soc., Los Alamitos, CA, 2017, pp. 600–611. MR3734264,

Article Information

MSC 2010
Primary: 68Q12 (Quantum algorithms and complexity)
Author Information
Thomas Vidick
California Institute of Technology, Pasadena, California 91106
vidick@cms.caltech.edu
Journal Information
Bulletin of the American Mathematical Society, Volume 57, Issue 1, ISSN 1088-9485, published by the American Mathematical Society, Providence, Rhode Island.
Publication History
This article was received on and published on .
Copyright Information
Copyright 2019 American Mathematical Society
Article References
  • DOI 10.1090/bull/1678
  • MathSciNet Review: 4037407
