
Blockchain Viewed from Mathematics

Yan X Zhang

Communicated by Notices Associate Editor Steven Sam


Much of human activity involves assets, objects that hold value that can be transferred between users, such as gold, cash, or stocks. In the modern era, we want some assets to be digital, or stored entirely as computer data. Since there are no physical items (like gold bars) to represent such an asset, its existence is equivalent to having a computerized ledger (or state) of the asset, which keeps track of how much of the asset each user owns, and a protocol, a set of rules that define how the ledger changes and govern user actions involving the asset.

For most currencies such as the USD (US Dollar), banks already implement digital assets by storing the ledger (say, how many USD each user owns) as numbers in some database; people can withdraw or transfer funds by running software that interacts with the database according to the protocol. However, we might want digital assets to have additional “democratic” properties:

(1)

The asset should be decentralized: there is no single central authority that could render the protocol unusable by failing and/or purposefully denying service to its users.

(2)

The asset should be permissionless: anyone should be able to become a user of the asset if they follow the protocol.

Banks fail these objectives. For example, if a bank is shut down (via computer failure, government coercion, etc.), the users lose access to their assets immediately. We call a digital asset that fulfills these additional properties a cryptocurrency. The word blockchain refers to an instantiation of a cryptocurrency-inspired protocol; it has also become the name of the field of research for these protocols.

Even as a mathematician with little interest in financial speculation, I enjoyed the mathematical content lying underneath the digital gold rush of cryptocurrencies and blockchain. I wrote this specific paper to answer the question, “what aspects of blockchain might be interesting for a mathematical audience?” What I mean by “mathematical” also includes fields adjacent to mathematics (basically, fields where practitioners prove theorems):

(1)

Since a blockchain saves some state across different users, it is natural to study it with distributed systems theory, especially consensus, which studies: “how do we get nodes to agree on something, where the network and/or some of the nodes can be faulty?”

(2)

Game theory and mechanism design are very relevant as blockchain participants are usually dealing with money. These fields predict user behavior and align the incentives of the users with those of the protocol designer.

(3)

Many primitives in blockchain depend on understanding what is “easy” or “hard.” In particular, blockchain tries to serve large numbers of users while keeping certain actions difficult to prevent attacks on the system. These are the fundamental problems of cryptography.

(4)

Cryptocurrencies (or currencies built on top of cryptocurrencies!) are financial assets. Mathematical finance studies these objects and associated phenomena.

To start, let us try to create a cryptocurrency from scratch, leading to the main ideas of Bitcoin. After that section, I will introduce case studies of specific topics. As they are mostly independent, I invite the reader to skip around guided by their own interests and background; for example, “Consensus Meets Blockchain” would be more interesting for a reader who is already familiar with and/or interested in theoretical computer science.

We Build a Cryptocurrency

First, we need to identify and authenticate users. We do this with a digital signature protocol, which provides pairs of public and private keys. The public key acts as an “account” (a user can have many public keys) that owns some amount of the asset, and the private key is kept by the owner and can be used to sign messages. Digital signatures allow anyone to verify (with the help of the associated public key) that it is extremely unlikely for anyone other than the owner of the matching private key to have signed the message. Messages are to be broadcast (sent to all users).
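To make this concrete, here is a minimal Python sketch of the sign-and-verify flow, assuming the third-party Python package cryptography is installed; the Ed25519 scheme and all names here are illustrative choices of mine, not ones any particular blockchain mandates.

    # A minimal sketch of the sign/broadcast/verify flow (illustrative, not any
    # specific blockchain's scheme), using Ed25519 from the "cryptography" package.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Key generation: the private key stays with the owner; the public key is the "account."
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # The owner signs a message (e.g., a transaction) before broadcasting it.
    message = b"A sends 3 coins to B"
    signature = private_key.sign(message)

    # Anyone holding the public key can check the signature.
    try:
        public_key.verify(signature, message)
        print("signature valid")
    except InvalidSignature:
        print("signature forged or message altered")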

Second, users need to be able to send assets. We allow users to create transactions, which are messages of the form “$A$ sends amount $x$ to $B$” (signed by $A$), where $A$ and $B$ are public keys representing their owners. A block is a collection of transactions. It is dangerous for blocks to be “taken out of context” since they can be mutually exclusive (for example, maybe the transaction “$A$ sends $x$ Bitcoin to $B$” was put in two blocks $b_1$ and $b_2$, so we do not want to double-count that as part of the same history). This motivates having each block refer to a particular state of the blockchain from which the block dictates the next steps to take; in other words, a block is a “state transition function” from a specific state to the next. Putting it together, each block should contain:

(1)

a reference to a parent block (if $b$ is $b'$'s parent, we say that $b'$ is $b$'s child);

(2)

a set of transactions;

(3)

a digital signature of the block publisher.

Complementing this design, at the beginning of our protocol, we can define a genesis block $b_0$, a block encoding an empty state with no parent. Then, we can visualize blocks as the nodes of a graph where each block has a directed edge towards its parent; this graph is a tree where each block $b$ identifies a unique chain $b, b_1, b_2, \ldots, b_0$ obtained by following parent edges down to the genesis block. Thus, each block corresponds to a unique state defined by the empty state iteratively updated by the transactions of the chain in the reverse order $b_0, \ldots, b_2, b_1, b$.
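As a toy illustration of this bookkeeping, the following Python sketch stores blocks as parent pointers and recovers the state of a block by replaying its chain's transactions starting from the genesis block; the names and the simple two-column ledger are my own simplifications.

    # A toy sketch: blocks form a tree via parent pointers, and the state of a block
    # is recovered by replaying its chain's transactions from the genesis block.
    # Validity checks (signatures, non-negative balances) are omitted.
    from dataclasses import dataclass, field
    from typing import Optional, Dict, List, Tuple

    Tx = Tuple[str, str, int]            # (sender public key, receiver public key, amount)

    @dataclass
    class Block:
        parent: Optional["Block"]        # None only for the genesis block
        transactions: List[Tx] = field(default_factory=list)
        publisher_signature: bytes = b""  # placeholder for the publisher's signature

    GENESIS = Block(parent=None)

    def state_of(block: Block) -> Dict[str, int]:
        """Replay the chain from the genesis block to `block` and return the ledger."""
        chain = []
        while block is not None:          # walk child -> parent until the genesis block
            chain.append(block)
            block = block.parent
        ledger: Dict[str, int] = {}
        for b in reversed(chain):         # apply blocks oldest-first
            for sender, receiver, amount in b.transactions:
                ledger[sender] = ledger.get(sender, 0) - amount
                ledger[receiver] = ledger.get(receiver, 0) + amount
        return ledger

    # Example: a single block on top of the genesis block.
    b1 = Block(parent=GENESIS, transactions=[("A", "B", 3)])
    print(state_of(b1))                   # {'A': -3, 'B': 3}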

Note that our blockchain can fork, which happens when we have two different child blocks with the same parent. We say that a block $b$ is a descendant (resp. ancestor) of another block $b'$ if there is a directed path from $b$ to $b'$ (resp., from $b'$ to $b$) in this graph. We say that two blocks conflict if neither is a descendant of the other (equivalently, when the two blocks appear in different paths in a fork). Two conflicting blocks correspond to different and possibly incompatible states; in particular, the leaf blocks (blocks with no children) are the candidates for “latest” states of the blockchain. We now have a problem: “how do we resolve forking?”

Figure 1.

A representative visualization of many blockchains inspired by Bitcoin.

To resolve forking, we need a rule that tells a user which block to consider as the “true state.” We need to consider users having potentially different views (the set of blocks they have seen), which can happen because of network latency, attackers, etc. Abstractly, we need a fork-choice rule that takes as input a pair $(V, b)$, where $V$ is a user's view and $b$ is a block in the view with at least one child, and outputs a unique child block of $b$. If we repeat the fork-choice rule starting at the genesis block, we are guaranteed to end up at a leaf block in $V$, which we call the head of the chain of the view. The protocol-following user is to consider the head of the chain as storing the “true history” of their view.

Forks are hard to avoid because they can be created non-maliciously. For example, in Figure 1, a fork between two sibling blocks may have occurred because an honest miner published one of them before seeing the other in their view. On the other hand, malicious forking can be very damaging. The typical example of this is a double-spending attack, where the attacker gets the other users to accept a block $b$ that includes a transaction $t$ in which the attacker paid some money, obtains some (real-life) service in return for $t$, and then gets the other users to accept a forked chain starting from the parent of $b$. If successful, then $t$ is effectively forgotten and the attacker can spend the money again in the future.

With these considerations, it is very important to have a fork-choice rule that is “robust” to forking. We now list a few intuitive approaches to fork-choice rules and why they fail:

(1)

Favor the “earliest” created child block when we see a fork: timestamps are easy to fake.

(2)

Take an arbitrary function of the child blocks (represented as integers/strings), and pick the block with the smallest function value: an attacker can eventually fork via a block with an even smaller value.

(3)

Pick the child block that would eventually lead to the longest chain of blocks: an attacker can just create a very long chain.

(4)

Have some sort of “vote” on the child blocks: an attacker can create many different accounts to get a higher number of votes.

Many of these attacks can be implemented as Sybil attacks, a generic term for when an attacker creates many users to deceive other users about the true state of the chain, including (but not limited to) creating frequent forking. Sybil resistance is especially important for blockchain because we want cryptocurrencies to be permissionless. The protocol designer needs to balance two forces:

(1)

The easier it is to publish blocks, the more susceptible the blockchain is to forking and Sybil attacks.

(2)

The harder it is to publish blocks, the slower the rate of transaction processing becomes.

Finally, we have not yet addressed the “leader selection” problem of “who publishes the blocks?” Each blockchain addresses all of these issues differently, starting with Bitcoin.

Bitcoin; Proof-of-Work

Bitcoin Nak19 is the first and still-dominant cryptocurrency and blockchain. Its approach to Sybil resistance is making block publication difficult, with the idea that if we make it costly to do a useful action, then it is costly to abuse/fake that action. This approach also addresses the leader selection problem: the user who performs the difficult action fastest gets to publish blocks.

First, if we make block publication “difficult,” then only certain users with access to computational resources would attempt to publish blocks; these users are called miners, though there is no “official” protocol labeling of miners and any user can theoretically be a miner. To incentivize miners to include their transactions and also to perform the “difficult” block publication, users are to include a fee in each transaction they create, which goes to the miner who includes the transaction.

Now, our goal is for block publishing to be “difficult” but for verification of a valid block to be “easy” (for the sake of other users). The clever idea that Bitcoin uses is a (cryptographic) hash function, which is a function that is fast to compute but whose outputs look “essentially random,” so that it is hard to find an input that enforces some property on the output. Specifically, if we represent the outputs of such a function as binary strings, then if we ask a miner to find an input to the function whose output starts with $d$ zeros for some “difficulty parameter” $d$, the best thing the miner can do is to repeatedly try random inputs, getting a satisfactory output roughly $2^{-d}$ of the time, which requires an expected $2^d$ rounds. This design is called proof-of-work because whenever a miner finds a successful input, it “proves” that the set of all miners did some (random) total amount of work (with expected total computation of $2^d$ rounds). This repetitive process feels like and is thus referred to as mining, which is also why the block publishers are called miners. Under this design, the next user to mine a block is essentially randomly chosen, with each miner's probability of being selected to publish a block proportional to their computational power.

Proof-of-work⁠Footnote1 We academics are well aware of the “academic proof-of-work”: research papers are like blocks where we cite previous papers as (multiple)“parents,” where we pay the “work” of peer-review to have papers published, else the archive of academia would be flooded with (more) useless papers. leads to the narrative that Bitcoin uses a lot of electricity—people are incentivized to buy machines optimized for computing these hash functions. In exchange, this “wasteful” plan is an elegant response against Sybil attacks: if making a block were free, then the network would be flooded with valid blocks, and the users would be hard-pressed to find consensus.

Specifically, miners follow this protocol:

(1)

We define the following fork-choice rule: each user (in their view) is to consider the leaf block with the highest total difficulty in its chain to be the head of the chain. This rule is often given the (misleading) name longest chain rule (it should really be renamed “heaviest chain rule”). The miner uses the longest chain rule to obtain their head of the chain $b$.

(2)

The miner then computes a difficulty $d$, a value dynamically set by the protocol that changes with time to target a specific rate of block generation for the blockchain.

(3)

The miner collects the transactions they want into a block (usually, the miner just picks a maximum number of transactions in the “pool” of outstanding transactions, sorted by the highest fees). We represent this set of transactions by some string $m$.⁠Footnote2 For the big picture it is sufficient to assume that $m$ is just a concatenation of strings encoding individual transactions; the technical implementation involving Merkle trees is slightly different.

(4)

The miner looks for a string $s$ (typically called a nonce, something to be used just once) such that the concatenation $m \| s$ satisfies

$$H(m \| s) = \underbrace{00\cdots0}_{d}\cdots,$$

so that the first $d$ bits of the hash result are equal to $0$. Here, $H$ is a protocol-agreed hash function and $d$ is the aforementioned difficulty.

(5)

The miner publishes a block with content $(m, s)$ and parent $b$.

(6)

The miner receives a block reward (new Bitcoin generated and given to the miner directly from the protocol) and the fees from the transactions included in the block.

This protocol aligns the incentives for the blockchain: the users are incentivized to tip the miners with fees to include their own transactions, and the miners are incentivized to do the computational work in order to receive the block reward and the fees.
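To make the mining steps concrete, here is a toy Python sketch of steps (3)–(5) above, with SHA-256 standing in for the protocol hash function $H$; difficulty adjustment, fee sorting, and rewards are omitted, and real miners use specialized hardware rather than a loop like this.

    # A toy sketch of the proof-of-work search: find a nonce s so that the hash of
    # the block payload starts with `difficulty` zero bits. The parent reference is
    # folded into the hashed payload here purely for illustration.
    import hashlib
    from itertools import count

    def leading_zero_bits(digest: bytes) -> int:
        """Number of leading zero bits of a byte string."""
        bits = bin(int.from_bytes(digest, "big"))[2:].zfill(8 * len(digest))
        return len(bits) - len(bits.lstrip("0"))

    def mine(transactions: bytes, parent_hash: bytes, difficulty: int) -> int:
        """Search for a nonce; the expected number of trials is about 2**difficulty."""
        for nonce in count():
            payload = parent_hash + transactions + nonce.to_bytes(8, "big")
            if leading_zero_bits(hashlib.sha256(payload).digest()) >= difficulty:
                return nonce

    # Example: an easy difficulty so the toy search finishes quickly.
    nonce = mine(b"A sends 3 coins to B", b"\x00" * 32, difficulty=16)
    print("found nonce:", nonce)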

Most cryptocurrencies take on the same basic skeleton of Bitcoin. I mention two more (and only two, to not exceed the Notices bibliography item limit):

Ethereum W14, the second largest blockchain, also implements a ledger for its cryptocurrency, Ether. However, the blockchain also stores smart contracts (executable source code), making it more of a “virtual computer.” Thus, the state of Ethereum also contains statements such as “$A$ has smart contract $C$” on top of statements such as “$A$ has $x$ Ether.” Transactions can also be of the form “associate the smart contract $C$ to $A$” or “run the smart contract associated with $A$.” As a consequence, many types of “software” have been written on Ethereum, including games, decentralized autonomous organizations (where users can put in stake to get voting rights), and decentralized exchanges (where users can trade assets built on top of Ethereum).

Zcash, based on SCG14, is one of the leading blockchains for private transfers. Despite the name, “cryptocurrencies” usually do not offer privacy past the pseudonymity of public keys. In the vast majority of designs (including Bitcoin), anyone can, in principle, keep track of the balances of all addresses. The protocol in SCG14 provides one transparent and one private chain. In the private chain, zk-SNARKs (a cryptographic tool that we will introduce later) provide arguments that valid transactions have occurred, without leaking information about the sender, the receiver, or the amount, while being able to detect, e.g., invalid transactions.

Consensus Meets Blockchain

A traditional consensus protocol, as found in distributed systems theory, is a set of rules to be followed by a set of replicas (users) sending messages to each other and changing internal states in order to agree on some value. Faulty replicas might crash and malicious replicas might attack the system (in cryptocurrencies, this is especially relevant as greed attracts malicious users). The usual approach in the field is to assume that no more than some fraction (usually $1/3$) of the replicas are byzantine (having completely unpredictable behavior) and then try to have the other honest (protocol-following) replicas agree. Specifically, during the execution of the protocol, a replica can be instructed to (irreversibly) commit to a value, and the goal is for all honest replicas to commit to the same value. Different consensus protocols optimize for different properties, such as:

safety: it is impossible to commit two conflicting⁠Footnote3 This word depends on context. If the value is, e.g., a number, “conflicting” simply means “different.” In more involved contexts, such as when the values represent the state of some machine, “conflicting” would mean representing incompatible states, analogous to conflicting blocks in different forks in blockchain. values;

liveness: it is always plausible to eventually have everyone commit values;

low complexity: not too many (and/or too long) messages are sent over the network.

There are a few differences between the usual assumptions of consensus and blockchain, such as:

The typical use case of a distributed system is a database with a small number of replicas. The permissionless nature of blockchain (in particular, a byzantine user can pretend to be many users) forces it to operate at a much larger scale.

The non-byzantine replicas in distributed systems are typically “obedient” nodes; the non-byzantine users in blockchain are typically “rational” human actors who must be incentivized (such as by fees) to perform costly actions.

In most consensus protocols, there is a leader replica proposing values and “phases” of replicas acknowledging these values; the phases need to be carefully designed to avoid well-coordinated attacks by byzantine actors. Practical Byzantine Fault Tolerance (PBFT) CL99 is representative of this school, guaranteeing safety when less than $1/3$ of the replicas are byzantine. It mainly⁠Footnote4 I am omitting some details, such as the “view change” operation for when a leader takes too long and the other replicas select a new leader. This is not just a matter of convenience; the protocol needs to protect against potentially byzantine leaders. operates as follows:

(1)

Pre-prepare phase: a leader proposes a value and broadcasts it as a message.

(2)

Prepare phase: if a replica receives a value, it broadcasts a prepare message.

(3)

Commit phase: if a replica receives matching prepare messages from more than $2/3$ of all replicas, it broadcasts a commit message; it internally commits the value if it receives matching commit messages from more than $2/3$ of all replicas (a quorum-counting sketch follows this list).
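Here is the promised sketch, in Python, of just the quorum-counting logic behind the prepare and commit phases, written for the usual setting of $n = 3f + 1$ replicas (so the quorum $2f + 1$ is the concrete form of “more than $2/3$”); view changes, message authentication, and networking are all omitted.

    # A sketch of only the quorum-counting logic of PBFT's prepare/commit phases,
    # for n = 3f + 1 replicas. Everything else about the protocol is omitted.
    from collections import defaultdict

    class ToyReplica:
        def __init__(self, n: int, f: int):
            assert n == 3 * f + 1
            self.quorum = 2 * f + 1            # "more than 2/3 of all replicas"
            self.prepares = defaultdict(set)   # value -> replicas that sent prepare
            self.commits = defaultdict(set)    # value -> replicas that sent commit
            self.committed = None

        def on_prepare(self, value, sender) -> bool:
            """Record a prepare; return True when we should broadcast our commit."""
            self.prepares[value].add(sender)
            return len(self.prepares[value]) >= self.quorum

        def on_commit(self, value, sender) -> bool:
            """Record a commit; internally commit once a quorum of commits is seen."""
            self.commits[value].add(sender)
            if self.committed is None and len(self.commits[value]) >= self.quorum:
                self.committed = value         # irreversible in a real protocol
            return self.committed == value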

It is interesting (and exciting) that Bitcoin, in particular proof-of-work, provides a new approach to consensus that basically avoids the concepts of leaders or phases. This is why Bitcoin’s proof-of-work design is also called Nakamoto Consensus, despite its motivation as a Sybil resistance mechanism.

Nakamoto Consensus does not have safety in the traditional consensus sense, though it is natural to interpret it as having “probabilistic safety”: blocks on the longest chain are likely to be “safe,” but it is plausible for any block to be “undone” if a chain with higher difficulty appears. As a case in point, ES14 analyzes selfish mining attacks. The main idea is that if the attackers have a “secret chain” consisting of a valid chain of unpublished but successfully “mined” (that is, with the correct nonces computed) blocks, they can delay the publication of the chain until the rest of the miners catch up. This wastes the other miners’ progress, so the attackers end up with more relative revenue. The tradeoff is that this delay may cause them to waste their own publication if the rest of the miners publish a competing block quickly.

We now give some details. In ES14's setup, the attackers have total computational power $\alpha$ (interpreted as a fraction of the total computational power of all users), with $\alpha < 1/2$ so that the attackers are a minority. With Bitcoin's design, this means that at any moment, the next block is mined (but not necessarily published) by the attackers with probability $\alpha$ and is mined (and immediately published) by a non-attacker miner with probability $1 - \alpha$. The authors then model the attacker's strategy as a Markov chain with states:

(1)

$0'$: the chain is forked, with one leaf block published by the attackers and one by the non-attackers (the authors assume that a non-attacker would mine on one of these two blocks randomly via some fixed Bernoulli distribution, while the attackers would mine on the attacker-published block);

(2)

$0$: there is a unique public head of the chain $b$; the attackers have no mined-but-unpublished chain that would be longer than $b$'s chain if published.

(3)

$i$ (for $i \geq 1$): there is a unique public head of the chain $b$; the attackers have a mined-but-unpublished chain (extending $b$ or forking from one of its ancestors) such that, if published, it would be $i$ blocks ahead of $b$.

State transitions happen when a block is mined, with nuances depending on the state and whether the attackers or non-attackers mine the block. For example, if a non-attacker publishes a block at state $i$ for some $i \geq 3$, the state moves to $i - 1$ since the attackers' unpublished chain's lead over the public chain has decreased by one block. However, if a non-attacker publishes a block at state $2$, the proposed strategy is having the attackers publish their two blocks immediately, skipping past⁠Footnote5 The authors do not explicitly justify why the attackers do not stay at state $1$. My interpretation is that if they do, they have two block rewards at risk if the non-attackers catch up (which advances the state to $0'$, from which the attackers are likely to lose the rewards). In contrast, if the attackers arrive at state $1$ from state $0$ instead of state $2$, only a single block reward is at risk if the attackers stay at $1$, so the cost-benefit analyses differ. It may be helpful to envision an additional state $1'$ to disambiguate the two state $1$'s. state $1$ to state $0$. See Figure 2 (essentially ES14, Figure 1) for a visualization.
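As a sanity check on this description, the following Monte Carlo sketch in Python simulates the state machine; $\alpha$ is the attackers' share of hash power and $\gamma$ is ES14's tie-breaking parameter (the fraction of honest miners who build on the attackers' block during a fork). The bookkeeping of which blocks end up on the main chain is my own simplification of ES14's accounting, not their exact analysis.

    # A rough Monte Carlo sketch of the selfish-mining state machine described above.
    import random

    def selfish_mining_revenue(alpha: float, gamma: float, steps: int = 200_000) -> float:
        attacker, honest = 0, 0    # blocks each side ends up with on the main chain
        lead, fork = 0, False      # attackers' private lead; fork=True is state 0'
        for _ in range(steps):
            if random.random() < alpha:           # attackers mine the next block
                if fork:                          # state 0': attackers extend their branch and win
                    attacker += 2
                    fork = False
                else:
                    lead += 1                     # keep the new block private
            else:                                 # a non-attacker mines the next block
                if fork:                          # race resolved by the honest block
                    fork = False
                    if random.random() < gamma:   # honest block extends the attackers' block
                        attacker += 1
                        honest += 1
                    else:                         # honest branch wins both blocks
                        honest += 2
                elif lead == 0:
                    honest += 1                   # nothing hidden; honest block is accepted
                elif lead == 1:
                    lead, fork = 0, True          # publish the private block: state 0'
                elif lead == 2:
                    attacker += 2                 # publish both, overriding the honest block
                    lead = 0
                else:                             # lead >= 3: publish one block; it is effectively final
                    attacker += 1
                    lead -= 1
        return attacker / (attacker + honest)

    # With alpha = 1/3 and gamma = 1/2, honest mining would earn 1/3 of the blocks;
    # the selfish strategy earns noticeably more.
    print(selfish_mining_revenue(alpha=1/3, gamma=1/2))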

By computing the transition probabilities and keeping track of the probabilistic block rewards, the authors of ES14 quantify the attack in terms of additional revenue to the attackers. This work is one of the first mathematical approaches to Bitcoin’s safety. In a similar vein, SZ15 analyzes Bitcoin’s susceptibility to double-spending attacks and proposes GHOST, a replacement for the longest chain rule with nicer properties.

Figure 2.

The Markov chain for selfish-mining.

The main idea of proof-of-stake, an alternative design paradigm to proof-of-work, is to disincentivize bad behavior by threatening to slash (take away) a user’s stake (money that they commit to the protocol). In effect, proof-of-stake tries to obtain the benefits of proof-of-work through game theory, replacing “do work to earn the right to an action” with “do an action for free, but be susceptible to losing money.” This is analogous to allowing a tenant to live in an apartment but seizing their deposit if they commit bad behavior. Proof-of-stake protocols are highly desired because they avoid “useless” work, similar to the pursuit of “clean energy.”

Proof-of-stake protocols tend to be similar to traditional consensus protocols.⁠Footnote6 A heuristic reason for this similarity is that if we incentivize (via rewards and punishment) “rational” validators to act as “obedient” honest nodes in the traditional protocol, we reduce to a similar setup as a traditional consensus protocol. They also have their own weaknesses; in particular, BCNPW19 shows that after making some assumptions, every “longest-chain variant” proof-of-stake protocol contains one of two properties, either of which makes the protocol vulnerable to a different attack. This result has an “impossibility theorem” flavor reminiscent of Arrow’s Theorem Arr50 from voting theory.⁠Footnote7 Tangent: one interpretation of Arrow’s Theorem is “if we do not want a dictatorship, then our voting system will have some inadequacies (which dictatorships avoid).” Analogously, if we do not want a central authority, then our blockchain will have inadequacies (which central authorities avoid)!

With proof-of-work inspiring, e.g., Nakamoto Consensus and proof-of-stake inspiring, e.g., HotStuff⁠Footnote8 HotStuff YMR19 is a consensus protocol modelled after proof-of-stake blockchain protocols; the paper also presents an abstract framework that generalizes several existing protocols. YMR19, it is natural to ask the following question.

Question 1.

Using concepts blockchain popularizes such as proof-of-work and proof-of-stake (and also VDFs, SNARKs, etc. which we will see later), can we innovate designs in consensus and other subfields (for example, self-stabilization studies how replicas in a potentially corrupted state can guarantee returning to a “good” state) of distributed systems?

We end with a final voting analogy—when we do not have a central authority, the users need to have some sort of “voting.” Standard consensus protocols such as PBFT use acknowledgment messages as votes to ensure safety, where voting power is proportional to the number of users (assuming there are not enough byzantine replicas to break protocol by, e.g., voting twice). Sybil attacks in a permissionless blockchain make such systems ineffective, so we must find an alternative to “voting with number of users.” Indeed, proof-of-work is “voting with computational work” and proof-of-stake is “voting with staked money”!

Mechanism Design; Gas Markets

Mechanism design and game theory are everywhere in blockchain since we must always consider the incentives of the various parties. For example, game theorists study auctions where participants bid money to win one or more rewards. Most proof-of-work blockchains involve transaction creators bidding fees to win the service of being included in a block, so they can naturally be modelled as auctions.

An Ethereum transaction includes a gas limit (in abstract units of gas), which estimates how complex the operation is for the Ethereum “computer,” and a gas price, which is the amount the user is willing to pay per unit of gas. For example, a simple Ether transfer has a gas limit of $21{,}000$. If the user bids a gas price of $g$ Ether per unit of gas for such a transaction, then the user would pay the product $21{,}000 \cdot g$ Ether to a miner including the transaction in a block.

From the miner’s perspective, the miner is incentivized to just take the maximum number of transactions they can fit, sorted by the highest gas prices. This means the highest gas price bidders “win” and then pay exactly what they bid, a design called a first-price auction. Finding optimal strategies is difficult in a first-price auction, especially with limited information. In practice for Ethereum (everything discussed so far also applies to Bitcoin and similar proof-of-work designs), participants often overbid, causing sharp spikes in gas prices.

In auction theory, such situations can often be improved by second-price auctions, which enjoy better theoretical properties. In our case, “second-price” actually means that each transaction included in a block would pay the miner the lowest bid among them. However, blockchain miners can do things traditional auctioneers cannot; they can include fewer transactions or create “fake” transactions themselves. For example, the blockchain might want a miner to include a maximum of three transactions in each block, but suppose the top three gas prices have values $g_1 \geq g_2 \geq g_3$ with $g_1 > 3g_3$, which would offer value $3g_3$ to the miner including all of them. Then the miner is incentivized to include just the first transaction, or to make two fake transactions bidding $g_1$ each (e.g., to pretend they are processing the maximum number of three transactions), both with a profit of $g_1$. In summary, auction theory works differently under blockchain, so we need new ideas.

EIP-1559 is a proposal for revising the Ethereum gas market. Its main idea is to incentivize transaction creators to bid close to a default value known as a “base fee,” so the user experience is closer to, e.g., Amazon where a buyer simply pays a posted price. Counterintuitively, it also asks the base fee portion to be destroyed permanently instead of being given to the miner (otherwise, the miner could collude with users to simulate a first-price auction).

(1)

From a state of the protocol, one can deterministically compute a base fee. The idea is that the base fee increases under high trading activity and decreases under low trading activity.

(2)

A transaction replaces the concept of a single gas price by a tip and a fee cap. If a transaction with tip $t$, fee cap $c$, and gas limit $g$ is included in a block with base fee $b$, then the creator of the transaction pays $g \cdot \min(b + t, c)$.

(3)

The miner receives $g \cdot (\min(b + t, c) - b)$; note this corresponds to the base fee being destroyed (a sketch of this fee split follows the list).
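Here is the promised sketch of the fee split in items (2) and (3); the variable names simply mirror the text (tip $t$, fee cap $c$, gas limit $g$, base fee $b$).

    # A minimal sketch of the EIP-1559 fee split described above.
    def eip1559_split(t: float, c: float, g: float, b: float):
        """Return (amount paid by the user, amount received by the miner, amount burned)."""
        if c < b:
            raise ValueError("fee cap below base fee: the transaction is not includable")
        price_per_gas = min(b + t, c)    # what the user pays per unit of gas
        paid = g * price_per_gas
        burned = g * b                   # the base fee portion is destroyed
        miner = paid - burned            # equivalently g * min(t, c - b)
        return paid, miner, burned

    # Example: gas limit 21000, base fee 100 (per gas), tip 2, fee cap 110.
    print(eip1559_split(t=2, c=110, g=21000, b=100))   # user pays 21000*102, miner gets 21000*2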

The report Rou20 contrasts this design with the status quo. The positive predictions are that there would be variance reduction, easier user discovery of optimal fees, resistance against fake transactions, and (beneficial) deflation. However, Rou20 argues that the average transaction fee will not change significantly and in periods of high demand the situation can revert to that of a first-price auction, where users might start inflating their tips.

Question 2.

Rou20 mostly analyzes single blocks. What can we say about EIP-1559 dynamics (base fee changes, collusion, etc.) over longer time periods?

Verifiable Delay Functions

With all this talk of proof-of-work and proof-of-stake, a fun mathematical tangent happens with “proof-of-time,” which would be a way to prove that a certain amount of time⁠Footnote9 Yes, there are also terms such as proof-of-space, proof-of-authority, and even proof-of-useful-work floating around. has elapsed. This is almost what happens in proof-of-work, though a miner running $k$ mining machines would on average use $1/k$ of the time to mine a block.⁠Footnote10 As in high school physics, work is (hashing) power multiplied by time. Thus, what we want is a family of functions $f_T$ such that:

(1)

given a parameter $T$, we can generate a specific function $f_T$ such that it takes around $T$ time to compute, and this computation cannot be easily parallelized;

(2)

after computing $f_T(x)$, we obtain a short proof $\pi$ that allows others to verify the computation very quickly.

If both conditions are satisfied, we call such a function a verifiable delay function (VDF).

Why would we want such a function? It is actually not to replace proof-of-work.⁠Footnote11 In proof-of-work, if Alice has a fraction $\alpha > 1/2$ of the resources in the world and Bob has the rest, then Alice publishes a fraction $\alpha$ of the blocks. However, if we naively replace the proof-of-work by computing a VDF, then Alice would publish nearly $100\%$ of the blocks, because her VDF-machine is simply faster than Bob's. This promotes a monopoly, which is against the decentralizing goal of blockchain. The primary application of VDFs to blockchain is generating randomness on the blockchain, a problem much trickier than it appears to be. As an example, suppose we want to generate a random number, but the result may benefit some users and hurt others (such as a lottery, settling some bet on the blockchain, or choosing the block proposer in proof-of-stake). One naive way to generate the randomness would be to, e.g., hash the data of a block and look at its first few bits. However, a miner who mines the block will immediately know whether the result benefits her. This means she might decide to censor (not publish) the block, spoiling our goal.

If we instead feed the data in the block to a VDF before hashing, then the miner cannot figure out the answer in advance. If the miner still wants to publish the block (e.g., for the block reward), she must do so before she can learn the answer. In summary, we have incentivized fair randomness by making sure that the last actor cannot peek ahead at the answer. It is also important for the function to be efficiently verifiable so that the other users of the chain can be convinced that the function was computed correctly.

Two of the earliest VDFs, by Wesolowski Wes20 and Pietrzak Pie19, use the following idea: if we have a group element $g$ of some finite group of unknown order (a concept we discuss more in the next section), then we do not know of a better way of computing $g^{2^T}$ than doing $T$ repeated squarings starting from $g$. Specifically:

(1)

To set up the VDF (with domain $X$), find an abelian group $G$ of unknown order and a hash function $H \colon X \to G$. Then save the parameters $(G, H)$.

(2)

To evaluate the VDF on $x \in X$, output $y = H(x)^{2^T}$ and a proof $\pi$.

To give a specific example of how $\pi$ is generated, we show Wesolowski's Wes20 method. Instead of showing $\pi$ directly, first we outline how a prover and a verifier can engage in an interactive proof (we will discuss interactive proofs in more detail soon) protocol as follows:

(1)

First pick a $T$; for a VDF meant to take, say, $10$ minutes, $T$ is the number of sequential squarings the fastest available hardware can do in that time.

(2)

The prover and verifier compute $g = H(x)$. The prover then computes $y = g^{2^T}$ in time $T$.

(3)

The verifier gives the prover a random prime $\ell$ (say, sampled uniformly among primes of a certain size, such as $128$-bit primes).

(4)

The prover computes $\pi = g^q$, where $2^T = q\ell + r$ with $0 \leq r < \ell$, and sends $\pi$ to the verifier. As $q$ is approximately $2^T/\ell$, computing $\pi$ takes roughly $T$ operations, a linear overhead to computing the VDF itself.

(5)

The verifier computes $r = 2^T \bmod \ell$ (which should equal the $r$ above) and checks that $\pi^\ell g^r = y$. Computing $r$, $\pi^\ell$, and $g^r$ takes $O(\log T)$, $O(\log \ell)$, and $O(\log \ell)$ operations, respectively, all of which are fast.

Finally, we can remove the interaction with the well-known Fiat-Shamir heuristic (using randomness to simulate verifier choices). Instead of asking the verifier for $\ell$, the prover and verifier can both generate $\ell$ by using some (public) hash function that takes $(g, y)$ as input and returns a random prime $\ell$. Then the prover can just give $\pi$ to the verifier as if she had received $\ell$ from the verifier in the interactive proof, and the verifier performs an identical verification with the same $\ell$.
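The following toy Python sketch puts the non-interactive version together. The modulus below is small and its factorization is known, so the group is emphatically not of unknown order; the point is only to illustrate the evaluate/prove/verify algebra. The prime-generation step leans on the third-party sympy package, and all parameter choices are toy values of my own.

    # A toy, non-interactive sketch of the repeated-squaring VDF and Wesolowski's proof.
    import hashlib
    from sympy import nextprime    # used only to derive the challenge prime

    N = 1000003 * 1000033          # stand-in modulus; a real VDF needs an unknown-order group
    T = 1 << 16                    # delay parameter: number of sequential squarings

    def hash_to_group(x: bytes) -> int:
        return int.from_bytes(hashlib.sha256(x).digest(), "big") % N

    def fiat_shamir_prime(g: int, y: int) -> int:
        digest = hashlib.sha256(f"{g},{y}".encode()).digest()
        return nextprime(int.from_bytes(digest[:16], "big"))   # ~128-bit challenge prime

    def evaluate(x: bytes):
        g = hash_to_group(x)
        y = g
        for _ in range(T):          # the sequential part: T repeated squarings
            y = y * y % N
        ell = fiat_shamir_prime(g, y)
        q, _ = divmod(1 << T, ell)  # 2^T = q*ell + r (real provers compute g^q without writing q down)
        pi = pow(g, q, N)           # the proof: about T more group operations
        return y, pi

    def verify(x: bytes, y: int, pi: int) -> bool:
        g = hash_to_group(x)
        ell = fiat_shamir_prime(g, y)
        r = pow(2, T, ell)          # r = 2^T mod ell, computed quickly
        return (pow(pi, ell, N) * pow(g, r, N)) % N == y

    y, pi = evaluate(b"some block data")
    print(verify(b"some block data", y, pi))   # True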

Question 3.

What are other ways to use VDFs creatively? For example, VDFs can be used to create an unbiased “random beacon” that broadcasts randomness at set time intervals, which can be useful for situations beyond blockchain.

Cryptography and Number Theory

Most of the cryptography blockchain research is concentrated in the fairly specialized “program” of interactive proofs and SNARKs (which I cover in the next sections), but there are occasional ad-hoc problems that may be interesting to non-specialists, especially number theorists. I now give a couple of examples.

First, constructing groups of unknown order is useful for several applications; we already saw this in VDFs, and it also comes up in SNARKs (in particular DARKs). One natural idea is an RSA group, where a trusted authority multiplies two big primes $p$ and $q$ to obtain $N = pq$ and then “throws away” $p$ and $q$. Given $N$, we can do computations in the multiplicative group $(\mathbb{Z}/N\mathbb{Z})^\times$, but we do not know the order of this group without the factorization of $N$. In the decentralized context of blockchain, however, such constructions involving a central authority are usually undesirable.

The more popular constructions of groups of unknown order involve ideal class groups of imaginary quadratic fields, which I now present (omitting some technical details). In short, an imaginary quadratic field is an algebraic extension

$$K = \mathbb{Q}(\sqrt{d}),$$

where $d$ is a negative square-free integer. We define the discriminant $\Delta$ to be $d$ if $d \equiv 1 \pmod 4$ and $4d$ otherwise. Each such field creates a ring of integers $\mathcal{O}_K$. The ideal class group $\mathrm{Cl}(K)$ is the quotient group of the group of non-zero fractional ideals of $\mathcal{O}_K$ modded out by the principal fractional ideals. The order of $\mathrm{Cl}(K)$, also called the class number of $K$, is a number that is believed to be difficult to compute for sufficiently large (and suitably chosen) $|\Delta|$, which means we can generate an ideal class group randomly and be fairly sure that the group order is hard to compute. For additional information, see Wes20 for the proposal of using ideal class groups for VDFs, BFS20 for the application to DARKs, or DGS20 for a mathematical review of these constructions and a proposal of using Jacobians of hyperelliptic curves (Lee20 gives counterpoints to the proposal).

Question 4.

How easy is it to get “partial information” from an ideal class group? For example (a specific question from Dan Boneh), given $\Delta$ (equivalently $K$), how hard is it to determine if the class number is divisible by $3$ or some other small prime?

Second, in blockchains (and more generally) it is often desired to have efficient and secure multiparty computation (MPC) where several users can collectively perform a computation requiring no trust in each other. For example, a blockchain may require a user to provide randomness. When several users want to act as a single user, they can use an MPC to jointly compute the randomness.

MPCs usually rely on some deterministic functions that simulate randomness (like hash functions, but with more requirements). One promising proposal uses the Legendre symbol $\left(\frac{a}{p}\right)$, which equals $1$ if $a$ is a quadratic residue (equal to the square of some element) modulo the prime $p$ and $-1$ otherwise; it obeys many nice properties and is easy to compute. Given a secret key $K$, define

$$L_K(x) = \left(\frac{K + x}{p}\right).$$

See https://legendreprf.org/bounties for references and bounties (for both theoretical and computational results) related to this construction.
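For concreteness, here is a small Python sketch of this pseudorandom function, computing the Legendre symbol by Euler's criterion and mapping it to a bit; the prime and key below are toy values of my own choosing.

    # A small sketch of the Legendre pseudorandom function L_K, mapped to bits.
    p = 2**61 - 1                      # a convenient Mersenne prime (toy choice)
    K = 123456789                      # the secret key (toy choice)

    def legendre_symbol(a: int) -> int:
        """Euler's criterion: a^((p-1)/2) mod p is 1 for residues, p-1 for non-residues."""
        s = pow(a % p, (p - 1) // 2, p)
        return 0 if s == 0 else (1 if s == 1 else -1)

    def legendre_prf_bit(x: int) -> int:
        """One output bit of L_K: 0 if K + x is a quadratic residue mod p, else 1."""
        return 0 if legendre_symbol(K + x) >= 0 else 1

    print([legendre_prf_bit(x) for x in range(16)])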

Question 5.

Given a small number of different evaluations of $L_K$, is it possible to recover $K$ with high probability?

Interactive Proofs; SNARKs

Interactive proofs form a subfield of cryptography that is evolving rapidly due to blockchain. Informally, an interactive proof is a protocol for a prover to convince a verifier (we can think of both as people armed with computers) of some fact by sending messages over potentially many rounds. A nice example of an interactive proof relevant to blockchain is the “sumcheck” protocol from LFKN92. In this protocol, the prover tries to convince the verifier that the prover performed the computation $S = \sum_{(x_1, \ldots, x_n) \in \{0,1\}^n} g(x_1, \ldots, x_n)$, where $g$ is a known $n$-variate polynomial of low degree over some finite field $\mathbb{F}$.

(1)

The protocol has $n$ rounds. In round $i$, the prover sends the polynomial

$$g_i(X) = \sum_{(x_{i+1}, \ldots, x_n) \in \{0,1\}^{n-i}} g(r_1, \ldots, r_{i-1}, X, x_{i+1}, \ldots, x_n),$$

where the $r_j$ are field elements to be received from the verifier. At the beginning of round $i$, the prover knows $r_1$ through $r_{i-1}$. In particular, at the beginning of the first round the prover does not have any $r_j$ and the expression also involves no $r_j$.

(2)

After receiving the polynomial $g_i$ in round $i$, the verifier computes $c_i = g_i(0) + g_i(1)$; then:

(a)

if $i = 1$, the verifier checks $c_1 = S$;

(b)

if $i > 1$, the verifier checks $c_i = g_{i-1}(r_{i-1})$.

Afterwards, the verifier gives a random $r_i \in \mathbb{F}$ to the prover.

(3)

After all $n$ rounds, the verifier checks if $g_n(r_n) = g(r_1, \ldots, r_n)$.

This algorithm is essentially a commitment to the computation which is narrowed down via the challenges given by the verifier one variable at a time. Its correctness is based on the fact that it is extremely unlikely (assuming $g$ is of low degree) that the prover can “keep up the act” if the prover did not in fact have such a computation. This idea is revisited frequently in deeper parts of blockchain research.
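To see the protocol in action, here is a toy Python run of sumcheck for a multilinear polynomial (degree at most one in each variable) over a small prime field; multilinearity keeps each round's polynomial linear, so the prover can send it as two evaluations. Both roles are played by the same script, and the example polynomial is my own.

    # A toy run of the sumcheck protocol above for a multilinear polynomial g.
    import random

    P = 2**31 - 1                                  # field size (a Mersenne prime)
    N_VARS = 3

    def g(x):                                      # example multilinear polynomial
        x1, x2, x3 = x
        return (3*x1*x2 + 5*x2*x3 + 7*x1 + 11) % P

    def cube(n):                                   # all points of {0,1}^n
        return [[(i >> j) & 1 for j in range(n)] for i in range(2**n)]

    def partial_sum(prefix, X):
        """Sum of g(prefix, X, tail) over the remaining boolean variables."""
        i = len(prefix)
        return sum(g(prefix + [X] + tail) for tail in cube(N_VARS - i - 1)) % P

    # Prover's claim: the full sum S over the boolean cube.
    S = sum(g(x) for x in cube(N_VARS)) % P

    claimed, rs = S, []
    for i in range(N_VARS):
        g_i0, g_i1 = partial_sum(rs, 0), partial_sum(rs, 1)  # prover sends g_i via two values
        assert (g_i0 + g_i1) % P == claimed                  # verifier's consistency check
        r = random.randrange(P)                              # verifier's random challenge
        claimed = (g_i0 + r * (g_i1 - g_i0)) % P             # g_i(r), since g_i is linear
        rs.append(r)
    assert claimed == g(rs)                                  # final check: one evaluation of g
    print("sumcheck passed; S =", S)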

In blockchain, the main interactive proof systems people study are (zk)-SNARKs, or (zero-knowledge) succinct non-interactive arguments of knowledge. Informally, the important terms are:

Zero-knowledge: the proof is done in a way that the verifier is convinced that the prover knows some information $w$, but the verifier does not gain any knowledge about $w$. For example, it is possible to create a proof that two graphs are isomorphic without giving away any information about the actual isomorphism. Note that a digital signature is exactly a zero-knowledge proof that the signer knows the secret key.

Succinct: the proof generation time is fast (usually linear in the computation length) and the verification time is fast (usually sublinear in the computation length). Note that the latter requirement implies sublinear proof size (since it takes time linear in the proof size for the verifier to read the proof).

Non-interactive: the total communication is one message sent by the prover. The name is unfortunate as most interactive proofs we see in practice are technically “non-interactive interactive proofs” thanks to the Fiat-Shamir heuristic (recall our VDF presentation).

SNARKs were introduced into blockchain via SCG14 to enable private transactions via the zero-knowledge property. However, the more important⁠Footnote12 Case in point: one of the most important proposals for scaling Ethereum is called “zk-Rollup” because of the usage of SNARKs in combining transactions and saving space; what is important here is entirely the succinctness of the SNARK and the “zk” part is actually irrelevant. goal of SNARKs is providing succinct verified computation. A blockchain fundamentally tries to save a shared state, which is the result of a sequence of state transition computations (such as blocks). Specifically, we may want to prove statements of the form “when the user $A$ (the prover) performed some computation from state $s$, using her own secret information (say $A$'s secret key), the new state is $s' = f(s)$,” where $f$ is a function encoding the state transition. If we can provide easily verifiable (succinct) proofs of such computations, then a blockchain can just put the proofs of the computations “on-chain” (as transactions) and have much of the computation be done “off-chain” instead.

Given an arbitrary computation, a SNARK typically converts the computation from “circuit” form (composed of logic gates) into polynomial equations via a process called “arithmetization.” Since we can combine many linear equations into a single polynomial equation, polynomials⁠Footnote13 The choice of polynomials is what makes ideas such as the aforementioned sumcheck algorithm important. are very important in scaling blockchains. SNARKs ignited a period of rapid cryptography research that resulted in many variations. I will just name a couple:

STARKs BSBHR19, which are transparent, meaning they do not require a trusted setup like the SCG14 SNARKs. As a bonus, STARKs are believed to be post-quantum secure. However, STARKs have much larger proofs than SNARKs due to the lack of preprocessing.

DARKs BFS20, which use groups of unknown order to achieve transparency with better asymptotic proof size and similar verification time compared to STARKs, but lose some “in-practice” efficiency and post-quantum security. As a theoretical contribution, the paper BFS20 also gives a framework which can be used to express the other proposed SNARKs, including STARKs.

Question 6.

Can we design other post-quantum SNARKs besides STARKs? In general, there is much interest in designing SNARKs which push any metrics (proof size, verification speed, post-quantum resistance, etc.) farther.

Recursive SNARKs and Pairs of Elliptic Curves

While it is old news by now, I have always found it pleasant that objects as “pure” as elliptic curves have found their way into cryptography. In blockchain, a particularly cute extension of this story is the appearance of amicable pairs of elliptic curves in the study of recursive SNARKs.

In standard blockchains, a new user must download the entire history to be completely sure that what they have is valid. A recursive SNARK design uses SNARKs recursively to prove statements of the form

$$s_n = f(f(\cdots f(s_0) \cdots)),$$

which says “the initial state $s_0$ was correctly iteratively updated until it now equals $s_n$.” Such guarantees would greatly reduce the total amount of data that users would need to hold. The key step of a recursive SNARK is to create a different SNARK that verifies the computation of verifying the first SNARK.

The construction of SNARKs in SCG14 is associated with a triple $(E, p, q)$, where $E$ is an elliptic curve over the finite field $\mathbb{F}_p$, $q$ is the prime order of some subgroup in $E(\mathbb{F}_p)$, and $(E, p, q)$ is ordinary and pairing-friendly. Sketching the essential details:

(1)

An elliptic curve $E$ over $F$ is a group of solutions to a particular family of polynomial equations over some field $F$. For us, we always have $F = \mathbb{F}_p$ for some prime $p$. For cryptographic operations, we typically focus our attention on some subgroup of prime order $q$ within $E(\mathbb{F}_p)$, so we always specify $q$ in conjunction.

(2)

The embedding degree of $(E, p, q)$ is the minimal natural number $k$ such that $q$ divides $p^k - 1$. It only depends on $p$ and $q$, though those are always in the context of $E$.

(3)

We call $(E, p, q)$ pairing friendly if the subgroup of order $q$ is large and the embedding degree of $(p, q)$ is not too big.⁠Footnote14 Depending on the source, I have seen “not too big” mean bounded above by some constant times a logarithm of the group size, or by a straight constant. The main idea here is that if the embedding degree is too big, we cannot compute pairings efficiently on the curve, which is needed for the underlying cryptography.

(4)

We call $(E, p, q)$ ordinary if $E$ is not supersingular, a technical condition (which, for the triples we consider, can be checked from $p$ and $q$ alone, though those are always in the context of $E$) which guarantees that the embedding degree of $(p, q)$ is not too small.⁠Footnote15 A small embedding degree allows the discrete logarithm problem to be more easily broken, so it is also not desirable. A non-ordinary, or supersingular, curve can only have embedding degree at most $2$ over a prime field $\mathbb{F}_p$.

With these assumptions, we can create a SNARK which proves that the prover performed some computation in $\mathbb{F}_q$ arithmetic, while the SNARK itself involves $\mathbb{F}_p$ arithmetic. Thus, if we have an elliptic curve $E_1$ over $\mathbb{F}_p$ encoding computations in $\mathbb{F}_q$, to perform the key recursive SNARK step, we want to verify computations done in $\mathbb{F}_p$, meaning we want a different elliptic curve $E_2$ over some $\mathbb{F}_q$ which has an order-$p$ subgroup.

Thus, to perform this operation ad infinitum, we want an infinite walk in the pairing friendly graph of elliptic curves:⁠Footnote16 As defined in https://coinlist.co/build/coda/pages/theory.

Each vertex of the graph is a pairing friendly and ordinary triple $(E, p, q)$.

We have an edge from $(E_1, p_1, q_1)$ to $(E_2, p_2, q_2)$ if $p_2 = q_1$.

The easiest way to create an infinite walk is via a self-loop/$1$-cycle, but this is impossible since we cannot have $p = q$ (no power of $p$ is congruent to $1$ modulo $p$, so the embedding degree would not even exist). It is then natural to ask the following question.

Question 7.

Can we find/classify pairs (or longer cycles) of ordinary pairing-friendly elliptic curves $E_1$ over $\mathbb{F}_p$ and $E_2$ over $\mathbb{F}_q$ such that $\#E_1(\mathbb{F}_p) = q$ and $\#E_2(\mathbb{F}_q) = p$, ideally with larger embedding degrees than the known examples?

As of now, the only known family of such $2$-cycles consists of pairs of MNT curves with (lower than desired for us) embedding degrees $4$ and $6$. CCW19 proves strong restrictions on cycles of MNT curves, showing in particular that they can only involve these small embedding degrees, so this direction is a dead end. They also show that $2$-cycles for certain pairs of small embedding degrees are not possible, though slightly bigger degrees remain open.
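To illustrate just the “orders swap” condition of a $2$-cycle, the following brute-force Python sketch counts points on toy curves over tiny primes and searches for a pair with $\#E_1(\mathbb{F}_p) = q$ and $\#E_2(\mathbb{F}_q) = p$; it ignores the pairing-friendliness and ordinariness requirements entirely and has nothing to do with cryptographic sizes.

    # A brute-force toy search for a 2-cycle of elliptic curves: E1/F_p with q points
    # and E2/F_q with p points. Pairing-friendliness and ordinariness are ignored.
    from sympy import isprime

    def curve_order(p, a, b):
        """Count points on y^2 = x^3 + ax + b over F_p (including the point at infinity)."""
        if (4 * a**3 + 27 * b**2) % p == 0:
            return None                          # singular curve; skip
        squares = {(y * y) % p for y in range(p)}
        count = 1                                # the point at infinity
        for x in range(p):
            rhs = (x**3 + a * x + b) % p
            if rhs == 0:
                count += 1
            elif rhs in squares:
                count += 2
        return count

    def find_two_cycle(max_p=50):
        for p in filter(isprime, range(5, max_p)):
            for a1 in range(p):
                for b1 in range(p):
                    q = curve_order(p, a1, b1)
                    if q is None or q < 5 or q == p or not isprime(q):
                        continue
                    for a2 in range(q):
                        for b2 in range(q):
                            if curve_order(q, a2, b2) == p:
                                return (p, a1, b1), (q, a2, b2)

    print(find_two_cycle())   # e.g., ((p, a1, b1), (q, a2, b2)) with the orders swapped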

Decentralized Finance; CFMMs

Broadly speaking, finance happens when there are (financial) assets. Assets include money, gold, stocks, or more complicated instruments, such as derivatives, contracts containing logic depending on other assets. Assets generate markets, mechanisms/protocols where they can be traded. The amount of assets available on a market to trade is called liquidity. Classical mathematical finance studies these financial objects. Decentralized finance (DeFi) does the same, except with financial objects that exist on a blockchain.

Many of these concepts are fairly close to traditional financial concepts, and can be studied similarly. For example, many tokens (currencies built on top of a blockchain, usually Ethereum) are analogous to pachinko balls (assets bought with money that can be used to perform specific tasks) or stocks (assets that can pay money later or be used as governance). Also, stablecoins are tokens that are designed to stay at a fixed value “in the real world,” like how Hong Kong pegs its currency to a constant multiple of the USD. However, some concepts are truly unique to DeFi. I introduce one now.

Traditionally, a market maker is a person working for a stock exchange who adjusts (based on market conditions) assets available for market participants. The market maker “balances” the exchange (by creating positions where other participants can both buy and sell) and makes money “from the spread” (by buying the same assets at slightly lower prices and selling at slightly higher prices). Blockchain allows automated market makers, which are smart contracts that provide this type of service with no (or optional) human oversight.

Uniswap (https://uniswap.org/) is such an automated market maker. Uniswap is a smart contract that holds an amount $x$ of one asset and $y$ of another, and allows people to trade between the two assets as long as they keep the product $xy$ equal to some “constant” $c$ (not accounting for a trading fee). Uniswap is extremely successful: it provides billions of dollars of liquidity, completely automated.

Formally, AC20 generalizes Uniswap (and other similar protocols) to constant function market makers (CFMMs). These are defined by a trading function $\varphi$ and a set of reserves $R = (R_1, \ldots, R_n)$, which encodes the amounts of the $n$ types of coins a smart contract holds. Besides $R$, the trading function takes as input $\Delta$ (a trader's offer to the contract) and $\Lambda$ (what the trader wants in return). The CFMM then accepts the trade if and only if $\varphi(R, \Delta, \Lambda) = \varphi(R, 0, 0)$; alternatively, it accepts a trade if and only if the trading function is kept constant, hence the name. Afterwards, the coins are exchanged, and $R$ now equals $R + \Delta - \Lambda$.

In this framework, Uniswap is a CFMM with $n = 2$ and

$$\varphi(R, \Delta, \Lambda) = (R_1 + \gamma\Delta_1 - \Lambda_1)(R_2 + \gamma\Delta_2 - \Lambda_2),$$

where $\gamma = 1 - f$ and $f$ is the percentage fee of a trade. Balancer (https://balancer.finance/) and Curve (https://www.curve.fi/) are two more automated market makers which can be expressed as CFMMs. CFMMs are a great starting point for DeFi research due to their balance between mathematical clarity, simplicity, and actual usage.
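As a small illustration of the CFMM formalism, here is a toy two-asset constant-product market maker in Python: a trade is accepted exactly when it keeps the trading function constant. The names and the fee handling are my own simplifications, not Uniswap's actual contract logic.

    # A toy two-asset constant-product CFMM in the spirit of the AC20 framework.
    class ConstantProductCFMM:
        def __init__(self, r1: float, r2: float, fee: float = 0.003):
            self.r1, self.r2 = r1, r2
            self.gamma = 1.0 - fee                  # fraction of the input that "counts"

        def phi(self, d1, d2, l1, l2) -> float:
            """Trading function on reserves, offered amounts, and requested amounts."""
            return (self.r1 + self.gamma * d1 - l1) * (self.r2 + self.gamma * d2 - l2)

        def accepts(self, d1, d2, l1, l2, tol=1e-9) -> bool:
            return abs(self.phi(d1, d2, l1, l2) - self.phi(0, 0, 0, 0)) <= tol

        def trade_1_for_2(self, d1: float) -> float:
            """Offer d1 of asset 1; return the amount of asset 2 the invariant allows."""
            k = self.r1 * self.r2
            l2 = self.r2 - k / (self.r1 + self.gamma * d1)
            assert self.accepts(d1, 0, 0, l2)
            self.r1 += d1                           # reserves gain the full input...
            self.r2 -= l2                           # ...and lose what the trader takes out
            return l2

    pool = ConstantProductCFMM(r1=1000.0, r2=10.0)  # e.g., 1000 of a dollar-like token, 10 of another
    print(pool.trade_1_for_2(100.0))                # amount of asset 2 received for 100 of asset 1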

The work TW is an example of traditional mathematical finance applied to (a slight variation of) Uniswap. The work takes the perspective of the liquidity provider (LP), which we can think of as the person who owns the two assets held by Uniswap.⁠Footnote17 In reality, there would be potentially many LPs all contributing different amounts of liquidity. If $x_t$ and $y_t$ are the amounts of the two assets held by the smart contract at time $t$, then the wealth of the LP is defined by $W_t = x_t + P_t y_t$, where $P_t$ is the external price, or how much a unit of the second asset is worth in terms of the first. For example, the LP could be holding USD and Ether in Uniswap, while $P_t$ is the (current) price of Ether in USD. Then the wealth is a measure of how much the LP's assets are valued in USD. The authors make a few assumptions:

(1)

the external price $P_t$ follows a discrete simple random walk with (multiplicative) steps $c^{\pm 1}$ for some constant $c > 1$;

(2)

the “outside world” is modeled by a single trader (whom we call OT for “outside trader”) with unlimited assets who trades with Uniswap if and only if the trade makes them money; e.g., if they can trade $a$ of the first asset for $b$ of the second asset such that $a < P_t b$, they do.

The authors' goal is to find

$$\lim_{t \to \infty} (W_t)^{1/t},$$

the asymptotic geometric return of the LP's wealth. Their main idea is defining the implicit price $\tilde{P}_t = x_t / y_t$; one way to motivate this quantity is to consider the feeless limit, where the OT's strategy essentially becomes keeping the ratio $x_t / y_t$ equal to the external price $P_t$. Then, the authors note that the system follows a random walk where:

(1)

the state space has $m$ states ($m$ being a quantity derived from $c$ and the fee) labeled $1, \ldots, m$;

(2)

at each step we go from state $i$ “up” to $i + 1$ with probability $p$ or “down” to $i - 1$ with probability $1 - p$ for some constant $p$;

(3)

if we attempt to go “up” at the top state $m$, then a trade happens: the LP's holdings $x$ and $y$ change to new values, and the walk stays at $m$. A similar result happens if we attempt to go “down” at state $1$.

We “hit the boundary” exactly when the OT can make money by trading with the LP; the changes to $x$ and $y$ are obtained by solving for the optimal trade amount from the OT's perspective. The random walk has the very nice property that after resolving each trade, the resulting $x$ and $y$ leave the key quantity driving the walk unchanged, which has no obvious reason to be true! Using this and other properties of the random walk, TW derives a closed formula for the desired asymptotic return.

Question 8.

Can we generalize this type of analysis (wealth growth, liquidity growth, expected trade volume, etc.) to other CFMMs? Do we always get a random walk with nice properties, or did we just get “lucky” with a particularly nice trading function?

Acknowledgments

I thank Vitalik Buterin, Jing Chan, Justin Drake, Dankrad Feist, Petr Kuznetov, Steven Sam, Yi Sun, Andrei Tonkikh, and Barry Whitehat for valuable conversation. I thank two anonymous referees for useful feedback. I disclose that I have worked as a research consultant for the Ethereum Foundation.

References

[AC20]
Guillermo Angeris and Tarun Chitra, Improved price oracles: Constant function market makers, arXiv:2003.10001, 2020.
[Arr50]
Kenneth J. Arrow, A difficulty in the concept of social welfare, J. Political Economy 58 (1950), no. 4, 328–346.
[BCNPW19]
Jonah Brown-Cohen, Arvind Narayanan, Alexandros Psomas, and S. Matthew Weinberg, Formal barriers to longest-chain proof-of-stake protocols, Proceedings of the 2019 ACM Conference on Economics and Computation, 2019, pp. 459–473.
[BFS20]
Benedikt Bünz, Ben Fisch, and Alan Szepieniec, Transparent SNARKs from DARK compilers, Annual International Conference on the Theory and Applications of Cryptographic Techniques, Springer, 2020, pp. 677–706.
[BSBHR19]
Eli Ben-Sasson, Iddo Bentov, Yinon Horesh, and Michael Riabzev, Scalable zero knowledge with no trusted setup, Annual International Cryptology Conference, Springer, 2019, pp. 701–732.
[CCW19]
Alessandro Chiesa, Lynn Chua, and Matthew Weidner, On cycles of pairing-friendly elliptic curves, SIAM J. Appl. Algebra Geom. 3 (2019), no. 2, 175–192, DOI 10.1137/18M1173708. MR3934100
[CL99]
Miguel Castro and Barbara Liskov, Practical byzantine fault tolerance, Proceedings of the Third Symposium on Operating Systems Design and Implementation (OSDI '99), USENIX Association, 1999, pp. 173–186.
[DGS20]
Samuel Dobson, Steven D. Galbraith, and Benjamin Smith, Trustless groups of unknown order with hyperelliptic curves, IACR Cryptol. ePrint Arch. 2020 (2020), 196.
[ES14]
Ittay Eyal and Emin Gün Sirer, Majority is not enough: Bitcoin mining is vulnerable, International Conference on Financial Cryptography and Data Security, Springer, 2014, pp. 436–454.
[Lee20]
Jonathan Lee, The security of groups of unknown order based on jacobians of hyperelliptic curves, IACR Cryptol. ePrint Arch. 2020 (2020), 289.
[LFKN92]
Carsten Lund, Lance Fortnow, Howard Karloff, and Noam Nisan, Algebraic methods for interactive proof systems, J. Assoc. Comput. Mach. 39 (1992), no. 4, 859–868, DOI 10.1145/146585.146605. MR1187215
[Nak19]
Satoshi Nakamoto, Bitcoin: A peer-to-peer electronic cash system, Manubot, 2019, https://git.dhimmel.com/bitcoin-whitepaper/.
[Pie19]
Krzysztof Pietrzak, Simple verifiable delay functions, 10th Innovations in Theoretical Computer Science, LIPIcs. Leibniz Int. Proc. Inform., vol. 124, Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, 2019, Art. No. 60, 15 pp. MR3899854
[Rou20]
Tim Roughgarden, Transaction fee mechanism design for the Ethereum blockchain: An economic analysis of EIP-1559, arXiv:2012.00854, 2020.
[SCG14]
Eli Ben Sasson, Alessandro Chiesa, Christina Garman, Matthew Green, Ian Miers, Eran Tromer, and Madars Virza, Zerocash: Decentralized anonymous payments from bitcoin, 2014 IEEE Symposium on Security and Privacy, IEEE, 2014, pp. 459–474.
[SZ15]
Yonatan Sompolinsky and Aviv Zohar, Secure high-rate transaction processing in Bitcoin, Financial Cryptography and Data Security, Lecture Notes in Comput. Sci., vol. 8975, Springer, Heidelberg, 2015, pp. 507–527, DOI 10.1007/978-3-662-47854-7_32. MR3395041
[TW]
Martin Tassy and David White, Growth rate of a liquidity provider's wealth in $xy = c$ automated market makers.
[Wes20]
Benjamin Wesolowski, Efficient verifiable delay functions, J. Cryptology 33 (2020), no. 4, 2113–2147, DOI 10.1007/s00145-020-09364-x. MR4163449
[W14]
Gavin Wood et al., Ethereum: A secure decentralised generalised transaction ledger, Ethereum project yellow paper 151 (2014), no. 2014, 1–32.
[YMR19]
Maofan Yin, Dahlia Malkhi, Michael K. Reiter, Guy Golan Gueta, and Ittai Abraham, HotStuff: BFT consensus with linearity and responsiveness, Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing (PODC '19), ACM, 2019, pp. 347–356.

Yan X Zhang is an assistant professor of mathematics at San José State University. His email address is yan.x.zhang@sjsu.edu.

Article DOI: 10.1090/noti2365

Credits

Opening image is courtesy of Suebsiri via Getty.

Figures 1 and 2 and photo of Yan X Zhang are courtesy of Yan X Zhang.