Notices of the American Mathematical Society
Volume 72 | Number 5 | May 2025
Turing in the Shadows of Nobel and Abel: An Algorithmic Story Behind Two Recent Prizes
The 2021 Nobel Prize in physics was awarded to Giorgio Parisi “for the discovery of the interplay of disorder and fluctuations in physical systems from atomic to planetary scales,” and the 2024 Abel Prize in mathematics was awarded to Michel Talagrand “for his groundbreaking contributions to probability theory and functional analysis, with outstanding applications in mathematical physics and statistics.” What remains largely absent in the popular descriptions of these prizes, however, is the profound impact the works of both individuals have had on the field of algorithms and computation. The ideas first developed by Parisi and his collaborators, relying on remarkably precise physics intuition, and later confirmed by Talagrand and others by no less remarkable mathematical techniques, have revolutionized the way we think algorithmically about optimization problems involving randomness. This is true both in terms of the existence of fast algorithms for some optimization problems and in terms of our persistent failure to find such algorithms for some other optimization problems.
The goal of this article is to highlight these developments and explain how the ideas pioneered by Parisi and Talagrand have led to a remarkably precise characterization of which optimization problems admit fast algorithms, versus those which do not, and furthermore to explain why this characterization holds true.
1. Optimization of Random Structures
The works of Parisi and Talagrand—which were devoted to understanding a mysterious object in statistical physics known as spin glass—offered an entirely novel way of understanding algorithmic successes and failures in tackling optimization problems involving randomness. This progress was propelled by understanding the “physics” properties of the underlying problems, namely the geometry of the solution space. We will illustrate these notions using three examples, all of which fit into a general framework of optimization problems of the form
$$\max_{\sigma\in\Theta_n} H_n(\sigma,\xi_n). \tag{1}$$

Here $H_n$ is the function to be maximized, which we call the "Hamiltonian" for reasons to be explained later, $\Theta_n$ is the space of potential solutions, and $\xi_n$ is a random variable describing the source of randomness. Our three examples all exhibit a persistent computation-to-optimization gap, namely a gap between the optimal value and the best value guaranteed to be achievable by known efficient algorithms—where "efficient" means "polynomial-time" in the parlance of the theory of computational complexity.
Example I. Largest cliques and independent sets in random graphs
The first of our three examples is the problem of finding a largest clique in a dense Erdős–Rényi graph and the related problem of finding a largest independent set in a sparse Erdős–Rényi graph.
Imagine a club with $n$ members where every pair of members are friends or nonfriends with equal likelihood, independently across all $n(n-1)/2$ member pairs. What is the size of the largest group of members who are all friends with each other? This quantity is known as the clique number of the graph whose nodes are the members and whose edges correspond to the pairs of friends. This club of random friend pairings is just a vernacular description of a model of a random graph known as an Erdős–Rényi graph, denoted $\mathbb{G}(n,1/2)$. Paul Erdős introduced this model in 1948 as a very effective tool to study and solve problems in the field of extremal combinatorics. The question of estimating the largest clique size of an Erdős–Rényi graph was solved successfully in the 1970s and has a remarkably precise answer: with high probability (namely with probability approaching $1$ as $n\to\infty$, denoted as "whp"), the size is $(2+o(1))\log_2 n$. Here $o(1)$ denotes a term which converges to zero as $n\to\infty$. This is an instance of our optimization problem (1) in which $\Theta_n$ is the set of all subsets of the node set of the graph, $\xi_n$ is the random graph itself, and $H_n(\sigma,\xi_n)$ is the cardinality of $\sigma$ when $\sigma$ is a clique of the graph and $-\infty$ otherwise.
The asymptotic value $2\log_2 n$ has a very simple intuition. Fix an integer value $k$ and let $Z_k$ denote a random variable corresponding to the number of subsets of the node set which have cardinality $k$ and are cliques. By the linearity of expectation, the expected value of this random variable is $\binom{n}{k}2^{-\binom{k}{2}}$. By straightforward computation, this quantity converges to zero when $k$ is asymptotically larger than $2\log_2 n$, which provides a simple proof of the upper bound. The matching lower bound requires more work involving estimation of the variance of $Z_k$, which is now standard material in many textbooks on random graphs.
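To make the first-moment computation concrete, here is a minimal Python sketch (an illustration under the notation above, not part of the original analysis; the helper name expected_k_cliques is ours). It evaluates $\mathbb{E}[Z_k]=\binom{n}{k}2^{-\binom{k}{2}}$ and reports the first $k$ at which this expectation drops below one, which closely tracks $2\log_2 n$.

```python
from math import comb, log2

def expected_k_cliques(n: int, k: int) -> float:
    # E[Z_k] = C(n, k) * 2^{-C(k, 2)} for the Erdos-Renyi graph G(n, 1/2)
    return comb(n, k) * 2.0 ** (-comb(k, 2))

for n in (100, 1000, 10000):
    k = 1
    while expected_k_cliques(n, k) >= 1.0:   # smallest k with E[Z_k] < 1
        k += 1
    print(f"n={n:6d}  first k with E[Z_k] < 1: {k:3d}   2*log2(n) = {2 * log2(n):.1f}")
```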
What about the problem of efficiently finding the largest clique algorithmically, when the graph is given to the algorithm designer as an input? Here and elsewhere we adhere to the gold standard of algorithmic efficiency: we require that the algorithm belongs to the class $\mathcal{P}$ of algorithms whose running time grows at most polynomially in the size of the problem description, which in our case means it grows at most polynomially in $n$, the number of nodes. Note that the largest clique problem can be trivially solved in time $n^{O(\log n)}$ by checking all subsets with cardinality $O(\log n)$; this growth rate is super-polynomial, and thus this exhaustive search algorithm is not in $\mathcal{P}$.
This is where our story begins. In 1976, Richard Karp—one of the founding fathers of the modern algorithmic complexity theory—presented a very simple algorithm that computes a clique whose size is guaranteed to be (roughly) half-optimal, namely
Notably, while improving on half-optimality for the clique problem appears algorithmically hard, this hardness is only evidential but not (at least not yet) formal. Namely, we do not have any proof of even conditional algorithmic hardness for this problem, say by assuming the validity of the P$\ne$NP conjecture. This is in stark contrast with the worst-case setup. Namely, consider the problem of finding the largest clique in an arbitrary graph, in particular for graphs which are not necessarily generated according to the $\mathbb{G}(n,1/2)$ distribution. For this version of the problem, it is known that no polynomial time algorithm can produce a clique within a multiplicative factor $n^{1-\epsilon}$ of optimality for any fixed $\epsilon>0$ (in particular, a factor much larger than 2), unless P$=$NP. We see that, in particular, the half-optimality gap emerging in the random case does not align with the classical algorithmic complexity theory based on the P$\ne$NP conjecture.
A very similar story takes place when the underlying model is a sparse random graph, defined next. Fix a constant $d>0$ and consider a random graph, denoted by $\mathbb{G}(n,d/n)$, in which every pair of nodes is an edge with probability $d/n$, independently across all pairs. We now switch to the mirror image of the clique problem, known as the independent set problem, defined as follows. A subset of nodes $I$ is called an independent set if it contains no edges; namely, for all $i,j\in I$, the pair $(i,j)$ is not an edge. Frieze [Fri90] proved that whp the size of the largest independent set in $\mathbb{G}(n,d/n)$ is $(1+o_d(1))\frac{2\ln d}{d}n$, where $o_d(1)$ denotes a term which converges to zero as $d\to\infty$.
Consider now the problem of identifying the largest independent set algorithmically in polynomial time. Using the formalism described in (1), $\Theta_n$ is the set of all subsets of the node set of the graph, $\xi_n$ is the random graph itself, and $H_n(\sigma,\xi_n)$ is the cardinality of $\sigma$ when $\sigma$ is an independent set in the graph and $-\infty$ otherwise. An algorithm similar to Karp's clique algorithm computes an independent set whose size is roughly half-optimal, namely $(1+o_d(1))\frac{\ln d}{d}n$, also whp. No improvement of this algorithm is known, and no formal hardness is known either. Thus, this problem is also evidentially hard and exhibits a computation-to-optimization gap.
Example II. Ground states of spin glasses
Our second example is in fact a probabilistic model of spin glasses themselves. Fix an integer $p\ge 2$ and consider a $p$-tensor $J=(J_{i_1,\ldots,i_p},\,1\le i_1,\ldots,i_p\le n)$ of random variables, where each entry $J_{i_1,\ldots,i_p}$ of the tensor has a standard normal distribution and all entries are probabilistically independent. The optimization problem of interest is

$$\max_{\sigma\in\{-1,1\}^n} H_n(\sigma), \tag{2}$$

where, for every $\sigma\in\{-1,1\}^n$,

$$H_n(\sigma)=\frac{1}{n^{(p+1)/2}}\sum_{1\le i_1,\ldots,i_p\le n} J_{i_1,\ldots,i_p}\,\sigma_{i_1}\cdots\sigma_{i_p}. \tag{3}$$

The optimal solution of this problem (which is unique up to a sign flip) is called the ground state of the model, using physics jargon, and the optimal value is called the ground state value. The normalization $n^{-(p+1)/2}$ is chosen for convenience (see below). In our formalism (1), $\xi_n$ is the tensor $J$, $\Theta_n$ is the binary cube $\{-1,1\}^n$, and $H_n$ is the associated inner product (3). The special case $p=2$ corresponds to the well-known and widely studied Sherrington–Kirkpatrick model.
Notice how the signs of the entries of $J$ impact the optimal choices of $\sigma$. Taking $p=2$ for concreteness: when $J_{ij}>0$, the spins $\sigma_i$ and $\sigma_j$ prefer to have the same sign so that the product $J_{ij}\sigma_i\sigma_j$ is positive. On the other hand, when $J_{ij}<0$, $\sigma_i$ and $\sigma_j$ prefer to have opposite signs so that the product is again positive. What is the best arrangement of signs $\sigma$ which optimizes this tradeoff? This is the core of the optimization problem, which turns out to be quite challenging.
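To make the objective (3) concrete, here is a minimal Python sketch (an illustration using the notation reconstructed above, not part of the article): it evaluates $H_n(\sigma)$ for the Sherrington–Kirkpatrick case $p=2$ and finds the ground state by brute force, which is feasible only at toy sizes since the search space has $2^n$ points.

```python
import itertools
import numpy as np

def hamiltonian(J: np.ndarray, sigma: np.ndarray) -> float:
    """H_n(sigma) = n^{-(p+1)/2} * sum of J[i1,...,ip] * sigma_{i1} * ... * sigma_{ip}."""
    n, p = sigma.shape[0], J.ndim
    val = J
    for _ in range(p):                          # contract the tensor with sigma along every axis
        val = np.tensordot(val, sigma, axes=([0], [0]))
    return float(val) / n ** ((p + 1) / 2)

rng = np.random.default_rng(0)
n, p = 12, 2                                    # p = 2 is the Sherrington-Kirkpatrick model
J = rng.standard_normal((n,) * p)

# brute-force ground state over all 2^n sign configurations
best = max(hamiltonian(J, np.array(s)) for s in itertools.product((-1, 1), repeat=n))
print("ground state value for n =", n, ":", round(best, 4))
```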
In 1980, Parisi solved optimization problem (2) by introducing a mathematically nonrigorous but remarkably powerful physics-style method called replica symmetry breaking (RSB) [Par80]. The answer takes the form of a variational problem, stated (in its zero-temperature form) as follows.

Let $\mathcal{U}$ denote the set of cumulative distribution functions on $[0,1]$ and let $\mathcal{P}(u)$, $u\in\mathcal{U}$, denote the Parisi functional, an explicit functional whose precise form we omit. For every $p$, the ground state value satisfies

$$\max_{\sigma\in\{-1,1\}^n} H_n(\sigma) = \min_{u\in\mathcal{U}}\mathcal{P}(u) + o(1) \tag{4}$$

whp as $n\to\infty$. We denote the limiting ground state value $\min_{u\in\mathcal{U}}\mathcal{P}(u)$ by $\eta_p$.
The optimal solution of the variational problem above is known to be unique and defines a cumulative distribution function of a probability measure, which has since been called the Parisi measure. The identity in (4) was derived by Parisi heuristically using the RSB method. It was turned into a theorem by Talagrand in 2006 in his breakthrough paper [Tal06].
What about algorithms? Here the goal is to find a solution $\sigma$ such that $H_n(\sigma)$ is as close to $\eta_p$ as possible when the tensor $J$ is given as an input. Interestingly, major progress on this question was achieved only very recently, and it deserves a separate discussion, found in Section 5. Jumping ahead though, the summary of the state-of-the-art algorithms is as follows. For $p=2$, near optimum solutions can be found by a polynomial time algorithm. On the other hand, when $p\ge 4$ is even, this is evidently not the case. In summary, the largest value achievable by known poly-time algorithms, which we denote by $\mathsf{ALG}_p$, satisfies $\mathsf{ALG}_p<\eta_p$ in the latter case. Thus, similarly to Example I, the model exhibits a computation-to-optimization gap between the optimal values and the values found by the best-known polynomial time algorithms.
Example III. Satisfiability of a random K-SAT formula
Our third and final example is a family of randomly generated constraint satisfaction problems, specifically the so-called random K-SAT problem. An instance of a K-SAT model is described by $n$ Boolean variables $x_1,\ldots,x_n$ (variables with values True/False) and a conjunction $\Phi=C_1\wedge\cdots\wedge C_m$ of $m$ Boolean functions (clauses), each of which is a disjunction of precisely $K$ variables from the list $x_1,\ldots,x_n$ or their negations. Here is an example for $K=3$ and $n\ge 4$: $\Phi=(x_1\vee\bar{x}_2\vee x_3)\wedge(\bar{x}_1\vee x_2\vee x_4)$. An assignment of the variables $x_1,\ldots,x_n$ to values True and False is called a satisfying assignment if $\Phi=\text{True}$ under this assignment. For example, the assignment $x_1=x_2=\text{True}$ (and any assignment for the remaining $x_j$s) is a satisfying assignment for this $\Phi$. The formula $\Phi$ is called satisfiable if there exists at least one satisfying assignment and is called nonsatisfiable otherwise.
A random K-SAT formula is generated according to the following probabilistic construction. For each $i=1,\ldots,m$, independently across $i$, we select $K$ variables uniformly at random from $x_1,\ldots,x_n$ and attach the negation sign to each of them with probability $1/2$, also independently across $i$ and across the selected variables. The resulting formula $\Phi$ is a random formula whose probability distribution depends on $n$, $m$, and $K$ only. In the context of optimization problem (1), $\xi_n$ is the description of the formula $\Phi$, $\Theta_n$ is the set of assignments of the variables, and $H_n(\sigma,\xi_n)$ is the number of clauses satisfied by the assignment $\sigma$. In particular, the formula is satisfiable if and only if $\max_\sigma H_n(\sigma,\xi_n)=m$.
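The following minimal Python sketch mirrors this construction (an illustration only; the encoding of a literal as a (variable, negated) pair and the choice to sample distinct variables within a clause are ours). The function num_satisfied plays the role of the Hamiltonian $H_n$.

```python
import random

def random_ksat(n: int, m: int, k: int, seed: int = 0):
    """m random clauses; each picks k distinct variables uniformly at random and
    negates each of them independently with probability 1/2."""
    rng = random.Random(seed)
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n), k)] for _ in range(m)]

def num_satisfied(formula, assignment) -> int:
    """The Hamiltonian H: the number of clauses satisfied by a True/False assignment."""
    # a literal (v, negated) evaluates to True iff assignment[v] differs from the negation flag
    return sum(any(assignment[v] != negated for v, negated in clause) for clause in formula)

n, k, alpha = 200, 3, 3.0                          # alpha = m/n is the clauses-to-variables density
formula = random_ksat(n, m=int(alpha * n), k=k)
rng = random.Random(1)
assignment = [rng.random() < 0.5 for _ in range(n)]
print(num_satisfied(formula, assignment), "of", len(formula), "clauses satisfied;",
      "the formula is satisfiable iff the maximum over all assignments equals", len(formula))
```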
Whether or not $\Phi$ is satisfiable is now an event, which we write symbolically as $\{\Phi\in\mathrm{SAT}\}$. The probability of this event, as $n,m\to\infty$ with the clauses-to-variables ratio $\alpha=m/n$ kept fixed, is our focus. There exists a (conjectural) critical threshold $\alpha_s(K)$ such that $\mathbb{P}(\Phi\in\mathrm{SAT})\to 1$ when $\alpha<\alpha_s(K)$ and $\mathbb{P}(\Phi\in\mathrm{SAT})\to 0$ when $\alpha>\alpha_s(K)$. Informally, $\alpha_s(K)$ represents the largest ratio of clauses to variables for which a solution exists whp asymptotically in $n$. The existence of such a threshold $\alpha_s(K)$ was recently proven for large enough $K$ [DSS22].
The constant $\alpha_s(K)$ has a very simple asymptotic expansion for large $K$: $\alpha_s(K)=(1+o_K(1))\,2^K\ln 2$, where $o_K(1)$ hides a function vanishing to zero as $K\to\infty$. The intuition for this asymptotics is rather straightforward. Observe that any fixed assignment of the variables satisfies the formula with probability $(1-2^{-K})^m$. Thus the expected number of satisfying assignments is $2^n(1-2^{-K})^m=2^n(1-2^{-K})^{\alpha n}$. The value $\alpha=\ln 2/\ln\frac{1}{1-2^{-K}}=(1+o_K(1))\,2^K\ln 2$ is precisely the critical value of $\alpha$ at which this expectation drops from exponentially large to exponentially small.
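This first-moment arithmetic is easy to check numerically; the short sketch below (an illustration, not from the article) compares the exact crossing point $\ln 2/\ln\frac{1}{1-2^{-K}}$ with the leading-order value $2^K\ln 2$.

```python
from math import log

# density alpha at which E[# satisfying assignments] = 2^n * (1 - 2^-K)^(alpha*n) crosses 1
for K in (3, 4, 5, 8, 10):
    alpha_first_moment = log(2) / -log(1 - 2.0 ** (-K))
    print(f"K={K:2d}   first-moment crossing = {alpha_first_moment:8.2f}   2^K ln 2 = {2 ** K * log(2):8.2f}")
```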
Turning to algorithms, the best-known one [CO10] succeeds in finding a satisfying assignment only when $\alpha\le(1+o_K(1))\,2^K\ln K/K$, which is smaller than the satisfiability threshold by a multiplicative factor of order $K/\ln K$.
In the remainder of the article, we will explain the nature of these gaps. In particular, we will explain the algorithmic threshold values arising in Examples I–III by leveraging powerful ideas first developed toward understanding the statistical physics of materials called spin glasses.
2. Physics Matters: Spin Glasses
Statistical physics studies properties of matter using probabilistic models which provide the statistical likelihoods of finding a physical system in its different states. These likelihoods are in turn linked with the energies of these states: as the energy decreases, the likelihood increases. One can apply this viewpoint to our optimization problem (1), thinking of $-H_n(\sigma)$ as the "energy" (Hamiltonian) of the state $\sigma$. The ground state is then a state with the lowest energy. The link between energies and likelihoods is formalized by introducing the Gibbs measure on the state space $\Theta_n$, which assigns probability $G_\beta(\sigma)=\exp(\beta H_n(\sigma))/Z_n(\beta)$ to each configuration $\sigma$. Here $Z_n(\beta)=\sum_{\sigma\in\Theta_n}\exp(\beta H_n(\sigma))$ is the normalizing constant called the partition function. The parameter $\beta$ is called the inverse temperature. At low temperature, namely at large values of $\beta$, the Gibbs measure becomes concentrated on near ground states: states with nearly the lowest energy. The identity in (4) was first derived for the case of finite $\beta$ and then extended to the case $\beta=\infty$ corresponding to the ground state value. The added value of introducing the Gibbs measure is that it allows one to consider the entire space $\Theta_n$ of solutions in one shot, with the solutions stratified by their energy levels. This is the so-called "solution space geometry" approach, which has turned out to be very fruitful for the analysis of algorithms.
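The concentration of the Gibbs measure at low temperature is easy to see by exact enumeration on a toy instance. The sketch below (an illustration using the Sherrington–Kirkpatrick Hamiltonian from Example II; the 1% cutoff is an arbitrary choice of ours) computes the Gibbs probabilities $G_\beta(\sigma)=\exp(\beta H_n(\sigma))/Z_n(\beta)$ for all $2^n$ configurations and reports how much probability mass sits near the ground state as $\beta$ grows.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 10
J = rng.standard_normal((n, n))
states = np.array(list(itertools.product((-1.0, 1.0), repeat=n)))      # all 2^n configurations
H = np.einsum("si,ij,sj->s", states, J, states) / n ** 1.5             # SK Hamiltonian (p = 2)

for beta in (0.5, 2.0, 8.0, 32.0):
    weights = np.exp(beta * (H - H.max()))       # shift by the maximum for numerical stability
    gibbs = weights / weights.sum()              # the partition function Z is the sum of weights
    near_ground = H >= H.max() - 0.01 * abs(H.max())
    print(f"beta = {beta:5.1f}   Gibbs mass within 1% of the ground state value: {gibbs[near_ground].sum():.3f}")
```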
Spin glasses are particularly complex examples of matter which exhibit properties of being liquid and solid at the same time. For a recent treatment of the subject, see [CMM+23].
The idea of repurposing the techniques from the theory of spin glasses for the field of optimization of combinatorial structures goes back to the mid-1980s. In particular, Fu and Anderson in their 1986 paper “Application of statistical mechanics to NP-complete problems in combinatorial optimisation” say, “…we propose to study the general properties of such problems (combinatorial optimization) in the light of …the structure of solution space, the existence and the nature of phase transitions ….”
3. Clustering Phase Transition in Random K-SAT
The first fruitful and mathematically verified link between the solution space geometry and phase transitions on the one hand and algorithmic tractability/hardness on the other hand appeared in the concurrent works of Achlioptas and Ricci-Tersenghi [ART06] and of Mézard, Mora, and Zecchina [MMZ05], which established the following clustering property of the set of satisfying assignments of a random K-SAT formula.
For every sufficiently large $K$ there exists $\alpha_{\rm clust}(K)<\alpha_s(K)$ such that the following holds when $\alpha\in(\alpha_{\rm clust}(K),\alpha_s(K))$, whp as $n\to\infty$: there exists a disjoint partition of the set of satisfying assignments into clusters $\mathcal{C}_1,\ldots,\mathcal{C}_M$, covering all but an exponentially small fraction of the satisfying assignments, such that the Hamming distance between any two distinct sets $\mathcal{C}_i$ and $\mathcal{C}_j$ is $\Omega(n)$.
Figure 1. Clustering phase transition at $\alpha_{\rm clust}(K)$. Below $\alpha_{\rm clust}(K)$, the bulk of the solution space (blue) is a giant connected set. Above $\alpha_{\rm clust}(K)$, the bulk of the set is partitioned into clusters (blue), with exponentially small exceptions (gray).

This is illustrated in Figure 1. The authors argued that the presence of such a clustering picture suggests algorithmic hardness of finding satisfying assignments, in particular by means of local search type algorithms. Furthermore, as subsequent works showed, intriguingly, the onset of this clustering phase transition coincides with the evidential algorithmic hardness discussed earlier, asymptotically as $K\to\infty$. More precisely, it was verified that $\alpha_{\rm clust}(K)=(1+o_K(1))\,2^K\ln K/K$. This was the first (though purely hypothetical) link between the solution space geometry and algorithmic hardness!
Very quickly a similar result was established also for the independent set problem in $\mathbb{G}(n,d/n)$ and several other similar problems: for every $\epsilon>0$, the independent sets of size at least $(1+\epsilon)\frac{\ln d}{d}n$ (namely, more than half optimum) are clustered, while the independent sets of size at most $(1-\epsilon)\frac{\ln d}{d}n$ are not.
What about the clustering property in spin glasses? It does take place, and this is in fact what originally prompted Parisi to develop his RSB approach. Clustering in spin glasses is more intricate and comes in the form of a nested hierarchy of subclusters arranged in a particular way called an ultrametric tree. We delay the discussion of this structure until Section 5 when we discuss how this ultrametric structure of clusters successfully guided the creation of the state-of-the-art algorithm for problem 2.
4. Overlap Gap Property: A Provable Barrier to Algorithms
As discussed earlier, the clustering property exhibited by random K-SAT and other similar problems unfortunately provided only a hypothetical explanation of the algorithmic hardness, not a proof of it. The property which turned out to be a provable barrier to algorithms was inspired by the clustering picture, but came in the different form of the overlap gap property (OGP), which we now formally introduce in the context of problem (1). For this purpose, we assume that the solution space $\Theta_n$ is equipped with some metric $\rho$ (for example, the Hamming distance when $\Theta_n$ is a binary cube).
Definition 4.1. An instance of optimization problem (1) exhibits an OGP with parameters $\mu$ and $0\le\nu_1<\nu_2$ if for every $\sigma_1,\sigma_2\in\Theta_n$ satisfying $H_n(\sigma_i,\xi_n)\ge\mu$, $i=1,2$, it holds that $\rho(\sigma_1,\sigma_2)\le\nu_1$ or $\rho(\sigma_1,\sigma_2)\ge\nu_2$.
This property has a simple interpretation. The model exhibits the OGP if the distance between every pair of "near"-optimal solutions (solutions with values at least $\mu$) is either "small" (the pair has a high overlap, represented by $\rho(\sigma_1,\sigma_2)\le\nu_1$) or "large" (the pair has a low overlap, represented by $\rho(\sigma_1,\sigma_2)\ge\nu_2$).
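At toy sizes one can compute the quantities appearing in Definition 4.1 directly by brute force. The sketch below (an illustration only: at such small $n$ no actual gap is expected to be visible; it merely shows what would be histogrammed) enumerates the near-optimal independent sets of a small sparse random graph and reports the smallest and largest pairwise Hamming distances.

```python
import itertools
import random

random.seed(2)
n, d = 16, 5.0
edges = {(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < d / n}

def is_independent(subset):
    return all((i, j) not in edges for i, j in itertools.combinations(sorted(subset), 2))

# enumerate all independent sets (feasible only at toy sizes) and keep the near-optimal ones
ind_sets = [frozenset(s) for r in range(n + 1)
            for s in itertools.combinations(range(n), r) if is_independent(s)]
opt = max(len(s) for s in ind_sets)
near_opt = [s for s in ind_sets if len(s) >= opt - 1]        # "value at least mu" with mu = opt - 1

# pairwise Hamming distances (sizes of symmetric differences) between near-optimal solutions
dists = sorted(len(a ^ b) for a, b in itertools.combinations(near_opt, 2))
print("optimum size:", opt, "  near-optimal solutions:", len(near_opt))
print("smallest / largest pairwise distance:", dists[0], "/", dists[-1])
```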
We need to extend Definition 4.1 in two ways: first, involving ensembles of instances, and second, involving overlaps of more than two solutions.
Definition 4.2. Fix a positive integer $\ell$ and a subset $F$ of $\ell$-tuples of solutions. A family of problem instances $\xi_{n,i}$, $i\in\mathcal{I}$, exhibits an OGP with parameters $\mu$ and $F$ if for every choice of indices $i_1,\ldots,i_\ell\in\mathcal{I}$ and every $\sigma_1,\ldots,\sigma_\ell\in\Theta_n$ satisfying $H_n(\sigma_j,\xi_{n,i_j})\ge\mu$, $j=1,\ldots,\ell$, it holds that $(\sigma_1,\ldots,\sigma_\ell)\notin F$.
Definition 4.1 is recovered as a special case when $\ell=2$, the family consists of a single instance $\xi_n$, and

$$F=\{(\sigma_1,\sigma_2):\nu_1<\rho(\sigma_1,\sigma_2)<\nu_2\}. \tag{5}$$

We stress that the randomness appearing in the requirement $H_n(\sigma_j,\xi_{n,i_j})\ge\mu$ is index-specific, representing the fact that $\sigma_j$ is near-optimal (achieves a value of at least $\mu$) specifically for the instance $\xi_{n,i_j}$. While the intuition behind Definition 4.1 is fairly direct, the one for Definition 4.2 is rather opaque and requires some elaboration, to be done shortly.
It turns out that the OGP does hold in our models and is a provable obstruction to classes of algorithms. The first of these facts (in the context of Definition 4.1) was already implicitly established in [GS17], in the setting of independent sets in sparse random graphs.
Theorem 4.3. For every $\epsilon>0$ and sufficiently large $d$, there exist $0<\nu_1<\nu_2$ such that whp as $n\to\infty$ the set of independent sets in $\mathbb{G}(n,d/n)$ exhibits an OGP (in the sense of Definition 4.1) with parameters $\mu=(1+\epsilon)\left(\frac{1}{2}+\frac{1}{2\sqrt{2}}\right)\frac{2\ln d}{d}n$, $\nu_1$, and $\nu_2$. As a result, no local algorithm (appropriately defined) exists for constructing an independent set with size larger than $(1+\epsilon)\left(\frac{1}{2}+\frac{1}{2\sqrt{2}}\right)\frac{2\ln d}{d}n$.
Namely, every two independent sets with cardinality within the multiplicative factor $\frac{1}{2}+\frac{1}{2\sqrt{2}}$ of optimality are either at a "small" distance from each other (at most $\nu_1$) or at a "large" distance from each other (at least $\nu_2$). Theorem 4.3 means that beating the factor-$\left(\frac{1}{2}+\frac{1}{2\sqrt{2}}\right)$ optimality threshold for this problem will require going beyond the class of local algorithms. Local algorithms do in fact achieve half-optimality (which we recall is the evidential barrier for this problem).
The gap between half-optimality and the factor $\frac{1}{2}+\frac{1}{2\sqrt{2}}$ was eliminated in [RV17], where it was shown that local algorithms cannot produce independent sets larger than half-optimal, namely larger than $(1+o_d(1))\frac{\ln d}{d}n$.
The broadest currently known class of algorithms ruled out by the OGP is the class of stable (noise insensitive) algorithms. We will not give a formal definition of this class, as it comes in different context-dependent variants. Loosely speaking, the stability property means the following. We think of an algorithm as a map $\mathcal{A}$ which maps instances $\xi_n$ of the random input (graphs, tensors, etc.) to solutions. We say that the algorithm is stable if a small perturbation of the input $\xi_n$ results in a small perturbation of the solution $\mathcal{A}(\xi_n)$. A local algorithm is an example of a stable algorithm: deleting or adding one edge of a graph will affect the decisions associated only with nodes which contain this edge in their constant size neighborhood. This is so because the algorithm performs these decisions concurrently (in parallel). Thus deleting or adding one edge changes the independent set produced by the algorithm in at most constantly many (independent of $n$) nodes. Other classes of algorithms proven to be successful for these types of problems, including approximate message passing (see Section 5), stochastic localization, low-degree polynomials, and Boolean circuits with small depth, turn out to be stable as well.
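How one would measure such sensitivity is illustrated by the minimal sketch below. It uses a plain sequential greedy procedure (our choice for brevity, not a local algorithm in the technical sense, and with no stability guarantee): run it on a sparse random graph, toggle a single potential edge, rerun, and compare the two outputs.

```python
import random

def sparse_graph(n, d, seed):
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < d / n}

def greedy_independent_set(n, edges):
    """Scan vertices in a fixed order; add a vertex if it has no neighbor already chosen."""
    chosen = set()
    for v in range(n):
        if all((u, v) not in edges and (v, u) not in edges for u in chosen):
            chosen.add(v)
    return chosen

n, d = 3000, 5.0
edges = sparse_graph(n, d, seed=3)
before = greedy_independent_set(n, edges)

rng = random.Random(4)
e = tuple(sorted(rng.sample(range(n), 2)))       # toggle one potential edge (add it or remove it)
after = greedy_independent_set(n, edges ^ {e})

print("independent set size:", len(before), "-> after the one-edge toggle:", len(after))
print("symmetric difference of the two outputs:", len(before ^ after))
```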
It was established in a series of follow-up works that the OGP is a barrier to stable algorithms [CGPR19, GJW24].
Theorem 4.4. For every $\epsilon>0$ and every large enough $d$, there exist a large enough $\ell$ and a subset $F$ of $\ell$-tuples such that whp the set of independent sets in the interpolation sequence of graphs exhibits an OGP with parameters $\mu=(1+\epsilon)\frac{\ln d}{d}n$ and $F$. As a result, the largest independent set which can be constructed by stable algorithms has size asymptotically $\frac{\ln d}{d}n$.
Similarly, for every large enough $K$ there exist a large enough $\ell$ and a subset $F$ of $\ell$-tuples such that whp the set of satisfying assignments in the interpolation sequence of random K-SAT formulas exhibits an OGP with parameters $\mu$ and $F$ when $\alpha\ge(1+o_K(1))\,2^K\ln K/K$. As a result, the largest clauses-to-variables density for which stable algorithms can find a solution of a random K-SAT problem is $(1+o_K(1))\,2^K\ln K/K$.
In particular, in both cases, the OGP provides a tight characterization of the power of stable algorithms.
For the second part of the theorem, we set $\mu=m$ (that is, we require all clauses to be satisfied), since for constraint satisfaction problems there is no notion of approximation to optimality, as we consider exclusively satisfying assignments. Variants of Theorem 4.4 were proven in [CGPR19, BH22, GJW24].
Why is the OGP a barrier to stable algorithms? The intuitive reason for this is a rather simple continuity-type argument, best illustrated when $\ell=2$ in Definition 4.2 and $F$ is of the form in (5). Suppose that, in addition to the properties outlined in Definition 4.2, two additional properties hold. First, the sequence of instances $\xi_{n,0},\xi_{n,1},\ldots,\xi_{n,T}$ is "continuous," in the sense that consecutive instances $\xi_{n,t}$ and $\xi_{n,t+1}$ differ very little, for all $t$. Second, suppose that when $\sigma_1$ is near-optimal for the first instance $\xi_{n,0}$ and $\sigma_2$ is near-optimal for the last instance $\xi_{n,T}$, the case $\rho(\sigma_1,\sigma_2)\le\nu_1$ does not occur. Namely, while per Definition 4.2, for every pair of near-optimal solutions in the interpolated sequence both the small-distance ($\rho\le\nu_1$) and the large-distance ($\rho\ge\nu_2$) situations are possible, when the two solutions correspond to the beginning (that is, $t=0$) and the end (that is, $t=T$) of the interpolation sequence, only the latter situation (large distance) is possible. This can indeed be established in many special cases using the same techniques which verify the OGP. Then the argument establishing the OGP as an obstruction to stable algorithms goes informally as follows. Consider implementing a stable algorithm $\mathcal{A}$ on the sequence of interpolated instances, resulting in a sequence of solutions $\sigma_t=\mathcal{A}(\xi_{n,t})$. Since the sequence $\xi_{n,t}$ is "continuous" (changes slowly as $t$ increases) and the algorithm is stable, $\sigma_t$ changes "continuously" as well as $t$ changes from its initial value $0$ to its final value $T$, at which point, by the second property above, $\rho(\sigma_0,\sigma_T)\ge\nu_2$. For any $t$, by the OGP, the distance $\rho(\sigma_0,\sigma_t)$ is either at most $\nu_1$ or at least $\nu_2$. Therefore, there must exist $t$ such that $\rho(\sigma_0,\sigma_t)\le\nu_1$ and $\rho(\sigma_0,\sigma_{t+1})\ge\nu_2$. By the triangle inequality, this means $\rho(\sigma_t,\sigma_{t+1})\ge\nu_2-\nu_1$. But this contradicts the stability of $\mathcal{A}$. The argument is illustrated in Figure 2.
Figure 2. OGP is an obstruction to stable algorithms. The algorithmic path (blue curve) has to jump (red section) from the small-distance region ($\rho\le\nu_1$) to the large-distance region ($\rho\ge\nu_2$) in the space of solutions.

Recall now the clustering property discussed earlier. Could one build an algorithmic barrier type argument directly from the clustering property? It appears the answer is no, as evidenced by a different model called the symmetric binary perceptron model [GKPX22]. In this model the solution space is clustered in the most extreme sense—whp all but a vanishing fraction of the solutions are isolated singletons—and yet simple polynomial time algorithms are known to find solutions at sufficiently low constraint densities.
5. Ultrametricity Guides Algorithms for Spin Glasses
We now turn to the state-of-the-art algorithms for spin glasses, our Example II, specifically algorithms for solving optimization problem (2). Dramatic progress on this problem and a fairly complete understanding of its algorithmic tractability were achieved only recently in the works of Subag [Sub21] (for the spherical version of the model) and Montanari [Mon19] (for the Sherrington–Kirkpatrick model, $p=2$).
Soon afterward, the result was extended to general $p$ in [EAMS21].
Theorem 5.1. Let $\mathcal{U}_{\rm TV}$ be the set of functions on $[0,1]$ with appropriately bounded total variation (not necessarily monotone), and let

$$\mathsf{ALG}_p=\inf_{u\in\mathcal{U}_{\rm TV}}\mathcal{P}(u), \tag{6}$$

the value of the extension of the variational problem (4) from $\mathcal{U}$ to $\mathcal{U}_{\rm TV}$. For every $\epsilon>0$ there exists a polynomial time stable algorithm which constructs a spin configuration $\sigma$ achieving $H_n(\sigma)\ge\mathsf{ALG}_p-\epsilon$ whp as $n\to\infty$.
When $p=2$, it is conjectured that the variational problem (4) is solved by a strictly increasing function, and thus, modulo this conjecture, Montanari's algorithm achieves near optimality. It is known, however, that this is no longer the case for larger $p$: when $p\ge 4$ is even, the minimizer is not strictly increasing and $\mathsf{ALG}_p<\eta_p$ strictly.
We now explain the algorithmic threshold $\mathsf{ALG}_p$ and the successful algorithm for achieving it. The Parisi measure is conjecturally a measure describing random overlaps of nearly optimal solutions in the following sense. Fix a positive inverse temperature parameter $\beta$ and consider sampling two spin configurations $\sigma^1,\sigma^2$ from the associated Gibbs measure independently. Then the Parisi measure is the distribution of the normalized inner product (overlap) $n^{-1}\sum_i\sigma^1_i\sigma^2_i$ in the limit $n\to\infty$. Furthermore, suppose instead $k$ samples $\sigma^1,\ldots,\sigma^k$ are generated according to the same Gibbs distribution. Then the distribution of the set of pairwise overlaps is believed to be described by distributions over sets of values arranged on a tree. As such, it satisfies a strong form of the metric triangle inequality (with max replacing the addition operator) called ultrametricity, depicted in Figure 3. When the Parisi measure is strictly increasing and there is no gap in its support, the branching points on the tree are located in a continuous way, without gaps. It turns out that this coincides with the case $\mathsf{ALG}_p=\eta_p$, and this is the case when the algorithm reaches near optimality. The algorithm is constructed as a walk in the interior of the cube $[-1,1]^n$, which starts from the center of the cube and follows the ultrametric tree as a guide. Specifically, at every stage, it makes a small incremental step in the direction (a) maximally improving the objective function and (b) orthogonal to the previous direction. Absence of gaps implies that the algorithm proceeds without interruption until hitting a corner of $[-1,1]^n$, provably reaching near optimality. In a sense, the ultrametric tree "guides" the successive steps of the algorithm.
Figure 3. Ultrametric tree of solutions in $[-1,1]^n$. End points of the curve correspond to corners of the cube. The Parisi measure has no gap in its support, reflected in the tree branching continuously.

When the Parisi measure is not strictly increasing, the authors show that the algorithm reaches the largest energy level achievable by ultrametric trees without gaps. This was shown to be the value $\mathsf{ALG}_p$ of the extended variational problem, which per (6) is strictly smaller than $\eta_p$ in this case.
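To convey the flavor of such a guided walk, here is a toy caricature in Python (emphatically not the algorithm of [Sub21, Mon19, EAMS21], which uses approximate message passing and the Parisi measure and comes with guarantees this sketch lacks): starting from the center, it repeatedly takes a unit step along the gradient of the $p=2$ Hamiltonian, projected orthogonally to all previous steps, and finally rounds to signs.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
J = rng.standard_normal((n, n))
Jsym = (J + J.T) / 2                          # symmetrized couplings, p = 2 for simplicity

def H(x):                                     # H(x) = n^{-3/2} x^T J x
    return float(x @ Jsym @ x) / n ** 1.5

x = np.zeros(n)                               # start at the center of the cube
steps = []
for _ in range(n):
    g = 2 * Jsym @ x / n ** 1.5 + 1e-9 * rng.standard_normal(n)   # gradient; noise breaks the tie at x = 0
    for u in steps:                           # keep the new increment orthogonal to all previous ones
        g -= (g @ u) * u
    u = g / np.linalg.norm(g)
    x = x + u                                 # after n unit-length orthogonal steps, ||x||^2 = n
    steps.append(u)

sigma = np.where(x >= 0, 1.0, -1.0)           # round the final point to a corner of the cube
baseline = rng.choice([-1.0, 1.0], size=n)
print("toy guided walk:", round(H(sigma), 3), "   random-signs baseline:", round(H(baseline), 3))
```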
6. Branching-OGP: Tight Characterization of Algorithmically Solvable Spin Glasses
A final missing piece completing the algorithmic tractability versus hardness picture for spin glasses was delivered by Huang and Sellke [HS22] in the form of the following result.
Theorem 6.1. Optimization problem (2) exhibits an OGP in the sense of Definition 4.2 with parameters $\mu=\mathsf{ALG}_p+\epsilon$ (for any fixed $\epsilon>0$) and an appropriately constructed set $F$ of tuples. This property is an obstruction to stable algorithms in finding spin configurations with values larger than $\mathsf{ALG}_p+\epsilon$, whp as $n\to\infty$.
Hence (when combined with Theorem 5.1), $\mathsf{ALG}_p$ represents the tight algorithmic value achievable by stable algorithms.
As a corollary, a $p$-spin model can be solved to near optimality by stable algorithms when $p=2$ (modulo the strict monotonicity conjecture discussed above). On the other hand, when $p\ge 4$ is even, the largest value achievable by stable algorithms is strictly suboptimal. It is conjectured that this remains the case for all $p\ge 3$ regardless of the parity, and this has been verified for the case when $p$ is sufficiently large. The multi-OGP obstruction set $F$ is called the branching-OGP (B-OGP). It subsumes the obstruction sets appearing in the context of Theorems 4.3 and 4.4 as special cases, and it represents the most general known form of OGP-based obstructions.
Figure 4. Ultrametric tree of solutions when the Parisi measure has a gap in its support, leading to gaps in the branching locations.

The proof of Theorem 6.1 follows a path similar to the ones used in Theorems 4.3 and 4.4 and is also based on an interpolation scheme between instances, using in a crucial way the approaches developed by Talagrand in [Tal06] and by Guerra in [Gue03].
It is reasonable to believe that the scope of algorithms limited by the value $\mathsf{ALG}_p$ is much broader than the class of stable algorithms and quite possibly includes the class of all polynomial time algorithms for these and similar problems. Such a result, however, is out of reach for now due to its very strong algorithmic complexity implications.
References
[ ART06] - Dimitris Achlioptas and Federico Ricci-Tersenghi, On the solution-space geometry of random constraint satisfaction problems, STOC’06: Proceedings of the 38th Annual ACM Symposium on Theory of Computing, ACM, New York, 2006, pp. 130–139, DOI 10.1145/1132516.1132537. MR2277138,
[ BH22] - Guy Bresler and Brice Huang, The algorithmic phase transition of random $k$-SAT for low degree polynomials, 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science—FOCS 2021, IEEE Computer Soc., Los Alamitos, CA, 2022, pp. 298–309. MR4399691,
[ CMM+23] - Patrick Charbonneau, Enzo Marinari, Marc Mézard, Giorgio Parisi, Federico Ricci-Tersenghi, Gabriele Sicuro, and Francesco Zamponi (eds.), Spin glass theory and far beyond: replica symmetry breaking after 40 years, World Scientific, 2023.
[ CGPR19] - Wei-Kuo Chen, David Gamarnik, Dmitry Panchenko, and Mustazee Rahman, Suboptimality of local algorithms for a class of max-cut problems, Ann. Probab. 47 (2019), no. 3, 1587–1618, DOI 10.1214/18-AOP1291. MR3945754,
[ CO10] - Amin Coja-Oghlan, A better algorithm for random $k$-SAT, SIAM J. Comput. 39 (2010), no. 7, 2823–2864, DOI 10.1137/09076516X. MR2645890,
[ DSS22] - Jian Ding, Allan Sly, and Nike Sun, Proof of the satisfiability conjecture for large $k$, Ann. of Math. (2) 196 (2022), no. 1, 1–388, DOI 10.4007/annals.2022.196.1.1. MR4429261,
[ EAMS21] - Ahmed El Alaoui, Andrea Montanari, and Mark Sellke, Optimization of mean-field spin glasses, Ann. Probab. 49 (2021), no. 6, 2922–2960, DOI 10.1214/21-aop1519. MR4348682,
[ Fri90] - A. M. Frieze, On the independence number of random graphs, Discrete Math. 81 (1990), no. 2, 171–175, DOI 10.1016/0012-365X(90)90149-C. MR1054975,
[ GJW24] - David Gamarnik, Aukosh Jagannath, and Alexander S. Wein, Hardness of random optimization problems for Boolean circuits, low-degree polynomials, and Langevin dynamics, SIAM J. Comput. 53 (2024), no. 1, 1–46, DOI 10.1137/22M150263X. MR4704592,
[ GKPX22] - David Gamarnik, Eren C. Kızıldağ, Will Perkins, and Changji Xu, Algorithms and barriers in the symmetric binary perceptron model, 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science—FOCS 2022, IEEE Computer Soc., Los Alamitos, CA, 2022, pp. 576–587. MR4537237,
[ GMZ22] - David Gamarnik, Cristopher Moore, and Lenka Zdeborová, Disordered systems insights on computational hardness, J. Stat. Mech. Theory Exp. 11 (2022), Paper No. 114015, 41, DOI 10.1088/1742-5468/ac9cc8. MR4535586,
[ GS17] - David Gamarnik and Madhu Sudan, Limits of local algorithms over sparse random graphs, Ann. Probab. 45 (2017), no. 4, 2353–2376, DOI 10.1214/16-AOP1114. MR3693964,
[ Gue03] - Francesco Guerra, Broken replica symmetry bounds in the mean field spin glass model, Comm. Math. Phys. 233 (2003), no. 1, 1–12, DOI 10.1007/s00220-002-0773-5. MR1957729,
[ HS22] - Brice Huang and Mark Sellke, Tight Lipschitz hardness for optimizing mean field spin glasses, 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science—FOCS 2022, IEEE Computer Soc., Los Alamitos, CA, 2022, pp. 312–322. MR4537213,
[ Kar76] - Richard M. Karp, The probabilistic analysis of some combinatorial search algorithms, Algorithms and complexity (Proc. Sympos., Carnegie-Mellon Univ., Pittsburgh, Pa., 1976), Academic Press, New York-London, 1976, pp. 1–19. MR445898,
[ MMZ05] - M. Mézard, T. Mora, and R. Zecchina, Clustering of solutions in the random satisfiability problem, Physical Review Letters 94 (2005), no. 19, 197205.,
[ Mon19] - Andrea Montanari, Optimization of the Sherrington–Kirkpatrick Hamiltonian, 2019 IEEE 60th Annual Symposium on Foundations of Computer Science, IEEE Comput. Soc. Press, Los Alamitos, CA, 2019, pp. 1417–1433, DOI 10.1109/FOCS.2019.00087. MR4228234,
[ Par80] - Giorgio Parisi, A sequence of approximated solutions to the SK model for spin glasses, J. Phys. A: Math. Gen. 13 (1980), no. 4, L115.,
[ RV17] - Mustazee Rahman and Bálint Virág, Local algorithms for independent sets are half-optimal, Ann. Probab. 45 (2017), no. 3, 1543–1577, DOI 10.1214/16-AOP1094. MR3650409,
[ Sub21] - Eliran Subag, Following the ground states of full-RSB spherical spin glasses, Comm. Pure Appl. Math. 74 (2021), no. 5, 1021–1044, DOI 10.1002/cpa.21922. MR4230065,
[ Tal06] - Michel Talagrand, The Parisi formula, Ann. of Math. (2) 163 (2006), no. 1, 221–263, DOI 10.4007/annals.2006.163.221. MR2195134,
Credits
Figure 1, Figure 2, Figure 3, Figure 4, and photo of David Gamarnik are courtesy of David Gamarnik.