Are tax forms complex? Is the worldwide system for trading stocks a complex one? What about the Mandelbrot set?
Figure 1 (Mandelbrot set - Courtesy of Wikipedia)
What is a complex system? How can we measure complexity? Is complexity "good" or "bad"?
One of the features of mathematics is that it likes to streamline the insights that it has obtained and present its ideas in an organized and transparent manner, often starting from first principles. Yet many people do not see mathematics as a sleek or simple subject. They see it as a very complex subject.
Every year, to try to put in perspective the important contributions mathematics makes to America, we celebrate Mathematics Awareness Month. This year the theme for Mathematics Awareness Month is Unraveling Complex Systems. One particular aspect of complex systems that mathematics has helped illuminate is the way a system with many parts can organize itself so that regularities emerge and evolve over time. An example of this is a flock of birds taking off: soon after the birds are airborne, one sees an organized pattern rather than the pandemonium of the flock taking to the air. Another example of this genre is the phenomenon of fireflies "blinking" in unison. Sometimes things which are disorderly assemble themselves into an orderly array, but sometimes the opposite occurs. Sometimes systems we are used to seeing work in concert, such as an electrical network or a communications system (a cell phone network), go into disarray. This can occur because of the failure of some critical backbone of a complex system, a failure that causes smaller subunits of the system to go down and can eventually bring the whole system to a halt.
In honor of Mathematics Awareness Month I will take an idiosyncratic look at mathematical aspects of complexity, in particular some history of mathematical approaches and insights into this subject.
History
As a topic matures in importance, it is often useful to look back at the roots of the subject and how it evolved over time. It is not an accident that deterministic systems were studied before stochastic ones. For systems that mix deterministic and chance elements, it is common to take on the deterministic parts first. One source of complexity is that chance events are harder to sort out. Or so it seemed until relatively recently - more on this in a bit. Thus, algebra and geometry were developed in many sophisticated directions (in Egypt, China, Babylonia, India, etc.) before a systematic approach to probability theory started to emerge in the 17th century in the work of Pascal and Fermat. Another early pioneer in understanding the complexities that chance events introduce into our world was Thomas Bayes.
Figure 2 (Reverend Thomas Bayes)
Whereas until relatively recently a frequentist approach to making decisions was the norm, there is more and more discussion of using ideas implicit in Bayes' work for making decisions and reaching conclusions. For example, suppose one wants to argue that a certain die, which looks very symmetrically manufactured, is a fair die. One might roll this die 60 times and record the relative frequency of one dot, two dots, ..., six dots. However, it is not very likely that each number of spots will occur equally often. If 6 occurs slightly more often than the other numbers of spots, does this mean there is a slight bias towards 6 appearing? A Bayesian approach might use the "data" from the 60 tosses as input for further work to try to answer the question of whether or not the die is fair.
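To make the contrast concrete, here is a minimal sketch (my own illustration, not from the article) of the Bayesian idea, using a Dirichlet prior over the six face probabilities; the counts from the 60 rolls are invented for the example.

```python
import numpy as np

# Hypothetical counts from 60 rolls of the die (invented for illustration):
# faces 1..6 appeared 9, 10, 9, 10, 10, 12 times.
counts = np.array([9, 10, 9, 10, 10, 12])

# Uniform Dirichlet prior over the six face probabilities (alpha = 1 for each face).
prior_alpha = np.ones(6)

# Bayesian updating for a multinomial with a Dirichlet prior is simple:
# the posterior is Dirichlet(prior_alpha + counts).
posterior_alpha = prior_alpha + counts

# Sample from the posterior and estimate the probability that the chance of
# rolling a 6 really exceeds the "fair" value 1/6.
rng = np.random.default_rng(0)
samples = rng.dirichlet(posterior_alpha, size=100_000)
prob_biased_toward_6 = np.mean(samples[:, 5] > 1.0 / 6.0)

print(f"Posterior mean for face 6: {posterior_alpha[5] / posterior_alpha.sum():.3f}")
print(f"Posterior probability that face 6 is favored: {prob_biased_toward_6:.2f}")
```

With data this sparse the posterior probability hovers near one half, which is the Bayesian way of saying the 60 rolls alone do not settle the question.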
It would seem that systems involving randomness would raise "complexity issues" beyond those of deterministic systems, so it was a big surprise when it emerged that deterministic systems could show complexity of a kind that had not been dreamed of! This type of behavior has now come to be called "chaotic" behavior to distinguish it from behavior in systems that are subject to "genuine" randomness. The discovery of deterministic chaotic systems motivated a renewed interest in how to tell whether a system is truly random, and in how to design systems that look random but are in fact "pseudo-random."
In retrospect there were a variety of examples of deterministic "chaos" in the work of Poincaré before the more recent work put chaos "on the map." Relatively recent contributions were made by Robert May, James Yorke (who coined the use of the word "chaos" to describe complex deterministic behavior), Mitchell Feigenbaum and others. May called attention to the complexities of ecological systems. A hallmark of this period was the complex behavior associated with the logistic map.
Figure 3 (Bifurcations of the logistic map - courtesy of Wikipedia)
This involves iteration: from a given input x one computes the value rx(1-x) (where r is a fixed constant) as an "output," and then uses this output as the new input to compute rx(1-x) again. The fact that such a simple quadratic function could show this level of complexity was quite a surprise. It was Robert May (now Lord May of Oxford) who in 1978 first called attention to the complexities of this map in a variety of simple models motivated by questions in biology.
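A minimal sketch of this iteration (my own example, with the parameter values chosen just to show the different regimes):

```python
# Iterate the logistic map x -> r*x*(1-x) for a few values of r.
def logistic_orbit(r, x0, n):
    """Return the first n iterates of the logistic map starting from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# For r = 2.8 the orbit settles down to a fixed point;
# for r = 3.2 it oscillates between two values;
# for r = 4.0 it wanders in a seemingly erratic ("chaotic") way.
for r in (2.8, 3.2, 4.0):
    orbit = logistic_orbit(r, x0=0.2, n=100)
    print(f"r = {r}: last few iterates {[round(x, 4) for x in orbit[-4:]]}")
```

Running the loop over a fine grid of r values and plotting the long-run iterates is exactly what produces the bifurcation diagram in Figure 3.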
Figure 4 (Lord May, Courtesy of Wikipedia)
The mathematician and meteorologist Edward Lorenz had in essence called attention to a similar phenomenon in the numerical solution of differential equations, which in practice are approximated by difference equations. It was Lorenz who coined the term "butterfly effect." This notion suggests that something as minute as a butterfly flapping its wings can be responsible for much more dramatic effects separated from the original action in time and space. In the area of chaos people also speak of sensitive dependence on initial conditions. For example, if one runs a simulation of a system (based on a differential equation or difference equation), one needs initial conditions to indicate the starting state of the system. The difference equation can then be used to see what the future state of the system would be. It had been thought that a small difference in the starting conditions for running the simulation would lead to similar long-term outcomes. This turned out not to be true. Two simulations whose initial conditions differed only in the 10th decimal place might have nothing to do with each other after a very small number of time steps! Lorenz noted this phenomenon in the context of weather prediction.
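Lorenz's own weather model is too large to reproduce here, but the same sensitive dependence can be seen in the chaotic logistic map from above (a swapped-in illustration, not Lorenz's equations):

```python
# Two runs of the logistic map with r = 4 whose starting values differ
# only in the 10th decimal place.
def iterate(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

x_a = 0.3
x_b = 0.3 + 1e-10   # differs only in the 10th decimal place

for steps in (10, 25, 50):
    a, b = iterate(x_a, steps=steps), iterate(x_b, steps=steps)
    print(f"after {steps:2d} steps: {a:.6f} vs {b:.6f}  (difference {abs(a - b):.2e})")
```

The gap roughly doubles at each step, so after about 50 steps the two "forecasts" have nothing to do with each other.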
Computers and complexity
One of the major tools people use to deal with complex systems is the computer. It is often difficult to untangle the relationship between computer systems and humans these days because humans have come to depend so much on complex systems that can be run only by computers. For example, trades are now routinely carried out on stock markets much more quickly than could be done by humans without computer systems. Historically people have taken comfort in the fact that computers do impressive things because of their speed but don't seem capable of feats that involve "thinking" or "creativity." Traditional examples of seemingly unique human skills are playing checkers, playing chess, and other "games" that seem to require thinking and intelligence. First, a checkers-playing program was developed which played checkers better than any human opponent. Then IBM developed a computer system known as Deep Blue which beat Garry Kasparov in a well-publicized match. Although Deep Blue was "retired," modern chess-playing programs are amazingly good.
To deal with hard and technical problems humans have turned to machines - computers - to help them. While such computing machines have a long history, it was only in the 20th century that dramatic progress was made. Two major figures in this work were John von Neumann and Alan Turing.
Figure 5 (Photo of John Von Neumann)
Von Neumann was a pioneer in developing the theory and implementation of early stored program digital computers. Turing's interest in computers was spurred by his World War II efforts to use computers to help break German codes.
Figure 6 (Photo of Alan Turing)
Turing, one of the pioneers of computer science and the complexity questions that became a part of computer science, invented a simplified "conceptual" computer now known as the Turing machine. The Turing machine is a conceptual device which marks symbols on an infinite tape which can be advanced in either direction from the cell that is currently being "looked at" by a "head" or reader. Turing was able to show that any problem that could be solved on a "standard" computer could also be solved on a Turing machine. Thus, this "standardized" device enabled one to talk about complexity without talking about the hardware of an actual machine. A supercomputer can solve problems more quickly than a Turing machine when they can be solved, but it cannot solve any problem that a Turing machine cannot solve!
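Here is a minimal sketch of a Turing machine simulator (an illustration of the idea, not Turing's original formulation); the little machine it runs is a made-up example that flips every 0 to a 1 and vice versa.

```python
from collections import defaultdict

def run_turing_machine(transitions, tape_input, start_state, accept_state, max_steps=1000):
    """transitions maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left) or +1 (right). Unvisited tape cells hold the blank '_'."""
    tape = defaultdict(lambda: "_", enumerate(tape_input))
    state, head = start_state, 0
    for _ in range(max_steps):
        if state == accept_state:
            break
        symbol = tape[head]
        if (state, symbol) not in transitions:
            break  # no applicable rule: halt
        state, tape[head], move = transitions[(state, symbol)]
        head += move
    cells = [tape[i] for i in range(min(tape), max(tape) + 1)]
    return state, "".join(cells).strip("_")

# Example machine: scan right, replacing 0 with 1 and 1 with 0; accept at the first blank.
flip = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("done", "_", +1),
}
print(run_turing_machine(flip, "0110100", "scan", "done"))  # -> ('done', '1001011')
```

The point of the model is not efficiency but standardization: any computation at all can, in principle, be expressed as such a table of transitions.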
The Turing machine can be used to investigate which problems a computer can solve and which are too hard/complex for computers. This is a curious turn of phrase because it is not customary to frame discussions of computers in terms of what they cannot do! However, it has been shown using human ingenuity that there are indeed such problems. One of the best known of these is the halting problem. In 1937 Turing showed that the halting problem could not be solved by a computer, though the name "halting problem" is due to the mathematician and computer scientist Martin Davis. The problem asks for an algorithm which, given a program (and its input), will answer yes or no to this question: Will the program terminate in a finite amount of time? It turns out there is no such algorithm. Such questions are known as algorithmically undecidable. Questions such as this are related to the fundamental questions about "decidability" in mathematical systems that Kurt Gödel studied.
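The flavor of the argument can be sketched in a few lines of code (the function names halts and paradox are mine, not standard; the point of the sketch is precisely that the first function cannot actually be written):

```python
# We *pretend* to have a perfect halting tester and show the assumption is self-defeating.

def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical perfect halting decider. Turing proved no such algorithm can exist."""
    raise NotImplementedError("no algorithm can answer this correctly for all inputs")

def paradox(program_source: str):
    # If the supposed decider says this very program halts when fed its own source,
    # loop forever; if it says the program runs forever, halt at once.
    if halts(program_source, program_source):
        while True:
            pass
    return "halted"

# Feeding paradox its own source code defeats any would-be decider: whatever answer
# halts() gives about paradox-run-on-itself turns out to be wrong.
```

Whatever the supposed decider predicts, paradox does the opposite, so no correct decider can exist.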
Complexity classes
In discussing the complexity of different kinds of problems, the problems involved can be placed into different categories. Even for experts who are trying to stay abreast of developments in this rapidly emerging field, there is a complex alphabet soup of classes and types of problems considered. Here I will try merely to address some issues related to the best known of these classes and ideas. Among the kinds of questions that are considered are decision problems (ones that have a "yes" or "no" answer) and optimality problems (problems where one is seeking a "best" solution with respect to some objective function). When talking about complexity it is common to use the phrase "instance" of a problem to refer to a particular case of the problem involved. For example, if one's problem is to sort a list of n integers from smallest to largest, an instance of size 6 would be to sort the six numbers 5, 9, 23, 3, 1, 100 and an instance of size 10 would be to sort 1, 5, 2, 7, 90, 30, 200, 38, 2, 11. One might talk about the problem of factoring integers with n digits and an instance of this problem for n = 8 would be to factor 81,123,121.
The complexity class P consists of problems which can be solved in an amount of time (work) that is polynomial in a natural parameter n of the problem. For example, for a list of n positive integers one can ask how long it will take to sort these numbers into a list from smallest to largest. Sorting positive integers is an example of a problem in P, since there are known algorithms which accomplish this in polynomial time. We are not considering which is the best polynomial-time algorithm, only whether there is a polynomial-time algorithm or not. Clearly, it is of interest for a particular kind of problem like sorting to know "optimal" algorithms. However, in many cases it is valuable merely to know that there is a polynomial upper bound on the effort needed to solve a problem.
The complexity class NP (non-deterministic polynomial) refers to problems for which a proposed answer to an instance of the problem can be checked in polynomial time. For example, consider the number 81,123,121. While this number may be "hard" to factor, if someone gives a proposed factorization (29 x 83 x 33703) it is quite speedy to check that it is indeed a correct factorization. NP is the complexity class of problems for which one can check answers to instances of a problem with a polynomial amount of work in terms of the size of the instance. The question of the complexity of factoring is of particular interest because there are cryptological systems whose security depends on the fact that factoring some large integers into primes seems to take a lot of computer time. Currently, it is not known if factoring can be done in polynomial time. Until relatively recently it was not known whether one could check if an integer is prime in polynomial time, but this was shown in 2002. However, the algorithm which showed that primality testing is in P is not as practical as other algorithms for checking whether a specific number is prime. There are still important natural questions (factoring, graph isomorphism) whose complexity is not fully understood.
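A minimal sketch of the "easy to check, harder to find" contrast, using the number from the text:

```python
n = 81_123_121

# Checking a proposed factorization takes a handful of multiplications: fast.
print(29 * 83 * 33703 == n)   # True

# Finding the factors from scratch, here by naive trial division, takes far more work.
def smallest_factor(m):
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m   # m itself is prime

factors, m = [], n
while m > 1:
    p = smallest_factor(m)
    factors.append(p)
    m //= p
print(factors)   # [29, 83, 33703]
```

For a number this small even the naive search is quick, but for numbers with hundreds of digits the gap between checking an answer and finding one becomes enormous, and that gap is what the cryptographic systems mentioned above rely on.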
When discussing the complexity of instances of a problem, some instances may be much easier to solve than others. This leads to the distinction between worst-case complexity and average-case complexity. There may be situations where there are very hard instances of a problem but on average the situation is much better. For example, there are linear programming problems which require an exponential amount of "work" to solve using a particular algorithm, although "typical" linear programming instances may be answered by that algorithm rather quickly. As complexity theory has developed, results about the worst-case behavior of problems have typically come earlier than results about the average-case complexity of the same problems. Because some problems come up in applications a lot, there are also issues about finding approximate answers which are "good enough" quickly.
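The linear programming example is too large to show here, but the same worst-case versus average-case distinction appears in a toy setting (a swapped-in illustration of mine): a naive quicksort that always picks the first element as its pivot is fast on typical random input but quadratic on already-sorted input.

```python
import random
import sys
import time

sys.setrecursionlimit(100_000)   # the worst case below recurses about n levels deep

def naive_quicksort(a):
    # Always uses the first element as the pivot, so already-sorted input is a worst case.
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    return (naive_quicksort([x for x in rest if x < pivot]) + [pivot] +
            naive_quicksort([x for x in rest if x >= pivot]))

n = 2000
cases = {
    "random input (typical case)": [random.randrange(10**6) for _ in range(n)],
    "sorted input (worst case)":   list(range(n)),
}
for name, data in cases.items():
    start = time.perf_counter()
    naive_quicksort(data)
    print(f"{name}: {time.perf_counter() - start:.4f} seconds")
```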
What is your intuition? Are the classes P and NP the same or different? Every problem in P is also in NP, since an answer one can find in polynomial time can certainly be checked in polynomial time; so what is really being asked is whether there are problems in NP that are not in P. Many people find it easy to imagine that there are problems for which it is easy to check a proposed answer but for which finding an answer might not be easy. However, rather astonishingly, we don't know whether P = NP or not. This problem is of sufficient importance that it is viewed by many as the most important unsolved problem in computer science and mathematics. The Clay Mathematics Institute has offered a million-dollar prize for its resolution.
However, there is a bit more to the issue of P versus NP. Within the class NP there is a class of problems called NP-complete. NP-complete problems have the property that if any one of them could be shown to be in P, then all of them would be in P. Thus, from the viewpoint of how hard NP-complete problems are, they are all alike: either all of them can be solved in polynomial time or none of them can. Again, if one NP-complete problem could be shown (proved) to require more than a polynomial amount of effort in terms of n, the problem-size parameter, then all of them would require such effort as well. Another term one sometimes sees is NP-hard. NP-hard problems are those that are sometimes informally described as being at least as hard as any problem in NP. What happens if it should turn out that P ≠ NP? In this case NP-hard problems cannot be solved in polynomial time. Should it be the case that P = NP, the question of whether or not particular NP-hard problems can be solved in polynomial time would still be unresolved. Note that despite the appearance of NP in the term, there are NP-hard problems that are not in NP. This is an unfortunate artifact of the terminology as it evolved, and the clock cannot be turned back; in retrospect, a less arcane collection of definitions could have been chosen. When new ideas emerge very rapidly, terminology sometimes doesn't get set optimally. It is a result of Richard Ladner (1975) that, should P and NP turn out not to be the same, there are problems in NP which are neither in P nor NP-complete. These problems, to add to your list of terminology, are sometimes called NP-intermediate. The example Ladner constructed is not considered a "natural" question, however. Investigators are still looking for natural examples, that is, ones that come up in mathematical practice.
Here is another intriguing aspect of complexity within computer science and mathematics, related to proofs and algorithms. There is a tendency for proofs of mathematical theorems to get more streamlined after a first proof has been found: once a theorem is known to be true, especially an "important" theorem, shorter and more insightful proofs are often found. For algorithms, however, it is often the other way around. Algorithms sometimes get more complex in attempts to answer a question "faster." Thus, there may be a conceptually straightforward quadratic-time algorithm for a problem, while much more sophisticated ideas (and sometimes data structures) must come into play to find an algorithm which improves on quadratic time. For example, there are quadratic-time algorithms which will sort integers. These algorithms are not optimal, however, even though they are quite easy to understand. For sorting numbers it is fairly well understood how different approaches perform in the worst case and on average, and the best of them do better than quadratic time. The optimal speed for solving a particular class of problems is often extremely difficult to determine. For example, an initial algorithm for a problem may be designed to attack all comers - all types of instances of the problem under investigation. However, one might be able to partition the instances of the problem so that different schemes are used within the different partition classes. Thus, one would have a specialized fast algorithm for each class, while no single algorithm would run that quickly on an arbitrary instance chosen without regard to the partition.
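A small illustration of the quadratic-versus-better contrast (my own sketch, pairing insertion sort with Python's built-in sort, which runs in about n log n time):

```python
import random
import time

def insertion_sort(a):
    # A conceptually simple quadratic-time sort.
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:   # shift larger elements one slot to the right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

data = [random.randrange(10**6) for _ in range(5_000)]

start = time.perf_counter()
insertion_sort(data)
print(f"insertion sort (quadratic):    {time.perf_counter() - start:.3f} s")

start = time.perf_counter()
sorted(data)
print(f"built-in sort (about n log n): {time.perf_counter() - start:.3f} s")
```

The faster method is harder to explain from scratch than the quadratic one, which is exactly the trade-off described above.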
Evolving ideas
The part of mathematics and computer science concerned with complexity theory has had many major contributors. However, two individuals responsible for a major breakthrough were Stephen Cook and Leonid Levin.
Figure 7 Photo of Leonid Levin (Courtesy of Wikipedia)
Figure 8 Photo of Stephen Cook (Courtesy of Wikipedia)
It was Cook and Levin who (independently) pioneered the notion of NP-complete problems. Though their work was done in different countries and in different languages, and the details of their approaches differed, they got to the same place. In what is now called the Cook-Levin Theorem, they showed that the problem known as SAT belongs to the complexity class now known as the NP-complete problems. A Boolean expression is one that is built up from variables using the logical connectives "and," "not" and "or." For example: (x and not y) or (z and not x) is a Boolean expression involving the three variables x, y and z. A Boolean expression becomes true or false when the variables in the expression are assigned the truth values true or false. SAT is the question: Given a Boolean expression, is there a truth assignment to its variables that makes the whole expression true? Cook and Levin showed that this problem is in NP and that for an instance of any other problem Q in NP, there is a polynomial-time algorithm for a (deterministic) Turing machine which transforms that instance of Q into an instance of SAT. It follows that NP-complete problems can be transformed into one another by polynomial-time algorithms, so if one of them were shown to be in P, then all of them would be in P. There still remains the major (million-dollar!) question, already mentioned, of whether or not P and NP are really the same class!
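For the small expression above, SAT can be settled by brute force; a minimal sketch:

```python
# Try every truth assignment for (x and not y) or (z and not x) and report the satisfying ones.
from itertools import product

def expression(x, y, z):
    return (x and not y) or (z and not x)

satisfying = [assignment for assignment in product([False, True], repeat=3)
              if expression(*assignment)]
print(f"{len(satisfying)} of 8 assignments satisfy it; one example: {satisfying[0]}")
```

Brute force examines 2^n assignments for n variables, and that exponential blow-up is exactly why no one knows a polynomial-time algorithm for SAT.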
Many investigators fleshed out the ideas that Cook and Levin pioneered. In a surprisingly short time many astonishing new ideas emerged. In particular, Oded Goldreich, Shafi Goldwasser, Silvio Micali, and Avi Wigderson have done fascinating work. They have shown connections between complexity ideas and cryptography, zero-knowledge proofs, randomness and pseudo-randomness, to name just a few of the exciting issues these researchers have explored. For their efforts they have won many important awards in computer science and mathematics. In particular, they have explored issues involving complexity classes where "randomness" plays a role. At the heart of these problems is the issue of whether algorithms that use randomness in an essential way (randomized or probabilistic algorithms) are more powerful than algorithms which are deterministic.
Figure 9 Photo of Oded Goldreich (Courtesy of Wikipedia)
Figure 10 Photo of Shafrira (Shafi) Goldwasser (Courtesy of Wikipedia)
Figure 11 Photo of Silvio Micali (Courtesy of Wikipedia)
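As a tiny illustration of a randomized algorithm of the kind studied in this area (the Fermat primality test, my own example rather than anything from the work just described):

```python
import random

def probably_prime(n, trials=20):
    """Return False if n is certainly composite, True if it passes `trials` random tests."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:   # Fermat's little theorem fails: n is composite
            return False
    return True   # no witness found: very likely prime (rare Carmichael numbers aside)

print(probably_prime(33703))      # True: 33703 is prime
print(probably_prime(81123121))   # False: 29 x 83 x 33703
```

The answer "composite" is always correct; the answer "prime" is correct with overwhelming probability, and whether such uses of randomness can always be removed without losing efficiency is one of the central questions these researchers study.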
Recent news
We all know from personal experience that computers have much more storage and are much faster than they were even in the relatively recent past. We are also rightly impressed with the success of researchers at IBM in designing a computer system, Watson, that successfully defeated the best human opponents at Jeopardy!. Playing Jeopardy! requires knowing a tremendous number of very diverse "facts" as well as the ability to interpret ordinary language in clues that often involve puns.
Figure 12 The "avatar" for IBM's "Watson" (Courtesy of Wikipedia)
Understanding natural language is not something the computer systems we interact with on a daily basis do well, so the accomplishment of Watson is all the more impressive. What seems to have made Watson's accomplishment possible was a combination of hardware--processors that could do computations with incredible speed and memory units that could store tremendous amounts of data--together with ideas from machine learning and artificial intelligence (AI). Watson was programmed so that the more it played Jeopardy!, the better it got. This is the goal of researchers in machine learning, an area which overlaps with artificial intelligence. Artificial intelligence is hard to define, but loosely speaking the idea is to get computers to do things which seem to display "intelligence" or "creativity."
Not long after Watson won its competition against human opponents on Jeopardy!, Leslie Valiant was announced in 2011 as the winner of the prestigious Turing Award from the ACM (Association for Computing Machinery). The citation mentions his many contributions to complexity theory and machine learning.
Figure 13 Photo of Leslie Valiant (Courtesy of Wikipedia)
Mathematics and computer science will continue to look at the issue of complexity, in attempts not only to sort out the relative difficulty of solving various problems but also to use insights into complexity to understand our world better. April will be a good month to ponder all of these exciting issues!
References
Alligood, K., T. Sauer, and J. Yorke, Chaos: An Introduction to Dynamical Systems, Springer-Verlag, New York, 1997.
Arrow, H., J. McGrath, and J. Berdahl, Small Groups as Complex Systems: Formation, Coordination, Development, and Adaptation, Sage, Thousand Oaks, CA, 2000.
Arora, S., and B. Barak, Computational Complexity: A Modern Approach, Cambridge U. Press, New York, 2009.
Bar-Yam, Y., Dynamics of Complex Systems, Addison-Wesley, Reading, MA, 1997.
Cambel, A., Applied chaos theory: A paradigm for complexity, Academic Press, San Diego, 1993.
Cormen, T., Introduction to Algorithms, (Second Edition), MIT Press, Cambridge, 2001.
Flake, G., The Computational Beauty of Nature: Computer explorations of fractals, chaos, complex systems, and adaptation, MIT Press, Cambridge 1998.
Garey, M. and D. Johnson, Computers and Intractability, A Guide to NP-completeness, W.H. Freeman, 1979.
Gell-Mann, M., The Quark and the Jaguar: Adventures in the Simple and the Complex, W.H. Freeman, San Francisco, 1994.
Goldreich, O., Computational Complexity: A Conceptual Perspective, Cambridge U. Press, New York, 2008.
Goldwasser, S., and S. Micali, Probabilistic encryption, Journal of Computer and System Sciences 28 (1984) 270-299.
Gonnet, G., and R. Baeza-Yates, Handbook of Algorithms and Data Structures, Addison-Wesley, 1991.
Hubbard, J. and B. West, Differential Equations: A dynamical systems approach, Springer-Verlag, New York, 1991.
Lorenz, E., The Essence of Chaos, University of Washington Press, Seattle, 1993.
Mainzer, K., Thinking in Complexity: The Complex Dynamics of Matter, Mind, and Mankind, Springer-Verlag, New York, 1994.
Nicolis, G., and I. Prigogine, Exploring Complexity: An Introduction, W. H. Freeman, San Francisco, 1989.
Nielsen, M. and I. Chuang, Quantum Computation and Quantum Information, Cambridge U. Press, New York, 2000.
Papadimitriou, C., Computational Complexity, Addison Wesley, 1994.
Peak, D., and M. Frame, Chaos under control: The art and science of complexity, W. H. Freeman, New York, 1994.
Peitgen, H.-O., H. Jurgens, and D. Saupe, Chaos and Fractals: New Frontiers of Science, Springer-Verlag, New York, 1992.
Pickover, C. (ed.), Fractal Horizons: The Future Use of Fractals, St. Martin's Press, New York, 1996.
Prigogine, I., The End of Certainty: Time, Chaos, and the New Laws of Nature, The Free Press, New York, 1996.
Prigogine, I., and I. Stengers, Order Out of Chaos: Man's New Dialogue with Nature, Bantam, New York, 1984.
Rasch, W., and C. Wolfe (eds.), Observing Complexity: Systems Theory and Postmodernity, University of Minnesota Press, Minneapolis, 2000.
Ruelle, D., Chance and Chaos, Princeton University Press, Princeton, 1993.
Sipser, M., Introduction to the Theory of Computation, PWS, Boston, 1997.
Smith, P., Explaining chaos, Cambridge University Press, New York, 1998.
Stewart, I., Does God play dice? The new mathematics of chaos (Second ed.), Blackwell, 2002.
Van Leeuwen, J. (ed.), Handbook of Theoretical Computer Science, Vol. A: Algorithms and Complexity, MIT Press, Cambridge, 1990.
Waldrop, M., Complexity: The emerging science at the edge of order and chaos, Simon and Schuster, New York, 1992.
Wolfram, S., A New Kind of Science, Wolfram Media, Champaign, IL, 2002.
Those who can access JSTOR can find some of the materials mentioned above there. For those with access, the American Mathematical Society's MathSciNet can be used to get additional bibliographic information and reviews of some of these materials.