See also: The AMS Blog on Math Blogs: Two mathematicians tour the mathematical blogosphere. Editors Brie Finegold and Evelyn Lamb, both PhD mathematicians, blog on blogs that have posts related to mathematics research, applied mathematics, mathematicians, math in the news, mathematics education, math and the arts and more.
On online collaboration in math, by Claudia Clark
In January 2009, University of Cambridge mathematician Timothy Gowers posted a theorem on his blog and invited readers to work together to prove it. To his surprise, two months and almost 1,000 comments later, he could declare the theorem proven. Gowers published the proof under the pseudonym "Polymath," and so began the now five-year-old crowd-sourced mathematics project of the same name. The Polymath project "has a dedicated website where people can post and debate suggestions for new challenges--and, if they agree that the challenge is worthwhile, circulate ideas for its solution." Projects that are "broadly accessible and of interest to a large number of mathematicians" have had the most success, notes UCLA mathematician Terence Tao. For example, one month after Yitang Zhang's breakthrough work on 'near-twin' primes became public last May, Tao began coordinating a Polymath project--dubbed "Polymath 8"--to see whether Zhang's bound on the gap between pairs of primes could be reduced. By November, that bound had been reduced from 70 million to 600. Without the kind of rapid, collaborative approach made possible by Polymath, Tao speculates, this result would have taken years.
See the article: "Crowd-sourcing: Strength in numbers," by Philip Ball. Nature, 27 February 2014, pages 422-423.
--- Claudia Clark
On big numbers, by Ben Polletta
In this piece, George Dvorsky gets tips on making sense of "stupid large" numbers from mathematician Spencer Greenberg, a hedge fund manager and blogger who studied machine learning at New York University and writes for AskAMathematician.com. More germanely, Greenberg is the founder of ClearerThinking.org, a website dedicated to helping individuals overcome their cognitive biases. One of these biases, Greenberg suggests, is an inability to think coherently about very large numbers. While people seem to have an innate capacity to reason about small numbers, the kinds of numbers used to quantify phenomena on the scale of countries, planets, and galaxies lie well outside the dynamic range of this number sense. As a result, numbers like a quadrillion and a trillion can seem equivalently huge, despite the fact that a quadrillion is a thousand times larger than a trillion. Greenberg's suggestions for handling this kind of numerical overload mostly boil down to unit changes--expressing the U.S. debt as debt per capita, for example, or as a percentage of the U.S. annual GDP. Some of these conversions are quite creative, however--for example, thinking of 400,000 people as "20 hockey arenas' worth," or converting the risk of death from hang gliding from a proportion of hang gliders (1 in 116,000) to a proportion of the baseline risk of death (for a 30-year-old, a day that includes hang gliding is 3.2 times likelier to be fatal than a day that doesn't). Another neat trick is to introduce time to understand numbers like San Francisco's population of 4.3 million: if you spoke to every resident for one minute, eight hours a day (sans lunch break), it would still take you 24 and a half years to "meet" them all.
Greenberg warns against choosing irrelevant units--such as converting monetary sums to heights in stacked pennies--and against using units that are themselves hard to comprehend, such as tons. But he admits that no matter how you slice them, some numbers--such as the number of stars in the Milky Way (300 billion, or roughly 42 stars for each of Earth's 7.1 billion human inhabitants)--remain deafeningly large. "When dealing with large numbers," he says, "we often just have to 'do the math'."
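The time-based conversions above reduce to simple arithmetic. A quick sanity check of the two figures quoted in the article (the population and star counts are the article's numbers, not independent data):

```python
# One minute per person, eight hours a day, no lunch break:
population = 4_300_000               # city population quoted in the article
minutes_per_day = 8 * 60
years_to_meet_everyone = population / minutes_per_day / 365
# works out to roughly 24.5 years

# Stars in the Milky Way per living human:
stars = 300_000_000_000              # article's estimate for the Milky Way
people = 7_100_000_000               # world population quoted in the article
stars_per_person = stars / people    # roughly 42
```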
Read the article: "How to Comprehend Incomprehensibly Large Numbers," by George Dvorsky. Io9, 26 February 2014. [Coincidentally, the AMS is publishing a book on big numbers, Really Big Numbers, which is its first children's book.]
--- Ben Polletta
On math and the brain, by Ben Polletta
Perhaps not surprisingly, President Obama's BRAIN Initiative is every bit as involved as the brain itself. Its "overarching goal," says Stanford neuroscientist William Newsome--co-chair of the National Institutes of Health panel establishing the Initiative's priorities--is "to map the circuits of the brain, measure the fluctuating patterns of electrical and chemical activity flowing within those circuits and to understand how their interplay creates our unique cognitive and behavioral capabilities." The new science of connectomics--the application of the mathematical and physical theory of networks to the brain's multifaceted architecture--is concerned with the first of these daunting goals. Connectomics attempts to understand the brain as a graph--a collection of nodes connected by edges--and to relate the unique properties of this graph to the functioning of the brain. While this mirrors the physical reality of the brain--which is a collection of cells (neurons) connected to one another electrically and chemically (through synapses)--mapping the underlying graph of the human brain at the single-neuron level is a long way off. Instead, researchers study exact maps of the brains of simpler organisms--such as the roundworm C. elegans, which has 383 neurons rather than the human brain's ~85 billion--and construct relatively coarse maps of the human brain's connectivity. In the finest of these coarse maps, being constructed by the NIH's Human Connectome Project, the nodes are parcels of "gray matter" (neuron-containing tissue) a cubic millimeter in size. The edges in such maps may reflect anatomical connectivity data obtained by imaging the "white matter" tracts (rivers of neuronal processes that terminate in synapses) that join these parcels, or functional connectivity data obtained by observing which parcels exhibit correlated dynamics.
Already, interesting conclusions about the brain's network architecture have emerged from these partial connectomes. For example, the C. elegans brain and the human functional and anatomical connectomes are all characterized by a small-world architecture: nodes are densely clustered locally, yet any two nodes are joined by a short path, thanks in part to a few highly connected nodes, or "hubs," amid many sparsely connected ones. In the human brain, these hubs are found in non-overlapping sub-networks related to important brain functions, such as vision, movement, hearing, and touch. And while these functional sub-networks don't overlap much, it turns out that the hubs associated with them are highly connected to each other, forming a so-called "rich club" which may serve to integrate multiple streams of information into a coherent whole. Furthermore, cognitive dysfunction, such as that seen in schizophrenia, attention deficit disorder, and autism, is accompanied by dysfunction at the network level: while schizophrenics exhibit more variable connectivity than normal, subjects with ADD exhibit weaker connectivity, and autistic subjects have more highly linked brains. So although we may be a long way from a final count of the BRAIN Initiative's accomplishments, the exit polls from connectomics look very promising.
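The small-world idea is easy to see in miniature. The sketch below (a toy graph, not connectome data) builds a ring lattice that is highly clustered, then adds a few long-range "shortcut" edges: the average path length drops while the clustering stays high.

```python
from collections import deque
from itertools import combinations

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs (BFS from each node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def avg_clustering(adj):
    """Mean fraction of each node's neighbour pairs that are themselves linked."""
    coeffs = []
    for u, nbrs in adj.items():
        pairs = list(combinations(nbrs, 2))
        if pairs:
            coeffs.append(sum(b in adj[a] for a, b in pairs) / len(pairs))
    return sum(coeffs) / len(coeffs)

def ring_lattice(n, shortcuts=()):
    """Ring of n nodes, each tied to neighbours at distance 1 and 2."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in (1, 2):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    for a, b in shortcuts:
        adj[a].add(b)
        adj[b].add(a)
    return adj

ring = ring_lattice(20)
small_world = ring_lattice(20, shortcuts=[(0, 10), (3, 14), (6, 17)])
# The shortcuts shrink the average path length, yet clustering barely changes.
```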
See the article: "Cataloging the connections," by Tom Siegfried. Science News, 22 February 2014, pages 22-26.[Learn more about math and the brain in this Mathematical Moments podcast with Van Weeden.] Image: L.L.Wald and V.J.Wedeen, Martinos Center for Biomedical Imaging and the Human Connectome Project.
--- Ben Polletta
"Wikipedia-size maths proof too big for humans to check," by Jacob Aron. New Scientist, 17 February 2014.
This article reports on a claim of a proof of an important result -- a proof carried out using a computer. The result concerns a conjecture made by the legendary mathematician Paul Erdős, called the Erdős Discrepancy Conjecture. Suppose we have an infinite sequence of +1s and -1s. Erdős conjectured that for any number C we specify, there are a step size d and a length n such that the sum of the terms x_d + x_{2d} + ... + x_{nd} is bigger than C in absolute value; the largest such sum is called the "discrepancy" of the sequence. Two University of Liverpool mathematicians, Boris Konev and Alexei Lisitsa, have posted a preprint that claims to prove not the full Erdős Discrepancy Conjecture, but only the case where C=2. Using a computer, they were able to prove that any sequence of +1s and -1s of length 1161 contains such a progression whose sum exceeds 2 in absolute value (1160 is the longest a sequence can be while keeping its discrepancy no bigger than 2). "Establishing this took a computer nearly 6 hours and generated a 13-gigabyte file detailing its working," Aron writes. "[Konev and Lisitsa] compare this to the size of Wikipedia, the text of which is a 10-gigabyte download." The proof is so big that no human can check it thoroughly. So is the result really proved? No one is quite sure. But the article quotes Gil Kalai of Hebrew University as saying, "I'm not concerned by the fact that no human mathematician can check this, because we can check it with other computer approaches."
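The discrepancy of a finite ±1 sequence is straightforward to compute by brute force, which is what makes small cases amenable to machine search (a naive sketch for intuition; Konev and Lisitsa's actual proof encoded the problem for a SAT solver):

```python
def discrepancy(x):
    """Largest |x_d + x_{2d} + ... + x_{nd}| over all step sizes d and lengths n.
    The math is 1-indexed; the Python list is 0-indexed."""
    best = 0
    for d in range(1, len(x) + 1):
        partial = 0
        for i in range(d - 1, len(x), d):
            partial += x[i]
            best = max(best, abs(partial))
    return best

# The alternating sequence looks balanced, but its step-2 subsequence
# x_2, x_4, x_6 is -1, -1, -1, so those partial sums drift to -3:
print(discrepancy([1, -1, 1, -1, 1, -1]))  # 3
```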
[Editor: This topic was also covered in "Hey Erdős, We Solved Your Really Hard Math Problem, But No You Can't Check Our Work," by News Staff, Science 2.0, 26 February 2014.]
--- Allyn Jackson
On the significance of p-values, by Lisa DeKeukelaere
Although scientists across a range of disciplines commonly use a statistic known as a p-value to evaluate the significance of observations in a data set, some researchers argue that the p-value is often misapplied and misinterpreted. The p-value for an observed correlation between two variables, for example, is the probability of obtaining such observations if no such correlation exists, and scientists often consider a correlation significant--and worthy of publication--if this probability is less than 0.05. The problem is that the p-value alone says nothing about how plausible the correlation was in the first place, and results with significant p-values often are not reproducible: when true correlations are rare, even a 0.05 threshold produces a high rate of false positives. Another problem is "p-hacking," the engineering of data collection and analysis to obtain a significant p-value. Some researchers propose that applying multiple statistical techniques to evaluate the validity of results would help mitigate some of the problems associated with p-values, but moving away from a statistic as widely popular as the p-value probably will require a cultural shift in statistical instruction, analysis, and presentation.
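The false-positive arithmetic behind the 0.05 threshold is easy to simulate. The sketch below (an illustrative simulation, not from Nuzzo's article) runs thousands of experiments in which the null hypothesis is true -- a fair coin flipped 100 times -- and computes a two-sided p-value from the normal approximation; roughly 5% of them come out "significant" purely by chance:

```python
import math
import random

def p_value_fair_coin(heads, n):
    """Two-sided p-value for 'the coin is fair', using the normal
    approximation to the binomial: P(|Z| >= z) = erfc(z / sqrt(2))."""
    z = abs(heads - n / 2) / math.sqrt(n / 4)
    return math.erfc(z / math.sqrt(2))

random.seed(2014)
trials, flips = 5000, 100
false_positives = sum(
    p_value_fair_coin(sum(random.random() < 0.5 for _ in range(flips)), flips) < 0.05
    for _ in range(trials)
)
rate = false_positives / trials
# rate hovers near 0.05: one 'discovery' in twenty, with no effect present at all
```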
See the article: "Statistical Errors," by Regina Nuzzo. Nature, 13 February 2014, pages 150-152.
--- Lisa DeKeukelaere
"The time President James Garfield wrote a mathematical proof," by Mia Risra. io9, 13 February 2014.
This very brief piece focuses on "the time U.S. president James Garfield wrote a proof for the Pythagorean Theorem." The author embeds a video from Khan Academy explaining the math behind Garfield's proof.
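For reference, Garfield's argument places two copies of a right triangle with legs a, b and hypotenuse c so that they form a trapezoid together with the isosceles right triangle between them, then computes the trapezoid's area in two ways:

```latex
\underbrace{\frac{(a+b)}{2}\,(a+b)}_{\text{trapezoid area}}
  = \underbrace{\frac{ab}{2} + \frac{ab}{2}}_{\text{two right triangles}}
  + \underbrace{\frac{c^2}{2}}_{\text{middle triangle}}
\;\Longrightarrow\; a^2 + 2ab + b^2 = 2ab + c^2
\;\Longrightarrow\; a^2 + b^2 = c^2.
```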
--- Annette Emerson
"Mathematics: Why the brain sees maths as beauty," by James Gallagher. BBC News, 12 February 2014.
Equation beauty is in the brain of the mathematical beholder, according to a recent brain scan study of 15 mathematicians viewing 60 equations. Just as observing a beautiful painting activates the areas of the brain that process emotion, so too did some of the equations the mathematicians evaluated as being beautiful. Of the 60 equations, Euler's identity was the most beloved. One professor explained that the identity's beauty derives from its combination of simplicity and profundity, using the five most important mathematical constants (0, 1, e, pi, and i) and the three most important operations (addition, multiplication, and exponentiation). Another professor highlighted the beauty he derived from viewing one of Fermat's proofs involving prime numbers, which he said was motivational for mathematicians. In contrast, the mathematicians ranked Ramanujan's infinite series and Riemann's functional equation as the ugliest equations viewed.
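Euler's identity, e^(i*pi) + 1 = 0, can even be checked numerically in a line of Python (exactly zero in the mathematics; floating-point roundoff leaves a tiny residue):

```python
import cmath

# e^(i*pi) + 1 should vanish; roundoff leaves a residue on the order of 1e-16.
residual = abs(cmath.exp(1j * cmath.pi) + 1)
print(residual)
```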
[Editor: See also: "The neuroscience of mathematical beauty, or, Equation beauty contest," a blog post by Peter Rowlett, The Aperiodical, 13 February 2014.]
--- Lisa DeKeukelaere
"Sound, light and water waves and how scientists worked out the mathematics," by Alok Jha. Guardian, 8 February 2014.
In this article, the author begins by defining waves -- "oscillations that carry energy from one place to another" -- and the one-dimensional wave equation, ∂²u/∂t² = v² ∂²u/∂x², where v is the speed of the wave. We are familiar with the physical manifestations of waves: "a material (water, metal, air, etc.) deforms back and forth around a fixed point." There are two types of waves: transverse, such as those formed by dropping a stone in a body of water, and longitudinal, of which sound waves are an example. "In both cases," Jha writes, "the water or air molecules [respectively] remain, largely, in the same place as they started, as the wave travels through the material. The one-dimensional wave equation…describes how much any material is displaced, over time, as the wave proceeds."
Jha then describes the "long genesis" of the wave equation, "with scientists from many fields circling around its mathematics across the centuries. Among many others, Daniel Bernoulli, Jean le Rond d'Alembert, Leonhard Euler, and Joseph-Louis Lagrange realized that there was a similarity in the [math] of how to describe waves in strings, across surfaces and through solids and fluids." Later, James Clerk Maxwell used his eponymous equations (for describing the interaction of electric and magnetic fields in a vacuum) to discover an electromagnetic wave equation with v equal to the speed of light. This recognition that "electromagnetic waves…are transverse oscillations of the electric and magnetic fields," and that light is an electromagnetic wave, led to the prediction and discovery of electromagnetic waves of other wavelengths, including microwaves, infrared, radio waves, ultraviolet light, X-rays, and gamma rays. "The wave equation has also proved useful in understanding…quantum physics," Jha writes, in which electrons are described, "not as a well-defined object in space but as quantum waves for which it is only possible to describe probabilities for position, momentum or other basic properties."
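The equation is also simple to simulate. The sketch below (illustrative, not from the article) advances u_tt = v² u_xx with finite differences for a string pinned at both ends and plucked into the shape sin(πx): the exact solution is the standing wave sin(πx)cos(πvt), which returns inverted at time t = 1 for v = 1.

```python
import math

n = 100                      # grid points on [0, 1], ends held fixed
v = 1.0                      # wave speed
dx = 1.0 / n
dt = 0.5 * dx / v            # obeys the CFL stability condition v*dt/dx <= 1
c2 = (v * dt / dx) ** 2

u_prev = [math.sin(math.pi * i * dx) for i in range(n + 1)]  # initial shape
u = u_prev[:]                # zero initial velocity

for _ in range(200):         # 200 steps of dt = 0.005 reaches t = 1
    u_next = [0.0] * (n + 1)             # endpoints stay pinned at 0
    for i in range(1, n):
        u_next[i] = 2 * u[i] - u_prev[i] + c2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u_prev, u = u, u_next

# At t = 1 the string should be approximately -sin(pi * x),
# so the midpoint u[n // 2] sits near -1.
```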
--- Claudia Clark
"Matrix villain spawns 177,000 ways to knot a tie," by Jacob Aron. New Scientist, 7 February 2014.
While the interplay between experiment and theory is what drives science forward, it's not something frequently encountered in the fashion world. But last month, applied algebraic topologist Mikael Vejdemo-Johansson and his colleagues posted a paper to the arXiv ("More ties than we thought") that dramatically expanded the theoretical framework used to understand necktie knots. His theory is the first to include a class of anomalous knots such as the "Merovingian," discovered in 2003 by experimental sartorialists the Wachowskis. The standard model of tie tying was developed in 1999 by Thomas Fink and Yong Mao. The pair used formal language theory to describe the process of tie knotting as a series of moves towards or away from the left, center, and right of the chest. Assuming a regularity property which could be described intuitively as "being covered by a flat stretch of fabric," and that tucks would be reserved for the final move in a knot, Fink and Mao estimated that there were 85 possible necktie knots. But shortly after the development of their theory, evidence of knots that violated its assumptions began to surface. These new knots -- including the Merovingian as well as the Eldredge and the Trinity -- had folded, rather than flat, outer surfaces, and called for tucks in the middle of the tying process. More surprisingly, they exceeded a theoretical upper bound Fink and Mao had established on the number of moves possible before the occurrence of a finite-time blowup known as "running out of tie." The Fink-Mao theory allows only 8 moves, but some newly discovered knots exceed this limit, with the Eldredge requiring 11 moves. To account for these knots, Vejdemo-Johansson and his colleagues used a new bound of 11 tie-tying moves, and redefined moves to include tucks and windings either clockwise or anticlockwise around the passive end of the tie.
Their theory estimates the number of possible neck tie knots at 177,147, showing that most of the universe of tie knots is made up of non-classical knots like the Merovingian. Taking their work further, Vejdemo-Johansson's team has created a Tie Knot Generator tool allowing random exploration of the space of knots, opening the door to an exciting new world of experimental fashion. (Photo by Mikael Vejdemo Johansson.)
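The headline figure is exactly a power of three: 177,147 = 3^11, consistent with three choices of winding direction at each of up to 11 moves (a back-of-envelope arithmetic check, not the paper's actual language-theoretic enumeration):

```python
# Three move choices raised to the 11-move bound gives the headline count.
count = 3 ** 11
print(count)  # 177147
```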
[Editor: The article "Scientist discovers there are 177,000 ways to knot a tie after being inspired by Matrix villain," by Dan Bloom, Daily News, 8 February 2014 includes diagrams from Mikael Vejdemo-Johansson's Tie Knot Generator. This was also covered on NPR ("The Thousands Of Ways To Tie A Tie," 15 February 2014), among other news media.]
--- Ben Polletta
"Science 2013 Visualization Challenge: The Life Cycle of a Bubble Cluster: Insight from Mathematics, Algorithms, and Supercomputers," Science, 7 February 2014, pages 599-610.
Mathematical modeling is behind an Honorable Mention winner in the 2013 Science Visualization Challenge. Robert I. Saye and James A. Sethian used linked partial differential equations to model the intricate processes in the life cycle of a cluster of foam bubbles. The original research article appeared in the May 10 issue of Science (see the press release about the research, which links to a video), and their visualization won Honorable Mention in the Posters and Graphics category. Saye says that he is often asked, "Isn't that just a photograph of soap bubbles?," but he adds, "Naturally we are eager to point out that it is in fact a visualization of a physics computational model." Hear Sethian talk about this work (the Mathematical Moment podcast also includes Frank Morgan and students talking about the Double Bubble Conjecture). (Photo: Robert I. Saye and James A. Sethian, Lawrence Berkeley National Laboratory and the University of California, Berkeley.)
--- Mike Breen
"In the End, It All Adds Up to – 1/12," by Dennis Overbye. New York Times, 3 February 2014.
If you were one of the people disgruntled by the at best coincidental accuracy of Numberphile's "derivation" of the "identity" 1+2+3+4+...= -1/12 last year, or by Phil Plait's brave but breathless and credulous coverage of the video for his Slate blog Bad Astronomy (part one and follow-up), Dennis Overbye's coverage of the video for the Grey Lady may smooth your ruffled feathers. As with most things (everything?) in mathematics, the magic is all bound up in the equals sign, a placeholder for a lecture's worth of qualifying assumptions and definitions. The rigorous statements of the identity involve analytic continuation of the Riemann zeta function to -1 or, more interestingly, the loosely defined technique of regularization. According to mathematical physicist Edward Frenkel, the gist of it is that the divergent sum has three parts -- one that blows up, one that vanishes, and -1/12. Without these qualifications (as all parties have by now acknowledged), the formal manipulations of divergent series portrayed in Numberphile's video have no more validity than the infamous "proof" that 2 = 1. But even after the equals sign is unpacked, the gee-whiz factor remains, as regularized sums like this appear routinely in the calculation of physical quantities -- such as the energy of the electron -- which are known to be finite. The resulting predictions of quantum electrodynamics are among the most accurately verified in all of science (see "Precision tests of QED"). Indeed, the numbers 12 and 24 appear throughout string theory, accounting for the 26 dimensions required for a consistent bosonic string theory (see "My Favorite Numbers: 24," by John Baez). Why this should be the case remains, as Phil Plait put it, a "bizarre, brain-melting" mystery.
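Frenkel's three-part decomposition can be seen numerically. Regularize 1 + 2 + 3 + ... by damping each term n with a factor e^(-n*eps): the sum splits into a piece that blows up like 1/eps², a piece that vanishes with eps, and the constant -1/12 (an illustrative computation of this standard trick, not taken from Overbye's article):

```python
import math

def damped_sum(eps, terms=10_000):
    """Sum of n * exp(-n * eps), a regularized stand-in for 1 + 2 + 3 + ..."""
    return sum(n * math.exp(-n * eps) for n in range(1, terms + 1))

eps = 0.01
# Subtract the divergent 1/eps**2 piece; what remains is close to -1/12,
# up to a correction of order eps**2.
finite_part = damped_sum(eps) - 1 / eps**2
print(finite_part)  # ≈ -0.0833 = -1/12
```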
--- Ben Polletta
"Never Say Never," by David J. Hand. Scientific American, February 2014, pages 72-75.
In this preview of the recently published book "The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day," the author, statistician and emeritus professor of mathematics David Hand, uses examples to introduce his principle and the mathematical laws behind a few of its key strands. According to the first law, the law of truly large numbers, "given enough opportunities, we should expect a specified event to happen, no matter how unlikely it may be at each opportunity." However, it is not always obvious when the number of opportunities for an event is very large. A good example of this is the "birthday problem," which poses this question: "How many people must be in the room to make it more likely than not that two of them share the same birthday?" The answer -- 23 people -- may strike you as surprisingly small... until you consider the law of combinations: while the probability that one of these people shares your birthday is very small (0.06), the probability that some two people in the room share a birthday is much larger (0.51), because the number of combinations of 2 people out of 23 is fairly large (253). Professor Hand also applies his improbability principle to lotteries, including instances when lotteries have produced the same group of winning numbers, sometimes within days of each other. Again, he shows how such events are not as extremely unlikely as they might seem.
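Both probabilities in the birthday example take a line each to compute (ignoring leap days, as the standard treatment does):

```python
from math import prod

n = 23
# P(at least two of n people share a birthday) = 1 - P(all birthdays distinct)
p_any_pair = 1 - prod((365 - i) / 365 for i in range(n))   # ≈ 0.507

# P(at least one of the other 22 shares *your* birthday) is far smaller:
p_your_day = 1 - (364 / 365) ** (n - 1)                    # ≈ 0.059
```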
--- Claudia Clark