Linkages (Part II) - Old Wine in New Bottles

5. Folding Rulers

In addition to the nifty results just described, there is a somewhat older intriguing result associated with (plane) path linkages. It comes from thinking of the linkage as a model of a carpenter's ruler: a ruler divided into pieces of different lengths, hinged where the pieces meet so that the ruler can be folded. The problem, originally posed by Sue Whitesides (McGill), is to determine the smallest case into which the ruler will fit when folded. Here we are again idealizing a physical ruler, because we imagine that the ruler is allowed to fold onto itself so that it lies along a line segment (whose length is the size of the case), with no thickness resulting from the segments that lie on top of each other!

A folded ruler

Note that attaching the same lengths in a different order can change the size of the smallest case. For small rulers one can work out, for a particular set of lengths attached in a particular order, the smallest case that will hold the ruler. The abstract problem we are raising is this: Given a collection of numbers l1, l2, ..., ln, interpreted as the lengths of the sections of the carpenter's ruler, with the first section having length l1, the second section having length l2, ..., and the last section having length ln, can the ruler be stored in a case of length at most K? (For the problem to make sense, K should be at least as large as the largest link in the ruler; otherwise there is no hope of fitting the ruler into a case of length K.)
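To see how such a small-ruler computation might go, here is a minimal brute-force sketch in Python (the function names and the convention of encoding a fold direction as +1 or -1 are illustrative choices, not part of the original problem statement). It tries every way of folding each link to the left or to the right and reports the length of the shortest case that works; since there are 2^(n-1) foldings to try, this is only practical for small rulers.

    from itertools import product

    def case_length(lengths, directions):
        """Length of the shortest case holding one particular folding.
        lengths[i] is the length of link i; directions[i] is +1 (fold to
        the right) or -1 (fold to the left).  Joints are placed on a line
        starting at position 0."""
        positions = [0]
        for l, d in zip(lengths, directions):
            positions.append(positions[-1] + d * l)
        return max(positions) - min(positions)

    def min_case_length(lengths):
        """Smallest case length over all foldings; by symmetry the first
        link can be fixed to point to the right."""
        return min(case_length(lengths, (+1,) + rest)
                   for rest in product((+1, -1), repeat=len(lengths) - 1))

    # A ruler with links 3, 2, 2, 4 fits in a case of length 4, the length
    # of its longest link -- the best one could possibly hope for.
    print(min_case_length([3, 2, 2, 4]))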

One of the important theoretical developments that accompanied the digital computer was the attempt to understand how much work is necessary to solve a problem, whether the solution is attempted by hand or with a computer. Most problems have a natural size which allows one to compare different versions of the problem. Furthermore, for a given size of problem there may be different procedures or algorithms that solve it. Such algorithms require a certain amount of time and space (memory in a computer). By changing the algorithm one may be able to find the solution using the same amount of memory but less time; alternatively, a different algorithm may use more time but less space. Complexity Theory studies issues of this kind: how much time and space it takes to solve problems of a particular size, and what the best algorithm might be for a given budget of space and time. Are there problems so hard that a computer cannot solve them at all?

Let us use n to denote the size of a path linkage, where n is the number of links in the path, since this seems a natural way to measure size in this case. (A different idea might be to measure the size of the problem in terms of the length of the largest link in the path. For some problems there may be several natural ways to measure size, and different measures may lead to different insights and solutions.) One would expect that solving the carpenter's ruler problem for a ruler with 10 links would require more work than solving it for a ruler with 5 links. Of course, some 10-link ruler problems may be easier to solve than some 5-link ruler problems. We are interested in how much work is involved in solving the hardest version of a ruler problem of a given size. One way of measuring this is to determine an expression (function) which gives, in terms of the problem size, the amount of work that the problem takes in the worst case. For example, the amount of work might be a polynomial in n (e.g. n³) or it might be exponential in n (e.g. 5ⁿ). Eventually (for large n) exponential functions grow much more rapidly than polynomials. Thus, when one finds a polynomial algorithm for a problem, it is much more likely that large examples of that problem (e.g. n = 40) can be solved than if the only algorithms one knows for the problem are exponential.
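The gap between these growth rates is easy to see numerically. The snippet below is purely illustrative (it is not tied to any particular ruler-folding algorithm); it prints n³ next to 5ⁿ for a few values of n.

    # Polynomial versus exponential growth: n**3 next to 5**n.
    for n in (5, 10, 20, 40):
        print(n, n**3, 5**n)

Already at n = 20 the exponential value has 14 digits while n³ is only 8000, and at n = 40 the exponential value has 28 digits.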

A key idea in the theory of complexity is that some problems have the property that if someone provides you with a proposed answer to the problem, it is easy to check that the proposed answer is actually valid. If a proposed solution to a problem can be checked with a polynomial amount of work, the problem is said to be in NP. A problem is said to be NP-complete if it is in NP and is at least as hard as any other problem in NP. The NP-complete problems have the following ambiguous status: if any NP-complete problem could be shown to be solvable in polynomial time, then all the NP-complete problems could be solved in polynomial time, and if any NP-complete problem could be shown to require an exponential amount of work, then all the NP-complete problems would require exponential time. In other words, NP-complete problems are all equally easy or equally hard, although we do not know which of the two situations holds!
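The ruler-folding problem illustrates this: a proposed answer is just a choice of fold direction at each hinge, and it is easy to check whether that choice keeps the ruler inside a case of length K. The sketch below (again with an illustrative +1/-1 encoding of directions) does the check in time proportional to the number of links.

    def verify_folding(lengths, directions, K):
        """Check that folding link i in direction directions[i] (+1 right,
        -1 left) keeps every joint of the ruler inside some interval of
        length K.  The work is linear in the number of links."""
        positions = [0]
        for l, d in zip(lengths, directions):
            positions.append(positions[-1] + d * l)
        return max(positions) - min(positions) <= K

    # A certificate that the ruler with links 3, 2, 2, 4 fits in a case
    # of length 4:
    print(verify_folding([3, 2, 2, 4], [+1, -1, +1, -1], 4))  # True

Finding such a certificate in the first place is the hard part; checking one is easy.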

What John Hopcroft, Deborah Joseph, and Sue Whitesides accomplished was to show that the ruler folding problem is NP-complete. They did this by relating it to the set partition problem, which is already known to be NP-complete. Set partition is the following problem: given n positive integer weights (which need not all be distinct), determine whether the weights can be divided into two sets X and Y such that the sum of the elements in X is exactly equal to the sum of the elements in Y. By contrast, Hopcroft, Joseph, and Whitesides also showed that a chain linkage whose longest link is M units long can always be folded into a case of length at most 2M, and that such a folding can be found by an algorithm whose running time is linear in the number of links!
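One simple greedy strategy achieves a case of length 2M (a sketch consistent with the bound just quoted, not necessarily the authors' own procedure): lay the links down inside the interval from 0 to 2M, going to the right whenever the next link fits and reversing direction otherwise. Since no link is longer than M, a reversal can never push a joint past the left end of the interval.

    def fold_into_2M(lengths):
        """Greedily fold a ruler into the interval [0, 2*M], where M is the
        length of the longest link, in time linear in the number of links.
        Returns the list of joint positions."""
        M = max(lengths)
        positions = [0]
        for l in lengths:
            x = positions[-1]
            positions.append(x + l if x + l <= 2 * M else x - l)
        return positions

    print(fold_into_2M([3, 2, 2, 4]))  # [0, 3, 5, 7, 3], all inside [0, 8]

So a case twice as long as the longest link always suffices; the hard question is whether a shorter case, down to length K, will do.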

Part of the motivation of Hopcroft, Joseph, and Whitesides in studying the complexity of ruler folding was that it is a very simplified model of a motion planning problem for a robot. As engineers design increasingly complex robotic systems, computer scientists and mathematicians have been studying the complexity of the algorithms involved in implementing such systems.

The results of Hopcroft, Joseph, and Whitesides, of Connelly, Demaine, Rote, and Streinu, and of others have stimulated new work in areas that build both on their proof methods and on the domain of ideas (linkages, folding, rigidity) they worked in. Their work supports the observation that solving mathematics problems rarely closes off a field. Rather, mathematical progress lays the groundwork for new applications and further progress.



