## We Recommend a Singular Value Decomposition

David Austin

In this article, we will offer a geometric explanation of singular value decompositions and look at some of the applications of them.
## Introduction

The topic of this article, the singular value decomposition, is one that should be a central part of an undergraduate mathematics major's linear algebra curriculum. A singular value decomposition provides a convenient way for breaking a matrix, which perhaps contains some data we are interested in, into simpler, meaningful pieces.

## The geometry of linear transformations

Let us begin by looking at some simple matrices, namely those with two rows and two columns. Our first example is the diagonal matrix

$$M = \begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix}$$
Geometrically, we may think of a matrix like this as taking a point $(x, y)$ in the plane and transforming it into another point using matrix multiplication:

$$\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 3x \\ y \end{bmatrix}$$
The effect of this transformation is shown below: the plane is horizontally stretched by a factor of 3, while there is no vertical change.
Now let's look at

$$M = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$$
which produces this effect
It is not so clear how to describe simply the geometric effect of the transformation. However, let's rotate our grid through a 45 degree angle and see what happens.
Ah ha. We see now that this new grid is transformed in the same way that the original grid was transformed by the diagonal matrix: the grid is stretched by a factor of 3 in one direction, while it is left unchanged in the other. This is a very special situation that results from the fact that the matrix $M$ is symmetric; that is, $M$ equals its own transpose. If we have a symmetric 2 × 2 matrix, it turns out that we may always rotate the grid in the domain so that the matrix acts by stretching, and perhaps reflecting, along the two grid directions.

Said with more mathematical precision, given a symmetric matrix $M$, we may find a set of orthogonal vectors $v_i$ so that $Mv_i$ is a scalar multiple of $v_i$; that is,
$$Mv_i = \lambda_i v_i$$

where $\lambda_i$ is a scalar. Because of this property, we call the vectors $v_i$ *eigenvectors* of $M$; the scalars $\lambda_i$ are called *eigenvalues*. An important fact, which is easily verified, is that eigenvectors of a symmetric matrix corresponding to different eigenvalues are orthogonal.

If we use the eigenvectors of a symmetric matrix to align the grid, the matrix stretches and reflects the grid in the same way that it does the eigenvectors.

The geometric description we gave for this linear transformation is a simple one: the grid is simply stretched in one direction. For more general matrices, we will ask if we can find an orthogonal grid that is transformed into another orthogonal grid. Let's consider a final example using a matrix that is not symmetric:

$$M = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$$
This matrix produces the geometric effect known as a *shear*.
It's easy to find one family of eigenvectors along the horizontal axis. However, our figure above shows that these eigenvectors cannot be used to create an orthogonal grid that is transformed into another orthogonal grid. Nonetheless, let's see what happens when we rotate the grid first by 30 degrees.
Notice that the angle at the origin formed by the red parallelogram on the right has increased. Let's next rotate the grid by 60 degrees.
Hmm. It appears that the grid on the right is now almost orthogonal. In fact, by rotating the grid in the domain by an angle of roughly 58.28 degrees, both grids are now orthogonal.
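We can hunt for this angle numerically. Below is a minimal sketch; it assumes the matrix in question is the shear $M = [[1, 1], [0, 1]]$, which is consistent with the 58.28-degree figure quoted here. It simply rotates a pair of perpendicular unit vectors through many angles and keeps the angle whose images under $M$ are closest to orthogonal.

```python
import numpy as np

# Assumed matrix for this example: the shear consistent with the figures here.
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Rotate a pair of perpendicular unit vectors through each angle theta and
# measure how far their images under M are from being orthogonal.
theta = np.linspace(0, np.pi / 2, 100_000)
v1 = np.vstack([np.cos(theta), np.sin(theta)])    # first grid direction
v2 = np.vstack([-np.sin(theta), np.cos(theta)])   # perpendicular direction
dots = np.sum((M @ v1) * (M @ v2), axis=0)        # (M v1) . (M v2) at each angle

# The angle where the dot product vanishes gives the orthogonal grids.
best = theta[np.argmin(np.abs(dots))]
print(round(np.degrees(best), 2))   # 58.28
```

A brute-force scan is enough here; the article explains the systematic way to find this grid in a later section.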
## The singular value decomposition

This is the geometric essence of the singular value decomposition for 2 × 2 matrices: for any 2 × 2 matrix, we may find an orthogonal grid that is transformed into another orthogonal grid. We will express this fact using vectors: with an appropriate choice of orthogonal unit vectors $v_1$ and $v_2$, the vectors $Mv_1$ and $Mv_2$ are orthogonal.
We will use $u_1$ and $u_2$ to denote unit vectors in the direction of $Mv_1$ and $Mv_2$. The lengths of $Mv_1$ and $Mv_2$, denoted by $\sigma_1$ and $\sigma_2$, describe the amount that the grid is stretched in those particular directions. These numbers are called the *singular values* of $M$. (In this case, the singular values are the golden ratio and its reciprocal, but that is not so important here.)
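As a quick numerical aside, these values can be checked directly; the sketch below again assumes the matrix is the shear $M = [[1, 1], [0, 1]]$, whose singular values are exactly the golden ratio and its reciprocal.

```python
import numpy as np

# Assumed matrix for this example: the shear from the figures above.
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# np.linalg.svd returns the singular values in decreasing order.
s = np.linalg.svd(M, compute_uv=False)

phi = (1 + 5 ** 0.5) / 2   # the golden ratio
print(np.allclose(s, [phi, 1 / phi]))   # True
```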
We therefore have
$$Mv_1 = \sigma_1 u_1$$
$$Mv_2 = \sigma_2 u_2$$

We may now give a simple description for how the matrix $M$ treats a general vector $x$. Since the vectors $v_1$ and $v_2$ are orthogonal unit vectors, we have
$$x = (v_1 \cdot x)\, v_1 + (v_2 \cdot x)\, v_2$$

This means that
$$Mx = (v_1 \cdot x)\, Mv_1 + (v_2 \cdot x)\, Mv_2$$
$$Mx = (v_1 \cdot x)\, \sigma_1 u_1 + (v_2 \cdot x)\, \sigma_2 u_2$$

Remember that the dot product may be computed using the vector transpose:
$$v \cdot x = v^T x$$

which leads to
$$Mx = u_1 \sigma_1 v_1^T x + u_2 \sigma_2 v_2^T x$$
$$M = u_1 \sigma_1 v_1^T + u_2 \sigma_2 v_2^T$$

This is usually expressed by writing
$$M = U \Sigma V^T$$

where $U$ is a matrix whose columns are the vectors $u_1$ and $u_2$, $\Sigma$ is a diagonal matrix whose entries are $\sigma_1$ and $\sigma_2$, and $V$ is a matrix whose columns are $v_1$ and $v_2$ (the superscript $T$ denotes the transpose of $V$). This shows how to decompose the matrix $M$ into the product of three matrices: $V$ describes an orthonormal basis in the domain, $U$ describes an orthonormal basis in the co-domain, and $\Sigma$ describes how much the vectors in $V$ are stretched to give the vectors in $U$.

## How do we find the singular decomposition?

The power of the singular value decomposition lies in the fact that we may find it for *any* matrix. How do we do it? Let's look at our earlier example and add the unit circle in the domain. Its image will be an ellipse whose major and minor axes define the orthogonal grid in the co-domain.
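Before continuing, here is a quick numerical check of the factorization $M = U\Sigma V^T$; this is a sketch using the assumed shear matrix $[[1, 1], [0, 1]]$ from earlier, and note that numpy returns $V^T$ rather than $V$.

```python
import numpy as np

M = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # the assumed shear matrix once more

U, s, Vt = np.linalg.svd(M)  # numpy hands back U, the sigmas, and V^T

# The columns of U and V are the orthonormal vectors u_i and v_i ...
print(np.allclose(U.T @ U, np.eye(2)))       # True
print(np.allclose(Vt @ Vt.T, np.eye(2)))     # True

# ... and the three factors multiply back to M.
print(np.allclose(U @ np.diag(s) @ Vt, M))   # True
```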
Notice that the major and minor axes are defined by $Mv_1$ and $Mv_2$.
In other words, the function $|Mx|$ on the unit circle has a maximum at $v_1$ and a minimum at $v_2$. This reduces the problem to a standard calculus problem in which we optimize a function over the unit circle. It turns out that the critical points of this function occur at the eigenvectors of the matrix $M^TM$. Since this matrix is symmetric, eigenvectors corresponding to different eigenvalues are orthogonal. This gives the family of vectors $v_i$.

The singular values are then given by $\sigma_i = |Mv_i|$, and the vectors $u_i$ are obtained as unit vectors in the direction of $Mv_i$. But why are the vectors $u_i$ orthogonal?

To explain this, we will assume that $\sigma_i$ and $\sigma_j$ are distinct singular values. We have
$$Mv_i = \sigma_i u_i$$
$$Mv_j = \sigma_j u_j.$$

Let's begin by looking at the expression $Mv_i \cdot Mv_j$ and assuming, for convenience, that the singular values are non-zero. On one hand, this expression is zero since the vectors $v_i$ and $v_j$, which are eigenvectors of the symmetric matrix $M^TM$, are orthogonal to one another:
$$Mv_i \cdot Mv_j = v_i^T M^T M v_j = v_i \cdot M^T M v_j = \lambda_j\, v_i \cdot v_j = 0.$$

On the other hand, we have
$$Mv_i \cdot Mv_j = \sigma_i \sigma_j\, u_i \cdot u_j = 0.$$

Therefore, $u_i$ and $u_j$ are orthogonal, so we have found an orthogonal set of vectors $v_i$ that is transformed into another orthogonal set $u_i$. The singular values describe the amount of stretching in the different directions.

In practice, this is not the procedure used to find the singular value decomposition of a matrix since it is not particularly efficient or well-behaved numerically.

## Another example

Let's now look at the singular matrix
The geometric effect of this matrix is the following:
In this case, the second singular value is zero so that we may write:
$$M = u_1 \sigma_1 v_1^T.$$

In other words, if some of the singular values are zero, the corresponding terms do not appear in the decomposition for $M$.

## Data compression

Singular value decompositions can be used to represent data efficiently. Suppose, for instance, that we wish to transmit the following image, which consists of an array of 15 × 25 black or white pixels.
Since there are only three types of columns in this image, as shown below, it should be possible to represent the data in a more compact form.
We will represent the image as a 15 × 25 matrix in which each entry is either a 0, representing a black pixel, or a 1, representing white. As such, there are 375 entries in the matrix.
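The redundancy is easy to see in code. The sketch below uses hypothetical pixel patterns, since the actual image is not reproduced here; any 15 × 25 matrix built from three linearly independent column types behaves the same way.

```python
import numpy as np

# Three hypothetical column types (the real image's columns differ, but the
# structure -- only three distinct, independent columns -- is what matters).
col_a = np.ones(15)
col_b = np.array([1, 1, 1] + [0] * 9 + [1, 1, 1], dtype=float)
col_c = np.array([1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1], dtype=float)

# Tile the three column types into a 15 x 25 matrix of 0s and 1s.
M = np.column_stack([col_a] * 5 + [col_b] * 5 + [col_c] * 5
                    + [col_b] * 5 + [col_a] * 5)

print(M.shape)                    # (15, 25): 375 entries in all
print(np.linalg.matrix_rank(M))   # 3: only three non-zero singular values

# Storing u_i (15 entries), v_i (25 entries) and sigma_i for i = 1, 2, 3:
print(3 * 15 + 3 * 25 + 3)        # 123 numbers instead of 375
```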
If we perform a singular value decomposition on $M$, we find the singular values
$$\sigma_1 = 14.72 \qquad \sigma_2 = 5.22 \qquad \sigma_3 = 3.31$$

Therefore, the matrix may be represented as
$$M = u_1 \sigma_1 v_1^T + u_2 \sigma_2 v_2^T + u_3 \sigma_3 v_3^T$$

This means that we have three vectors $v_i$, each of which has 15 entries, three vectors $u_i$, each of which has 25 entries, and three singular values $\sigma_i$. This implies that we may represent the matrix using only 123 numbers rather than the 375 that appear in the matrix. In this way, the singular value decomposition discovers the redundancy in the matrix and provides a format for eliminating it.

Why are there only three non-zero singular values? Remember that the number of non-zero singular values equals the rank of the matrix. In this case, we see that there are three linearly independent columns in the matrix, which means that the rank will be three.

## Noise reduction

The previous example showed how we can exploit a situation where many singular values are zero. Generally speaking, the large singular values point to where the interesting information is. For example, imagine we have used a scanner to enter this image into our computer. However, our scanner introduces some imperfections (usually called "noise") in the image.
We may proceed in the same way: represent the data using a 15 × 25 matrix and perform a singular value decomposition. We find the following singular values:
$$\sigma_1 = 14.15 \quad \sigma_2 = 4.67 \quad \sigma_3 = 3.00 \quad \sigma_4 = 0.21 \quad \sigma_5 = 0.19 \quad \ldots \quad \sigma_{15} = 0.05$$

Clearly, the first three singular values are the most important, so we will assume that the others are due to the noise in the image and make the approximation
$$M \approx u_1 \sigma_1 v_1^T + u_2 \sigma_2 v_2^T + u_3 \sigma_3 v_3^T$$

This leads to the following improved image.
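In code, this approximation is a rank-3 truncation of the SVD. Here is a minimal sketch; the noisy matrix is synthetic, standing in for the scanned image, which is not reproduced here.

```python
import numpy as np

def truncate(M, k):
    """Keep only the k largest singular values of M (a rank-k approximation)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# A synthetic stand-in: a rank-3 "image" plus a little scanner noise.
rng = np.random.default_rng(0)
clean = (rng.integers(0, 2, (15, 3)) @ rng.integers(0, 2, (3, 25))).astype(float)
noisy = clean + 0.05 * rng.standard_normal((15, 25))

denoised = truncate(noisy, 3)

# The truncation is the best rank-3 fit to the noisy matrix, so it sits at
# least as close to it as the clean rank-3 matrix does (Eckart-Young).
print(np.linalg.norm(noisy - denoised) <= np.linalg.norm(noisy - clean))  # True
```

Dropping the small singular values discards most of the noise while keeping the three terms that carry the image.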
## Data analysisNoise also arises anytime we collect data: no matter how good the instruments are, measurements will always have some error in them. If we remember the theme that large singular values point to important features in a matrix, it seems natural to use a singular value decomposition to study data once it is collected. As an example, suppose that we collect some data as shown below:
We may take the data and put it into a matrix:
and perform a singular value decomposition. We find the singular values
$$\sigma_1 = 6.04 \qquad \sigma_2 = 0.22$$

With one singular value so much larger than the other, it may be safe to assume that the small value of $\sigma_2$ is due to noise in the data and that this singular value would ideally be zero. In that case, the matrix would have rank one, meaning that all the data lies on the line defined by $u_1$.
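This situation is easy to reproduce. The sketch below uses hypothetical data points scattered near the line $y = 2x$ (the article's actual data set is not reproduced here), stored as the columns of a 2 × 20 matrix.

```python
import numpy as np

# Hypothetical data: 20 points near the line y = 2x, as columns of a matrix.
rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 20)
data = np.vstack([x, 2 * x + 0.1 * rng.standard_normal(20)])

U, s, Vt = np.linalg.svd(data, full_matrices=False)

# One singular value dwarfs the other: the data is essentially 1-dimensional.
print(s[0] / s[1] > 10)              # True

# u_1 points along the line the data clusters around.
line = np.array([1.0, 2.0]) / np.sqrt(5.0)
print(abs(U[:, 0] @ line) > 0.99)    # True: u_1 is (nearly) parallel to it
```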
This brief example points to the beginnings of a field known as *principal component analysis*, a set of techniques that uses singular values to detect dependencies and redundancies in data.

In a similar way, singular value decompositions can be used to detect groupings in data, which explains why singular value decompositions are being used in attempts to improve Netflix's movie recommendation system. Ratings of movies you have watched allow a program to sort you into a group of others whose ratings are similar to yours. Recommendations may be made by choosing movies that others in your group have rated highly.

## Summary

As mentioned at the beginning of this article, the singular value decomposition should be a central part of an undergraduate mathematics major's linear algebra curriculum. Besides having a rather simple geometric explanation, the singular value decomposition offers extremely effective techniques for putting linear algebraic ideas into practice. All too often, however, a proper treatment in an undergraduate linear algebra course seems to be missing.

This article has been somewhat impressionistic: I have aimed to provide some intuitive insights into the central idea behind singular value decompositions and then illustrate how this idea can be put to good use. More rigorous accounts may be readily found.

## References

- Gilbert Strang,
**Linear Algebra and Its Applications**. Brooks Cole.

  Strang's book is something of a classic, though some may find it to be a little too formal.

- William H. Press *et al*, **Numerical Recipes in C: The Art of Scientific Computing**. Cambridge University Press.

  Authoritative, yet highly readable. Older versions are available online.

- Dan Kalman, A Singularly Valuable Decomposition: The SVD of a Matrix, *The College Mathematics Journal* **27** (1996), 2-23.

  Kalman's article, like this one, aims to improve the profile of the singular value decomposition. It also includes a description of how least-squares computations are facilitated by the decomposition.

- *If You Liked This, You're Sure to Love That*, *The New York Times*, November 21, 2008.

  This article describes Netflix's prize competition as well as some of the challenges associated with it.