**2. Vector spaces and graphs**

During the 19th century, mathematicians and scientists developed new tools for gaining insight into the concepts of location (points in a space), velocity, and force. These ideas were the outgrowth of a long progression of mathematics set in motion by Newton's spectacular synthesis, which put mathematical tools to work for physics. Newton was both a very great mathematician and a very great physicist.

We can represent velocities, which have a magnitude and direction, by line segments with arrows. In the diagram below, the two original vectors are shown in black. Vectors with the same magnitude and direction as these two are drawn in red. The sum of the original two vectors is found by placing the tail of one vector at the head of the other.

A new vector, shown in blue, indicates the sum or resultant of the original two. This is the famous *parallelogram law* of vector addition which provides the foundation for a geometry of vectors.

What follows are some rudiments of the theory of vector spaces. Introduce a coordinate system where the origin is at (0, 0) and the coordinates of the points at the heads of the vectors are (a, b) and (c, d). Define an addition on the coordinates of points by adding corresponding coordinates: the sum of (a, b) and (c, d) is (a + c, b + d). One can also introduce a multiplication of points by scalars (nonzero real numbers). For positive *r*, the point (ra, rb) represents a vector having the same direction as the vector (a, b) but a magnitude *r* times as big. (When *r* is 1/2, say, the vector is half as long.) When *r* is negative, we interpret this as meaning the direction of the vector is reversed and the length is |*r*| times as large. We have here the rudiments of an algebra of points, which can be regarded as vectors.

Define a pair of special vectors **i** (= (1, 0)) and **j** (= (0, 1)). The collection {**i**, **j**} is called a *basis*. Any vector (a, b) can be written as a**i** + b**j**, that is, as a linear combination of the two vectors **i** and **j**, and one cannot represent every vector using just one of these two basis vectors. Furthermore, neither **i** nor **j** is a mere multiple of the other.
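The coordinate operations just described can be sketched in a few lines of Python. This is a minimal illustration, not a full vector-space implementation; the helper names `add` and `scale` are ours, chosen for clarity.

```python
# A sketch of 2-dimensional vectors as coordinate pairs.

def add(v, w):
    """Parallelogram-law addition: add corresponding coordinates."""
    return (v[0] + w[0], v[1] + w[1])

def scale(r, v):
    """Scalar multiplication: stretch v by a factor of r."""
    return (r * v[0], r * v[1])

i = (1, 0)  # the basis vector i
j = (0, 1)  # the basis vector j

# Any vector (a, b) is the linear combination a*i + b*j.
a, b = 3, 4
v = add(scale(a, i), scale(b, j))
print(v)  # (3, 4)
```

Note that `add(scale(a, i), scale(b, j))` reconstructs (a, b) exactly, which is the sense in which {**i**, **j**} is a basis.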

Besides vectors, another natural place where vector spaces arise is in the theory of equations of first degree in several variables. First degree equations, the equations of straight lines in 2-space, of planes in 3-space, and the analogues of lines and planes in spaces of higher dimension, are of interest for many reasons. These include the simplicity of the form of a linear equation (remember that mathematicians like taxonomy) and the fact that they describe flat rather than curved things. An example of a linear equation in three variables is:

x - 2y + 4z = 0 (*)

This equation, like any equation of the form

ax + by + cz = 0 (**)

defines a plane in 3-space which passes through the origin of 3-space, the point (0, 0, 0). It is easy to see all planes of the form (**) pass through (0, 0, 0). The geometry of 3-space is such that two (different) planes are either parallel or intersect in a line. Two different planes of the form (**) have the origin in common, thus, can not be parallel planes. When two distinct planes have a point in common, they must have a line in common. If we have a third plane which goes through the same point, then the common intersection of the three planes must be a single point. Here is an example of a system of linear equations in 4-dimensional (real) space each of which passes through (0, 0, 0, 0):

x - 2y + 3z - 5w = 0

x + 3y - z + 2w = 0

x - 2y + 2z - w = 0

2x + y + 3z + w = 0
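As a quick check on the system above, one can compute the determinant of its coefficient matrix: when the determinant is nonzero, the only point common to all four hyperplanes is the origin. Here is a sketch using exact rational arithmetic from Python's `fractions` module; the helper name `determinant` is ours.

```python
from fractions import Fraction

# Coefficient matrix of the four homogeneous equations above.
A = [[1, -2, 3, -5],
     [1, 3, -1, 2],
     [1, -2, 2, -1],
     [2, 1, 3, 1]]

def determinant(m):
    """Determinant via Gaussian elimination with exact fractions."""
    m = [[Fraction(x) for x in row] for row in m]
    n = len(m)
    det = Fraction(1)
    for col in range(n):
        # Find a row with a nonzero entry in this column to use as pivot.
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = -det  # a row swap flips the sign
        det *= m[col][col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    return det

print(determinant(A))  # -40
```

Since the determinant is -40, not 0, the four hyperplanes meet only at (0, 0, 0, 0).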

Systems of equations of this kind, where the right hand side constant is 0, are known as *homogeneous* systems of equations. Insisting on having planes (or hyperplanes) pass through the origin seems very confining. After all, a plane that does not pass through the origin is geometrically identical to one that does; apart from how the two planes are positioned in space, they are otherwise alike. However, it turns out that the solutions of systems of homogeneous linear equations have two very nice properties: any scalar multiple of a solution is still a solution, and the sum of any two solutions is a solution. For example, x=2, y=5, and z=2 solves (*), as does x=4, y=10, and z=4, which was obtained by multiplying the first solution by 2. You can also check that the sum of the two solutions x=2, y=5, and z=2, and x=6, y=1, and z=-1, namely x=8, y=6, and z=1, is also a solution to (*). Note that the same is not true for a non-homogeneous linear equation. For example, x=4, y=0, and x=0, y=2 are both solutions to the equation x + 2y = 4, but neither the sum x=4, y=2 nor the scalar multiple x=12, y=0 is a solution.
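The closure properties just described can be verified directly. This sketch checks the solutions of (*) quoted above, then shows the contrast with the non-homogeneous equation x + 2y = 4; the helper names `satisfies_star` and `satisfies_line` are ours.

```python
# Equation (*): x - 2y + 4z = 0 (homogeneous).
def satisfies_star(x, y, z):
    return x - 2*y + 4*z == 0

s1 = (2, 5, 2)
s2 = (6, 1, -1)
print(satisfies_star(*s1), satisfies_star(*s2))  # True True

# The sum of two solutions and a scalar multiple of a solution
# are again solutions.
total = tuple(a + b for a, b in zip(s1, s2))  # (8, 6, 1)
double = tuple(2*a for a in s1)               # (4, 10, 4)
print(satisfies_star(*total), satisfies_star(*double))  # True True

# The non-homogeneous equation x + 2y = 4 behaves differently.
def satisfies_line(x, y):
    return x + 2*y == 4

print(satisfies_line(4, 0), satisfies_line(0, 2))  # True True
print(satisfies_line(4 + 0, 0 + 2))  # False: the sum of solutions fails
```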

These are just examples, but they illustrate properties shared by vectors and solutions to homogeneous linear equations: adding two members or multiplying one by a scalar yields a result that belongs to the original collection. Thus the solutions of homogeneous systems of linear algebraic equations (i.e. points that satisfy all the equations in the system) have the same kind of structure as the algebraic system for vectors discussed above.

Graphs are another class of structures widely studied by mathematicians and computer scientists. These are structures which consist of dots and lines, such as the one below.

This diagram might have been drawn to show the system of roads between the small villages on an island. Graphs have many theoretical as well as practical uses. Graphs and vector spaces seem like very different worlds, and in many respects they are, but in the next section we'll see connections between the two.
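A graph of the kind described, dots joined by lines, can be represented as an adjacency structure. The sketch below uses a Python dictionary; the village names are hypothetical stand-ins for the dots in the missing diagram.

```python
# A small graph as an adjacency list: each vertex maps to the set
# of vertices it is joined to by an edge (a road between villages).
roads = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C"},
}

# Each edge is recorded once at each of its two endpoints,
# so the sum of all the neighbor-set sizes is twice the edge count.
num_edges = sum(len(nbrs) for nbrs in roads.values()) // 2
print(num_edges)  # 4
```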