## Vector Spaces

Philip J. Erdelsky

May 15, 2015

Please e-mail comments, corrections and additions to the webmaster at pje@efgh.com.

Let *F* be a field and *n* a positive integer.
An *n*-dimensional __vector__ over *F* is an ordered *n*-tuple
* x = (x_{1}, x_{2}, ..., x_{n})* of elements of *F*.

We define addition of two vectors * x* and * y* componentwise:

x = (x_{1}, x_{2}, ..., x_{n}),

y = (y_{1}, y_{2}, ..., y_{n}),

x + y = (x_{1} + y_{1}, x_{2} + y_{2}, ..., x_{n} + y_{n}).

It is easy to show that the set of all *n*-dimensional vectors is a
commutative group under addition. The identity element is the
__zero vector__ * O = (0, 0, ..., 0)*, and the inverse of
* x* is *- x = (-x_{1}, -x_{2}, ..., -x_{n})*.

We also define the multiplication of a vector by a scalar *c*:

x= (x_{1}, x_{2}, ..., x_{n}),

cx= (cx_{1}, cx_{2}, ..., cx_{n}).

It is easy to show that for any scalars *c* and *d*
and any vectors * x* and * y*:

- c(**x** + **y**) = c**x** + c**y**
- (c + d)**x** = c**x** + d**x**
- c(d**x**) = (cd)**x**
- 1**x** = **x**
- 0**x** = **O**
- (-1)**x** = -**x**

The set of all *n*-dimensional vectors, with
these operations, is also called the *n*-dimensional
__coordinate space__ over the field *F*,
represented by *F ^{ n}*.
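The componentwise operations are easy to model in code. The following Python sketch is illustrative only (floats stand in for elements of an arbitrary field, and the helper names are hypothetical):

```python
# Vectors as tuples; Python floats stand in for elements of the field F.
# A minimal illustrative sketch of the componentwise operations above.

def vec_add(x, y):
    """Vector addition: (x1 + y1, x2 + y2, ..., xn + yn)."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def scalar_mul(c, x):
    """Scalar multiplication: (c*x1, c*x2, ..., c*xn)."""
    return tuple(c * xi for xi in x)

x = (1.0, 2.0, 3.0)
y = (4.0, 5.0, 6.0)

print(vec_add(x, y))       # (5.0, 7.0, 9.0)
print(scalar_mul(2.0, x))  # (2.0, 4.0, 6.0)

# One of the scalar-multiplication laws, (c + d)x = cx + dx:
assert vec_add(scalar_mul(2.0, x), scalar_mul(3.0, x)) == scalar_mul(5.0, x)
```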

A __linear combination__ of the vectors * v_{1}, v_{2},
..., v_{m}* is a sum of the following form:

c_{1}v_{1} + c_{2}v_{2} + ... + c_{m}v_{m},

where the coefficients *c_{1}, c_{2}, ..., c_{m}* are scalars.

The vectors * v_{1}, v_{2}, ..., v_{m}* are said to
be __linearly dependent__ if some linear combination of them is equal to the zero vector,

c_{1}v_{1} + c_{2}v_{2} + ... + c_{m}v_{m} = O,

where not all of the coefficients are zero. In this case, at least one of the vectors (one with a nonzero coefficient) is equal to a linear combination of the others. For example, if *c_{1} ≠ 0*,

v_{1} = (-c_{2}/c_{1})v_{2} + (-c_{3}/c_{1})v_{3} + ... + (-c_{m}/c_{1})v_{m}.

The converse is also true. If one vector can be expressed as a linear combination of the others, the vectors are linearly dependent.

It is clear that any set of vectors containing the zero vector is linearly dependent, and that a linearly dependent set remains linearly dependent when additional vectors are appended to it.

Vectors which are not linearly dependent are said to be
__linearly independent__ (or simply __independent__).
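Linear dependence can be tested mechanically by Gaussian elimination: a list of vectors is dependent exactly when its rank is less than the number of vectors. The Python sketch below is a hypothetical illustration (exact `Fraction` arithmetic avoids floating-point rounding):

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of vectors, computed by Gaussian elimination.
    Exact Fraction arithmetic avoids floating-point rounding."""
    rows = [[Fraction(c) for c in v] for v in vectors]
    r = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        # find a row at or below r with a nonzero entry in this column
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][col] / rows[r][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def linearly_dependent(vectors):
    """Vectors are dependent iff they span fewer dimensions than their count."""
    return rank(vectors) < len(vectors)

print(linearly_dependent([(1, 0), (0, 1)]))          # False
print(linearly_dependent([(1, 2), (2, 4)]))          # True: second = 2 * first
print(linearly_dependent([(1, 0), (0, 1), (1, 1)]))  # True: three vectors in F^2
```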

The following *n* *n*-dimensional vectors are linearly independent:

e_{1}= (1, 0, 0, ..., 0)

e_{2}= (0, 1, 0, ..., 0)

e_{3}= (0, 0, 1, ..., 0)

***

e_{n}= (0, 0, 0, ..., 1)

However, this is the maximum number of linearly independent *n*-dimensional
vectors, as the following theorem shows.

**Theorem 1.1.** *Any set of n+1 or more n-dimensional vectors over
the same field is linearly dependent.*

*Proof.* It is sufficient to prove that any set of *n+1*
*n*-dimensional vectors over the same field
is linearly dependent. The proof is by induction on *n*.

For *n = 1*, the result is fairly obvious. If two vectors are both zero vectors,
they are linearly dependent. If *(a)* and *(b)* are not both zero vectors, then
the linear combination *b(a) - a(b) = (0)* shows them to be linearly dependent.

Now assume *n > 1* and let * v_{1}, v_{2}, ..., v_{n},
v_{n+1}* be any set of *n+1* *n*-dimensional vectors.

Let * w_{1}, w_{2}, ..., w_{n}, w_{n+1}* be the
*(n-1)*-dimensional vectors obtained by deleting the last component of each.
By the induction hypothesis, the first *n* of them are linearly dependent:

c_{1}w_{1} + c_{2}w_{2} + ... + c_{n}w_{n} = (0, 0, ..., 0),

where the coefficients are not all zero. Assume the vectors
have been arranged so *c _{1} ≠ 0*.

Similarly,

d_{2}w_{2}+ d_{3}w_{3}+ ... + d_{n+1}w_{n+1}= (0, 0, ..., 0),

where the coefficients are not all zero.

Now consider the corresponding linear combinations of the full vectors:

c_{1}v_{1}+ c_{2}v_{2}+ ... + c_{n}v_{n}= (0, 0, ..., 0, e),

d_{2}v_{2}+ d_{3}v_{3}+ ... + d_{n+1}v_{n+1}= (0, 0, ..., 0, f),

If *e = 0* or *f = 0* then one of these shows the *n+1* vectors to be dependent.
In other cases, we multiply the first equation by *-f/e* and add it
to the second to obtain

(-(f/e)c_{1})v_{1}+ (d_{2}-(f/e)c_{2})v_{2}+ ... + (d_{n}-(f/e)c_{n})v_{n}+ d_{n+1}v_{n+1}= (0, 0, ..., 0, 0),

which shows the *n+1* vectors to be linearly dependent, because the
first coefficient, at least, is nonzero.
█

**Lemma 1.2.** *If the vectors v_{1}, v_{2}, ..., v_{m}
are linearly independent, but the vectors
v_{1}, v_{2}, ..., v_{m}, v_{m+1} are linearly dependent,
then the additional vector v_{m+1} is a linear combination of
v_{1}, v_{2}, ..., v_{m}.*

*Proof.* By hypothesis,

b_{1}v_{1} + b_{2}v_{2} + ... + b_{m}v_{m} + b_{m+1}v_{m+1} = O,

where not all of the coefficients are zero. If *b_{m+1}* were
zero, this would reduce to a nontrivial linear combination of
* v_{1}, v_{2}, ..., v_{m}* equal to the zero vector, contrary to their
linear independence. Hence *b_{m+1} ≠ 0*, and we can solve for the additional vector:

v_{m+1} = (-b_{1}/b_{m+1})v_{1} + (-b_{2}/b_{m+1})v_{2} + ... + (-b_{m}/b_{m+1})v_{m},

which is the desired result. █

**Theorem 1.3.** *Any set of fewer than n linearly independent n-dimensional
vectors is non-maximal; i.e., another n-dimensional vector can be appended
and the set will still be linearly independent.*

*Proof.*
Assume, for purpose of contradiction, that *m < n* and
* v_{1}, v_{2}, ..., v_{m}* are a maximal set
of linearly independent *n*-dimensional vectors.

By Lemma 1.2 every *n*-dimensional
vector must be a linear combination of these vectors. In particular,

c_{1,1}v_{1}+ c_{1,2}v_{2}+ ... + c_{1,m}v_{m}= (1, 0, 0, ..., 0) (1.3.1)

c_{2,1}v_{1}+ c_{2,2}v_{2}+ ... + c_{2,m}v_{m}= (0, 1, 0, ..., 0)

***

c_{n,1}v_{1}+ c_{n,2}v_{2}+ ... + c_{n,m}v_{m}= (0, 0, 0, ..., 1)

Now let the coefficients in each row be an *m*-dimensional vector:

(c_{1,1}, c_{1,2}, ... c_{1,m})

(c_{2,1}, c_{2,2}, ... c_{2,m})

***

(c_{n,1}, c_{n,2}, ... c_{n,m})

By Theorem 1.1 these vectors are linearly dependent, so there are
coefficients *d _{1}, d_{2},..., d_{n}*, not
all zero, such that

d_{1}(c_{1,1}, c_{1,2}, ... c_{1,m}) + (1.3.2)

d_{2}(c_{2,1}, c_{2,2}, ... c_{2,m}) +

... +

d_{n}(c_{n,1}, c_{n,2}, ... c_{n,m}) = (0, 0, ..., 0)

Now multiply the *i*-th equation in (1.3.1) by *d _{i}*, add the
resulting equations, and apply (1.3.2) to
obtain:

(0, 0, ..., 0) = (d_{1}, d_{2},..., d_{n}),

which is impossible because not all of the components of the right member are zero. █

The *n*-dimensional vectors defined in Section 1 (sometimes called
__coordinate vectors__) are an example of a
more general structure called a __vector space__ over a field.
To qualify, a set of vectors must be a commutative group under
vector addition, and scalar multiplication must obey the first
four conditions given in Section 1:

- c(**x** + **y**) = c**x** + c**y**
- (c + d)**x** = c**x** + d**x**
- c(d**x**) = (cd)**x**
- 1**x** = **x**

The other two can be derived from these.

Linearly independent and linearly dependent vectors are defined in the
same manner and the same results apply.
The __dimension__ *dim(V)* of a general vector space
*V* is the maximum
number of linearly independent vectors. A general vector space need
not have a dimension. For example, the set of all sequences
*(x _{1}, x_{2}, x_{3}, ...)*,
with the obvious definitions of addition and scalar multiplication, has the following
infinite set of vectors, of which every finite subset, no matter how
large, is linearly independent:

(1, 0, 0, 0, ...)

(0, 1, 0, 0, ...)

(0, 0, 1, 0, ...)

(0, 0, 0, 1, ...)

etc.

Another vector space that has no dimension is the set of all continuous
real-valued
functions, in which addition and scalar multiplication of functions are
defined in the usual manner: *(f+g)(x) = f(x) + g(x), (cf)(x) = c f(x)*.

Vector spaces that have dimensions are said to be
__finite-dimensional__ and those that do not have dimensions
are __infinite-dimensional__.

The *n*-dimensional vectors defined in Section 1 form an *n*-dimensional
vector space under this definition. A set of *n* linearly independent
vectors has been exhibited, and Theorem 1.1 shows that there are
no sets of more than *n* linearly independent vectors.

The zero-dimensional vector space, which consists of a single zero vector, is not included in Section 1, but it is needed to avoid inelegant exceptions to some results.

A __basis__ for a vector space is a finite set of vectors such that
every vector in the space can be expressed uniquely as a linear combination
of the vectors in the basis. For example, the following *n*-dimensional
vectors are a basis, which is usually called the __canonical basis__:

e_{1}= (1, 0, 0, ..., 0)

e_{2}= (0, 1, 0, ..., 0)

e_{3}= (0, 0, 1, ..., 0)

***

e_{n}= (0, 0, 0, ..., 1)

If a vector space does have a dimension *n*, then every set of
*n* linearly
independent vectors is a basis. Moreover, the basis
* v_{1}, v_{2}, ..., v_{n}* establishes an
isomorphism between the vector space and the *n*-dimensional
coordinate space:

c_{1}v_{1} + c_{2}v_{2} + ... + c_{n}v_{n} <-> (c_{1}, c_{2}, ..., c_{n})

This provides a somewhat more elegant statement of the proof of Theorem 1.3.
If the *m* vectors were maximal, they would provide an isomorphism between vector
spaces of different dimensions.

These results can be summarized in a formal theorem:

**Theorem 2.1.** *In a vector space of positive finite dimension n,
every set of n linearly independent vectors is a basis, every set of
fewer than n linearly independent vectors is a proper subset of a basis, and
every basis contains n linearly independent vectors.*

A __subspace__ of a vector space is a subset that is a vector space
over the same field with the same operations. Hence if * x* and
* y* are in the subspace, then so are * x + y* and *c x* for every scalar *c*.

Some properties of subspaces are fairly obvious:

- Any subspace of a finite-dimensional vector space is finite-dimensional.
- The dimension of a proper subspace is less than the dimension of the whole space.
- The intersection of two subspaces is a subspace.

Given a set *S* of vectors in a vector space (which may or may not
have a dimension), the set of all linear combinations of the vectors
in *S* constitutes a subspace, called the __linear span__ of *S*,
or the
subspace __spanned__ by *S*.

**Theorem 2.1** *If a subset S of a vector space has a
maximum number n of linearly independent vectors, then its linear span
has dimension n.*

*Proof.* Any vector in *S* can be expressed as a linear combination
of *n* linearly independent vectors in *S*. Clearly a linear combination
of vectors in *S* can, by combining like terms, be expressed as
a linear combination of the same *n* vectors. Since the vectors
are linearly independent, the representation is unique. Hence the linear
span is isomorphic to the space of *n*-dimensional vectors
defined in Section 1, and its dimension is *n*.
█

**Theorem 2.2** *If two subspaces S and T have dimensions,
then the dimension of the subspace spanned by their union is
dim(S) + dim(T) - dim(S ∩ T)*.

*Proof.* Start with a basis for *S ∩ T*
and extend it to bases for *S* and *T*. The union of the two
extended bases has the required number of vectors, and their linear span
is the subspace spanned by *S* ∪ *T*. We must
show that they are linearly independent. Take a linear combination of
the vectors in the union that adds up to zero and write it as * O = x + s + t*,
where * x* is the part involving the basis of *S ∩ T*, * s* the part
involving the vectors added for *S*, and * t* the part involving the vectors
added for *T*. Then * t = -( x + s)* lies in *S* as well as in *T*, hence in
*S ∩ T*. But * t* is also a linear combination of the vectors added for *T*,
so the linear independence of the extended basis of *T* forces * t = O* with
all of its coefficients zero. Then * O = x + s* is a linear combination of the
extended basis of *S*, so the remaining coefficients are also zero. █
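The dimension formula can be checked numerically: the dimension of a span is the rank of the matrix whose rows are the spanning vectors. A Python sketch (illustrative only; the subspaces below are assumptions chosen so that *dim(S ∩ T)* is known by construction):

```python
from fractions import Fraction

def rank(vectors):
    """Rank via exact Gaussian elimination (Fraction arithmetic)."""
    rows = [[Fraction(c) for c in v] for v in vectors]
    r = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][col] / rows[r][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

S = [(1, 0, 0), (0, 1, 0)]   # a plane in F^3
T = [(0, 1, 0), (0, 0, 1)]   # another plane; S ∩ T is the line spanned by (0, 1, 0)

dim_S, dim_T = rank(S), rank(T)
dim_span_union = rank(S + T)  # dimension of the subspace spanned by S ∪ T
dim_intersection = 1          # known by construction

print(dim_span_union == dim_S + dim_T - dim_intersection)  # True
```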

**Corollary 2.3** *If two subspaces S and T of V have
dim(S) + dim(T) > dim(V), then S and T have a nonzero vector in common.*

The zero-dimensional vector space is not necessarily an exception; most of the definitions can be stretched to accommodate it. Its basis is empty, and an empty linear combination always evaluates to the zero vector.

The __direct sum__ of two vector spaces *R* and
*S* over the same field is the set of ordered
pairs *R ⨯ S*, where
addition and scalar multiplication are defined as follows:

(r_{1},s_{1}) + (r_{2},s_{2}) = (r_{1}+r_{2},s_{1}+s_{2}),

c (r_{1},s_{1}) = (cr_{1},cs_{1})

The direct sum is usually written as
*R ⊕ S*. It is easily
shown that the direct sum is an associative and commutative operation,
in the sense that *R ⊕ S*
is isomorphic to *S ⊕ R*
and *(R ⊕ S) ⊕ T*
is isomorphic to
*R ⊕ (S ⊕ T)*,
and that *dim(R ⊕ S) = dim(R) + dim(S)*.

Although any two distinct vector spaces over the same field have a direct sum,
two subspaces of the same space have a direct sum only if they have
only the zero vector in common. In this case *R ⊕ S*
is the subspace consisting of all vectors of the form
*r+s* where
*r ∈ R* and
*s ∈ S*. It is easily shown
that direct sums formed in this way are isomorphic to those formed from
distinct vector spaces.

Since the direct sum is associative, direct sums of three or more spaces
can be built up from direct sums of two spaces; e.g.,
*R ⊕ S ⊕ T =
(R ⊕ S) ⊕ T =
R ⊕ (S ⊕ T)*.

Equivalently, the direct sum
*V = S _{1} ⊕
S_{2} ⊕ ... ⊕
S_{m}* of three or more subspaces can be defined directly
if every element of *V* can be expressed uniquely as a sum
*s_{1} + s_{2} + ... + s_{m}*, where *s_{i} ∈ S_{i}* for each *i*.

Direct sums are actually a generalization of the concept of bases.
Subspaces *S _{1}, S_{2}, ..., S_{m}*,
all of nonzero dimension, are
linearly dependent if there is a set of nonzero vectors, one taken from each
subspace, that is linearly dependent; otherwise the subspaces are linearly
independent, and in that case their sum is a direct sum.
Observations of this kind, which are fairly easy to prove, are often referred to as "dimensionality arguments" without elaboration.

One-dimensional, two-dimensional and three-dimensional vectors over the field of real numbers have a geometric interpretation. Reasoning by analogy, we can extend many geometric properties to four or more dimensions.

The one-dimensional vector *(x)* is associated with a point on a straight
line *x* units to the right of the origin if
*x ≥ 0*, or
*-x* units to the left
if *x < 0*.

The two-dimensional vector *(x _{1}, x_{2})* is associated with
the point in a two-dimensional coordinate system whose abscissa and ordinate
are *x_{1}* and *x_{2}*, respectively.

Similarly, the three-dimensional vector
*(x _{1}, x_{2},
x_{3})* is associated with
the point in a three-dimensional coordinate system whose coordinates
are *x_{1}*, *x_{2}* and *x_{3}*.

Spaces with four or more dimensions are defined in the same way. In some applications, a vector is thought of, not as a point, but as a line running from the origin to the point.

Let *S* be a subspace of a vector space and let * p*
be a point in the space (but not necessarily in the subspace).
The set * p + S* of all vectors of the form * p + s*, where * s ∈ S*,
is called a __hyperplane__; its dimension is the dimension of *S*.

Another way to define a hyperplane * p + S* is to say that
it contains all vectors * x* such that * x - p ∈ S*.

A hyperplane that passes through the origin is a subspace of the same dimension.

The subspace *S* in the representation of a hyperplane
* p + S* is
unique, but the vector * p* is not; any vector in the hyperplane can
play the role of * p*.

Using dimensionality arguments, we can prove some familiar properties of points, lines and planes and extend them to higher dimensions.

A useful technique in geometry is __translation__, a one-to-one mapping of
the form *f( x) = x + t* for some constant
vector * t*. A translation carries every hyperplane to a hyperplane of the
same dimension.

It is well known that two distinct points determine a line, and that three points that do not lie on the same straight line determine a plane.

In general, *m+1* points that do not lie on the same
*(m-1)*-dimensional hyperplane determine a unique
*m*-dimensional hyperplane. To prove this, we first translate
the points so one of them lies at the origin. This reduces the
problem to a simple property of subspaces: *m* points that do not
lie in the same
*(m-1)*-dimensional subspace (i.e., that are linearly independent)
determine a unique *m*-dimensional subspace.

Of special interest are *(n-1)*-dimensional hyperplanes in *n*-dimensional
space (lines in two-dimensional space, planes in three-dimensional space,
etc.). Two such hyperplanes are (1) identical, (2) parallel (disjoint),
or (3) their intersection is an *(n-2)*-dimensional hyperplane.

To prove this, let * p + S* and * q + T* be the two hyperplanes.
If *S = T*, the hyperplanes are either identical or disjoint (parallel).
If *S ≠ T*, then the subspace spanned by *S ∪ T* is the whole space,
so *dim(S ∩ T) = n-2* by Theorem 2.2. In this case * q - p* can be written
as the sum of a vector in *S* and a vector in *T*, so the two hyperplanes
have a common point * r*, and their intersection is the *(n-2)*-dimensional
hyperplane * r + (S ∩ T)*.

If x and y are two linearly independent vectors,
then * O, x, y*
and * x + y* are the vertices of a parallelogram.

If * x* and * y* are two *n*-dimensional vectors over the field of real
numbers, their __inner product__ (also called the __dot product__)
is defined as follows:

x = (x_{1}, x_{2}, ..., x_{n}),

y = (y_{1}, y_{2}, ..., y_{n}),

x ∙ y = x_{1}y_{1} + x_{2}y_{2} + ... + x_{n}y_{n}.

An alternate notation for the inner product is *( x, y)*.

The following properties of the inner product are easily verified, where
* x, y* and * z* are any vectors and *c* is any scalar:

- **x ∙ y** = **y ∙ x**
- **x** ∙ (**y** + **z**) = **x ∙ y** + **x ∙ z**
- **x** ∙ c**y** = c(**x ∙ y**)
- **x ∙ x** ≥ 0, with equality only when **x** = **O**

Any operation on a vector space over the field of real numbers that has
these properties is called
an inner product, and a space equipped with one is usually called an
__inner product space__. The one defined for *n*-dimensional vectors is not
the only possible inner product. (For example, *2( x ∙ y)* is another
possibility.)

The __norm__ (or __length__) of a vector * x* is
represented by ∥**x**∥ and is defined as the principal square root of * x ∙ x*.

Some important properties of the norm are as follows, where * x* and * y* are any
vectors and *c* is any scalar:

- ∥**x**∥ *≥ 0*, with equality only when **x** = **O**
- ∥*c***x**∥ *=* ∣*c*∣ ∥**x**∥
- ∥**x** + **y**∥ *≤* ∥**x**∥ + ∥**y**∥, with equality only when **x** and **y** are linearly dependent

Any function on a vector space over the field of real numbers that has
these properties is called a norm, although we shall use only the one
derived from the inner product. Because it conforms to the notion of
distance in Euclidean geometry, it is often called the __Euclidean
norm__.

The first two properties are fairly obvious; it is the third one that requires
a detailed proof. The following theorem is called the __Cauchy-Schwarz
Inequality__ (or the __Cauchy-Bunyakovski-Schwarz Inequality__,
or the __CBS Inequality__). It is a fundamental theorem in a number
of branches of mathematics.

**Theorem 4.1** *For any two vectors x and y,
∣x ∙ y∣ ≤ ∥x∥ ∥y∥, with equality only when x and y are linearly dependent.*

*Proof.* The assertion is obvious if * x* and * y* are linearly dependent
(in particular, if either is the zero vector). If they are linearly
independent, then *( x ∙ x) y - ( x ∙ y) x* is nonzero, so its inner
product with itself is strictly positive:

[(x ∙ x)y - (x ∙ y)x] ∙ [(x ∙ x)y - (x ∙ y)x] > 0.

We use the properties of inner products to multiply out the left member:

(x ∙ x)^{2}(y ∙ y) - 2 (x ∙ x)(x ∙ y)^{2}+ (x ∙ y)^{2}(x ∙ x) > 0.

We combine like terms:

(x ∙ x)^{2}(y ∙ y) - (x ∙ x)(x ∙ y)^{2}> 0.

We divide both terms by *( x ∙ x)*:

(x ∙ x)(y ∙ y) - (x ∙ y)^{2} > 0.

We move the second term to the right side:

(x ∙ x)(y ∙ y) > (x ∙ y)^{2}.

Taking the principal square root of each side produces the desired result. █
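The inequality is easy to sanity-check numerically. The Python sketch below (illustrative only; helper names are hypothetical) tests random vectors and confirms equality for a linearly dependent pair:

```python
import math
import random

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(5)]
    y = [random.uniform(-10, 10) for _ in range(5)]
    # |x.y| <= ||x|| ||y||, with a tiny tolerance for rounding
    assert abs(dot(x, y)) <= norm(x) * norm(y) + 1e-9

# Equality for a linearly dependent pair, e.g. y = 3x:
x = [1.0, 2.0, 3.0]
y = [3.0, 6.0, 9.0]
print(math.isclose(abs(dot(x, y)), norm(x) * norm(y)))  # True
```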

We are now ready to establish the third property of norms, starting with the Cauchy-Schwarz inequality for linearly independent vectors:

∣x ∙ y∣ < ∥x∥ ∥y∥.

Since * x ∙ y* ≤ ∣* x ∙ y*∣, this implies

x ∙ y < ∥x∥ ∥y∥.

Multiply by 2 and add some extra terms:

x ∙ x + 2(x ∙ y) + y ∙ y < x ∙ x + 2∥x∥ ∥y∥ + y ∙ y.

Factor each member:

(x + y) ∙ (x + y) < (∥x∥ + ∥y∥)^{ 2}.

Then take the principal square root of each side to obtain the desired result:

∥x + y∥ < ∥x∥ + ∥y∥.

The result for linearly dependent vectors is easy to prove.

This result is often called the __triangle inequality__.
If the vectors * x, y* and * x + y* are interpreted as the sides of a
triangle, it asserts that the length of one side is at most the sum of
the lengths of the other two sides.

Two vectors * x* and * y* are said to be __orthogonal__ (or
__perpendicular__) if * x ∙ y = 0*.

We don't have a definition of angles yet, but this will surely be
the case
if the distance from *- x* to * y* is the same as the distance from
* x* to * y*:

∥-x - y∥ = ∥x - y∥.

We can square each side to obtain an equivalent equation:

∥-x - y∥^{ 2} = ∥x - y∥^{ 2}.

Using the definition of the norm and the properties of inner products, we can perform some obvious algebraic manipulations:

(-x - y) ∙ (-x - y) = (x - y) ∙ (x - y),

x ∙ x + 2(x ∙ y) + y ∙ y = x ∙ x - 2(x ∙ y) + y ∙ y,

2(x ∙ y) = -2(x ∙ y),

which holds if and only if * x ∙ y = 0*.

A set of vectors * v_{1}, v_{2}, ...,
v_{n}* in a vector space over the field of real numbers
is called __orthogonal__ if every two distinct vectors in the set are
orthogonal; i.e., * v_{i} ∙ v_{j} = 0* whenever *i ≠ j*.

The canonical basis vectors are an orthogonal set:

e_{1}= (1, 0, 0, ..., 0)

e_{2}= (0, 1, 0, ..., 0)

e_{3}= (0, 0, 1, ..., 0)

***

e_{n}= (0, 0, 0, ..., 1)

It is easy to show that nonzero orthogonal vectors are linearly independent. Suppose that

c_{1}v_{1} + c_{2}v_{2} + ... + c_{n}v_{n} = O.

Take the inner product of both sides with * v_{i}*:

v_{i}∙(c_{1}v_{1}+ c_{2}v_{2}+ ... + c_{n}v_{n}) = 0.

Then apply the properties of inner products to obtain:

c_{1}v_{i}∙v_{1}+ c_{2}v_{i}∙v_{2}+ ... + c_{i}v_{i}∙v_{i}+ ... + c_{n}v_{i}∙v_{n}= 0

Because the vectors are orthogonal, all terms but one vanish:

c_{i}(v_{i}∙v_{i}) = 0.

Since * v_{i}* is nonzero, this implies that *c_{i} = 0*.

Hence nonzero orthogonal vectors constitute a special kind of basis
for their linear span, which is called an __orthogonal basis__.

Every basis can be converted to an orthogonal basis for the same
subspace by a procedure called the __Gram-Schmidt Process__.
We describe this process by induction on the number *n* of vectors.

For *n=1*, the process is vacuous; a single nonzero vector
is an orthogonal set.

For higher values of *n*, use the process with *n-1*
vectors to create an orthogonal basis * w_{1}, w_{2}, ...,
w_{n-1}* for the linear span of the first *n-1* vectors.

Let * v_{n}* be the remaining vector, and define

w_{n}=v_{n}- [(v_{n}∙w_{1})/(w_{1}∙w_{1})]w_{1}- [(v_{n}∙w_{2})/(w_{2}∙w_{2})]w_{2}- ... - [(v_{n}∙w_{n-1})/(w_{n-1}∙w_{n-1})]w_{n-1}.

Then it is easily shown that * w_{1}, w_{2}, ...,
w_{n}* is the desired orthogonal basis.
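The process translates directly into code. Below is a minimal Python sketch of classical Gram-Schmidt following the subtraction formula above (the helper names are hypothetical, and floats are assumed):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: turn a basis into an orthogonal basis
    for the same span, via w_n = v_n - sum_i [(v_n.w_i)/(w_i.w_i)] w_i."""
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            coef = dot(v, u) / dot(u, u)   # (v . w_i) / (w_i . w_i)
            w = [wi - coef * ui for wi, ui in zip(w, u)]
        basis.append(w)
    return basis

w = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(dot(w[i], w[j])) < 1e-12   # pairwise orthogonal
print(w[0])  # [1.0, 1.0, 0.0] — the first vector is unchanged
```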

An orthogonal basis in which every basis vector is of unit length (such as
the canonical basis noted above) is called an __orthonormal__ basis.
Any orthogonal basis can be converted to an orthonormal basis by
replacing each vector * v* by the vector * v*/∥**v**∥.

Orthonormal bases are especially useful because it is easy to express any vector in terms of the basis. For example, suppose that

x= c_{1}w_{1}+ c_{2}w_{2}+ ... + c_{n}w_{n}.

Take the inner product of each side with * w_{k}* to
obtain

w_{k}∙x= c_{1}(w_{k}∙w_{1}) + c_{2}(w_{k}∙w_{2}) + ... + c_{k}(w_{k}∙w_{k}) + ... + c_{n}(w_{k}∙w_{n}).

With the orthonormality properties, this reduces to

w_{k}∙x= c_{k}.

Therefore,

x= (w_{1}∙x)w_{1}+ (w_{2}∙x)w_{2}+ ... + (w_{n}∙x)w_{n}.
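A short Python sketch illustrates this expansion with an assumed orthonormal basis of the plane (a 45° rotation of the canonical basis; all names are illustrative):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# An assumed orthonormal basis of R^2: the canonical basis rotated 45 degrees.
s = 1 / math.sqrt(2)
w1 = (s, s)
w2 = (-s, s)

x = (3.0, 1.0)

# The coefficients are just inner products with the basis vectors:
c1, c2 = dot(w1, x), dot(w2, x)

# Reconstruct x = c1*w1 + c2*w2 componentwise:
rebuilt = tuple(c1 * a + c2 * b for a, b in zip(w1, w2))
print(all(math.isclose(u, v) for u, v in zip(rebuilt, x)))  # True
```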

Moreover, the inner product of two vectors expressed in terms of an orthonormal basis is

(c_{1}w_{1} + c_{2}w_{2} + ... + c_{n}w_{n}) ∙ (d_{1}w_{1} + d_{2}w_{2} + ... + d_{n}w_{n}) = c_{1}d_{1} + c_{2}d_{2} + ... + c_{n}d_{n},

because all cross terms vanish and * w_{k} ∙ w_{k} = 1* for every *k*.

If *S* is a subspace of an *n*-dimensional vector space, then the
__orthogonal complement__ *S*^{ ⊥} is the set of all vectors that are
orthogonal to every vector in S. The following properties of orthogonal
complements are fairly easy to prove:

- *S*^{ ⊥} is a subspace.
- *dim(S) + dim(S*^{ ⊥}*) = n*.
- The whole vector space is the direct sum of *S* and *S*^{ ⊥}.
- *S*^{ ⊥⊥} *= S*.

Inner products and norms can be defined over more general real vector spaces.
For example, consider the set of all continuous real-valued functions over some
closed interval [a,b],
in which addition and scalar multiplication of functions are
defined in the usual manner: *(f+g)(x) = f(x) + g(x), (cf)(x) = c f(x)*.
The inner product can be defined as

f∙g = ∫_{a}^{b}f(x) g(x) dx.

If *a = 0* and
*b = 2π*, then the functions
*sin(x), sin(2x), sin(3x), ...* and
*1, cos(x), cos(2x), cos(3x), ...* are orthogonal.
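This orthogonality can be checked numerically by approximating the integral with a Riemann sum. A Python sketch (midpoint rule, with a loose tolerance; the helper name is hypothetical):

```python
import math

def inner(f, g, a, b, n=100_000):
    """Approximate the inner product of f and g: the integral of
    f(x) g(x) over [a, b], by the midpoint rule with n subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n))

a, b = 0.0, 2 * math.pi

sin1 = lambda t: math.sin(t)      # sin(x)
sin2 = lambda t: math.sin(2 * t)  # sin(2x)
cos1 = lambda t: math.cos(t)      # cos(x)

print(abs(inner(sin1, sin2, a, b)) < 1e-6)            # True: orthogonal
print(abs(inner(sin1, cos1, a, b)) < 1e-6)            # True: orthogonal
print(abs(inner(sin1, sin1, a, b) - math.pi) < 1e-6)  # True: integral of sin^2 is pi
```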

Inner products are usually defined over real vector spaces, but they can
also be defined over complex vector spaces, with some modifications.
If * u* and * v* are two *n*-dimensional complex vectors, their inner
product is defined as follows, where the bar denotes the complex conjugate:

u = (u_{1}, u_{2}, ..., u_{n}),

v = (v_{1}, v_{2}, ..., v_{n}),

u ∙ v = ū_{1}v_{1} + ū_{2}v_{2} + ... + ū_{n}v_{n}.

The following properties of the complex inner product are easily verified, where
* x, y* and * z* are any vectors and *e* is any complex scalar (ē is its complex conjugate):

- **x ∙ y** and **y ∙ x** are complex conjugates of each other
- **x** ∙ (**y** + **z**) = **x ∙ y** + **x ∙ z**
- (**y** + **z**) ∙ **x** = **y ∙ x** + **z ∙ x**
- **x** ∙ e**y** = e(**x ∙ y**)
- e**x** ∙ **y** = ē(**x ∙ y**)
- **x ∙ x** ≥ 0, with equality only when **x** = **O**
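A brief Python sketch of the complex inner product, using Python's built-in complex numbers (the helper name `cdot` is hypothetical):

```python
def cdot(u, v):
    """Complex inner product: conjugate the components of the first vector."""
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

u = [1 + 2j, 3 - 1j]
v = [2 - 1j, 0 + 1j]

# u.v and v.u are complex conjugates of each other:
print(cdot(u, v))   # (-1-2j)
print(cdot(v, u))   # (-1+2j)
assert cdot(u, v) == cdot(v, u).conjugate()

# x.x is real and nonnegative: |1+2j|^2 + |3-1j|^2 = 5 + 10
print(cdot(u, u))   # (15+0j)
```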

We wish to define an angle between two lines in a way that is consistent
with Euclidean geometry in two and three dimensions. Consider the
angle *A* formed by the point * x*, the origin and
the point * y*.

We first drop a perpendicular from * y* to the line
joining the origin to * x*. The foot of the perpendicular is *p x*
for some scalar *p*, and perpendicularity determines *p*:
x∙(y- px) = 0,

x∙y- px∙x= 0,

p = (x∙y) / (x∙x).

If the angle is acute, *p > 0*, and the angle is given by

cos(A) = ∥px∥ / ∥y∥ = p ∥x∥ / ∥y∥ = [(x ∙ y) / (x ∙ x)] (∥x∥ / ∥y∥),

cos(A) = (x ∙ y) / (∥x∥ ∥y∥).

It can be shown that this formula also applies if the angle is a right angle or an obtuse angle.
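The final formula translates directly into code. A Python sketch (the clamp guards against rounding slightly outside [-1, 1]; helper names are hypothetical):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

def angle(x, y):
    """Angle in radians between the lines from the origin to x and to y,
    via cos(A) = (x.y) / (||x|| ||y||)."""
    c = dot(x, y) / (norm(x) * norm(y))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding drift

print(round(math.degrees(angle((1, 0), (0, 1)))))   # 90  (right angle)
print(round(math.degrees(angle((1, 0), (1, 1)))))   # 45  (acute)
print(round(math.degrees(angle((1, 0), (-1, 1)))))  # 135 (obtuse)
```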