Module 4 -- Vector Spaces -- Mathematics - 1A


Index

  1. Vector Spaces
  2. The 8 Axioms of a Vector Space
  3. Linear Dependence
  4. Basis of a Vector Space
  5. Dimension of a Vector Space
  6. How to check if a set is a Basis
  7. Linear Transformations (Maps)
  8. Range and Kernel of a Linear Map
  9. Rank and Nullity
  10. Rank-Nullity Theorem
  11. Inverse of a Linear Transformation
  12. Composition of Linear Maps

Vector Spaces

What is a Vector Space?

A vector space (also called a linear space) is a collection of objects called vectors, which you can add together and multiply by scalars (numbers) while staying inside the collection.

But not every set of objects is a vector space. The set must satisfy specific rules, called the axioms of vector spaces.


Formal Definition

Let V be a set, and let F be a field (like the set of real numbers R or complex numbers C). V is a vector space over F if:

For any u, v, w ∈ V and a, b ∈ F, the following must hold:

The 8 Axioms of a Vector Space

  1. Associativity of addition: (u + v) + w = u + (v + w)
  2. Commutativity of addition: u + v = v + u
  3. Existence of zero vector: there is a vector 0 ∈ V such that v + 0 = v for all v.
  4. Existence of additive inverses: for every v, there is a vector −v such that v + (−v) = 0.
  5. Compatibility of scalar multiplication: a(bv) = (ab)v
  6. Identity element of scalar multiplication: 1v = v
  7. Distributivity of scalar multiplication w.r.t. vector addition: a(u + v) = au + av
  8. Distributivity w.r.t. scalar addition: (a + b)v = av + bv

Examples of Vector Spaces

  1. Euclidean Space (Rn): Ordinary vectors you are familiar with, like (x,y,z), are elements of R3.
  2. Space of Polynomials: The set of all polynomials of degree ≤n forms a vector space.
  3. Matrices: The set of all m×n matrices (with real entries) forms a vector space.
  4. Functions: The set of all real-valued continuous functions on [0, 1] is a vector space (addition and scalar multiplication are done pointwise).

Common Confusions


Linear Dependence

If you studied the concept of Linear Independence in Module 3, you already know the idea: linear dependence of a set of vectors is the exact opposite of linear independence.

Still, I will go ahead and provide the formal definition.

Let's say you have a set of vectors v1, v2, …, vn and you form the linear combination:

c1v1 + c2v2 + ⋯ + cnvn = 0

where the ci are the scalars and 0 is the zero vector.

The vectors are linearly independent if and only if the only solution to this equation is the trivial one, i.e., all the scalars ci are zero.

The vectors are linearly dependent if there is at least one non-zero solution to this equation.
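The notes don't include any code; as an illustrative sketch only, the definition above can be checked numerically: stack the vectors as columns of a matrix, and the set is linearly dependent exactly when the matrix rank is less than the number of vectors (the helper name `is_linearly_dependent` is my own).

```python
import numpy as np

def is_linearly_dependent(vectors):
    # Columns of A are the given vectors; rank < number of vectors
    # means some non-trivial combination c1v1 + ... + cnvn = 0 exists.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) < len(vectors)

# (1, 2) and (2, 4) are scalar multiples, hence dependent.
print(is_linearly_dependent([np.array([1, 2]), np.array([2, 4])]))   # True
# (1, 0) and (0, 1) are independent.
print(is_linearly_dependent([np.array([1, 0]), np.array([0, 1])]))   # False
```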


Basis of a Vector Space

A basis of a vector space V is a set of vectors that:

  1. Spans the vector space (fills up the vector space)
  2. Is linearly independent

In short:
A basis is a minimal, non-redundant set of building blocks that can generate the entire space.

Formal Definition:

If B = {b1, b2, …, bn} is a set of vectors in a vector space V, then B is a basis if every vector v ∈ V can be written uniquely as:

v = a1b1 + a2b2 + ⋯ + anbn

for some scalars a1, …, an.

Why Do We Care About Bases?


Examples

1. Standard Basis of R2:

{(1, 0), (0, 1)}

Any vector (x, y), for example

(2, 5)

can be written as:

x(1, 0) + y(0, 1); here, (2, 5) = 2(1, 0) + 5(0, 1)

2. Standard Basis of R3:

{(1, 0, 0), (0, 1, 0), (0, 0, 1)}

Every (x, y, z) can be uniquely written as x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1).


Non-Standard Basis

For R2, the set {(1, 1), (1, −1)} is also a basis: the two vectors are linearly independent, so they span all of R2.


How to check if a set is a Basis

No worries if you didn't understand the jargon above.

Suppose you’re given a set of vectors:

If you have n vectors in Rn and they are linearly independent, then they form a basis for Rn.


Quick example to understand this.

Let's check if the set:

{(2, 1), (3, 5)}

is a basis for R2.

Since we have two vectors in two dimensions, let's put the vectors together as the columns of a single 2×2 matrix.

A = [2 3; 1 5]  (rows separated by semicolons)

Since this is a square matrix, we just need to check whether the determinant is non-zero.

det(A) = 2×5 − 3×1 = 10 − 3 = 7 ≠ 0

So we don't need to go further into row-echelon form to check independence; the determinant settles it.

Now, since the determinant is non-zero, these vectors are linearly independent, which means they span the space (fill up the space).

Conclusion

The set {(2, 1), (3, 5)} is linearly independent and contains 2 vectors, so it is a basis for R2.

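As a quick sanity check, the determinant test above can be sketched in NumPy (this code is an illustration, not part of the original notes):

```python
import numpy as np

# Candidate basis vectors (2, 1) and (3, 5) as the columns of A.
A = np.array([[2, 3],
              [1, 5]], dtype=float)

det = np.linalg.det(A)
print(det)  # approximately 7.0: non-zero, so the columns are independent

# Two linearly independent vectors in R^2 form a basis.
is_basis = not np.isclose(det, 0.0)
print(is_basis)  # True
```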

Dimension of a Vector Space

What Is Dimension?

The dimension of a vector space is the number of vectors in any basis for that space.


Why Is Dimension Important?


Examples

1. Ordinary Euclidean spaces: Rn has dimension n (for example, R2 has dimension 2 and R3 has dimension 3).

2. Set of all polynomials of degree ≤ 3:

Any polynomial a0 + a1x + a2x2 + a3x3 can be written using 4 “basic” polynomials: 1,x,x2,x3.
So, its dimension is 4.

3. Space of m×n matrices (with real entries): its dimension is mn, one basis matrix for each entry position.

How to Find the Dimension

  1. Find a basis for the vector space.
  2. Count the number of vectors in the basis.

For sets of vectors (like columns of a matrix), the maximum number of linearly independent vectors is the dimension of the span of those vectors (this is also the rank of the matrix—something you already know).
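This connection between dimension and rank can be sketched numerically (illustrative code only, not from the notes; the matrix here is a hypothetical example):

```python
import numpy as np

# Three vectors in R^2 as the columns of a matrix:
# (1, 4), (2, 5), (3, 6).
vectors = np.array([[1, 2, 3],
                    [4, 5, 6]], dtype=float)

# The dimension of the span of the columns equals the rank.
dim_of_span = np.linalg.matrix_rank(vectors)
print(dim_of_span)  # 2: the columns span a 2-dimensional space (all of R^2)
```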


Key Insights

Dimension is a powerful tool:


Working example

From the last example:

{(2, 1), (3, 5)}

We found that these two vectors form a basis, so to find the dimension of the space they span, we just count the number of vectors.

So the dimension of the spanned vector space = 2.


Linear Transformations (Maps)

A linear transformation (or linear map) is a function between two vector spaces that preserves the operations of vector addition and scalar multiplication.

So basically what it does is convert or map the vectors from one vector space to another.


Formally:

If T: V → W is a function (where V and W are vector spaces), then T is a linear transformation if and only if, for all u, v ∈ V and all scalars c:

  1. T(u + v) = T(u) + T(v)  (additivity)
  2. T(cu) = cT(u)  (homogeneity)

In simpler terms:
A transformation is linear if the image of a sum is the sum of images, and it also “respects” scaling.

In simple terms, this means that a conversion (transformation) from one vector space to another is linear only if the rules of addition and scalar multiplication are preserved.


Examples

1. Matrix Multiplication

If A is an m×n matrix, then T(x) = Ax is a linear transformation from Rn to Rm.

2. Zero Map

T(v)=0 for all v is always linear.

3. Differentiation

The operation D(f) = f′ (the derivative of a function) is a linear transformation on many function spaces.

4. Integration

For continuous functions on [0, 1]:

T(f) = ∫₀¹ f(x) dx

is a linear transformation from the function space to R.


Non-Example (Not Linear)

For instance, T(x) = x + 1 (a translation) is not linear: T(0) = 1 ≠ 0, and T(u + v) = u + v + 1, while T(u) + T(v) = u + v + 2.


Properties and Notation


Why Are Linear Transformations Important?


A practical example.

Suppose we have this matrix:

A = [2 3; 1 4]

And we define the transformation T: R2 → R2 as:

T(x) = Ax

where x is a column vector in R2.

Now let's check if this transformation is linear or not.

Let

u = (1, 2), v = (4, 3)

1. Check Additivity:

Is T(u + v) = T(u) + T(v)?

a) Compute u+v

u + v = (1, 2) + (4, 3) = (5, 5)

b) Compute T(u+v)

T(u + v) = A(5, 5) = (2×5 + 3×5, 1×5 + 4×5) = (10 + 15, 5 + 20) = (25, 25)

c) Compute T(u) and T(v) separately

T(u) = A(1, 2) = (2×1 + 3×2, 1×1 + 4×2) = (8, 9)

T(v) = A(4, 3) = (2×4 + 3×3, 1×4 + 4×3) = (17, 16)

T(u) + T(v) = (8, 9) + (17, 16) = (25, 25)

So additivity is preserved.


2. Check Scalar Multiplication

Is T(cu) = cT(u) ?

Let's use c = 5

a) Compute cu:

5(1, 2) = (5, 10)

b) Compute T(cu):

T(cu) = A(5, 10) = (2×5 + 3×10, 1×5 + 4×10) = (40, 45)

c) Compute 5T(u):

T(u) = (8, 9)

5T(u) = 5(8, 9) = (40, 45)

Thus scalar multiplication is preserved as well.

So this transformation: T(x) = Ax is linear.
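The two checks above can also be run numerically; here is a minimal NumPy sketch of the same computation (illustrative only, not part of the notes):

```python
import numpy as np

A = np.array([[2, 3],
              [1, 4]])

def T(x):
    return A @ x  # T(x) = Ax

u = np.array([1, 2])
v = np.array([4, 3])
c = 5

# Additivity: T(u + v) == T(u) + T(v)
print(np.array_equal(T(u + v), T(u) + T(v)))  # True
# Homogeneity: T(c*u) == c*T(u)
print(np.array_equal(T(c * u), c * T(u)))     # True
print(T(u + v))  # [25 25], matching the hand computation
```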


Range and Kernel of a Linear Map

What is the Kernel (Null Space)?

The kernel (or null space) of a linear transformation T: V → W is the set of all vectors in V that map to the zero vector in W:

Kernel(T) = {v ∈ V | T(v) = 0}

What is the Range (Image)?

The range (or image) of a linear transformation T: V → W is the set of all possible outputs of T:

Range(T) = {T(v) | v ∈ V}

Matrix example

Let's revisit the previous example:

We had the transformation:

T(x) = Ax

and the matrix:

A = [2 3; 1 4]

Step 1: Find the Kernel.

To find the kernel of a linear transformation, set the transformation equal to zero:

Ax = 0

Let:

x = (x1, x2)

So,

[2 3; 1 4] (x1, x2) = (0, 0)

Converting that to a system of linear equations:

2x1 + 3x2 = 0  (i)
x1 + 4x2 = 0  (ii)

From equation (ii):

x1 = −4x2

Substituting this into equation (i):

2(−4x2) + 3x2 = −5x2 = 0  ⟹  x2 = 0

So x1 = 0 as well.

So the kernel of this transformation is just the zero vector (0, 0), meaning the transformation is one-to-one (injective).

This doesn't mean the kernel always has to be only the zero vector; in general, the kernel is the set of all vectors in V that map to the zero vector in W, and that set always contains the zero vector of V itself.


Range Example:

The range is the set of all vectors you can get as Ax. For a 2×2 matrix with nonzero determinant (here det(A) = 2×4 − 3×1 = 5 ≠ 0), the transformation is onto (surjective), so the range is all of R2.

If the matrix had less than full rank (e.g., if the rows were multiples of each other), the range would be a line instead of the entire plane.

TL;DR: the range is, as the name implies, the set of all outputs T(x) as x runs over every vector in the domain.
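The kernel and range facts above can be checked numerically; this is an illustrative NumPy sketch (the rank-deficient matrix B is a hypothetical contrast example, not from the notes):

```python
import numpy as np

A = np.array([[2, 3],
              [1, 4]], dtype=float)

# Nonzero determinant => only x = 0 solves Ax = 0,
# so the kernel is {0} and the range is all of R^2.
print(np.linalg.det(A))           # approximately 5.0
x = np.linalg.solve(A, np.zeros(2))
print(x)                          # [0. 0.]: the kernel is trivial

# A rank-deficient matrix for contrast: its rows are multiples
# of each other, so its range is only a line in R^2.
B = np.array([[1, 2],
              [2, 4]], dtype=float)
print(np.linalg.matrix_rank(B))   # 1
```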


Rank and Nullity

Rank

We already know the concept of the rank of a matrix.

Let's proceed to the rank of a linear transformation.

The rank of a linear transformation (or a matrix) measures the "size" of the range (image):

Didn't understand? Alright, let's go through a detailed example to get this sorted.

Example explanation.

First off, recall that the dimension of a vector space comes from its basis: it is the number of vectors in a basis.

Step 1: Find the range of the transformation first.

Consider the linear transformation,

T: R3 → R2

which is given by:

T(x) = Ax

where

A = [1 2 3; 4 5 6]

Now, split the matrix into column vectors :

a1 = (1, 4), a2 = (2, 5), a3 = (3, 6)

Now, the range can be of three types:

If one vector in the column vector matrix is a multiple of the other, they span a line, however if they don't then they span the entire space.

This is true for:

In this case,

Is any one of a1, a2, a3 a scalar multiple of another?

No. There is no scalar you can multiply a1 by to get a2 or a3, and none that turns a2 into a3 (or vice-versa), so no pair consists of scalar multiples.

Since at least two of the column vectors are not scalar multiples of one another, the range spans the whole vector space, which is R2 in this case.


Step 2: Find a basis for the column space, which spans the range.

Here we have:

A = [1 2 3; 4 5 6]

Since the matrix is not square, we can't rely on the determinant here; instead we reduce it to row-echelon form and count the number of pivots.

The first pivot is A11 = 1. To eliminate the entry below it, the multiplier is m = 4/1 = 4.

So,

R2 → R2 − 4R1:  A = [1 2 3; 0 −3 −6]

Now divide R2 by −3 to get the second pivot:

A = [1 2 3; 0 1 2]

Now to make sure that in the pivot column, the pivot is the only leading non-zero entry, we perform an operation on R1

m = 21 = 2

R1  R1  2R2A = [101012]

Now the matrix is in an acceptable row-echelon form.

We have 2 pivots, so the dimension of the column space (the range) is 2, which means the rank of this transformation is 2.

So the rank of T is 2.
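The hand row reduction above can be reproduced in NumPy (illustrative sketch, not part of the notes; `matrix_rank` is used as an independent check):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]], dtype=float)

# Reproduce the hand row reduction step by step:
A[1] = A[1] - 4 * A[0]   # R2 -> R2 - 4*R1  gives [0, -3, -6]
A[1] = A[1] / -3         # R2 -> R2 / -3    gives [0, 1, 2]
A[0] = A[0] - 2 * A[1]   # R1 -> R1 - 2*R2  gives [1, 0, -1]
print(A)                 # [[1, 0, -1], [0, 1, 2]]

# Two pivot rows => rank 2; NumPy's rank computation agrees.
print(np.linalg.matrix_rank(A))  # 2
```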


Nullity

The nullity of a linear transformation (or a matrix) measures the "size" of the kernel (null space):

Or, equivalently, the number of free variables left after setting the transformation equal to zero.

However, once the rank of the transformation has been calculated, the nullity can also be obtained directly:

Nullity = number of columns − rank of the transformation (number of pivots)

So, in our example matrix, we have 3 columns.

The rank is 2.

So the nullity of the transformation:

T(x) = Ax

is: 3 − 2 = 1


Rank-Nullity Theorem

Now let's delve into the theorem which links both of the concepts together.

For a linear transformation T: V → W (or an m×n matrix A), the theorem states:

Dimension of the domain = Rank + Nullity

Or, for a matrix A with n columns:

n = rank(A) + nullity(A)

What Does This Mean?


Example

From our previous matrix:

A = [1 0 −1; 0 1 2]

The dimension of the domain is rank + nullity = 2 + 1 = 3, which matches the 3 columns of A.
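The theorem can be verified numerically for this matrix (an illustrative NumPy sketch, not part of the notes):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]], dtype=float)

n = A.shape[1]                    # number of columns = dimension of the domain
rank = np.linalg.matrix_rank(A)
nullity = n - rank                # rearranged rank-nullity theorem

print(rank, nullity, n)           # 2 1 3
# Rank-nullity: rank + nullity equals the dimension of the domain.
assert rank + nullity == n
```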


Inverse of a Linear Transformation

Definition

A linear transformation T: V → W is called invertible if there exists another linear transformation S: W → V such that:

S(T(v)) = v for all v ∈ V

and

T(S(w)) = w for all w ∈ W

For Matrices

For a square matrix A, the associated transformation T(x) = Ax is invertible if and only if A is invertible (i.e., A has an inverse matrix A⁻¹ such that AA⁻¹ = I, where I is the identity matrix).


How to check if a matrix is invertible or not.

We typically know this by checking the determinant of the matrix, but there are several equivalent conditions as well.

A square n×n matrix A is invertible if:

  1. det(A) ≠ 0
  2. A has full rank (rank = n)
  3. The columns (equivalently, the rows) of A are linearly independent

Inverting the matrix then inverts the transformation: T⁻¹(x) = A⁻¹x.
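Using the matrix from the earlier examples, invertibility can be checked numerically (illustrative NumPy sketch, not part of the notes):

```python
import numpy as np

A = np.array([[2, 3],
              [1, 4]], dtype=float)

print(np.linalg.det(A))      # approximately 5.0: non-zero, so A is invertible
A_inv = np.linalg.inv(A)

# A @ A_inv should be the identity matrix I.
print(np.allclose(A @ A_inv, np.eye(2)))  # True

# The inverse matrix undoes the transformation: A_inv @ (A @ x) == x.
x = np.array([1.0, 2.0])
print(np.allclose(A_inv @ (A @ x), x))    # True
```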


Composition of Linear Maps

If S: U → V and T: V → W are linear maps, their composition T ∘ S: U → W is defined by:

(T ∘ S)(v) = T(S(v))

The composition of linear maps is always linear.


Example

Suppose we have two linear transformations given by two matrices

Let:

A = [1 2; 0 1], B = [3 0; 1 2]

Define S(x) = Ax and T(x) = Bx. Applying S first and then T gives:

T(S(x)) = B(Ax) = (BA)x

So, the matrix of the composition is the product BA.
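The identity B(Ax) = (BA)x can be checked numerically; a minimal NumPy sketch with the matrices above (the test vector x is a hypothetical choice):

```python
import numpy as np

A = np.array([[1, 2],
              [0, 1]])   # matrix of S
B = np.array([[3, 0],
              [1, 2]])   # matrix of T

x = np.array([1, 1])

# Applying S first and then T equals multiplying by the product B @ A.
step_by_step = B @ (A @ x)
composed = (B @ A) @ x
print(np.array_equal(step_by_step, composed))  # True
print(B @ A)  # [[3 6], [1 4]]: the matrix of the composition
```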