Module 4 -- Vector Spaces -- Mathematics - 1A
Index
- Vector Spaces
- The 8 Axioms of a Vector Space
- Linear Dependence
- Basis of a Vector Space
- Dimension of a Vector Space
- How to check if a set is a Basis
- Linear Transformations (Maps)
- Range and Kernel of a Linear Map
- Rank and Nullity
- Rank-Nullity Theorem
- Inverse of a Linear Transformation
- Composition of Linear Maps
Vector Spaces
What is a Vector Space?
A vector space (also called a linear space) is a collection of objects called vectors, where you can:
- Add any two vectors and get another vector in the same set.
- Multiply any vector by a scalar (a number) and get another vector in the set.
But not every set of objects is a vector space. The set must satisfy specific rules, called the axioms of vector spaces.
Formal Definition
Let $V$ be a non-empty set (of "vectors") with two operations: vector addition, and scalar multiplication by elements of a field $F$ (for us, the real numbers $\mathbb{R}$).
For any $u, v \in V$ and any scalar $c \in F$:
- You can add: $u + v \in V$
- You can scale by a number: $c\,v \in V$
- The following axioms hold:
The 8 Axioms of a Vector Space
- Associativity of addition: $(u + v) + w = u + (v + w)$
- Commutativity of addition: $u + v = v + u$
- Existence of zero vector: There is a vector $0 \in V$ so that $v + 0 = v$ for all $v \in V$.
- Existence of additive inverses: For every $v \in V$, there is a vector $-v$ so that $v + (-v) = 0$.
- Compatibility of scalar multiplication: $a(bv) = (ab)v$
- Identity element of scalar multiplication: $1 \cdot v = v$
- Distributivity of scalar multiplication w.r.t. vector addition: $a(u + v) = au + av$
- Distributivity w.r.t. scalar addition: $(a + b)v = av + bv$
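These axioms can be spot-checked numerically. The snippet below is a sanity check on sample vectors in $\mathbb{R}^3$, not a proof; the vectors and scalars are arbitrary choices of mine:

```python
import numpy as np

# Spot-check the 8 vector-space axioms for R^3 on random sample data.
rng = np.random.default_rng(0)
u, v, w = rng.random(3), rng.random(3), rng.random(3)
a, b = 2.0, -1.5
zero = np.zeros(3)

assert np.allclose((u + v) + w, u + (v + w))    # 1. associativity of addition
assert np.allclose(u + v, v + u)                # 2. commutativity of addition
assert np.allclose(v + zero, v)                 # 3. zero vector
assert np.allclose(v + (-v), zero)              # 4. additive inverse
assert np.allclose(a * (b * v), (a * b) * v)    # 5. compatibility of scaling
assert np.allclose(1.0 * v, v)                  # 6. scalar identity
assert np.allclose(a * (u + v), a * u + a * v)  # 7. distributivity over vectors
assert np.allclose((a + b) * v, a * v + b * v)  # 8. distributivity over scalars
print("all 8 axioms hold for these samples")
```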
Examples of Vector Spaces
- Euclidean Space ($\mathbb{R}^n$): Ordinary vectors you are familiar with, like $(x, y, z)$, are elements of $\mathbb{R}^3$.
- Space of Polynomials: The set of all polynomials of degree $\le n$ forms a vector space.
- Matrices: The set of all $m \times n$ matrices (with real entries) forms a vector space.
- Functions: The set of all real-valued continuous functions on an interval $[a, b]$ is a vector space (addition and scalar multiplication done pointwise).
Common Confusions
- Not All Sets Are Vector Spaces: For example, the set of positive real numbers is NOT a vector space under ordinary addition—it doesn’t contain zero or negative numbers.
- Closure Is Key: After adding or scaling, the result must stay within the set.
Linear Dependence
If you studied the concept of linear independence in module 3, then you should know that linear dependence of a set of vectors is the exact opposite of linear independence.
Still, I will go ahead and provide the formal definition.
Let's say you have a set of vectors $v_1, v_2, \ldots, v_n$ and consider the equation
$$c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = 0$$
where $c_1, c_2, \ldots, c_n$ are scalars.
The vectors are linearly independent if and only if the only solution to this equation is the trivial one, where all the scalars are zero.
The vectors are linearly dependent if there is at least one non-zero solution (some $c_i \neq 0$).
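In code, this definition reduces to a rank check: the vectors are dependent exactly when the matrix holding them as columns has rank smaller than the number of vectors. A minimal sketch (the sample vectors are my own, not from the text):

```python
import numpy as np

def linearly_dependent(*vectors):
    # Stack the vectors as columns; dependence <=> rank < number of vectors,
    # i.e. c1*v1 + ... + cn*vn = 0 has a non-zero solution.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) < len(vectors)

print(linearly_dependent([1, 2], [2, 4]))  # True:  (2, 4) = 2 * (1, 2)
print(linearly_dependent([1, 2], [3, 4]))  # False: only the zero solution
```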
Basis of a Vector Space
A basis of a vector space $V$ is a set of vectors that:
- Spans the vector space (fills up the vector space):
- Every vector in $V$
can be written as a linear combination of these basis vectors.
- Is linearly independent:
- No basis vector can be written as a combination of the other basis vectors.
In short:
A basis is a minimal, non-redundant set of building blocks that can generate the entire space.
Formal Definition:
If $B = \{b_1, b_2, \ldots, b_n\}$ is a basis of $V$, then:
- For every $v$
in $V$, there are unique scalars $c_1, c_2, \ldots, c_n$ such that
$$v = c_1 b_1 + c_2 b_2 + \cdots + c_n b_n$$
Why Do We Care About Bases?
- Coordinate System: When you choose a basis, every vector can be uniquely represented by a list of numbers (its coordinates with respect to the basis).
- Simplicity: Helps to work with a minimal number of vectors, without redundancy.
- Dimension: The number of vectors in a basis is called the dimension of the space (we’ll cover this next).
Examples
1. Standard Basis of $\mathbb{R}^n$:
$$e_1 = (1, 0, \ldots, 0),\quad e_2 = (0, 1, \ldots, 0),\quad \ldots,\quad e_n = (0, 0, \ldots, 1)$$
Any vector $(x_1, x_2, \ldots, x_n)$
can be written as:
$$x_1 e_1 + x_2 e_2 + \cdots + x_n e_n$$
2. Basis for $P_n$ (polynomials of degree $\le n$): $\{1, x, x^2, \ldots, x^n\}$.
Every polynomial of degree $\le n$ is a linear combination of these.
Non-Standard Basis
For $\mathbb{R}^2$, a set like $\{(1, 1), (1, -1)\}$ also works as a basis:
- You can solve for any $(x, y)$
as a combination of them.
- They are linearly independent.
How to check if a set is a Basis
No worries if you didn't understand the jargon above.
Suppose you’re given a set of vectors:
1. Put them as columns (or rows) of a matrix.
2. They form a basis if:
- They are linearly independent (determinant is nonzero if square, or row-reduce and check if the number of pivots = number of vectors).
- They span the space (you can form any vector as a linear combination).
For $\mathbb{R}^n$, any set of exactly $n$ linearly independent vectors is automatically a basis.
Quick example to understand this.
Let's check if the set:
$$\left\{ \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 3 \\ 4 \end{pmatrix} \right\}$$
is a basis for $\mathbb{R}^2$.
So, we need to check for two dimensions. Let's put the vectors together as the columns of a single $2 \times 2$ matrix:
$$A = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}$$
Since we know that this is a square matrix, we just need to check whether the determinant is non-zero.
So we don't need to go further into row-echelon form to count pivots:
$$\det(A) = (1)(4) - (3)(2) = -2 \neq 0$$
Now, since the determinant is non-zero, these vectors are linearly independent, which means they span the space (fill up the space).
Conclusion
- The set is linearly independent and spans $\mathbb{R}^2$.
- Therefore, it is a basis for $\mathbb{R}^2$.
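The same determinant check can be done in numpy. A sketch, using $(1, 2)$ and $(3, 4)$ as a sample candidate basis:

```python
import numpy as np

# Candidate basis vectors as the columns of a square matrix.
A = np.column_stack([[1, 2], [3, 4]])
d = np.linalg.det(A)

# Non-zero determinant => linearly independent => basis of R^2.
print(abs(d) > 1e-12)  # True (det is -2, up to floating-point error)
```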
Dimension of a Vector Space
What Is Dimension?
The dimension of a vector space is the number of vectors in any basis for that space.
- Formally: If a vector space $V$
has a basis consisting of $n$ vectors, then the dimension of $V$ is $n$. - The dimension is always a non-negative integer (it can be infinite for infinite-dimensional spaces, but those aren’t our focus here).
Why Is Dimension Important?
- It tells you how many independent directions you can move within the space.
- All bases of a given vector space have the same number of elements—this is a crucial theorem!
Examples
1. Ordinary Euclidean spaces:
$\mathbb{R}^2$ (plane) has dimension 2 (any basis has 2 vectors, e.g., $(1, 0)$ and $(0, 1)$). $\mathbb{R}^3$ (3D space) has dimension 3. $\mathbb{R}^n$ has dimension $n$.
2. Set of all polynomials of degree $\le 3$:
Any polynomial $a_0 + a_1 x + a_2 x^2 + a_3 x^3$ is a unique combination of the basis $\{1, x, x^2, x^3\}$.
So, its dimension is 4.
3. Space of $m \times n$ matrices (with real entries):
- Dimension is $mn$. - Why? Every matrix entry can be chosen freely, so there are $m \times n$ independent basis “matrices”, each with a single 1 in a unique place and zeros elsewhere.
How to Find the Dimension
- Find a basis for the vector space.
- Count the number of vectors in the basis.
For sets of vectors (like columns of a matrix), the maximum number of linearly independent vectors is the dimension of the span of those vectors (this is also the rank of the matrix—something you already know).
Key Insights
- Span: The set of all possible linear combinations. Dimension tells you “how far the span goes.”
- Zero Vector Space (just $\{0\}$): Its dimension is 0 (the empty basis).
- General rule: No set of more than $n$ vectors in $\mathbb{R}^n$ can be independent; the largest possible independent set has size $n$.
Dimension is a powerful tool:
- You quickly know the minimum number of vectors needed to span the space.
- It helps classify vector spaces and solve problems efficiently.
Working example
From the last example:
We found that the two vectors form a basis, so to find the dimension of the space they span, we just count the number of vectors in the basis.
So the dimension of the spanned vector space = 2.
Linear Transformations (Maps)
A linear transformation (or linear map) is a function between two vector spaces that preserves the operations of vector addition and scalar multiplication.
So basically what it does is convert or map the vectors from one vector space to another.
Formally:
If $T : V \to W$ is a map between vector spaces $V$ and $W$, then $T$ is linear if, for all $u, v \in V$ and all scalars $c$:
- $T(u + v) = T(u) + T(v)$ (addition is preserved)
- $T(cu) = cT(u)$ (scalar multiplication is preserved)
In simpler terms:
A transformation is linear if the image of a sum is the sum of images, and it also “respects” scaling. In other words, a conversion (transformation) from one vector space to another is linear only if the rules of addition and scalar multiplication are preserved.
Examples
1. Matrix Multiplication
If $A$ is an $m \times n$ matrix, then $T(x) = Ax$ is a linear map from $\mathbb{R}^n$ to $\mathbb{R}^m$.
2. Zero Map
$T(v) = 0$ for every $v$ is (trivially) linear.
3. Differentiation
The operation $T(f) = f'$ is linear on the space of differentiable functions.
4. Integration
For continuous functions on $[a, b]$, $T(f) = \int_a^b f(x)\,dx$ is linear.
Non-Example (Not Linear)
$T(x) = x + 1$ is not linear (because $T(u + v) \neq T(u) + T(v)$ in general). $T(x) = x^2$ is not linear (it doesn’t preserve scalar multiplication: $T(cx) = c^2x^2 \neq cT(x)$ in general).
Properties and Notation
- The set of all linear transformations from $V$ to $W$ is written as $L(V, W)$. - When $V = W$, this is called a linear operator on $V$.
Why Are Linear Transformations Important?
- Connect vector spaces: They reveal the “structure-preserving” ways to move from one space to another.
- Matrix representation: Every linear transformation between finite-dimensional spaces can be represented by a matrix.
- Applications: Used throughout mathematics, physics, engineering, computer science, machine learning, and more (rotations, scaling, projections, etc.).
A practical example.
Suppose we have this matrix:
$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$$
And we define the transformation $T(x) = Ax$,
where $x \in \mathbb{R}^2$.
Now let's check if this transformation is linear or not.
Let $u = (1, 0)$ and $v = (0, 1)$.
1. Check Additivity:
Is $T(u + v) = T(u) + T(v)$?
a) Compute $u + v = (1, 1)$
b) Compute $T(u + v) = A \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 7 \end{pmatrix}$
c) Compute $T(u)$ and $T(v)$ separately: $T(u) = (1, 3)$ and $T(v) = (2, 4)$, so $T(u) + T(v) = (3, 7)$.
So additivity is preserved.
2. Check Scalar Multiplication
Is $T(cu) = cT(u)$?
Let's use $c = 2$.
a) Compute $cu = (2, 0)$
b) Compute $T(cu) = (2, 6)$
c) Compute $cT(u) = 2(1, 3) = (2, 6)$
Thus scalar multiplication is preserved as well.
So this transformation is linear.
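The same two checks can be automated. A small sketch, where the matrix $A$ and the sample vectors are illustrative choices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
T = lambda x: A @ x  # the transformation T(x) = Ax

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
c = 2.0

print(np.allclose(T(u + v), T(u) + T(v)))  # True: additivity holds
print(np.allclose(T(c * u), c * T(u)))     # True: scaling holds
```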
Range and Kernel of a Linear Map
What is the Kernel (Null Space)?
The kernel (or null space) of a linear transformation $T : V \to W$ is the set of all vectors $v \in V$ such that $T(v) = 0$.
- It tells you which inputs get "flattened" to zero under the transformation.
- The kernel is always a subspace of the domain $V$.
What is the Range (Image)?
The range (or image) of a linear transformation $T : V \to W$ is the set of all vectors $w \in W$ such that $w = T(v)$ for some $v \in V$.
- It tells you what vectors you can reach in $W$
by applying $T$ to something in $V$. - The range is always a subspace of the co-domain $W$.
Matrix example
Let's revisit the previous example:
We had a transformation $T(x) = Ax$,
And a matrix:
$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$$
Step 1: Find the Kernel.
To find the kernel of a linear transformation, just equate the transformation to zero.
Let $x = (x_1, x_2)$:
So, $Ax = 0$.
Converting that to a system of linear equations:
$$x_1 + 2x_2 = 0$$
$$3x_1 + 4x_2 = 0$$
Solving the second equation gets us $x_2 = -\tfrac{3}{4}x_1$.
Substituting that value into the first:
$$x_1 - \tfrac{3}{2}x_1 = -\tfrac{1}{2}x_1 = 0$$
So $$x_1 \ = \ 0$$ and therefore $x_2 = 0$.
So the kernel of this matrix is just the zero vector itself:
$$\ker(T) = \left\{ \begin{pmatrix} 0 \\ 0 \end{pmatrix} \right\}$$
This doesn't mean that the kernel always has to be the zero vector; in general, the kernel can be any subspace of the domain $V$.
Range Example:
The range is the set of all vectors you can get as $Ax$ for some $x \in \mathbb{R}^2$. Since $\det(A) \neq 0$, the columns of $A$ are independent, so the range is all of $\mathbb{R}^2$.
If the matrix had less than full rank (e.g. all rows were multiples of each other), the range would be a line instead of the entire plane.
TLDR: The range is, as the name implies, the entire range of outputs of the transformation, over all input values $x$.
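The kernel can also be read off numerically from the singular value decomposition: right-singular vectors whose singular values are (numerically) zero span the kernel. A sketch, where the helper `null_space` is my own small implementation:

```python
import numpy as np

def null_space(A, tol=1e-12):
    # Right-singular vectors with vanishing singular values span ker(A).
    _, s, vt = np.linalg.svd(A)
    # Pad s with zeros so it lines up with every row of vt.
    s = np.concatenate([s, np.zeros(vt.shape[0] - len(s))])
    return vt[s <= tol].T  # columns span the kernel

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(null_space(A).shape[1])  # 0: the kernel is only the zero vector

B = np.array([[1.0, 2.0],
              [2.0, 4.0]])     # rank 1: the rows are multiples
print(null_space(B).shape[1])  # 1: the kernel is a whole line
```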
Rank and Nullity
Rank
We already know the concept of the rank of a matrix.
Let's proceed to the rank of a linear transformation.
The rank of a linear transformation (or a matrix) measures the "size" of the range (image):
- Definition: The rank of a transformation $T$
is the dimension of its range. - For a matrix $A$, $\operatorname{rank}(A)$ = number of linearly independent rows or columns (we already know this). - Geometric meaning: It tells you the number of “directions” in which the transformation can “move” a vector (i.e., directions not mapped to zero).
Didn't understand? Alright, let's go through a detailed example to get this sorted.
Example explanation.
First off, recall that the dimension of a vector space comes from the basis of a vector space, it is the number of vectors that are present in a basis.
Step 1: Find the range of the transformation first.
Consider the linear transformation $T : \mathbb{R}^3 \to \mathbb{R}^2$,
which is given by:
$$T(x) = Ax$$
where
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}$$
Now, split the matrix into column vectors:
$$v_1 = \begin{pmatrix} 1 \\ 4 \end{pmatrix},\quad v_2 = \begin{pmatrix} 2 \\ 5 \end{pmatrix},\quad v_3 = \begin{pmatrix} 3 \\ 6 \end{pmatrix}$$
Now, the range can be of three types:
- Either the vectors span only a single line,
- Or they occupy the entire target space ($\mathbb{R}^2$ in this case),
- Or something else.
If one column vector is a multiple of another, those two span only a line; however, if no vector is a multiple of another, they span the entire space.
This is true because:
- If all the column vectors are scalar multiples of each other, then they are linearly dependent and thus span a single line.
- If at least two column vectors are not scalar multiples of each other, then their span could be a plane (if in $\mathbb{R}^3$
or higher), or even the whole space if there are enough independent directions.
In this case:
Is $v_2$ a multiple of $v_1$?
There's no scalar you can multiply $v_1$ by to get $v_2$.
Is $v_3$ a multiple of $v_1$?
There's no scalar you can multiply $v_1$ by to get $v_3$.
Is $v_3$ a multiple of $v_2$?
There's no scalar you can multiply $v_2$ by to get $v_3$.
So, since no vector here is a scalar multiple of another, the range spans the whole target space, which is $\mathbb{R}^2$.
Step 2: Find a basis of the column vectors, which spans the range.
Here we have:
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}$$
Since the matrix is non-square, we can't really resort to the determinant here, but instead have to reduce it to row-echelon form and count the number of pivots.
For $R_2$, apply $R_2 \to R_2 - 4R_1$.
Thus $R_2 = (4, 5, 6) - 4(1, 2, 3) = (0, -3, -6)$.
So,
$$\begin{pmatrix} 1 & 2 & 3 \\ 0 & -3 & -6 \end{pmatrix}$$
Now we can divide $R_2$ by $-3$:
$$\begin{pmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \end{pmatrix}$$
Now, to make sure that in each pivot column the pivot is the only non-zero entry, we perform an operation on $R_1$: $R_1 \to R_1 - 2R_2 = (1, 0, -1)$:
$$\begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \end{pmatrix}$$
Now this is an acceptable (reduced) row-echelon form.
We have 2 pivots, so the dimension of this basis is 2, which means the rank of this transformation is 2.
So the rank of $T$ is 2.
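The pivot count can be cross-checked in numpy, which computes the rank numerically (via the SVD) rather than by row reduction. A sketch with a sample $2 \times 3$ matrix:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

# matrix_rank counts the non-negligible singular values of A,
# which equals the number of pivots in its row-echelon form.
print(np.linalg.matrix_rank(A))  # 2
```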
Nullity
The nullity of a linear transformation (or a matrix) measures the "size" of the kernel (null space):
- Definition: The nullity of $T$
is the dimension of its kernel.
Or, the number of free variables we are left with after equating the transformation to zero.
However, in the process of finding the rank of a transformation, after the rank is calculated,
nullity can also be calculated as:
$$\text{nullity} = (\text{number of columns}) - \text{rank}$$
So, in our example matrix, we have 3 columns.
The rank is 2.
So the nullity of the transformation $T$ is:
$$\text{nullity} = 3 - 2 = 1$$
Rank-Nullity Theorem
Now let's delve into the theorem which links both of the concepts together.
For a linear transformation $T : V \to W$:
$$\operatorname{rank}(T) + \operatorname{nullity}(T) = \dim(V)$$
Or, for a matrix $A$ with $n$ columns:
$$\operatorname{rank}(A) + \operatorname{nullity}(A) = n$$
What Does This Mean?
- Rank = The dimension of the image (column space/range) = number of independent output directions.
- Nullity = The dimension of the kernel (null space) = number of independent “lost directions” (inputs that get sent to zero).
- Domain dimension = The total number of independent variable “directions” you can choose as input.
- Useful for:
  - Solving systems of equations,
  - Understanding how a transformation “compresses” or “loses” information,
  - Determining invertibility (full rank = invertible; nullity zero).
Example
From our previous matrix:
- Number of columns = 3
- Rank = 2
- Nullity = 1
The dimension of the domain the transformation acts on checks out:
$$\operatorname{rank} + \operatorname{nullity} = 2 + 1 = 3 = \dim(\mathbb{R}^3)$$
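The theorem is easy to verify numerically: the non-zero singular values give the rank, and the remaining singular directions of the domain give the nullity. A sketch on a sample $2 \times 3$ matrix of my choosing:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
n = A.shape[1]  # dimension of the domain, here R^3

s = np.linalg.svd(A, compute_uv=False)
s = np.concatenate([s, np.zeros(n - len(s))])  # one value per domain direction

rank = int(np.sum(s > 1e-12))      # directions that survive the map
nullity = int(np.sum(s <= 1e-12))  # directions sent to zero

print(rank, nullity)        # 2 1
print(rank + nullity == n)  # True: rank-nullity theorem
```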
Inverse of a Linear Transformation
Definition
A linear transformation $T : V \to W$ is invertible if there exists a transformation $S : W \to V$ such that
$$S(T(v)) = v \text{ for all } v \in V$$
and
$$T(S(w)) = w \text{ for all } w \in W$$
- The transformation $S$
is called the inverse of $T$, usually denoted as $T^{-1}$.
For Matrices
For a square matrix $A$, the transformation $T(x) = Ax$ is invertible exactly when the inverse matrix $A^{-1}$ exists.
- If invertible:
  - The transformation is bijective (one-to-one and onto).
  - The kernel is just the zero vector.
  - The rank is full ($\operatorname{rank}(A) = n$ for $n \times n$ matrices).
- If not invertible:
  - There are inputs that are lost (the kernel is non-trivial).
  - The transformation is not one-to-one or not onto (or both).
How to check if a matrix is invertible
We typically know this by checking the determinant of the matrix, but here are a few other methods as well.
A square matrix is invertible if:
- Its determinant is not zero.
- It has full rank (number of pivots = number of columns = number of rows).
- The nullity is zero.
Inverting the matrix gives the inverse transformation: if $T(x) = Ax$, then $T^{-1}(y) = A^{-1}y$.
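The three criteria above are straightforward to test in numpy. A sketch on a sample $2 \times 2$ matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
n = A.shape[0]

print(abs(np.linalg.det(A)) > 1e-12)      # True: determinant is non-zero
print(np.linalg.matrix_rank(A) == n)      # True: full rank
# nullity = n - rank = 0 follows from the full-rank check.

A_inv = np.linalg.inv(A)                  # matrix of the inverse transformation
print(np.allclose(A @ A_inv, np.eye(n)))  # True: A A^{-1} = I
```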
Composition of Linear Maps
- If you have two linear maps:
  - $S : U \to V$
  - $T : V \to W$
- Their composition is the map
$$T \circ S : U \to W$$
defined by:
$$(T \circ S)(u) = T(S(u))$$
- Key property:
The composition of linear maps is always linear.
Example
Suppose we have two linear transformations given by two matrices:
- $S$ given by matrix $B$
- $T$ given by matrix $A$
Let $x$ be a vector in the domain of $S$.
Applying $S$ first and then $T$:
$$T(S(x)) = A(Bx) = (AB)x$$
So, the matrix of the composition is the product $AB$.
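This can be confirmed numerically: applying the two maps one after the other agrees with multiplying their matrices first. The matrices below are sample choices of mine:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])  # T: scales the y-axis
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # S: swaps the two coordinates
x = np.array([3.0, 4.0])

step_by_step = A @ (B @ x)  # apply S, then T
one_matrix   = (A @ B) @ x  # single matrix AB for the composition

print(np.allclose(step_by_step, one_matrix))  # True
```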