Motivation and definition of the inverse of a matrix
The product of a matrix and a vector is defined, and this is used to show that a system of linear equations is equivalent to a single matrix-vector equation. The example uses a 2x3 system.
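A minimal illustration of the translation described above (the specific 2x3 system from the video is not reproduced in this entry, so these numbers are hypothetical): the system
\[
\begin{cases} x_1 + 2x_2 - x_3 = 4 \\ 3x_1 - x_2 + 2x_3 = 1 \end{cases}
\qquad\text{is equivalent to}\qquad
\begin{bmatrix} 1 & 2 & -1 \\ 3 & -1 & 2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 4 \\ 1 \end{bmatrix}.
\]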
Advice to instructors for in-class activities on matrix-vector multiplication and on translating among the equivalent notational forms of a linear system, with suggestions for how this topic can be used to motivate future topics.
A 3x3 system having a unique solution is solved by putting the augmented matrix in reduced row echelon form. A picture of three intersecting planes provides geometric intuition.
A 3x3 matrix equation Ax=b is solved for two different values of b. In one case there is no solution, and in another there are infinitely many solutions. These examples illustrate a theorem about linear combinations of the columns of the matrix A.
The reduced row echelon form is used to determine when a 3x3 system is inconsistent. A picture of planes in 3-dimensional space is used to provide geometric intuition.
Applet to show the configuration of planes corresponding to a 3x3 system having 0, 1, or infinitely many solutions. (need to make tags for this topic)
Sample problems to help understand when a linear system has 0, 1, or infinitely many solutions.
Linear independence is defined, followed by a worked example of 3 vectors in R^3.
Linear transformations are defined, and some small examples (and non-examples) are explored. (need tag for R^2 -> R^2 example, general)
Two proofs, with discussion, of the fact that an abstract linear transformation maps 0 to 0.
Examples of special types of linear transformations from R^2 to R^2: dilation, projection, and shear. (Some issues with the video: the content restarts around the 10-second mark, and at 3:46 the word "projection" is said when it should be "transformation". Also, at the end it could be explained why the last transformation is called a 'shear'.)
For a specific 3x3 matrix, solve Ax=0 by row reducing an augmented matrix.
Visualize 2-d linear transformations by looking at the image of a geometric object. (Need topic: Visualize a linear transformation on R^2 by its effect on a region.)
Notation for matrix entries, diagonal matrix, square matrix, identity matrix, and zero matrix.
Definition of sum of matrices, product of a scalar and a matrix
Learning goals: 1. What are the dimension (size) requirements for two matrices so that they can be multiplied? 2. What is the product of two matrices, when it exists?
A 2x2 example is used to show that AB does not always equal BA.
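The specific matrices from the video are not recorded in this entry; a hypothetical pair showing the failure of commutativity is
\[
\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}
=
\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}
\qquad\text{but}\qquad
\begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}
=
\begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}.
\]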
A 3x2 matrix and a 2x3 matrix are used to show that AB does not always equal BA.
The transpose of a matrix is defined, and various properties are explored using numerical examples.
Suggestions for in-class activities on matrix operations: addition, multiplication, transpose, and the fact that multiplication is not commutative.
The definition of the matrix inverse is motivated by considering the multiplicative inverse of a number. The identity matrix and the matrix inverse are defined.
The formula for the inverse of a 2x2 matrix is derived. (need tag for that formula)
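For reference, the formula in question: assuming \(ad-bc \neq 0\),
\[
\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1}
= \frac{1}{ad-bc}
\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.
\]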
Matrix inverses are motivated as a way to solve a linear system. The general algorithm of finding an inverse by row reducing an augmented matrix is described, and then implemented for a 3x3 matrix. Useful facts about inverses are stated and then illustrated with sample 2x2 matrices. (put first: need Example of finding the inverse of a 3-by-3 matrix by row reducing the augmented matrix)
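The general algorithm can be summarized as follows (the particular 3x3 matrix used in the video is not reproduced here): augment \(A\) with the identity matrix and row reduce,
\[
[\,A \mid I\,] \;\longrightarrow\; [\,I \mid A^{-1}\,];
\]
if the left block cannot be reduced to \(I\), then \(A\) has no inverse.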
Suggested classroom activities on matrix inverses.
Eleven different statements are shown to be equivalent to "The matrix A is nonsingular". (need a tag for a theorem that contains several of these equivalences)
A guide for discussing the many statements equivalent to "The matrix A is nonsingular". (need a tag for multiple equivalences)
Definition of vector, equality of vectors, vector addition, and scalar multiplication of vectors. Geometric and algebraic properties of vector addition are discussed. (need a topic on the commutativity and associativity of vector addition)
The linear combination of a set of vectors is defined. Determine if a vector in R^2 is in the span of two other vectors. The span of a set of vectors is related to the columns of a matrix. (need topic: Determine if a vector in R^2 is in the span of two other vectors.)
Definition of the span of a set of vectors. Example of checking if a vector in R^3 is in the span of a set of two vectors. Geometric picture of a span.
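A hypothetical instance of the check described above (not the vectors from the video): \(\mathbf{b}\) lies in \(\operatorname{span}\{\mathbf{v}_1,\mathbf{v}_2\}\) exactly when the augmented system \([\,\mathbf{v}_1\;\,\mathbf{v}_2 \mid \mathbf{b}\,]\) is consistent. For example,
\[
\begin{bmatrix} 2 \\ 3 \\ 5 \end{bmatrix}
= 2\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}
+ 3\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix},
\]
so this particular vector is in the span; an inconsistent augmented system would show the opposite.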
Suggestions for in-class activities on linear combinations and the span of vectors in R^n. (need a topic for the general *process* of determining if a vector is in the span of a set of vectors)
A 4x4 matrix is put into row echelon form (not RREF), with motivation and all the details (need a 4x4 ref tag)
Suggestions for in-class activities for row reduction. (Need to add tags. The referenced videos should be separate assets.)
This video kicks off the series of videos on vector spaces. We begin by summarizing the essential properties of R^n.
Matrices can be thought of as transforming space, and understanding how this works is crucial for understanding many other ideas that follow in linear algebra.
In this video we continue to list the properties of R^n. The 10 properties listed in this video and the previous video will be used to define a general vector space.
The concept of a vector space is somewhat abstract, and under this definition, a lot of objects such as polynomials, functions, etc., can be considered as vectors. This video explains the definition of a general vector space. In later videos we will look at more examples.
Preliminaries: 1. What is a subset? 2. How do we verify that one set is a subset of another? 3. Notation and language of set theory related to subsets. In this video, we introduce the definition of a subspace. We work through a preliminary example to figure out what subspaces of R^2 look like, and in later videos we will continue to discuss how to verify that a subset of a vector space is a subspace.
In-class activity for linear combinations and span.
In-class activity on linear independence.
After watching a video defining linear transformations and giving examples of 2-D transformations, students should be able to answer the questions in this quiz.
In-class activity to be completed after an introduction to transformations and ideally in teams. In part 1, students are guided to discover the theorem describing the matrix of a linear transformation from R^n to R^m. In part 2, students learn the one-to-one and onto properties of linear transformations, and are asked to relate these properties to the properties of the matrices (linear independence of columns and columns spanning the codomain).
Students answer multiple questions on the rank and the dimension of the null space in a variety of situations to discover the connection between these dimensions, leading to the Rank-Nullity Theorem.
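The connection students are led to discover is the Rank-Nullity Theorem: for an \(m \times n\) matrix \(A\),
\[
\operatorname{rank}(A) + \dim\operatorname{Nul}(A) = n,
\]
that is, the number of pivot columns plus the number of free variables equals the total number of columns.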
An introductory activity on eigenvalues and eigenvectors in which students do basic matrix-vector multiplication calculations to find whether given vectors are eigenvectors, to determine the eigenvalue corresponding to an eigenvector and to find an eigenvector corresponding to an eigenvalue. This activity is self-contained and does not require any previous experience with eigenvalues or eigenvectors.
This is an activity on how linear algebra can be used to find the Lagrange polynomials to construct the polynomial of degree n that passes through n+1 distinct points. This activity requires knowledge of vector spaces (particularly polynomial spaces), linear transformations, and matrix inverses. It would be helpful to have a tag for applications of linear algebra for material like this.
University of Waterloo Math Online.
Slides for the accompanying video from University of Waterloo.
From the University of Waterloo Math Online
Slides from the corresponding video from the University of Waterloo.
Video Lesson from University of Waterloo.
This is a guided discovery of the formula for Lagrange Interpolation, which lets you find the formula for a polynomial which passes through a given set of points.
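For reference, the formula the activity builds toward: given points \((x_0,y_0),\dots,(x_n,y_n)\) with distinct \(x_i\), the interpolating polynomial of degree at most \(n\) is
\[
p(x) = \sum_{i=0}^{n} y_i\,\ell_i(x),
\qquad
\ell_i(x) = \prod_{\substack{j=0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}.
\]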
Quiz from the University of Waterloo.
This is a video from the University of Waterloo: Dot Product, Cross Product in R^n. (This should be in Chapter 8, Section 4, about hyperplanes.)
Quiz from the University of Waterloo. This is intended to be used after the video of the same name.
This is from the University of Waterloo. It includes content about projections, as well as some content from multivariable calculus. These notions are developed in Euclidean space.
This is a quiz from the University of Waterloo. It is a quiz about projections that is strictly in R^n. It additionally asks questions on perpendicular vectors and cross products.
In this video, I'll explain why we only need to test 2 axioms (among the 10 axioms in the definition of a vector space) when figuring out if a subset is a subspace.
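The two conditions referred to are presumably the closure axioms: a nonempty subset \(W\) of a vector space \(V\) is a subspace provided
\[
\mathbf{u} + \mathbf{v} \in W
\quad\text{and}\quad
c\,\mathbf{u} \in W
\qquad\text{for all } \mathbf{u},\mathbf{v} \in W \text{ and all scalars } c,
\]
since the remaining axioms are inherited from \(V\).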
One-to-one and onto properties of linear transformations: definition, geometric interpretation, examples, and the connection with properties of the columns of a matrix representation.
Equivalence of systems of linear equations, row operations, corresponding matrices representing the linear systems
Statements that are equivalent to a square matrix being invertible; examples.
Orthogonal projection onto subspace in R^n minimizes distance; projection formula simplification for orthonormal bases; relation to orthogonal matrices
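The simplified formula referenced here: if \(\{\mathbf{u}_1,\dots,\mathbf{u}_k\}\) is an orthonormal basis for a subspace \(W\) of \(\mathbb{R}^n\), then the point of \(W\) closest to \(\mathbf{v}\) is
\[
\operatorname{proj}_W(\mathbf{v}) = (\mathbf{v}\cdot\mathbf{u}_1)\,\mathbf{u}_1 + \cdots + (\mathbf{v}\cdot\mathbf{u}_k)\,\mathbf{u}_k.
\]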
A real matrix $A$ is symmetric if and only if it is orthogonally diagonalizable (i.e. $A = PDP^{-1}$ for an orthogonal matrix $P$.) Proof and examples.
Inner product of two vectors in R^n, length of a vector in R^n, orthogonality. Motivation via approximate solutions of systems of linear equations; definition and properties of the inner product (symmetric, bilinear, positive definite); length/norm of a vector, unit vectors; definition of distance between vectors; definition of orthogonality; Pythagorean Theorem.
Definition of the inverse of a matrix, examples, uniqueness; formula for the inverse of a 2x2 matrix; determinant of a 2x2 matrix; using the inverse to solve a system of linear equations.
How to compute all solutions to a general system $Ax=b$ of linear equations and connection to the corresponding homogeneous system $Ax=0$. Visualization of the geometry of solution sets. Consistent systems and their solution using row reduction.
The effect of row operations on the determinant of a matrix; computing determinants via row reduction; a square matrix is invertible if and only if its determinant is nonzero.
Definition of the eigenspace corresponding to an eigenvalue $\lambda$ (and proof that this is a vector space); analysis of simple matrices in R^2 and R^3 to visualize the "geometry" of eigenspaces; proof that eigenvectors corresponding to distinct eigenvalues are linearly independent
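In the notation of this entry, the eigenspace attached to an eigenvalue \(\lambda\) of \(A\) is
\[
E_\lambda = \{\,\mathbf{x} : A\mathbf{x} = \lambda\mathbf{x}\,\} = \operatorname{Nul}(A - \lambda I),
\]
which is a subspace because it is the null space of a matrix.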
Properties of matrix inversion: inverse of the inverse, inverse of the transpose, inverse of a product; elementary matrices and corresponding row operations; a matrix is invertible if and only if it is row-equivalent to the identity matrix; row-reduction algorithm for computing matrix inverse
Basis theorem: for an n-dimensional vector space any linearly independent set with n elements is a basis, as is any spanning set with n elements; dimension of the column space of a matrix equals the number of pivot columns of the matrix; dimension of the null space of a matrix equals the number of free variables of the matrix
Diagonalization theorem: an nxn matrix A is diagonalizable if and only if it has n linearly independent eigenvectors. If so, the matrix factors as A = PDP^{-1}, where D is diagonal and P is invertible (and its columns are the n linearly independent eigenvectors). Algorithm to diagonalize a matrix.
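A minimal 2x2 illustration of the factorization (hypothetical, not taken from the listed resource):
\[
\begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}
=
\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}
\begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix},
\]
where the columns of \(P\) are eigenvectors for \(\lambda = 2\) and \(\lambda = 3\), and \(D\) carries the eigenvalues.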
Homogeneous systems of linear equations; trivial versus nontrivial solutions of homogeneous systems; how to find nontrivial solutions; how to know from the reduced row-echelon form of a matrix whether the corresponding homogeneous system has nontrivial solutions.
The pivot columns of a matrix form a basis for its column space; nullspace of a matrix equals the nullspace of its reduced row-echelon form.
Definition of a (real) vector space; properties of the zero vector and the additive inverse in relation to scalar multiplication
Representation (unique) of a vector in terms of a basis for a vector space yields coordinates relative to the basis; change of basis and corresponding change of coordinate matrix
Definition of echelon form, reduction of a matrix to echelon form in order to compute solutions to systems of linear equations; definition of reduced row echelon form
Use matrix transformations to motivate the concept of linear transformation; examples of matrix transformations
Determinant of the transpose equals the determinant of the original matrix; rescaling a column rescales the determinant by the same factor; interchanging two columns changes the sign of the determinant; adding multiple of one column to another leaves determinant unchanged; determinant of the product of two matrices equals product of the two determinants
Associative and distributive properties of matrix multiplication and addition; multiplication by the identity matrix; definition of the transpose of a matrix; transpose of the transpose, transpose of a sum, transpose of a product
Equivalent statements for a matrix A: for every right-hand side b, the system Ax=b has a solution; every b is a linear combination of the columns of A; the span of the columns of A is maximal; A has a pivot position in every row.
Orthonormal sets and bases (definition); expressing vectors as linear combinations of orthonormal basis vectors; matrices with orthonormal columns preserve vector norm and dot product; orthogonal matrices; inverse of an orthogonal matrix equals its transpose
Definition of the column space of a matrix; column space is a subspace; comparison to the null space; definition of a linear transformation between vector spaces; definition of kernel and range of a linear transformation
Given a basis for an n-dimensional vector space V, the coordinate map is a linear bijection between V and R^n; definition of isomorphisms between vector spaces and of isomorphic vector spaces.
Motivation of the definition of a linear transformation using properties of matrices; examples; geometric intuition; matrix representation of a linear transformation
Theorem: \lambda is an eigenvalue of a matrix A if and only if \lambda satisfies the characteristic equation det(A-\lambda I) = 0; examples; eigenvalues of triangular matrices are the diagonal entries.
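A small worked instance of the theorem (illustrative, not from the listed resource): for
\[
A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix},
\qquad
\det(A - \lambda I) = (2-\lambda)^2 - 1 = (\lambda - 1)(\lambda - 3),
\]
so the eigenvalues are \(\lambda = 1\) and \(\lambda = 3\).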
Definition of a vector; vector addition; scalar multiplication; visualization in R^2 and R^3; vector space axioms; linear combinations; span.
Definition of similarity for square matrices; similarity is an equivalence relation; similar matrices have the same characteristic polynomial and hence the same eigenvalues, with same multiplicities; definition of multiplicity.
Definition of a subspace of a vector space; examples; span of vectors is a subspace.
We will motivate our study of linear algebra by considering the problem of solving several linear equations simultaneously. The word solve tends to get abused somewhat, as in “solve this problem.” When talking about equations we understand a more precise meaning: find all of the values of some variable quantities that make an equation, or several equations, simultaneously true.
We begin our study of linear algebra with an introduction and a motivational example.
After solving a few systems of equations, you will recognize that it does not matter so much what we call our variables, as opposed to what numbers act as their coefficients. A system in the variables \(x_1,\,x_2,\,x_3\) would behave the same if we changed the names of the variables to \(a,\,b,\,c\) and kept all the constants the same and in the same places. In this section, we will isolate the key bits of information about a system of equations into something called a matrix, and then use this matrix to systematically solve the equations. Along the way we will obtain one of our most important and useful computational tools.
We will now be more careful about analyzing the reduced row-echelon form derived from the augmented matrix of a system of linear equations. In particular, we will see how to systematically handle the situation when we have infinitely many solutions to a system, and we will prove that every system of linear equations has either zero, one or infinitely many solutions. With these tools, we will be able to routinely solve any linear system.
In this section we specialize to systems of linear equations where every equation has a zero as its constant term. Along the way, we will begin to express more and more ideas in the language of matrices and begin a move away from writing out whole systems of equations. The ideas initiated in this section will carry through the remainder of the course.
In this section we specialize further and consider matrices with equal numbers of rows and columns, which when considered as coefficient matrices lead to systems with equal numbers of equations and variables. We will see in the second half of the course (Chapter D, Chapter E, Chapter LT, Chapter R) that these matrices are especially important.
In this section we define some new operations involving vectors, and collect some basic properties of these operations. Begin by recalling our definition of a column vector as an ordered list of complex numbers, written vertically (Definition CV). The collection of all possible vectors of a fixed size is a commonly used set, so we start with its definition.
In Section VO we defined vector addition and scalar multiplication. These two operations combine nicely to give us a construction known as a linear combination, a construct that we will work with throughout this course.
In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)
Linear independence is one of the most fundamental conceptual ideas in linear algebra, along with the notion of a span. So this section, and the subsequent Section LDS, will explore this new idea.
In this section we will provide an extremely compact way to describe an infinite set of vectors, making use of linear combinations. This will give us a convenient way to describe the solution set of a linear system, the null space of a matrix, and many other sets of vectors.
In any linearly dependent set there is always one vector that can be written as a linear combination of the others. This is the substance of the upcoming Theorem DLDS. Perhaps this will explain the use of the word “dependent.” In a linearly dependent set, at least one vector “depends” on the others (via a linear combination).
In this section we will back up and start simple. We begin with a definition of a totally general set of matrices, and see where that takes us.
We know how to add vectors and how to multiply them by scalars. Together, these operations give us the possibility of making linear combinations. Similarly, we know how to add matrices and how to multiply matrices by scalars. In this section we mix all these ideas together and produce an operation known as matrix multiplication. This will lead to some results that are both surprising and central. We begin with a definition of how to multiply a vector by a matrix.
The inverse of a square matrix, and solutions to linear systems with square coefficient matrices, are intimately connected.
We saw in Theorem CINM that if a square matrix \(A\) is nonsingular, then there is a matrix \(B\) so that \(AB=I_n\text{.}\) In other words, \(B\) is halfway to being an inverse of \(A\text{.}\) We will see in this section that \(B\) automatically fulfills the second condition (\(BA=I_n\)). Example MWIAA showed us that the coefficient matrix from Archetype A had no inverse. Not coincidentally, this coefficient matrix is singular. We will make all these connections precise now. Not many examples or definitions in this section, just theorems.
A matrix-vector product (Definition MVP) is a linear combination of the columns of the matrix and this allows us to connect matrix multiplication with systems of equations via Theorem SLSLC. Row operations are linear combinations of the rows of a matrix, and of course, reduced row-echelon form (Definition RREF) is also intimately related to solving systems of equations. In this section we will formalize these ideas with two key definitions of sets of vectors derived from a matrix.
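As we read it, the two sets being defined here are the column space and the row space. In generic notation (which may differ in detail from the text's),
\begin{align*}
C(A)&=\{A\vec{x} : \vec{x}\ \text{arbitrary}\}\text{, the span of the columns of } A\text{,}\\
R(A)&=C(A^{t})\text{, the span of the rows of } A\text{.}
\end{align*}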
There are four natural subsets associated with a matrix. We have met three already: the null space, the column space and the row space. In this section we will introduce a fourth, the left null space. The objective of this section is to describe one procedure that will allow us to find linearly independent sets that span each of these four sets of column vectors. Along the way, we will make a connection with the inverse of a matrix, so Theorem FS will tie together most all of this chapter (and the entire course so far).
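The new set here, again in generic notation (the letter \(L\) is our shorthand), is the left null space: the null space of the transpose,
\begin{equation*}
L(A)=N(A^{t})=\left\{\vec{y} : \vec{y}^{t}A=\vec{0}^{t}\right\}\text{,}
\end{equation*}
so called because its members multiply \(A\) on the left to give the zero row vector.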
In this section we present a formal definition of a vector space, which will lead to an extra increment of abstraction. Once defined, we study its most basic properties.
In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)
A subspace is a vector space that is contained within another vector space. So every subspace is a vector space in its own right, but it is also defined relative to some other (larger) vector space. We will discover shortly that we are already familiar with a wide variety of subspaces from previous sections.
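A handy way to verify the subspace property, stated here in generic form (the text packages this as a theorem): a nonempty subset \(W\) of a vector space is a subspace exactly when it is closed under the two operations,
\begin{equation*}
\vec{u},\vec{v}\in W \implies \vec{u}+\vec{v}\in W
\qquad\text{and}\qquad
\alpha\ \text{a scalar},\ \vec{u}\in W \implies \alpha\vec{u}\in W\text{.}
\end{equation*}
For example, the vectors in \(\complex{3}\) whose third entry is zero form a subspace, while those whose third entry equals \(1\) do not (that set misses the zero vector and is not closed under addition).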
Linear independence is one of the most fundamental conceptual ideas in linear algebra, along with the notion of a span. So this section, and the subsequent Section LDS, will explore this new idea.
This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.
Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be — try to forget these imprecise ideas and go with the new ones given here.
A vector space is defined as a set with two operations, meeting ten properties (Definition VS). Just as the definition of span of a set of vectors only required knowing how to add vectors and how to multiply vectors by scalars, so it is with linear independence. A definition of a linearly independent set of vectors in an arbitrary vector space only requires knowing how to form linear combinations and equating these with the zero vector. Since every vector space must have a zero vector (Property Z), we always have a zero vector at our disposal.
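A quick example in this more abstract setting (our own, not the text's): in the vector space of polynomials of degree at most \(2\text{,}\) the set \(\{1,\,x,\,x^2\}\) is linearly independent, since
\begin{equation*}
a_0\cdot 1 + a_1 x + a_2 x^2 = 0 \ (\text{the zero polynomial})
\quad\implies\quad a_0=a_1=a_2=0\text{.}
\end{equation*}
Note that the zero vector being equated with here is the zero polynomial, not the number zero.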
Early in Chapter VS we prefaced the definition of a vector space with the comment that it was “one of the two most important definitions in the entire course.” Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go.
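For orientation, the definition in question says that \(T\colon U\to V\) is a linear transformation when it preserves the two vector space operations,
\begin{equation*}
T(\vec{u}_1+\vec{u}_2)=T(\vec{u}_1)+T(\vec{u}_2)
\qquad\text{and}\qquad
T(\alpha\vec{u})=\alpha\,T(\vec{u})\text{,}
\end{equation*}
and every \(m\times n\) matrix \(A\) supplies an example via \(T(\vec{x})=A\vec{x}\text{.}\)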
Before we define the determinant of a matrix, we take a slight detour to introduce elementary matrices. These will bring us back to the beginning of the course and our old friend, row operations.
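To preview the connection with a small example of our own: each row operation can be carried out by multiplying on the left by the elementary matrix obtained from the identity by that same operation. For instance,
\begin{equation*}
\begin{bmatrix}1 & 0\\ 2 & 1\end{bmatrix}
\begin{bmatrix}a & b\\ c & d\end{bmatrix}
=
\begin{bmatrix}a & b\\ 2a+c & 2b+d\end{bmatrix}\text{,}
\end{equation*}
which adds twice the first row to the second row.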
In this section, we will define the eigenvalues and eigenvectors of a matrix, and see how to compute them. More theoretical properties will be taken up in the next section.
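In brief, the computation previewed here finds eigenvalues as roots of the characteristic polynomial \(\det(A-\lambda I_n)\) and then finds eigenvectors by solving \((A-\lambda I_n)\vec{x}=\vec{0}\text{.}\) A small example of our own:
\begin{equation*}
A=\begin{bmatrix}2 & 1\\ 1 & 2\end{bmatrix},\qquad
\det(A-\lambda I_2)=(2-\lambda)^2-1=(\lambda-1)(\lambda-3)\text{,}
\end{equation*}
giving eigenvalues \(\lambda=1\) and \(\lambda=3\text{,}\) with eigenvectors \(\begin{bmatrix}1\\-1\end{bmatrix}\) and \(\begin{bmatrix}1\\1\end{bmatrix}\) respectively.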
Once the dimension of a vector space is known, then the determination of whether or not a set of vectors is linearly independent, or if it spans the vector space, can often be much easier. In this section we will state a workhorse theorem and then apply it to the column space and row space of a matrix. It will also help us describe a super-basis for \(\complex{m}\text{.}\)
A basis of a vector space is one of the most useful concepts in linear algebra. It often provides a concise, finite description of an infinite vector space.
Some linear transformations possess one, or both, of two key properties, which go by the names injective and surjective. We will see that they are closely related to ideas like linear independence and spanning, and subspaces like the null space and the column space. In this section we will define an injective linear transformation and analyze the resulting consequences. The next section will do the same for the surjective property. In the final section of this chapter we will see what happens when we have the two properties simultaneously.
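The tightest such connection, stated generically: a linear transformation is injective if and only if the only vector it sends to \(\vec{0}\) is \(\vec{0}\) itself,
\begin{equation*}
T\ \text{injective}
\quad\iff\quad
\{\vec{u} : T(\vec{u})=\vec{0}\}=\{\vec{0}\}\text{.}
\end{equation*}
(The set on the left is the kernel of \(T\text{;}\) terminology may differ slightly from the text's.)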
We have seen how to compute the determinant of a matrix, and the incredible fact that we can perform expansion about any row or column to make this computation. In this largely theoretical section, we will state and prove several more intriguing properties about determinants. Our main goal will be the two results in Theorem SMZD and Theorem DRMM, but more specifically, we will see how the value of a determinant will allow us to gain insight into the various properties of a square matrix.
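Read generically, the two headline results say that a square matrix is singular exactly when its determinant is zero, and that the determinant is multiplicative, \(\det(AB)=\det(A)\det(B)\text{.}\) A quick numerical sanity check with matrices of our own:
\begin{equation*}
\det\begin{bmatrix}1 & 2\\ 3 & 4\end{bmatrix}=-2,\qquad
\det\begin{bmatrix}2 & 0\\ 1 & 1\end{bmatrix}=2,\qquad
\det\left(\begin{bmatrix}1 & 2\\ 3 & 4\end{bmatrix}\begin{bmatrix}2 & 0\\ 1 & 1\end{bmatrix}\right)
=\det\begin{bmatrix}4 & 2\\ 10 & 4\end{bmatrix}=-4\text{.}
\end{equation*}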
We have seen in Section SD that under the right conditions a square matrix is similar to a diagonal matrix. We recognize now, via Theorem SCB, that a similarity transformation is a change of basis on a matrix representation. So we can now discuss the choice of a basis used to build a matrix representation, and decide if some bases are better than others for this purpose. This will be the tone of this section. We will also see that every matrix has a reasonably useful matrix representation, and we will discover a new class of diagonalizable linear transformations. First we need some basic facts about triangular matrices.
The previous section introduced eigenvalues and eigenvectors, and concentrated on their existence and determination. This section will be more about theorems, and the various properties eigenvalues and eigenvectors enjoy. Like a good 4×100 meter relay, we will lead off with one of our better theorems and save the very best for the anchor leg.
The companion to an injection is a surjection. Surjective linear transformations are closely related to spanning sets and ranges. So as you read this section, reflect back on Section ILT and note the parallels and the contrasts. In the next section, Section IVLT, we will combine the two properties.
In this section we will conclude our introduction to linear transformations by bringing together the twin properties of injectivity and surjectivity and consider linear transformations with both of these properties.
You may have noticed that many questions about elements of abstract vector spaces eventually become questions about column vectors or systems of equations. Example SM32 illustrates this. We will make this vague idea more precise in this section.
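One instance of the idea (our example): once a basis is fixed, every vector in a finite-dimensional space is pinned down by its column of coordinates. Relative to the basis \(\{1,\,x,\,x^2\}\) of polynomials of degree at most \(2\text{,}\)
\begin{equation*}
3+5x-2x^2 \ \longleftrightarrow\ \begin{bmatrix}3\\ 5\\ -2\end{bmatrix}\text{,}
\end{equation*}
so questions about such polynomials become questions about column vectors in \(\complex{3}\text{.}\)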
We have seen in Section MR that a linear transformation can be represented by a matrix, once we pick bases for the domain and codomain. How does the matrix representation change if we choose different bases? Which bases lead to especially nice representations? From the infinite possibilities, what is the best possible representation? This section will begin to answer these questions. But first we need to define eigenvalues for linear transformations and the change-of-basis matrix.
We have seen that linear transformations whose domain and codomain are vector spaces of column vectors have a close relationship with matrices (Theorem MBLT, Theorem MLTCV). In this section, we will extend the relationship between matrices and linear transformations to the setting of linear transformations between abstract vector spaces.
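A standard example of such a representation (our own choice of transformation and bases, purely for illustration): the differentiation map on polynomials of degree at most \(2\text{,}\) \(D(a+bx+cx^2)=b+2cx\text{,}\) is represented, relative to the bases \(\{1,x,x^2\}\) and \(\{1,x\}\text{,}\) by the matrix below, since multiplying it against a coordinate vector reproduces the coordinates of the derivative:
\begin{equation*}
\begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 2\end{bmatrix}
\begin{bmatrix}a\\ b\\ c\end{bmatrix}
=
\begin{bmatrix}b\\ 2c\end{bmatrix}\text{.}
\end{equation*}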
In this section we will conclude our introduction to linear transformations by bringing together the twin properties of injectivity and surjectivity and consider linear transformations with both of these properties.
We have seen that linear transformations whose domain and codomain are vector spaces of columns vectors have a close relationship with matrices (Theorem MBLT, Theorem MLTCV). In this section, we will extend the relationship between matrices and linear transformations to the setting of linear transformations between abstract vector spaces.
Early in Chapter VS we prefaced the definition of a vector space with the comment that it was “one of the two most important definitions in the entire course.” Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go.
Early in Chapter VS we prefaced the definition of a vector space with the comment that it was “one of the two most important definitions in the entire course.” Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go.
The previous section introduced eigenvalues and eigenvectors, and concentrated on their existence and determination. This section will be more about theorems, and the various properties eigenvalues and eigenvectors enjoy. Like a good 4×100 meter relay, we will lead-off with one of our better theorems and save the very best for the anchor leg.
We have seen in Section MR that a linear transformation can be represented by a matrix, once we pick bases for the domain and codomain. How does the matrix representation change if we choose different bases? Which bases lead to especially nice representations? From the infinite possibilities, what is the best possible representation? This section will begin to answer these questions. But first we need to define eigenvalues for linear transformations and the change-of-basis matrix.
We have seen in Section MR that a linear transformation can be represented by a matrix, once we pick bases for the domain and codomain. How does the matrix representation change if we choose different bases? Which bases lead to especially nice representations? From the infinite possibilities, what is the best possible representation? This section will begin to answer these questions. But first we need to define eigenvalues for linear transformations and the change-of-basis matrix.
In this section we will conclude our introduction to linear transformations by bringing together the twin properties of injectivity and surjectivity and consider linear transformations with both of these properties.
Early in Chapter VS we prefaced the definition of a vector space with the comment that it was “one of the two most important definitions in the entire course.” Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go.
The inverse of a square matrix, and solutions to linear systems with square coefficient matrices, are intimately connected.
The previous section introduced eigenvalues and eigenvectors, and concentrated on their existence and determination. This section will be more about theorems, and the various properties eigenvalues and eigenvectors enjoy. Like a good 4×100 meter relay, we will lead-off with one of our better theorems and save the very best for the anchor leg.
We have seen in Section SD that under the right conditions a square matrix is similar to a diagonal matrix. We recognize now, via Theorem SCB, that a similarity transformation is a change of basis on a matrix representation. So we can now discuss the choice of a basis used to build a matrix representation, and decide if some bases are better than others for this purpose. This will be the tone of this section. We will also see that every matrix has a reasonably useful matrix representation, and we will discover a new class of diagonalizable linear transformations. First we need some basic facts about triangular matrices.
In this section we will conclude our introduction to linear transformations by bringing together the twin properties of injectivity and surjectivity and consider linear transformations with both of these properties.
Some linear transformations possess one, or both, of two key properties, which go by the names injective and surjective. We will see that they are closely related to ideas like linear independence and spanning, and subspaces like the null space and the column space. In this section we will define an injective linear transformation and analyze the resulting consequences. The next section will do the same for the surjective property. In the final section of this chapter we will see what happens when we have the two properties simultaneously.
The previous section introduced eigenvalues and eigenvectors, and concentrated on their existence and determination. This section will be more about theorems, and the various properties eigenvalues and eigenvectors enjoy. Like a good 4×100 meter relay, we will lead-off with one of our better theorems and save the very best for the anchor leg.
This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.
The companion to an injection is a surjection. Surjective linear transformations are closely related to spanning sets and ranges. So as you read this section reflect back on Section ILT and note the parallels and the contrasts. In the next section, Section IVLT, we will combine the two properties.
Early in Chapter VS we prefaced the definition of a vector space with the comment that it was “one of the two most important definitions in the entire course.” Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go.
You may have noticed that many questions about elements of abstract vector spaces eventually become questions about column vectors or systems of equations. Example SM32 would be an example of this. We will make this vague idea more precise in this section.
Early in Chapter VS we prefaced the definition of a vector space with the comment that it was “one of the two most important definitions in the entire course.” Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go.
In this section we present a formal definition of a vector space, which will lead to an extra increment of abstraction. Once defined, we study its most basic properties.
A basis of a vector space is one of the most useful concepts in linear algebra. It often provides a concise, finite description of an infinite vector space.
In this section we will conclude our introduction to linear transformations by bringing together the twin properties of injectivity and surjectivity and consider linear transformations with both of these properties.
This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.
This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.
In this section we will conclude our introduction to linear transformations by bringing together the twin properties of injectivity and surjectivity and consider linear transformations with both of these properties.
Some linear transformations possess one, or both, of two key properties, which go by the names injective and surjective. We will see that they are closely related to ideas like linear independence and spanning, and subspaces like the null space and the column space. In this section we will define an injective linear transformation and analyze the resulting consequences. The next section will do the same for the surjective property. In the final section of this chapter we will see what happens when we have the two properties simultaneously.
Some linear transformations possess one, or both, of two key properties, which go by the names injective and surjective. We will see that they are closely related to ideas like linear independence and spanning, and subspaces like the null space and the column space. In this section we will define an injective linear transformation and analyze the resulting consequences. The next section will do the same for the surjective property. In the final section of this chapter we will see what happens when we have the two properties simultaneously.
We know how to add vectors and how to multiply them by scalars. Together, these operations give us the possibility of making linear combinations. Similarly, we know how to add matrices and how to multiply matrices by scalars. In this section we mix all these ideas together and produce an operation known as matrix multiplication. This will lead to some results that are both surprising and central. We begin with a definition of how to multiply a vector by a matrix.
A basis of a vector space is one of the most useful concepts in linear algebra. It often provides a concise, finite description of an infinite vector space.
Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be — try to forget these imprecise ideas and go with the new ones given here.
You may have noticed that many questions about elements of abstract vector spaces eventually become questions about column vectors or systems of equations. Example SM32 would be an example of this. We will make this vague idea more precise in this section.
In this section we will conclude our introduction to linear transformations by bringing together the twin properties of injectivity and surjectivity and consider linear transformations with both of these properties.
We have seen that linear transformations whose domain and codomain are vector spaces of columns vectors have a close relationship with matrices (Theorem MBLT, Theorem MLTCV). In this section, we will extend the relationship between matrices and linear transformations to the setting of linear transformations between abstract vector spaces.
We have seen in Section MR that a linear transformation can be represented by a matrix, once we pick bases for the domain and codomain. How does the matrix representation change if we choose different bases? Which bases lead to especially nice representations? From the infinite possibilities, what is the best possible representation? This section will begin to answer these questions. But first we need to define eigenvalues for linear transformations and the change-of-basis matrix.
This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.
We have seen in Section MR that a linear transformation can be represented by a matrix, once we pick bases for the domain and codomain. How does the matrix representation change if we choose different bases? Which bases lead to especially nice representations? From the infinite possibilities, what is the best possible representation? This section will begin to answer these questions. But first we need to define eigenvalues for linear transformations and the change-of-basis matrix.
We have seen that linear transformations whose domain and codomain are vector spaces of columns vectors have a close relationship with matrices (Theorem MBLT, Theorem MLTCV). In this section, we will extend the relationship between matrices and linear transformations to the setting of linear transformations between abstract vector spaces.
You may have noticed that many questions about elements of abstract vector spaces eventually become questions about column vectors or systems of equations. Example SM32 would be an example of this. We will make this vague idea more precise in this section.
We know how to add vectors and how to multiply them by scalars. Together, these operations give us the possibility of making linear combinations. Similarly, we know how to add matrices and how to multiply matrices by scalars. In this section we mix all these ideas together and produce an operation known as matrix multiplication. This will lead to some results that are both surprising and central. We begin with a definition of how to multiply a vector by a matrix.
We have seen in Section SD that under the right conditions a square matrix is similar to a diagonal matrix. We recognize now, via Theorem SCB, that a similarity transformation is a change of basis on a matrix representation. So we can now discuss the choice of a basis used to build a matrix representation, and decide if some bases are better than others for this purpose. This will be the tone of this section. We will also see that every matrix has a reasonably useful matrix representation, and we will discover a new class of diagonalizable linear transformations. First we need some basic facts about triangular matrices.
We know how to add vectors and how to multiply them by scalars. Together, these operations give us the possibility of making linear combinations. Similarly, we know how to add matrices and how to multiply matrices by scalars. In this section we mix all these ideas together and produce an operation known as matrix multiplication. This will lead to some results that are both surprising and central. We begin with a definition of how to multiply a vector by a matrix.
We saw in Theorem CINM that if a square matrix \(A\) is nonsingular, then there is a matrix \(B\) so that \(AB=I_n\text{.}\) In other words, \(B\) is halfway to being an inverse of \(A\text{.}\) We will see in this section that \(B\) automatically fulfills the second condition (\(BA=I_n\)). Example MWIAA showed us that the coefficient matrix from Archetype A had no inverse. Not coincidentally, this coefficient matrix is singular. We will make all these connections precise now. Not many examples or definitions in this section, just theorems.
We have seen in Section MR that a linear transformation can be represented by a matrix, once we pick bases for the domain and codomain. How does the matrix representation change if we choose different bases? Which bases lead to especially nice representations? From the infinite possibilities, what is the best possible representation? This section will begin to answer these questions. But first we need to define eigenvalues for linear transformations and the change-of-basis matrix.
The previous section introduced eigenvalues and eigenvectors, and concentrated on their existence and determination. This section will be more about theorems, and the various properties eigenvalues and eigenvectors enjoy. Like a good 4×100 meter relay, we will lead-off with one of our better theorems and save the very best for the anchor leg.
We have seen that linear transformations whose domain and codomain are vector spaces of columns vectors have a close relationship with matrices (Theorem MBLT, Theorem MLTCV). In this section, we will extend the relationship between matrices and linear transformations to the setting of linear transformations between abstract vector spaces.
In this section we will conclude our introduction to linear transformations by bringing together the twin properties of injectivity and surjectivity and consider linear transformations with both of these properties.
In this section, we will define the eigenvalues and eigenvectors of a matrix, and see how to compute them. More theoretical properties will be taken up in the next section.
You may have noticed that many questions about elements of abstract vector spaces eventually become questions about column vectors or systems of equations. Example SM32 would be an example of this. We will make this vague idea more precise in this section.
We saw in Theorem CINM that if a square matrix \(A\) is nonsingular, then there is a matrix \(B\) so that \(AB=I_n\text{.}\) In other words, \(B\) is halfway to being an inverse of \(A\text{.}\) We will see in this section that \(B\) automatically fulfills the second condition (\(BA=I_n\)). Example MWIAA showed us that the coefficient matrix from Archetype A had no inverse. Not coincidentally, this coefficient matrix is singular. We will make all these connections precise now. Not many examples or definitions in this section, just theorems.
We know how to add vectors and how to multiply them by scalars. Together, these operations give us the possibility of making linear combinations. Similarly, we know how to add matrices and how to multiply matrices by scalars. In this section we mix all these ideas together and produce an operation known as matrix multiplication. This will lead to some results that are both surprising and central. We begin with a definition of how to multiply a vector by a matrix.
The previous section introduced eigenvalues and eigenvectors, and concentrated on their existence and determination. This section will be more about theorems, and the various properties eigenvalues and eigenvectors enjoy. Like a good 4×100 meter relay, we will lead-off with one of our better theorems and save the very best for the anchor leg.
Some linear transformations possess one, or both, of two key properties, which go by the names injective and surjective. We will see that they are closely related to ideas like linear independence and spanning, and subspaces like the null space and the column space. In this section we will define an injective linear transformation and analyze the resulting consequences. The next section will do the same for the surjective property. In the final section of this chapter we will see what happens when we have the two properties simultaneously.
This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.
A basis of a vector space is one of the most useful concepts in linear algebra. It often provides a concise, finite description of an infinite vector space.
There are four natural subsets associated with a matrix. We have met three already: the null space, the column space and the row space. In this section we will introduce a fourth, the left null space. The objective of this section is to describe one procedure that will allow us to find linearly independent sets that span each of these four sets of column vectors. Along the way, we will make a connection with the inverse of a matrix, so Theorem FS will tie together most all of this chapter (and the entire course so far).
A matrix-vector product (Definition MVP) is a linear combination of the columns of the matrix and this allows us to connect matrix multiplication with systems of equations via Theorem SLSLC. Row operations are linear combinations of the rows of a matrix, and of course, reduced row-echelon form (Definition RREF) is also intimately related to solving systems of equations. In this section we will formalize these ideas with two key definitions of sets of vectors derived from a matrix.
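For instance, the matrix-vector product unwinds into a combination of columns (entries chosen only for illustration):
\[
\begin{bmatrix}1&2\\3&4\end{bmatrix}\begin{bmatrix}5\\6\end{bmatrix}
=5\begin{bmatrix}1\\3\end{bmatrix}+6\begin{bmatrix}2\\4\end{bmatrix}
=\begin{bmatrix}17\\39\end{bmatrix}.
\]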
Before we define the determinant of a matrix, we take a slight detour to introduce elementary matrices. These will bring us back to the beginning of the course and our old friend, row operations.
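As a quick preview (entries chosen only for illustration), multiplying on the left by an elementary matrix carries out the corresponding row operation; here \(E\) adds twice the first row to the second:
\[
E=\begin{bmatrix}1&0\\2&1\end{bmatrix},\qquad
EA=\begin{bmatrix}1&0\\2&1\end{bmatrix}\begin{bmatrix}1&2\\3&4\end{bmatrix}
=\begin{bmatrix}1&2\\5&8\end{bmatrix}.
\]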
We have seen in Section SD that under the right conditions a square matrix is similar to a diagonal matrix. We recognize now, via Theorem SCB, that a similarity transformation is a change of basis on a matrix representation. So we can now discuss the choice of a basis used to build a matrix representation, and decide if some bases are better than others for this purpose. This will be the tone of this section. We will also see that every matrix has a reasonably useful matrix representation, and we will discover a new class of diagonalizable linear transformations. First we need some basic facts about triangular matrices.
The companion to an injection is a surjection. Surjective linear transformations are closely related to spanning sets and ranges. So as you read this section reflect back on Section ILT and note the parallels and the contrasts. In the next section, Section IVLT, we will combine the two properties.
Early in Chapter VS we prefaced the definition of a vector space with the comment that it was “one of the two most important definitions in the entire course.” Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go.
We have seen how to compute the determinant of a matrix, and the incredible fact that we can perform expansion about any row or column to make this computation. In this largely theoretical section, we will state and prove several more intriguing properties about determinants. Our main goal will be the two results in Theorem SMZD and Theorem DRMM, but more specifically, we will see how the value of a determinant will allow us to gain insight into the various properties of a square matrix.
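For instance, the product rule for determinants can be spot-checked on small matrices (chosen only for illustration):
\[
\det\!\begin{bmatrix}1&2\\3&4\end{bmatrix}=-2,\qquad
\det\!\begin{bmatrix}2&0\\1&1\end{bmatrix}=2,\qquad
\det\!\begin{bmatrix}4&2\\10&4\end{bmatrix}=-4=(-2)(2),
\]
where the last matrix is the product of the first two.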
A vector space is defined as a set with two operations, meeting ten properties (Definition VS). Just as the definition of span of a set of vectors only required knowing how to add vectors and how to multiply vectors by scalars, so it is with linear independence. A definition of a linearly independent set of vectors in an arbitrary vector space only requires knowing how to form linear combinations and equating these with the zero vector. Since every vector space must have a zero vector (Property Z), we always have a zero vector at our disposal.
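For example, in the vector space of polynomials of degree at most two, the set \(\{1,\,x,\,x^{2}\}\) is linearly independent: the only relation of linear dependence equal to the zero vector (the zero polynomial) is the trivial one,
\[
a(1)+b\,x+c\,x^{2}=0+0x+0x^{2}\quad\Longrightarrow\quad a=b=c=0.
\]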
In this section we present a formal definition of a vector space, which will lead to an extra increment of abstraction. Once defined, we study its most basic properties.
Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be — try to forget these imprecise ideas and go with the new ones given here.
Once the dimension of a vector space is known, then the determination of whether or not a set of vectors is linearly independent, or if it spans the vector space, can often be much easier. In this section we will state a workhorse theorem and then apply it to the column space and row space of a matrix. It will also help us describe a super-basis for \(\complex{m}\text{.}\)
A subspace is a vector space that is contained within another vector space. So every subspace is a vector space in its own right, but it is also defined relative to some other (larger) vector space. We will discover shortly that we are already familiar with a wide variety of subspaces from previous sections.
A rotatable model of the cross product of two vectors.
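For reference, the vector being modeled is
\[
\mathbf{u}\times\mathbf{v}
=\begin{bmatrix}u_{2}v_{3}-u_{3}v_{2}\\ u_{3}v_{1}-u_{1}v_{3}\\ u_{1}v_{2}-u_{2}v_{1}\end{bmatrix},
\]
which is orthogonal to both \(\mathbf{u}\) and \(\mathbf{v}\).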