Linear Algebra and Its Applications

by David C. Lay, Steven R. Lay, and Judi J. McDonald

 

1 Linear Equations in Linear Algebra 

1.1 Systems of Linear Equations 


A 3x3 system having a unique solution is solved by putting the augmented matrix in reduced row echelon form. A picture of three intersecting planes provides geometric intuition.

Created On
February 15th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Review
Perspective
 Example
Language
 English
Content Type
text/html; charset=utf-8
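
A minimal computational sketch of the technique described above, in Python with SymPy; the particular 3x3 system is made up for illustration and is not the one from the video:

    from sympy import Matrix

    # Augmented matrix of a hypothetical 3x3 system with a unique solution (1, 2, 3):
    #    x +  y + z = 6
    #   2x -  y + z = 3
    #    x + 2y - z = 2
    augmented = Matrix([[1,  1,  1, 6],
                        [2, -1,  1, 3],
                        [1,  2, -1, 2]])

    rref_form, pivot_cols = augmented.rref()   # reduced row echelon form and pivot columns
    print(rref_form)   # with a pivot in each of the first three columns,
                       # the rightmost column reads off the unique solution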

A 3x3 matrix equation Ax=b is solved for two different values of b. In one case there is no solution, and in another there are infinitely many solutions. These examples illustrate a theorem about linear combinations of the columns of the matrix A.

Created On
February 15th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Example
Language
 English
Content Type
text/html; charset=utf-8
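
The theorem mentioned above (Ax = b is consistent exactly when b is a linear combination of the columns of A) can be checked numerically by comparing ranks. A sketch in Python with NumPy; the matrix and right-hand sides are invented for illustration:

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [2., 4., 6.],
                  [1., 0., 1.]])   # second row is twice the first, so A is singular

    def is_consistent(A, b):
        # Ax = b has a solution exactly when b lies in the span of the columns of A,
        # i.e. when appending b as an extra column does not increase the rank.
        return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

    print(is_consistent(A, np.array([1., 2., 1.])))   # True: infinitely many solutions
    print(is_consistent(A, np.array([1., 1., 1.])))   # False: no solution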

The reduced row echelon form is used to determine when a 3x3 system is inconsistent. A picture of planes in 3-dimensional space is used to provide geometric intuition.

Created On
February 15th, 2017
7 years ago
Views
4
Type
 Video
Timeframe
 Review
Perspective
 Example
Language
 English
Content Type
text/html; charset=utf-8

Sample problems to help understand when a linear system has 0, 1, or infinitely many solutions.

  • Linear systems have zero, one, or infinitely many solutions. math.la.t.linsys.zoi
  • The reduced row echelon form can be used to determine if a linear system is consistent. math.la.t.rref.consistent
Created On
February 15th, 2017
7 years ago
Views
2
Type
 Handout
Timeframe
 In-class
Perspective
 Example
Language
 English
Content Type
text/html; charset=utf-8

Notation for matrix entries, diagonal matrix, square matrix, identity matrix, and zero matrix.

Created On
February 17th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Equivalence of systems of linear equations, row operations, corresponding matrices representing the linear systems

Created On
August 21st, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

How to compute all solutions to a general system $Ax=b$ of linear equations and connection to the corresponding homogeneous system $Ax=0$. Visualization of the geometry of solution sets. Consistent systems and their solution using row reduction.

Created On
August 22nd, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Homogeneous systems of linear equations; trivial versus nontrivial solutions of homogeneous systems; how to find nontrivial solutions; how to know from the reduced row-echelon form of a matrix whether the corresponding homogeneous system has nontrivial solutions.

Created On
August 25th, 2017
7 years ago
Views
4
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

We will motivate our study of linear algebra by considering the problem of solving several linear equations simultaneously. The word solve tends to get abused somewhat, as in “solve this problem.” When talking about equations we understand a more precise meaning: find all of the values of some variable quantities that make an equation, or several equations, simultaneously true.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

We begin our study of linear algebra with an introduction and a motivational example.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

After solving a few systems of equations, you will recognize that it does not matter so much what we call our variables, as opposed to what numbers act as their coefficients. A system in the variables \(x_1,\,x_2,\,x_3\) would behave the same if we changed the names of the variables to \(a,\,b,\,c\) and kept all the constants the same and in the same places. In this section, we will isolate the key bits of information about a system of equations into something called a matrix, and then use this matrix to systematically solve the equations. Along the way we will obtain one of our most important and useful computational tools.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

We will now be more careful about analyzing the reduced row-echelon form derived from the augmented matrix of a system of linear equations. In particular, we will see how to systematically handle the situation when we have infinitely many solutions to a system, and we will prove that every system of linear equations has either zero, one or infinitely many solutions. With these tools, we will be able to routinely solve any linear system.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

Definition of coefficients of a linear equation math.la.d.lineqn.coeff
Definition of size of a matrix math.la.d.mat.size
Example of solving a 3-by-3 homogeneous system of linear equations by row-reducing the augmented matrix, in the case of one solution math.la.e.linsys.3x3.soln.homog.row_reduce.o

1.2 Row Reduction and Echelon Forms 


A 3x3 system having a unique solution is solved by putting the augmented matrix in reduced row echelon form. A picture of three intersecting planes provides geometric intuition.

Created On
February 15th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Review
Perspective
 Example
Language
 English
Content Type
text/html; charset=utf-8

Equivalence of systems of linear equations, row operations, corresponding matrices representing the linear systems

Created On
August 21st, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Definition of echelon form, reduction of a matrix to echelon form in order to compute solutions to systems of linear equations; definition of reduced row echelon form

Created On
August 25th, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
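
To see the definitions in action, the reduced row echelon form and pivot columns of a small matrix can be computed as below (Python with SymPy; the 3x4 matrix is an arbitrary illustration):

    from sympy import Matrix

    M = Matrix([[0,  3, -6,  6],
                [3, -7,  8, -5],
                [3, -9, 12, -9]])

    R, pivots = M.rref()               # reduced row echelon form and pivot column indices
    print(R)
    print("pivot columns:", pivots)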

After solving a few systems of equations, you will recognize that it does not matter so much what we call our variables, as opposed to what numbers act as their coefficients. A system in the variables \(x_1,\,x_2,\,x_3\) would behave the same if we changed the names of the variables to \(a,\,b,\,c\) and kept all the constants the same and in the same places. In this section, we will isolate the key bits of information about a system of equations into something called a matrix, and then use this matrix to systematically solve the equations. Along the way we will obtain one of our most important and useful computational tools.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

Definition of row reduce a matrix math.la.d.mat.row_reduce
Definition of pivot position math.la.d.mat.pivot_position
Example of putting a matrix in echelon form and identifying the pivot columns math.la.e.mat.echelon.of.pivot
math.la.e.mat.echelon.of.4x4
Definition of pivot math.la.d.mat.pivot
math.la.d.insys.variable.basic
math.la.d.insys.variable.free
math.la.e.linsys.3x5.soln.row_reduce.i
The echelon form can be used to determine if a linear system is consistent. math.la.t.echelon.consistent
Example of using the echelon form to determine if a linear system is consistent. math.la.e.echelon.consistent

1.3 Vector Equations 


The product of a matrix times a vector is defined, and used to show that a system of linear equations is equivalent to a single matrix equation of the form Ax = b. The example uses a 2x3 system.

License
CC-BY-SA-4.0
Created On
February 15th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Definition of vector, equality of vectors, vector addition, and scalar-vector multiplication. Geometric and algebraic properties of vector addition are discussed. (need a topic on the fact that vector addition is commutative and associative)

Created On
February 19th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

The linear combination of a set of vectors is defined. Determine if a vector in R^2 is in the span of two other vectors. The span of a set of vectors is related to the columns of a matrix. (need topic: Determine if a vector in R^2 is in the span of two other vectors.)

Created On
February 20th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Definition of the span of a set of vectors. Example of checking if a vector in R^3 is in the span of a set of two vectors. Geometric picture of a span.

Created On
February 20th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8
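
The span-membership check described above reduces to a consistency question: b is in span{v1, v2} exactly when the system with v1 and v2 as columns and b as the right-hand side is consistent. A small sketch in Python with NumPy, using made-up vectors:

    import numpy as np

    v1 = np.array([1., 0., 2.])
    v2 = np.array([0., 1., -1.])
    b  = np.array([3., 2., 4.])            # here b = 3*v1 + 2*v2

    A = np.column_stack([v1, v2])
    in_span = (np.linalg.matrix_rank(np.column_stack([A, b]))
               == np.linalg.matrix_rank(A))
    print(in_span)                         # True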

Suggestions for in-class activities on linear combination and span of vectors in R^n. (need a topic for the general *process* of determining if a vector is in the span of a set of vectors)

Created On
February 20th, 2017
7 years ago
Views
2
Type
 Handout
Timeframe
 In-class
Language
 English
Content Type
text/html; charset=utf-8

In-class activity for linear combinations and span.

License
GFDL-1.3
Created On
June 8th, 2017
7 years ago
Views
3
Type
 Handout
Timeframe
 In-class
Perspective
 Introduction
Language
 English
Content Type
application/pdf

Video lesson from the University of Waterloo Math Online.

Created On
October 23rd, 2013
10 years ago
Views
2
Type
 Video
Perspective
 Introduction
Language
 English
Content Type
text/html;charset=UTF-8

Slides for the accompanying video from University of Waterloo.

Created On
October 23rd, 2013
10 years ago
Views
3
Type
 Handout
Perspective
 Introduction
Language
 English
Content Type
application/pdf

Video from the University of Waterloo Math Online.

Created On
October 23rd, 2013
10 years ago
Views
3
Type
 Video
Perspective
 Introduction
Language
 English
Content Type
text/html;charset=UTF-8

Slides from the corresponding video from the University of Waterloo.

Created On
October 23rd, 2013
10 years ago
Views
3
Type
 Handout
Perspective
 Introduction
Language
 English
Content Type
application/pdf

Quiz from the University of Waterloo.

Created On
October 23rd, 2013
10 years ago
Views
4
Type
 Unknown
Timeframe
 Post-class
Perspective
 Example
Language
 English
Content Type
text/html;charset=UTF-8

Definition of a vector; vector addition; scalar multiplication; visualization in R^2 and R^3; vector space axioms; linear combinations; span.

Created On
September 3rd, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

After solving a few systems of equations, you will recognize that it does not matter so much what we call our variables, as opposed to what numbers act as their coefficients. A system in the variables \(x_1,\,x_2,\,x_3\) would behave the same if we changed the names of the variables to \(a,\,b,\,c\) and kept all the constants the same and in the same places. In this section, we will isolate the key bits of information about a system of equations into something called a matrix, and then use this matrix to systematically solve the equations. Along the way we will obtain one of our most important and useful computational tools.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section we define some new operations involving vectors, and collect some basic properties of these operations. Begin by recalling our definition of a column vector as an ordered list of complex numbers, written vertically (Definition CV). The collection of all possible vectors of a fixed size is a commonly used set, so we start with its definition.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

In this section we will provide an extremely compact way to describe an infinite set of vectors, making use of linear combinations. This will give us a convenient way to describe the solution set of a linear system, the null space of a matrix, and many other sets of vectors.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

In Section VO we defined vector addition and scalar multiplication. These two operations combine nicely to give us a construction known as a linear combination, a construct that we will work with throughout this course.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

Vector sum/addition interpreted geometrically in R^n (or C^n) math.la.t.vec.sum.geometric.rncn
Definition of weights in a linear combination of vectors, coordinate vector space math.la.d.vec.lincomb.weight.coord

1.4 The Matrix Equation Ax = b 


The product of a matrix times a vector is defined, and used to show that a system of linear equations is equivalent to a single matrix equation of the form Ax = b. The example uses a 2x3 system.

License
CC-BY-SA-4.0
Created On
February 15th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Advice to instructors for in-class activities on matrix-vector multiplication and translating between the various equivalent notation forms of linear systems, and suggestions for how this topic can be used to motivate future topics.

Created On
February 15th, 2017
7 years ago
Views
3
Type
 Handout
Timeframe
 In-class
Perspective
 Example
Language
 English
Content Type
text/html; charset=utf-8

A 3x3 matrix equation Ax=b is solved for two different values of b. In one case there is no solution, and in another there are infinitely many solutions. These examples illustrate a theorem about linear combinations of the columns of the matrix A.

Created On
February 15th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Example
Language
 English
Content Type
text/html; charset=utf-8

For a specific 3x3 matrix, solve Ax=0 by row reducing an augmented matrix.

Created On
February 15th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Review
Perspective
 Example
Language
 English
Content Type
text/html; charset=utf-8

Learning goals: 1. What are the dimension (size) requirements for two matrices so that they can be multiplied together? 2. What is the product of two matrices, when it exists?

Created On
February 17th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Review
Language
 English
Content Type
text/html; charset=utf-8

The linear combination of a set of vectors is defined. Determine if a vector in R^2 is in the span of two other vectors. The span of a set of vectors is related to the columns of a matrix. (need topic: Determine if a vector in R^2 is in the span of two other vectors.)

Created On
February 20th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Equivalent statements for an m x n matrix A: for every right-hand side b, the system Ax=b has a solution; every b in R^m is a linear combination of the columns of A; the columns of A span R^m; A has a pivot position in every row.

Created On
August 25th, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
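
One of the equivalent statements above (a pivot position in every row) can be tested numerically: A has a pivot in every row exactly when rank(A) equals the number of rows. A sketch in Python with NumPy, with an invented matrix:

    import numpy as np

    A = np.array([[1., 0., 2.],
                  [0., 1., 1.],
                  [1., 1., 3.]])   # third row = first row + second row

    # If the rank is smaller than the number of rows, some b make Ax = b inconsistent.
    print(np.linalg.matrix_rank(A) == A.shape[0])   # False for this A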

In Section VO we defined vector addition and scalar multiplication. These two operations combine nicely to give us a construction known as a linear combination, a construct that we will work with throughout this course.

  • The matrix equation Ax=b has a solution if and only if b is a linear combination of the columns of A. math.la.t.mat.eqn.lincomb
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

We know how to add vectors and how to multiply them by scalars. Together, these operations give us the possibility of making linear combinations. Similarly, we know how to add matrices and how to multiply matrices by scalars. In this section we mix all these ideas together and produce an operation known as matrix multiplication. This will lead to some results that are both surprising and central. We begin with a definition of how to multiply a vector by a matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

A matrix-vector product (Definition MVP) is a linear combination of the columns of the matrix and this allows us to connect matrix multiplication with systems of equations via Theorem SLSLC. Row operations are linear combinations of the rows of a matrix, and of course, reduced row-echelon form (Definition RREF) is also intimately related to solving systems of equations. In this section we will formalize these ideas with two key definitions of sets of vectors derived from a matrix.

  • The matrix equation Ax=b has a solution if and only if b is a linear combination of the columns of A. math.la.t.mat.eqn.lincomb
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

Example of matrix-vector product, as a linear combination of column vectors math.la.e.mat.vec.prod
Example of solving a 3-by-3 matrix equation math.la.e.mat.eqn.3x3.solve
Definition of matrix-vector product, each entry separately math.la.d.mat.vec.prod.coord
Example of matrix-vector product, each entry separately math.la.e.mat.vec.prod.coord
Matrix-vector product is associative math.la.t.mat.vec.prod.assoc
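
The first topic above (the matrix-vector product as a linear combination of the columns) can be verified directly; a small NumPy sketch with made-up data:

    import numpy as np

    A = np.array([[1., 0.],
                  [2., 1.],
                  [0., 3.]])
    x = np.array([2., -1.])

    as_lin_comb = x[0] * A[:, 0] + x[1] * A[:, 1]   # weights from x applied to the columns
    print(np.allclose(A @ x, as_lin_comb))          # True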

1.5 Solution Sets of Linear Systems 


How to compute all solutions to a general system $Ax=b$ of linear equations and connection to the corresponding homogeneous system $Ax=0$. Visualization of the geometry of solution sets. Consistent systems and their solution using row reduction.

Created On
August 22nd, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Homogeneous systems of linear equations; trivial versus nontrivial solutions of homogeneous systems; how to find nontrivial solutions; how to know from the reduced row-echelon form of a matrix whether the corresponding homogeneous system has nontrivial solutions.

Created On
August 25th, 2017
7 years ago
Views
4
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

In this section we specialize to systems of linear equations where every equation has a zero as its constant term. Along the way, we will begin to express more and more ideas in the language of matrices and begin a move away from writing out whole systems of equations. The ideas initiated in this section will carry through the remainder of the course.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

math.la.e.linsys.3x1.soln.homog.row_reduce.i
math.la.e.linsys.3x3.soln.row_reduce.i.parametric
math.la.t.nonhomog.particular_plus_homog
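
The last topic above (every solution of a consistent system Ax = b is a particular solution plus a solution of Ax = 0) can be illustrated as follows (Python with SymPy; the rank-deficient matrix is invented):

    from sympy import Matrix

    A = Matrix([[1, 2, 1],
                [2, 4, 0],
                [3, 6, 1]])   # rank 2: the third row is the sum of the first two
    b = Matrix([3, 2, 5])     # chosen so that Ax = b is consistent

    general, params = A.gauss_jordan_solve(b)   # all solutions, written with free parameters
    null_basis = A.nullspace()                  # basis for the solution set of Ax = 0

    print(general)      # setting the free parameters to 0 gives one particular solution
    print(null_basis)   # general solution = particular solution + span of these vectors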

1.6 Applications of Linear Systems 

1.7 Linear Independence 


Linear independence is defined, followed by a worked example of 3 vectors in R^3.

  • Determine if a particular set of vectors in R^3 is linearly independent math.la.e.vec.linindep.r3
  • Definition of linearly independent set of vectors: if a linear combination is zero, then every coefficient is zero, coordinate vector space. math.la.d.vec.linindep.coord
Created On
February 15th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8
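
The worked example above can be reproduced numerically: vectors are linearly independent exactly when the matrix having them as columns has full column rank (so Ax = 0 has only the trivial solution). A sketch in Python with NumPy, with invented vectors:

    import numpy as np

    v1 = np.array([1., 0., 1.])
    v2 = np.array([2., 1., 0.])
    v3 = np.array([0., 1., 1.])

    A = np.column_stack([v1, v2, v3])
    print(np.linalg.matrix_rank(A) == 3)   # True: only the trivial solution, so independent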

Linear independence in-class activity

License
GFDL-1.3
Created On
June 8th, 2017
7 years ago
Views
3
Type
 Handout
Timeframe
 In-class
Perspective
 Example
Language
 English
Content Type
application/pdf

Video Lesson from University of Waterloo.

Created On
October 23rd, 2013
10 years ago
Views
2
Type
 Video
Perspective
 Introduction
Language
 English
Content Type
text/html;charset=UTF-8

Quiz from the University of Waterloo.

Created On
October 23rd, 2013
10 years ago
Views
4
Type
 Unknown
Timeframe
 Post-class
Perspective
 Example
Language
 English
Content Type
text/html;charset=UTF-8

Linear independence is one of the most fundamental conceptual ideas in linear algebra, along with the notion of a span. So this section, and the subsequent Section LDS, will explore this new idea.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In any linearly dependent set there is always one vector that can be written as a linear combination of the others. This is the substance of the upcoming Theorem DLDS. Perhaps this will explain the use of the word “dependent.” In a linearly dependent set, at least one vector “depends” on the others (via a linear combination).

  • Theorem: a set of vectors is linearly dependent if and only if one of the vectors can be written as a linear combination of the other vectors, coordinate vector space. math.la.t.vec.lindep.coord
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

Equivalence theorem: the equation Ax=0 has only the trivial solution. math.la.t.equiv.mat.eqn.homog
math.la.p.vec.lindep.more.rncn
math.la.p.vec.lindep.zero
math.la.p.vec.lindep.coord

1.8 Introduction to Linear Transformations 


Linear transformations are defined, and some small examples (and non-examples) are explored. (need tag for R^2 -> R^2 example, general)

Created On
February 15th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Two proofs, with discussion, of the fact that an abstract linear transformation maps 0 to 0.

Created On
February 15th, 2017
7 years ago
Views
2
Type
 Handout
Timeframe
 Pre-class
Perspective
 Proof
Language
 English
Content Type
text/html; charset=utf-8

Examples of special types of linear transformation from R^2 to R^2: dilation, projection, and shear. (Some issues with the video: things re-start around the 10 second mark, and at 3:46 the word "projection" is said, when it should be "transformation". Also, at the end maybe it could be described why it is called a 'shear'.)

Created On
February 15th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Post-class
Perspective
 Example
Language
 English
Content Type
text/html; charset=utf-8

After watching a video defining linear transformations and giving examples of 2-D transformations, students should be able to answer the questions in this quiz.

Created On
June 8th, 2017
7 years ago
Views
2
Type
 Handout
Timeframe
 Pre-class
Language
 English
Content Type
text/html; charset=utf-8

Use matrix transformations to motivate the concept of linear transformation; examples of matrix transformations

Created On
August 25th, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Motivation of the definition of a linear transformation using properties of matrices; examples; geometric intuition; matrix representation of a linear transformation

Created On
September 3rd, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Early in Chapter VS we prefaced the definition of a vector space with the comment that it was “one of the two most important definitions in the entire course.” Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

math.functions.d.transformation
math.functions.d.transformation.domain
math.functions.d.transformation.codomain
math.functions.d.transformation.image
math.functions.d.transformation.range
Example of a linear transformation on R^2: rotation math.la.e.lintrans.rotation.r2
Example of a linear transformation on R^3: rotation math.la.e.lintrans.rotation.r3
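
For the rotation examples listed above, the standard 2x2 rotation matrix can be written down directly; a minimal NumPy sketch:

    import numpy as np

    def rotation_matrix(theta):
        # Standard matrix of the linear transformation rotating R^2 by theta radians.
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    R = rotation_matrix(np.pi / 2)
    print(R @ np.array([1.0, 0.0]))   # approximately (0, 1): e1 rotated by 90 degrees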

1.9 The Matrix of a Linear Transformation 


Examples of special types of linear transformation from R^2 to R^2: dilation, projection, and shear. (Some issues with the video: things re-start around the 10 second mark, and at 3:46 the word "projection" is said, when it should be "transformation". Also, at the end maybe it could be described why it is called a 'shear'.)

Created On
February 15th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Post-class
Perspective
 Example
Language
 English
Content Type
text/html; charset=utf-8

Visualize 2-d linear transformations by looking at the image of geometric object. (Need topic: Visualize a linear transformation on R^2 by its effect on a region.)

Created On
February 15th, 2017
7 years ago
Views
2
Type
 Applet
Timeframe
 Review
Perspective
 Example
Language
 English
Content Type
text/html; charset=UTF-8

After watching a video defining linear transformations and giving examples of 2-D transformations, students should be able to answer the questions in this quiz.

Created On
June 8th, 2017
7 years ago
Views
2
Type
 Handout
Timeframe
 Pre-class
Language
 English
Content Type
text/html; charset=utf-8

In-class activity to be completed after an introduction to transformations and ideally in teams. In part 1, students are guided to discover the theorem describing the matrix of a linear transformation from R^n to R^m. In part 2, students learn the one-to-one and onto properties of linear transformations, and are asked to relate these properties to the properties of the matrices (linear independence of columns and columns spanning the codomain).

Created On
June 8th, 2017
7 years ago
Views
2
Type
 Handout
Timeframe
 In-class
Language
 English
Content Type
text/html; charset=utf-8
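
The theorem the activity leads to (the j-th column of the standard matrix is T(e_j)) can be prototyped in a few lines. A sketch in Python with NumPy; the sample map T is hypothetical:

    import numpy as np

    def standard_matrix(T, n):
        # Columns of the m x n standard matrix are the images of the standard basis vectors.
        basis = np.eye(n)
        return np.column_stack([T(basis[:, j]) for j in range(n)])

    T = lambda x: np.array([x[0] + 2 * x[1], 3 * x[2]])   # a linear map from R^3 to R^2
    print(standard_matrix(T, 3))                          # [[1. 2. 0.] [0. 0. 3.]]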

Motivation of the definition of a linear transformation using properties of matrices; examples; geometric intuition; matrix representation of a linear transformation

Created On
September 3rd, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Early in Chapter VS we prefaced the definition of a vector space with the comment that it was “one of the two most important definitions in the entire course.” Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
Matrix describing a rotation of the plane math.la.t.mat.rotation
math.la.math.functions.d.mapping.onto
math.la.math.functions.d.mapping.onetoone
math.la.t.lintrans.onetoone.ker
math.la.t.lintrans.onto.span
math.la.t.lintrans.onetoone.linindep

1.10 Linear Models in Business, Science, and Engineering 

Supplementary Exercises 


2 Matrix Algebra 

 

2.1 Matrix Operations 


Motivation and definition of the inverse of a matrix

License
(CC-BY-NC-SA-4.0 OR CC-BY-SA-4.0)
Created On
January 5th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

The product of a matrix times a vector is defined, and used to show that a system of linear equations is equivalent to a single matrix equation of the form Ax = b. The example uses a 2x3 system.

License
CC-BY-SA-4.0
Created On
February 15th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Notation for matrix entries, diagonal matrix, square matrix, identity matrix, and zero matrix.

Created On
February 17th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Definition of sum of matrices, product of a scalar and a matrix

Created On
February 17th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Learning goals: 1. What are the dimension (size) requirements for two matrices so that they can be multiplied together? 2. What is the product of two matrices, when it exists?

Created On
February 17th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Review
Language
 English
Content Type
text/html; charset=utf-8

A 2x2 example is used to show that AB does not always equal BA.

Created On
February 17th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Review
Language
 English
Content Type
text/html; charset=utf-8
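
A quick check of the non-commutativity illustrated above, in Python with NumPy (the two 2x2 matrices are arbitrary):

    import numpy as np

    A = np.array([[1, 2],
                  [0, 1]])
    B = np.array([[0, 1],
                  [1, 0]])

    print(A @ B)   # [[2 1], [1 0]]
    print(B @ A)   # [[0 1], [1 2]]  -- so AB != BA in general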

A 3x2 and a 2x3 matrix are used to show that AB does not always equal BA.

Created On
February 17th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Post-class
Perspective
 Example
Language
 English
Content Type
text/html; charset=utf-8

The transpose of a matrix is defined, and various properties are explored using numerical examples.

Created On
February 17th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Suggestions for in-class activities on matrix operations: addition, multiplication, transpose, and the fact that multiplication is not commutative.

Created On
February 17th, 2017
7 years ago
Views
2
Type
 Handout
Timeframe
 In-class
Language
 English
Content Type
text/html; charset=utf-8

The definition of matrix inverse is motivated by considering multiplicative inverse. The identity matrix and matrix inverse are defined.

Created On
February 17th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Associative and distributive properties of matrix multiplication and addition; multiplication by the identity matrix; definition of the transpose of a matrix; transpose of the transpose, transpose of a sum, transpose of a product

Created On
August 25th, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

In this section we specialize further and consider matrices with equal numbers of rows and columns, which when considered as coefficient matrices lead to systems with equal numbers of equations and variables. We will see in the second half of the course (Chapter D, Chapter E, Chapter LT, Chapter R) that these matrices are especially important.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section we will back up and start simple. We begin with a definition of a totally general set of matrices, and see where that takes us.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

We know how to add vectors and how to multiply them by scalars. Together, these operations give us the possibility of making linear combinations. Similarly, we know how to add matrices and how to multiply matrices by scalars. In this section we mix all these ideas together and produce an operation known as matrix multiplication. This will lead to some results that are both surprising and central. We begin with a definition of how to multiply a vector by a matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
Definition of matrix multiplication, each entry separately math.la.d.mat.mult.coord
Example of multiplying 3x3 matrices math.la.e.mat.mult.3x3
Example of multiplying matrices math.la.e.mat.mult
For matrices, AB=0 does not imply A=0 or B=0 in general. math.la.c.mat.mult.zero_divisor
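
A concrete instance of the last point (nonzero matrices whose product is the zero matrix), sketched in Python with NumPy using made-up matrices:

    import numpy as np

    A = np.array([[1, 1],
                  [1, 1]])
    B = np.array([[ 1, -1],
                  [-1,  1]])

    print(A @ B)   # the 2x2 zero matrix, although neither factor is zero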

2.2 The Inverse of a Matrix 


Motivation and definition of the inverse of a matrix

License
(CC-BY-NC-SA-4.0 OR CC-BY-SA-4.0)
Created On
January 5th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

The definition of matrix inverse is motivated by considering multiplicative inverse. The identity matrix and matrix inverse are defined.

Created On
February 17th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

The formula for the inverse of a 2x2 matrix is derived. (need tag for that formula)

Created On
February 17th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8
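
The 2x2 inverse formula derived in the video can be written as a small helper; a sketch in Python with NumPy (the sample matrix is arbitrary):

    import numpy as np

    def inverse_2x2(A):
        # [[a, b], [c, d]] has inverse (1 / (ad - bc)) * [[d, -b], [-c, a]] when ad - bc != 0.
        a, b = A[0]
        c, d = A[1]
        det = a * d - b * c
        if det == 0:
            raise ValueError("matrix is not invertible")
        return np.array([[d, -b], [-c, a]]) / det

    A = np.array([[4., 7.],
                  [2., 6.]])
    print(inverse_2x2(A))   # agrees with np.linalg.inv(A)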

Matrix inverses are motivated as a way to solve a linear system. The general algorithm of finding an inverse by row reducing an augmented matrix is described, and then implemented for a 3x3 matrix. Useful facts about inverses are stated and then illustrated with sample 2x2 matrices. (put first: need Example of finding the inverse of a 3-by-3 matrix by row reducing the augmented matrix)

Created On
February 19th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8
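
The row-reduction algorithm described above can be carried out symbolically: row reduce [A | I] and read the inverse off the right block. A sketch in Python with SymPy; the 3x3 matrix is invented:

    from sympy import Matrix, eye

    A = Matrix([[2, 0, 1],
                [1, 1, 0],
                [0, 1, 3]])

    R, _ = A.row_join(eye(3)).rref()   # row reduce the augmented matrix [A | I]
    A_inv = R[:, 3:]                   # when the left block is I, the right block is A^(-1)
    print(A_inv)
    print(A * A_inv)                   # the 3x3 identity matrix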

Suggested classroom activities on matrix inverses.

Created On
February 19th, 2017
7 years ago
Views
2
Type
 Handout
Timeframe
 In-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

This is a guided discovery of the formula for Lagrange Interpolation, which lets you find the formula for a polynomial which passes through a given set of points.

Created On
June 8th, 2017
7 years ago
Views
2
Type
 Handout
Perspective
 Application
Language
 English
Content Type
text/html; charset=utf-8

Statements that are equivalent to a square matrix being invertible; examples.

Created On
August 21st, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Definition of the inverse of a matrix, examples, uniqueness; formula for the inverse of a 2x2 matrix; determinant of a 2x2 matrix; using the inverse to solve a system of linear equations.

Created On
August 22nd, 2017
7 years ago
Views
4
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Properties of matrix inversion: inverse of the inverse, inverse of the transpose, inverse of a product; elementary matrices and corresponding row operations; a matrix is invertible if and only if it is row-equivalent to the identity matrix; row-reduction algorithm for computing matrix inverse

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

In this section we specialize further and consider matrices with equal numbers of rows and columns, which when considered as coefficient matrices lead to systems with equal numbers of equations and variables. We will see in the second half of the course (Chapter D, Chapter E, Chapter LT, Chapter R) that these matrices are especially important.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

We saw in Theorem CINM that if a square matrix \(A\) is nonsingular, then there is a matrix \(B\) so that \(AB=I_n\text{.}\) In other words, \(B\) is halfway to being an inverse of \(A\text{.}\) We will see in this section that \(B\) automatically fulfills the second condition (\(BA=I_n\)). Example MWIAA showed us that the coefficient matrix from Archetype A had no inverse. Not coincidentally, this coefficient matrix is singular. We will make all these connections precise now. Not many examples or definitions in this section, just theorems.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

Before we define the determinant of a matrix, we take a slight detour to introduce elementary matrices. These will bring us back to the beginning of the course and our old friend, row operations.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

The inverse of a square matrix, and solutions to linear systems with square coefficient matrices, are intimately connected.

  • The inverse of a matrix (if it exists) can be found by row reducing the matrix augmented by the identity matrix. math.la.t.mat.inv.augmented
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

math.la.t.mat.inv.2by2
math.la.t.equiv.inverse

2.3 Characterizations of Invertible Matrices 


Motivation and definition of the inverse of a matrix

License
(CC-BY-NC-SA-4.0 OR CC-BY-SA-4.0)
Created On
January 5th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Statements that are equivalent to a square matrix being invertible; examples.

Created On
August 21st, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

We have seen that linear transformations whose domain and codomain are vector spaces of columns vectors have a close relationship with matrices (Theorem MBLT, Theorem MLTCV). In this section, we will extend the relationship between matrices and linear transformations to the setting of linear transformations between abstract vector spaces.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html
math.la.t.equiv.lintrans.onetoone
math.la.t.equiv.lintrans.onto
Equivalence theorem: the transpose of the matrix A has an inverse. math.la.t.equiv.transpose.inv
Definition of invertible linear transformation, coordinate vector space math.la.d.lintrans.invertible.coord
A matrix is called ill-conditioned if it is nearly singular math.la.c.mat.illconditioned
The condition number of a matrix measures how close it is to being singular math.la.c.mat.conditionnumber
math.la.d.mat.triangular.upper.exer
math.la.d.mat.triangular.lower.exer

2.4 Partitioned Matrices 

Definition of block/partitioned matrix math.la.d.mat.block
Multiplication of block/partitioned matrices (see the sketch after this list) math.la.c.mat.mult.block
Matrix multiplication can be viewed as the dot product of a row vector of column vectors with a column vector of row vectors math.la.t.mat.mult.row.col
Definition of block diagonal matrix math.la.d.mat.block_diagonal
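
As a quick numerical check of block multiplication, here is a minimal numpy sketch; the matrix sizes and the partition are illustrative choices, not taken from the linked resources.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 5))

# partition A into a 2x2 grid of blocks and B into a conforming 2x1 grid
A11, A12 = A[:2, :3], A[:2, 3:]
A21, A22 = A[2:, :3], A[2:, 3:]
B1, B2 = B[:3, :], B[3:, :]

top = A11 @ B1 + A12 @ B2
bottom = A21 @ B1 + A22 @ B2
print(np.allclose(np.vstack([top, bottom]), A @ B))   # True: block product equals full product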

2.5 Matrix Factorizations 

Definition of LU decomposition math.la.d.mat.lu
Algorithm for computing an LU decomposition (see the sketch after this list) math.la.t.mat.lu
math.la.d.mat.lu.reduced.exer
math.la.d.mat.rank_factorization.exer
math.la.d.mat.qr.exer
math.la.d.mat.svd.exer
math.la.d.mat.band.exer
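
The following is a minimal sketch of an LU decomposition, assuming numpy and scipy are installed; the matrix is an illustrative example. scipy.linalg.lu returns a permutation matrix together with the two triangular factors.

import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0, 2.0],
              [8.0, 7.0, 9.0],
              [2.0, 5.0, 1.0]])
P, L, U = lu(A)                        # A = P @ L @ U, with P a permutation matrix
print(np.allclose(P @ L @ U, A))       # True
print(np.allclose(L, np.tril(L)))      # L is lower triangular (unit diagonal)
print(np.allclose(U, np.triu(U)))      # U is upper triangular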

2.6 The Leontief Input-Output Model 

2.7 Applications to Computer Graphics 

2.8 Subspaces of Rn 


Basis theorem: for an n-dimensional vector space any linearly independent set with n elements is a basis, as is any spanning set with n elements; dimension of the column space of a matrix equals the number of pivot columns of the matrix; dimension of the null space of a matrix equals the number of free variables of the matrix

Created On
August 25th, 2017
7 years ago
Views
4
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
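
As a quick illustration of the dimension counts described in the entry above, here is a minimal sketch assuming sympy is available; the matrix is an illustrative example. The pivot columns give the rank (the dimension of the column space), and each non-pivot column contributes one free variable to the null space.

import sympy as sp

A = sp.Matrix([[1, 2, 0, 3],
               [2, 4, 1, 7],
               [1, 2, 1, 4]])
rref, pivots = A.rref()                # reduced row echelon form and pivot columns
rank = len(pivots)                     # dimension of the column space
nullity = A.cols - rank                # one free variable per non-pivot column
print(pivots, rank, nullity)           # (0, 2) 2 2
print(len(A.nullspace()) == nullity)   # True: rank + nullity = number of columns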

The pivot columns of a matrix form a basis for its column space; nullspace of a matrix equals the nullspace of its reduced row-echelon form.

Created On
August 25th, 2017
7 years ago
Views
4
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Definition of the column space of a matrix; column space is a subspace; comparison to the null space; definition of a linear transformation between vector spaces; definition of kernel and range of a linear transformation

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

In this section we specialize to systems of linear equations where every equation has a zero as its constant term. Along the way, we will begin to express more and more ideas in the language of matrices and begin a move away from writing out whole systems of equations. The ideas initiated in this section will carry through the remainder of the course.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

A matrix-vector product (Definition MVP) is a linear combination of the columns of the matrix and this allows us to connect matrix multiplication with systems of equations via Theorem SLSLC. Row operations are linear combinations of the rows of a matrix, and of course, reduced row-echelon form (Definition RREF) is also intimately related to solving systems of equations. In this section we will formalize these ideas with two key definitions of sets of vectors derived from a matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
4
Type
 Textbook
Language
 English
Content Type
text/html

A subspace is a vector space that is contained within another vector space. So every subspace is a vector space in its own right, but it is also defined relative to some other (larger) vector space. We will discover shortly that we are already familiar with a wide variety of subspaces from previous sections.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

A matrix-vector product (Definition MVP) is a linear combination of the columns of the matrix and this allows us to connect matrix multiplication with systems of equations via Theorem SLSLC. Row operations are linear combinations of the rows of a matrix, and of course, reduced row-echelon form (Definition RREF) is also intimately related to solving systems of equations. In this section we will formalize these ideas with two key definitions of sets of vectors derived from a matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
Definition of subspace, coordinate vector space math.la.d.vsp.subspace.coord
Definition of subspace spanned by a set of vectors, coordinate vector space math.la.d.vec.span.subspace.coord
The column space of an m-by-n matrix is a subspace of R^m (or C^m) math.la.t.mat.col_space.rncn
Definition of basis of a vector space (or subspace), coordinate vector space math.la.d.vsp.basis.coord

2.9 Dimension and Rank 


Students answer multiple questions on the rank and dimension of the null space in a variety of situations to discover the connection between these dimensions leading to the Rank-Nullity Theorem.

Created On
June 9th, 2017
7 years ago
Views
2
Type
 Handout
Timeframe
 In-class
Language
 English
Content Type
text/html; charset=utf-8

Representation (unique) of a vector in terms of a basis for a vector space yields coordinates relative to the basis; change of basis and corresponding change of coordinate matrix

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

In this section we specialize further and consider matrices with equal numbers of rows and columns, which when considered as coefficient matrices lead to systems with equal numbers of equations and variables. We will see in the second half of the course (Chapter D, Chapter E, Chapter LT, Chapter R) that these matrices are especially important.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be — try to forget these imprecise ideas and go with the new ones given here.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be — try to forget these imprecise ideas and go with the new ones given here.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be — try to forget these imprecise ideas and go with the new ones given here.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
Definition of dimension of a vector space (or subspace), coordinate vector space math.la.d.vsp.dim.coord
If a vector space has dimension n, then any subset of n vectors that spans the space must be a basis, coordinate vector space. math.la.t.vsp.dim.span.coord
If a vector space has dimension n, then any subset of n vectors that is linearly independent must be a basis, coordinate vector space. math.la.t.vsp.dim.linindep.coord
Equivalence theorem: the dimension of the column space of A is n. math.la.t.equiv.col.dim

Supplementary Exercises 


3 Determinants 

3.1 Introduction to Determinants 


The effect of row operations on the determinant of a matrix; computing determinants via row reduction; a square matrix is invertible if and only if its determinant is nonzero.

Created On
August 22nd, 2017
7 years ago
Views
4
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Before we define the determinant of a matrix, we take a slight detour to introduce elementary matrices. These will bring us back to the beginning of the course and our old friend, row operations.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
Formula for the determinant of a 3-by-3 matrix. math.la.t.mat.det.3x3
The determinant of a matrix can be computed as a cofactor expansion across any row or down any column (see the sketch after this list). math.la.t.mat.det.cofactor
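
Here is a minimal Python sketch of cofactor expansion along the first row, assuming numpy is available; the matrix and the helper function det_cofactor are illustrative, not part of the linked resources.

import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = [[2, -1, 3],
     [0, 4, 1],
     [5, 2, -2]]
print(det_cofactor(A), np.linalg.det(np.array(A)))   # both are -85 (up to rounding)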

3.2 Properties of Determinants 


The effect of row operations on the determinant of a matrix; computing determinants via row reduction; a square matrix is invertible if and only if its determinant is nonzero.

Created On
August 22nd, 2017
7 years ago
Views
4
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Determinant of the transpose equals the determinant of the original matrix; rescaling a column rescales the determinant by the same factor; interchanging two columns changes the sign of the determinant; adding multiple of one column to another leaves determinant unchanged; determinant of the product of two matrices equals product of the two determinants

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
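
The properties listed in the entry above are easy to check numerically. Here is a minimal numpy sketch on random matrices; the matrices are illustrative.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))        # transpose: same determinant
swapped = A[:, [1, 0, 2, 3]]                                   # interchange two columns
print(np.isclose(np.linalg.det(swapped), -np.linalg.det(A)))   # sign flips
scaled = A.copy()
scaled[:, 0] *= 5.0                                            # rescale one column
print(np.isclose(np.linalg.det(scaled), 5.0 * np.linalg.det(A)))
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))         # det(AB) = det(A) det(B)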

We have seen how to compute the determinant of a matrix, and the incredible fact that we can perform expansion about any row or column to make this computation. In this largely theoretical section, we will state and prove several more intriguing properties about determinants. Our main goal will be the two results in Theorem SMZD and Theorem DRMM, but more specifically, we will see how the value of a determinant will allow us to gain insight into the various properties of a square matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

We have seen how to compute the determinant of a matrix, and the incredible fact that we can perform expansion about any row or column to make this computation. In this largely theoretical section, we will state and prove several more intriguing properties about determinants. Our main goal will be the two results in Theorem SMZD and Theorem DRMM, but more specifically, we will see how the value of a determinant will allow us to gain insight into the various properties of a square matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

We have seen how to compute the determinant of a matrix, and the incredible fact that we can perform expansion about any row or column to make this computation. In this largely theoretical section, we will state and prove several more intriguing properties about determinants. Our main goal will be the two results in Theorem SMZD and Theorem DRMM, but more specifically, we will see how the value of a determinant will allow us to gain insight into the various properties of a square matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

Before we define the determinant of a matrix, we take a slight detour to introduce elementary matrices. These will bring us back to the beginning of the course and our old friend, row operations.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
The determinant of a matrix can be expressed as a product of the diagonal entries in a non-scaled echelon form, up to the sign determined by the number of row interchanges. math.la.t.mat.det.echelon

3.3 Cramer's Rule, Volume, and Linear Transformations 

Cramer's rule (see the sketch after this list) math.la.t.cramer
Definition of adjugate/classical adjoint of a matrix math.la.d.mat.classicaladjoint
math.la.t.mat.inv.cofactor
The determinant of a matrix measures the area/volume of the parallelogram/parallelipiped determined by its columns. math.la.t.mat.det.col.volume
The determinant of the matrix of a linear transformation is the factor by which the area/volume changes. math.la.t.lintrans.det.volume
math.la.d.mat.vandermonde.exer
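
Here is a minimal Python sketch of Cramer's rule, assuming numpy is available; the system and the helper function cramer are illustrative.

import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule, assuming det(A) is nonzero."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(A.shape[1])
    for i in range(A.shape[1]):
        Ai = A.copy()
        Ai[:, i] = b                   # replace column i of A by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[3.0, -2.0],
              [5.0, 1.0]])
b = np.array([4.0, 7.0])
print(cramer(A, b))                    # [18/13, 1/13]
print(np.linalg.solve(A, b))           # the same solution

For large systems Cramer's rule is far slower than row reduction, so the comparison with np.linalg.solve is only a correctness check.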

Supplementary Exercises 


4 Vector Spaces 

4.1 Vector Spaces and Subspaces 


This video kicks off the series of videos on vector spaces. We begin by summarizing the essential properties of R^n.

License
CC-BY-SA-4.0
Created On
January 1st, 2017
7 years ago
Views
3
Type
 Video
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

In this video we continue to list the properties of R^n. The 10 properties listed in this video and the previous video will be used to define a general vector space.

License
CC-BY-SA-4.0
Created On
December 28th, 2016
7 years ago
Views
3
Type
 Video
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

The concept of a vector space is somewhat abstract, and under this definition, a lot of objects such as polynomials, functions, etc., can be considered as vectors. This video explains the definition of a general vector space. In later videos we will look at more examples.

License
CC-BY-SA-4.0
Created On
January 1st, 2017
7 years ago
Views
2
Type
 Video
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Preliminaries: 1. What is a subset? 2. How to verify a set is a subset of another set? 3. Notations and language of set theory related to subsets. In this video, we introduce the definition of a subspace. We go through a preliminary example to figure out what subspaces of R^2 look like, and we will continue to talk about how to verify that a subset of a vector space is a subspace in later videos.

License
CC-BY-SA-4.0
Created On
January 3rd, 2017
7 years ago
Views
3
Type
 Video
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

In this video, I'll explain why we only need to test 2 axioms (among the 10 axioms in the definition of a vector space) when figuring out if a subset is a subspace.

License
CC-BY-SA-4.0
Created On
June 9th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Definition of a (real) vector space; properties of the zero vector and the additive inverse in relation to scalar multiplication

Created On
August 25th, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Definition of a vector; vector addition; scalar multiplication; visualization in R^2 and R^3; vector space axioms; linear combinations; span.

Created On
September 3rd, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Definition of a subspace of a vector space; examples; span of vectors is a subspace.

Created On
September 3rd, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

In this section we present a formal definition of a vector space, which will lead to an extra increment of abstraction. Once defined, we study its most basic properties.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section we present a formal definition of a vector space, which will lead to an extra increment of abstraction. Once defined, we study its most basic properties.

  • math.la.e.vsp.polynomial.leq_n
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

In this section we present a formal definition of a vector space, which will lead to an extra increment of abstraction. Once defined, we study its most basic properties.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

A subspace is a vector space that is contained within another vector space. So every subspace is a vector space in its own right, but it is also defined relative to some other (larger) vector space. We will discover shortly that we are already familiar with a wide variety of subspaces from previous sections.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section we present a formal definition of a vector space, which will lead to an extra increment of abstraction. Once defined, we study its most basic properties.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section we present a formal definition of a vector space, which will lead to an extra increment of abstraction. Once defined, we study its most basic properties.

  • math.la.e.vsp.function
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

A vector space is defined as a set with two operations, meeting ten properties (Definition VS). Just as the definition of span of a set of vectors only required knowing how to add vectors and how to multiply vectors by scalars, so it is with linear independence. A definition of a linearly independent set of vectors in an arbitrary vector space only requires knowing how to form linear combinations and equating these with the zero vector. Since every vector space must have a zero vector (Property Z), we always have a zero vector at our disposal.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

In this section we present a formal definition of a vector space, which will lead to an extra increment of abstraction. Once defined, we study its most basic properties.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

A subspace is a vector space that is contained within another vector space. So every subspace is a vector space in its own right, but it is also defined relative to some other (larger) vector space. We will discover shortly that we are already familiar with a wide variety of subspaces from previous sections.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

A subspace is a vector space that is contained within another vector space. So every subspace is a vector space in its own right, but it is also defined relative to some other (larger) vector space. We will discover shortly that we are already familiar with a wide variety of subspaces from previous sections.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
math.la.math.functions.d.polynomial.degree
math.la.math.functions.d.polynomial.z
math.la.e.vsp.polynomial
math.la.t.vsp.span.arb
math.la.d.vsp.subspace.intersection.arb.exer
math.la.t.vsp.subspace.intersection.arb.exer
math.la.d.vsp.subspace.sum.arb.exer
math.la.t.vsp.subspace.sum.arb.exer

4.2 Null Spaces, Column Spaces, and Linear Transformations 


Two proofs, with discussion, of the fact that an abstract linear transformation maps 0 to 0.

Created On
February 15th, 2017
7 years ago
Views
2
Type
 Handout
Timeframe
 Pre-class
Perspective
 Proof
Language
 English
Content Type
text/html; charset=utf-8

Matrices can be thought of as transforming space, and understanding how this works is crucial for understanding many other ideas that follow in linear algebra...

License
Unlicense
Created On
May 25th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

This is a guided discovery of the formula for Lagrange Interpolation, which lets you find the formula for a polynomial which passes through a given set of points.

Created On
June 8th, 2017
7 years ago
Views
2
Type
 Handout
Perspective
 Application
Language
 English
Content Type
text/html; charset=utf-8

Definition of the column space of a matrix; column space is a subspace; comparison to the null space; definition of a linear transformation between vector spaces; definition of kernel and range of a linear transformation

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Early in Chapter VS we prefaced the definition of a vector space with the comment that it was “one of the two most important definitions in the entire course.” Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
math.la.d.mat.null_space.right.rep
math.la.t.mat.null_space.rncn.rep
math.la.d.mat.col_space.rep
math.la.t.mat.col_space.rncn.rep

4.3 Linearly Independent Sets; Bases 


In any linearly dependent set there is always one vector that can be written as a linear combination of the others. This is the substance of the upcoming Theorem DLDS. Perhaps this will explain the use of the word “dependent.” In a linearly dependent set, at least one vector “depends” on the others (via a linear combination).

  • math.la.t.vsp.span.basis.rref
  • A set of nonzero vectors contains (as a subset) a basis for its span. math.la.t.vsp.span.basis
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

A vector space is defined as a set with two operations, meeting ten properties (Definition VS). Just as the definition of span of a set of vectors only required knowing how to add vectors and how to multiply vectors by scalars, so it is with linear independence. A definition of a linearly independent set of vectors in an arbitrary vector space only requires knowing how to form linear combinations and equating these with the zero vector. Since every vector space must have a zero vector (Property Z), we always have a zero vector at our disposal.

  • Definition of linearly independent set of vectors: if a linear combination is zero, then every coefficient is zero, arbitrary vector space. math.la.d.vec.linindep.arb
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

A vector space is defined as a set with two operations, meeting ten properties (Definition VS). Just as the definition of span of a set of vectors only required knowing how to add vectors and how to multiply vectors by scalars, so it is with linear independence. A definition of a linearly independent set of vectors in an arbitrary vector space only requires knowing how to form linear combinations and equating these with the zero vector. Since every vector space must have a zero vector (Property Z), we always have a zero vector at our disposal.

  • math.la.d.vec.lindep.relation.rep
  • math.la.d.vec.lindep.relation.trvial.rep
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

A basis of a vector space is one of the most useful concepts in linear algebra. It often provides a concise, finite description of an infinite vector space.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

A basis of a vector space is one of the most useful concepts in linear algebra. It often provides a concise, finite description of an infinite vector space.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
Theorem: a set of vectors is linearly dependent if and only if one of the vectors can be written as a linear combination of the other vectors, arbitrary vector space. math.la.t.vec.lindep.arb
math.la.d.vsp.basis.standard.rncn.rep
Removing a linearly dependent vector from a set does not change the span of the set. math.la.t.vsp.span.lindep
math.la.t.mat.col_space.pivot.rep

4.4 Coordinate Systems 


Representation (unique) of a vector in terms of a basis for a vector space yields coordinates relative to the basis; change of basis and corresponding change of coordinate matrix

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
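
As a quick illustration of coordinates relative to a basis and of a change of basis, here is a minimal numpy sketch; the bases are illustrative examples, not taken from the linked resources.

import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 2.0]])             # basis vectors of R^2 as columns (invertible)
v = np.array([3.0, 4.0])

c = np.linalg.solve(B, v)              # coordinate vector [v]_B solves B c = v
print(c)                               # [1. 2.]
print(np.allclose(B @ c, v))           # True: v = 1*b1 + 2*b2

C = np.array([[2.0, 0.0],
              [1.0, 1.0]])             # a second basis
P = np.linalg.solve(C, B)              # change-of-coordinates matrix from B to C
print(np.allclose(P @ c, np.linalg.solve(C, v)))   # [v]_C = P [v]_B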

Given a basis for an n-dimensional vector space V, the coordinate map is a linear bijection between V and R^n; definition of isomorphisms between vector spaces and of isomorphic vector spaces.

Created On
August 25th, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

You may have noticed that many questions about elements of abstract vector spaces eventually become questions about column vectors or systems of equations. Example SM32 would be an example of this. We will make this vague idea more precise in this section.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section we will conclude our introduction to linear transformations by bringing together the twin properties of injectivity and surjectivity and consider linear transformations with both of these properties.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

A vector space is defined as a set with two operations, meeting ten properties (Definition VS). Just as the definition of span of a set of vectors only required knowing how to add vectors and how to multiply vectors by scalars, so it is with linear independence. A definition of a linearly independent set of vectors in an arbitrary vector space only requires knowing how to form linear combinations and equating these with the zero vector. Since every vector space must have a zero vector (Property Z), we always have a zero vector at our disposal.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
math.la.t.vsp.basis.coord.mapping.arb

4.5 The Dimension of a Vector Space 


Basis theorem: for an n-dimensional vector space any linearly independent set with n elements is a basis, as is any spanning set with n elements; dimension of the column space of a matrix equals the number of pivot columns of the matrix; dimension of the null space of a matrix equals the number of free variables of the matrix

Created On
August 25th, 2017
7 years ago
Views
4
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be — try to forget these imprecise ideas and go with the new ones given here.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

Once the dimension of a vector space is known, then the determination of whether or not a set of vectors is linearly independent, or if it spans the vector space, can often be much easier. In this section we will state a workhorse theorem and then apply it to the column space and row space of a matrix. It will also help us describe a super-basis for \(\complex{m}\text{.}\)

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

Once the dimension of a vector space is known, then the determination of whether or not a set of vectors is linearly independent, or if it spans the vector space, can often be much easier. In this section we will state a workhorse theorem and then apply it to the column space and row space of a matrix. It will also help us describe a super-basis for \(\complex{m}\text{.}\)

  • A set of vectors containing more elements than the dimension of the space must be linearly dependent, arbitrary vector space. math.la.t.vsp.dim.more.lindep.arb
  • math.la.t.vsp.dim.less.span.arb
  • math.la.t.vsp.dim.span.linindep.arb
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be — try to forget these imprecise ideas and go with the new ones given here.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be — try to forget these imprecise ideas and go with the new ones given here.

  • Every basis for a vector space contains the same number of elements, arbitrary vector space. math.la.t.vsp.dim.arb
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

4.6 Rank 


Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be — try to forget these imprecise ideas and go with the new ones given here.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

A matrix-vector product (Definition MVP) is a linear combination of the columns of the matrix and this allows us to connect matrix multiplication with systems of equations via Theorem SLSLC. Row operations are linear combinations of the rows of a matrix, and of course, reduced row-echelon form (Definition RREF) is also intimately related to solving systems of equations. In this section we will formalize these ideas with two key definitions of sets of vectors derived from a matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

Once the dimension of a vector space is known, then the determination of whether or not a set of vectors is linearly independent, or if it spans the vector space, can often be much easier. In this section we will state a workhorse theorem and then apply it to the column space and row space of a matrix. It will also help us describe a super-basis for \(\complex{m}\text{.}\)

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

A matrix-vector product (Definition MVP) is a linear combination of the columns of the matrix and this allows us to connect matrix multiplication with systems of equations via Theorem SLSLC. Row operations are linear combinations of the rows of a matrix, and of course, reduced row-echelon form (Definition RREF) is also intimately related to solving systems of equations. In this section we will formalize these ideas with two key definitions of sets of vectors derived from a matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
math.la.d.mat.rank.rep
math.la.t.mat.ranknullity.rep

4.7 Change of Basis 


This is a guided discovery of the formula for Lagrange Interpolation, which lets you find the formula for a polynomial which passes through a given set of points.

Created On
June 8th, 2017
7 years ago
Views
2
Type
 Handout
Perspective
 Application
Language
 English
Content Type
text/html; charset=utf-8

We have seen in Section MR that a linear transformation can be represented by a matrix, once we pick bases for the domain and codomain. How does the matrix representation change if we choose different bases? Which bases lead to especially nice representations? From the infinite possibilities, what is the best possible representation? This section will begin to answer these questions. But first we need to define eigenvalues for linear transformations and the change-of-basis matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

4.8 Applications to Difference Equations 

4.9 Applications to Markov Chains 

Supplementary Exercises 


5 Eigenvalues and Eigenvectors 

5.1 Eigenvectors and Eigenvalues 


An introductory activity on eigenvalues and eigenvectors in which students do basic matrix-vector multiplication calculations to find whether given vectors are eigenvectors, to determine the eigenvalue corresponding to an eigenvector and to find an eigenvector corresponding to an eigenvalue. This activity is self-contained and does not require any previous experience with eigenvalues or eigenvectors.

Created On
June 9th, 2017
7 years ago
Views
2
Type
 Handout
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Definition of the eigenspace corresponding to an eigenvalue $\lambda$ (and proof that this is a vector space); analysis of simple matrices in R^2 and R^3 to visualize the "geometry" of eigenspaces; proof that eigenvectors corresponding to distinct eigenvalues are linearly independent

Created On
August 25th, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Theorem: $\lambda$ is an eigenvalue of a matrix $A$ if and only if $\lambda$ satisfies the characteristic equation $\det(A-\lambda I) = 0$; examples; eigenvalues of triangular matrices are the diagonal entries.

Created On
September 3rd, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

In this section, we will define the eigenvalues and eigenvectors of a matrix, and see how to compute them. More theoretical properties will be taken up in the next section.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

In this section, we will define the eigenvalues and eigenvectors of a matrix, and see how to compute them. More theoretical properties will be taken up in the next section.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

The previous section introduced eigenvalues and eigenvectors, and concentrated on their existence and determination. This section will be more about theorems, and the various properties eigenvalues and eigenvectors enjoy. Like a good 4×100 meter relay, we will lead-off with one of our better theorems and save the very best for the anchor leg.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

5.2 The Characteristic Equation 


Theorem: $\lambda$ is an eigenvalue of a matrix $A$ if and only if $\lambda$ satisfies the characteristic equation $\det(A-\lambda I) = 0$; examples; eigenvalues of triangular matrices are the diagonal entries.

Created On
September 3rd, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
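
Here is a minimal numpy sketch of the characteristic-equation description above; the matrices are illustrative, and the explicit polynomial is written only for the 2-by-2 case.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
# for a 2-by-2 matrix the characteristic polynomial is lambda^2 - tr(A) lambda + det(A)
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
print(np.sort(np.roots(coeffs)))       # roots of det(A - lambda I) = 0: [1. 3.]
print(np.sort(np.linalg.eigvals(A)))   # the same eigenvalues

T = np.array([[4.0, 7.0, 1.0],
              [0.0, 5.0, 2.0],
              [0.0, 0.0, 6.0]])        # triangular
print(np.sort(np.linalg.eigvals(T)))   # the diagonal entries 4, 5, 6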

Definition of similarity for square matrices; similarity is an equivalence relation; similar matrices have the same characteristic polynomial and hence the same eigenvalues, with same multiplicities; definition of multiplicity.

Created On
September 3rd, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

In this section, we will define the eigenvalues and eigenvectors of a matrix, and see how to compute them. More theoretical properties will be taken up in the next section.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

The previous section introduced eigenvalues and eigenvectors, and concentrated on their existence and determination. This section will be more about theorems, and the various properties eigenvalues and eigenvectors enjoy. Like a good 4×100 meter relay, we will lead-off with one of our better theorems and save the very best for the anchor leg.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html
Definition of determinant of a matrix as a product of the diagonal entries in a non-scaled echelon form, up to the sign determined by the number of row interchanges. math.la.d.mat.det.echelon
math.la.t.equiv.det.rep
math.la.t.mat.det.product.rep
math.la.t.mat.charpoly.eqn.eig
Definition of multiplicity of an eigenvalue math.la.d.mat.eig.multiplicity
Definition of similarity transform math.la.d.mat.similar.transform

5.3 Diagonalization 


Diagonalization theorem: an n-by-n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors. If so, the matrix factors as A = PDP^{-1}, where D is diagonal and P is invertible (and its columns are the n linearly independent eigenvectors). Algorithm to diagonalize a matrix.

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
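
Here is a minimal numpy sketch of the diagonalization A = PDP^{-1} described above; the matrix is an illustrative example.

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigenvalues, P = np.linalg.eig(A)      # columns of P are linearly independent eigenvectors
D = np.diag(eigenvalues)

print(np.allclose(P @ D @ np.linalg.inv(P), A))            # A = P D P^{-1}
print(np.allclose(np.linalg.matrix_power(A, 5),
                  P @ np.diag(eigenvalues**5) @ np.linalg.inv(P)))   # A^5 = P D^5 P^{-1}

One payoff of the factorization is that powers of A reduce to powers of the diagonal entries, as the last check shows.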

This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

The previous section introduced eigenvalues and eigenvectors, and concentrated on their existence and determination. This section will be more about theorems, and the various properties eigenvalues and eigenvectors enjoy. Like a good 4×100 meter relay, we will lead-off with one of our better theorems and save the very best for the anchor leg.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.

  • An n-by-n matrix is diagonalizable if and only if the characteristic polynomial factors completely, and the dimension of each eigenspace equals the multiplicity of the eigenvalue. math.la.t.mat.diagonalizable.charpoly
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
An n-by-n matrix is diagonalizable if and only if the sum of the dimensions of the eigenspaces equals n. math.la.t.mat.diagonalizable.eigenspace
An n-by-n matrix is diagonalizable if and only if the union of the basis vectors for the eigenspaces is a basis for R^n (or C^n). math.la.t.mat.diagonalizable.basis

5.4 Eigenvectors and Linear Transformations 


We have seen that linear transformations whose domain and codomain are vector spaces of column vectors have a close relationship with matrices (Theorem MBLT, Theorem MLTCV). In this section, we will extend the relationship between matrices and linear transformations to the setting of linear transformations between abstract vector spaces.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
Definition of matrix representation of a linear transformation from a vector space to itself, with respect to basis of the space, arbitrary vector space math.la.d.lintrans.mat.repn.self.arb
math.la.UNKNOWN

5.5 Complex Eigenvalues 


The previous section introduced eigenvalues and eigenvectors, and concentrated on their existence and determination. This section will be more about theorems, and the various properties eigenvalues and eigenvectors enjoy. Like a good 4×100 meter relay, we will lead-off with one of our better theorems and save the very best for the anchor leg.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html
Definition of the real part of a vector in C^n math.la.d.vec.real.cn
Definition of the imaginary part of a vector in C^n math.la.d.vec.imaginary.cn
Formula for diagonalizing a real 2-by-2 matrix with a complex eigenvalue. math.la.t.mat.real.diagonalize.complex.2x2

5.6 Discrete Dynamical Systems 

5.7 Applications to Differential Equations 

5.8 Iterative Estimates for Eigenvalues 

Supplementary Exercises 


6 Orthogonality and Least Squares 

6.1 Inner Product, Length, and Orthogonality 


This is a video from the University of Waterloo on the dot product and cross product in R^n (material that belongs with Chapter 8, Section 4, about hyperplanes).

Created On
October 23rd, 2013
10 years ago
Views
3
Type
 Video
Perspective
 Introduction
Language
 English
Content Type
text/html;charset=UTF-8

Quiz from the University of Waterloo. This is intended to be used after the video of the same name.

Created On
October 23rd, 2013
10 years ago
Views
3
Type
 Unknown
Timeframe
 Post-class
Perspective
 Example
Language
 English
Content Type
text/html;charset=UTF-8

Inner product of two vectors in R^n, length of a vector in R^n, orthogonality. Motivation via approximate solutions of systems of linear equations; definition and properties of the inner product (symmetric, bilinear, positive definite); length/norm of a vector, unit vectors; definition of distance between vectors; definition of orthogonality; Pythagorean Theorem.

Created On
August 22nd, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html
math.la.t.innerproduct.self.nonnegative.coord
Definition of orthogonal complement of a subspace math.la.d.subspace.orthogonal_complement
A vector is in the orthogonal complement of a subspace if and only if it is orthogonal to every vector in a basis of the subspace. math.la.t.subspace.orthogonal_complement.basis
The orthogonal complement of a subspace is a subspace. math.la.t.subspace.orthogonal_complement
The null space of a matrix is the orthogonal complement of the column space. math.la.t.mat.row.null.orthogonal_complement

6.2 Orthogonal Sets 


This is from the University of Waterloo. It includes content about projections, as well as some content from multivariable calculus. These notions are developed in Euclidean space.

Created On
October 23rd, 2013
10 years ago
Views
3
Type
 Video
Perspective
 Introduction
Language
 English
Content Type
text/html;charset=UTF-8

This is a quiz from the University of Waterloo. It is a quiz about projections that is strictly in R^n. It additionally asks questions on perpendicular vectors and cross products.

Created On
October 23rd, 2013
10 years ago
Views
2
Type
 Unknown
Timeframe
 Post-class
Perspective
 Example
Language
 English
Content Type
text/html;charset=UTF-8

Orthonormal sets and bases (definition); expressing vectors as linear combinations of orthonormal basis vectors; matrices with orthonormal columns preserve vector norm and dot product; orthogonal matrices; inverse of an orthogonal matrix equals its transpose

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
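
As a quick numerical check of the facts in the entry above, here is a minimal numpy sketch; the random matrices and vectors are illustrative.

import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal matrix

print(np.allclose(Q.T @ Q, np.eye(3)))             # orthonormal columns
print(np.allclose(np.linalg.inv(Q), Q.T))          # inverse equals transpose
x = rng.standard_normal(3)
y = rng.standard_normal(3)
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))   # norms are preserved
print(np.isclose((Q @ x) @ (Q @ y), x @ y))                   # dot products are preserved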

In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

A basis of a vector space is one of the most useful concepts in linear algebra. It often provides a concise, finite description of an infinite vector space.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html
Definition of orthogonal basis of a (sub)space math.la.d.subspace.basis.orthogonal

6.3 Orthogonal Projections 


Orthogonal projection onto subspace in R^n minimizes distance; projection formula simplification for orthonormal bases; relation to orthogonal matrices

Created On
August 21st, 2017
7 years ago
Views
4
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
A vector can be written uniquely as a sum of a vector in a subspace and a vector orthogonal to the subspace. math.la.t.vec.projection.subspace
The projection of a vector which is in a subspace is the vector itself. math.la.t.vec.projection.element
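
Here is a minimal numpy sketch of orthogonal projection onto a subspace (the column space of a matrix A whose columns are assumed linearly independent); the matrix and vector are illustrative examples.

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])             # columns span a plane in R^3
v = np.array([1.0, 2.0, 4.0])

P = A @ np.linalg.inv(A.T @ A) @ A.T   # projection matrix onto Col(A)
proj = P @ v
print(np.allclose(A.T @ (v - proj), 0))   # the residual is orthogonal to the subspace
print(np.allclose(P @ proj, proj))        # projecting a vector already in the subspace returns it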

6.4 The Gram-Schmidt Process 


In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
Definition of QR decomposition math.la.d.mat.qr
The QR decomposition of a nonsingular matrix exists (see the Gram-Schmidt sketch after this list). math.la.t.mat.qr
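
Here is a minimal Python sketch of the Gram-Schmidt process, assuming numpy is available; the helper function gram_schmidt and the matrix are illustrative. Applying the process to the columns of A also yields a QR factorization, since R = Q^T A is upper triangular.

import numpy as np

def gram_schmidt(A):
    """Orthonormalize the columns of A (assumed linearly independent)."""
    Q = []
    for a in A.T:                        # take the columns in order
        v = a.astype(float)
        for q in Q:
            v = v - (q @ v) * q          # subtract the projection onto each earlier q
        Q.append(v / np.linalg.norm(v))  # normalize
    return np.column_stack(Q)

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(A)
R = Q.T @ A                              # with A = Q R
print(np.allclose(Q.T @ Q, np.eye(3)))   # orthonormal columns
print(np.allclose(Q @ R, A))             # a QR factorization of A
print(np.allclose(R, np.triu(R)))        # R is upper triangular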

6.5 Least-Squares Problems 

Definition of least-squares solution to a linear system math.la.d.linsys.leastsquares
Formula for computing the least squares solution to a linear system (see the sketch after this list). math.la.t.linsys.leastsquares
The least squares solution to a linear system is unique if and only if the columns of the coefficient matrix are linearly independent. math.la.t.linsys.leastsquares.unique
Definition of least-squares error of a linear system math.la.d.linsys.leastsquares.error
Formula for computing the least squares solution to a linear system, in terms of the QR factorization of the coefficient matrix. math.la.t.linsys.leastsquares.qr
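
Here is a minimal numpy sketch of the least-squares solution via the normal equations; the overdetermined system is an illustrative example.

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([1.0, 2.0, 2.0, 5.0])       # Ax = b has no exact solution

x_normal = np.linalg.solve(A.T @ A, A.T @ b)      # normal equations A^T A x = A^T b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # numpy's built-in least squares
print(np.allclose(x_normal, x_lstsq))             # True
residual = b - A @ x_normal
print(np.allclose(A.T @ residual, 0))             # residual is orthogonal to Col(A)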

6.6 Applications to Linear Models 

Definition of the least-squares linear fit to 2-dimensional data math.la.d.leastsquares.line
Formula for the least-squares linear fit to 2-dimensional data math.la.t.leastsquares.line

6.7 Inner Product Spaces 

Definition of inner product, arbitrary setting math.la.d.vec.innerproduct.arb
Definition of inner product space, arbitrary setting math.la.d.vsp.innerproduct.arb
Definition of length/norm of a vector, arbitrary setting math.la.d.vec.norm.arb
Definition of distance between vectors, arbitrary setting math.la.d.vec.distance.arb
Definition of orthogonal vectors, arbitrary setting math.la.d.vec.orthogonal.arb
Definition of Gram-Schmidt process, arbitrary setting math.la.d.gramschmidt.arb
The Cauchy-Schwarz inequality, arbitrary setting math.la.t.vec.cauchyschwarz.arb
The triangle inequality, arbitrary setting math.la.t.vec.triangle.arb

6.8 Applications of Inner Product Spaces 

Supplementary Exercises 


7 Symmetric Matrices and Quadratic Forms 

7.1 Diagonalization of Symmetric Matrices 


A real matrix $A$ is symmetric if and only if it is orthogonally diagonalizable (i.e. $A = PDP^{-1}$ for an orthogonal matrix $P$.) Proof and examples.

Created On
August 21st, 2017
7 years ago
Views
4
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
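
As a quick numerical check of the orthogonal diagonalization described above, here is a minimal numpy sketch; the symmetric matrix is an illustrative example.

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])         # real and symmetric
eigenvalues, P = np.linalg.eigh(A)      # eigh handles symmetric/Hermitian matrices
D = np.diag(eigenvalues)

print(np.allclose(P.T @ P, np.eye(3)))  # P is orthogonal
print(np.allclose(P @ D @ P.T, A))      # A = P D P^T = P D P^{-1}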

In this section we will back up and start simple. We begin with a definition of a totally general set of matrices, and see where that takes us.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
Definition of orthogonally diagonalizable matrix math.la.d.mat.diagonalizable.orthogonally
Formula for the spectral decomposition for a symmetric matrix math.la.t.mat.symmetric.spectraldecomposition

7.2 Quadratic Forms 

math.la.X

7.3 Constrained Optimization 

7.4 The Singular Value Decomposition 

7.5 Applications to Image Processing and Statistics 

Supplementary Exercises 


8 The Geometry of Vector Spaces 

8.1 Affine Combinations 

8.2 Affine Independence 

8.3 Convex Combinations 

8.4 Hyperplanes 

8.5 Polytopes 

8.6 Curves and Surfaces 


9 Optimization (Online Only) 

9.1 Matrix Games 

9.2 Linear Programming---Geometric Method 

9.3 Linear Programming---Simplex Method 

9.4 Duality 


10 Finite-State Markov Chains (Online Only) 

10.1 Introduction and Examples 

10.2 The Steady-State Vector and Google's PageRank 

10.3 Finite-State Markov Chains 

10.4 Classification of States and Periodicity 

10.5 The Fundamental Matrix 

10.6 Markov Chains and Baseball Statistics