Linear Algebra

by Jim Hefferon

 

1  Linear Systems 

I Solving Linear Systems 

I.1 Gauss's Method 


We will motivate our study of linear algebra by considering the problem of solving several linear equations simultaneously. The word solve tends to get abused somewhat, as in “solve this problem.” When talking about equations we understand a more precise meaning: find all of the values of some variable quantities that make an equation, or several equations, simultaneously true.

Textbook (GFDL-1.2), submitted September 11th, 2017.

We begin our study of linear algebra with an introduction and a motivational example.

Textbook (GFDL-1.2), submitted September 11th, 2017.


We will now be more careful about analyzing the reduced row-echelon form derived from the augmented matrix of a system of linear equations. In particular, we will see how to systematically handle the situation when we have infinitely many solutions to a system, and we will prove that every system of linear equations has either zero, one or infinitely many solutions. With these tools, we will be able to routinely solve any linear system.

Textbook (GFDL-1.2), submitted September 11th, 2017.

Definition of coefficients of a linear equation math.la.d.lineqn.coeff
Definition of solution to a linear equation math.la.d.lineqn.soln
math.la.c.linsys.gauss
math.la.d.linsys.echelon
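
The tags above concern Gauss's method and echelon form. As a quick illustration (a minimal sketch, assuming SymPy is installed; the system below is a hypothetical example, not one taken from the resources listed), row-reducing an augmented matrix exposes the solution directly:

```python
# Gauss-Jordan reduction of an augmented matrix with SymPy (hypothetical 3x3 system):
#   x + 2y + z = 3,   2x + 4y + 3z = 7,   y - z = 1
from sympy import Matrix

aug = Matrix([[1, 2, 1, 3],
              [2, 4, 3, 7],
              [0, 1, -1, 1]])

rref, pivots = aug.rref()
print(rref)     # reduced row-echelon form; here it shows the unique solution x=-2, y=2, z=1
print(pivots)   # pivot columns (0, 1, 2); a pivot in the last column would signal "no solution"
```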

I.2 Describing the Solution Set 


A 3x3 matrix equation Ax=b is solved for two different values of b. In one case there is no solution, and in another there are infinitely many solutions. These examples illustrate a theorem about linear combinations of the columns of the matrix A.

Video, created February 15th, 2017. Timeframe: Pre-class. Perspective: Example.

Notation for matrix entries, diagonal matrix, square matrix, identity matrix, and zero matrix.

Video, created February 17th, 2017. Timeframe: Pre-class. Perspective: Introduction.

Definition of vector, equality of vectors, vector addition, and scalar multiplication of vectors. Geometric and algebraic properties of vector addition are discussed. (A topic on the commutativity and associativity of vector addition is still needed.)

Video, created February 19th, 2017. Timeframe: Pre-class. Perspective: Introduction.

University of Waterloo Math Online.

Video, created October 23rd, 2013. Perspective: Introduction.

Slides for the accompanying video from University of Waterloo.

Handout, created October 23rd, 2013. Perspective: Introduction.

Quiz from the University of Waterloo.

Created October 23rd, 2013. Timeframe: Post-class. Perspective: Example.

Definition of echelon form, reduction of a matrix to echelon form in order to compute solutions to systems of linear equations; definition of reduced row echelon form

Video, created August 25th, 2017.

After solving a few systems of equations, you will recognize that it does not matter so much what we call our variables, as opposed to what numbers act as their coefficients. A system in the variables \(x_1,\,x_2,\,x_3\) would behave the same if we changed the names of the variables to \(a,\,b,\,c\) and kept all the constants the same and in the same places. In this section, we will isolate the key bits of information about a system of equations into something called a matrix, and then use this matrix to systematically solve the equations. Along the way we will obtain one of our most important and useful computational tools.

Textbook (GFDL-1.2), submitted September 11th, 2017.

We will now be more careful about analyzing the reduced row-echelon form derived from the augmented matrix of a system of linear equations. In particular, we will see how to systematically handle the situation when we have infinitely many solutions to a system, and we will prove that every system of linear equations has either zero, one or infinitely many solutions. With these tools, we will be able to routinely solve any linear system.

Textbook (GFDL-1.2), submitted September 11th, 2017.

In this section we define some new operations involving vectors, and collect some basic properties of these operations. Begin by recalling our definition of a column vector as an ordered list of complex numbers, written vertically (Definition CV). The collection of all possible vectors of a fixed size is a commonly used set, so we start with its definition.

Textbook (GFDL-1.2), submitted September 11th, 2017.

In this section we will back up and start simple. We begin with a definition of a totally general set of matrices, and see where that takes us.

  • math.la.d.mat.m_by_n.set
Textbook (GFDL-1.2), submitted September 11th, 2017.
math.la.c.linsys.soln_set.parameter
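
Since several of the entries above describe solution sets with free variables (see the parameter tag just above), here is a minimal sketch, assuming SymPy, of a hypothetical system whose solution set needs parameters:

```python
from sympy import Matrix, symbols, linsolve

x, y, z = symbols('x y z')
# Hypothetical system with infinitely many solutions (the second equation is twice the first):
#   x + 2y - z = 1
#   2x + 4y - 2z = 2
A = Matrix([[1, 2, -1],
            [2, 4, -2]])
b = Matrix([1, 2])

print(linsolve((A, b), x, y, z))   # {(-2*y + z + 1, y, z)}: y and z are free parameters
```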

I.3 General = Particular + Homogeneous 


A 3x3 system having a unique solution is solved by putting the augmented matrix in reduced row echelon form. A picture of three intersecting planes provides geometric intuition.

Video, created February 15th, 2017. Timeframe: Review. Perspective: Example.

Sample problems to help understand when a linear system has 0, 1, or infinitely many solutions.

  • Linear systems have zero, one, or infinitely many solutions. math.la.t.linsys.zoi
  • math.la.t.rref.consistent
Handout, created February 15th, 2017. Timeframe: In-class. Perspective: Example.

How to compute all solutions to a general system $Ax=b$ of linear equations and connection to the corresponding homogeneous system $Ax=0$. Visualization of the geometry of solution sets. Consistent systems and their solution using row reduction.

Video, created August 22nd, 2017.

Homogeneous systems of linear equations; trivial versus nontrivial solutions of homogeneous systems; how to find nontrivial solutions; how to know from the reduced row-echelon form of a matrix whether the corresponding homogeneous system has nontrivial solutions.

Video, created August 25th, 2017.

We will now be more careful about analyzing the reduced row-echelon form derived from the augmented matrix of a system of linear equations. In particular, we will see how to systematically handle the situation when we have infinitely many solutions to a system, and we will prove that every system of linear equations has either zero, one or infinitely many solutions. With these tools, we will be able to routinely solve any linear system.

Textbook (GFDL-1.2), submitted September 11th, 2017.

In this section we specialize to systems of linear equations where every equation has a zero as its constant term. Along the way, we will begin to express more and more ideas in the language of matrices and begin a move away from writing out whole systems of equations. The ideas initiated in this section will carry through the remainder of the course.

Textbook (GFDL-1.2), submitted September 11th, 2017.

In this section we specialize further and consider matrices with equal numbers of rows and columns, which when considered as coefficient matrices lead to systems with equal numbers of equations and variables. We will see in the second half of the course (Chapter D, Chapter E, Chapter LT, Chapter R) that these matrices are especially important.

Textbook (GFDL-1.2), submitted September 11th, 2017.

In Section VO we defined vector addition and scalar multiplication. These two operations combine nicely to give us a construction known as a linear combination, a construct that we will work with throughout this course.

Textbook (GFDL-1.2), submitted September 11th, 2017.
math.la.c.linsys.2x2.geometric
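
The section title, General = Particular + Homogeneous, can be checked numerically. A minimal sketch, assuming SymPy (the matrix below is a hypothetical example): every solution of Ax = b is one particular solution plus a solution of the associated homogeneous system Ax = 0.

```python
from sympy import Matrix

A = Matrix([[1, 2, -1],
            [2, 4, -2]])            # hypothetical coefficient matrix
b = Matrix([1, 2])

particular = A.pinv() * b            # one particular solution of Ax = b
homogeneous_basis = A.nullspace()    # basis for the solution set of Ax = 0

print(particular)
print(homogeneous_basis)
# The general solution is: particular + (any linear combination of the homogeneous basis vectors).
print(A * (particular + 3 * homogeneous_basis[0] - 2 * homogeneous_basis[1]))  # still equals b
```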

II Linear Geometry 

II.1 Vectors in Space 


University of Waterloo Math Online.

Video, created October 23rd, 2013. Perspective: Introduction.
math.la.c.lineqn.3.geometric

II.2 Length and Angle Measures 


This is a video from the University of Waterloo on the dot product and cross product in R^n (material that should be in Chapter 8, Section 4, about hyperplanes).

Video, created October 23rd, 2013. Perspective: Introduction.

Quiz from the University of Waterloo. This is intended to be used after the video of the same name.

Created October 23rd, 2013. Timeframe: Post-class. Perspective: Example.

Inner product of two vectors in R^n, length of a vector in R^n, orthogonality. Motivation via approximate solutions of systems of linear equations; definition and properties of the inner product (symmetric, bilinear, positive definite); length/norm of a vector, unit vectors; definition of distance between vectors; definition of orthogonality; Pythagorean Theorem.

Video, created August 22nd, 2017.

In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)

  • math.la.d.vec.orthogonal.coord
Textbook (GFDL-1.2), submitted September 11th, 2017.
math.la.t.vec.triangle.coord
math.la.t.vec.cauchyschwartz.coord
math.la.d.vec.angle.coord
math.la.d.vec.parallel.coord
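
The tags above (triangle inequality, Cauchy-Schwarz, angle) can be explored numerically. A minimal sketch, assuming NumPy, with two hypothetical vectors in R^3:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])

length_u = np.linalg.norm(u)                      # |u| = 3
length_v = np.linalg.norm(v)                      # |v| = 5
dot = np.dot(u, v)                                # u.v = 11
angle = np.arccos(dot / (length_u * length_v))    # angle between u and v, in radians

print(np.degrees(angle))
print(abs(dot) <= length_u * length_v)                 # Cauchy-Schwarz: |u.v| <= |u||v|
print(np.linalg.norm(u + v) <= length_u + length_v)    # triangle inequality
```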

III Reduced Echelon Form 

III.1 Gauss-Jordan Reduction 


A 3x3 system having a unique solution is solved by putting the augmented matrix in reduced row echelon form. A picture of three intersecting planes provides geometric intuition.

Video, created February 15th, 2017. Timeframe: Review. Perspective: Example.

Equivalence of systems of linear equations, row operations, corresponding matrices representing the linear systems

Video, created August 21st, 2017.

Definition of echelon form, reduction of a matrix to echelon form in order to compute solutions to systems of linear equations; definition of reduced row echelon form

Video, created August 25th, 2017.

We will motivate our study of linear algebra by considering the problem of solving several linear equations simultaneously. The word solve tends to get abused somewhat, as in “solve this problem.” When talking about equations we understand a more precise meaning: find all of the values of some variable quantities that make an equation, or several equations, simultaneously true.

Textbook (GFDL-1.2), submitted September 11th, 2017.

After solving a few systems of equations, you will recognize that it does not matter so much what we call our variables, as opposed to what numbers act as their coefficients. A system in the variables \(x_1,\,x_2,\,x_3\) would behave the same if we changed the names of the variables to \(a,\,b,\,c\) and kept all the constants the same and in the same places. In this section, we will isolate the key bits of information about a system of equations into something called a matrix, and then use this matrix to systematically solve the equations. Along the way we will obtain one of our most important and useful computational tools.

Textbook (GFDL-1.2), submitted September 11th, 2017.

math.la.c.mat.gaussjordan
math.la.t.mat.row_equiv
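
The row-equivalence tag above is easy to see in action. A minimal sketch, assuming SymPy (hypothetical 3x3 matrix): a single row operation is left-multiplication by an invertible elementary matrix, which is why row-equivalent matrices represent systems with the same solution set.

```python
from sympy import Matrix, eye

A = Matrix([[1, 2, 1],
            [2, 4, 3],
            [0, 1, -1]])

E = eye(3)
E[1, 0] = -2          # elementary matrix encoding the row operation R2 -> R2 - 2*R1

print(E * A)          # the same matrix you get by applying the row operation to A directly
print(E.inv())        # E is invertible, so A and E*A are row equivalent
```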

III.2 The Linear Combination Lemma 


After solving a few systems of equations, you will recognize that it does not matter so much what we call our variables, as opposed to what numbers act as their coefficients. A system in the variables \(x_1,\,x_2,\,x_3\) would behave the same if we changed the names of the variables to \(a,\,b,\,c\) and kept all the constants the same and in the same places. In this section, we will isolate the key bits of information about a system of equations into something called a matrix, and then use this matrix to systematically solve the equations. Along the way we will obtain one of our most important and useful computational tools.

Textbook (GFDL-1.2), submitted September 11th, 2017.

math.la.t.linsys.echelon.free

2 Vector Spaces 

I Definition of Vector Space 

I.1 Definition and Examples 


This video kicks off the series of videos on vector spaces. We begin by summarizing the essential properties of R^n.

Video (CC-BY-SA-4.0), created January 1st, 2017. Perspective: Introduction.

In this video we continue to list the properties of R^n. The 10 properties listed in this video and the previous video will be used to define a general vector space.

Video (CC-BY-SA-4.0), created December 28th, 2016. Perspective: Introduction.

The concept of a vector space is somewhat abstract, and under this definition, a lot of objects such as polynomials, functions, etc., can be considered as vectors. This video explains the definition of a general vector space. In later videos we will look at more examples.

Video (CC-BY-SA-4.0), created January 1st, 2017. Perspective: Introduction.

Definition of a (real) vector space; properties of the zero vector and the additive inverse in relation to scalar multiplication

Video, created August 25th, 2017.

In this section we will back up and start simple. We begin with a definition of a totally general set of matrices, and see where that takes us.

  • math.la.e.vsp.mat.m_by_n
Textbook (GFDL-1.2), submitted September 11th, 2017.

In this section we present a formal definition of a vector space, which will lead to an extra increment of abstraction. Once defined, we study its most basic properties.

  • math.la.e.vsp.polynomial.leq_n
  • math.la.e.vsp.function
  • math.la.e.vsp.mat.m_by_n
Textbook (GFDL-1.2), submitted September 11th, 2017.

math.la.d.scalar.arb
math.la.d.vsp.z
math.la.e.vsp.rn
math.la.e.vsp.polynomial
math.la.e.vsp.de.homog
math.la.e.vsp.linsys.homog
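
Several of the entries above list the defining properties of R^n. As a minimal sketch, assuming NumPy (the vectors are hypothetical), a few of those properties can be checked numerically:

```python
import numpy as np

u = np.array([1.0, -2.0, 3.0])
v = np.array([4.0, 0.0, -1.0])
zero = np.zeros(3)

print(np.allclose(u + v, v + u))         # vector addition is commutative
print(np.allclose(v + zero, v))          # the zero vector is the additive identity
print(np.allclose(v + (-1) * v, zero))   # (-1)*v is the additive inverse of v
```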

I.2 Subspaces and Spanning Sets 


Preliminaries: 1. What is a subset? 2. How do we verify that a set is a subset of another set? 3. Notation and language of set theory related to subsets. In this video, we introduce the definition of a subspace. We go through a preliminary example to figure out what subspaces of R^2 look like, and we will continue to talk about how to verify that a subset of a vector space is a subspace in later videos.

Video (CC-BY-SA-4.0), created January 3rd, 2017. Perspective: Introduction.

In this video, I'll explain why we only need to test 2 axioms (among the 10 axioms in the definition of a vector space) when figuring out if a subset is a subspace.

Video (CC-BY-SA-4.0), created June 9th, 2017. Timeframe: Pre-class. Perspective: Introduction.

Definition of a subspace of a vector space; examples; span of vectors is a subspace.

Video, created September 3rd, 2017.

A subspace is a vector space that is contained within another vector space. So every subspace is a vector space in its own right, but it is also defined relative to some other (larger) vector space. We will discover shortly that we are already familiar with a wide variety of subspaces from previous sections.

  • math.la.t.vec.span.subspace.arb
  • math.la.t.vsp.subspace.lincomb.arb
Textbook (GFDL-1.2), submitted September 11th, 2017.

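A common computation behind these entries is testing whether a vector lies in a span. A minimal sketch, assuming SymPy (hypothetical vectors in R^3): membership in span{v1, v2} is just solvability of a linear system.

```python
from sympy import Matrix, linsolve, symbols

a, b = symbols('a b')
v1 = Matrix([1, 0, 2])
v2 = Matrix([0, 1, -1])
w  = Matrix([3, 2, 4])

# w is in span{v1, v2} exactly when a*v1 + b*v2 = w has a solution.
print(linsolve((Matrix.hstack(v1, v2), w), a, b))   # {(3, 2)}: so w = 3*v1 + 2*v2
```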

A vector space is defined as a set with two operations, meeting ten properties (Definition VS). Just as the definition of span of a set of vectors only required knowing how to add vectors and how to multiply vectors by scalars, so it is with linear independence. A definition of a linearly independent set of vectors in an arbitrary vector space only requires knowing how to form linear combinations and equating these with the zero vector. Since every vector space must have a zero vector (Property Z), we always have a zero vector at our disposal.

Textbook (GFDL-1.2), submitted September 11th, 2017.


II Linear Independence 

II.1 Definition and Examples 


In any linearly dependent set there is always one vector that can be written as a linear combination of the others. This is the substance of the upcoming Theorem DLDS. Perhaps this will explain the use of the word “dependent.” In a linearly dependent set, at least one vector “depends” on the others (via a linear combination).

  • math.la.t.vsp.span.basis.rref
  • A set of nonzero vectors contains (as a subset) a basis for its span. math.la.t.vsp.span.basis
Textbook (GFDL-1.2), submitted September 11th, 2017.
Removing a linearly dependent vector from a set does not change the span of the set. math.la.t.vsp.span.lindep
Definition of linearly dependent set of vectors: one of the vectors can be written as a linear combination of the other vectors, arbitrary vector space. math.la.d.vec.lindep.arb
Theorem: a set of vectors is linearly independent if and only if whenever a linear combination is zero, then every coefficient is zero, arbitrary vector space. math.la.t.vec.linindep.arb
math.la.t.vsp.linindep.subset
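
To accompany the dependence/independence statements above, here is a minimal sketch, assuming SymPy, with a hypothetical dependent set (the third column is the sum of the first two):

```python
from sympy import Matrix

cols = Matrix([[1, 0, 1],
               [0, 1, 1],
               [2, -1, 1]])      # columns are the vectors v1, v2, v3 with v3 = v1 + v2

print(cols.nullspace())    # a nonzero vector (-1, -1, 1): so -v1 - v2 + v3 = 0, a dependence
print(cols.rank())         # rank 2 < 3 columns confirms the set is linearly dependent
```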

III Basis and Dimension 

III.1 Basis 


Representation (unique) of a vector in terms of a basis for a vector space yields coordinates relative to the basis; change of basis and corresponding change of coordinate matrix

Video, created August 25th, 2017.

In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)

Textbook (GFDL-1.2), submitted September 11th, 2017.

A basis of a vector space is one of the most useful concepts in linear algebra. It often provides a concise, finite description of an infinite vector space.

Textbook (GFDL-1.2), submitted September 11th, 2017.


You may have noticed that many questions about elements of abstract vector spaces eventually become questions about column vectors or systems of equations. Example SM32 would be an example of this. We will make this vague idea more precise in this section.

Textbook (GFDL-1.2), submitted September 11th, 2017.

A vector space is defined as a set with two operations, meeting ten properties (Definition VS). Just as the definition of span of a set of vectors only required knowing how to add vectors and how to multiply vectors by scalars, so it is with linear independence. A definition of a linearly independent set of vectors in an arbitrary vector space only requires knowing how to form linear combinations and equating these with the zero vector. Since every vector space must have a zero vector (Property Z), we always have a zero vector at our disposal.

Textbook (GFDL-1.2), submitted September 11th, 2017.
math.la.t.vsp.basis.span.unique
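
The first entry above describes coordinates relative to a basis. A minimal sketch, assuming SymPy (a hypothetical basis of R^2): the coordinate vector is the unique solution of B*c = v, where the basis vectors are the columns of B.

```python
from sympy import Matrix

B = Matrix([[1, 1],
            [0, 1]])        # basis vectors (1, 0) and (1, 1) as columns
v = Matrix([3, 2])

coords = B.solve(v)         # unique because the columns of B form a basis
print(coords)               # Matrix([[1], [2]]): v = 1*(1, 0) + 2*(1, 1)
```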

III.2 Dimension 


Basis theorem: for an n-dimensional vector space any linearly independent set with n elements is a basis, as is any spanning set with n elements; dimension of the column space of a matrix equals the number of pivot columns of the matrix; dimension of the null space of a matrix equals the number of free variables of the matrix

Video, created August 25th, 2017.

Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be — try to forget these imprecise ideas and go with the new ones given here.

  • Every basis for a vector space contains the same number of elements, arbitrary vector space. math.la.t.vsp.dim.arb
Textbook (GFDL-1.2), submitted September 11th, 2017.

Once the dimension of a vector space is known, then the determination of whether or not a set of vectors is linearly independent, or if it spans the vector space, can often be much easier. In this section we will state a workhorse theorem and then apply it to the column space and row space of a matrix. It will also help us describe a super-basis for \(\complex{m}\text{.}\)

  • A set of vectors containing more elements than the dimension of the space must be linearly dependent, arbitrary vector space. math.la.t.vsp.dim.more.lindep.arb
  • math.la.t.vsp.dim.less.span.arb
  • math.la.t.vsp.dim.span.linindep.arb
Textbook (GFDL-1.2), submitted September 11th, 2017.

math.la.d.vsp.basis.exchange.arb
math.la.t.vsp.span.basis.rep
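
The dimension counts described above (pivot columns for the column space, free variables for the null space) can be checked directly. A minimal sketch, assuming SymPy, for a hypothetical 3x4 matrix:

```python
from sympy import Matrix

A = Matrix([[1, 2, 0, 1],
            [0, 0, 1, 2],
            [1, 2, 1, 3]])

rref, pivots = A.rref()
print(len(pivots))                                   # 2 pivot columns = dim of the column space
print(len(A.nullspace()))                            # 2 free columns  = dim of the null space
print(len(pivots) + len(A.nullspace()) == A.cols)    # rank + nullity = number of columns
```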

III.3 Vector Spaces and Linear Systems 


The transpose of a matrix is defined, and various properties are explored using numerical examples.

Video, created February 17th, 2017. Timeframe: Pre-class. Perspective: Introduction.

Students answer multiple questions on the rank and dimension of the null space in a variety of situations to discover the connection between these dimensions leading to the Rank-Nullity Theorem.

Handout, created June 9th, 2017. Timeframe: In-class.

Associative and distributive properties of matrix multiplication and addition; multiplication by the identity matrix; definition of the transpose of a matrix; transpose of the transpose, transpose of a sum, transpose of a product

Video, created August 25th, 2017.

Equivalent statements for a matrix A: for every right-hand side b, the system Ax=b has a solution; every b is a linear combination of the columns of A; the span of the columns of A is maximal; A has a pivot position in every row.

Video, created August 25th, 2017.

Definition of the column space of a matrix; column space is a subspace; comparison to the null space; definition of a linear transformation between vector spaces; definition of kernel and range of a linear transformation

Video, created August 25th, 2017.

In this section we specialize further and consider matrices with equal numbers of rows and columns, which when considered as coefficient matrices lead to systems with equal numbers of equations and variables. We will see in the second half of the course (Chapter D, Chapter E, Chapter LT, Chapter R) that these matrices are especially important.

Textbook (GFDL-1.2), submitted September 11th, 2017.

Linear independence is one of the most fundamental conceptual ideas in linear algebra, along with the notion of a span. So this section, and the subsequent Section LDS, will explore this new idea.

Textbook (GFDL-1.2), submitted September 11th, 2017.

In this section we will back up and start simple. We begin with a definition of a totally general set of matrices, and see where that takes us.

Textbook (GFDL-1.2), submitted September 11th, 2017.

Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be — try to forget these imprecise ideas and go with the new ones given here.

Textbook (GFDL-1.2), submitted September 11th, 2017.

A matrix-vector product (Definition MVP) is a linear combination of the columns of the matrix and this allows us to connect matrix multiplication with systems of equations via Theorem SLSLC. Row operations are linear combinations of the rows of a matrix, and of course, reduced row-echelon form (Definition RREF) is also intimately related to solving systems of equations. In this section we will formalize these ideas with two key definitions of sets of vectors derived from a matrix.

Textbook (GFDL-1.2), submitted September 11th, 2017.

Once the dimension of a vector space is known, then the determination of whether or not a set of vectors is linearly independent, or if it spans the vector space, can often be much easier. In this section we will state a workhorse theorem and then apply it to the column space and row space of a matrix. It will also help us describe a super-basis for \(\complex{m}\text{.}\)

Textbook (GFDL-1.2), submitted September 11th, 2017.

math.la.d.mat.echelon.linindep
math.la.d.mat.rank.column
Equivalence theorem: the rows of A are linearly independent. math.la.t.equiv.row.linindep
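
The entries in this subsection revolve around the column space, row space, and rank of a matrix. A minimal sketch, assuming SymPy, for a hypothetical 3x3 matrix whose second column is twice the first:

```python
from sympy import Matrix

A = Matrix([[1, 2, 0],
            [2, 4, 1],
            [3, 6, 1]])

print(A.columnspace())   # basis for the column space: the pivot columns of A
print(A.rowspace())      # basis for the row space: the nonzero rows of an echelon form of A
print(A.rank())          # both spaces have the same dimension, the rank of A (here 2)
```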

III.4 Combining Subspaces 

Definition of sum of subspaces, arbitrary vector space math.la.d.vsp.subspace.sum.arb
math.la.t.vsp.subspace.independent.lincomb
math.la.t.vsp.subspace.independent.basis
math.la.d.vsp.subspace.independent.arb
math.la.d.vsp.subspace.sum.direct.arb
math.la.t.vsp.subspace.sum.direct.dim
math.la.d.vsp.subspace.complement.arb
math.la.t.vsp.subspace.complement.arb
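
The tags in this subsection concern sums, direct sums, and complements of subspaces. As a minimal sketch, assuming SymPy (a hypothetical decomposition of R^3): the xy-plane and the z-axis intersect only in the zero vector and their dimensions add to 3, so R^3 is their direct sum.

```python
from sympy import Matrix

plane = [Matrix([1, 0, 0]), Matrix([0, 1, 0])]   # basis of the xy-plane
line  = [Matrix([0, 0, 1])]                      # basis of the z-axis

combined = Matrix.hstack(*(plane + line))
print(combined.rank())   # 3 = dim(plane) + dim(line): the sum is direct and equals R^3
```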

3  Maps Between Spaces 

I Isomorphisms 

I.1 Definition and Examples 


Two proofs, with discussion, of the fact that an abstract linear transformation maps 0 to 0.

Handout, created February 15th, 2017. Timeframe: Pre-class. Perspective: Proof.

Given a basis for an n-dimensional vector space V, the coordinate map is a linear bijection between V and R^n; definition of isomorphisms between vector spaces and of isomorphic vector spaces.

Video, created August 25th, 2017.

Early in Chapter VS we prefaced the definition of a vector space with the comment that it was “one of the two most important definitions in the entire course.” Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go.

Textbook (GFDL-1.2), submitted September 11th, 2017.

In this section we will conclude our introduction to linear transformations by bringing together the twin properties of injectivity and surjectivity and consider linear transformations with both of these properties.

Textbook (GFDL-1.2), submitted September 11th, 2017.
math.la.d.vsp.automorphism
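
The coordinate map described above can be tried out concretely. A minimal sketch, assuming NumPy (a hypothetical identification of 2x2 matrices with R^4 via the standard matrix basis): the map is linear and invertible, which is what makes the two spaces isomorphic.

```python
import numpy as np

def coords(M):
    """Coordinates of a 2x2 matrix relative to the standard basis E11, E12, E21, E22."""
    return M.reshape(4)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, -1.0], [5.0, 2.0]])

print(np.allclose(coords(A + B), coords(A) + coords(B)))   # the map preserves addition
print(np.allclose(coords(3 * A), 3 * coords(A)))           # ... and scalar multiplication
```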

I.2 Dimension Characterizes Isomorphism 


You may have noticed that many questions about elements of abstract vector spaces eventually become questions about column vectors or systems of equations. Example SM32 would be an example of this. We will make this vague idea more precise in this section.

  • math.la.t.vsp.isomorphic.dim
  • math.la.t.vsp.dim.isomorphic
  • math.la.t.vsp.isomorphic.rncn
Textbook (GFDL-1.2), submitted September 11th, 2017.

In this section we will conclude our introduction to linear transformations by bringing together the twin properties of injectivity and surjectivity and consider linear transformations with both of these properties.

  • math.la.t.vsp.isomorphic.dim
Textbook (GFDL-1.2), submitted September 11th, 2017.

math.la.t.vsp.isomorphism.inv
math.la.t.vsp.isomorphism.equiv

II Homomorphisms 

II.1 Definition 


Two proofs, with discussion, of the fact that an abstract linear transformation maps 0 to 0.

Handout, created February 15th, 2017. Timeframe: Pre-class. Perspective: Proof.

Matrices can be thought of as transforming space, and understanding how this works is crucial for understanding many other ideas that follow in linear algebra...

Video (Unlicense), created May 25th, 2017. Timeframe: Pre-class. Perspective: Introduction.

This is a guided discovery of the formula for Lagrange Interpolation, which lets you find the formula for a polynomial which passes through a given set of points.

Handout, created June 8th, 2017. Perspective: Application.

Definition of the column space of a matrix; column space is a subspace; comparison to the null space; definition of a linear transformation between vector spaces; definition of kernel and range of a linear transformation

Video, created August 25th, 2017.

Early in Chapter VS we prefaced the definition of a vector space with the comment that it was “one of the two most important definitions in the entire course.” Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go.

  • math.la.t.lintrans.vsp
  • math.la.t.lintrans.basis
Textbook (GFDL-1.2), submitted September 11th, 2017.

math.la.d.lintrans.basis.extension
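
To go with the definitions above (including the fact that a linear map sends 0 to 0), here is a minimal sketch, assuming NumPy, of the matrix transformation T(x) = Ax for a hypothetical 3x2 matrix A:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])      # a map from R^2 to R^3

def T(x):
    return A @ x

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])

print(np.allclose(T(u + v), T(u) + T(v)))   # additivity
print(np.allclose(T(5 * u), 5 * T(u)))      # homogeneity
print(T(np.zeros(2)))                        # the zero vector of R^2 maps to the zero vector of R^3
```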

II.2 Rangespace and Nullspace 


Definition of the column space of a matrix; column space is a subspace; comparison to the null space; definition of a linear transformation between vector spaces; definition of kernel and range of a linear transformation

Video, created August 25th, 2017.

Some linear transformations possess one, or both, of two key properties, which go by the names injective and surjective. We will see that they are closely related to ideas like linear independence and spanning, and subspaces like the null space and the column space. In this section we will define an injective linear transformation and analyze the resulting consequences. The next section will do the same for the surjective property. In the final section of this chapter we will see what happens when we have the two properties simultaneously.

  • math.la.t.lintrans.equiv.basis
  • math.la.t.lintrans.kernel.arb
Textbook (GFDL-1.2), submitted September 11th, 2017.

In this section we will conclude our introduction to linear transformations by bringing together the twin properties of injectivity and surjectivity and consider linear transformations with both of these properties.

  • math.la.t.lintrans.equiv.nullity
  • math.la.t.lintrans.ranknullity
Textbook (GFDL-1.2), submitted September 11th, 2017.

math.la.t.lintrans.range.vsp
math.la.t.lintrans.inv.vsp
math.la.t.lintrans.lindep
math.la.t.lintrans.equiv.inv
math.la.t.lintrans.equiv.nullspace
math.la.t.lintrans.equiv.rank

III Computing Linear Maps 

III.1 Representing Linear Maps with Matrices 


Advice to instructors for in-class activities on matrix-vector multiplication and translating between the various equivalent notation forms of linear systems, and suggestions for how this topic can be used to motivate future topics.

Created On
February 15th, 2017
7 years ago
Views
3
Type
 Handout
Timeframe
 In-class
Perspective
 Example
Language
 English
Content Type
text/html; charset=utf-8

Learning goals: 1. What are the dimension (size) requirements for two matrices so that they can be multiplied together? 2. What is the product of two matrices, when it exists?

Created On
February 17th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Review
Language
 English
Content Type
text/html; charset=utf-8
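
As a quick illustration of both learning goals (a NumPy sketch added here for reference, not one of the catalogued resources):

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [2, 2]])           # 3 x 2

# The inner dimensions (3 and 3) match, so the product exists and has shape 2 x 2.
C = A @ B
print(C.shape)                   # (2, 2)

# Entry (i, j) of the product is the dot product of row i of A with column j of B.
assert C[0, 1] == A[0, :] @ B[:, 1]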

We know how to add vectors and how to multiply them by scalars. Together, these operations give us the possibility of making linear combinations. Similarly, we know how to add matrices and how to multiply matrices by scalars. In this section we mix all these ideas together and produce an operation known as matrix multiplication. This will lead to some results that are both surprising and central. We begin with a definition of how to multiply a vector by a matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

We have seen that linear transformations whose domain and codomain are vector spaces of column vectors have a close relationship with matrices (Theorem MBLT, Theorem MLTCV). In this section, we will extend the relationship between matrices and linear transformations to the setting of linear transformations between abstract vector spaces.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
Definition of matrix-vector product, each entry separately math.la.d.mat.vec.prod.coord

III.2 Any Matrix Represents a Linear Map 


Use matrix transformations to motivate the concept of linear transformation; examples of matrix transformations

Created On
August 25th, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Early in Chapter VS we prefaced the definition of a vector space with the comment that it was “one of the two most important definitions in the entire course.” Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
math.la.t.mat.lintrans.rank
math.la.t.lintrans.equiv.rank.col
math.la.t.lintrans.onto.rank
math.la.d.lintrans.nonsingular
math.la.t.equiv.lintrans.isommorphism

IV Matrix Operations 

IV.1 Sums and Scalar Products 


The product of a matrix and a vector is defined, and used to show that a system of linear equations is equivalent to a single matrix-vector equation. The example uses a 2x3 system.

License
CC-BY-SA-4.0
Created On
February 15th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8
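
For instance (an illustrative 2x3 system chosen here, not necessarily the one used in the video), two equations in three unknowns become a single matrix-vector equation:

\[
\begin{array}{r}
x_1 + 2x_2 - x_3 = 4\\
2x_1 + 3x_3 = 5
\end{array}
\quad\Longleftrightarrow\quad
\begin{pmatrix} 1 & 2 & -1 \\ 2 & 0 & 3 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix} 4 \\ 5 \end{pmatrix}.
\]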

Notation for matrix entries, diagonal matrix, square matrix, identity matrix, and zero matrix.

Created On
February 17th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Definition of sum of matrices, product of a scalar and a matrix

Created On
February 17th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Suggestions for in-class activities on matrix operations: addition, multiplication, transpose, and the fact that multiplication is not commutative.

Created On
February 17th, 2017
7 years ago
Views
2
Type
 Handout
Timeframe
 In-class
Language
 English
Content Type
text/html; charset=utf-8

In this section we will back up and start simple. We begin with a definition of a totally general set of matrices, and see where that takes us.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section we will back up and start simple. We begin with a definition of a totally general set of matrices, and see where that takes us.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

We have seen that linear transformations whose domain and codomain are vector spaces of column vectors have a close relationship with matrices (Theorem MBLT, Theorem MLTCV). In this section, we will extend the relationship between matrices and linear transformations to the setting of linear transformations between abstract vector spaces.

  • math.la.t.lintrans.mat_repn.sum
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

We have seen that linear transformations whose domain and codomain are vector spaces of column vectors have a close relationship with matrices (Theorem MBLT, Theorem MLTCV). In this section, we will extend the relationship between matrices and linear transformations to the setting of linear transformations between abstract vector spaces.

  • math.la.t.lintrans.mat_repn.scalar
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

IV.2 Matrix Multiplication 


Associative and distributive properties of matrix multiplication and addition; multiplication by the identity matrix; definition of the transpose of a matrix; transpose of the transpose, transpose of a sum, transpose of a product

Created On
August 25th, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
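
For reference, the transpose facts listed above can be summarized as

\[
(A^{\mathsf{T}})^{\mathsf{T}} = A, \qquad
(A+B)^{\mathsf{T}} = A^{\mathsf{T}} + B^{\mathsf{T}}, \qquad
(AB)^{\mathsf{T}} = B^{\mathsf{T}}A^{\mathsf{T}},
\]

with the reversed order in the last identity being one more reminder that matrix multiplication is not commutative in general.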

We know how to add vectors and how to multiply them by scalars. Together, these operations give us the possibility of making linear combinations. Similarly, we know how to add matrices and how to multiply matrices by scalars. In this section we mix all these ideas together and produce an operation known as matrix multiplication. This will lead to some results that are both surprising and central. We begin with a definition of how to multiply a vector by a matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

We know how to add vectors and how to multiply them by scalars. Together, these operations give us the possibility of making linear combinations. Similarly, we know how to add matrices and how to multiply matrices by scalars. In this section we mix all these ideas together and produce an operation known as matrix multiplication. This will lead to some results that are both surprising and central. We begin with a definition of how to multiply a vector by a matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

We have seen that linear transformations whose domain and codomain are vector spaces of column vectors have a close relationship with matrices (Theorem MBLT, Theorem MLTCV). In this section, we will extend the relationship between matrices and linear transformations to the setting of linear transformations between abstract vector spaces.

  • math.la.t.lintrans.mat_repn.composition
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

Early in Chapter VS we prefaced the definition of a vector space with the comment that it was “one of the two most important definitions in the entire course.” Here comes the other. Any capsule summary of linear algebra would have to describe the subject as the interplay of linear transformations and vector spaces. Here we go.

  • math.la.d.lintrans.composition.arb
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html
Definition of matrix multiplication, each entry separately math.la.d.mat.mult.coord

IV.3 Mechanics of Matrix Multiplication 


Motivation and definition of the inverse of a matrix

License
(CC-BY-NC-SA-4.0 OR CC-BY-SA-4.0)
Created On
January 5th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Notation for matrix entries, diagonal matrix, square matrix, identity matrix, and zero matrix.

Created On
February 17th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Learning goals: 1. What are the dimension (size) requirements for two matrices so that they can be multiplied together? 2. What is the product of two matrices, when it exists?

Created On
February 17th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Review
Language
 English
Content Type
text/html; charset=utf-8

The definition of matrix inverse is motivated by considering multiplicative inverse. The identity matrix and matrix inverse are defined.

Created On
February 17th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Properties of matrix inversion: inverse of the inverse, inverse of the transpose, inverse of a product; elementary matrices and corresponding row operations; a matrix is invertible if and only if it is row-equivalent to the identity matrix; row-reduction algorithm for computing matrix inverse

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

In this section we specialize further and consider matrices with equal numbers of rows and columns, which when considered as coefficient matrices lead to systems with equal numbers of equations and variables. We will see in the second half of the course (Chapter D, Chapter E, Chapter LT, Chapter R) that these matrices are especially important.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section we will back up and start simple. We begin with a definition of a totally general set of matrices, and see where that takes us.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

We know how to add vectors and how to multiply them by scalars. Together, these operations give us the possibility of making linear combinations. Similarly, we know how to add matrices and how to multiply matrices by scalars. In this section we mix all these ideas together and produce an operation known as matrix multiplication. This will lead to some results that are both surprising and central. We begin with a definition of how to multiply a vector by a matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

Before we define the determinant of a matrix, we take a slight detour to introduce elementary matrices. These will bring us back to the beginning of the course and our old friend, row operations.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

We have seen in Section SD that under the right conditions a square matrix is similar to a diagonal matrix. We recognize now, via Theorem SCB, that a similarity transformation is a change of basis on a matrix representation. So we can now discuss the choice of a basis used to build a matrix representation, and decide if some bases are better than others for this purpose. This will be the tone of this section. We will also see that every matrix has a reasonably useful matrix representation, and we will discover a new class of diagonalizable linear transformations. First we need some basic facts about triangular matrices.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

We have seen in Section SD that under the right conditions a square matrix is similar to a diagonal matrix. We recognize now, via Theorem SCB, that a similarity transformation is a change of basis on a matrix representation. So we can now discuss the choice of a basis used to build a matrix representation, and decide if some bases are better than others for this purpose. This will be the tone of this section. We will also see that every matrix has a reasonably useful matrix representation, and we will discover a new class of diagonalizable linear transformations. First we need some basic facts about triangular matrices.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
math.la.d.mat.unit
math.la.d.mat.permutation
math.la.d.mat.trace
math.la.d.mat.markov

IV.4 Inverses 


Motivation and definition of the inverse of a matrix

License
(CC-BY-NC-SA-4.0 OR CC-BY-SA-4.0)
Created On
January 5th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

The definition of matrix inverse is motivated by considering multiplicative inverse. The identity matrix and matrix inverse are defined.

Created On
February 17th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Matrix inverses are motivated as a way to solve a linear system. The general algorithm for finding an inverse by row reducing an augmented matrix is described, and then carried out for a 3x3 matrix. Useful facts about inverses are stated and then illustrated with sample 2x2 matrices. (put first: need Example of finding the inverse of a 3-by-3 matrix by row reducing the augmented matrix)

Created On
February 19th, 2017
7 years ago
Views
3
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8
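
A minimal NumPy sketch of the algorithm described above, row reducing the augmented matrix [A | I] (the example matrix is chosen here, not taken from the video; partial pivoting is added for numerical stability):

import numpy as np

def inverse_by_row_reduction(A):
    """Row reduce the augmented matrix [A | I] until it becomes [I | A^{-1}]."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])        # the augmented matrix [A | I]
    for col in range(n):
        # Swap up a row with the largest pivot available in this column.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is not invertible")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]            # scale the pivot row so the pivot is 1
        for row in range(n):             # clear the rest of the pivot column
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                      # the right half is now the inverse

A = [[2, 0, 1],
     [1, 1, 0],
     [0, 1, 3]]
print(np.allclose(inverse_by_row_reduction(A) @ A, np.eye(3)))   # True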

Suggested classroom activities on matrix inverses.

Created On
February 19th, 2017
7 years ago
Views
2
Type
 Handout
Timeframe
 In-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

This is a guided discovery of the formula for Lagrange Interpolation, which lets you find the formula for a polynomial which passes through a given set of points.

Created On
June 8th, 2017
7 years ago
Views
2
Type
 Handout
Perspective
 Application
Language
 English
Content Type
text/html; charset=utf-8
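
A minimal Python sketch of the Lagrange formula the worksheet leads to (an illustrative implementation; the function name and sample points are chosen here, not taken from the handout):

def lagrange_interpolate(points, x):
    """Evaluate at x the unique polynomial of degree < len(points)
    passing through the given (x_i, y_i) points (distinct x_i assumed)."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        # Basis polynomial L_i(x): equals 1 at x_i and 0 at every other x_j.
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# The parabola through (0, 1), (1, 0), (2, 3) is 2x^2 - 3x + 1.
print(lagrange_interpolate([(0, 1), (1, 0), (2, 3)], 3))   # 10.0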

Statements that are equivalent to a square matrix being invertible; examples.

Created On
August 21st, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Definition of the inverse of a matrix, examples, uniqueness; formula for the inverse of a 2x2 matrix; determinant of a 2x2 matrix; using the inverse to solve a system of linear equations.

Created On
August 22nd, 2017
7 years ago
Views
4
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
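
For reference, the 2x2 formula mentioned above reads

\[
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad
\det A = ad - bc, \qquad
A^{-1} = \frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}
\quad\text{provided } ad - bc \ne 0.
\]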

Properties of matrix inversion: inverse of the inverse, inverse of the transpose, inverse of a product; elementary matrices and corresponding row operations; a matrix is invertible if and only if it is row-equivalent to the identity matrix; row-reduction algorithm for computing matrix inverse

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

The inverse of a square matrix, and solutions to linear systems with square coefficient matrices, are intimately connected.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

The inverse of a square matrix, and solutions to linear systems with square coefficient matrices, are intimately connected.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

The inverse of a square matrix, and solutions to linear systems with square coefficient matrices, are intimately connected.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

The inverse of a square matrix, and solutions to linear systems with square coefficient matrices, are intimately connected.

  • The inverse of a matrix (if it exists) can be found by row reducing the matrix augmented by the identity matrix. math.la.t.mat.inv.augmented
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

We saw in Theorem CINM that if a square matrix \(A\) is nonsingular, then there is a matrix \(B\) so that \(AB=I_n\text{.}\) In other words, \(B\) is halfway to being an inverse of \(A\text{.}\) We will see in this section that \(B\) automatically fulfills the second condition (\(BA=I_n\)). Example MWIAA showed us that the coefficient matrix from Archetype A had no inverse. Not coincidentally, this coefficient matrix is singular. We will make all these connections precise now. Not many examples or definitions in this section, just theorems.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
math.la.d.mat.inv.left
math.la.d.mat.inv.right
math.la.t.mat.inv.leftright

V Change of Basis 

V.1 Changing Representations of Vectors 


This is a guided discovery of the formula for Lagrange Interpolation, which lets you find the formula for a polynomial which passes through a given set of points.

Created On
June 8th, 2017
7 years ago
Views
2
Type
 Handout
Perspective
 Application
Language
 English
Content Type
text/html; charset=utf-8

We have seen in Section MR that a linear transformation can be represented by a matrix, once we pick bases for the domain and codomain. How does the matrix representation change if we choose different bases? Which bases lead to especially nice representations? From the infinite possibilities, what is the best possible representation? This section will begin to answer these questions. But first we need to define eigenvalues for linear transformations and the change-of-basis matrix.

  • math.la.t.vsp.change_of_basis
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

We have seen in Section MR that a linear transformation can be represented by a matrix, once we pick bases for the domain and codomain. How does the matrix representation change if we choose different bases? Which bases lead to especially nice representations? From the infinite possibilities, what is the best possible representation? This section will begin to answer these questions. But first we need to define eigenvalues for linear transformations and the change-of-basis matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html
math.la.t.equiv.change_of_basis
math.la.t.equiv.identitymap

V.2 Changing Map Representations 

math.la.d.mat.equiv
math.la.t.mat.equiv
math.la.t.mat.equiv.diag
math.la.t.mat.equiv.rank

VI Projection 

VI.1 Orthogonal Projection Into a Line 


This is from the University of Waterloo. It includes content about projections, as well as some content from multivariable calculus. These notions are developed in Euclidean space.

Created On
October 23rd, 2013
10 years ago
Views
3
Type
 Video
Perspective
 Introduction
Language
 English
Content Type
text/html;charset=UTF-8

This is a quiz from the University of Waterloo. It is a quiz about projections, working strictly in R^n. It also asks questions about perpendicular vectors and cross products.

Created On
October 23rd, 2013
10 years ago
Views
2
Type
 Unknown
Timeframe
 Post-class
Perspective
 Example
Language
 English
Content Type
text/html;charset=UTF-8

VI.2 Gram-Schmidt Orthogonalization 


Orthonormal sets and bases (definition); expressing vectors as linear combinations of orthonormal basis vectors; matrices with orthonormal columns preserve vector norm and dot product; orthogonal matrices; inverse of an orthogonal matrix equals its transpose

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
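
In symbols (a summary added for reference, stated over the real numbers): if \(\{\mathbf{q}_1,\dots,\mathbf{q}_n\}\) is an orthonormal basis, then every vector expands as

\[
\mathbf{v} = \sum_{i=1}^{n} \langle \mathbf{v},\mathbf{q}_i\rangle\,\mathbf{q}_i,
\]

and a matrix \(Q\) with orthonormal columns satisfies \(Q^{\mathsf{T}}Q = I\); when \(Q\) is square (an orthogonal matrix), this gives \(Q^{-1} = Q^{\mathsf{T}}\).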

In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

In this section we define a couple more operations with vectors, and prove a few theorems. At first blush these definitions and results will not appear central to what follows, but we will make use of them at key points in the remainder of the course (such as Section MINM, Section OD). Because we have chosen to use \(\complexes\) as our set of scalars, this subsection is a bit more, uh, … complex than it would be for the real numbers. We will explain as we go along how things get easier for the real numbers \({\mathbb R}\text{.}\) If you have not already, now would be a good time to review some of the basic properties of arithmetic with complex numbers described in Section CNO. With that done, we can extend the basics of complex number arithmetic to our study of vectors in \(\complex{m}\text{.}\)

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
Definition of orthogonal basis of a (sub)space math.la.d.subspace.basis.orthogonal

VI.3 Projection Into a Subspace 


Orthogonal projection onto a subspace of R^n minimizes distance; projection formula simplification for orthonormal bases; relation to orthogonal matrices

Created On
August 21st, 2017
7 years ago
Views
4
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
math.la.d.vec.projection_arbitrary.subspace
Definition of orthogonal complement of a subspace math.la.d.subspace.orthogonal_complement
The orthogonal complement of a subspace is a subspace. math.la.t.subspace.orthogonal_complement
math.la.t.subspace.orthogonal_complement.sum
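
Concretely (a formula added for reference, stated over the real numbers): if \(\{\mathbf{q}_1,\dots,\mathbf{q}_k\}\) is an orthonormal basis of a subspace \(W\) of \(\mathbb{R}^n\), the orthogonal projection of a vector \(\mathbf{v}\) into \(W\) is

\[
\operatorname{proj}_W(\mathbf{v}) = \sum_{i=1}^{k} \langle \mathbf{v},\mathbf{q}_i\rangle\,\mathbf{q}_i,
\]

and this projection is the point of \(W\) closest to \(\mathbf{v}\).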

4  Determinants 

I Definition 

I.1 Exploration 


The formula for the inverse of a 2x2 matrix is derived. (need tag for that formula)

Created On
February 17th, 2017
7 years ago
Views
2
Type
 Video
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8

Definition of the inverse of a matrix, examples, uniqueness; formula for the inverse of a 2x2 matrix; determinant of a 2x2 matrix; using the inverse to solve a system of linear equations.

Created On
August 22nd, 2017
7 years ago
Views
4
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Before we define the determinant of a matrix, we take a slight detour to introduce elementary matrices. These will bring us back to the beginning of the course and our old friend, row operations.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
Formula for the determinant of a 3-by-3 matrix. math.la.t.mat.det.3x3
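
For reference, the 3-by-3 formula (cofactor expansion along the first row) reads

\[
\det\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
= a_{11}(a_{22}a_{33}-a_{23}a_{32}) - a_{12}(a_{21}a_{33}-a_{23}a_{31}) + a_{13}(a_{21}a_{32}-a_{22}a_{31}).
\]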

I.2 Properties of Determinants 


The effect of row operations on the determinant of a matrix; computing determinants via row reduction; a square matrix is invertible if and only if its determinant is nonzero.

Created On
August 22nd, 2017
7 years ago
Views
4
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
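
A minimal NumPy sketch of computing a determinant by row reduction, tracking the effect of each row operation as described above (the example matrix is chosen here; the reduction uses only row swaps, which flip the sign, and row replacements, which leave the determinant unchanged):

import numpy as np

def det_by_row_reduction(A):
    """Reduce A to upper triangular form, then multiply the diagonal entries,
    adjusting the sign once for each row swap performed."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for col in range(n):
        pivot = col + np.argmax(np.abs(U[col:, col]))
        if np.isclose(U[pivot, col], 0.0):
            return 0.0                       # no pivot available: the matrix is singular
        if pivot != col:
            U[[col, pivot]] = U[[pivot, col]]
            sign = -sign                     # a row swap changes the sign of the determinant
        for row in range(col + 1, n):        # row replacement does not change the determinant
            U[row] -= (U[row, col] / U[col, col]) * U[col]
    return sign * np.prod(np.diag(U))

A = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 2]]
print(det_by_row_reduction(A), np.linalg.det(A))   # both 8.0 (up to rounding)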

We have seen how to compute the determinant of a matrix, and the incredible fact that we can perform expansion about any row or column to make this computation. In this largely theoretical section, we will state and prove several more intriguing properties about determinants. Our main goal will be the two results in Theorem SMZD and Theorem DRMM, but more specifically, we will see how the value of a determinant will allow us to gain insight into the various properties of a square matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

We have seen how to compute the determinant of a matrix, and the incredible fact that we can perform expansion about any row or column to make this computation. In this largely theoretical section, we will state and prove several more intriguing properties about determinants. Our main goal will be the two results in Theorem SMZD and Theorem DRMM, but more specifically, we will see how the value of a determinant will allow us to gain insight into the various properties of a square matrix.

  • math.la.t.mat.row.z
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

We have seen how to compute the determinant of a matrix, and the incredible fact that we can perform expansion about any row or column to make this computation. In this largely theoretical section, we will state and prove several more intriguing properties about determinants. Our main goal will be the two results in Theorem SMZD and Theorem DRMM, but more specifically, we will see how the value of a determinant will allow us to gain insight into the various properties of a square matrix.

  • math.la.t.mat.row.equal
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
math.la.d.mat.det.elementaryoperations
math.la.t.mat.det.unique

I.3 The Permutation Expansion 


Determinant of the transpose equals the determinant of the original matrix; rescaling a column rescales the determinant by the same factor; interchanging two columns changes the sign of the determinant; adding a multiple of one column to another leaves the determinant unchanged; the determinant of the product of two matrices equals the product of the two determinants

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Before we define the determinant of a matrix, we take a slight detour to introduce elementary matrices. These will bring us back to the beginning of the course and our old friend, row operations.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
math.la.d.multilinear
math.la.t.det.multilinear
math.la.d.mat.det.permutation
math.la.t.mat.det.exists

I.4 Determinants Exist 

II Geometry of Determinants 

II.1 Determinants as Size Functions 


Determinant of the transpose equals the determinant of the original matrix; rescaling a column rescales the determinant by the same factor; interchanging two columns changes the sign of the determinant; adding a multiple of one column to another leaves the determinant unchanged; the determinant of the product of two matrices equals the product of the two determinants

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

We have seen how to compute the determinant of a matrix, and the incredible fact that we can perform expansion about any row or column to make this computation. In this largely theoretical section, we will state and prove several more intriguing properties about determinants. Our main goal will be the two results in Theorem SMZD and Theorem DRMM, but more specifically, we will see how the value of a determinant will allow us to gain insight into the various properties of a square matrix.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
The determinant of a matrix measures the area/volume of the parallelogram/parallelepiped determined by its columns. math.la.t.mat.det.col.volume
The determinant of the matrix of a linear transformation is the factor by which the area/volume changes. math.la.t.lintrans.det.volume
math.la.t.mat.det.inv

III Laplace's Formula 

III.1 Laplace's Expansion 


Before we define the determinant of a matrix, we take a slight detour to introduce elementary matrices. These will bring us back to the beginning of the course and our old friend, row operations.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

Before we define the determinant of a matrix, we take a slight detour to introduce elementary matrices. These will bring us back to the beginning of the course and our old friend, row operations.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
Definition of adjugate/classical adjoint of a matrix math.la.d.mat.classicaladjoint
The inverse of a matrix can be expressed in terms of its matrix of cofactors. math.la.t.mat.inv.cofactors
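
In symbols (a statement added for reference): writing \(C_{ij}\) for the \((i,j)\) cofactor of \(A\) and \(\operatorname{adj}(A)\) for the adjugate, the transpose of the matrix of cofactors,

\[
A\,\operatorname{adj}(A) = \det(A)\,I,
\qquad\text{so}\qquad
A^{-1} = \frac{1}{\det A}\,\operatorname{adj}(A)
\quad\text{whenever } \det A \ne 0.
\]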

5  Similarity 

I Complex Vector Spaces 

I.1 Polynomial Factoring and Complex Numbers 

I.2 Complex Representations 

II Similarity 

II.1 Definition and Examples 


Definition of similarity for square matrices; similarity is an equivalence relation; similar matrices have the same characteristic polynomial and hence the same eigenvalues, with the same multiplicities; definition of multiplicity.

Created On
September 3rd, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
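
The key step behind the characteristic polynomial claim (a derivation added here for reference): if \(B = P^{-1}AP\), then

\[
\det(B - \lambda I) = \det\bigl(P^{-1}(A - \lambda I)P\bigr) = \det(P^{-1})\,\det(A - \lambda I)\,\det(P) = \det(A - \lambda I),
\]

so similar matrices have the same characteristic polynomial, hence the same eigenvalues with the same algebraic multiplicities.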

This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

II.2 Diagonalizability 


Diagonalization theorem: an n x n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors. If so, the matrix factors as A = PDP^{-1}, where D is diagonal and P is invertible (and its columns are the n linearly independent eigenvectors). Algorithm to diagonalize a matrix.

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8
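
A short NumPy check of the factorization A = PDP^{-1} (an illustrative sketch with a matrix chosen here, not the video's example):

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of P are linearly independent eigenvectors; D holds the eigenvalues.
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True: A = P D P^{-1}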

This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
math.la.d.lintrans.diagonalizable
math.la.t.lintrans.diagonalizable.basis

II.3 Eigenvalues and Eigenvectors 


An introductory activity on eigenvalues and eigenvectors in which students do basic matrix-vector multiplication calculations to find whether given vectors are eigenvectors, to determine the eigenvalue corresponding to an eigenvector and to find an eigenvector corresponding to an eigenvalue. This activity is self-contained and does not require any previous experience with eigenvalues or eigenvectors.

Created On
June 9th, 2017
7 years ago
Views
2
Type
 Handout
Timeframe
 Pre-class
Perspective
 Introduction
Language
 English
Content Type
text/html; charset=utf-8
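
The kind of calculation the activity asks for can be checked directly (a hypothetical example matrix, not the worksheet's data):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])

# v is an eigenvector of A exactly when A v is a scalar multiple of v.
Av = A @ v
print(Av)                         # [3. 3.]  so A v = 3 v, and the eigenvalue is 3
print(np.allclose(Av, 3 * v))     # True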

Definition of the eigenspace corresponding to an eigenvalue $\lambda$ (and proof that this is a vector space); analysis of simple matrices in R^2 and R^3 to visualize the "geometry" of eigenspaces; proof that eigenvectors corresponding to distinct eigenvalues are linearly independent

Created On
August 25th, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Diagonalization theorem: an n x n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors. If so, the matrix factors as A = PDP^{-1}, where D is diagonal and P is invertible (and its columns are the n linearly independent eigenvectors). Algorithm to diagonalize a matrix.

Created On
August 25th, 2017
7 years ago
Views
3
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

Theorem: \lambda is an eigenvalue of a matrix A if and only if \lambda satisfies the characteristic equation det(A-\lambda I) = 0; examples; eigenvalues of triangular matrices are the diagonal entries.

Created On
September 3rd, 2017
7 years ago
Views
2
Type
 Video
Language
 English
Content Type
text/html; charset=utf-8

This section's topic will perhaps seem out of place at first, but we will make the connection soon with eigenvalues and eigenvectors. This is also our first look at one of the central ideas of Chapter R.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section, we will define the eigenvalues and eigenvectors of a matrix, and see how to compute them. More theoretical properties will be taken up in the next section.

  • math.la.d.mat.eig.multiplicity.geometric
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section, we will define the eigenvalues and eigenvectors of a matrix, and see how to compute them. More theoretical properties will be taken up in the next section.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

In this section, we will define the eigenvalues and eigenvectors of a matrix, and see how to compute them. More theoretical properties will be taken up in the next section.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
2
Type
 Textbook
Language
 English
Content Type
text/html

In this section, we will define the eigenvalues and eigenvectors of a matrix, and see how to compute them. More theoretical properties will be taken up in the next section.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

We have seen in Section MR that a linear transformation can be represented by a matrix, once we pick bases for the domain and codomain. How does the matrix representation change if we choose different bases? Which bases lead to especially nice representations? From the infinite possibilities, what is the best possible representation? This section will begin to answer these questions. But first we need to define eigenvalues for linear transformations and the change-of-basis matrix.

  • math.la.d.lintrans.eig
  • math.la.d.lintrans.eigvec
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

The previous section introduced eigenvalues and eigenvectors, and concentrated on their existence and determination. This section will be more about theorems, and the various properties eigenvalues and eigenvectors enjoy. Like a good 4×100 meter relay, we will lead-off with one of our better theorems and save the very best for the anchor leg.

License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html

In this section, we will define the eigenvalues and eigenvectors of a matrix, and see how to compute them. More theoretical properties will be taken up in the next section.

  • math.la.d.mat.eig.multiplicity.algebraic
License
GFDL-1.2
Submitted At
September 11th, 2017
 7 years ago
Views
3
Type
 Textbook
Language
 English
Content Type
text/html
math.la.d.lintrans.charpoly
math.la.t.lintrans.eig.exists
math.la.d.lintrans.eigsp
math.la.d.lintrans.eigsp.subspace

III Nilpotence 

III.1 Self-Composition 

math.la.t.lintrans.power.range
math.la.t.lintrans.power.kernel
math.la.d.lintrans.range.generalized.arb
math.la.d.lintrans.kernel.generalized.arb

III.2 Strings 

math.la.d.lintrans.range.generalized.injective
math.la.d.lintrans.generalized.sum
Definition of nilpotent linear transformation math.la.d.lintrans.nilpotent
math.la.d.mat.nilpotent
math.la.d.nilpotent.index
math.la.t.mat.nilpotent.zo

IV Jordan Form 

IV.1 Polynomials of Maps and Matrices 

math.la.d.lintrans.polynomial.apply
math.la.d.mat.polynomial.apply
math.la.d.lintrans.minpoly
Definition of minimal polynomial of a matrix math.la.d.mat.minpoly
math.la.d.mat.minpoly.exists
math.la.d.lintrans.minpoly.exists
math.la.d.lintrans.cayleyhamilton
math.la.d.mat.cayleyhamilton
math.la.t.mat.charpoly.z

IV.2 Jordan Canonical Form 

math.la.t.mat.nilpotent.eig
math.la.d.lintrans.invariantsubspace
math.la.d.lintrans.invariantsubspace.block
math.la.t.mat.det.block
Definition of Jordan form math.la.d.mat.jordan
math.la.t.mat.jordan
math.la.t.mat.jordan.sum