Lecture Notes for Linear Algebra (2021)

Textbooks, Websites, and Video Lectures

Sample Sections : 1.3, 3.3, 3.5, and 7.1

Other books by Gilbert Strang

Linear Algebra for Everyone (new textbook, September 2020)

Linear Algebra and Learning from Data (2019)

Introduction to Linear Algebra, 5th Edition (2016)

Differential Equations and Linear Algebra

Computational Science and Engineering

Calculus

Ordering Gilbert Strang's books

Detailed Table of Contents

Textbooks, Websites, and Video Lectures

Part 1 : Basic Ideas of Linear Algebra

1.1 Linear Combinations of Vectors

1.2 Dot Products v · w and Lengths ||v|| and Angles θ

1.3 Matrices Multiplying Vectors : A times x

1.4 Column Space and Row Space of A

1.5 Dependent and Independent Columns

1.6 Matrix-Matrix Multiplication AB

1.7 Factoring A into CR : Column Rank = r = Row Rank

1.8 Rank One Matrices : A = (1 column) times (1 row)
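
A minimal NumPy sketch of two central ideas in Part 1 (the matrices here are illustrative, not taken from the notes): A times x is a combination of the columns of A (1.3), and a rank one matrix is one column times one row (1.8).

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])
    x = np.array([2.0, -1.0])

    # Ax = x1 (column 1) + x2 (column 2) : a combination of the columns.
    assert np.allclose(A @ x, 2.0 * A[:, 0] - 1.0 * A[:, 1])

    # A rank one matrix = (1 column) times (1 row).
    u = np.array([[1.0], [2.0], [3.0]])   # 3 by 1 column
    v = np.array([[4.0, 5.0]])            # 1 by 2 row
    assert np.linalg.matrix_rank(u @ v) == 1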

Part 2 : Solving Linear Equations Ax = b : A is n by n

2.1 Inverse Matrices A^{-1} and Solutions x = A^{-1}b

2.2 Triangular Matrix and Back Substitution for Ux = c

2.3 Elimination : Square A to Triangular U : Ax = b to Ux = c

2.4 Row Exchanges for Nonzero Pivots : Permutation P

2.5 Elimination with No Row Exchanges : Why is A = LU ?

2.6 Transposes / Symmetric Matrices / Dot Products

2.7 Changes in A^{-1} from Changes in A (more advanced)
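
A sketch of Part 2 in code, assuming NumPy and SciPy (the example matrix is illustrative): scipy.linalg.lu produces A = LU with the permutation P of 2.4, and two triangular solves replace elimination on b, ending with the back substitution of 2.2.

    import numpy as np
    from scipy.linalg import lu, solve_triangular

    A = np.array([[ 2.0,  1.0, 1.0],
                  [ 4.0, -6.0, 0.0],
                  [-2.0,  7.0, 2.0]])
    b = np.array([5.0, -2.0, 9.0])

    P, L, U = lu(A)                               # A = P L U
    c = solve_triangular(L, P.T @ b, lower=True)  # forward solve : Lc = P^T b
    x = solve_triangular(U, c)                    # back substitution : Ux = c
    assert np.allclose(A @ x, b)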

Part 3 : Vector Spaces and Subspaces, Basis and Dimension

3.1 Vector Spaces and Four Fundamental Subspaces

3.2 Basis and Dimension of a Vector Space S

3.3 Independent Columns and Rows : Bases by Elimination

3.4 Ax = 0 and Ax = b : x_nullspace and x_particular

3.5 Four Fundamental Subspaces C(A), C(A^T), N(A), N(A^T)

3.6 Rank = Dimension of Column Space and Row Space

3.7 Graphs, Incidence Matrices, and Kirchhoff's Laws

3.8 Every Matrix A Has a Pseudoinverse A^+
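
The dimensions of the four fundamental subspaces of Part 3, checked numerically. A sketch assuming SciPy's null_space helper; the rank one matrix is illustrative.

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])           # m = 2, n = 3, rank r = 1
    r = np.linalg.matrix_rank(A)

    assert null_space(A).shape[1] == 3 - r    # dim N(A)   = n - r
    assert null_space(A.T).shape[1] == 2 - r  # dim N(A^T) = m - r

    A_plus = np.linalg.pinv(A)                # every A has a pseudoinverse A^+
    assert np.allclose(A @ A_plus @ A, A)     # a defining property of A^+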

Part 4 : Orthogonal Matrices Q^T = Q^{-1} and Least Squares for Ax = b

4.1 Orthogonality of the Four Subspaces (Two Pairs)

4.2 Projections onto Lines and Subspaces

4.3 Least Squares Approximations (Regression) : A^T A x̂ = A^T b

4.4 Independent a's to Orthonormal q's by Gram-Schmidt

4.5 The Minimum Norm Solution to Ax = b (n > m) is x_{row space}

4.6 Vector Norms and Matrix Norms
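
A least squares sketch for Part 4 (the data points are made up): the normal equations A^T A x̂ = A^T b of 4.3 agree with NumPy's built-in solver, and np.linalg.qr plays the role of Gram-Schmidt in 4.4.

    import numpy as np

    t = np.array([0.0, 1.0, 2.0, 3.0])
    b = np.array([1.0, 2.0, 2.0, 4.0])
    A = np.column_stack([np.ones_like(t), t])  # fit the line C + Dt

    x_hat = np.linalg.solve(A.T @ A, A.T @ b)  # normal equations
    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
    assert np.allclose(x_hat, x_ls)

    Q, R = np.linalg.qr(A)                     # orthonormal q's from the a's
    assert np.allclose(Q.T @ Q, np.eye(2))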

Part 5 : Determinant of a Square Matrix

5.1 3 by 3 and n by n Determinants

5.2 Cofactors and the Formula for A^{-1}

5.3 Det AB = (Det A) (Det B) and Cramer's Rule

5.4 Volume of Box = | Determinant of Edge Matrix E |
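
Two determinant facts from Part 5, checked numerically on illustrative matrices: the product rule of 5.3 and the volume (here area) interpretation of 5.4.

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[2.0, 0.0], [1.0, 2.0]])
    assert np.isclose(np.linalg.det(A @ B),
                      np.linalg.det(A) * np.linalg.det(B))

    E = np.array([[3.0, 0.0],                 # edges (3,0) and (1,2) of a
                  [1.0, 2.0]])                # parallelogram : area |det E| = 6
    assert np.isclose(abs(np.linalg.det(E)), 6.0)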

Part 6 : Eigenvalues and Eigenvectors : Ax = λx and A^n x = λ^n x

6.1 Eigenvalues λ and Eigenvectors x : Ax = λx

6.2 Diagonalizing a Matrix : X^{-1}AX = Λ = Eigenvalues

6.3 Symmetric Positive Definite Matrices : Five Tests

6.4 Solve Linear Differential Equations

6.5 Matrices in Engineering : Derivatives to Differences

6.6 Rayleigh Quotients and Sx = λMx (Two Matrices)

6.7 Derivatives of the Inverse Matrix and the Eigenvalues

6.8 Interlacing Eigenvalues and Low Rank Changes in S
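
Diagonalization X^{-1}AX = Λ from 6.2 in NumPy, on an illustrative 2 by 2 matrix with eigenvalues 5 and 2, together with its consequence A^n x = λ^n x.

    import numpy as np

    A = np.array([[4.0, 1.0], [2.0, 3.0]])
    lam, X = np.linalg.eig(A)                 # Ax = λx for each column of X

    assert np.allclose(np.linalg.inv(X) @ A @ X, np.diag(lam))

    # Powers of A only raise the eigenvalues to powers : A^3 = X Λ^3 X^{-1}.
    assert np.allclose(np.linalg.matrix_power(A, 3),
                       X @ np.diag(lam ** 3) @ np.linalg.inv(X))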

Part 7 : Singular Values and Vectors : Av = σu and A = UΣV^T

7.1 Singular Vectors in U and V, Singular Values in Σ

7.2 Reduced SVD / Full SVD / Construct UΣV^T from A^T A

7.3 The Geometry of A = UΣV^T : Rotate / Stretch / Rotate

7.4 A_k is Closest to A : Principal Component Analysis (PCA)

7.5 Computing Eigenvalues of S and Singular Values of A

7.6 Computing Homework and Professor Townsend's Advice

7.7 Compressing Images by the SVD

7.8 The Victory of Orthogonality : Nine Reasons
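
The SVD and the approximation behind 7.4, sketched on a random illustrative matrix: A_k keeps the k largest singular values and is the closest rank k matrix to A (Eckart-Young), with 2-norm error equal to the first singular value that was dropped. This is the idea behind PCA and image compression.

    import numpy as np

    A = np.random.default_rng(0).standard_normal((6, 4))
    U, s, Vt = np.linalg.svd(A)               # A = U Σ V^T
    assert np.allclose(A, U[:, :4] * s @ Vt)  # reduced SVD rebuilds A

    k = 2
    A_k = U[:, :k] * s[:k] @ Vt[:k, :]        # closest rank k matrix to A
    assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])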

Part 8 : Linear Transformations and Their Matrices

8.1 Examples of Linear Transformations

8.2 Derivative Matrix D and Integral Matrix D^+

8.3 Basis for V and Basis for Y ⇒ Matrix for T : V → Y
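
Section 8.2 made concrete, with an illustrative choice of basis: on cubics with basis 1, x, x^2, x^3 the derivative is a 3 by 4 matrix D acting on coefficients, and the pseudoinverse D^+ integrates (a one-sided inverse : D D^+ = I).

    import numpy as np

    D = np.array([[0.0, 1.0, 0.0, 0.0],         # d/dx on coefficients of
                  [0.0, 0.0, 2.0, 0.0],         # p = a + bx + cx^2 + dx^3
                  [0.0, 0.0, 0.0, 3.0]])

    p = np.array([5.0, 4.0, 3.0, 2.0])          # p = 5 + 4x + 3x^2 + 2x^3
    assert np.allclose(D @ p, [4.0, 6.0, 6.0])  # p' = 4 + 6x + 6x^2

    D_plus = np.linalg.pinv(D)                  # integral matrix D^+
    assert np.allclose(D @ D_plus, np.eye(3))   # derivative of integral = I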

Part 9 : Complex Numbers and the Fourier Matrix

9.1 Complex Numbers x + iy = re^{iθ} : Unit Circle r = 1

9.2 Complex Matrices : Hermitian S = S̄^T and Unitary Q^{-1} = Q̄^T

9.3 Fourier Matrix F and the Discrete Fourier Transform

9.4 Cyclic Convolution and the Convolution Rule

9.5 FFT : The Fast Fourier Transform

9.6 Cyclic Permutation P and Circulants C

9.7 The Kronecker Product A ⊗ B
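
A sketch of the Fourier matrix F of 9.3 and the convolution rule of 9.4, assuming NumPy's FFT (and one common sign convention for F): the FFT is a fast multiply by F, and cyclic convolution becomes componentwise multiplication of transforms.

    import numpy as np

    n = 4
    J, K = np.meshgrid(np.arange(n), np.arange(n))
    F = np.exp(-2j * np.pi * J * K / n)       # Fourier matrix F

    c = np.array([1.0, 2.0, 3.0, 4.0])
    assert np.allclose(F @ c, np.fft.fft(c))  # FFT = fast multiply by F

    # Convolution rule : cyclic convolution = multiply the transforms.
    d = np.array([5.0, 6.0, 7.0, 8.0])
    conv = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(d)))
    direct = [sum(c[i] * d[(m - i) % n] for i in range(n)) for m in range(n)]
    assert np.allclose(conv, direct)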

Part 10 : Learning from Data (Deep Learning with Neural Nets)

10.1 Learning Function F(x, v_0) : Data v_0 and Weights x

10.2 Playground.Tensorflow.Org : Circle Dataset

10.3 Playground.Tensorflow.Org : Spiral Dataset

10.4 Creating the Architecture of Deep Learning

10.5 Convolutional Neural Nets : CNN in 1D and 2D

10.6 Counting Flat Pieces in the Graph of F

10.7 Three-way Tensors T_{ijk}
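
A minimal learning function F(x, v_0) in the spirit of 10.1 (the shapes and weights are illustrative, initialized at random rather than trained): one hidden ReLU layer, so the graph of F is made of flat pieces as in 10.6.

    import numpy as np

    def F(x, v0):
        # One hidden layer with ReLU. The weights x = (A1, b1, A2, b2)
        # are what gradient descent (Part 11) would learn from data.
        A1, b1, A2, b2 = x
        return A2 @ np.maximum(0.0, A1 @ v0 + b1) + b2

    rng = np.random.default_rng(0)
    x = (rng.standard_normal((4, 2)), rng.standard_normal(4),
         rng.standard_normal((1, 4)), rng.standard_normal(1))
    v0 = np.array([0.5, -1.0])                # one input (feature vector)
    print(F(x, v0))                           # piecewise linear in v0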

Part 11 : Computing Weights by Gradient Descent

11.1 Minimizing F(x) / Solving f(x) = 0

11.2 Minimizing a Quadratic Gives Linear Equations

11.3 Calculus for a Function F(x, y)

11.4 Minimizing the Loss : Stochastic Gradient Descent

11.5 Slow Convergence with Zigzag : Add Momentum

11.6 Direction of the Step x_{k+1} − x_k : Step Length c

11.7 Chain Rule for ∇F and ∇L
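
Gradient descent with momentum (11.5) on an illustrative quadratic F(x) = ½ x^T Sx, whose gradient is Sx (11.2). The unequal eigenvalues 1 and 10 cause the zigzag that momentum damps; the step length c and momentum β here are hand-picked, not tuned.

    import numpy as np

    S = np.array([[1.0,  0.0],
                  [0.0, 10.0]])               # unequal eigenvalues : zigzag
    x = np.array([10.0, 1.0])
    v = np.zeros(2)
    c, beta = 0.09, 0.5                       # step length c, momentum beta

    for _ in range(100):
        v = beta * v + S @ x                  # accumulate the gradient ∇F = Sx
        x = x - c * v                         # step : x_{k+1} = x_k - cv

    print(x)                                  # approaches the minimizer (0, 0)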

Part 12 : Basic Statistics : Mean, Variance, Covariance

12.1 Mean and Variance : Actual and Expected

12.2 Probability Distributions : Binomial, Poisson, Normal

12.3 Covariance Matrices and Joint Probabilities

12.4 Three Basic Inequalities of Statistics

12.5 Markov Matrices and Markov Chains

12.6 The Mean and Variance of z = x + y
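
The identity in 12.6 checked on made-up samples: the mean of z = x + y is the sum of the means, and its variance adds twice the covariance taken from the covariance matrix of 12.3.

    import numpy as np

    x = np.array([1.0, 2.0, 4.0, 5.0])
    y = np.array([1.0, 3.0, 5.0, 7.0])
    z = x + y

    V = np.cov(np.vstack([x, y]), bias=True)  # covariance matrix (divide by N)

    assert np.isclose(z.mean(), x.mean() + y.mean())
    assert np.isclose(z.var(), x.var() + y.var() + 2 * V[0, 1])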

Part 13 : Graphs, Flows, and Linear Programming

13.1 Graph Incidence Matrix A and Laplacian Matrix A^T A

13.2 Ohm's Law Combines with Kirchhoff's Law : A^T CAx = f

13.3 Max Flow-Min Cut Problem in Linear Programming

13.4 Linear Programming and Duality : Max = Min

13.5 Finding Well-Connected Clusters in Graphs

13.6 Completing Rank One Matrices
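
The incidence matrix and Laplacian of 13.1 for the smallest interesting graph, a triangle (the node numbering is illustrative): A^T A has the node degrees on its diagonal, -1 off the diagonal for each edge, and the constant vector in its nullspace, since equal voltages at all nodes drive no current.

    import numpy as np

    A = np.array([[-1.0,  1.0, 0.0],          # edge 1 : node 1 -> node 2
                  [ 0.0, -1.0, 1.0],          # edge 2 : node 2 -> node 3
                  [-1.0,  0.0, 1.0]])         # edge 3 : node 1 -> node 3

    L = A.T @ A                               # graph Laplacian
    assert np.allclose(L, [[ 2, -1, -1],
                           [-1,  2, -1],
                           [-1, -1,  2]])
    assert np.allclose(L @ np.ones(3), 0.0)   # constant voltages -> no flow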
