MA398: Matrix Analysis and Algorithms

This autumn I will lecture the course MA398 Matrix Analysis and Algorithms at the University of Warwick. The course will be an introduction to matrix analysis and to the basic algorithms of numerical linear algebra, broadly following a text by Andrew Stuart and Jochen Voss [2009 version] [2012 version]. I plan on posting summary notes to this blog, roughly in correspondence with the chapters of the Stuart & Voss text; I’ll tag them all “ma398”.

After some necessary introductory material, the course will cover three basic problems of numerical linear algebra:

  1. Simultaneous Linear Equations (SLE). Given a matrix A ∈ ℂ^{n×n} and a vector b ∈ ℂ^n, find x ∈ ℂ^n such that Ax = b.
  2. (Overdetermined) Least Squares (LSQ). Given a matrix A ∈ ℂ^{m×n} with m ≥ n and a vector b ∈ ℂ^m, find x ∈ ℂ^n that minimises ‖Ax − b‖_2^2.
  3. Eigenvalue/Eigenvector Problem (EVP). Given a matrix A ∈ ℂ^{n×n}, find (x, λ) ∈ ℂ^n × ℂ such that Ax = λx and ‖x‖_2 = 1.
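
To make these concrete, here is a minimal sketch of all three problems using NumPy's built-in solvers; the matrices and right-hand sides are arbitrary random data, purely for illustration, and the course itself is about what goes on inside such routines:

```python
import numpy as np

rng = np.random.default_rng(0)

# (SLE) Solve Ax = b for a square matrix A.
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)
x_sle = np.linalg.solve(A, b)        # internally: LU factorisation with partial pivoting

# (LSQ) Minimise ||Ax - b||_2^2 for a tall matrix A (m >= n).
A_tall = rng.standard_normal((6, 4))
b_tall = rng.standard_normal(6)
x_lsq, residual, rank, sv = np.linalg.lstsq(A_tall, b_tall, rcond=None)

# (EVP) Find eigenpairs (x, lambda) with Ax = lambda x and ||x||_2 = 1.
eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` have unit 2-norm
```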

The common feature that I see in these three problems is that they occur naturally in many theoretical and practical settings and are mathematically very simple to describe, but solving them efficiently in practice requires some skill. In particular, all three are barely above high-school level for m = n = 2 or 3, but take on a rather different character when m and n are of order 10^6! Sometimes the obstacles are practical ones of time and resources, but in other cases they are more fundamental: for example, since the eigenvalues of an n×n matrix are the roots of its degree-n characteristic polynomial, the Abel–Ruffini theorem implies that there can in general be no exact solution to (EVP) for n ≥ 5 that uses only rational numbers and finitely many applications of the elementary arithmetical operations +, −, ×, ÷ and the extraction of roots.
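
One consequence is that algorithms for (EVP) are necessarily iterative: they produce a sequence of approximations rather than an exact answer in finitely many arithmetic steps. As a bare-bones illustration (a sketch only, far cruder than the methods developed later in the course), the power method approximates a dominant eigenpair by repeated matrix-vector multiplication and renormalisation:

```python
import numpy as np

def power_method(A, num_iterations=1000, tol=1e-12):
    """Approximate a dominant eigenpair of A by power iteration."""
    x = np.random.default_rng(1).standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(num_iterations):
        y = A @ x                      # one matrix-vector product per step
        lam_new = x @ y                # Rayleigh quotient estimate of the eigenvalue
        x = y / np.linalg.norm(y)      # keep ||x||_2 = 1
        if abs(lam_new - lam) < tol:   # stop once the eigenvalue estimate settles
            return lam_new, x
        lam = lam_new
    return lam, x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, x = power_method(A)               # lam ≈ 3.618, the larger eigenvalue (5 + √5)/2
```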

Some of the general tools used and topics addressed in the course will include:

  1. Vector and Matrix Analysis. Vector norms and inner products. Eigenvalues and eigenvectors. Dual spaces. Matrix norms. Structured matrices.
  2. Matrix Factorizations. Diagonalization. Jordan canonical form. Singular value decomposition. QR, LU and Cholesky factorizations.
  3. Stability and Conditioning. Conditioning of the SLE, LSQ and EVP problems. Stability of algorithms.
  4. Complexity of Algorithms. Computational cost. Matrix-matrix multiplication. Fast Fourier transform. Bidiagonal and Hessenberg forms.
  5. SLE. Gaussian elimination. Partial pivoting. QR factorization.
  6. Iterative Methods. Linear methods. Jacobi method. Gauss–Seidel and SOR methods. Nonlinear methods. Steepest descent. Conjugate gradients.
  7. LSQ. LSQ via normal equations, QR factorization, and SVD (a short sketch of these three routes follows this list).
  8. EVP. Power method. Inverse iteration. Rayleigh quotient iteration. Simultaneous iteration. QR algorithm for eigenvalues. Divide-and-conquer for symmetric problems.
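
As a small foretaste of item 7, the sketch below computes the same least-squares solution via all three routes listed there (again using NumPy on placeholder random data); one theme of the course is why the QR and SVD routes are preferred over the normal equations when A is ill-conditioned:

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((8, 3))        # tall matrix, m = 8 >= n = 3
b = rng.standard_normal(8)

# 1. Normal equations: solve (A^T A) x = A^T b.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# 2. QR factorisation: A = QR, then solve R x = Q^T b.
Q, R = np.linalg.qr(A)                 # reduced QR: Q is 8x3, R is 3x3
x_qr = np.linalg.solve(R, Q.T @ b)     # R is upper triangular, so back substitution suffices

# 3. SVD: A = U diag(s) V^T, then x = V diag(1/s) U^T b.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)

# All three agree (up to rounding) for this well-conditioned A.
print(np.allclose(x_normal, x_qr), np.allclose(x_qr, x_svd))
```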