Study guide for Midterm 1

Here is a non-exhaustive list of questions you should be able to answer as you prepare for the first midterm. The midterm will cover Chapters 1–4.

Scientific Computing

  • What are the well-posedness and conditioning of a problem?
  • What are absolute and relative errors?
  • What are forward and backward errors?
  • What are the categories of sources of error in numerical methods?
  • What does it mean for a result to have $n$ accurate digits?
  • How does the number of accurate digits relate to rounding?
  • What are the relative and absolute condition numbers of evaluating a function for a given input? over a domain of inputs?
  • What are the main components of floating point numbers?
  • What is the typical relative accuracy in a floating point representation of a real number?
  • Why are subnormal numbers used?
  • How many digits can be lost during addition and multiplication of floating point numbers? (see the sketch after this list)
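
For the floating-point questions above, here is a minimal sketch (NumPy is an assumption; any IEEE double-precision environment behaves the same way) showing the typical relative accuracy of a double and how many accurate digits cancellation can destroy:

```python
import numpy as np

# Machine epsilon for IEEE double precision: roughly the best relative
# accuracy you can expect when representing a real number as a float.
eps = np.finfo(np.float64).eps
print(f"machine epsilon ~ {eps:.2e}")   # about 2.2e-16, i.e. ~16 accurate digits

# Cancellation: adding numbers of nearly equal magnitude and opposite sign
# can lose most of those digits.
a = 1.0 + 1e-12
b = -1.0
computed = a + b                        # exact answer is 1e-12
rel_err = abs(computed - 1e-12) / 1e-12
print(f"computed {computed:.3e}, relative error ~ {rel_err:.1e}")  # ~1e-4: about 12 digits lost
```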

Linear Systems

  • What is a vector norm? a normalized vector? a unit ball?
  • What defines a matrix norm? what is a matrix norm induced by a vector norm? what is the Frobenius norm?
  • What is the matrix condition number?
  • What is the conditioning of solving a linear system? of matrix-vector multiplication?
  • How are the propagated data error, forward error, and backward error related? in terms of conditioning?
  • How does one solve a triangular linear system? What is the cost?
  • What is LU factorization, when does it exist and when is it unique?
  • Why is pivoting necessary and what type of pivoting strategies are possible?
  • What is the cost of Gaussian elimination?
  • How can Gaussian elimination be done with elementary elimination and permutation matrices?
  • How do you solve a linear system given a pivoted LU factorization? (see the sketch after this list)
  • What are the Cholesky and $LDL^T$ factorizations? When can they be used and what are their advantages?
  • How can we solve a rank-1 perturbed problem via the Sherman-Morrison formula?
  • How can we take advantage of tridiagonal or banded structure in solving a linear system?
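
For the questions about triangular solves and pivoted LU above, the following is a minimal sketch (NumPy/SciPy are an assumption, not part of the course material) of back substitution and of solving a system from a pivoted LU factorization:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def back_substitution(U, c):
    """Solve Ux = c for upper-triangular U; roughly n^2/2 multiply-adds."""
    n = U.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
b = rng.standard_normal(5)

# A triangular solve on its own.
U = np.triu(A) + 5.0 * np.eye(5)     # shift the diagonal so U is safely nonsingular
print(np.allclose(back_substitution(U, b), np.linalg.solve(U, b)))

# Pivoted LU factorization (PA = LU), then a permutation and two triangular solves.
lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)
print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
print("cond(A):", np.linalg.cond(A))  # bounds how much the small residual can be amplified in the forward error
```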

Linear Least Squares

  • What is the conditioning of a linear system?
  • How can you solve a (square) linear system using the SVD?
  • How do you obtain the matrix 2-norm and condition number from the SVD? what are they for a rectangular matrix?
  • What is the reduced SVD?
  • Why is the SVD helpful for solving a (tall-and-skinny) least-squares problem? What is the residual in such a problem?
  • How can you solve a least-squares problem using the SVD?
  • Given an SVD of the matrix and a right-hand side, how would you find the 2-norm of the residual of a least-squares problem?
  • How would you use the SVD to solve a least-squares problem with a short-and-fat or tall-and-skinny matrix?
  • How can least squares problems be solved via the normal equations? What are the advantages and disadvantages of that?
  • What is QR factorization, when does it exist and is it unique?
  • What is a projection matrix?
  • What are the classical and modified Gram-Schmidt processes? What can you say about their stability?
  • What is the Householder QR factorization algorithm, and what can you say about its stability?
  • What is a Householder reflector matrix, what properties does it have?
  • What is a Givens rotation matrix, what properties does it have?
  • How can Givens rotations be used to factorize a sparse matrix?
  • How does one solve linear least squares problems using a QR factorization? (see the sketch after this list)
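
For the least-squares questions above, here is a minimal sketch (again assuming NumPy) of solving a tall-and-skinny least-squares problem via the reduced QR factorization and via the reduced SVD, including the 2-norm of the residual:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 4))        # tall-and-skinny: overdetermined system
b = rng.standard_normal(20)

# Via the reduced QR factorization A = QR: minimize ||Ax - b||_2 by solving R x = Q^T b.
Q, R = np.linalg.qr(A, mode="reduced")
x_qr = np.linalg.solve(R, Q.T @ b)      # R is small (4x4) and upper triangular

# Via the reduced SVD A = U S V^T: x = V S^{-1} U^T b.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)

# Both should match NumPy's own least-squares solver.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_qr, x_ref), np.allclose(x_svd, x_ref))

# 2-norm of the residual of the least-squares fit.
print("||Ax - b||_2 =", np.linalg.norm(A @ x_qr - b))
```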

Eigenvalue Problems

  • What is an eigenvector? an eigenvalue of a matrix? (i.e., know the definitions)
  • What is a similarity transformation?
  • What is the relationship between the SVD and the eigenvalue decomposition?
  • When are eigenvectors linearly independent?
  • What are the Jordan and Schur forms?
  • What is a normal matrix? a defective matrix? a diagonalizable matrix?
  • What is the multiplicity of an eigenvalue? what is a complex conjugate eigenvalue pair?
  • What is power iteration?
  • What can be obtained using power iteration?
  • What is normalized power iteration? What problem does it address? (see the sketch after this list)
  • Given an approximate eigenvector, how can you estimate eigenvalues? What is the Rayleigh Quotient? What is inverse iteration? Rayleigh Quotient iteration?
  • What is the conditioning of eigenvalues and eigenvectors in an eigenvalue problem?
  • What is orthogonal iteration? QR iteration? how are they related?
  • How can one reduce a matrix to Hessenberg form and why is it helpful?
  • How can one incorporate shifting into QR iteration?
  • What is a Krylov subspace? how is it related to a Companion matrix?
  • What is the Arnoldi method? the Lanczos method?
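
For the power-iteration questions above, here is a minimal sketch (NumPy assumed) of normalized power iteration with a Rayleigh quotient eigenvalue estimate:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
A = B + B.T                              # symmetric, so eigenvalues are real

# Normalized power iteration: repeatedly apply A and rescale so the iterate
# neither overflows nor underflows; it converges to a dominant eigenvector
# (assuming a unique eigenvalue of largest magnitude).
x = rng.standard_normal(6)
for _ in range(200):
    y = A @ x
    x = y / np.linalg.norm(y)

# The Rayleigh quotient of the iterate estimates the corresponding eigenvalue.
lam = x @ (A @ x)                        # x is already unit-norm
dominant = max(np.linalg.eigvalsh(A), key=abs)
print(f"power iteration: {lam:.6f}, reference: {dominant:.6f}")
```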