Major Learning Objectives

At the end of the semester, you should be able to:

Chapter 1: Scientific Computing

  • Calculate absolute and relative error, and identify when each is the appropriate measure.
  • Explain the difference between truncation and rounding error.
  • Calculate the forward error and backward error of a computed result and the condition number of a problem.
  • Understand how floating-point numbers are represented and the concept of machine epsilon.
  • Identify when catastrophic cancellation could occur (see the sketch after this list).
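
For concreteness, a minimal Python/NumPy sketch (illustrative only, not a prescribed implementation) of machine epsilon and catastrophic cancellation:

    import numpy as np

    # Machine epsilon: halve eps until 1 + eps/2 rounds to 1, leaving
    # eps = 2**-52 for IEEE double precision.
    eps = 1.0
    while 1.0 + eps / 2 != 1.0:
        eps /= 2
    print(eps, np.finfo(float).eps)       # both ~2.22e-16

    # Catastrophic cancellation: 1 - cos(x) subtracts nearly equal
    # numbers for tiny x; the identity 1 - cos(x) = 2*sin(x/2)**2 avoids it.
    x = 1e-8
    naive = (1 - np.cos(x)) / x**2        # 0.0: every digit lost
    stable = 2 * np.sin(x / 2)**2 / x**2  # ~0.5, the correct value
    print(naive, stable)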

Chapter 2: Linear Systems

  • Calculate the 1-norm, 2-norm, and infinity-norm of a vector, and interpret the matrix norms and condition numbers they induce.
  • Use the SVD to compute or analyze linear algebra problems (e.g. the 2-norm matrix condition number).
  • Analyze the conditioning of a linear system and the existence and uniqueness of its solution.
  • Solve triangular systems with forward or backward substitution.
  • Understand the algorithm for LU factorization with and without partial pivoting to solve a linear system, and explain the benefits of partial pivoting (see the sketch after this list).
  • Analyze the complexity of forward or backward substitution and LU factorization.
  • Know that special linear systems admit specialized methods, such as Cholesky factorization, banded LU, and the Sherman-Morrison formula.
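
A minimal sketch of LU factorization with partial pivoting plus forward/backward substitution (Python/NumPy, for exposition only; in practice use numpy.linalg.solve or scipy.linalg.lu_factor, and the name lu_solve below is ours):

    import numpy as np

    def lu_solve(A, b):
        """Solve Ax = b by in-place LU with partial pivoting."""
        A, b = A.astype(float).copy(), b.astype(float).copy()
        n = len(b)
        for k in range(n - 1):
            # Partial pivoting: bring the largest |entry| of column k to
            # the diagonal so every multiplier m satisfies |m| <= 1.
            p = k + np.argmax(np.abs(A[k:, k]))
            A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]             # entry of L
                A[i, k + 1:] -= m * A[k, k + 1:]
                b[i] -= m * b[k]                  # forward substitution, folded in
        # Backward substitution on the upper-triangular factor U.
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = np.array([[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(lu_solve(A, b), np.linalg.solve(A, b))  # should agree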

Chapter 3: Linear Least Squares

  • Describe the conditioning of a linear least squares problem informally.
  • Solve a linear least squares problem using the normal equations and explain why this approach may be unstable.
  • Outline how to test whether a matrix is orthogonal.
  • Understand the classical and modified Gram-Schmidt algorithms (see the sketch after this list).
  • Explain the general approach of Householder transformations and Givens rotations for computing QR factorizations.
  • Analyze the computational cost of Gram-Schmidt, Householder transformations, and Givens rotations.
  • Understand numerical rank and use the pseudoinverse to solve rank-deficient least squares problems.
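
A minimal sketch of modified Gram-Schmidt QR and its use for least squares (Python/NumPy; mgs_qr is our illustrative name, and numpy.linalg.qr is the robust Householder-based routine):

    import numpy as np

    def mgs_qr(A):
        """QR via modified Gram-Schmidt: A = QR, Q m-by-n, R n-by-n."""
        A = A.astype(float).copy()
        m, n = A.shape
        Q, R = np.zeros((m, n)), np.zeros((n, n))
        for k in range(n):
            R[k, k] = np.linalg.norm(A[:, k])
            Q[:, k] = A[:, k] / R[k, k]
            # MGS orthogonalizes the remaining columns against q_k right
            # away, which is more stable than classical Gram-Schmidt.
            for j in range(k + 1, n):
                R[k, j] = Q[:, k] @ A[:, j]
                A[:, j] -= R[k, j] * Q[:, k]
        return Q, R

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 3)), rng.standard_normal(20)
    Q, R = mgs_qr(A)
    print(np.linalg.norm(Q.T @ Q - np.eye(3)))   # near eps: Q is orthonormal
    x = np.linalg.solve(R, Q.T @ b)              # least squares: R x = Q^T b
    print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))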

Chapter 4: Eigenvalue Problems

  • Identify the forms attainable by similarity transformations for different types of matrices (e.g. any real symmetric matrix is orthogonally similar to a real diagonal matrix; any normal matrix is unitarily similar to a diagonal matrix).
  • Estimate the eigenvalue associated with a given eigenvector using the Rayleigh quotient.
  • Understand the algorithms for power iteration, inverse iteration, and Rayleigh quotient iteration, analyze their convergence rates, and identify which eigenvalue each converges to (see the sketch after this list).
  • Use deflation to remove known eigenvalues.
  • Apply orthogonal iteration and QR iteration (with shifts) to compute multiple eigenpairs.
  • Analyze the computational cost of power iteration, inverse iteration, Rayleigh quotient iteration, orthogonal iteration, and QR iteration. Describe how QR iteration can be accelerated by first reducing the matrix to upper Hessenberg form.
  • Understand the Arnoldi and Lanczos iterations for estimating eigenpairs within a Krylov subspace, and analyze their computational cost.
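
A minimal sketch of power iteration with a Rayleigh quotient eigenvalue estimate (Python/NumPy; illustrative only):

    import numpy as np

    def power_iteration(A, iters=100):
        """Converges to the dominant eigenpair when |lambda_1| > |lambda_2|
        and the start vector has a component along v_1."""
        x = np.random.default_rng(0).standard_normal(A.shape[0])
        for _ in range(iters):
            x = A @ x
            x /= np.linalg.norm(x)   # normalize to prevent overflow
        return x @ A @ x, x          # Rayleigh quotient (x has unit norm)

    A = np.array([[4.0, 1.0], [2.0, 3.0]])  # eigenvalues 5 and 2
    lam, v = power_iteration(A)
    print(lam)                              # ~5.0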

Chapter 5: Nonlinear Equations

  • Know the absolute condition number of the root-finding problem for nonlinear equations.
  • Define and know how to compute rates of convergence.
  • Understand the bisection method and analyze its convergence rate.
  • Know the convergence criteria for fixed-point iteration.
  • Understand univariate and multivariate Newton's method, including their cost and convergence rates (see the sketch after this list).
  • Understand the secant method and its trade-offs relative to Newton's method.
  • Understand safeguarding and quasi-Newton methods.
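
A minimal sketch of univariate Newton's method (Python; illustrative only, with no safeguarding):

    def newton(f, fprime, x0, tol=1e-12, maxit=50):
        """Quadratically convergent near a simple root, but may diverge
        from a poor start -- hence safeguards such as a bisection fallback."""
        x = x0
        for _ in range(maxit):
            step = f(x) / fprime(x)
            x -= step
            if abs(step) < tol * (1 + abs(x)):
                break
        return x

    print(newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0))  # ~1.41421356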

Chapter 6: Numerical Optimization

  • Understand the optimality conditions defining local minima for both unconstrained and constrained optimization problems.
  • Understand the golden section search algorithm and analyze its convergence rate (see the sketch after this list).
  • Understand the steepest descent method.
  • Reason about the optimality properties and uses of the conjugate gradient method.
  • Understand Newton's method for both one-dimensional and n-dimensional optimization problems.
  • Explain how safeguarding (in one dimension) and quasi-Newton methods (in n dimensions) improve the robustness of Newton's method.
  • Explain the benefit of the BFGS method compared to Newton's method.
  • Understand the approximation used in the Gauss-Newton method for solving nonlinear least squares problems.
  • Understand sequential quadratic programming and penalty/barrier functions for solving constrained optimization problems.
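
A minimal sketch of golden section search on a unimodal function (Python; illustrative only); each step shrinks the bracket by the constant factor $(\sqrt{5}-1)/2 \approx 0.618$, giving linear convergence, at the cost of only one new function evaluation per iteration:

    import math

    def golden_section(f, a, b, tol=1e-8):
        tau = (math.sqrt(5) - 1) / 2      # golden ratio conjugate
        x1, x2 = b - tau * (b - a), a + tau * (b - a)
        f1, f2 = f(x1), f(x2)
        while b - a > tol:
            if f1 > f2:                   # minimum lies in [x1, b]
                a, x1, f1 = x1, x2, f2
                x2 = a + tau * (b - a)
                f2 = f(x2)
            else:                         # minimum lies in [a, x2]
                b, x2, f2 = x2, x1, f1
                x1 = b - tau * (b - a)
                f1 = f(x1)
        return (a + b) / 2

    print(golden_section(lambda x: (x - 1.3)**2, 0.0, 2.0))  # ~1.3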

Chapter 7: Interpolation

  • Set up and solve Vandermonde systems.
  • Know Horner's rule for evaluating polynomials (see the sketch after this list).
  • Know the trade-offs of the monomial, Lagrange, and Newton bases for polynomial interpolation.
  • Construct an orthogonal polynomial basis and understand its benefits.
  • Explain informally how Chebyshev nodes can alleviate Runge's phenomenon.
  • Understand the interpolation error bound.
  • Understand Hermite interpolation and splines, and be able to count the number of free parameters.
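
A minimal sketch of interpolation in the monomial basis: build and solve a Vandermonde system, then evaluate with Horner's rule (Python/NumPy; fine for small examples, though Vandermonde matrices become ill-conditioned as the degree grows):

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.sin(x)

    # Vandermonde system: V[i, j] = x_i**j; solve V c = y for coefficients.
    V = np.vander(x, increasing=True)
    c = np.linalg.solve(V, y)

    def horner(c, t):
        """Evaluate c[0] + c[1] t + ... + c[n] t^n with n multiplications."""
        p = 0.0
        for coef in reversed(c):
            p = p * t + coef
        return p

    print(horner(c, 1.5), np.sin(1.5))  # interpolant vs. true value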

Chapter 8: Numerical Integration and Differentiation

  • Calculate quadrature weights using the method of undetermined coefficients.
  • Understand the midpoint, trapezoid, and Simpson's rules.
  • Understand the benefit of Clenshaw-Curtis quadrature compared to Newton-Cotes.
  • Describe the advantages and methodology of Gaussian quadrature.
  • Know the polynomial degree of exactness of Newton-Cotes and Gaussian quadrature, and estimate the error of Newton-Cotes quadrature.
  • Understand algorithms for composite quadrature.
  • Compute derivatives numerically using finite differences, and analyze the order of accuracy through Taylor expansions.
  • Use Richardson extrapolation to eliminate the leading-order error term in numerical differentiation and quadrature (see the sketch after this list).
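
A minimal sketch of the composite trapezoid rule plus one step of Richardson extrapolation (Python/NumPy; illustrative only), which cancels the $O(h^2)$ error term:

    import numpy as np

    def trapezoid(f, a, b, n):
        """Composite trapezoid rule with n subintervals; error O(h^2)."""
        x = np.linspace(a, b, n + 1)
        h = (b - a) / n
        return h * (f(x[0]) / 2 + f(x[1:-1]).sum() + f(x[-1]) / 2)

    T_h = trapezoid(np.sin, 0, np.pi, 8)
    T_h2 = trapezoid(np.sin, 0, np.pi, 16)  # halve the step size
    richardson = (4 * T_h2 - T_h) / 3       # O(h^4); reproduces Simpson's rule
    print(T_h, T_h2, richardson)            # exact value is 2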

Chapter 9: Initial Value Problems for Ordinary Differential Equations

  • Transform a higher-order ODE into a first-order system.
  • Specify the conditions under which an ODE IVP has a unique solution.
  • Define stable, unstable and asymptotically stable ODE. Specify for what $\lambda$ the ODE $y'=\lambda y$ is stable, unstable and asymptotically stable.
  • Quantify local error, global error, and accuracy of a method.
  • Understand the forward Euler and backward Euler methods and specify their stability regions (see the sketch after this list).
  • Describe the challenge posed by stiff ODEs and identify which methods address it.
  • Understand multistep and multistage methods.
  • Know how quadrature rules relate to the derivation of multistage (Runge-Kutta) methods.
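
A minimal sketch of forward Euler on the test problem $y' = \lambda y$ (Python; illustrative only), showing the stability restriction $|1 + h\lambda| \le 1$:

    def forward_euler(f, y0, t0, t1, h):
        """Explicit Euler: y_{k+1} = y_k + h f(t_k, y_k)."""
        t, y = t0, y0
        while t < t1 - 1e-12:
            y = y + h * f(t, y)
            t += h
        return y

    lam = -10.0
    f = lambda t, y: lam * y   # exact solution decays: y(1) = e**-10
    # Stability requires |1 + h*lam| <= 1, i.e. h <= 0.2 here.
    print(forward_euler(f, 1.0, 0.0, 1.0, 0.125))  # |1 + h*lam| = 0.25: decays
    print(forward_euler(f, 1.0, 0.0, 1.0, 0.25))   # |1 + h*lam| = 1.5: blows up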

Chapter 10: Boundary Value Problems for Ordinary Differential Equations

  • Know when the solution of a linear first-order ODE BVP exists.
  • Know that ODE BVP solutions can be expressed in terms of a Green's function.
  • Outline the ideas behind the shooting method and multiple shooting.
  • Understand how to use finite differences to discretize differential equations (see the sketch after this list).
  • Understand choices and tradeoffs involved in collocation and weighted residual methods.
  • Understand the role of test functions in the Galerkin method and the purpose of the weak form.
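
A minimal sketch of a centered finite difference discretization of the BVP $u'' = f(x)$ on $(0, 1)$ with $u(0) = u(1) = 0$ (Python/NumPy; illustrative only), which reduces the BVP to a tridiagonal linear system:

    import numpy as np

    n = 50                                  # interior grid points
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    f = -np.pi**2 * np.sin(np.pi * x)       # exact solution: sin(pi x)

    # Second difference stencil (u_{i-1} - 2 u_i + u_{i+1}) / h^2.
    A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f)
    print(np.max(np.abs(u - np.sin(np.pi * x))))  # O(h^2) error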

Chapter 11: Partial Differential Equations

  • Classify PDEs as hyperbolic, parabolic, or elliptic.
  • Understand the relationship between PDEs and ODEs through characteristic curves.
  • Use finite difference methods or finite element methods to discretize a simple PDE.
  • Know basic stability relationships (CFL condition) for model time-dependent PDEs.
  • Understand basic sparse iterative methods: Jacobi, Gauss-Seidel, and conjugate gradient (see the sketch after this list).
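
A minimal sketch of Jacobi iteration (Python/NumPy; illustrative only); it converges for strictly diagonally dominant matrices such as those arising from discretized elliptic PDEs, and Gauss-Seidel differs only in using updated entries as soon as they are available:

    import numpy as np

    def jacobi(A, b, iters=100):
        """Jacobi sweep: x_{k+1} = D^{-1} (b - (A - D) x_k)."""
        D = np.diag(A)                # diagonal part of A
        R = A - np.diag(D)            # off-diagonal remainder
        x = np.zeros_like(b)
        for _ in range(iters):
            x = (b - R @ x) / D       # each sweep costs one matrix-vector product
        return x

    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(jacobi(A, b), np.linalg.solve(A, b))  # should agree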