| What | Where |
|---|---|
| Time/place | Wed/Fri 2:00pm-3:15pm, 1109 Siebel / Catalog |
| Class URL | https://bit.ly/fastalg-f19 |
| Class recordings | Watch » |
| Piazza | Discuss » |
| Calendar | View » |
Many of the algorithms of introductory scientific computing have super-linear runtime scaling. Gaussian elimination and LU decomposition are good examples: their runtime scales as $O(n^3)$ in the number of unknowns $n$. Even simple matrix-vector multiplication exhibits quadratic scaling. Problems in scientific computing, especially those arising from questions in science and engineering, often demand large-scale computation to achieve acceptable fidelity, and such computations will not tolerate super-linear, let alone quadratic or cubic, scaling.
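To make the quadratic scaling concrete, here is a small illustrative sketch (all names are made up for illustration) that counts the multiply-adds in a naive dense matrix-vector product written in Python with numpy:

```python
import numpy as np

def matvec_opcount(n):
    """Naive dense matrix-vector product; returns the multiply-add count."""
    A = np.ones((n, n))
    x = np.ones(n)
    y = np.zeros(n)
    ops = 0
    for i in range(n):
        for j in range(n):
            y[i] += A[i, j] * x[j]  # one multiply-add per matrix entry
            ops += 1
    return ops

print(matvec_opcount(100))  # 10000 = 100**2
print(matvec_opcount(200))  # 40000: doubling n quadruples the work
```

Doubling the problem size quadruples the work, which is exactly the $O(n^2)$ behavior that fast algorithms aim to avoid.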
This class will teach you a set of techniques and ideas that successfully reduce this asymptotic cost for an important set of operations; this is what earns these techniques the name "fast algorithms". We will begin by examining some of these ideas from a linear-algebraic perspective, where large cost gains can be achieved for matrices with special structure. We will then specialize to PDE boundary value problems, which give rise to many of the largest-scale computations. We will see that integral equations are the natural generalization of the linear-algebraic tools encountered earlier, and we will understand the mathematical and algorithmic foundations that make them powerful tools for computation. Throughout, we will pay close attention to the idea of representation, i.e. the choice of what the numerical unknowns of the system to be solved should be.
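As a first taste of the linear-algebraic perspective, here is a sketch (assuming, for illustration, a matrix with a known rank-$k$ factorization $A = UV^T$) of how special structure turns an $O(n^2)$ matrix-vector product into an $O(nk)$ one:

```python
import numpy as np

# Illustrative sketch: a rank-k matrix A = U V^T admits an O(n k)
# matrix-vector product if we never form the dense n-by-n matrix.
rng = np.random.default_rng(0)
n, k = 1000, 5
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
x = rng.standard_normal(n)

y_dense = (U @ V.T) @ x  # forms A explicitly: O(n^2) storage and work
y_fast = U @ (V.T @ x)   # reassociated: O(n k) work, no n-by-n matrix

print(np.allclose(y_dense, y_fast))  # the two agree up to roundoff
```

The entire gain comes from reassociating the product, which is only possible because we chose to represent $A$ by its factors rather than by its entries, a first instance of the representation question above.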
CAUTION!
These scribbled PDFs are an unedited reflection of what we wrote during class. They need to be viewed in the context of the class discussion that led to them. See the lecture videos for that.
If you would like actual, self-contained class notes, look in the outline above.
These scribbles are provided here as a record of our class discussion.
By continuing to read them, you acknowledge that these files are provided as supplementary material on an as-is basis.
We will be using Python with the libraries numpy, scipy, and matplotlib for assignments. No other languages are permitted. Python has a very gentle learning curve, so you should feel at home quickly even if you have never worked in Python before.
While you are free to install Python and numpy on your own computer to do your homework, the only supported environment is the supplied virtual machine image.
- For randomized linear algebra: "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions" by Halko/Martinsson/Tropp
- For FMMs:
  - "A fast algorithm for particle simulations" by Greengard and Rokhlin
  - "A Fast Adaptive Multipole Algorithm for Particle Simulations"
- For numerical linear algebra:
  - Golub and van Loan is the definitive reference, with an emphasis on the "reference" part.
  - Trefethen and Bau is less comprehensive but more approachable.
  - Mike Heath's scientific computing book also contains almost everything you actually need to know about numerical linear algebra.
For the functional analysis basics we need, as well as the integral equation material, the best match for the course is Kress's book on linear integral equations. The references in the notes are to the second edition, linked above. A third edition is also available.
The book can be downloaded as an ebook through the UIUC library's subscription to SpringerLink.
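To give a flavor of the randomized linear algebra referenced above, here is a sketch of a randomized range finder in the spirit of the Halko/Martinsson/Tropp paper (function names, the oversampling parameter `p`, and the test problem are all illustrative choices, not the paper's exact prescription):

```python
import numpy as np

def randomized_range_finder(A, k, p=10, seed=None):
    """Sketch of a randomized range finder: sample the range of A with a
    Gaussian test matrix, then orthonormalize the samples."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((A.shape[1], k + p))  # random test matrix
    Q, _ = np.linalg.qr(A @ omega)  # orthonormal basis for the sampled range
    return Q

# Usage: recover the range of a matrix with exact rank k.
rng = np.random.default_rng(1)
n, k = 300, 10
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
Q = randomized_range_finder(A, k, seed=2)

# Projecting A onto range(Q) should reproduce it to near machine precision.
err = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
print(err)
```

With an exactly rank-$k$ input, the $k+p$ Gaussian samples capture the range almost surely; for numerically low-rank matrices, the paper's analysis bounds the error in terms of the neglected singular values.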