Copyright (C) 2020 Andreas Kloeckner
import numpy as np
import numpy.linalg as la
A = np.random.randn(3, 3)
Let's start from regular old (modified) Gram-Schmidt:
Q = np.zeros(A.shape)
q = A[:, 0]
Q[:, 0] = q/la.norm(q)
# -----------
q = A[:, 1]
coeff = np.dot(Q[:, 0], q)
q = q - coeff*Q[:, 0]
Q[:, 1] = q/la.norm(q)
# -----------
q = A[:, 2]
coeff = np.dot(Q[:, 0], q)
q = q - coeff*Q[:, 0]
coeff = np.dot(Q[:, 1], q)
q = q - coeff*Q[:, 1]
Q[:, 2] = q/la.norm(q)
Q.dot(Q.T)
array([[  1.00000000e+00,   6.15868752e-17,  -5.86239841e-16],
       [  6.15868752e-17,   1.00000000e+00,  -3.18779032e-16],
       [ -5.86239841e-16,  -3.18779032e-16,   1.00000000e+00]])
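The three unrolled steps above generalize to a loop over columns; a sketch (assuming the columns of the input are linearly independent, so no norm is zero):

```python
import numpy as np
import numpy.linalg as la

def gram_schmidt(A):
    """Orthonormalize the columns of A via modified Gram-Schmidt."""
    Q = np.zeros(A.shape)
    for j in range(A.shape[1]):
        q = A[:, j].copy()
        for i in range(j):
            # subtract the component of the *current* q along each
            # previously computed column (this is what makes it "modified")
            q = q - np.dot(Q[:, i], q)*Q[:, i]
        Q[:, j] = q/la.norm(q)
    return Q

Qloop = gram_schmidt(np.random.randn(3, 3))
```

As above, `Qloop.T @ Qloop` comes out as the identity up to roundoff.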
Now we want to keep track of which multiple of each earlier vector got subtracted from each later vector, in the style of an elimination matrix.
Let's call that matrix $R$.
R = np.zeros((A.shape[0], A.shape[0]))
Q = np.zeros(A.shape)
q = A[:, 0]
Q[:, 0] = q/la.norm(q)
R[0,0] = la.norm(q)
# -----------
q = A[:, 1]
coeff = np.dot(Q[:, 0], q)
R[0,1] = coeff
q = q - coeff*Q[:, 0]
Q[:, 1] = q/la.norm(q)
R[1,1] = la.norm(q)
# -----------
q = A[:, 2]
coeff = np.dot(Q[:, 0], q)
R[0,2] = coeff
q = q - coeff*Q[:, 0]
coeff = np.dot(Q[:, 1], q)
R[1,2] = coeff
q = q - coeff*Q[:, 1]
Q[:, 2] = q/la.norm(q)
R[2,2] = la.norm(q)
R
array([[ 0.37598334,  1.41035995, -2.41234028],
       [ 0.        ,  0.79661434,  1.04638607],
       [ 0.        ,  0.        ,  0.54718387]])
la.norm(Q@R - A)
5.5511151231257827e-17
This is called QR factorization: $A = QR$, where $Q$ has orthonormal columns and $R$ is upper triangular.
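The whole column-by-column procedure can be wrapped into one function; a sketch of the same process, recording each coefficient in $R$ as it is computed:

```python
import numpy as np
import numpy.linalg as la

def qr_mgs(A):
    """QR factorization via modified Gram-Schmidt."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        q = A[:, j].copy()
        for i in range(j):
            # coefficient of Q[:, i] in the current column
            R[i, j] = np.dot(Q[:, i], q)
            q = q - R[i, j]*Q[:, i]
        # the leftover length becomes the diagonal entry
        R[j, j] = la.norm(q)
        Q[:, j] = q/R[j, j]
    return Q, R

A = np.random.randn(3, 3)
Q, R = qr_mgs(A)
print(la.norm(Q @ R - A))  # small residual, as above
```

NumPy's built-in `la.qr(A)` computes the same factorization, though possibly with the signs of corresponding columns of $Q$ and rows of $R$ flipped.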