## Solving Systems Of Linear Equations

### July 20, 2010

In today’s exercise we continue the examination of matrix operations that we began previously. Our goal is to be able to solve a system of equations; along the way we will see two methods for decomposing matrices.

We begin with some terminology. All the matrices we will be looking at are square, meaning that they have the same number of rows and columns. A lower-triangular matrix L has all entries $L_{ij} = 0$ for $i < j$; thus, all entries above the main northwest-to-southeast diagonal are zero. An upper-triangular matrix U has all entries $U_{ij} = 0$ for $i > j$; thus, all entries below the main diagonal are zero. A lower- or upper-triangular matrix is unit lower- or upper-triangular if all the entries along the main diagonal are 1. A permutation matrix P has exactly one 1 in each row and column and 0 elsewhere; it is called a permutation matrix because multiplying a vector X by P has the effect of permuting the elements of X. The identity matrix I has 1 in each entry along the main diagonal and 0 elsewhere. A matrix M is singular if it has no inverse, that is, if there is no matrix $M^{-1}$ such that $M M^{-1} = I$.
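These definitions translate directly into predicates. Here is a minimal sketch in Haskell, assuming a matrix is represented as a list of rows; the names `isLowerTriangular`, `isUpperTriangular`, and `isPermutation` are chosen here for illustration:

```haskell
import Data.List (transpose)

-- Lower-triangular: every entry above the main diagonal (i < j) is zero.
isLowerTriangular :: (Eq a, Num a) => [[a]] -> Bool
isLowerTriangular m =
  and [x == 0 | (i, row) <- zip [0 :: Int ..] m
              , (j, x)   <- zip [0 ..] row, i < j]

-- Upper-triangular: every entry below the main diagonal (i > j) is zero.
isUpperTriangular :: (Eq a, Num a) => [[a]] -> Bool
isUpperTriangular m =
  and [x == 0 | (i, row) <- zip [0 :: Int ..] m
              , (j, x)   <- zip [0 ..] row, i > j]

-- Permutation: exactly one 1 (and 0 elsewhere) in each row and each column.
isPermutation :: (Eq a, Num a) => [[a]] -> Bool
isPermutation m = all oneHot m && all oneHot (transpose m)
  where oneHot xs = length (filter (== 1) xs) == 1 && all (`elem` [0, 1]) xs
```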

An LU decomposition of a matrix A finds two matrices L, which is unit lower-triangular, and U, which is upper-triangular, such that A = L U. The algorithm is called Gaussian elimination, and works from top to bottom. First, multiples of the first equation are subtracted from the other equations so that the first variable is removed from those equations. Then multiples of the second equation are subtracted from the remaining equations so that the second variable is removed from those equations. Then the third equation, and the fourth, and so on, until all the equations have been processed and the matrix is in upper-triangular form. Here is an example of the LU decomposition of matrix A into its factors L × U:

$\begin{pmatrix} 2 & 3 & 1 & 5 \\ 6 & 13 & 5 & 19 \\ 2 & 19 & 10 & 23 \\ 4 & 10 & 11 & 31 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 3 & 1 & 0 & 0 \\ 1 & 4 & 1 & 0 \\ 2 & 1 & 7 & 1 \end{pmatrix} \times \begin{pmatrix} 2 & 3 & 1 & 5 \\ 0 & 4 & 2 & 4 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 3 \end{pmatrix}$
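A factorization like this is easy to check by multiplying the factors back together. A small Haskell sketch (`matMul` is a naive row-by-column product written only for this check):

```haskell
import Data.List (transpose)

-- Naive matrix product: each result entry is a row of a dotted
-- with a column of b.
matMul :: Num a => [[a]] -> [[a]] -> [[a]]
matMul a b = [[sum (zipWith (*) row col) | col <- transpose b] | row <- a]

-- The example matrices from the text.
aEx, lEx, uEx :: [[Integer]]
aEx = [[2, 3, 1, 5], [6, 13, 5, 19], [2, 19, 10, 23], [4, 10, 11, 31]]
lEx = [[1, 0, 0, 0], [3, 1, 0, 0], [1, 4, 1, 0], [2, 1, 7, 1]]
uEx = [[2, 3, 1, 5], [0, 4, 2, 4], [0, 0, 1, 2], [0, 0, 0, 3]]

-- matMul lEx uEx == aEx
```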

There are two problems with LU decomposition: First, the algorithm divides by zero whenever a zero pivot appears, which happens for every singular matrix (and can happen for some nonsingular ones as well). Second, it is prone to numerical instability when the divisors are small. The solution is to rearrange, or permute, the equations so that the pivot is always the largest remaining element in the current column, a strategy known as partial pivoting, which greatly reduces the likelihood of numerical instability.

An improved decomposition is the LUP decomposition, which finds for an input matrix A three matrices L, U, and a permutation matrix P such that P A = L U. Rather than actually moving equations, the permutation matrix records the rearrangements. For example, here is the LUP decomposition of the matrix A given by P × A = L × U:

$\begin{pmatrix} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \end{pmatrix} \times \begin{pmatrix} 2 & 0 & 2 & 3/5 \\ 3 & 3 & 4 & -2 \\ 5 & 5 & 4 & 2 \\ -1 & -2 & 17/5 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 2/5 & 1 & 0 & 0 \\ -1/5 & 1/2 & 1 & 0 \\ 3/5 & 0 & 2/5 & 1 \end{pmatrix} \times \begin{pmatrix} 5 & 5 & 4 & 2 \\ 0 & -2 & 2/5 & -1/5 \\ 0 & 0 & 4 & -1/2 \\ 0 & 0 & 0 & -3 \end{pmatrix}$
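The same multiply-and-compare check works for the LUP example; using exact `Rational` arithmetic avoids any floating-point round-off in the comparison. A sketch (again with a naive `matMul` written just for the check):

```haskell
import Data.List (transpose)

matMul :: Num a => [[a]] -> [[a]] -> [[a]]
matMul a b = [[sum (zipWith (*) row col) | col <- transpose b] | row <- a]

-- The example matrices from the text, with exact Rationals for the
-- fractional entries.
pEx, aEx, lEx, uEx :: [[Rational]]
pEx = [[0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 1, 0, 0]]
aEx = [[2, 0, 2, 3/5], [3, 3, 4, -2], [5, 5, 4, 2], [-1, -2, 17/5, -1]]
lEx = [[1, 0, 0, 0], [2/5, 1, 0, 0], [-1/5, 1/2, 1, 0], [3/5, 0, 2/5, 1]]
uEx = [[5, 5, 4, 2], [0, -2, 2/5, -1/5], [0, 0, 4, -1/2], [0, 0, 0, -3]]

-- matMul pEx aEx == matMul lEx uEx
```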

Given the LUP decomposition, it is simple to solve a system of linear equations. Forward substitution solves the lower-triangular system from the top down: the first equation has only one unknown, and substituting its value into the second equation reduces that equation from two unknowns to one, and so on. Then back substitution solves the upper-triangular system from the bottom up, recovering the values of the variables of the original system. Here’s an example, where we wish to solve for the vector X given A X = B:

$\begin{pmatrix} 1 & 2 & 0 \\ 3 & 5 & 4 \\ 5 & 6 & 3 \end{pmatrix} X = \begin{pmatrix} 1/10 \\ 25/2 \\ 103/10 \end{pmatrix}$

The LUP decomposition P A = L U is

$\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \times \begin{pmatrix} 1 & 2 & 0 \\ 3 & 5 & 4 \\ 5 & 6 & 3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 3/5 & 1 & 0 \\ 1/5 & 4/7 & 1 \end{pmatrix} \times \begin{pmatrix} 5 & 6 & 3 \\ 0 & 7/5 & 11/5 \\ 0 & 0 & -13/7 \end{pmatrix}$,

the result of forward substitution L Y = P B is

$\begin{pmatrix} 1 & 0 & 0 \\ 3/5 & 1 & 0 \\ 1/5 & 4/7 & 1 \end{pmatrix} \times Y = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \times \begin{pmatrix} 1/10 \\ 25/2 \\ 103/10 \end{pmatrix} = \begin{pmatrix} 103/10 \\ 25/2 \\ 1/10 \end{pmatrix}$, giving $Y = \begin{pmatrix} 103/10 \\ 158/25 \\ -39/7 \end{pmatrix}$,

and the result of the back substitution U X = Y is

$\begin{pmatrix} 5 & 6 & 3 \\ 0 & 7/5 & 11/5 \\ 0 & 0 & -13/7 \end{pmatrix} \times X = \begin{pmatrix} 103/10 \\ 158/25 \\ -39/7 \end{pmatrix}$, giving $X = \begin{pmatrix} 1/2 \\ -1/5 \\ 3 \end{pmatrix}$,

which is the solution.
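The two substitution passes above can be sketched in Haskell, again assuming a matrix is a list of rows of exact `Rational`s; the names `forwardSub` and `backSub` are chosen here, not part of the exercise:

```haskell
-- Solve L y = b for lower-triangular L, first row first: by the time
-- row i is reached, the earlier unknowns are known, so each equation
-- has exactly one new unknown.
forwardSub :: Fractional a => [[a]] -> [a] -> [a]
forwardSub l b = foldl step [] (zip l b)
  where step ys (row, bi) =
          let (coeffs, pivot:_) = splitAt (length ys) row
          in ys ++ [(bi - sum (zipWith (*) coeffs ys)) / pivot]

-- Solve U x = y for upper-triangular U, last row first.
backSub :: Fractional a => [[a]] -> [a] -> [a]
backSub u y = foldr step [] (zip u y)
  where step (row, yi) xs =
          let (pivot:coeffs) = drop (length u - length xs - 1) row
          in (yi - sum (zipWith (*) coeffs xs)) / pivot : xs

-- The worked example from the text.
lEx, uEx :: [[Rational]]
lEx = [[1, 0, 0], [3/5, 1, 0], [1/5, 4/7, 1]]
uEx = [[5, 6, 3], [0, 7/5, 11/5], [0, 0, -13/7]]

bEx :: [Rational]
bEx = [103/10, 25/2, 1/10]  -- this is P B from the example

-- forwardSub lEx bEx == [103/10, 158/25, -39/7]
-- backSub uEx (forwardSub lEx bEx) == [1/2, -1/5, 3]
```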

Your task is to write functions that perform LU-decomposition and LUP-decomposition and solve systems of linear equations. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.


### 4 Responses to “Solving Systems Of Linear Equations”

1. Remco Niemeijer said

Since I’m currently at a conference I don’t have as much time to devote to this as I’d like, but here’s my implementation of LU decomposition:

```haskell
lu :: Fractional a => [[a]] -> ([[a]], [[a]])
lu = unzip . map unzip . elim where
    elim [] = []
    elim ~((r:rs):xs) = zip (1 : repeat 0) (r:rs) :
        zipWith (:) (map (\(y:_) -> (y / r, 0)) xs)
                    (elim $ map sub xs)
        where sub ~(y:ys) = zipWith (-) ys $ map (y / r *) rs
```


If I have some time before Friday I’ll see if I can work on the rest.

Also, you might want to fix the examples in the exercise, since they contain some errors (e.g. in the first example: 1×3 + 3×4 + 1×0 + 0×0 = 15, not 19). The versions used in the solution code are correct.

2. programmingpraxis said

Fixed four errors. I didn’t do that very well, did I?

3. […] Praxis – Solving Systems Of Linear Equations By Remco Niemeijer In yesterday’s Programming Praxis exercise our task is to implement some more matrix-related functions, […]

4. Remco Niemeijer said

My Haskell solution (see http://bonsaicode.wordpress.com/2010/07/21/programming-praxis-solving-systems-of-linear-equations/ for a version with comments):

```haskell
import Control.Arrow
import Data.List
import qualified Data.List.Key as K

mult :: Num a => [[a]] -> [[a]] -> [[a]]
mult a b = [map (sum . zipWith (*) r) $ transpose b | r <- a]

elim :: Fractional a => [a] -> [a] -> [a]
elim ~(x:xs) ~(y:ys) = zipWith (-) ys $ map (y / x *) xs

identity :: Num a => [[a]] -> [[a]]
identity = zipWith (zipWith const) (iterate (0:) (1 : repeat 0))

lu :: Fractional a => [[a]] -> ([[a]], [[a]])
lu = unzip . map unzip . f where
    f []      = []
    f ~(x:xs) = zip (1 : repeat 0) x :
                zipWith (:) (map (\(y:_) -> (y / head x, 0)) xs)
                            (f $ map (elim x) xs)

perm :: (Fractional a, Ord a) => [[a]] -> [[a]]
perm m = f $ zip (identity m) m where
    f [] = []
    f xs = a : f (map (second $ elim b) $ delete (a,b) xs)
        where (a,b) = K.maximum (abs . head . snd) xs

lup :: (Fractional a, Ord a) => [[a]] -> ([[a]], [[a]], [[a]])
lup xs = (perm xs, l, u) where (l,u) = lu $ mult (perm xs) xs

lupsolve :: (Fractional a, Ord a) => [[a]] -> [a] -> [a]
lupsolve a b = f y u where
    (p,l,u) = lup a
    y = foldl (\x (l', pb') -> x ++ [pb' - sum (zipWith (*) x l')]) []
              (zip l (concat . mult p $ map return b))
    f _ [] = []
    f ~(y':ys) ~((r:rs):us) = (y' - sum (zipWith (*) rs z)) / r : z
        where z = f ys $ map tail us
```