Author: David Merkley
Language: Python
Disclaimer: Because of time constraints, all of the code used in this software manual is included directly in the tasks below; the code necessary to complete each task is listed with that task.
Task 1
Routine Name: DiagonalSquareMatrix
Description/Purpose: This routine creates a random square matrix that is diagonally dominant; the matrix output below shows that such a matrix was created. The second half of the task was to update my row echelon form and back substitution routines; the updated row echelon code appears in Task 4. The solution vector output shows that the code is functional.
Implementation/Code:
import random

# Build an n x n matrix with n on the diagonal and random values in [0, 1)
# everywhere else, so every diagonal entry dominates the rest of its row.
def diagonalSquareMatrix(n):
    return [[n * 1 if i == j else random.random() for j in range(n)] for i in range(n)]

# rowEchelonForm is defined in Task 4 and must be in scope for this call.
def rowEchelonFormAndBackSub(A, b=[], mutation=False, leaveUpperTriangleImplicit=True, scalePartialPivoting=False):
    return backSub(rowEchelonForm(A, b, mutation, leaveUpperTriangleImplicit, scalePartialPivoting))

# Back substitution; when b is empty, A is treated as an augmented matrix
# whose last column holds the right-hand side.
def backSub(A, b=[]):
    if (len(b) != 0):
        n = len(b)
        ans = [x for x in b]
    else:
        n = len(A)
        m = len(A[0])
        ans = [A[i][m-1] for i in range(n)]
    for i in range(n - 1, -1, -1):
        for j in range(n - 1, i, -1):
            ans[i] -= A[i][j] * ans[j]
        ans[i] /= A[i][i]
    return ans
A1 = diagonalSquareMatrix(5)
print(A1)
b1 = [random.random() for i in range(5)]
sol1 = rowEchelonFormAndBackSub(A1, b1)
print(sol1)
Output:
[[5, 0.21540784825555948, 0.5478104241350467, 0.10767492321682837, 0.5453005118211416],
[0.9728476621676051, 5, 0.9750209247716793, 0.5793394116128517, 0.0561255715888469],
[0.30945144480384124, 0.27317941231985243, 5, 0.19696315679839416, 0.04747237331471488],
[0.9495294036740869, 0.7390341971312506, 0.11547768090459576, 5, 0.07443969390818928],
[0.3612979947460463, 0.6736964639630165, 0.3625881418562986, 0.5470493378317955, 5]]
[0.5856186074585928, 0.853940346273862, 0.6353852563163455, 0.9553241516078023, 0.5414693010523792]
Task 2
Routine Name: LUFactorization
Description/Purpose: This routine computes the LU factorization of a matrix, and the factorization is then used to solve a linear system. I slightly modified the forward substitution to accept the combined LU matrix, which stores the unit lower triangle's multipliers below the diagonal. The vector output shows that it works.
Implementation/Code:
import random

# Doolittle LU factorization stored in a single matrix: the multipliers of the
# unit lower triangle go below the diagonal, U goes on and above it.
def LUFactorization(A, mutation=False):
    if (mutation is True):
        LU = A
    else:
        LU = [[a for a in row] for row in A]
    n = len(LU)
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = LU[i][k] / LU[k][k]
            for j in range(k + 1, n):
                LU[i][j] -= factor * LU[k][j]
            LU[i][k] = factor
    return LU

def matrixVectorMultiply(A, b):
    n = len(b)
    ans = [0] * n
    for i in range(n):
        for j in range(n):
            ans[i] += A[i][j] * b[j]
    return ans

# Solve LUx = b: forward substitution with the implicit unit diagonal,
# then back substitution with U.
def computeLU(LU, b):
    x = forwardSub(LU, b, True)
    return backSub(LU, x)

def backSub(A, b=[]):
    if (len(b) != 0):
        n = len(b)
        ans = [x for x in b]
    else:
        n = len(A)
        m = len(A[0])
        ans = [A[i][m-1] for i in range(n)]
    for i in range(n - 1, -1, -1):
        for j in range(n - 1, i, -1):
            ans[i] -= A[i][j] * ans[j]
        ans[i] /= A[i][i]
    return ans

def forwardSub(A, b, diagonalOne=False):
    n = len(b)
    ans = [x for x in b]
    for i in range(n):
        for j in range(i):
            ans[i] -= A[i][j] * ans[j]
        if not diagonalOne:
            ans[i] /= A[i][i]
    return ans

def diagonalSquareMatrix(n):
    return [[n * 1 if i == j else random.random() for j in range(n)] for i in range(n)]

A1 = diagonalSquareMatrix(5)
b1 = [random.random() for i in range(5)]
LU1 = LUFactorization(A1)
print(LU1)
solution = computeLU(LU1, b1)
print(solution)
Output:
[[5, 0.6977048808069973, 0.3400441579289524, 0.34276770812285196, 0.4271339619909261],
[0.035424915732505634, 4.975283863391255, 0.675908791100081, 0.07771192817465528, 0.5950820436691011],
[0.1353632732840224, 0.0675747445140766, 4.9082961458482215, 0.21397742287187674, -0.0006002443992702833],
[0.05709928430638074, 0.19052283347860097, 0.0077237148042717625, 4.963969611843111, 0.151506310211174],
[0.023284870607333773, 0.15929574852320638, 0.17373852603574935, 0.17740522349718194, 4.868486476136998]]
[0.17663859799289594, 0.0933677750010876, 0.06840328075003192, 0.023077915215681758, 0.15841405947166495]
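One way to check the factorization (a sketch of my own, not required by the task) is to rebuild the original matrix from the combined LU storage and compare it entrywise with A. This assumes LUFactorization and diagonalSquareMatrix above are in scope:

def reconstructFromLU(LU):
    # A[i][j] = sum over k of L[i][k] * U[k][j], where L has an implicit unit
    # diagonal and its multipliers are stored below the diagonal of LU.
    n = len(LU)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            total = 0.0
            for k in range(min(i, j) + 1):
                lik = LU[i][k] if k < i else 1.0
                total += lik * LU[k][j]
            A[i][j] = total
    return A

A1 = diagonalSquareMatrix(5)
LU1 = LUFactorization(A1)
A1check = reconstructFromLU(LU1)
# Largest entrywise difference; expect roughly machine precision.
print(max(abs(A1[i][j] - A1check[i][j]) for i in range(5) for j in range(5)))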
Task 3
Routine Name: HilbertMatrix
Description/Purpose: First, a function generates a Hilbert matrix. Then the LU solver from Task 2 is tested on it. The output shows that the computed solution becomes less accurate as n grows, which makes sense because the Hilbert matrix is ill-conditioned.
Implementation/Code:
# The n x n Hilbert matrix: entries are 1/(i + j + 1) with zero-based indices,
# equivalently 1/(i + j - 1) with one-based indices.
def hilbertMatrix(n):
    return [[1/(i + j + 1) for j in range(n)] for i in range(n)]

def printMatrix(A, ind=0):
    n = len(A[0])
    for row in A:
        printVector(row, ind)

# matrixVectorMultiply, LUFactorization, and computeLU are defined in Task 2,
# and printVector in Task 5; they must be in scope here.
printMatrix(hilbertMatrix(5))
for n in range(4, 11):
    A = hilbertMatrix(n)
    b = matrixVectorMultiply(A, [1]*n)
    LU = LUFactorization(A)
    solution = computeLU(LU, b)
    printVector(solution)
Output:
1.00000 0.50000 0.33333 0.25000 0.20000
0.50000 0.33333 0.25000 0.20000 0.16667
0.33333 0.25000 0.20000 0.16667 0.14286
0.25000 0.20000 0.16667 0.14286 0.12500
0.20000 0.16667 0.14286 0.12500 0.11111
1.00000 1.00000 1.00000 1.00000
1.00000 1.00000 1.00000 1.00000 1.00000
1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
1.00000 1.00000 1.00000 1.00000 0.99999 1.00001 0.99998 1.00001 1.00000
1.00000 1.00000 1.00000 1.00002 0.99990 1.00027 0.99956 1.00042 0.99978 1.00005
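To make the loss of accuracy easier to quantify, the largest deviation from the exact all-ones solution can be printed for each n. This is a small extension of my own, assuming the same routines are in scope:

for n in range(4, 11):
    A = hilbertMatrix(n)
    b = matrixVectorMultiply(A, [1]*n)
    solution = computeLU(LUFactorization(A), b)
    # Infinity-norm error against the exact solution of all ones.
    print("n =", n, "max error =", max(abs(x - 1) for x in solution))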
Task 4
Routine Name: RowEchelonForm
Description/Purpose: I modified my original rowEchelonForm code to add a scalePartialPivoting argument so that Task 5 can be completed with scaled partial pivoting. The code can be seen below.
Implementation/Code:
def rowEchelonForm(A, b=[], mutation=False, leaveUpperTriangularImplicit=False, scalePartialPivoting=False):
    m = len(A)
    n = len(A[0])
    if (mutation):
        A2 = A
    else:
        A2 = [[A[i][j] for j in range(n)] for i in range(m)]
    if (scalePartialPivoting):
        # Scale factor for each row is its largest magnitude; rows are
        # reordered through indexMap rather than swapped in place.
        # Absolute values are used so rows with negative entries pivot correctly.
        s = [max(abs(x) for x in row) for row in A2]
        indexMap = [x for x in range(m)]
        for k in range(m - 1):
            maxk = k
            maxScale = abs(A2[indexMap[k]][k]) / s[indexMap[k]]
            for i in range(k + 1, m):
                iScale = abs(A2[indexMap[i]][k]) / s[indexMap[i]]
                if (iScale > maxScale):
                    maxk = i
                    maxScale = iScale
            indexMap[k], indexMap[maxk] = indexMap[maxk], indexMap[k]
    else:
        indexMap = range(m)
    if (len(b) != 0):
        # Append b as an extra column so elimination updates it as well.
        for i in range(m):
            A2[i].append(b[i])
        n += 1
    for k in range(m - 1):
        krow = indexMap[k]
        for i in range(k + 1, m):
            irow = indexMap[i]
            factor = A2[irow][k] / A2[krow][k]
            for j in range(k + 1, n):
                A2[irow][j] -= factor * A2[krow][j]
    if (not leaveUpperTriangularImplicit):
        for i in range(m):
            imap = indexMap[i]
            for j in range(i):
                A2[imap][j] = 0
    return [A2[indexMap[i]] for i in range(m)]

def backSub(A, b=[]):
    if (len(b) != 0):
        n = len(b)
        ans = [x for x in b]
    else:
        n = len(A)
        m = len(A[0])
        ans = [A[i][m-1] for i in range(n)]
    for i in range(n - 1, -1, -1):
        for j in range(n - 1, i, -1):
            ans[i] -= A[i][j] * ans[j]
        ans[i] /= A[i][i]
    return ans

def rowEchelonFormAndBackSub(A, b=[], mutation=False, leaveUpperTriangleImplicit=True, scalePartialPivoting=False):
    return backSub(rowEchelonForm(A, b, mutation, leaveUpperTriangleImplicit, scalePartialPivoting))
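As a small usage sketch of my own (not part of the task), the routine can be exercised on a system whose leading pivot is tiny, where scaled partial pivoting is expected to reorder the rows before elimination. The 3x3 matrix here is made up for illustration; the exact solution is [1, 1, 1]:

A = [[0.0001, 1.0, 1.0],
     [2.0,    1.0, 3.0],
     [1.0,    4.0, 2.0]]
b = [sum(row) for row in A]  # right-hand side chosen so the solution is all ones
print(rowEchelonFormAndBackSub(A, b, scalePartialPivoting=True))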
Task 5
Routine Name: HilbertMatrix
Description/Purpose: The routine from Task 4 is tested here. At higher n, scaled partial pivoting helps somewhat, but the accuracy is still poor; something else would need to be done if a more accurate solution were required.
Implementation/Code:
def hilbertMatrix(n):
    return [[1/(i + j + 1) for j in range(n)] for i in range(n)]

def matrixVectorMultiply(A, b):
    n = len(b)
    ans = [0] * n
    for i in range(n):
        for j in range(n):
            ans[i] += A[i][j] * b[j]
    return ans

def LUFactorization(A, mutation=False):
    if (mutation is True):
        LU = A
    else:
        LU = [[a for a in row] for row in A]
    n = len(LU)
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = LU[i][k] / LU[k][k]
            for j in range(k + 1, n):
                LU[i][j] -= factor * LU[k][j]
            LU[i][k] = factor
    return LU

def computeLU(LU, b):
    x = forwardSub(LU, b, True)
    return backSub(LU, x)

def backSub(A, b=[]):
    if (len(b) != 0):
        n = len(b)
        ans = [x for x in b]
    else:
        n = len(A)
        m = len(A[0])
        ans = [A[i][m-1] for i in range(n)]
    for i in range(n - 1, -1, -1):
        for j in range(n - 1, i, -1):
            ans[i] -= A[i][j] * ans[j]
        ans[i] /= A[i][i]
    return ans

def forwardSub(A, b, diagonalOne=False):
    n = len(b)
    ans = [x for x in b]
    for i in range(n):
        for j in range(i):
            ans[i] -= A[i][j] * ans[j]
        if not diagonalOne:
            ans[i] /= A[i][i]
    return ans
def rowEchelonForm(A, b=[], mutation=False, leaveUpperTriangularImplicit=False, scalePartialPivoting=False):
    m = len(A)
    n = len(A[0])
    if (mutation):
        A2 = A
    else:
        A2 = [[A[i][j] for j in range(n)] for i in range(m)]
    if (scalePartialPivoting):
        s = [max(abs(x) for x in row) for row in A2]
        indexMap = [x for x in range(m)]
        for k in range(m - 1):
            maxk = k
            maxScale = abs(A2[indexMap[k]][k]) / s[indexMap[k]]
            for i in range(k + 1, m):
                iScale = abs(A2[indexMap[i]][k]) / s[indexMap[i]]
                if (iScale > maxScale):
                    maxk = i
                    maxScale = iScale
            indexMap[k], indexMap[maxk] = indexMap[maxk], indexMap[k]
    else:
        indexMap = range(m)
    if (len(b) != 0):
        for i in range(m):
            A2[i].append(b[i])
        n += 1
    for k in range(m - 1):
        krow = indexMap[k]
        for i in range(k + 1, m):
            irow = indexMap[i]
            factor = A2[irow][k] / A2[krow][k]
            for j in range(k + 1, n):
                A2[irow][j] -= factor * A2[krow][j]
    if (not leaveUpperTriangularImplicit):
        for i in range(m):
            imap = indexMap[i]
            for j in range(i):
                A2[imap][j] = 0
    return [A2[indexMap[i]] for i in range(m)]

def rowEchelonFormAndBackSub(A, b=[], mutation=False, leaveUpperTriangleImplicit=True, scalePartialPivoting=False):
    return backSub(rowEchelonForm(A, b, mutation, leaveUpperTriangleImplicit, scalePartialPivoting))
def printVector(row, ind=0):
    print(" "*ind, end="")
    for x in row:
        print("{:10.5f}".format(x), end="")
    print()

def printMatrix(A, ind=0):
    n = len(A[0])
    for row in A:
        printVector(row, ind)

print("Without Scaled Partial Pivoting")
for n in range(4, 20):
    A = hilbertMatrix(n)
    b = matrixVectorMultiply(A, [1]*n)
    sol = rowEchelonFormAndBackSub(A, b, mutation=True, scalePartialPivoting=False)
    print("n=", n)
    printVector(sol)
print("\nWith Scaled Partial Pivoting")
for n in range(4, 20):
    A = hilbertMatrix(n)
    b = matrixVectorMultiply(A, [1]*n)
    sol = rowEchelonFormAndBackSub(A, b, mutation=True, scalePartialPivoting=True)
    print("n=", n)
    printVector(sol)
Output:
Without Scaled Partial Pivoting
n= 4
1.00000 1.00000 1.00000 1.00000
n= 5
1.00000 1.00000 1.00000 1.00000 1.00000
n= 6
1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
n= 7
1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
n= 8
1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
n= 9
1.00000 1.00000 1.00000 1.00000 0.99999 1.00001 0.99998 1.00001 1.00000
n= 10
1.00000 1.00000 1.00000 1.00002 0.99990 1.00027 0.99956 1.00042 0.99978 1.00005
n= 11
1.00000 1.00000 0.99999 1.00016 0.99906 1.00329 0.99286 1.00973 0.99193 1.00373 0.99926
n= 12
1.00000 1.00000 0.99992 1.00112 0.99186 1.03538 0.90231 1.17542 0.79577 1.14866 0.93853 1.01102
n= 13
1.00000 0.99998 1.00061 0.98976 1.09198 0.50100 2.74085 -3.03606 7.28388 -5.49353 5.27087 -0.61819 1.26883
n= 14
1.00000 0.99999 1.00031 0.99622 1.01995 0.98174 0.67520 2.92290 -4.49810 10.51596 -9.42821 8.09476 -1.74096 1.46025
n= 15
1.00000 1.00000 0.99999 0.99864 1.02567 0.78193 2.06684 -2.27904 7.53239 -7.35505 7.38067 -1.12904 0.42575 1.73328 0.81795
n= 16
1.00000 0.99999 1.00036 0.99292 1.07461 0.52921 2.89611 -4.01576 9.67067 -8.18007 5.33857 3.21268 -3.85272 4.21687 -0.00460 1.12118
n= 17
1.00000 0.99999 1.00044 0.99135 1.09240 0.40134 3.51108 -6.07458 14.66127 -17.41646 19.36296 -15.27536 16.89343 -13.83110 10.82238 -2.75772 1.61858
n= 18
1.00000 1.00000 0.99980 1.00745 0.89788 1.71766 -1.82146 7.00036 -3.67460 -5.37022 15.49821 1.96900 -21.71402 8.65465 34.36643 -45.12892 25.48868 -3.89090
n= 19
1.00000 0.99999 1.00044 0.99096 1.09750 0.37637 3.52002 -5.63994 12.53606 -12.34196 12.17138 -8.31209 10.38101 -7.19103 7.40377 -5.68775 7.32337 -2.32880 1.70071
With Scaled Partial Pivoting
n= 4
1.00000 1.00000 1.00000 1.00000
n= 5
1.00000 1.00000 1.00000 1.00000 1.00000
n= 6
1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
n= 7
1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
n= 8
1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
n= 9
1.00000 1.00000 1.00000 1.00000 0.99999 1.00002 0.99998 1.00001 1.00000
n= 10
1.00000 1.00000 1.00000 1.00001 0.99995 1.00015 0.99976 1.00023 0.99988 1.00003
n= 11
1.00000 1.00000 0.99997 1.00029 0.99825 1.00616 0.98656 1.01836 0.98472 1.00708 0.99860
n= 12
1.00000 1.00000 0.99993 1.00098 0.99296 1.03048 0.91608 1.15035 0.82531 1.12693 0.94759 1.00938
n= 13
1.00000 0.99998 1.00079 0.98666 1.12082 0.33991 3.31703 -4.40103 9.44959 -7.76912 6.78992 -1.20144 1.36689
n= 14
1.00000 1.00000 1.00016 0.99710 1.02839 0.83083 1.65611 -0.72064 4.10773 -2.87006 4.26327 -0.77855 1.56515 0.92052
n= 15
1.00000 1.00000 1.00006 0.99771 1.03250 0.75729 2.08397 -2.05023 6.42714 -4.70487 3.44480 2.64147 -1.85118 2.52402 0.69732
n= 16
1.00000 0.99999 1.00025 0.99518 1.04868 0.70610 2.12953 -1.85326 5.78130 -4.28094 4.88244 -1.29780 2.77134 -0.47015 1.74026 0.84707
n= 17
1.00000 0.99999 1.00058 0.98905 1.11308 0.28704 3.92469 -7.07439 16.15782 -18.04018 16.01132 -4.91700 0.52623 2.38190 0.74089 0.84596 1.05301
n= 18
1.00000 1.00000 1.00020 0.99451 1.07217 0.45282 3.60751 -7.15315 17.91392 -21.68400 18.79318 -4.50440 0.30323 -2.82284 9.90371 -6.20452 3.74469 0.58298
n= 19
1.00000 1.00001 0.99985 1.00072 1.01055 0.84238 1.92814 -2.06828 7.21340 -7.16756 9.56875 -9.99144 15.60776 -9.85752 -0.05016 9.99725 -6.13206 3.38453 0.71369
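To compare the two runs more directly, the same error measurement sketched in Task 3 can be applied here (my own addition, assuming the routines above are in scope):

for pivot in (False, True):
    print("scalePartialPivoting =", pivot)
    for n in range(4, 20):
        A = hilbertMatrix(n)
        b = matrixVectorMultiply(A, [1]*n)
        sol = rowEchelonFormAndBackSub(A, b, scalePartialPivoting=pivot)
        # Infinity-norm error against the exact all-ones solution.
        print("n =", n, "max error =", max(abs(x - 1) for x in sol))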
Task 6
While reading the Wikipedia article, I learned that the Hilbert matrix is a prime example of an ill-conditioned matrix, and now that I have worked with this matrix I have seen that this is indeed the case. My favorite thing I learned about the Hilbert matrix is that its entries can be derived from the integral H_{ij} = ∫_{0}^{1} x^{i+j-2} dx = 1/(i+j-1); the 5x5 matrix printed in Task 3 is exactly this with i and j running from 1 to 5. I just thought it was exciting that the matrix can be derived from an integral.
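As a quick numerical check of that formula (my own addition, assuming hilbertMatrix from Task 3 is in scope), the integral evaluates to 1/(i+j-1) for one-based indices, which matches the entries that hilbertMatrix produces once its zero-based indices are shifted by one:

from fractions import Fraction

def hilbertEntryFromIntegral(i, j):
    # Integral of x^(i+j-2) from 0 to 1 is 1/(i+j-1) for one-based i, j.
    return Fraction(1, i + j - 1)

n = 5
H = hilbertMatrix(n)
for i in range(n):
    for j in range(n):
        assert abs(H[i][j] - float(hilbertEntryFromIntegral(i + 1, j + 1))) < 1e-15
print("hilbertMatrix(5) matches the integral formula")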