
SCHOOL OF MATHEMATICS
MTH3320
Computational Linear Algebra
Assignment 3
Due date: Friday 12 May, 2023, 11:55pm (submit the electronic copy of your assignment and the Matlab code via Moodle)

This assignment contains four questions for a total of 100 marks (equal to 7.5% of the final mark for this unit). Late submissions will be subject to late penalties (see the unit guide for full details).
The relevant output of the programming exercises is to be submitted as part of your electronic submission. You can either include the outputs as part of your pdf file (if you can edit the pdf document), or include them in a separate Word document (you need to label the figures and outputs consistently with the question numbers). In addition, collect all your Matlab m-files in one zip file and submit this file through Moodle.

1. Unitary Similarity Transformation to a Lower Triangular Matrix. (20 marks)

Use induction to show that every square matrix can be transformed to a lower triangular matrix via a unitary similarity transformation. That is, show that for any matrix A ∈ C^(n×n) we can always identify a unitary matrix Q and a lower triangular matrix L such that A = Q L Q*.
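Although not part of the required proof, the inductive construction can be checked numerically. The sketch below is in Python/NumPy rather than the unit's Matlab, and the function name lower_triangularize is our own; it deflates one eigenvector at a time into the last column, which is exactly the induction step.

```python
import numpy as np

def lower_triangularize(A):
    """Unitary similarity A = Q L Q^* with L lower triangular, following
    the deflation idea in the hint: rotate an eigenvector into the LAST
    column (zeroing the last column above the diagonal), then repeat on
    the leading block."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    Q = np.eye(n, dtype=complex)
    for k in range(n, 1, -1):
        # current leading k x k block of the partially reduced matrix
        B = (Q.conj().T @ A @ Q)[:k, :k]
        _, V = np.linalg.eig(B)
        v = V[:, 0]                      # a unit eigenvector of the block
        M = np.column_stack([v, np.eye(k, dtype=complex)])
        Uk, _ = np.linalg.qr(M)          # unitary; first column parallel to v
        U = np.eye(n, dtype=complex)
        U[:k, :k] = Uk[:, ::-1]          # reverse columns: eigenvector last
        Q = Q @ U
    L = Q.conj().T @ A @ Q               # lower triangular up to rounding
    return Q, L
```

Conjugating by U sends the eigenvector to the last coordinate, so the last column of the transformed block becomes a multiple of e_k; the columns already reduced are untouched, which is the induction hypothesis in action.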
Hint: use the same procedure as in the proof of the existence of the Schur factorisation, and start from the lower-rightmost entry.

2. Convergence of the Inverse Iteration on Symmetric Matrices. (30 marks)
Consider the inverse iteration (Algorithm 6.32) with shift μ applied to a real symmetric matrix A ∈ R^(n×n). Suppose λ_K is the closest eigenvalue to μ and λ_L is the second closest, that is, |λ_K - μ| < |λ_L - μ| ≤ |λ_i - μ| for each i ≠ K, L. Let v_1, …, v_n denote eigenvectors corresponding to the eigenvalues of A. Suppose further we have an initial vector x^(0) with v_K^T x^(0) ≠ 0. Show that the estimated eigenvector in the inverse iteration converges as …

(iii) Use the results in (i) and (ii) to complete the rest of the proof.

3. Implementation of the Steepest Descent and Conjugate Gradient Methods. (30 marks)

You are asked to implement the steepest descent and conjugate gradient methods for A x = b in Matlab, and apply the methods to the 2D model problem (download build_laplace_2D.m or build_laplace_2D_kron.m to build the 2D Laplacian matrix).

(a) Implement the following in m-file steepest.m:

function [x, res, steps] = steepest(A, x0, b, tol, maxit)
% Performs a sequence of steepest descent iterations on
% A x = b using the initial estimate x0, until the 2-norm
% of the residual is smaller than tol (relative
% to the initial residual), or until the number of iterations
% reaches maxit. res is a vector with the
% residual size after every iteration (the 2-norm of
% the residual vector).
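As a reference point for steepest.m, here is a hedged sketch of the same algorithm in Python/NumPy (the assignment itself requires Matlab); the return values and the relative stopping rule follow the header comments above.

```python
import numpy as np

def steepest(A, x0, b, tol, maxit):
    """Steepest descent for symmetric positive definite A x = b.
    res[i] is the residual 2-norm after iteration i+1; stop once it
    drops below tol times the initial residual, or after maxit steps."""
    x = np.array(x0, dtype=float)
    r = b - A @ x
    r0 = np.linalg.norm(r)
    res, steps = [], 0
    if r0 == 0.0:                    # x0 already solves the system
        return x, np.array(res), steps
    for steps in range(1, maxit + 1):
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)   # exact line search along the residual
        x = x + alpha * r
        r = r - alpha * Ar           # cheap incremental residual update
        res.append(np.linalg.norm(r))
        if res[-1] <= tol * r0:
            break
    return x, np.array(res), steps
```

Note the single matrix-vector product A r per iteration: the residual is updated incrementally rather than recomputed as b - A x.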
(b) Implement the following in m-file conjGrad.m:

function [x, res, steps] = conjGrad(A, x0, b, tol, maxit)
% Performs a sequence of conjugate gradient iterations
% on A x = b using the initial estimate x0, until the 2-norm
% of the residual is smaller than tol (relative
% to the initial residual), or until the number of iterations
% reaches maxit. res is a vector with the
% residual size after every iteration (the 2-norm of
% the residual vector).
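For comparison, a Python/NumPy sketch of conjugate gradient, together with a kron-built 2D Laplacian test matrix. This is an illustration only: the build_laplace_2D_kron.m on Moodle presumably uses sparse Kronecker products, while the version here is dense for brevity, and the function names are our own.

```python
import numpy as np

def laplace_2d(N):
    """2D Laplacian (5-point stencil, Dirichlet boundaries) on an
    N x N interior grid via Kronecker products; dense for brevity."""
    T = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    I = np.eye(N)
    return np.kron(I, T) + np.kron(T, I)

def conj_grad(A, x0, b, tol, maxit):
    """Conjugate gradient for SPD A x = b, mirroring the conjGrad.m
    spec: relative tolerance on the residual 2-norm, at most maxit steps."""
    x = np.array(x0, dtype=float)
    r = b - A @ x
    p = r.copy()
    rho = r @ r
    r0 = np.sqrt(rho)
    res, steps = [], 0
    if r0 == 0.0:
        return x, np.array(res), steps
    for steps in range(1, maxit + 1):
        Ap = A @ p
        alpha = rho / (p @ Ap)       # step length along search direction p
        x = x + alpha * p
        r = r - alpha * Ap
        rho_new = r @ r
        res.append(np.sqrt(rho_new))
        if res[-1] <= tol * r0:
            break
        p = r + (rho_new / rho) * p  # new A-conjugate search direction
        rho = rho_new
    return x, np.array(res), steps
```

Like steepest descent, each iteration costs one matrix-vector product; the difference is the A-conjugate search directions, which is what produces the much faster convergence you should observe in parts (d) and (e).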
Submit your m-files steepest.m and conjGrad.m.

(c) Download test_steepest_cg.m to test the correctness of (a) and (b). Report on the errors and number of steps. (The errors should be smaller than ….)

(d) Solve the 2D model problem with right-hand side b a vector with all twos. Use maxit=400 and tol=1e-6. Generate a plot in which you compare, for N = 30, the residual convergence for the steepest descent and conjugate gradient methods (starting from a zero initial guess), as a function of the iteration. For each of the methods, plot the base-10 logarithm of the residuals as a function of iteration number. Which method requires the fewest iterations to obtain an accurate solution, and is this as you expected?

(e) Provide a table in which you list, for steepest descent and conjugate gradient, how many iterations you need to reduce the residual by … for N = 16, 32, 64. (You can use maxit = 500.) What are the total number of unknowns and the condition number for each problem size? (You can use cond(full(A)); no need to do this for N = 64 though, because it may take too long.) For each method, do you see the expected behaviour in the number of required iterations as a function of the total number of unknowns? (Explain.) Using these numerical results, briefly discuss the computational cost/complexity of each of the methods as a function of the total number of unknowns (discuss the cost per iteration, the number of iterations required, and the total cost, as a function of total problem size). Which method is best for large problem size?

4. Implementation of the Inverse Iteration and the Rayleigh Quotient Iteration. (20 marks)

(a) Implement the inverse iteration in Matlab according to the pseudocode discussed in class. Your implementation InverseIteration.m should store the estimated eigenvalues and eigenvectors at each step, except step 0. You should use the function header given below.
function [B, lamb] = InverseIteration(A, mu, x, k)
%
% Usage: [B, lamb] = InverseIteration(A, mu, x, k) carries out k steps
% of the inverse iteration for estimating the eigenvalue of A,
% with an initial shift mu and an initial vector x.
% It generates outputs B and lamb.
% - B is an (n x k) matrix that stores the estimated eigenvector at
%   the i-th step as its i-th column, B(:,i)
% - lamb is a (1 x k) vector that stores the estimated eigenvalue at
%   the i-th step as its i-th element, lamb(i)
% Note that the initial vector and initial eigenvalue estimate are
% not stored.

n = size(A, 1);      % size of the matrix
B = zeros(n, k);     % matrix B
lamb = zeros(1, k);  % vector lamb
% your code

end
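A Python/NumPy sketch with the same input/output convention may help clarify what the header asks for. One assumption to flag: the eigenvalue estimate here is the Rayleigh quotient, which suits symmetric A; the class pseudocode (Algorithm 6.32) may prescribe a different estimate, so follow that in your Matlab submission.

```python
import numpy as np

def inverse_iteration(A, mu, x, k):
    """k steps of shifted inverse iteration. B[:, i] and lamb[i] hold the
    eigenvector/eigenvalue estimates after step i+1 (step 0 not stored),
    matching the InverseIteration.m header convention."""
    n = A.shape[0]
    B = np.zeros((n, k))
    lamb = np.zeros(k)
    M = A - mu * np.eye(n)        # fixed shift: in practice factor M once
    v = x / np.linalg.norm(x)
    for i in range(k):
        w = np.linalg.solve(M, v)     # w = (A - mu I)^{-1} v
        v = w / np.linalg.norm(w)     # renormalise the iterate
        lamb[i] = v @ A @ v           # Rayleigh quotient (an assumption here)
        B[:, i] = v
    return B, lamb
```

The convergence ratio is |λ_K - μ| / |λ_L - μ| per step, which is exactly the quantity Question 2 asks you to analyse.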
(b) Implement the Rayleigh quotient iteration in Matlab according to the pseudocode discussed in class. Your implementation RayleighIteration.m should store the estimated eigenvalues and eigenvectors at each step, except step 0. You should use the function header given below.

function [B, lamb] = RayleighIteration(A, x, k)
%
% Usage: [B, lamb] = RayleighIteration(A, x, k) carries out k steps
% of the Rayleigh quotient iteration for estimating the eigenvalue
% of A, with an initial vector x.
% It generates outputs B and lamb.
% - B is an (n x k) matrix that stores the estimated eigenvector at
%   the i-th step as its i-th column, B(:,i)
% - lamb is a (1 x k) vector that stores the estimated eigenvalue at
%   the i-th step as its i-th element, lamb(i)
% Note that the initial vector and initial eigenvalue estimate are
% not stored.

n = size(A, 1);      % size of the matrix
B = zeros(n, k);     % matrix B
lamb = zeros(1, k);  % vector lamb
% your code

end
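The corresponding Python/NumPy sketch differs from inverse iteration in one line only: the shift is updated to the current Rayleigh quotient at every step, which is what yields the dramatically faster (cubic, for symmetric A) convergence you should see in part (d).

```python
import numpy as np

def rayleigh_iteration(A, x, k):
    """k steps of Rayleigh quotient iteration, same storage convention as
    the RayleighIteration.m header (step 0 not stored)."""
    n = A.shape[0]
    B = np.zeros((n, k))
    lamb = np.zeros(k)
    I = np.eye(n)
    v = x / np.linalg.norm(x)
    mu = v @ A @ v                    # initial Rayleigh-quotient shift
    for i in range(k):
        try:
            w = np.linalg.solve(A - mu * I, v)
        except np.linalg.LinAlgError:
            # shift hit an eigenvalue exactly: the estimates have converged
            B[:, i:] = v[:, None]
            lamb[i:] = mu
            break
        v = w / np.linalg.norm(w)
        mu = v @ A @ v                # update the shift every step
        B[:, i] = v
        lamb[i] = mu
    return B, lamb
```

The shifted system becomes nearly singular as mu converges; that is expected and harmless here, since the solve error lies almost entirely in the eigenvector direction being sought.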
(c) Check the correctness of your implementation of RayleighIteration.m as follows. Download the Matlab files flip_vec.m, test_Rayleigh.m and matrix_data_Q4.mat under Assignment 3 on Moodle. Save test_Rayleigh.m as my_test_Rayleigh.m. Then modify my_test_Rayleigh.m as the driver to call your RayleighIteration.m to solve an eigenvalue problem. The matrix and initial vector are provided (see Step 1 in test_Rayleigh.m). Comment on the result plot and on whether your implementation produces a desirable answer.

(d) Run the same test for your implementation of InverseIteration.m using the initial shift value given in my_test_Rayleigh.m and matrix_data_Q4.mat. Modify the convergence plots in my_test_Rayleigh.m to also plot the convergence of the eigenvalue and eigenvector estimates produced by the inverse iteration on the same graph showing the convergence of the Rayleigh quotient iteration. Include a printout of the plots produced by my_test_Rayleigh.m in the main assignment package. Comment on the performance of your implementation of the Rayleigh quotient iteration, compared with the inverse iteration.

(e) Submit the code InverseIteration.m, RayleighIteration.m and my_test_Rayleigh.m to Moodle.
