Sunday, June 2, 2019

Direct and iterative methods

INTRODUCTION TO DIRECT AND ITERATIVE METHODS

Many important practical problems give rise to systems of linear equations written as the matrix equation Ax = c, where A is a given n x n nonsingular matrix and c is an n-dimensional vector; the problem is to find an n-dimensional vector x satisfying this equation. Such systems of linear equations arise mainly from discrete approximations of partial differential equations. To solve them, two types of methods are ordinarily used: direct methods and iterative methods.

Direct methods approximate the solution after a finite number of floating-point operations. Since computer floating-point operations can only be carried out to a given precision, the computed result is usually different from the exact solution. When a square matrix A is large and sparse, solving Ax = c by direct methods can be impractical, and iterative methods become a viable alternative.

Iterative methods, based on splitting A into A = M - N, compute successive approximations x(t) that become more accurate at each iteration step t. This process can be written in the form of the matrix equation x(t) = Gx(t-1) + g, where the n x n matrix G = M^(-1)N is the iteration matrix and g = M^(-1)c. The iteration is stopped when some predefined criterion is satisfied; the vector x(t) obtained at that point is an approximation to the solution. Iterative methods of this form are called linear stationary iterative methods of the first degree. The method is of the first degree because x(t) depends explicitly only on x(t-1) and not on x(t-2), ..., x(0). The method is linear because neither G nor g depends on x(t-1), and it is stationary because neither G nor g depends on t.
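As a concrete sketch of such a first-degree stationary iteration, the snippet below builds G = M^(-1)N and g = M^(-1)c for the Jacobi splitting M = diag(A) and iterates x(t) = Gx(t-1) + g. The 2 x 2 system and the choice of M are illustrative assumptions, not part of the text above.

```python
# Sketch of a linear stationary iterative method of the first degree:
# split A = M - N, then iterate x(t) = G x(t-1) + g with G = M^(-1) N, g = M^(-1) c.
# The Jacobi splitting M = diag(A) is assumed here for illustration.
import numpy as np

def stationary_first_degree(A, c, steps=50):
    M = np.diag(np.diag(A))      # Jacobi choice of M
    N = M - A                    # so that A = M - N
    G = np.linalg.solve(M, N)    # iteration matrix G = M^(-1) N
    g = np.linalg.solve(M, c)    # g = M^(-1) c
    x = np.zeros_like(c, dtype=float)
    for _ in range(steps):
        x = G @ x + g            # x(t) = G x(t-1) + g
    return x

# Small diagonally dominant test system (invented), exact solution (1, 2).
A = np.array([[4.0, 1.0], [2.0, 5.0]])
c = np.array([6.0, 12.0])
x = stationary_first_degree(A, c)
```

Because A here is strictly diagonally dominant, the spectral radius of G is below 1 and the iterates converge to the exact solution.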
In this book, we also consider linear stationary iterative methods of the second degree, represented by the matrix equation x(t) = Mx(t-1) + Nx(t-2) + h.

HISTORY OF DIRECT AND ITERATIVE METHODS

Direct methods to solve linear systems: the Gauss elimination method is due to Carl Friedrich Gauss (1777-1855). Thereafter Cholesky gave a method for symmetric positive definite matrices.

Iterative methods for non-linear equations: the Newton-Raphson method is an iterative method for solving nonlinear equations. The method is due to Isaac Newton (1643-1727) and Joseph Raphson (1648-1715).

Iterative methods for linear equations: the standard iterative methods in use are the Gauss-Jacobi and Gauss-Seidel methods. Carl Friedrich Gauss (1777-1855) was a very famous mathematician who worked on both pure and applied mathematics. Carl Gustav Jacob Jacobi (1804-1851) is well known, for instance, for the Jacobian, the determinant of the matrix of partial derivatives. He also did work on iterative methods, leading to the Gauss-Jacobi method. Another iterative method is the Chebyshev method. This method is based on orthogonal polynomials bearing the name of Pafnuty Lvovich Chebyshev (1821-1894). The Gauss-Jacobi and Gauss-Seidel methods use a very simple polynomial to approximate the solution; the Chebyshev method uses an optimal polynomial.

DIRECT AND ITERATIVE METHODS

Direct methods compute the solution to a problem in a finite number of steps. These methods would give the exact answer if they were performed in infinite-precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit.
A convergence criterion is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite-precision arithmetic these methods would not, in general, reach the solution within a finite number of steps. Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems, and iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method. In the case of a system of linear equations, the two main classes of iterative methods are the stationary iterative methods and the more general Krylov subspace methods.

Stationary iterative methods

Stationary iterative methods solve a linear system with an operator approximating the original one and, based on a measurement of the error (the residual), form a correction equation; this process is then repeated. While these methods are simple to derive, implement, and analyse, convergence is only guaranteed for a limited class of matrices. Examples of stationary iterative methods are the Jacobi method, the Gauss-Seidel method and the successive over-relaxation (SOR) method.

Krylov subspace methods

Krylov subspace methods form an orthogonal basis of the sequence of successive matrix powers times the initial residual (the Krylov sequence). The approximations to the solution are then formed by minimizing the residual over the subspace so formed. The prototypical method is the conjugate gradient method (CG).
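A minimal conjugate gradient sketch can make the Krylov subspace idea concrete. The small symmetric positive definite system below is invented for illustration; this is a textbook CG loop, not production code.

```python
# Minimal conjugate gradient (CG) sketch for a symmetric positive definite A.
# Each iteration adds one Krylov direction and minimizes the residual over
# the subspace built so far.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                  # initial residual
    p = r.copy()                   # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length along direction p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # convergence criterion on the residual
            break
        p = r + (rs_new / rs) * p  # next A-conjugate direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite (assumed example)
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

For an n x n system, CG reaches the exact solution in at most n steps in exact arithmetic; in practice it is stopped early by the residual test, exactly as described above.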
Other methods are the generalized minimal residual method (GMRES) and the biconjugate gradient method (BiCG).

EXAMPLE OF DIRECT METHOD

GAUSS ELIMINATION METHOD - In linear algebra, the Gaussian elimination method is an algorithm for solving systems of linear equations, finding the rank of a matrix, and calculating the inverse of an invertible square matrix. Gaussian elimination is named after the German mathematician and scientist Carl Friedrich Gauss. Elementary row operations are used to reduce a matrix to row echelon form. Gauss-Jordan elimination, an extension of this algorithm, reduces the matrix further to reduced row echelon form. Gaussian elimination alone is sufficient for many applications.

EXAMPLE

Suppose that our goal is to find and describe the solution(s), if any, of the following system of linear equations:

L1: 2x + y - z = 8
L2: -3x - y + 2z = -11
L3: -2x + y + 2z = -3

The algorithm is as follows: eliminate x from all equations below L1, and then eliminate y from all equations below L2. This puts the system into triangular form, from which each unknown can be solved by back substitution. In the example, x is eliminated from L2 by adding (3/2)L1 to L2, and x is then eliminated from L3 by adding L1 to L3. The result is

L1: 2x + y - z = 8
L2: (1/2)y + (1/2)z = 1
L3: 2y + z = 5

Now y is eliminated from L3 by adding -4L2 to L3. The result is

L1: 2x + y - z = 8
L2: (1/2)y + (1/2)z = 1
L3: -z = 1

This is a system of linear equations in triangular form, so the first part of the algorithm is complete. The second part, back-substitution, consists of solving for the unknowns in reverse order. It can be seen that z = -1. Then z can be substituted into L2, which can be solved to obtain y = 3. Next, z and y can be substituted into L1, which can be solved to obtain x = 2. The system is solved.

Some systems cannot be reduced to triangular form, yet still have at least one valid solution: for example, if y had not occurred in L2 and L3 after the first step above, the algorithm would be unable to reduce the system to triangular form. However, it would still have reduced the system to echelon form. In this case, the system does not have a unique solution, as it contains at least one free variable.
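The elimination and back-substitution steps can be sketched in code. This plain implementation assumes the classic 3 x 3 textbook system (2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3), and adds partial pivoting, which a hand-worked example does not need but a floating-point implementation should have.

```python
# Gaussian elimination with back substitution.
# Partial pivoting is added for numerical stability.
import numpy as np

def gauss_solve(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: reduce the system to triangular form.
    for k in range(n):
        piv = k + np.argmax(np.abs(A[k:, k]))            # partial pivoting
        A[[k, piv]], b[[k, piv]] = A[[piv, k]], b[[piv, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]                        # elimination multiplier
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution: solve for the unknowns in reverse order.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])
x = gauss_solve(A, b)   # expected (2, 3, -1)
```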
The solution set can then be expressed parametrically. In practice, one does not usually deal with systems in terms of equations but instead makes use of the augmented matrix (which is also suitable for computer manipulations). The Gaussian elimination algorithm applied to the augmented matrix of the system above begins with

[  2   1  -1 |   8 ]
[ -3  -1   2 | -11 ]
[ -2   1   2 |  -3 ]

which, at the end of the first part of the algorithm, becomes

[  2    1    -1  |  8 ]
[  0   1/2   1/2 |  1 ]
[  0    0    -1  |  1 ]

That is, it is in row echelon form. At the end of the algorithm, if Gauss-Jordan elimination is applied, the result is

[ 1  0  0 |  2 ]
[ 0  1  0 |  3 ]
[ 0  0  1 | -1 ]

That is, it is in reduced row echelon form, or row canonical form.

EXAMPLES OF ITERATIVE METHODS OF SOLUTION

A. JACOBI METHOD - The Jacobi method is a method of solving a matrix equation on a matrix that has no zeros along its main diagonal (Bronshtein and Semendyayev 1997, p. 892). Each diagonal element is solved for, and an approximate value plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The Jacobi method is easily derived by examining each of the equations in the linear system of equations Ax = b in isolation: in the i-th equation, solve for the value of x_i while assuming the other entries of x remain fixed. This gives

x_i^(k+1) = ( b_i - sum over j != i of a_ij x_j^(k) ) / a_ii,

which is the Jacobi method. In this method, the order in which the equations are examined is irrelevant, since the Jacobi method treats them independently. The definition of the Jacobi method can be expressed with matrices as

x^(k+1) = D^(-1) ( b - (L + U) x^(k) ),

where D is the diagonal of A and L and U are its strictly lower and strictly upper triangular parts.

B. STATIONARY ITERATIVE METHODS - Iterative methods that can be expressed in the simple form x^(k+1) = B x^(k) + c, where neither B nor c depends upon the iteration count k, are called stationary iterative methods. The four main stationary iterative methods are the Jacobi method, the Gauss-Seidel method, the successive over-relaxation (SOR) method and the symmetric successive over-relaxation (SSOR) method.

C. THE GAUSS-SEIDEL METHOD - We are considering an iterative solution to the linear system Ax = b, where A is an n x n sparse matrix, x and b are vectors of length n, and we are solving for x.
Iterative solvers are an alternative to direct methods, which attempt to calculate an exact solution to the system of equations. Iterative methods attempt to find a solution to the system of linear equations by repeatedly solving the linear system using approximations to the x vector. Iterations continue until the solution is within a predetermined acceptable bound on the error. Iterative methods for general matrices include Gauss-Jacobi and Gauss-Seidel, while conjugate gradient methods exist for positive definite matrices. A key issue in the use of iterative methods is the convergence of the technique. Gauss-Jacobi uses all values from the previous iteration, while Gauss-Seidel requires that the most recent values be used in calculations. The Gauss-Seidel method generally has better convergence than the Gauss-Jacobi method, although the Gauss-Seidel update is inherently sequential. The convergence of the iterative method must be examined for the application, along with algorithm performance, to ensure that a useful solution can be found.

The Gauss-Seidel method can be written componentwise as

x_i^(k+1) = ( b_i - sum over j < i of a_ij x_j^(k+1) - sum over j > i of a_ij x_j^(k) ) / a_ii,

where x_i^(k) is the i-th unknown in x during the k-th iteration, x_i^(0) is the initial guess for the i-th unknown, a_ij is the coefficient of A in the i-th row and j-th column, and b_i is the i-th value in b. In matrix form,

x^(k) = (D + L)^(-1) ( b - U x^(k-1) ),

where x^(k) is the k-th iterative solution to x, x^(0) is the initial guess at x, D is the diagonal of A, L is the strictly lower triangular portion of A, U is the strictly upper triangular portion of A, and b is the right-hand-side vector.

EXAMPLE

10x1 - x2 + 2x3 = 6,
-x1 + 11x2 - x3 + 3x4 = 25,
2x1 - x2 + 10x3 - x4 = -11,
3x2 - x3 + 8x4 = 15.
Solving for x1, x2, x3 and x4 gives

x1 = x2/10 - x3/5 + 3/5,
x2 = x1/11 + x3/11 - 3x4/11 + 25/11,
x3 = -x1/5 + x2/10 + x4/10 - 11/10,
x4 = -3x2/8 + x3/8 + 15/8.

Suppose we choose (0, 0, 0, 0) as the initial approximation; then the first approximate solution is given by

x1 = 3/5 = 0.6,
x2 = (3/5)/11 + 25/11 = 3/55 + 25/11 = 2.3272,
x3 = -(3/5)/5 + (2.3272)/10 - 11/10 = -3/25 + 0.23272 - 1.1 = -0.9873,
x4 = -3(2.3272)/8 + (-0.9873)/8 + 15/8 = 0.8789.

The iterates that follow are:

x1        x2        x3         x4
0.6       2.32727   -0.987273  0.878864
1.03018   2.03694   -1.01446   0.984341
1.00659   2.00356   -1.00253   0.998351
1.00086   2.0003    -1.00031   0.99985

The exact solution of the system is (1, 2, -1, 1).

APPLICATION OF DIRECT AND ITERATIVE METHODS OF SOLUTION

FRACTIONAL SPLITTING METHOD OF FIRST ORDER FOR LINEAR EQUATIONS

First we describe the simplest operator splitting, which is called sequential operator splitting, for the following linear system of ordinary differential equations:

du(t)/dt = Au(t) + Bu(t),  t in (0, T),    (3.1)

where the initial condition is u(0) = u0. The operators A and B are linear and bounded operators in a Banach space. The sequential operator-splitting method is introduced as a method that solves two subproblems sequentially, where the different subproblems are connected via the initial conditions. This means that on each subinterval [t^n, t^(n+1)] we replace the original problem with the subproblems

du*(t)/dt = Au*(t),   with u*(t^n) = u^n,
du**(t)/dt = Bu**(t), with u**(t^n) = u*(t^(n+1)),

where the splitting time-step is defined as tau = t^(n+1) - t^n. The approximated solution is u^(n+1) = u**(t^(n+1)). The replacement of the original problem with the subproblems usually results in an error, called the splitting error. For this method the local splitting error can be derived as (tau/2)[A, B]u(t^n) + O(tau^2), where [A, B] = AB - BA is the commutator of A and B. The splitting error is therefore O(tau) when the operators A and B do not commute; otherwise the method is exact. Hence sequential operator splitting is called a first-order splitting method.

THE ITERATIVE SPLITTING

The following algorithm is based on iteration with a fixed splitting discretization step-size tau.
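The worked iteration above can be reproduced with a short script. The sketch below implements the Gauss-Seidel sweep (and a Jacobi sweep for comparison) directly from the componentwise formulas, assuming the same 4 x 4 system.

```python
# Gauss-Seidel vs Jacobi on the 4x4 example: Gauss-Seidel uses the most
# recent values within a sweep, Jacobi only the previous iterate.
import numpy as np

A = np.array([[10.0, -1.0,  2.0,  0.0],
              [-1.0, 11.0, -1.0,  3.0],
              [ 2.0, -1.0, 10.0, -1.0],
              [ 0.0,  3.0, -1.0,  8.0]])
b = np.array([6.0, 25.0, -11.0, 15.0])

def gauss_seidel(A, b, sweeps=50):
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]   # entries j < i are already updated
            x[i] = (b[i] - s) / A[i, i]
    return x

def jacobi(A, b, sweeps=50):
    x = np.zeros(len(b))
    D = np.diag(A)
    for _ in range(sweeps):
        x = (b - (A @ x - D * x)) / D       # all entries from the old iterate
    return x

x_gs = gauss_seidel(A, b)   # approaches the exact solution (1, 2, -1, 1)
x_j = jacobi(A, b)
```

The first Gauss-Seidel sweep of this script gives exactly the values 0.6, 2.32727, -0.987273, 0.878864 computed by hand above.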
On the time interval [t^n, t^(n+1)] we solve the following subproblems consecutively for i = 1, 3, 5, ...:

du_i(t)/dt = Au_i(t) + Bu_(i-1)(t),     with u_i(t^n) = u^n,        (4.1)
du_(i+1)(t)/dt = Au_i(t) + Bu_(i+1)(t), with u_(i+1)(t^n) = u^n,

where u_0(t) is the known split approximation at the time level t = t^n. We can generalize the iterative splitting method to a multi-iterative splitting method by introducing new splitting operators, for example spatial operators. We then obtain multi-indices to control the splitting process; each iterative splitting method can be solved independently, while connecting with further steps to the multi-splitting method.
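A sketch of sequential (Lie) splitting for constant linear operators, using matrix exponentials as exact sub-solvers. The 2 x 2 non-commuting pair A, B is an invented illustration; it shows the O(tau) splitting error predicted by the commutator analysis.

```python
# Sequential (Lie) operator splitting for du/dt = (A + B)u, u(0) = u0.
# Per time step: solve du*/dt = A u* over [t^n, t^(n+1)], then du**/dt = B u**
# starting from u*(t^(n+1)). When A and B do not commute, the error is O(tau).
import numpy as np

def expm(M, terms=40):
    # Truncated Taylor series for the matrix exponential (small matrices only).
    E = np.eye(len(M))
    T = np.eye(len(M))
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

def lie_splitting(A, B, u0, T=1.0, steps=100):
    tau = T / steps
    EA, EB = expm(tau * A), expm(tau * B)   # exact sub-solvers per step
    u = u0.copy()
    for _ in range(steps):
        u = EB @ (EA @ u)                   # A-subproblem, then B-subproblem
    return u

A = np.array([[0.0, 1.0], [0.0, 0.0]])      # illustrative non-commuting pair
B = np.array([[0.0, 0.0], [1.0, 0.0]])
u0 = np.array([1.0, 0.0])
u_split = lie_splitting(A, B, u0)
u_exact = expm(A + B) @ u0                  # reference solution at T = 1
err = np.linalg.norm(u_split - u_exact)     # small but nonzero: O(tau)
```

Here [A, B] = AB - BA is nonzero, so the splitting error does not vanish; halving tau roughly halves err, confirming first-order accuracy.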
