Since the 5-point formula gives an $\mathcal{O}(h^2)$ method, one wonders whether it is worth going to higher order: take the central-difference approximation and truncate it at the next order,
\[ \frac{\partial^2 u}{\partial x^2}\Big|_{i,j} \approx \frac{-u_{i-2,j} + 16u_{i-1,j} - 30u_{i,j} + 16u_{i+1,j} - u_{i+2,j}}{12h^2}, \]
and similarly in $y$.
Although the error is $\mathcal{O}(h^4)$, this is not a popular method: the wider stencil renders too many points as near-boundary ones (even on a square grid!), requiring laborious special treatment. More problematic, however, it yields linear systems that are considerably more expensive to solve!
ALTERNATIVELY: The ``Nine-Point Formula'':
In the homework you will see that computation with the 5-point formula is consistent with an error decaying proportionally to $h^2$. You will find, however, that the 9-point formula's error also decays like $h^2$, in spite of the $\mathcal{O}(h^4)$ error estimate above.
Why this discrepancy? Because the truncation error estimate above was performed on the Laplace equation $\nabla^2 u = 0$, not on the Poisson equation $\nabla^2 u = f$.
From (164), the 9-point formula is in fact an $\mathcal{O}(h^4)$ approximation of the perturbed equation
\[ \nabla^2 u + \frac{h^2}{12}\,\nabla^4 u = \Bigl(I + \frac{h^2}{12}\nabla^2\Bigr)\nabla^2 u = f. \qquad (165) \]
Note that $\bigl(I + \frac{h^2}{12}\nabla^2\bigr)^{-1}$ exists for sufficiently small $h$. Multiplying (165) by this inverse gives $\nabla^2 u = f - \frac{h^2}{12}\nabla^2 f + \mathcal{O}(h^4)$, so the solution of (165) differs from that of the Poisson problem by $\mathcal{O}(h^2)$: precisely the error observed in computations.
We can exploit this fact as follows: apply the 9-point formula to (165) with $f$ replaced by $\bigl(I + \frac{h^2}{12}\nabla^2\bigr)f = f + \frac{h^2}{12}\nabla^2 f$. Multiplying both sides by $\bigl(I + \frac{h^2}{12}\nabla^2\bigr)^{-1}$ (for sufficiently small $h$) we obtain
\[ \nabla^2 u = f + \mathcal{O}(h^4), \qquad (166) \]
so we see that (166) differs from $\nabla^2 u = f$ only by $\mathcal{O}(h^4)$ terms: it is a good approximation to the original problem! In practice $\nabla^2 f$ may itself be replaced by its 5-point approximation without degrading the order.
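To see the two error behaviors side by side, we can apply the 9-point stencil to a smooth function with known Laplacian and compare its residual against $f$ and against the modified right-hand side $f + \frac{h^2}{12}\nabla^2 f$. A minimal sketch (the test function $u = \sin x\,\sin 2y$ and the sample point are my own choices for illustration, not from the notes):

```python
import numpy as np

def u(x, y):                 # smooth, non-harmonic test function (illustrative choice)
    return np.sin(x) * np.sin(2.0 * y)

def f(x, y):                 # f = lap u = -5 u for this u
    return -5.0 * u(x, y)

def lap_f(x, y):             # lap f = lap^2 u = 25 u
    return 25.0 * u(x, y)

def nine_point(x, y, h):
    """The 9-point approximation to lap u at (x, y) with spacing h."""
    edge = u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
    corner = (u(x + h, y + h) + u(x - h, y + h)
              + u(x + h, y - h) + u(x - h, y - h))
    return (4.0 * edge + corner - 20.0 * u(x, y)) / (6.0 * h * h)

x0, y0 = 0.3, 0.4
plain, modified = [], []
for h in (0.1, 0.05, 0.025):
    d9 = nine_point(x0, y0, h)
    plain.append(abs(d9 - f(x0, y0)))                                      # O(h^2)
    modified.append(abs(d9 - (f(x0, y0) + h * h / 12.0 * lap_f(x0, y0))))  # O(h^4)
    print(h, plain[-1], modified[-1])
```

Halving $h$ should divide the plain residual by roughly $4$ and the modified residual by roughly $16$, in line with the discussion above.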
SOLVING THE RESULTING SYSTEM
Define the resulting linear system
\[ A\mathbf{u} = \mathbf{b}. \qquad (167) \]
Let's consider the 5-point formula in some detail. Schematically, the matrix $A$ can be expressed in terms of smaller matrices as the block-tridiagonal array
\[ A = \begin{pmatrix} B & I & & \\ I & B & I & \\ & \ddots & \ddots & \ddots \\ & & I & B \end{pmatrix}, \]
where $I$ is the identity matrix and $B$ is tridiagonal,
\[ B = \begin{pmatrix} -4 & 1 & & \\ 1 & -4 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & -4 \end{pmatrix}, \]
so the entries along each diagonal are all equal. A matrix with this structure is said to be TST (Toeplitz, Symmetric, Tridiagonal); $A$ is block-TST, and each block is itself TST.
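The block-TST structure can be assembled directly with Kronecker products; a small sketch (NumPy, dense for clarity; the interior grid size $m$ is arbitrary):

```python
import numpy as np

def five_point_matrix(m):
    """Assemble the (h^2-scaled) 5-point Laplacian matrix for an m-by-m
    interior grid as a block-TST matrix: A = I (x) B + S (x) I, where
    B = tridiag(1, -4, 1) and S = tridiag(1, 0, 1)."""
    B = (np.diag(-4.0 * np.ones(m))
         + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1))
    S = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
    return np.kron(np.eye(m), B) + np.kron(S, np.eye(m))

A = five_point_matrix(4)
print(A.shape)               # 16-by-16 for a 4-by-4 interior grid
print(np.allclose(A, A.T))   # A is symmetric
```

Equivalently one can write $A = I \otimes K + K \otimes I$ with $K = \mathrm{tridiag}(1,-2,1)$; the two forms agree entry by entry.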
See the class notes from 475A for a review of solving (167). Direct method: OK for small matrices. Use a Cholesky factorization $A = LL^{\top}$ (applied, say, to $-A$, which is positive definite). Such a factorization takes a time roughly proportional to the number of nonzero elements inside the band, since $L$ inherits the bandwidth of $A$. For sparse matrices, this is not too big a deal in terms of storage.
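A dense sketch of the direct approach (NumPy; for real problems one would use a banded or sparse Cholesky routine; note we factor $-A$, which is positive definite):

```python
import numpy as np

m = 8
# 5-point matrix via Kronecker products (h^2-scaled); -A is SPD
B = (np.diag(-4.0 * np.ones(m))
     + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1))
S = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
A = np.kron(np.eye(m), B) + np.kron(S, np.eye(m))

b = np.random.default_rng(0).standard_normal(m * m)
L = np.linalg.cholesky(-A)      # -A = L L^T, L lower triangular
y = np.linalg.solve(L, -b)      # forward solve:  L y = -b
x = np.linalg.solve(L.T, y)     # backward solve: L^T x = y
print(np.allclose(A @ x, b))
```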
Incomplete factorization splits $A$ into its tridiagonal part and the rest; the iteration is carried out by solving against the tridiagonal part. This is also called the ``line relaxation'' method. The method is low-cost and easy to implement, but not the fastest.
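A sketch of line relaxation on the model system: take $P$ to be the block-diagonal tridiagonal part of $A$ (the $B$ blocks) and iterate $P\mathbf{x}^{(k+1)} = (P-A)\mathbf{x}^{(k)} + \mathbf{b}$. Dense solves stand in here for the $m$ tridiagonal solves one would use in practice:

```python
import numpy as np

m = 6
B = (np.diag(-4.0 * np.ones(m))
     + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1))
S = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
A = np.kron(np.eye(m), B) + np.kron(S, np.eye(m))

# Splitting A = P - N: P is the tridiagonal (within-line) part, so each
# iteration only couples grid lines through the right-hand side.
P = np.kron(np.eye(m), B)
N = P - A

b = np.ones(m * m)
x = np.zeros(m * m)
for _ in range(200):
    x = np.linalg.solve(P, N @ x + b)   # in practice: one tridiagonal solve per line

print(np.linalg.norm(A @ x - b))        # residual after 200 sweeps
```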
ILU FACTORIZATION (Incomplete LU)
Suppose you want to solve
\[ A\mathbf{x} = \mathbf{b}. \]
A linear one-step stationary scheme would lead to an iteration of the form
\[ \mathbf{x}^{(k+1)} = H\,\mathbf{x}^{(k)} + \mathbf{v}. \qquad (169) \]
In ILU we factor $A = \tilde L \tilde U - E$, where the sparse factors $\tilde L$ and $\tilde U$ retain only part of the fill-in of the true LU factors, and iterate
\[ \tilde L \tilde U\,\mathbf{x}^{(k+1)} = E\,\mathbf{x}^{(k)} + \mathbf{b}. \qquad (171) \]
We can write (171) in the form of (169): let $H = (\tilde L\tilde U)^{-1}E$ and $\mathbf{v} = (\tilde L\tilde U)^{-1}\mathbf{b}$.
The ILU iteration (171) is an example of a ``regular splitting.'' More generally, write $A = P - N$, where $P$ is a nonsingular matrix, and build the iteration
\[ P\,\mathbf{x}^{(k+1)} = N\,\mathbf{x}^{(k)} + \mathbf{b}. \qquad (172) \]
Theorem: Suppose that both $A$ and $P + N$ are symmetric and positive definite. Then (172) converges.
Proof: See a numerical linear algebra book.
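The hypotheses of the theorem can be checked numerically for the line-relaxation splitting of the 5-point matrix; a sketch (using the sign-flipped, positive definite form of the matrix, which is my choice to match the theorem's hypotheses):

```python
import numpy as np

m = 6
T = (np.diag(4.0 * np.ones(m))
     - np.diag(np.ones(m - 1), 1) - np.diag(np.ones(m - 1), -1))
S = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
A = np.kron(np.eye(m), T) - np.kron(S, np.eye(m))   # SPD form of the 5-point matrix

P = np.kron(np.eye(m), T)   # tridiagonal (line-relaxation) part
N = P - A                   # the inter-line coupling

def is_spd(M):
    """Symmetric and all eigenvalues positive."""
    return bool(np.allclose(M, M.T) and np.all(np.linalg.eigvalsh(M) > 0))

print(is_spd(A), is_spd(P + N))   # the theorem's hypotheses hold
rho = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(P, N))))
print(rho)                        # spectral radius of the iteration matrix
```

Since both hypotheses hold, the theorem predicts $\rho(P^{-1}N) < 1$, which the computed spectral radius confirms.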
Remarks: What other techniques can we use? From the classical iterative techniques we could use Jacobi, Gauss--Seidel, SOR, or Conjugate Gradients. SOR would be a good choice: since we can easily compute the spectral radius of the iteration matrix that results from the 5-point scheme, we can tune the relaxation parameter and get speeds that exceed Jacobi and Gauss--Seidel. Conjugate Gradients is the other logical choice. Later on we'll introduce another fast solver technique, called ``Multigrid.'' Are there faster ways? One of the most important developments in linear algebra in recent years has not been new solution methods (although plenty of new ideas and schemes have been developed); rather, significant strides have been made in the algorithmic aspects of the solvers: computer-science techniques that increase the speed, or the storage efficiency, of general linear algebra solvers. See Demmel's linear algebra book for full details on the latest techniques.
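As an illustration of SOR on the 5-point scheme, here is a sketch using the optimal relaxation parameter $\omega = 2/(1+\sin \pi h)$ for this model problem (the test case with exact solution $\sin\pi x\,\sin\pi y$ is my own choice):

```python
import numpy as np

m = 20                                     # interior grid points per direction
h = 1.0 / (m + 1)
omega = 2.0 / (1.0 + np.sin(np.pi * h))    # optimal SOR parameter for this problem

xs = np.linspace(h, 1 - h, m)
X, Y = np.meshgrid(xs, xs, indexing="ij")
f = -2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)  # so u = sin(pi x) sin(pi y)
u = np.zeros((m + 2, m + 2))               # includes zero Dirichlet boundary

for _ in range(300):
    for i in range(1, m + 1):
        for j in range(1, m + 1):
            # Gauss-Seidel value from the 5-point formula, then over-relax
            gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                         - h**2 * f[i-1, j-1])
            u[i, j] += omega * (gs - u[i, j])

exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
err = np.max(np.abs(u[1:-1, 1:-1] - exact))
print(err)    # O(h^2) discretization error remains after the iteration converges
```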
FASTER ITERATIVE TECHNIQUES FOR POISSON PROBLEM
One method that clearly outperforms SOR is ADI (Alternating-Direction Implicit): let $A = A_x + A_y$, where $A_x$ originates in the $x$-direction central differences and $A_y$ originates in the $y$-direction central differences.
Each of $A_x$ and $A_y$ represents an easily factored (tridiagonal) piece of the problem; introducing acceleration parameters $\alpha_k > 0$ (and choosing signs so that $A_x$ and $A_y$ are positive definite), we can write
\[ (A_x + \alpha_k I)\,\mathbf{u}^{(k+1/2)} = (\alpha_k I - A_y)\,\mathbf{u}^{(k)} + \mathbf{b}, \]
\[ (A_y + \alpha_k I)\,\mathbf{u}^{(k+1)} = (\alpha_k I - A_x)\,\mathbf{u}^{(k+1/2)} + \mathbf{b}, \]
so each half-step requires only tridiagonal solves.
The choice of the acceleration parameters $\alpha_k$ that beats SOR is given in the literature.
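A sketch of this ADI (Peaceman--Rachford) iteration on the model problem, with a single fixed parameter $\alpha = 1$ instead of a tuned sequence $\alpha_k$ (an illustrative simplification; dense solves stand in for the tridiagonal solves):

```python
import numpy as np

m = 10
K = (np.diag(2.0 * np.ones(m))
     - np.diag(np.ones(m - 1), 1) - np.diag(np.ones(m - 1), -1))
Ax = np.kron(np.eye(m), K)   # x-direction second differences (positive definite here)
Ay = np.kron(K, np.eye(m))   # y-direction second differences
A = Ax + Ay
I = np.eye(m * m)

b = np.ones(m * m)
alpha = 1.0                  # fixed acceleration parameter (illustrative choice)
u = np.zeros(m * m)
for _ in range(100):
    # half-step implicit in x, then half-step implicit in y
    half = np.linalg.solve(Ax + alpha * I, (alpha * I - Ay) @ u + b)
    u = np.linalg.solve(Ay + alpha * I, (alpha * I - Ax) @ half + b)

print(np.linalg.norm(A @ u - b))
```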
Another method (which is highly recommended) is the Conjugate Gradient method (see details in the Math 475A notes), which is applicable to symmetric positive definite $A$'s. It is very fast.
The conjugate gradient and the preconditioned conjugate gradient methods are widely available as mature code (see Netlib). These methods are part of a family called ``Krylov subspace methods.'' Other Krylov methods worth knowing about are GMRES (Generalized Minimal RESidual) and the bi-conjugate gradient methods.
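A textbook conjugate gradient sketch (plain CG, no preconditioning), applied to the positive definite form of the 5-point system:

```python
import numpy as np

def cg(A, b, tol=1e-10, maxit=500):
    """Plain conjugate gradient for SPD A (textbook version)."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

m = 12
K = (np.diag(2.0 * np.ones(m))
     - np.diag(np.ones(m - 1), 1) - np.diag(np.ones(m - 1), -1))
A = np.kron(np.eye(m), K) + np.kron(K, np.eye(m))   # SPD 5-point matrix
b = np.ones(m * m)
x = cg(A, b)
print(np.linalg.norm(A @ x - b))
```

In exact arithmetic CG terminates in at most $n$ steps; in practice, with a good preconditioner, far fewer iterations suffice.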
There are other types of methods for the solution of the Poisson equation and some of its close cousins: there is ``cyclic reduction and factorization,'' and there are the very fast ``multipole techniques.'' We will feature two more methods here, both very powerful and widely applicable: first we'll consider ``FFT-based Poisson solvers'' and then briefly discuss ``Multigrid methods.''
Remarks: As a general rule of thumb, if the problem you're solving is very large, you'll have to resort to high-performance solvers. The first thing you should do is decide whether your problem is big enough to warrant looking at high-performance algorithms; this includes foreseeing that in the future you might look at big problems. Alternatively, if you have to solve the same problem many times (whether it is small or large), you could save yourself time and storage by using a high-performance solver. In any event, it doesn't hurt to be aware that these sorts of methods exist, most likely in the form of mature code that you can adapt to your problem.