
Higher-Order Finite Difference Methods

Since the 5-point formula gives an ${\cal O}(\Delta x^2)$ method, one wonders whether it is worth going to higher order: take $\nabla^2U=f$ and truncate the finite difference approximation, using centered differences, at the next order:

$\displaystyle \frac{1}{(\Delta x)^2}\left[\delta^2_x+\delta^2_y-\frac{1}{12}(\delta^4_x+\delta^4_y)\right]u_{k,l}=f_{k,l}
$
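Recall why this works: Taylor expansion of the centered second difference gives

$\displaystyle \frac{1}{(\Delta x)^2}\,\delta^2_x u = \frac{\partial^2 u}{\partial x^2}+\frac{(\Delta x)^2}{12}\frac{\partial^4 u}{\partial x^4}+{\cal O}(\Delta x^4),
$

while $\delta^4_x u=(\Delta x)^4\,\partial^4 u/\partial x^4+{\cal O}(\Delta x^6)$, so the $-\frac{1}{12}(\delta^4_x+\delta^4_y)$ correction removes the ${\cal O}(\Delta x^2)$ error terms.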

The resulting computational cell for the differential operator is depicted in Figure 35.

Figure 35: Computational stencil for the differential operator, using the simple 9-point formula.

Although the error is ${\cal O}(\Delta x^4)$, this is not a popular method: it turns too many points into near-boundary ones (even on a square grid!), requiring laborious special treatment. More problematic, however, is that it yields systems that are considerably more expensive to solve!

ALTERNATIVELY: The ``Nine-Point Formula'':

$\displaystyle \frac{1}{(\Delta x)^2}\left(\delta^2_x+\delta^2_y+\frac{1}{6}\delta^2_x
\delta^2_y\right)u_{k,l}=f_{k,l}
$

The computational cell is given in Figure 36.

Figure 36: Computational stencil for the differential operator, using the proper 9-point formula.
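As a concrete illustration, here is a small Python sketch (assuming numpy; the function name and array layout are ours, not from the notes) that applies the 9-point operator to the interior points of a grid function:

\begin{verbatim}
import numpy as np

def nine_point(u, h):
    # (1/h^2) (dx^2 + dy^2 + (1/6) dx^2 dy^2) u at interior points.
    d2x = u[2:, 1:-1] - 2.0 * u[1:-1, 1:-1] + u[:-2, 1:-1]
    d2y = u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2]
    # dx^2 dy^2 u brings in the four corner neighbors:
    d2xd2y = (u[2:, 2:] + u[2:, :-2] + u[:-2, 2:] + u[:-2, :-2]
              - 2.0 * (u[2:, 1:-1] + u[:-2, 1:-1]
                       + u[1:-1, 2:] + u[1:-1, :-2])
              + 4.0 * u[1:-1, 1:-1])
    return (d2x + d2y + d2xd2y / 6.0) / h**2
\end{verbatim}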
In homework you will derive the truncation error to be

(163) $\displaystyle \frac{1}{(\Delta x)^2}\left(\delta^2_x + \delta^2_y + \frac{1}{6}\delta^2_x\delta^2_y\right)\phi-\nabla^2\phi =\frac{1}{12}(\Delta x)^2\nabla^4\phi + {\cal O}(\Delta x^4)$

i.e. the same leading-order error as the 5-point formula! So apparently nothing is gained by adding the four extra (diagonal) neighbor points.

In homework you will see that computation with the 5-point formula is consistent with an error decaying like ${\cal O}(\Delta x^2)$. You will find, however, that for the 9-point formula the error decays like ${\cal O}(\Delta x^4)$, in spite of the above error estimate.

Why this discrepancy? Because the computation above was performed on the Laplace equation $\nabla^2 U=0$, not the Poisson equation $\nabla^2U=f$: when $\nabla^2U=0$, the leading error term $\frac{1}{12}(\Delta x)^2\nabla^4U=\frac{1}{12}(\Delta x)^2\nabla^2(\nabla^2U)$ vanishes.

From (163), the 9-point formula is an ${\cal O}(\Delta x^4)$ approximation of the equation

(164) $\displaystyle \Big[\nabla^2+\frac{1}{12}(\Delta x)^2\nabla^4\Big]U=f$

let ${\cal L}_{\Delta x}\equiv I+\frac{1}{12}(\Delta x)^2\nabla^2$
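With this notation, (164) is just ${\cal L}_{\Delta x}\nabla^2 U=f$, since

$\displaystyle {\cal L}_{\Delta x}\nabla^2=\Big(I+\frac{1}{12}(\Delta x)^2\nabla^2\Big)\nabla^2=\nabla^2+\frac{1}{12}(\Delta x)^2\nabla^4.
$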

Note ${\cal L}^{-1}_{\Delta x}$ exists for sufficiently small $\Delta x$. Multiplying (164) by ${\cal L}^{-1}_{\Delta x}$ and setting $f=0$ recovers the Laplace case, for which we get an ${\cal O}(\Delta x^4)$ error.

We can exploit this fact as follows: for (164) with $f\ne 0$, multiply both sides by ${\cal L}^{-1}_{\Delta x}$ (for $\Delta x$ sufficiently small):

(165) $\displaystyle \nabla^2 U={\cal L}^{-1}_{\Delta x}f$

is a Poisson equation for which the $9$-point formula (with right-hand side $f_{k,l}$) produces ${\cal O}(\Delta x^4)$ error. This is not the equation we wanted to solve, but we can do the following: let

$\displaystyle {\tilde f}(x,y)={\cal L}_{\Delta x}f=f(x,y)+
\frac{1}{12}(\Delta x)^2
\nabla^2 f(x,y)
$

and use $\tilde f_{k,l}$ as the right-hand side instead. By (165), the 9-point formula then approximates, to ${\cal O}(\Delta x^4)$, the equation $\nabla^2U={\cal L}^{-1}_{\Delta x}\tilde f$, which is exactly $\nabla^2U=f$; in practice $\nabla^2 f$ is replaced by a centered difference, which perturbs this by only ${\cal O}(\Delta x^4)$ terms $\therefore$ it is a good approximation to the original problem!

So the ``modified 9-point scheme''

$\displaystyle \frac{1}{(\Delta x)^2}\left(\delta^2_x+\delta^2_y+\frac{1}{6}\delta^2_x\delta^2_y\right)u_{k,l}=\tilde f_{k,l}
$

is ${\cal O}(\Delta x^4)$.

This is an example of ``preconditioning.'' It is a very powerful way to get either better truncation error or a faster solution of linear systems, at minimal extra computational cost. See homework.
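A minimal sketch of the modified right-hand side (assuming numpy; the helper name is ours, and $\nabla^2 f$ is replaced by its 5-point difference, which is accurate enough since the correction already carries a factor $(\Delta x)^2$):

\begin{verbatim}
import numpy as np

def modified_rhs(f, h):
    # f_tilde = f + (h^2/12) * Laplacian(f) at interior points,
    # with Laplacian(f) replaced by the 5-point difference.
    lap_f = (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
             - 4.0 * f[1:-1, 1:-1]) / h**2
    return f[1:-1, 1:-1] + (h**2 / 12.0) * lap_f
\end{verbatim}

Feeding this $\tilde f_{k,l}$ to the 9-point discrete system (e.g. the operator sketched earlier) is the modified 9-point scheme.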

SOLVING THE RESULTING SYSTEM

Define the resulting system

(166) $\displaystyle A {\bf u}={\bf b}$

Let's look at the structure of $A$ and from there decide on methods for solving the linear system. For both the 5-point and the 9-point formulas it is clear that the matrices are symmetric and diagonally dominant. For small problems, direct solution is fine. For large problems, the fastest algorithms have at their core a conjugate gradient iterative solver.

Let's consider the 5-point formula in some detail. Schematically, the matrix can be expressed in terms of smaller matrices:

$ A=\left[
\begin{array}{ccccc}
C & I & & &\\
I & C & I & &\\
 & \ddots & \ddots & \ddots &\\
 & & I & C & I\\
 & & & I & C
\end{array}\right]$

where $C$ is tridiagonal and $I$ is the identity matrix:

$ I=\left[
\begin{array}{ccccc}
1 & & & &\\
 & 1 & & &\\
 & & \ddots & &\\
 & & & 1 &\\
 & & & & 1
\end{array}\right]$ and $ C = \left[
\begin{array}{ccccc}
-4 & 1 & & &\\
1 & -4 & 1 & &\\
 & \ddots & \ddots & \ddots &\\
 & & 1 & -4 & 1\\
 & & & 1 & -4
\end{array}\right]$

The entries along each diagonal are all equal: $A$ is said to be TST (Toeplitz Symmetric Tridiagonal) in block form, and each block is itself TST.

See the class notes from 475A for a review of solving (166). Direct method: OK for small matrices. Use a Cholesky factorization $A=LL^T$. Such a factorization will take a time roughly proportional to the number of nonzero elements in the main diagonal. For sparse matrices, this is not too big a deal in terms of storage.
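The block-TST structure above can be reproduced with Kronecker products; here is a sketch using scipy.sparse (the function name and grid size are ours):

\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def five_point_matrix(m):
    # Block-TST matrix of the 5-point formula on an m-by-m interior grid:
    # kron(I, T) + kron(T, I) has diagonal blocks C = T - 2I
    # (diagonal entries -4) and off-diagonal blocks I, as above.
    T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(m, m))
    I = sp.identity(m)
    return sp.kron(I, T) + sp.kron(T, I)

m = 50
A = five_point_matrix(m).tocsc()
b = np.full(m * m, 1.0)
u = spla.spsolve(A, b)   # sparse direct solve: fine at this size
\end{verbatim}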

Iterative Methods

Incomplete $LU$: split $A$ into its tridiagonal part and the rest; the iteration is carried out on the tridiagonal part. This is also called the ``line relaxation method.'' The method is cheap and easy, but not the fastest.

ILU FACTORIZATION (Incomplete LU)

Suppose you want to solve

(167) $\displaystyle A {\bf v}={\bf b}, \quad {\bf v}\in \mathbb{R}^n,\ {\bf b} \in \mathbb{R}^n, \quad A \mbox{ an } n\times n \mbox{ matrix.}$

Recall:

A linear one-step stationary scheme leads to

(168) $\displaystyle {\bf x}^{k+1}=T{\bf x}^k+{\bf c}, \quad {\bf x}^k \in \mathbb{R}^n,$

where $k$ is the iteration index, constructed so that $\displaystyle \lim_{k\rightarrow\infty} {\bf x}^{k}={\bf v}$. More generally, an iterative scheme computes $ {\bf x}^{k+1}=t_k({\bf x}^0, {\bf x}^1, {\bf x}^2, \cdots, {\bf x}^k)$, $k=0, 1, 2, \cdots$, with

(169) $\displaystyle t_k : \underbrace{\mathbb{R}^n\times \mathbb{R}^n\times \cdots \times \mathbb{R}^n}_{k+1}\rightarrow \mathbb{R}^n, \quad k=0, 1, 2, \cdots$

Let's go back to (167): suppose that $A$ is of the form $A=\tilde A-E$, where the underlying $LU$ factorization of $\tilde A$ (nonsingular) can be done easily. For example, $\tilde A$ may be banded (or, in the context of elliptic problems, TST). Moreover, assume $E$ is small compared to $\tilde A$. Rewrite (167) as:

$\displaystyle \tilde A{\bf v}=E{\bf v}+{\bf b}
$

which suggests the iterative scheme

(170) $\displaystyle \tilde A{\bf x}^{k+1}=E {\bf x}^k+{\bf b}$

This is the ILU (incomplete $LU$) iteration. It requires a single $LU$ (or Cholesky, if $\tilde A$ is symmetric) factorization, which can be reused at each iteration.

We can write (170) in the form of (168): let $T=\tilde A^{-1}E$ and $ {\bf c}=\tilde A^{-1}{\bf b}$. To check that the fixed point of the iteration is the true solution ${\bf v}=A^{-1}{\bf b}$:

$\displaystyle \therefore \quad (I-T)A^{-1}{\bf b} = (I-\tilde A^{-1}E)(\tilde A-E)^{-1}{\bf b} ={\tilde A}^{-1}(\tilde A-E)(\tilde A-E)^{-1}{\bf b} = \tilde A^{-1}{\bf b}={\bf c},
$

so $A^{-1}{\bf b}=(I-T)^{-1}{\bf c}$, which is precisely the limit of the iteration.

Of course, numerically, we solve the system in the form (170), and not via the inverses above.

$\Box$

The ILU iteration (170) is an example of a ``regular splitting.'' More generally, write $A=P-N$, where $P$ is a nonsingular matrix, and build

(171) $\displaystyle P{\bf x}^{k+1}=N{\bf x}^k+{\bf b} \quad k=0, 1, 2 \cdots$

where systems with $P$ are simple to solve (using $LU$ or other means). Note that, formally, $T=P^{-1}N=P^{-1}(P-A)=I-P^{-1}A$ and $ {\bf c}=P^{-1} {\bf b}$.

Theorem: Suppose both $A$ and $P+P^T-A$ are symmetric and positive definite. Then (171) converges.

Proof: See numerical linear algebra book. $\Box$
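As a concrete illustration, here is a sketch of the regular-splitting iteration (171) with $P$ taken as the tridiagonal part of $A$ (the line-relaxation/ILU idea above); it assumes scipy, and the helper names and stopping test are ours:

\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def splitting_solve(A, b, tol=1e-8, max_iter=5000):
    # Iterate P x^{k+1} = N x^k + b, with A = P - N and
    # P = tridiagonal part of A.
    A = A.tocsr()
    P = sp.diags([A.diagonal(-1), A.diagonal(0), A.diagonal(1)],
                 [-1, 0, 1], format="csc")
    N = P - A
    solve_P = spla.factorized(P)   # single sparse LU, reused each iteration
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = solve_P(N @ x + b)
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(b):
            return x_new
        x = x_new
    return x
\end{verbatim}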

Remarks: What other techniques can we use? Among the classical iterative techniques we could use Jacobi, Gauss-Seidel, SOR, or Conjugate Gradient. SOR would be a good choice: since we can easily compute the spectral radius of the iteration matrix that results from the 5-point scheme, we can tune the relaxation parameter and get speeds that exceed Jacobi and Gauss-Seidel. Conjugate Gradient is the other logical choice. Later on we'll introduce another fast solver technique, called ``multigrid.'' Are there faster ways? One of the most important developments in linear algebra in recent years has been not so much in new solution methods (although plenty of new ideas and schemes have been developed) as in the algorithmic aspects of solver techniques: computer-science solutions that increase the speed, or the storage efficiency, of general linear algebra solvers. See Demmel's linear algebra book for full details on the latest techniques.

FASTER ITERATIVE TECHNIQUES FOR POISSON PROBLEM

One method that clearly outperforms SOR is ADI (Alternating Direction Implicit): let $A=A_x+A_y$, where $A_x$ originates in the $x$-direction central differences and $A_y$ in the $y$-direction central differences.

Each half of the iteration is a splitting scheme of the form $ \tilde A {\bf x}^{k+1}=E{\bf x}^{k}+{\bf b}$, $k=0, 1, 2, \ldots$, applied to the problem $A{\bf x}={\bf b}$:

$\displaystyle (A_y-2I){\bf x}^{k+1} = -(A_x+2I){\bf x}^k+{\bf b}, \quad k=0, 1, 2, \cdots \qquad \mbox{(column-wise line relaxation)}$

$\displaystyle (A_x-2I){\bf x}^{k+1} = -(A_y+2I){\bf x}^k+{\bf b}, \quad k=0, 1, 2, \cdots \qquad \mbox{(row-wise line relaxation.)}$

So generally, choose parameters $\alpha_0, \alpha_1, \cdots $ and iterate

ADI: $\displaystyle \left\{
\begin{array}{ll}
(A_x-\alpha_{2k}I){\bf x}^{2k+1}=-(A_y+\alpha_{2k}I){\bf x}^{2k}+{\bf b} & \\
(A_y-\alpha_{2k+1}I){\bf x}^{2k+2}=-(A_x+\alpha_{2k+1}I){\bf x}^{2k+1}+{\bf b} & \quad k=0, 1, \cdots
\end{array}\right.$
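A sketch of these sweeps in Python (assuming scipy; we work on the $m\times m$ array of unknowns rather than the stacked vector, and the function names, zero Dirichlet data, and cycling of the parameters are ours):

\begin{verbatim}
import numpy as np
from scipy.linalg import solve_banded

def apply_T(U, axis):
    # Apply T = tridiag(1, -2, 1) along the given axis (zero boundary data).
    V = -2.0 * U
    if axis == 0:
        V[1:, :] += U[:-1, :]
        V[:-1, :] += U[1:, :]
    else:
        V[:, 1:] += U[:, :-1]
        V[:, :-1] += U[:, 1:]
    return V

def adi(F, h, alphas, num_cycles=20):
    # Solve (A_x + A_y) U = h^2 F, with A_x, A_y = tridiag(1, -2, 1)
    # acting along the x- and y-directions of U.
    m = F.shape[0]
    U = np.zeros_like(F)
    G = h**2 * F
    for _ in range(num_cycles):
        for j, a in enumerate(alphas):
            ab = np.zeros((3, m))     # banded storage of T - a*I
            ab[0, 1:] = 1.0           # superdiagonal
            ab[1, :] = -2.0 - a       # diagonal
            ab[2, :-1] = 1.0          # subdiagonal
            if j % 2 == 0:
                # x-sweep: (A_x - aI) U_new = -(A_y + aI) U + G,
                # one tridiagonal solve per grid line.
                rhs = -(apply_T(U, axis=1) + a * U) + G
                U = solve_banded((1, 1), ab, rhs)
            else:
                # y-sweep: (A_y - aI) U_new = -(A_x + aI) U + G.
                rhs = -(apply_T(U, axis=0) + a * U) + G
                U = solve_banded((1, 1), ab, rhs.T).T
    return U
\end{verbatim}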

The choice of $\{\alpha_k\}$ that beats SOR is given in

Wachspress (1966), ``Iterative Solution of Elliptic Systems and Applications to the Neutron Diffusion Equations of Reactor Physics,'' Prentice-Hall. Also, see Demmel's Linear Algebra book.

Another method (highly recommended) is the conjugate gradient method (see details in the Math 475A notes), which is applicable to symmetric positive definite $A$'s. It is very fast.

The conjugate gradient and the preconditioned conjugate gradient methods are widely available as mature code (see Netlib). These methods are part of a family called ``Krylov space methods.'' Other Krylov-space methods worth knowing about are GMRES (Generalized Minimal Residual) and the bi-conjugate gradient methods.
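For example, with scipy (reusing $A$ and ${\bf b}$ from the earlier sketch; note that the 5-point matrix as written, with $-4$ on the diagonal, is negative definite, so we hand CG the equivalent symmetric positive definite system):

\begin{verbatim}
import scipy.sparse.linalg as spla

# A, b assembled as in the earlier sketch; (-A) is symmetric positive
# definite, so solve (-A) u = -b by conjugate gradients.
u, info = spla.cg(-A, -b)
# info == 0 signals convergence; a preconditioner can be passed via M=...
\end{verbatim}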

There are other types of methods for the solution of the Poisson equation and some of its close cousins: there is ``cyclic reduction and factorization,'' and there are the very fast ``multipole techniques.'' We will feature two more methods here, both of which are very powerful and widely applicable. First we'll consider ``FFT-based Poisson solvers'' and then briefly talk about ``multigrid methods.''

Remarks: As a general rule of thumb, if the problem you're solving is very large, you'll have to resort to high-performance solvers. But the first thing you should do is decide whether your problem is big enough to warrant looking at high-performance algorithms; this includes foreseeing that in the future you might look at big problems. Alternatively, if you have to solve the same problem millions of times (well, many times), whether it is small or large, you could save yourself time and storage by using a high-performance solver. In any event, it doesn't hurt to be aware that these sorts of tools exist, most likely in the form of mature code that you can adapt to your problem.


Juan Restrepo 2003-05-02