Complex Magic, Part 3: Function Evaluation via Integration

Accurate evaluation of the function {f(x) = (e^x - 1)/x} is a classic problem in numerical analysis. This function can be seen as an {O(h)} approximation to the first derivative of {e^x} at {x = 0}, {(e^x)'|_{x=0} = \lim_{h \rightarrow 0} (e^h - 1)/h}. More generally, {f(x)} is the divided difference of {e^x} at {x} and {0}, and this fact, combined with our discussion of catastrophic cancellation due to roundoff in divided differences in a previous post (Complex Magic, Part 1), should trigger an alarm.

The “usual” accurate method to evaluate {f(x)} is to instead evaluate {g(x) = (e^x - 1)/\ln(e^x)}, unless {e^x = 1} in floating point, in which case the result is set to {1}, the value {f(0)} is defined to have, say for continuity. This is the method employed in MATLAB’s expm1, whose implementation is based on an algorithm due to William Kahan.

Why do {f(x)} and {g(x)}, which are equal in exact arithmetic for all {x \neq 0}, evaluate differently in floating point arithmetic? The reason may be rigorously explained with a floating point error analysis; see Nick Higham’s ASNA, for example. The difference between {g(x)} and {f(x)}, in floating point arithmetic, is that the former, by design, enjoys a cancellation of errors in the division that the latter does not. Note that neither the numerator {e^x - 1} nor the denominator {\ln(e^x)} of {g(x)} can be evaluated accurately in floating point arithmetic. For x = 9.000000000000000e-015, using double precision in MATLAB R2009a on a machine running Windows XP with an Intel Core 2 Duo E8400 CPU, the former evaluates to 9.103828801926284e-015 and the latter to 9.103828801926243e-015, while in exact arithmetic they are, up to 16 significant digits, 9.000000000000040e-015 and 8.999999999999999e-015, respectively. The ratios, however, are both equal to 1.000000000000004 up to 16 significant digits!
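To make the recasting concrete, here is a minimal sketch of Kahan’s trick as just described (an illustration only, not MATLAB’s actual expm1 implementation):


x = 9e-15;
y = exp(x);
if y == 1                       % e^x rounded to exactly 1: return the limiting value
    f_acc = 1;
else
    f_acc = (y - 1)/log(y);     % errors of numerator and denominator cancel in the division
end
f_naive = (exp(x) - 1)/x;       % suffers the cancellation in the numerator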

From the title of this post, and the fact that the method just illustrated was referred to as “usual,” it should come as no surprise that another method is to be discussed here. The underlying idea of this other method, due to [KassamTrefethen2005], is the same as the one discussed in Complex Magic, Part 2: Cauchy’s integral formula, only this time Cauchy helps with calculating the function itself, not its derivative,

\displaystyle  a_0 = f(z_0) = \frac{1}{2\pi i} \oint_\gamma {f(z) (z-z_0)^{-1}}\, dz. \ \ \ \ \ (1)

An elegant heuristic proof of (1), based on a fluid flow interpretation from Mark Levi’s Mathematical Mechanic [Levi2009], can be given by recasting the definition of holomorphicity of the function {f(z)} at {z_0}, namely that there exists {f'(z_0)} such that

\displaystyle f(z) = f(z_0) + f'(z_0)(z - z_0) + o(z - z_0),

(in other words, that there exists a linear approximation to {f(z)} at {z_0}) as a linear combination: writing {f(z_0) = a + ib} and dropping the {o(z - z_0)(z - z_0)^{-1} = o(1)} term, we get a source generating function, {a(z - z_0)^{-1}}, a vortex generating function, {ib(z - z_0)^{-1}}, and an ideal (incompressible and irrotational) flow generating function, {f'(z_0)},

\displaystyle f(z)(z - z_0)^{-1} = a(z - z_0)^{-1} + ib(z - z_0)^{-1} + f'(z_0),

noting that the contributions of these three terms to the contour integral {\oint_\gamma {f(z) (z-z_0)^{-1}}\, dz} are {a 2\pi i}, {ib 2\pi i}, and {0}, respectively, and finally summing up these contributions to get

\displaystyle \oint_\gamma {f(z) (z-z_0)^{-1}}\, dz = a 2\pi i + ib 2\pi i + 0 = 2\pi i f(z_0).
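For reference, the two elementary contour integrals used in these contributions are, on a circle {z = z_0 + \rho e^{i\theta}} traversed once counterclockwise,

\displaystyle \oint_\gamma (z - z_0)^{-1}\, dz = \int_0^{2\pi} \frac{i \rho e^{i\theta}}{\rho e^{i\theta}}\, d\theta = 2\pi i \qquad \text{and} \qquad \oint_\gamma f'(z_0)\, dz = 0.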

– The word “generating” is important in the above “proof.” The source, vortex, and ideal flows themselves arise from treating the complex conjugates of the aforementioned generating functions as vector fields.
– Donald Knuth’s “{O}” Calculus letter to Notices of the American Mathematical Society [PDF, TeX] offers a lively discussion on {O}, and {o}, notation.

With only a slight modification (the highlighted lines) to the diffc function of Complex Magic, Part 2, we can get it to work for the zeroth derivative (i.e., function evaluation), too.


function [S_M,delta_M] = diffc(f,varargin)

% Default expansion point, derivative order, contour radius, and number of
% terms in the partial sum.
[x0,n,r,M] = deal(0,1,.1,3);

optArgs    = {x0,n,r,M};
emptyArgs  = cellfun(@isempty,varargin);
[optArgs{~emptyArgs}] = varargin{~emptyArgs};
[x0,n,r,M] = optArgs{:};

f_x0      = f(x0);
[G_M,S_M] = deal(0);
for m = 1:M

    if n == 0
        % Zeroth derivative: a_0 is simply the mean of g over the contour,
        % so take all M points in a single pass, with unit weight and
        % nothing subtracted (the directly evaluated f_x0 may itself be
        % inaccurate, which is the very reason to use this routine).
        mn = M;
        mobius_m = 1;
        f_x0 = 0;
    else
        mn = m*n;
        mobius_m = mobius(m);
    end

    t   = linspace(1/(mn),1,mn);    % mn equispaced points on the circle
    g   = real( f( x0 + r * exp(1i * 2*pi * t) ) );

    G_M = max( abs( [G_M f_x0 g] ) );            % largest function value so far
    S_M = S_M + mobius_m * ( mean(g) - f_x0 );   % Moebius-weighted partial sum

    if n == 0, break, end
end

delta_M = eps(G_M) * G_M/S_M;        % estimate of the relative roundoff error
S_M     = factorial(n)/(r^n) * S_M;  % scale to obtain the derivative

– Apropos of nothing, note that because linspace(d1,d2,n)‘s implementation outputs [d1+(0:n-2)*(d2-d1)/(floor(n)-1) d2], there is always at least one point, namely d2, returned by linspace, even if n = 0.
– Also note that the method MATLAB apparently uses for printing the values of symbolic variables in the Command Window differs from the one it uses for double variables: there is no indentation when symbolic variables are output to the Command Window. Compare x = 1, y = sym(1), z = vpa(1).


f = @(x) (exp(x) - 1)./x;

x_0 = 1e-18; x_vpa = vpa(x_0,32); n = 0; r = .5; M = 16;
tic, f_vpa         = f(x_vpa),              toc
tic, [f_M,delta_M] = diffc(f,x_0,n,r,M),    toc

We see from the output of the above snippet that, using variable precision arithmetic (vpa) with 32 digits, the value of {(e^x - 1)/x} for x = 1e-18 is 1.0000000000000000004999535947385, and that on a contour of radius {r = 0.5}, using only {M = 16} points for the trapezoidal quadrature, diffc achieves 1.000000000000000, which agrees with the vpa evaluation in all 16 displayed digits.

The integrand in (1) was written as {f(z)(z-z_0)^{-1}}, and not the conventional {\frac{f(z)}{(z-z_0)}}. Why would we want to do that?

-----BEGIN DIGRESSION-----

Let us digress a bit (and not to Neverland!). According to Gilbert Strang, there are only two functions whose power series one should know by heart: {e^t} and {1/(1-t)}. Let’s not touch the former, lest we digress too much and get lost! As for the latter, recasting it as {(1-t)^{-1}} allows its power series to be written in the familiar form

\displaystyle  (1-t)^{-1} = 1 + t + t^2 + t^3 + \cdots, \ \ \ \ \ (2)

for {t}‘s that can come from different algebraic settings; in particular, {t} can be a scalar or a square matrix and (2) stays intact. As an aside, it is noted that (2) may be used, formally, to discover the formula for {(1-ab )^{-1}} for the more general case of one-sided inverses in rings. More on this may be read in How to Construct {(1-ab )^{-1}}.
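For completeness, the identity this formal power series manipulation suggests (presumably the one the linked note arrives at) is

\displaystyle  (1-ab)^{-1} = 1 + a(1-ba)^{-1}b,

which can be verified directly: {(1-ab)\left(1 + a(1-ba)^{-1}b\right) = 1 - ab + a(1-ba)(1-ba)^{-1}b = 1 - ab + ab = 1}.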

Another place this kind of recasting can yield a unified treatment is in projections. For {a} and {b} elements of a vector space {V} over the complex field {\mathbb{C}}, the projection of {b} onto {a} is usually written as {p = \frac{a^H b}{a^H a} a}, with {a^H} denoting the Hermitian (complex conjugate) transpose of {a}. Recasting this as {p = a (a^H a)^{-1} a^H b} makes this expression identical to the one that holds in the more general case when {a} is a matrix and the projection is onto a subspace of {V} spanned by the columns of {a} (that is, the column space of {a}).
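A minimal MATLAB sketch of this unification (illustrative values only):


b  = [1; 0; 0];

a  = [1; 2; 2];                  % a single vector...
p1 = a * ((a'*a) \ (a'*b));      % projection of b onto span(a)

a  = [1 0; 0 1; 1 1];            % ...or a matrix whose columns span a subspace
p2 = a * ((a'*a) \ (a'*b));      % projection of b onto the column space of a

Note that a' is precisely {a^H} in MATLAB, so the very same line of code covers the complex case as well.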

-----END DIGRESSION-----

A similar rationale is behind recasting the integrand in (1) from the usual division to multiplication by the inverse: the latter holds in the more general case when {z_0} is a square matrix, with the slight modification of {z} to {z1}, where {1} is the multiplicative identity (the identity matrix, in the matrix case). The contour {\gamma} has to enclose the eigenvalues of {z_0} (if {z_0} is a scalar, its eigenvalue is {z_0} itself, as {z_0 = [z_0]_{1\times 1}}).
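Written out for a matrix argument {Z} (this is the standard holomorphic functional calculus definition, and the form computed by the fevalc function below), (1) becomes

\displaystyle  f(Z) = \frac{1}{2\pi i} \oint_\gamma {f(z) (zI - Z)^{-1}}\, dz.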

Why is the operator {(.)^{-1}(e^{(.)} - 1)} so important? Because it appears in many important numerical methods; one of them is the family of Exponential Time Differencing (ETD) schemes (see, for example, [CoxMatthews2002]), which are particularly useful when solving stiff PDEs. Discretizing the spatial dimension of a PDE yields a system of ODEs of the form

\displaystyle  u_t = Au + F(u,t), \ \ \ \ \ (3)

where {A} and {F} are the discretized linear and nonlinear parts of the original PDE, respectively. (For instance, supposing that {u} denotes state variables of a mechanical system, {Au} and {F} could be the normalized linear and nonlinear forces acting on the system, respectively. If the constitutive laws are imposed, via a matrix {C} and a nonlinear function {C(u,t)} for the linear and nonlinear parts of (3), normalization can be dropped from the previous statement.) When the eigenvalues of {A} have very different magnitudes, either because there is a very negative eigenvalue, indicating strong damping (exponential decay) as in dissipative systems, or because there is a large imaginary eigenvalue, indicating rapid oscillations as in dispersive systems, the system (3) is categorized as stiff. (It is assumed that {A} is “normal” so that analyzing its eigenvalues gives us a true prediction of what it represents. Let’s not enter the world of pseudospectra at this time.) It is noted that a differential equation has attached to it a differential operator whose definition is not complete without the necessary initial conditions. As a result, the notion of stiffness depends not only on the bare system of differential equations but also on the initial conditions. Furthermore, many problems are not initially stiff; they become stiff only when the solution approaches a certain state.

In a stiff system of ODEs two restrictions apply to the time steps a numerical solver can take; one is imposed by stability, dictated by the largest eigenvalue (in magnitude), just as in the case of a single ODE, and the other is imposed by the desired accuracy. The former pushes the time step to be much smaller than is required by the latter, and this is the dilemma. A scheme not specially designed to handle stiffness is not aware of these two different time scales; it would take very small steps and as a result would take an awfully long time to solve a stiff problem. It is important to note that this (time) inefficiency is the only problem such a scheme would have; other things being equal, stiffness does not render a nonstiff scheme unstable or divergent. (See Cleve Moler’s “corner” article Stiff Differential Equations and Gilbert Strang’s treasure trove “Introduction to Applied Mathematics” for more details.)
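A minimal sketch of this behavior, on a hypothetical 2-by-2 example (not taken from the references): with eigenvalues {-1} and {-1000}, the fast mode decays almost immediately, yet it still dictates the step size of the nonstiff solver.


A   = [-1 0; 0 -1000];
rhs = @(t,u) A*u;
sol45  = ode45( rhs,[0 10],[1; 1]);    % nonstiff solver: thousands of steps
sol15s = ode15s(rhs,[0 10],[1; 1]);    % stiff solver: far fewer steps
fprintf('ode45: %d steps, ode15s: %d steps\n', ...
        numel(sol45.x) - 1, numel(sol15s.x) - 1)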

The discussion of ETD here is based on [CoxMatthews2002], which is a very nice read. The underlying idea of ETD schemes is simple: multiply (3) by the integrating factor {e^{-At}} and approximate the resulting integral. ({e^{-At}} is an integrating factor for the linear part of (3), so the integral to be approximated comes from the nonlinear part.) Over a single time step of length {h} from {t_n} to {t_{n+1}} this gives

\displaystyle  u(t_{n+1}) = e^{Ah}u(t_n) + e^{Ah} \int_0^h e^{-A\tau}F( u(t_n + \tau),t_n + \tau ) \, d\tau. \ \ \ \ \ (4)

A first order approximation to {F}, freezing it at {F_n = F(u_n,t_n)} over the step so that the integral in (4) evaluates to {e^{Ah}\int_0^h e^{-A\tau}F_n\, d\tau = A^{-1}(e^{Ah} - I)F_n}, already brings out the notorious {(.)^{-1}(e^{(.)} - 1)}:

\displaystyle  u_{n+1} = e^{Ah}u_n + A^{-1}(e^{Ah} - I)F_n, \ \ \ \ \ (5)

where {u_n} denotes the numerical approximation to {u(t_n)}. With higher order approximations to {F}, we get higher order versions of {(.)^{-1}(e^{(.)} - 1)}, and the resulting cancellations due to roundoff are more severe. For example, here is Cox and Matthews’ ETD4RK, a fourth-order ETD scheme with Runge-Kutta time stepping:

\displaystyle  \begin{array}{rcl}  a_n & = & e^{Ah/2}u_n + A^{-1} (e^{Ah/2} - I) N(u_n,t_n),\\ b_n & = & e^{Ah/2}u_n + A^{-1} (e^{Ah/2} - I) N(a_n,t_n + h/2),\\ c_n & = & e^{Ah/2}a_n + A^{-1} (e^{Ah/2} - I)(2N(b_n,t_n + h/2) - N(u_n,t_n)),\\ u_{n+1} & = & e^{Ah}u_n + h^{-2} A^{-3}\{ [ -4 - Ah + e^{Ah} ( 4 - 3Ah + (Ah)^2 ) ] N(u_n,t_n)\\ & & {} + 2[ 2 + Ah + e^{Ah} (-2 + Ah) ]( N(a_n,t_n + h/2) + N(b_n,t_n + h/2) )\\ & & {} + [ -4 - 3Ah - (Ah)^2 + e^{Ah}(4 - Ah) ] N(c_n,t_n + h)\}. \end{array}

(The terms multiplying powers of {A^{-1}} in the above equations are prone to severe cancellation due to roundoff.)

We are now ready to write a simple function fevalc that computes (1) for the general case that {z_0} is a matrix:


function f_z0 = fevalc(f,varargin)

% Default expansion point, contour radius, and number of quadrature points.
[z0,r,M]  = deal(0,.1,16);
optArgs   = {z0,r,M};
emptyArgs = cellfun(@isempty,varargin);
[optArgs{~emptyArgs}] = varargin{~emptyArgs};
[z0,r,M] = optArgs{:};

I    =   eye(size(z0));
f_z0 = zeros(size(z0));

% M points on the upper half of the circle |z| = r; by symmetry the lower
% half would contribute the complex conjugates, hence the real(...)/M below.
zs = r * exp( 1i*pi*( ((1:M) - .5 )/M ) );
for j = 1:M
    z = zs(j);
    zI_z0_inv = inv(z*I - z0);          % the resolvent (zI - z0)^(-1)
    % the extra factor z comes from dz = iz dtheta on the circle
    f_z0 = f_z0 + f(z) * zI_z0_inv * z; %#ok<MINV>
end
f_z0 = real(f_z0)/M;

Note that for the special case when {z_0} is a real scalar, this function amounts (as a quadrature for {f(z_0)}) to the one-liner zs = z0 + r * exp( 1i*pi*( ((1:M) - .5 )/M ) ); f_z0 = real(mean(f(zs))), in which centering the contour at {z_0} makes the factor {z (z - z_0)^{-1}} drop out of the quadrature.
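As a quick sanity check, the scalar case ties back to the expm1 example at the top of this post (the parameter choices here are illustrative):


f   = @(x) (exp(x) - 1)./x;
f_0 = fevalc(f,1e-18,.5,16)    % ~1.000000000000000, while f(1e-18) returns 0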

For the sake of demonstration, let us calculate one of the “troubled” terms (or “troublesome,” depending on your viewpoint!), say {h^{-2} A^{-3} [-4 - Ah + e^{Ah} ( 4 - 3Ah + (Ah)^2 ) ]}, in the ETD4RK scheme above, when the time step is {h = 1/10} and {A} is the second-order Chebyshev differentiation matrix for a problem with constant Dirichlet boundary conditions:


function ETD4RK_term_eval
N  = 5;
D  = cheb(N);
h  = 1/10;
D2 = D^2; D2 = .01*D2(2:N,2:N);    % chop the border rows and columns
Ah = h*D2;
% Coefficient function, written for a scalar argument z. (When a matrix is
% substituted below, exp and the scalar shifts act elementwise, which only
% compounds the damage already done by the cancellations.)
f  = @(z) ( h*z^(-3) * (-4 - z + exp(z)*(4 - 3*z + z^2) ) );

f_A   = f(Ah);                     % "direct substitution"
f_A_c = fevalc(f,Ah,20,10);        % contour integration with r = 20, M = 10

function [D,x] = cheb(N)

if N == 0, D = 0; x = 1; return, end
x = cos(pi*(0:N)/N)';
c = [2; ones(N-1,1); 2].*(-1).^(0:N)';
X = repmat(x,1,N+1);
dX = X - X';
D  = (c*(1./c)')./(dX + (eye(N+1)));
D  = D - diag(sum(D,2));

(The border rows and columns are being chopped off from D2 to impose the constant boundary conditions. For more details about the implementation of boundary conditions and how the function cheb generates the Chebyshev differentiation matrix, see [Trefethen2000].)

This is {f(A)} calculated by direct substitution, which is completely off:

{-3.0302\times 10^{7}} {-3.0086\times 10^{7}} {-3.0085\times 10^{7}} {-3.0302\times 10^{7}}
{-9.0451\times 10^{7}} {-8.9806\times 10^{7}} {-8.9806\times 10^{7}} {-9.0451\times 10^{7}}
{-9.0451\times 10^{7}} {-8.9806\times 10^{7}} {-8.9806\times 10^{7}} {-9.0451\times 10^{7}}
{-3.0302\times 10^{7}} {-3.0085\times 10^{7}} {-3.0086\times 10^{7}} {-3.0302\times 10^{7}}

And this is {f(A)} calculated using fevalc:

{-1.7858\times 10^{5}} {-1.0354\times 10^{2}} {3.0126\times 10^{1}} {-1.8017\times 10^{1}}
{-5.9729\times 10^{1}} {-1.7876\times 10^{5}} {-4.7287\times 10^{1}} {1.5536\times 10^{1}}
{1.5536\times 10^{1}} {-4.7287\times 10^{1}} {-1.7876\times 10^{5}} {-5.9729\times 10^{1}}
{-1.8017\times 10^{1}} {3.0126\times 10^{1}} {-1.0354\times 10^{2}} {-1.7858\times 10^{5}}

Of course, as mentioned before, the calculation in ETD4RK_term_eval is for demonstration only. The “troubled” terms in the ETD4RK scheme need not, and in fact should not, be calculated one by one using fevalc, as the inverse matrix {(z1 - z_0)^{-1}} is shared among them and need be calculated only once. Furthermore, because these terms have powers of {z^{-1}}, the multiplication by {z} in fevalc can be accounted for by reducing the power of {z^{-1}} by {1}, instead of actually doing the multiplication. Refer to [KassamTrefethen2005] for more details. As an aside, it is noted that this is one of the very rare cases in which the inverse matrix itself is needed; no system of linear equations is being solved, and as a result what inv is doing cannot be achieved using the backslash operator. Another case in which the inverse matrix itself is needed and cannot be replaced by the backslash operator is calculating the standard errors of the estimates, in an identification problem, based on the inverse of the Fisher information matrix.
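To sketch what that sharing might look like (an illustration under the same setup as ETD4RK_term_eval, not the actual [KassamTrefethen2005] code), here are the three {h^{-2} A^{-3}[\ldots]} coefficient matrices of the {u_{n+1}} formula computed with one resolvent per quadrature point; the factor {z} from {dz = iz\, d\theta} is absorbed by lowering the power of {z^{-1}} from 3 to 2:


r = 20; M = 32; I = eye(size(Ah));
[c1,c2,c3] = deal(zeros(size(Ah)));
for z = r*exp( 1i*pi*( ((1:M) - .5)/M ) )   % upper half circle (Ah is real)
    R  = inv(z*I - Ah);                     % one resolvent, shared by all terms
    c1 = c1 + R * ( h*(-4 - z + exp(z)*(4 - 3*z + z^2))/z^2 );
    c2 = c2 + R * ( h*2*(2 + z + exp(z)*(-2 + z))/z^2 );
    c3 = c3 + R * ( h*(-4 - 3*z - z^2 + exp(z)*(4 - z))/z^2 );
end
c1 = real(c1)/M; c2 = real(c2)/M; c3 = real(c3)/M;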


Complex Magic, Part 2: Differentiation via Integration

In the previous post, Complex Magic, Part 1, the use of complex variables in approximating the first derivative of a holomorphic function was discussed. This post delves deeper and introduces formulas suitable for approximating the {n}th derivative of a holomorphic function.

As elaborate as they might seem, the ideas to be discussed are based on two familiar topics: Cauchy’s integral theorem and quadratures.

Cauchy’s integral formula, in its generalized form, states that if {f} is a holomorphic function in a neighborhood which contains a closed disk with the boundary {\gamma} (a closed rectifiable curve with winding number one about {z_0}), then for every point {z_0} in the interior of that disk

\displaystyle  a_n = {f^{(n)}(z_0) \over n!} = {1 \over 2\pi i} \oint_\gamma {f(z) (z-z_0)^{-(n+1)}}\, dz. \ \ \ \ \ (1)

The magic is that (1) converts the problem of calculating a derivative into that of calculating an integral! And contour integrals of holomorphic functions can be readily approximated using the trapezoidal rule, with exponential accuracy. What follows is how this approximation is done, based on [LynessMoler1967].

On the interval {[0,1]} partitioned into {m} segments of equal length, the trapezoidal quadrature may be defined with the following sum operator:

\displaystyle  R^{[m,1]} g(t) = {1 \over m} \left[ {1 \over 2}g(0) + g\left( {1 \over m} \right) + \cdots + g\left( {m-1 \over m} \right) + {1 \over 2}g(1) \right]. \ \ \ \ \ (2)
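In MATLAB terms, (2) is a one-liner (a direct transcription, for a function handle g and a positive integer m):


R = @(g,m) ( g(0)/2 + sum(arrayfun(g,(1:m-1)/m)) + g(1)/2 )/m;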

Suppose {f(x)} is a real function of {x} whose nth derivative we wish to approximate at {x_0}. If {f(z)} satisfies the conditions required for (1) to hold at {z = x_0}, then the derivative can be calculated according to

\displaystyle  a_n = {f^{(n)}(x_0) \over n!} = {1 \over r^n} \sum_{m = 1}^{\infty} \mu(m) \left[ R^{[mn,1]} g(t) - f(x_0) \right], \ \ \ \ \ (3)

in which {g(t) = \mathop{Re} f(x_0 + r e^{2\pi i t})} and {\mu(m)} is the Möbius function, defined for all positive integers {m} as {1} if {m} is squarefree with an even number of prime factors, {-1} if {m} is squarefree with an odd number of prime factors, and {0} otherwise (for example, {\mu(6) = 1}, {\mu(30) = -1}, and {\mu(12) = 0}). With the sum going to {\infty}, (3) is, of course, an exact formula. Truncating the infinite sum to an {M}-term partial sum {S_M = \sum_{m = 1}^{M} \mu(m) \left[ R^{[mn,1]} g(t) - f(x_0) \right]} yields an approximate formula for the derivative.

A condensed version of the proof of [LynessMoler1967] for (3) is as follows: substituting the parametrization {z = x_0 + r e^{2\pi i t}} in (1) gives the Fourier coefficients

\displaystyle a_n = {2 \over r^n} \int_0^1 {f(x_0 + r e^{2\pi i t}) \cos(2\pi nt)}\, dt

which, using the Poisson summation formula, can be connected to the trapezoidal rule sum operator (2) to yield

\displaystyle  \begin{array}{rcl}  b_n &=& R^{[n,1]} f(x_0 + r e^{2\pi it}) - f(x_0) \\ &=& 2 \sum_{m = 1}^{\infty} \int_0^1 {f(x_0 + r e^{2\pi it}) \cos(2\pi mnt)}\, dt\\ &=& \sum_{m = 1}^{\infty} r^{mn}a_{mn}. \end{array}

The set of equations {b_n = \sum_{m = 1}^{\infty} r^{mn}a_{mn}} is then inverted, using the Möbius inversion formula, to yield {a_n = {1 \over r^n} \sum_{m = 1}^{\infty} \mu(m)b_{mn}} in which substituting for {a_n} from (1), with {z_0 = x_0}, and {\mathop{Re} R^{[n,1]} f(x_0 + r e^{2\pi it}) - f(x_0)} for {b_n} (since for {f(x)} real, {a_n} is real) completes the proof. Notice that only in the final step of the proof was the fact that {f(x)} is a real function used.

The parametrization {z = x_0 + r e^{2\pi i t},\ 0 \le t \le 1}, is in effect taking the contour {\gamma} to be a circle of radius {r} centered at {x_0}. The important point about this parametrization is not so much the particular shape that it gives to {\gamma}, but the symmetry of that shape with respect to the real axis. Furthermore, for a real function {f(x)}, {g(t) = \mathop{Re} f( x_0 + r e^{2\pi i t} )} is also symmetric with respect to {t = 1/2}, or equivalently {f( x_0 + r e^{2\pi i t} )} is also symmetric with respect to the real axis. This symmetry of {\gamma} and {f(x)}, together with the circular shape of the former, can be exploited to simplify (2). Specifically, since {g(0) = g(1)}, (2) reduces simply to a mean of {f} over {\gamma}, which can be approximated by taking the real part of the mean of {f} over equally spaced points along the upper (or lower) half of {\gamma}. It is emphasized that because {f(x)} is a real function, the means of {f} and {\mathop{Re} f} over {\gamma} (the full circle) are equal, since the imaginary parts of {f} from the upper and lower halves of {\gamma} cancel each other (up to roundoff).

It was mentioned in Complex Magic, Part 1 that finite difference formulas have coefficients that vary significantly in magnitude, and that this causes the absolute errors in function values to be amplified. Notice that all coefficients in (3) are of magnitude {1} and as a result they will not cause any amplification of the absolute errors in function values. Consequently the partial sum {S_M} is expected to have an absolute error of the same order as that of the largest function value contributing to it, which is {G_M = \max\{|f(x_0)|,|g(t_j)|\},\ t_j = j/(mn),\ 1 \le j \le mn,\ 1 \le m \le M}, and since dividing {S_M} by {r^n} retains the relative accuracy, {\delta_M = \epsilon G_M/S_M} can serve as a measure of the relative error incurred in calculating the derivative due to roundoff. ({\epsilon} is the maximum relative roundoff error in function evaluations.) If {\delta_M} is not acceptable, we can increase the radius, {r}, of the contour {\gamma}, to have the function evaluations {g(t) = \mathop{Re} f( x_0 + r e^{2\pi i t} )} done at points prominently distinguishable from {x_0} (in the floating point sense), and repeat the approximation.

Here is a simple implementation of the Möbius function, using table look-up for the first 77 values and factorization for the rest:


function mu = mobius(n)

persistent muList
if isempty(muList)

    muList  = [1,-1,-1,0,-1,1,-1,0,0,1,-1,0,-1,1,1,0,-1,0,-1,0,...
               1,1,-1,0,0,1,0,0,-1,-1,-1,0,1,1,1,0,-1,1,1,0,-1,...
              -1,-1,0,0,1,-1,0,0,0,1,0,-1,0,1,0,1,1,-1,0,-1,1,0,...
               0,1,-1,-1,0,1,-1,-1,0,-1,1,0,0,1];
end

if n <= length(muList), mu = muList(n); return, end

fac     = factor(n);
len_fac = length(fac);
if length(unique(fac)) == len_fac
    mu = (-1)^len_fac;
else
    mu = 0;
end
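A quick spot check against the definition (the first ten values of {\mu}):


arrayfun(@mobius,1:10)    % returns 1 -1 -1 0 -1 1 -1 0 0 1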

And here is a function diffc that implements (3):


function [S_M,delta_M] = diffc(f,varargin)

% Default expansion point, derivative order, contour radius, and number of
% terms in the partial sum.
[x0,n,r,M] = deal(0,1,.1,3);

optArgs    = {x0,n,r,M};
emptyArgs  = cellfun(@isempty,varargin);
[optArgs{~emptyArgs}] = varargin{~emptyArgs};
[x0,n,r,M] = optArgs{:};

f_x0      = f(x0);
[G_M,S_M] = deal(0);
for m = 1:M
    t      = linspace(1/(m*n),1,m*n);    % m*n equispaced points on the circle
    g      = real( f( x0 + r * exp(1i * 2*pi * t) ) );

    G_M = max( abs( [G_M f_x0 g] ) );           % largest function value so far
    S_M = S_M + mobius(m) * ( mean(g) - f_x0 ); % Moebius-weighted partial sum
end

delta_M = eps(G_M) * G_M/S_M;        % estimate of the relative roundoff error
S_M     = factorial(n)/(r^n) * S_M;  % scale to obtain the derivative

The following code snippet puts diffc to work, approximating the 10th derivative of the function {e^x / (\sin^{3}x + \cos^{3}x)} at {x_0 = 0}, against an exact calculation using symbolic variables. The exact value of this derivative is 13829824, as seen from the symbolic calculation.


syms x real
f_sym = exp(x)/(sin(x)^3 + cos(x)^3)
f     = matlabFunction(f_sym)

x_0 = 0; n = 10; r = .5; M = 7;
tic, df10d10x_sym         = subs(diff(f_sym,n),x_0), toc
tic, [df10d10x_M,delta_M] = diffc(f,x_0,n,r,M),      toc

On a contour of radius {r = 0.5}, using a partial sum {S_M} of only {M = 7} terms, diffc achieves 13829824.00000018, and the time it takes is more than two orders of magnitude less than that of the symbolic calculation (0.001s compared to 0.6s).

Note

There exists a bug in MATLAB R2009a (5.2) and R2008b (5.1) due to which the “MuPAD kernel can take several minutes or longer to initialize on Windows. Any command from the Symbolic Math Toolbox that requires starting MuPAD might seem to hang.” This bug is fixed in R2009b, and a workaround is available for the two releases before that: http://www.mathworks.com/support/bugreports/523801. (When I was first writing this post, without the “patch” applied, I noticed that the symbolic calculation was taking an unreasonably long time, somewhere around 15s. As satisfied as I was with my uber-fast “complex” implementation, I knew that something was wrong with the Symbolic Math Toolbox, because even creating a symbolic variable, say with syms x real, would take over 10s.)


Complex Magic, Part 1: Differentiation via Function Evaluation at a Single Point

At the heart of conventional methods of derivative approximation based on divided differences lies a sum of the form {\sum a_i f(x_i)} in which the coefficients {a_i} have different signs and vary (usually considerably) in magnitude. As a result, when these coefficients of different magnitude multiply function values, the absolute errors in the latter are amplified and the error in the outcome becomes quantitatively unpredictable [LynessMoler1967].

As a special case, consider the second-order central difference formula, one of the standard derivative approximation methods for a real valued function,

\displaystyle  f'(x_0) \approx \frac{f(x_0+h) - f(x_0-h)}{2h}, \ \ \ \ \ (1)

for which, as the name should suggest, the truncation error is {O(h^2)}. In theory, the truncation error could be made arbitrarily small by making the step size small enough. In practice, on a finite-precision machine with floating point arithmetic, this cannot be done due to the existence of another type of error, the rounding error. In the particular case of (1), for instance, too small a step size would lead to a catastrophic subtractive cancellation in the numerator, whose culprit is the rounding done when evaluating the function. Clearly, a compromise must be made between minimizing truncation and rounding errors (balancing the {O(h^2)} truncation error against the {O(\epsilon/h)} rounding error suggests an optimal step size of order {\epsilon^{1/3}}). In the case of (1), analyzing the monotonic behavior reveals that when rounding becomes prominent, the approximate value given by (1) oscillates around the true value [SteplemanWinarsky1979]. Although this behavior can be used to select an optimal {h}, only marginal improvements can be made [SquireTrapp1998].

Using complex variables it is possible to derive derivative approximation formulas that do not involve subtractions and hence are free from catastrophic cancellations. Let us derive a complex-step numerical approximation formula for the first derivative with the same order of truncation error as (1). If {f(z)} is a holomorphic function of a complex variable {z} that is real valued on the real axis, then its Taylor expansion, with complex step {ih}, about the real point {x_0} is

\displaystyle  f(x_0 + ih) = f(x_0) + ihf'(x_0) + (ih)^2 f''(x_0)/2! + (ih)^3 f'''(x_0)/3! + \cdots. \ \ \ \ \ (2)

Taking imaginary parts of both sides of (2) yields

\displaystyle  f'(x_0) = \mathop{Im}[f(x_0 + ih)]/h + h^2 f'''(x_0)/3! - \cdots. \ \ \ \ \ (3)

The first term on the RHS of (3) is an {O(h^2)} estimate to the LHS which is the first derivative. As promised, {\mathop{Im}[f(x_0 + ih)]/h} does not involve subtractions and, consequently, it will not suffer from the associated cancellation due to rounding.

This approach can be easily implemented to compute the Jacobian matrix. (The implementation below is based on Yi Cao’s.)


function [J,z] = jacobianComplexStep(fun,x,varargin)

z = fun(x);
n = numel(x);
m = numel(z);
J = zeros(m,n);

h = n*eps;    % default step size

optArgs = {h};
emptyArgs = cellfun(@isempty,varargin);
[optArgs{~emptyArgs}] = varargin{~emptyArgs};
[h] = optArgs{:};

for k = 1:n
    x1 = x;
    x1(k)  = x1(k) + h*1i;       % perturb the kth coordinate in the imaginary direction
    J(:,k) = imag(fun(x1))/h;    % complex-step derivative: no subtraction, no cancellation
end

Here is an equivalent implementation of the jacobian based on the second order central difference formula:


function [J,z] = jacobianCentralDiff(fun,x,varargin)

z = fun(x);
n = numel(x);
m = numel(z);
J = zeros(m,n);

h = n*eps;

optArgs = {h};
emptyArgs = cellfun(@isempty,varargin);
[optArgs{~emptyArgs}] = varargin{~emptyArgs};
[h] = optArgs{:};
h_2 = 2*h;

for k = 1:n
    [x_left,x_right] = deal(x);
    x_right(k)  = x(k) + h;
    x_left( k)  = x(k) - h;
    J(:,k)      = (fun(x_right) - fun(x_left))/h_2;
end

And here is a sample invocation of the functions jacobianComplexStep and jacobianCentralDiff:


f = @(x) x^(9/2);
x = 1.5;

le = -20;
ri = -2;
m  = ri - le + 1;
h  = logspace(ri,le,m).';

[dfdxComplexStep,dfdxCentralDiff] = deal(zeros(m,1));
for iRow=1:m
    dfdxComplexStep(iRow) = jacobianComplexStep(f,x,h(iRow));
    dfdxCentralDiff(iRow) = jacobianCentralDiff(f,x,h(iRow));
end

fprintf('%.16e %.16e %.16e\n',[h dfdxComplexStep dfdxCentralDiff]')

The tables below compare the two methods by listing their approximations to the first derivative of the function {f(x) = x^{9/2}} at {x_0 = 1.5} for different step sizes, using MATLAB R2009a on a machine running Windows XP with an Intel Core 2 Duo E8400 CPU.

{h} {f'(1.5)}, Complex Step
{1.0000000000000000\times 10^{-2}} {1.8599607128036336\times 10^{1}}
{1.0000000000000000\times 10^{-3}} {1.8600800678177631\times 10^{1}}
{1.0000000000000000\times 10^{-4}} {1.8600812613698931\times 10^{1}}
{9.9999999999999991\times 10^{-6}} {1.8600812733054148\times 10^{1}}
{9.9999999999999995\times 10^{-7}} {1.8600812734247697\times 10^{1}}
{9.9999999999999995\times 10^{-8}} {1.8600812734259637\times 10^{1}}
{1.0000000000000000\times 10^{-8}} {1.8600812734259762\times 10^{1}}
{1.0000000000000001\times 10^{-9}} {1.8600812734259762\times 10^{1}}
{1.0000000000000000\times 10^{-10}} {1.8600812734259762\times 10^{1}}
{1.0000000000000001\times 10^{-11}} {1.8600812734259758\times 10^{1}}
{9.9999999999999998\times 10^{-13}} {1.8600812734259762\times 10^{1}}
{1.0000000000000000\times 10^{-13}} {1.8600812734259765\times 10^{1}}
{1.0000000000000000\times 10^{-14}} {1.8600812734259762\times 10^{1}}
{1.0000000000000001\times 10^{-15}} {1.8600812734259762\times 10^{1}}
{9.9999999999999998\times 10^{-17}} {1.8600812734259762\times 10^{1}}
{9.9999999999999992\times 10^{-18}} {1.8600812734259758\times 10^{1}}
{1.0000000000000001\times 10^{-18}} {1.8600812734259762\times 10^{1}}
{9.9999999999999998\times 10^{-20}} {1.8600812734259762\times 10^{1}}
{1.0000000000000001\times 10^{-20}} {1.8600812734259762\times 10^{1}}

{h} {f'(1.5)}, Central Difference
{1.0000000000000000\times 10^{-2}} {1.8602018344501879\times 10^{1}}
{1.0000000000000000\times 10^{-3}} {1.8600824790340198\times 10^{1}}
{1.0000000000000000\times 10^{-4}} {1.8600812854816517\times 10^{1}}
{9.9999999999999991\times 10^{-6}} {1.8600812735591887\times 10^{1}}
{9.9999999999999995\times 10^{-7}} {1.8600812732749716\times 10^{1}}
{9.9999999999999995\times 10^{-8}} {1.8600812743407857\times 10^{1}}
{1.0000000000000000\times 10^{-8}} {1.8600812623503771\times 10^{1}}
{1.0000000000000001\times 10^{-9}} {1.8600814222224926\times 10^{1}}
{1.0000000000000000\times 10^{-10}} {1.8600814222224926\times 10^{1}}
{1.0000000000000001\times 10^{-11}} {1.8600854190253809\times 10^{1}}
{9.9999999999999998\times 10^{-13}} {1.8602897000619123\times 10^{1}}
{1.0000000000000000\times 10^{-13}} {1.8589574324323621\times 10^{1}}
{1.0000000000000000\times 10^{-14}} {1.8562928971732617\times 10^{1}}
{1.0000000000000001\times 10^{-15}} {2.0428103653102880\times 10^{1}}
{9.9999999999999998\times 10^{-17}} {0.0000000000000000\times 10^{0}}
{9.9999999999999992\times 10^{-18}} {0.0000000000000000\times 10^{0}}
{1.0000000000000001\times 10^{-18}} {0.0000000000000000\times 10^{0}}
{9.9999999999999998\times 10^{-20}} {0.0000000000000000\times 10^{0}}
{1.0000000000000001\times 10^{-20}} {0.0000000000000000\times 10^{0}}

The first table demonstrates that the complex step approximation improves with decreasing {h}. On the other hand, the second table shows that the central difference approximation deteriorates after {h} becomes less than, roughly, {10^{-12}} and finally succumbs to the catastrophic subtractive cancellation.

There is more to be said about using complex variables in numerical differentiation, so stay tuned….


Just Nest It

The usefulness of nested functions in MATLAB has been the subject of heated debate (check out the comments in Loren Shure’s post Nested Functions and Variable Scope for instance) mainly due to the complexity of data flow.

This is not an attempt to completely disprove the aforementioned complexity arguments. The relative complexity is there, but one might argue that it is inevitable due to data sharing. Probably an editor affordance (Tim Davis’s “definitions”: (1) the ability of one to pay the costs for attending a prom, (2) what you do if you turn the wheels too sharply in your Mustang while driving on ice; Joel Spolsky’s definition in the context of a GUI) could make the situation more tenable and the coding process less error prone. But that is another story. This is, however, an attempt to showcase the beauty and versatility that nested functions can bring to a piece of code, provided the author is mindful of when and how to take advantage of the data sharing features they provide, without compromising maintainability.

Before embarking on this somewhat tortuous journey, a refresher is in order. What follows is a list of points to bear in mind when dealing with nested functions, excerpted from either the MATLAB documentation on Nested Functions or the discussions in the comments of the related posts by Loren Shure (both with modifications).

  • A nested function can be called from
    – the level immediately above it.
    – a function nested at the same level within the same parent function.
    – a function at any lower level.
  • Nested functions are not accessible to the str2func or feval function. You cannot call a nested function using a handle that has been constructed with str2func. And, you cannot call a nested function by evaluating the function name with feval. To call a nested function, you must either call it directly by name, or construct a function handle for it using the @ operator.
  • As a rule, a variable used or defined within a nested function resides in the workspace of the outermost function that both contains the nested function and accesses that variable. The scope of this variable is then the function to which this workspace belongs, and all functions nested to any level within that function.
    The special case of varargin and varargout variables is particularly interesting: if a nested function includes varargin or varargout in its function declaration line, then the use of varargin or varargout within that function returns optional arguments passed to or from that function. If varargin or varargout are not in the nested function declaration but are in the declaration of an outer function, then the use of varargin or varargout within the nested function returns optional arguments passed to the outer function.
  • Variables containing values returned by a nested function are not in the scope of outer functions.
  • Externally scoped variables that are used in nested functions for which a function handle exists are stored within the function handle. So, function handles not only contain information about accessing a function; for nested functions, they also store the values of any externally scoped variables required to execute the function. (A minimal sketch of this point follows the list.)
    It is interesting to note that whos fh_nested, where fh_nested is a function handle to a nested function, will only show the size of the function handle itself not everything that it encapsulates. This can be verified by examining the structure struct_fh_nested = functions(fh_nested).
  • Variables cannot be “poofed” into the workspace of nested functions. [The way Loren Shure, so eloquently, put it.] The scoping rules for nested, and in some cases anonymous, functions require that all variables used within the function be present in the text of the M-file code. MATLAB issues an error if you attempt to dynamically add a variable to the workspace of an anonymous function, a nested function, or a function that contains a nested function. An important operation that causes variables to be dynamically added to the workspace is loading variables from a MAT file using load without an output, which can be avoided by using the form of load that returns a MATLAB structure.
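Here is the sketch promised in the function-handle point above (a minimal, hypothetical example): the handle returned by makeCounter captures the externally scoped variable count, so each handle carries its own independent state.


function fh = makeCounter
%MAKECOUNTER  Return a handle whose state (count) lives in this workspace.
count = 0;
fh = @increment;
    function c = increment
        count = count + 1;    % externally scoped: resides in makeCounter's workspace
        c = count;
    end
end

Invoking c1 = makeCounter; c1(), c1() returns 1 and then 2, while a fresh handle c2 = makeCounter starts counting from 1 again.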

The examples that follow illustrate what might be dubbed memory effects, or, as T. Driscoll calls them in Learning MATLAB, “lasting side effects.” (Side effects refer to the notion of functions in imperative programming, i.e., to the fact that the same language expression can result in different values depending on the state of the executing function. This implies referential opaqueness, or equivalently a lack of referential transparency, which in turn makes memoization impossible.)

The first example features two functions that perform the same task of returning segments of an Excel sheet, advancing daily. An initial array is read during the first run of either function and is used as a cache for past data up to the current day. In a subsequent run, another array is read and used as a cache for future data, from which the current day’s data row is picked each time and appended to the past data cache, dropping the most obsolete (i.e., the first) row of the past data cache every time.
One of these functions is implemented using an ordinary M-file function (which could also be made a subfunction of the invoking function) and the other using a nested function. The differences between the two implementations are highlighted.

Implementation based on Ordinary functions/Subfunctions


function [curDate,E] = readInputFileDaily_sub(inpFileName,inpSheet,...
                                              colRange,endDate,varargin)

persistent isFirstRun rowSheet rowE prevEs newEs newDates

optArgs   = {2,252,500};
emptyArgs = cellfun(@isempty,varargin);
[optArgs{~emptyArgs}] = varargin{~emptyArgs};

[sheetStartRow,rowInitEst,nDays2Read] = optArgs{:};

nCols = colRange{2} - colRange{1} + 1;

if isempty(isFirstRun)
    isFirstRun = false;
    rowSheet   = rowInitEst;
    inpRange   = [colRange{1} num2str(sheetStartRow) ':' ...
				  colRange{2} num2str(rowInitEst)];
    
    [prevEs,initDate] = xlsread(inpFileName,inpSheet,inpRange);
    
    notNan 	  = all(~isnan(prevEs),2);
    prevEs    = prevEs(notNan,:);
    initDate  = initDate(notNan,1);
    
    curDate   = initDate(end);
    E         = prevEs;
    return
end

while true
    rowSheet = rowSheet + 1;
    
    if isempty(newEs) || (rowE == size(newEs,1))
        rowE     = 1;
        inpRange = [colRange{1} num2str(rowSheet) ':' ...
                    colRange{2} num2str(rowSheet + nDays2Read - 1)];
        
        [newEs,newDates] = xlsread(inpFileName,inpSheet,inpRange);
    else
        rowE = rowE + 1;
    end
    
    curDate  = newDates(rowE);
    
    if strcmp(curDate,endDate) || isempty(curDate)
        [curDate,E] = deal({},[]);
        return
    end
    
    curRowE = newEs(rowE,:);
    if all(~isnan(curRowE)) && (length(curRowE) == nCols - 1)
        E = [prevEs(2:end,:);
            curRowE];
        break
    end
end

prevEs = E;

Implementation based on Nested functions


function fh_readInputFileCore = readInputFileDaily_nested(...
                                inpFileName,inpSheet,colRange,endDate,varargin)

optArgs   = {2,252,500};
emptyArgs = cellfun(@isempty,varargin);
[optArgs{~emptyArgs}] = varargin{~emptyArgs};

[sheetStartRow,rowInitEst,nDays2Read] = optArgs{:};

nCols = colRange{2} - colRange{1} + 1;

fh_readInputFileCore = @readInputFileCore;

[rowSheet,rowE]        = deal(0);
[prevEs,newEs]         = deal([]);
newDates               = {};

    function [curDate,E] = readInputFileCore()

        if isempty(prevEs)
            rowSheet = rowInitEst;
            inpRange = [colRange{1} num2str(sheetStartRow) ':' ...
                        colRange{2} num2str(rowInitEst)];

            [prevEs,initDate] = xlsread(inpFileName,inpSheet,inpRange);

            notNan    = all(~isnan(prevEs),2);
            prevEs    = prevEs(notNan,:);
            initDate  = initDate(notNan,1);

            curDate = initDate(end);
            E       = prevEs;
            return
        end

        while true
            rowSheet = rowSheet + 1;

            if isempty(newEs) || (rowE == size(newEs,1))
                rowE     = 1;
                inpRange = [colRange{1} num2str(rowSheet) ':' ...
                            colRange{2} num2str(rowSheet + nDays2Read - 1)];

                [newEs,newDates] = xlsread(inpFileName,inpSheet,inpRange);
            else
                rowE = rowE + 1;
            end

            curDate = newDates(rowE);

            if strcmp(curDate,endDate) || isempty(curDate)
                [curDate,E] = deal({},[]);
                return
            end

            curRowE = newEs(rowE,:);
            if all(~isnan(curRowE)) && (length(curRowE) == nCols - 1)
                E = [prevEs(2:end,:);
                    curRowE];
                break
            end
        end

        prevEs = E;
    end
end

Here is a snapshot showing the differences of readInputFileDaily_sub and readInputFileDaily_nested side by side.

Differences of readInputFileDaily_sub and readInputFileDaily_nested

Note that except for isFirstRun, which is not used in readInputFileDaily_nested, the rest of the persistent variables in readInputFileDaily_sub are externally scoped variables with respect to readInputFileCore and will be saved in the function handle returned by readInputFileDaily_nested.

Invoking readInputFileDaily_sub


function maxlikeTransformedData_sub

nTradeDays    = 252;

inpFilePath   = ['.' filesep];
inpFileName   = 'LEHMQ_AIG';
inpFileName   = fullfile(inpFilePath,inpFileName);

colRange      = {'A','D'};
sheetStartRow = 2;
initEstRow    = sheetStartRow + nTradeDays;
endDate       = '9/15/2008';

firms         = {'AIG','LEHMQ'};

inpSheets     = cellfun(@(firmName) ['Input' firmName],firms,...
                                     'UniformOutput',false);

nFirms    = length(firms);
idxFirms  = 1:nFirms;
for iFirm = idxFirms
    clear readInputFileDaily_sub

    while true
        [curDate,E] = readInputFileDaily_sub(inpFileName,inpSheets{iFirm}, ...
                                             colRange,endDate,[],initEstRow);
        if isempty(curDate)
            break
        end
        disp(firms{iFirm}), disp(curDate), disp(E)
    end
end

Invoking readInputFileDaily_nested


function maxlikeTransformedData_nested

nTradeDays    = 252;

inpFilePath   = ['.' filesep];
inpFileName   = 'LEHMQ_AIG';
inpFileName   = fullfile(inpFilePath,inpFileName);

colRange      = {'A','D'};
sheetStartRow = 2;
initEstRow    = sheetStartRow + nTradeDays;
endDate       = '9/15/2008';

firms         = {'AIG','LEHMQ'};

inpSheets     = cellfun(@(firmName) ['Input' firmName],firms,...
                                     'UniformOutput',false);

nFirms    = length(firms);
idxFirms  = 1:nFirms;
for iFirm = idxFirms
    readNextSegment.(firms{iFirm}) = ...
        readInputFileDaily_nested(inpFileName,inpSheets{iFirm},...
                                  colRange,endDate,[],initEstRow);
end

while true
    for iFirm = idxFirms
        [curDate,E] = readNextSegment.(firms{iFirm})();
        if isempty(curDate)
            idxFirms(idxFirms == iFirm) = [];
            continue
        end
        disp(firms{iFirm}), disp(curDate), disp(E)
    end
    if isempty(idxFirms), break, end
end

Here is a snapshot showing the differences of maxlikeTransformedData_sub and maxlikeTransformedData_nested side by side.

Differences of maxlikeTransformedData_sub and maxlikeTransformedData_nested

The beauty of the code that uses nested functions should be apparent by now. For each firm (which corresponds to a sheet in the Excel file) in the cell array firms, the function readInputFileDaily_nested is invoked once to create a unique function handle to readInputFileCore which, using dynamic field names, is stored in the structure readNextSegment.(firms{iFirm}). (This use of nested functions together with function handles can be thought of as creating lightweight objects in an OOP context.) The actual reading of data for each firm is done by invoking the function readInputFileCore through the function handle readNextSegment.(firms{iFirm}). As the reading state (i.e., the externally scoped variables rowSheet, rowE, prevEs, newEs, newDates) for each firm (i.e., sheet) is stored in its corresponding function handle, it is possible to read a segment of all sheets simultaneously, perform the computations that depend on all these data segments being present (such as calculating the joint probability of an event for the firms, after estimating the parameters required for each individual firm), and proceed to another segment.
Clearly the implementation that uses ordinary functions (or subfunctions, for that matter) will not allow this simultaneous processing, as at each point in time there can only be one set of persistent variables for the function readInputFileDaily_sub, and processing different sheets simultaneously requires independent sets of these variables at the same time. This requirement is also the reason the function readInputFileDaily_sub has to be cleared from memory before each sheet is processed; this makes sure we are not using the values left in the persistent variables from previous invocations of this function.

As another beautiful use of the memory effects provided by nested functions, let’s look at an example excerpted, with modifications, from “Learning MATLAB.”
Suppose our “objective” is to find the smallest x such that the maximum real part of the eigenvalues of a certain matrix A(x) equals 1. This is a straightforward task using fzero, but now suppose that not only the value of x is sought, but we would also like to know the eigenvalues of the matrix A(x) at the “optimal” x, without repeating the eigenvalue computation; this is a perfectly reasonable “constraint,” as we have already done that computation and should not need to redo it.
Coding the objective as a nested function makes short work of fulfilling this task.


function x0 = findx
B  = diag(ones(49,1),1); 
A  = B - B';
x0 = fzero(@objective,[0 10]);
plot(e,'*')

    function r = objective(x)
        A(1,1) = x;
        e = eig(A); r = max(real(e)) - 1;
    end
end

Again it is noted that the appearance of e (the highlighted line) in the definition of the parent function findx is critical for having its value shared between the parent and nested functions.

See also

John D’Errico’s loopchoose which is a “looped version of nchoosek. nchoosek can generate all combinations of a set of numbers. But sometimes that set can grow too large to store.” The solution is to generate each member of the set of all combinations in turn. loopchoose does exactly that.

Attachments

LEHMQ_AIG.xls
(The extension of this file has been changed to doc to allow uploading to WordPress.com. To be used without changes to the code in this post, this file should be renamed to LEHMQ_AIG.xls after downloading.)


Neumann Boundary Conditions, Decoded

The following function (from L.N. Trefethen, Spectral Methods in MATLAB, with slight modifications) solves the second-order wave equation in two dimensions, u_{tt} = u_{xx}+u_{yy}, using spectral methods: Fourier in the x direction and Chebyshev in the y direction. On its rectangular domain, the equation is subject to Neumann boundary conditions along the sides, u_{y}(x,\pm1,t) = 0, and periodic boundary conditions at the ends, u(+3,y,t) = u(-3,y,t).


function wavetank
%WAVETANK 2D "wave tank" with Neumann BCs for |y| = 1

% x variable in [-A,A], Fourier:
A = 3; Nx = 50; dx = 2*A/Nx; x = -A + dx*(1:Nx)';
D2x = (pi/A)^2*toeplitz([-1/(3*(dx/A)^2) - 1/6 ...
    .5*(-1).^(2:Nx)./sin((pi*dx/A)*(1:Nx-1)/2).^2]);

% y variable in [-1,1], Chebyshev:
Ny = 15; [Dy,y] = cheb(Ny); D2y = Dy^2;
BC = -Dy([1 Ny+1],[1 Ny+1])\Dy([1 Ny+1],2:Ny);

% Grid and initial data:
[xx,yy] = meshgrid(x,y);
vv = exp(-8*((xx + 1.5).^2 + yy.^2));
dt = 5/(Nx + Ny^2);
vvold = exp(-8*((xx + dt + 1.5).^2 + yy.^2));

% Time-stepping by leap frog formula:
clf, plotgap = round(2/dt); dt = 2/plotgap;
for n = 0:2*plotgap
    t = n*dt;
    if rem(n + .5,plotgap)<1
        subplot(3,1,n/plotgap + 1), mesh(xx,yy,vv), view(-10,60)
        axis([-A A -1 1 -0.15 1]), colormap(1e-6*[1 1 1]);
        text(-2.5,1,.5,['t = ' num2str(t)],'fontsize',18),
        set(gca,'ztick',[]), grid off, drawnow
    end
    vvnew = 2*vv - vvold + dt^2*(vv*D2x + D2y*vv);
    vvold = vv; vv = vvnew;
    vv([1 Ny+1],:) = BC*vv(2:Ny,:);       % Neumann BCs for |y| = 1
end

function [D,x] = cheb(N)
%CHEB  compute D = differentiation matrix, x = Chebyshev grid

if N == 0, D = 0; x = 1; return, end
x = cos(pi*(0:N)/N)';
c = [2; ones(N-1,1); 2].*(-1).^(0:N)';
X = repmat(x,1,N+1);
dX = X - X';
D  = (c*(1./c)')./(dX + (eye(N+1)));      % off-diagonal entries
D  = D - diag(sum(D,2));                  % diagonal entries

The way Neumann boundary conditions are implemented here (highlighted lines) is elegant and compressed. With hints from Prof. Trefethen in a private communication, here is a “decompression” of the implementation.

The goal of the statement


vv([1 Ny+1],:) = BC*vv(2:Ny,:)

is to set the data on the top and bottom of the tank to the right values, i.e., values that will make the normal derivative equal to zero. The aim is to end up with values on the top and bottom such that when Dy is applied to do the differentiation, i.e., we compute


Dy * vv(:,:)

the result is zero in rows 1 and Ny+1. In other words, we want


Dy([1 Ny+1],:) * vv(:,:)

to be a matrix consisting of two rows of zeros. Another way to say that is that


Dy([1 Ny+1],[1 Ny+1]) * vv([1 Ny+1],:)

should be the negative of


Dy([1 Ny+1],2:Ny) * vv(2:Ny,:)

In other words,


vv([1 Ny+1],:) = -Dy([1 Ny+1],[1 Ny+1]) \  ( Dy([1 Ny+1],2:Ny) * vv(2:Ny,:) )

And there in effect we have the matrix BC, i.e.,


BC = -Dy([1 Ny+1],[1 Ny+1])\Dy([1 Ny+1],2:Ny)
vv([1 Ny+1],:) = BC*vv(2:Ny,:)

The key to “decoding” these lines of code is looking at the matrix-matrix multiplication as a linear combination of the rows of the second matrix, with the weights coming from the rows of the first, much the same way the application of elimination matrices is interpreted in Gaussian elimination; this is one of the various interpretations of matrix-matrix and matrix-vector multiplication.
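As a quick numerical check of the decoded formula, using the cheb function above and arbitrary interior data (the 7-column width is arbitrary):


Ny = 15; [Dy,y] = cheb(Ny);
BC = -Dy([1 Ny+1],[1 Ny+1])\Dy([1 Ny+1],2:Ny);
vv = rand(Ny+1,7);                  % arbitrary data on the interior rows
vv([1 Ny+1],:) = BC*vv(2:Ny,:);     % set the boundary rows
norm( Dy([1 Ny+1],:)*vv )           % ~1e-13: the normal derivative vanishes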

See also

Lecture 1 of Numerical Linear Algebra (Trefethen and Bau) discusses some of various interpretations of matrix-matrix and matrix-vector multiplications.
