Complex Magic, Part 2: Differentiation via Integration

In the previous post, Complex Magic, Part 1, the use of complex variables in approximating the first derivative of a holomorphic function was discussed. This post delves deeper and introduces formulas suitable for approximating the {n}th derivative of a holomorphic function.

As elaborate as they might seem, the ideas to be discussed are based on two familiar topics: Cauchy’s integral formula and quadrature.

Cauchy’s integral formula states that if {f} is holomorphic on a neighborhood containing a closed disk whose boundary is {\gamma} (a closed rectifiable curve with winding number one about {z_0}), then for every point {z_0} in the interior of that disk

\displaystyle  a_n = {f^{(n)}(z_0) \over n!} = {1 \over 2\pi i} \oint_\gamma {f(z) (z-z_0)^{-(n+1)}}\, dz. \ \ \ \ \ (1)

The magic is that (1) converts the problem of calculating a derivative into that of calculating an integral! And contour integrals of holomorphic functions can be readily approximated using the trapezoidal rule, with exponential accuracy. What follows is how this approximation is done, based on [LynessMoler1967].
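The trapezoidal rule on (1) can be seen in action before the general formulas are derived. Below is a minimal Python sketch (function name and parameters are mine, purely for illustration): parametrizing {\gamma} as a circle of radius {r} about {z_0} turns the contour integral into a plain average over equispaced points, and for an entire function like {e^z} a modest number of points already gives near machine precision.

```python
import numpy as np

def contour_derivative(f, z0, n=1, r=0.5, m=32):
    """Approximate a_n = f^(n)(z0)/n! via (1), using the trapezoidal
    rule on the circle z = z0 + r*exp(2*pi*i*t)."""
    t = np.arange(m) / m                       # equispaced nodes on [0, 1)
    z = z0 + r * np.exp(2j * np.pi * t)
    # with dz = 2*pi*i*(z - z0) dt, the integrand of (1) becomes
    # f(z) / (z - z0)^n, and the trapezoidal rule is a plain mean
    return np.mean(f(z) / (z - z0) ** n)

# for f = exp, a_1 = 1; the aliasing error of the m-point rule is O(r^m)
approx = contour_derivative(np.exp, 0.0, n=1).real
```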

On the interval {[0,1]} partitioned into {m} segments of equal length, the trapezoidal quadrature may be defined with the following sum operator:

\displaystyle  R^{[m,1]} g(t) = {1 \over m} \left[ {1 \over 2}g(0) + g\left( {1 \over m} \right) + \cdots + g\left( {m-1 \over m} \right) + {1 \over 2}g(1) \right]. \ \ \ \ \ (2)
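As a concrete transcription of (2), here is a short Python helper (the name is mine) together with a sanity check against a known integral:

```python
import numpy as np

def trapezoid_sum(g, m):
    """The operator R^[m,1] of (2): the trapezoidal rule for g on [0, 1]
    partitioned into m equal segments."""
    t = np.arange(m + 1) / m                   # nodes 0, 1/m, ..., 1
    w = np.ones(m + 1)
    w[0] = w[-1] = 0.5                         # half weights at the endpoints
    return np.dot(w, g(t)) / m

# sanity check: int_0^1 t^2 dt = 1/3, with O(1/m^2) error
approx = trapezoid_sum(lambda t: t ** 2, 100)
```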

Suppose {f(x)} is a real function of {x} whose nth derivative we wish to approximate at {x_0}. If {f(z)} satisfies the conditions required for (1) to hold at {z = x_0}, then the derivative can be calculated according to

\displaystyle  a_n = {f^{(n)}(x_0) \over n!} = {1 \over r^n} \sum_{m = 1}^{\infty} \mu(m) \left[ R^{[mn,1]} g(t) - f(x_0) \right], \ \ \ \ \ (3)

in which {g(t) = \mathop{Re} f(x_0 + r e^{2\pi i t})} and {\mu(m)} is the Möbius function, defined for all positive integers {m} as {1} if {m} is squarefree with an even number of prime factors, {-1} if {m} is squarefree with an odd number of prime factors, and {0} otherwise. With the sum going to {\infty}, (3) is, of course, an exact formula. Truncating the infinite sum to an {M}-term partial sum {S_M = \sum_{m = 1}^{M} \mu(m) \left[ R^{[mn,1]} g(t) - f(x_0) \right]} yields an approximate formula for the derivative.
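A Python sketch of the truncated sum may make (3) concrete (names are mine, and this is a simplified illustration rather than the diffc implementation given later; since {g(0) = g(1)}, each trapezoidal sum {R^{[mn,1]}} collapses to a plain mean over {mn} equispaced points):

```python
import numpy as np

def mobius(m):
    """Mobius function via trial-division factorization."""
    mu, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0                       # repeated prime factor
            mu = -mu                           # one more distinct prime
        p += 1
    return -mu if m > 1 else mu

def scaled_derivative(f, x0, n, r=0.1, M=3):
    """M-term truncation of (3); approximates a_n = f^(n)(x0)/n!."""
    S_M = 0.0
    for m in range(1, M + 1):
        t = np.arange(1, m * n + 1) / (m * n)  # nodes of R^[mn,1]; g(0) = g(1)
        g = np.real(f(x0 + r * np.exp(2j * np.pi * t)))
        S_M += mobius(m) * (np.mean(g) - f(x0))
    return S_M / r ** n

# for f = exp at x0 = 0, a_2 should be close to 1/2! = 0.5
a2 = scaled_derivative(np.exp, 0.0, 2)
```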

A condensed version of the proof of [LynessMoler1967] for (3) is as follows: substituting the parametrization {z = x_0 + r e^{2\pi i t}} in (1) gives the Fourier coefficients

\displaystyle a_n = {2 \over r^n} \int_0^1 {f(x_0 + r e^{2\pi i t}) \cos(2\pi nt)}\, dt

which, using the Poisson summation formula, can be connected to the trapezoidal rule sum operator (2) to yield

\displaystyle  \begin{array}{rcl}  b_n &=& R^{[n,1]} f(x_0 + r e^{2\pi it}) - f(x_0) \\ &=& 2 \sum_{m = 1}^{\infty} \int_0^1 {f(x_0 + r e^{2\pi it}) \cos(2\pi mnt)}\, dt\\ &=& \sum_{m = 1}^{\infty} r^{mn}a_{mn}. \end{array}

The set of equations {b_n = \sum_{m = 1}^{\infty} r^{mn}a_{mn}} is then inverted, using the Möbius inversion formula, to yield {a_n = {1 \over r^n} \sum_{m = 1}^{\infty} \mu(m)b_{mn}}. Substituting for {a_n} from (1), with {z_0 = x_0}, and {\mathop{Re} R^{[n,1]} f(x_0 + r e^{2\pi it}) - f(x_0)} for {b_n} (valid because {a_n} is real when {f(x)} is real) completes the proof. Notice that the fact that {f(x)} is a real function was used only in this final step.
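The inversion step can be checked numerically on a toy sequence. If {c_k = q^k} with {|q| < 1}, then {b_n = \sum_{m=1}^{\infty} c_{mn} = q^n/(1-q^n)} in closed form, and the Möbius-weighted sum should collapse back to {c_n = q^n}. A quick Python check (the sequence, {q}, and the cutoff are arbitrary choices of mine):

```python
def mobius(m):
    """Mobius function via trial-division factorization."""
    mu, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0                       # repeated prime factor
            mu = -mu
        p += 1
    return -mu if m > 1 else mu

q = 0.3
b = lambda n: q ** n / (1 - q ** n)            # b_n = sum_{m>=1} q^(mn)
# sum_m mobius(m) * b_{mn} should recover q^n; terms beyond the cutoff
# are of order q^120 and negligible
a2 = sum(mobius(m) * b(2 * m) for m in range(1, 60))
```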

The parametrization {z = x_0 + r e^{2\pi i t},\ 0 \le t \le 1}, in effect takes the contour {\gamma} to be a circle of radius {r} centered at {x_0}. The important point about this parametrization is not so much the particular shape that it gives to {\gamma}, but the symmetry of that shape with respect to the real axis. Furthermore, for a real function {f(x)}, {g(t) = \mathop{Re} f( x_0 + r e^{2\pi i t} )} is symmetric with respect to {t = 1/2}, or equivalently {f( x_0 + r e^{2\pi i t} )} is symmetric with respect to the real axis. This symmetry of {\gamma} and {f}, together with the circular shape of the former, can be exploited to simplify (2). Specifically, since {g(0) = g(1)}, (2) reduces to a plain mean of {g} over equally spaced points on {\gamma}, which can in turn be computed from the real parts of {f} at equally spaced points along the upper (or lower) half of {\gamma}. It is emphasized that because {f(x)} is a real function, the means of {f} and {\mathop{Re} f} over {\gamma} (the full circle) are equal, since the imaginary parts of {f} from the upper and lower halves of {\gamma} cancel each other (up to roundoff).
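This cancellation is easy to verify numerically. In the Python snippet below (the sample function, radius, and point count are arbitrary choices of mine), the mean of {f} over the full circle has a vanishing imaginary part, and its real part agrees with a mean built from the strictly-upper-half nodes, counted twice by conjugate symmetry, plus the two real-axis nodes:

```python
import numpy as np

def f(z):
    return np.exp(z) / (np.sin(z) ** 3 + np.cos(z) ** 3)

m, r = 16, 0.3
t = np.arange(1, m + 1) / m                    # m equispaced nodes; t = 1 wraps to t = 0
full_mean = np.mean(f(r * np.exp(2j * np.pi * t)))

# upper-half nodes counted twice via conjugate symmetry, plus the two
# nodes on the real axis (t = 1/2 and t = 1)
t_up = np.arange(1, m // 2) / m
half_mean = (2 * np.real(f(r * np.exp(2j * np.pi * t_up))).sum()
             + f(-r) + f(r)) / m
```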

It was mentioned in Complex Magic, Part 1 that finite difference formulas have coefficients that vary significantly in magnitude, which amplifies the absolute errors in function values. Notice that all coefficients in (3) have magnitude {1}, so they cause no such amplification. Consequently, the partial sum {S_M} is expected to have an absolute error of the same order as that of the largest function value contributing to it, {G_M = \max\{|f(x_0)|, |g(t_j)|\},\ t_j = j/mn,\ 1 \le m \le M}. Since dividing {S_M} by {r^n} retains the relative accuracy, {\delta_M = \epsilon G_M/S_M} can serve as a measure of the relative error incurred in calculating the derivative due to roundoff, where {\epsilon} is the maximum relative roundoff error in function evaluations. If {\delta_M} is not acceptable, we can increase the radius {r} of the contour {\gamma}, so that the function evaluations {g(t) = \mathop{Re} f( x_0 + r e^{2\pi i t} )} are done at points clearly distinguishable from {x_0} (in the floating point sense), and repeat the approximation.
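The estimate {\delta_M} can be tracked alongside the partial sum itself. Here is a Python sketch of this bookkeeping (a simplified analogue of the diffc function below, with names of my choosing; the Möbius values are hard-coded for small {M}):

```python
import math
import numpy as np

MU = {1: 1, 2: -1, 3: -1, 4: 0, 5: -1, 6: 1, 7: -1}    # mobius values, m <= 7

def diffc_sketch(f, x0=0.0, n=1, r=0.1, M=3):
    """M-term truncation of (3), returning the n-th derivative together
    with the roundoff estimate delta_M = eps * G_M / |S_M|."""
    f_x0 = f(x0)
    G_M, S_M = abs(f_x0), 0.0
    for m in range(1, M + 1):
        t = np.arange(1, m * n + 1) / (m * n)          # nodes of R^[mn,1]
        g = np.real(f(x0 + r * np.exp(2j * np.pi * t)))
        G_M = max(G_M, np.abs(g).max())                # largest value seen
        S_M += MU[m] * (np.mean(g) - f_x0)
    delta_M = np.finfo(float).eps * G_M / abs(S_M)
    return math.factorial(n) / r ** n * S_M, delta_M

d1, delta = diffc_sketch(np.exp, 0.0, n=1, r=0.1, M=7)  # exact answer: 1
```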

Here is a simple implementation of the Möbius function, using table look-up for the first 77 values and factorization for the rest:

function mu = mobius(n)

persistent muList
if isempty(muList)
    muList = [ 1,-1,-1, 0,-1, 1,-1, 0, 0, 1,-1, 0,-1, 1, 1, 0,-1, 0,-1, 0,...
               1, 1,-1, 0, 0, 1, 0, 0,-1,-1,-1, 0, 1, 1, 1, 0,-1, 1, 1, 0,...
              -1,-1,-1, 0, 0, 1,-1, 0, 0, 0, 1, 0,-1, 0, 1, 0, 1, 1,-1, 0,...
              -1, 1, 0, 0, 1,-1,-1, 0, 1,-1,-1, 0,-1, 1, 0, 0, 1];
end

% Table look-up for small n
if n <= length(muList), mu = muList(n); return, end

% Otherwise factorize: mu(n) = (-1)^k if n is a product of k distinct
% primes, and 0 if n has a repeated prime factor
fac     = factor(n);
len_fac = length(fac);
if length(unique(fac)) == len_fac
    mu = (-1)^len_fac;
else
    mu = 0;
end

And here is a function diffc that implements (3):

function [S_M,delta_M] = diffc(f,varargin)

% Default values: x0 = 0, n = 1, r = 0.1, M = 3
[x0,n,r,M] = deal(0,1,.1,3);

% Overwrite defaults with any non-empty optional arguments
optArgs    = {x0,n,r,M};
emptyArgs  = cellfun(@isempty,varargin);
[optArgs{~emptyArgs}] = varargin{~emptyArgs};
[x0,n,r,M] = optArgs{:};

f_x0      = f(x0);
[G_M,S_M] = deal(0);
for m = 1:M
    t = linspace(1/(m*n),1,m*n);                % nodes of R^[mn,1]; g(0) = g(1)
    g = real( f( x0 + r * exp(1i * 2*pi * t) ) );

    G_M = max( abs( [G_M f_x0 g] ) );           % largest function value so far
    S_M = S_M + mobius(m) * ( mean(g) - f_x0 );
end

delta_M = eps(G_M)/abs(S_M);                    % roundoff estimate, eps*G_M/S_M
S_M     = factorial(n)/(r^n) * S_M;             % the n-th derivative

The following code snippet puts diffc to work, approximating the 10th derivative of the function {e^x / (\sin^{3}x + \cos^{3}x)} at {x_0 = 0}, against an exact calculation using symbolic variables. The exact value of this derivative is 13829824, as seen from the symbolic calculation.

syms x real
f_sym = exp(x)/(sin(x)^3 + cos(x)^3)
f     = matlabFunction(f_sym)

x_0 = 0; n = 10; r = .5; M = 7;
tic, df10d10x_sym         = subs(diff(f_sym,n),x_0), toc
tic, [df10d10x_M,delta_M] = diffc(f,x_0,n,r,M),      toc

On a contour of radius {r = 0.5}, using a partial sum {S_M} of only {M = 7} terms, diffc returns 13829824.00000018, taking two orders of magnitude less time than the symbolic calculation (0.001 s compared to 0.6 s).


There exists a bug in MATLAB R2009a (5.2) and R2008b (5.1) due to which the “MuPAD kernel can take several minutes or longer to initialize on Windows. Any command from the Symbolic Math Toolbox that requires starting MuPAD might seem to hang.” This bug is fixed in R2009b, and a workaround is available for the two releases before that. (When I was first writing this post, without the “patch” applied, I noticed that the symbolic calculation was taking an unreasonably long time, somewhere around 15 s. As satisfied as I was with my uber-fast “complex” implementation, I knew that something was wrong with the Symbolic Math Toolbox, because even creating a symbolic variable, say syms x real, would take over 10 s.)

This entry was posted in High Order and Spectral Methods.
