Laplace Transform – visualized

The Laplace Transform is a widely used tool in mathematics, science, and engineering, and there are countless books and web pages about it.

My animations are now on Wikipedia: https://en.wikipedia.org/wiki/Laplace_transform

And yet I cannot find a single decent visualization of it!  Nobody, as far as I can tell, has tried to actually visualize what it is doing.  There are plenty of animations of the Fourier Transform, like:

[Animation of the Fourier Transform]

But nothing for Laplace Transform that I can find.

So, I will attempt to fill that gap.

What is the Laplace Transform?

It’s a way to represent a function that is 0 for time < 0 (typically) as a sum of many waves that look more like:

[Graph of e^t cos(10t)]
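Each of those waves is really a complex exponential; writing one out with Euler’s formula:

\displaystyle e^{(c+jr)t}=e^{ct}\cos(rt)+j\,e^{ct}\sin(rt)

The graph above is the real part of such a wave, with c = 1 and r = 10.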

Note that what I just said isn’t entirely true, because there’s an imaginary component here too, and we’re actually integrating.  So take this as a crude idea just to get started, and let’s move on to the math to get a better idea:

Math

The goal of this is to visualize how the Laplace Transform works:

\displaystyle\mathscr{L}\{f(t)\}=F(s)

To do this, we need to look at the definition of the inverse Laplace Transform:

\displaystyle f(t) = \mathscr{L}^{-1}\{F(s)\}=\frac{1}{2\pi j}\int^{c+j\infty}_{c-j\infty} F(s)e^{st}\mathrm{d}s

Here, c is a real constant chosen to the right of all the singularities of F(s).  While pretty, this integral isn’t so nice to work with, so let’s make the substitution:

\displaystyle s := c+jr

so that our new limits are just -\infty to \infty, and \mathrm{d}s/\mathrm{d}r = j, giving:

\displaystyle f(t) = \mathscr{L}^{-1}\{F(s)\}=\frac{1}{2\pi j}\int^{\infty}_{-\infty} F(c+jr)e^{(c+jr)t}j\mathrm{d}r

\displaystyle = \frac{1}{2\pi}\int^{\infty}_{-\infty} F(c+jr)e^{(c+jr)t}\mathrm{d}r

Which we will now approximate as:

\displaystyle \approx \frac{1}{2\pi}\sum^{n}_{i=-n} F(c+jr_i)e^{(c+jr_i)t}\Delta r_i
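Here is a rough sketch of that sum in Python/NumPy, just to make it concrete.  (This is not the notebook’s code; the function name, the test transform F(s) = 1/(s + 1), and the cut-off r_max are all just choices made for this illustration.)

import numpy as np

def inverse_laplace_approx(F, t, c=1.0, r_max=200.0, n=4000):
    # Approximate f(t) = 1/(2*pi) * integral of F(c+jr) * e^((c+jr)t) dr
    # with a Riemann sum over r in [-r_max, r_max].
    r = np.linspace(-r_max, r_max, 2 * n + 1)
    dr = r[1] - r[0]
    s = c + 1j * r
    total = np.sum(F(s) * np.exp(s * t)) * dr
    # The imaginary parts cancel in (r, -r) pairs, so keep the real part.
    return (total / (2 * np.pi)).real

# Example: F(s) = 1/(s + 1), whose inverse transform is e^(-t) for t >= 0.
F = lambda s: 1.0 / (s + 1.0)
for t in (0.5, 1.0, 2.0):
    print(t, inverse_laplace_approx(F, t), np.exp(-t))

Truncating at a finite r_max and adding the frequencies a pair at a time is essentially what the animations below do.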

Code

The code turned out to be a bit too large for a blog post, so I’ve put it here:

https://github.com/johnflux/laplace-transform-visualized/blob/master/Laplace%20Transform.ipynb

Results

Note: The graphs say “Next frequency to add: … where s = c+rj”, but really it should be “Next two frequencies to add: … where s = c\pm rj”, since we are adding two frequencies at a time, in such a way that their imaginary parts cancel out, allowing us to keep everything real in the plots.  I fixed this comment in the code, but didn’t want to re-render all the videos.
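To see why the imaginary parts cancel: for a real f(t) we have F(\bar{s}) = \overline{F(s)}, so the terms at s = c + jr and s = c - jr are complex conjugates of each other, and each pair sums to something purely real:

\displaystyle F(c+jr)e^{(c+jr)t}+F(c-jr)e^{(c-jr)t}=2\,\mathrm{Re}\left[F(c+jr)e^{(c+jr)t}\right]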

A cubic polynomial:

A cosine wave:

Now a square wave.  Its transform has infinities (poles) going all the way out to infinity, so it’s not technically possible to plot exactly.  But I tried anyway, and it seems to work visually:
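(For reference, and assuming a square wave that alternates between +1 and -1 with half-period a, the transform is

\displaystyle \mathscr{L}\{\mathrm{square}(t)\}=\frac{1}{s}\tanh\left(\frac{as}{2}\right)

which has poles all the way up the imaginary axis, at s = j\pi(2k+1)/a.  Those are the infinities mentioned above.)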


Gibbs Phenomenon

Note the overshoot ‘ringing’ at the corners in the square wave. This is the Gibbs phenomenon and occurs in Fourier Transforms as well. See that link for more information.


Now something that it absolutely can’t handle: \delta(t), a function that is 0 everywhere except for a sharp peak at exactly time = 0.  In the s-domain this is a constant, meaning that we never converge.  But visually it’s still cool.
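To see why it’s a constant in the s-domain:

\displaystyle \mathscr{L}\{\delta(t)\}=\int_{0^-}^{\infty}\delta(t)e^{-st}\,\mathrm{d}t=1

so F(c+jr) = 1 for every r, and every frequency contributes with the same weight.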

Note that this never ‘settles down’ (converges) because the frequency is constantly increasing while the magnitude remains constant.

There is visual ‘aliasing’ (like how a wheel can appear to go backwards as its speed increases). This is not “real” – it is an artifact of trying to render high frequency waves. If we rendered (and played back) the video at a higher resolution, the effect would disappear.

At the very end, it appears as if the wave is just about to converge.  This is not a coincidence, and it isn’t real either.  It happens because the frequency of the waves becomes too high for us to see them, making the line appear smooth, when in reality the waves are just too close together to see.

The code automatically calculates this point and sets the time step such that the breakdown only happens at the very end of the video.  If we made the time step smaller, this effect would disappear.
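My reading of what the code is doing (a paraphrase, not a quote from it): to render the fastest wave \cos(r_{max}t) without artifacts we need at least two samples per period, so the plot’s sample spacing has to satisfy

\displaystyle \Delta t<\frac{\pi}{r_{max}}

and the breakdown point is essentially this Nyquist-style limit.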

And a simple step function:

A sawtooth:


Compiler is ignoring my C code!

Sometimes you come across a weird bug where the output seems completely impossible.  And it’s extremely hard to debug, or to search for on Google, if you don’t know about this:

If your code has Undefined Behaviour, the compiler is allowed to assume it won’t happen, and can ‘optimize out’ chunks of your code.

Here’s a code snippet from a real bug I found yesterday:


#define FOO_SIZE 10
int foo2[FOO_SIZE];
...
int n=0;
do {
    p->foo[n] = foo2[n];
    n++;
} while( (foo2[n] != 0) && (n < FOO_SIZE) );
printk("n is %d\n", n);


This printed out:

   n is 165

and the system crashed.

But how can n become larger than FOO_SIZE=10?  Because the compiler ‘optimized’ away the check.  Here’s the asm:

} while( (foo2[n] != 0) && (n < FOO_SIZE) );
   21a14:       f813 2f01       ldrb.w  r2, [r3, #1]!
   21a18:       2a00            cmp     r2, #0
   21a1a:       d1f8            bne.n   21a0e <NV_Get+0x82>
   21a1c:       e7ca            b.n     219b4 <NV_Get+0x28>

What we are seeing here is that the “n < FOO_SIZE”  check has been completely removed by the compiler.  Why?

Because in the check, if n == FOO_SIZE, we would be checking whether foo2[FOO_SIZE] != 0.  But that index is out of bounds for foo2.  The compiler knows that this access is out of bounds, and so knows that it is undefined behaviour.  But the compiler is allowed to assume that undefined behaviour doesn’t happen, and so it assumes that n can NEVER be >= FOO_SIZE.  Thus it can remove the n < FOO_SIZE check.

This can be fixed by switching the order of the && operands, like:

} while( (n < FOO_SIZE) && (foo2[n] != 0) );

Or by checking n-1 instead, which is slightly different behaviour, but good enough for me (I was changing a lot of code that had this bug):

} while( (foo2[n-1] != 0) && (n < FOO_SIZE) );
   21a26:       7809            ldrb    r1, [r1, #0]
   21a28:       2900            cmp     r1, #0
   21a2a:       d0c3            beq.n   219b4 <NV_Get+0x28>
   21a2c:       429a            cmp     r2, r3
   21a2e:       d1f5            bne.n   21a1c <NV_Get+0x90>
   21a30:       e7c0            b.n     219b4 <NV_Get+0x28>

Now we see the second comparison in the asm.