5.6   Making the Train, Making a Packet


As we noticed earlier, the problem with the sinusoidal waveforms we’ve considered so far is that they go on for ever. According to our probabilistic model, this would effectively mean that there was an equal chance of a particle being detected anywhere in the universe. Cormorant is quick to point out that this seems like an unpromising starting point for modelling something on the scale of an atom. But Joseph Fourier pipes up from the corner: “Bon courage, petit oiseau! Have you so soon forgotten what I taught you? You can synthesise any waveform you wish if you superpose, superpose and superpose encore.” Joseph announced this incredibly important discovery to the world at the beginning of the 19th century, to a deafening chorus of indifference. Joseph ranks high in the tragic league table of scientists who went underappreciated in their lifetimes (“Tell me about it,” moans Sir Isaac; much coughing and spluttering ensues).

Joseph showed that almost any smooth, periodic function can be constructed as the sum of a series of sine and cosine functions. We’ll take a simplified, special case of a Fourier series to explore what this might mean for us. Figure V-iii shows, for example, what can happen if we superpose a series of cosine curves with different wavelengths (cosine curves are just sine curves shifted by π/2 radians, so that they have a peak centred on zero). The equation of each individual curve is y = cos(2πx/λ), which we can write more concisely as y = cos(kx), where k = 2π/λ is officially called the angular wavenumber.
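For instance, a curve with wavelength λ = π has k = 2π/π = 2, while one with wavelength λ = 2π has k = 1: the longer the wavelength, the smaller the value of k.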

Fig. V-iii.  (a) shows a series of aligned curves, y = cos(kx), differing in the value of the angular wavenumber (k), as shown.  (b) shows superpositions of such curves, as described. From the bottom upwards, the lower three are sums of three, five and nine waveforms respectively, spanning the same range of k but with increasing numbers of intermediate values. Each superposition has been scaled by dividing by the number of contributing waveforms so, in effect, the value of the superposition wave function, Y, at any value of x, will be the mean of the values in the individual component waveforms. The top curve is obtained by scaling an integration of y = cos(kx) over this same range: we’ll see the significance of this a bit later.
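If you would like to play with these superpositions yourself, here is a minimal sketch in Python of how a panel like (b) could be generated. It assumes equally spaced values of k between 0 and 2; that range matches the integration described in the caption, but the exact k values used in the figure itself are an assumption.

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-30, 30, 2000)          # the stretch of x over which to plot

    fig, ax = plt.subplots()
    for n in (3, 5, 9):                     # numbers of superposed waveforms, as in the figure
        ks = np.linspace(0.0, 2.0, n)       # equally spaced values of k (the 0-to-2 range is an assumption)
        Y = np.mean([np.cos(k * x) for k in ks], axis=0)   # scaled superposition: the mean of cos(kx)
        ax.plot(x, Y, label=f"{n} waveforms")

    ax.set_xlabel("x")
    ax.set_ylabel("Y(x)")
    ax.legend()
    plt.show()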

You can see that all of the curves have an aligned positive peak at x = 0: this happens because cos(kx) = 1 when x = 0, regardless of the value of k. So, if we think of these as waveforms, when we construct the superpositions there is always constructive interference at the centre, around this value. As we move away from x = 0, however, destructive interference starts to kick in: you can see, for example, that where x = A, the y values in the different curves fluctuate between positive and negative, so that the sum tends towards zero. The result is that the superposition has a region of focussed intensity around x = 0.

What is also clear from the diagram, however, is that regions of constructive interference arise not just around x=0 but also at regular distances to either side.  This happens at positions where peaks coincide for each of the superposed waveforms. If we superpose actual waves, moving along in space, in this way, the individual localised regions of wave activity are, in physics, called wave packets. A series of regularly spaced wave packets, such as we have generated in this example, is called a wave train.

Let's investigate one of these wave trains more closely:

The more waveforms of different wavelength we superpose, the scarcer the common multiples of those wavelengths become, and therefore the fewer the values of x at which all of them interfere constructively. Hence the wave packets become further apart in the wave train, as is evident in figure V-iii. In the limit of an infinite number of waveforms, the only place where constructive interference remains focussed is around zero, so that just one single wave packet is formed. This is true localisation and so it is, as Nefertiti and Cormorant are now quick to realise, exciting in the context of a matter wave because, in terms of our probabilistic interpretation, it could form the beginnings of an explanation for how our particle is localised in one specific region of space.
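To make the “scarcer common multiples” point concrete: cos(x) repeats every 2π and cos(1.5x) repeats every 4π/3, so their peaks coincide only at multiples of 4π; add cos(1.25x), which repeats every 8π/5, and the coincidences are pushed out to multiples of 8π. Every extra wavelength thins out the places where all of the components peak together.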


So, how can we find the function that results from superposing an infinite number of waveforms with infinitesimally different k values spanning a specified range? “Integration!” shouts Cormorant excitedly. “Indeed, you are right,” says Joseph Fourier. “This is the basis of another of my inventions. In your world it is called a Fourier transform, whereas in my time it was referred to as ‘another worthless piece of excrement from that waste of space Fourier’.” The Fourier transform is an enormously powerful tool which can reconstruct any smoothly varying function in terms of sinusoidal functions. It provides a method for discovering the weighting of each contributing sinusoid needed to produce a specific outcome. In music, for example, a pure tone is a pressure that fluctuates with time as a simple sinusoidal wave. Almost all real sounds - such as musical chords - are superpositions of these individual sinusoids, giving more complex waveforms that correspond to a mixture of contributing notes. Fourier transformation can be used to work out the individual notes (their frequencies and their relative loudness) that are the components of the sound. This has real-life applications in audio processing.
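Without wading into the mathematics, here is a small numerical sketch of that idea in Python. The note frequencies, relative loudnesses and sampling rate are invented for the example, and numpy’s ready-made fft routines stand in for doing the transform by hand.

    import numpy as np

    rate = 8000                               # samples per second (an arbitrary choice)
    t = np.arange(0, 1.0, 1.0 / rate)         # one second of "sound"

    # A chord built from three pure tones, each with a different loudness
    freqs = [440.0, 550.0, 660.0]             # invented note frequencies, in Hz
    amps = [1.0, 0.5, 0.25]
    chord = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))

    # The Fourier transform tells us how strongly each frequency contributes
    spectrum = np.abs(np.fft.rfft(chord))
    freq_axis = np.fft.rfftfreq(len(chord), d=1.0 / rate)

    # The three tallest peaks recover the notes and their relative loudness
    for i in sorted(np.argsort(spectrum)[-3:]):
        print(f"{freq_axis[i]:.0f} Hz, relative strength {spectrum[i]:.0f}")

Running it prints back the three frequencies that were mixed in, together with their relative strengths, recovered purely from the combined waveform.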


The mathematics of the Fourier transform is, unfortunately, a step too far for us at this stage. Instead, we will take a very simplified approach, in which we will assume equal weighting of all possible sinusoids y = cos(kx), over a defined range of k. We'll devise a way to find <y>, the average value of y, as a function of x:  

This question should have led you to a general result for a function y = f(k,x): if we fix on a specific value of x, which we’ll call xo, we can now find the mean value of the function when x = xo, averaged over all the infinitesimally spaced points in the range between k1 and k2:
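<y>  =  [1 / (k2 − k1)]  ×  ∫ f(k, xo) dk,  where the integral runs from k = k1 to k = k2.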

So, if we now restore x to its variable status, we can find the value of  <y> for any and all values of x. This means, in effect, that we can now derive the “average” form of our function, y = f(k,x), over this range of k. We will call this average function Y(x):
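Y(x)  =  [1 / (k2 − k1)]  ×  ∫ f(k, x) dk,  with the integral again running from k = k1 to k = k2.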

If each individual function, y = f(k,x), is a waveform, then this average form is what we’ve been looking for: it’s ψ(x), the superposition of all possible waveforms with values of k between k1 and k2, reduced to the scale of an individual waveform by dividing by (k2 − k1).

Let's see how that works out for our example of superposing waveforms y = cos(kx): 

This exercise gives us an expression for our scaled superposition:
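Y(x)  =  [sin(k2x) − sin(k1x)] / [(k2 − k1) x],

since integrating cos(kx) with respect to k gives sin(kx)/x. (At x = 0 the expression takes its limiting value of 1.)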

The integrated superposition shown earlier in figure V-iii was obtained using this equation with k1 = 0 and k2 = 2. Here it is again:
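With k1 = 0 and k2 = 2, the expression above reduces to Y(x) = sin(2x)/(2x). For anyone who wants to check numerically that averaging a large but finite number of cosines really does home in on this curve, here is a short sketch in Python; the number of sampled k values is an arbitrary choice.

    import numpy as np

    x = np.linspace(0.1, 30, 500)        # avoid x = 0, where the formula takes its limiting value of 1

    # Brute force: average cos(kx) over many closely spaced values of k between 0 and 2
    ks = np.linspace(0.0, 2.0, 10001)
    numerical = np.mean(np.cos(np.outer(ks, x)), axis=0)

    # The result of the integration, with k1 = 0 and k2 = 2
    analytic = np.sin(2 * x) / (2 * x)

    print(np.max(np.abs(numerical - analytic)))   # a small number: the two curves agree closely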

The key difference from the superpositions of limited numbers of waveforms is that inclusion of all possible waveforms now reduces the train of wave packets to a single one, focussed on x=0. This is now real localisation, akin - we may think - to localisation of an electron in a single region of space. The wave packet still has a finite width, however, and our next job is to explore the limitations on this.