5.8A  Getting Deeper into Uncertainty

In our formulation, we have been generating wave packets by superposing y = cos(kx) over a range of k, the angular wavenumber (k = 2π/λ). Prince Louis’ equation (p = h/λ) then establishes the momentum connection: p = hk/2π. The uncertainty in any measurement of momentum is therefore related to the spread in the values of k. The uncertainty in position is related to the width of the wave packet – and, more specifically, to the probability density distribution obtained by squaring the wavefunction. When you have distributions of possible values of variables like this, the usual way to quantify the spread of a distribution is to calculate its standard deviation, and this is how the uncertainty principle is generally formulated.


If you have a distribution of discrete values of a variable x, the standard deviation (σ) quantifies the spread of those values around the mean, as the square root of the mean squared deviation from the mean value, <x>. The mean and standard deviation are therefore given by:
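In the usual notation (writing the N values as x1, x2, … xN, labels introduced here just for this restatement), these are:

$$\langle x \rangle = \frac{1}{N}\sum_{i=1}^{N} x_i \qquad\qquad \sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(x_i - \langle x \rangle\bigr)^2}$$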

These ideas can also be applied to a continuous distribution by replacing the summation with an integration: if the probability density at any value of x is given by the function F(x), we get:
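With F(x) normalised so that it integrates to 1 over all x, these become:

$$\langle x \rangle = \int_{-\infty}^{\infty} x\,F(x)\,dx \qquad\qquad \sigma^2 = \int_{-\infty}^{\infty} \bigl(x - \langle x \rangle\bigr)^2 F(x)\,dx$$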

A rearrangement of the definition of the standard deviation which we are going to find useful is:
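That rearrangement is the standard identity obtained by expanding the square and averaging term by term:

$$\sigma^2 = \bigl\langle x^2 \bigr\rangle - \langle x \rangle^2$$

It is particularly handy when the mean is zero, because the variance then reduces to the mean of the squares - which is exactly the situation we are about to meet for the momentum.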

Now what we can do is take the simplified localisation model we have put together and use that to examine the standard deviations and how they depend on the range of angular wavenumbers (k) in the superposition. It won't be rigorous because we put in the artificial restriction that all the individual waveforms should be weighted equally and also because some of the maths isn't going to be accessible. But we can still get there, more or less:

Considering first the momentum, Nefertiti immediately reminds us of something important: while wavelength is clearly a scalar (and so, therefore, is k), momentum is a vector (since velocity is a vector). This means that we should really write p = ±(hk/2π) and, for any value of k, a positive or negative momentum is equally likely. “We’ll need to keep this in mind as we explore the momentum uncertainty,” she warns. Let's do that:
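Here is a sketch of that working, under our simplifying assumptions: every k between 0 and some maximum kmax is equally weighted, and for each k the two signs of p = ±(hk/2π) are equally likely, so the momentum is effectively spread uniformly between -hkmax/2π and +hkmax/2π. Its mean is zero, so the rearrangement above gives σp² directly as the mean of p²:

$$\sigma_p^2 = \langle p^2 \rangle = \frac{1}{2p_{max}}\int_{-p_{max}}^{+p_{max}} p^2\,dp = \frac{p_{max}^2}{3}, \qquad \text{where } p_{max} = \frac{h\,k_{max}}{2\pi}$$

$$\Rightarrow\quad \sigma_p = \frac{p_{max}}{\sqrt{3}} = \frac{h\,k_{max}}{2\sqrt{3}\,\pi}$$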

So  we have an expression for σp, the momentum standard deviation: 
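(Carrying over the result of the sketch above:)

$$\sigma_p = \frac{h\,k_{max}}{2\sqrt{3}\,\pi}$$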

The key thing to keep in mind is that it is proportional to kmax, the maximum value of the angular wavenumber to be included in the superposition. 

Now let’s turn to the spread in the probability distribution of the position of the particle, as defined by the superposition. We know that the probability of the particle being at a particular position, x, if a measurement is made, is determined by the value of ψ² in the superposition of waveforms. We also know, qualitatively, from figure V-vii in the previous section, that the dominant central peak of the ψ² versus x curve gets narrower - meaning greater localisation - as the range of angular wavenumbers (k) in the superposition is increased. Now we want to look at that more quantitatively. This, unfortunately, is where we really have to start cheating because otherwise the maths is going to get a bit beyond us.

We'll take a closer look at that central peak in ψ² graphs like those we obtained for superpositions with different ranges of k (as in figure V-vii). In order to quantify the differences in the widths of this peak, we're going to measure the width at half height for each. This parameter is used, for example, by spectroscopists to compare linewidths in spectra.
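If you want to see this in action, here is a minimal numerical sketch (not taken from the book's own calculations; the function names and the particular kmax values are just for illustration). It builds the equal-weight superposition, squares it, and measures the width of the dominant central peak at half its height for a few values of kmax:

```python
import numpy as np

def psi(x, k_max):
    """Equal-weight superposition of cos(kx) for k from 0 to k_max.
    The integral of cos(kx) over that range works out to sin(k_max*x)/x,
    which np.sinc lets us evaluate without a divide-by-zero at x = 0."""
    return k_max * np.sinc(k_max * x / np.pi)

def width_at_half_height(x, y):
    """Full width of the dominant central peak of y at half its maximum.
    The small side bands of psi**2 never reach half height, so every point
    above the half-height level belongs to the central peak."""
    half = y.max() / 2.0
    central = x[y >= half]
    return central.max() - central.min()

x = np.linspace(-10.0, 10.0, 20001)
for k_max in (1.0, 2.0, 4.0):
    prob = psi(x, k_max) ** 2            # un-normalised probability density
    w = width_at_half_height(x, prob)
    print(f"k_max = {k_max}: width at half height = {w:.2f}, w * k_max = {w * k_max:.2f}")
```

Whatever value of kmax you pick, the product of the width and kmax comes out the same, so the peak width falls in proportion to 1/kmax - the reciprocal behaviour that the rest of the argument relies on.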

What we really want is to relate this peak width to the standard deviation, so that we can find the relationship to the momentum standard deviation. This is not easy to do but, once again, a nifty bit of cheating can give us a good idea of it. If you're a bit of a statistician, the shape of the peaks in the ψ² versus x graphs may have reminded you (if we ignore the small side bands and just focus on the central peak) of a normal distribution. Physicists call this shape a Gaussian function. We can see that this is more than a vague similarity in figure V-viii, which matches the peak from the <k> = 1 distribution to the closest fitting Gaussian (as calculated using non-linear regression):

Fig. V-viii

Here, the orange curve is ψ², where ψ is obtained by integrating y = cos kx over the range k = 0 to k = 2 (see equation V-vi in section 5.6).

The blue curve is the Gaussian function (normal distribution curve) that most closely fits the ψ² curve over the width of the central peak.
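For reference, the functions being compared can be written out explicitly (the labels A, x0 and σ below are just the usual parameters of a Gaussian - its height, centre and width - and aren't taken from the book's fit):

$$\psi(x) = \int_{0}^{2} \cos(kx)\,dk = \frac{\sin(2x)}{x}, \qquad\qquad \psi^2(x) = \frac{\sin^2(2x)}{x^2}$$

$$G(x) = A\,\exp\!\left(-\frac{(x - x_0)^2}{2\sigma^2}\right)$$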

The helpful thing about this is that for the Gaussian curve, we can fairly easily relate the standard deviation to the width at half height, which we have measured. We'll now do this and derive an expression for  σx , the position standard deviation:
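The property we need is presumably the standard one for a Gaussian: if w is its full width at half its maximum height, then setting G(x) = A/2 in the expression above and solving for x gives

$$w = 2\sqrt{2\ln 2}\;\sigma \;\approx\; 2.35\,\sigma \qquad\Longrightarrow\qquad \sigma \approx \frac{w}{2.35}$$

so dividing a measured width at half height by about 2.35 converts it into a standard deviation.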

This leads us to an estimate of σx, given by:   σx = 0.59 / kmax

So now we have both uncertainties expressed as standard deviations:

σp = hkmax / (2√3 π)        and        σx = 0.59 / kmax

Noticing the reciprocal dependence on kmax ,  let's do something very simple but very important with these results:
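Multiplying the two standard deviations together, the kmax factors cancel:

$$\sigma_x\,\sigma_p \;=\; \frac{0.59}{k_{max}} \times \frac{h\,k_{max}}{2\sqrt{3}\,\pi} \;=\; \frac{0.59\,h}{2\sqrt{3}\,\pi} \;\approx\; \frac{h}{6\pi}$$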


What this confirms - as we found in section 5.8 and as Werner and his mates first established - is that there is a fundamental limit on how precisely it is possible to know both the position and the momentum of a particle. Using our simplified model, we have quantified this as:

σxσp =  h/6π                                                                                                       

Now we need to remember that, although we've taken a more sophisticated approach than we did in section 5.8, we still cheated in deriving this. First, we limited ourselves, artificially, to the case where the waveforms in our superposition were all weighted equally. Second, we did not properly derive σx for the function we generated as a result of this superposition; instead, we approximated the dominant central peak as a Gaussian function and ignored the small sideband peaks (which means we are likely to have underestimated the actual σx). We should expect this to have introduced some error into our result. Happily, though, this error is actually quite small: when Earle Kennard did the proper, rigorous derivation, he came up with σxσp = h/4π. So our value was of the right order of magnitude and we were actually only out by a factor of about 1.5.
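To see where the 1.5 comes from, compare the two products directly:

$$\frac{h/4\pi}{h/6\pi} = \frac{6}{4} = 1.5$$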

This value is the fundamental limit on the precision that is obtainable, because of the waveform superposition inherent in localising a particle. In a real experiment, there may be all kinds of practical reasons why the actual precision achievable will be lower than this. For this reason, the relationship is generally written as an inequality. So Heisenberg's uncertainty principle is generally expressed in quantitative form as:

σxσp  ≥  h/4π                                                                                     (V-12)