Using interval arithmetic for better error propagation

07 Mar 2023

Errors and their propagation

If an experiment measures a number, the number usually contains some uncertainty that is often called the value’s error. This might result from the limited resolution of the measurement device or from the statistical analysis of multiple measurements.

For example, for a value $x=5$ we might have an error $\Delta x=0.2$. This is sometimes written as $x = 5\pm 0.2$.

If we want to derive another value $y = f(x)$, the error in $x$ will result in an error $\Delta y$ for $y$. The process of calculating this new error is called error propagation.

The usual method for this in the natural sciences is the Gaussian error propagation law. I think it is a badly behaved approximation and should basically never be used. Interval arithmetic is conceptually simpler and always exact.

Our example

We will look at a value $x=0$ with $\Delta x = 2\pi$ and calculate $y=\sin(x)$ and $z=\cos(x)$ for demonstration. This is kind of an evil example because it uses a transcendental function with an error larger than the value.

A less extreme version of this problem might happen in the physics of resonance. I vaguely remember using an in-principle similar function for a spin wave experiment.

Gaussian error propagation

Gauss had some clever ideas[1] to approximate the error, but the resulting formulas are easy enough. In this article we will only need the case of one input variable, which is very simple:

\[\begin{align*} \Delta y = |f'(x)|\Delta x \end{align*}\]

Our example

The derivatives of $\sin$ and $\cos$ are easy enough to calculate, so we get:

\[\begin{align*} \Delta y &= |\cos(x)|\Delta x = 2\pi \cos(0) = 2\pi \\ \Delta z &= |-\sin(x)|\Delta x = 2\pi \sin(0) = 0 \end{align*}\]

Both of these are insane. $y = 0 \pm 2\pi$ is bigger than the possible range of $\sin(x)\in [-1,1]$. $z = 1 \pm 0$ has no error at all, no matter what.

The input with its error covers a full period of $\sin$ and $\cos$, so the result should be able to take every value in the range of these functions, and the correct answer must be $y = z = 0\pm 1$.

Interval arithmetic

With interval arithmetic, we stop calculating with numbers and instead consider intervals in which a number must lie. For example, we can get an interval from a value and error as $x_I = [x-\Delta x, x+\Delta x]$. For our initial example of $x = 5\pm 0.2$ we get $x_I = [4.8, 5.2]$.

We can always[2] recover a value and error from an interval $x_I=[x_1, x_2]$ trivially as $x = \frac{x_1+x_2}{2}$ and $\Delta x = \frac{x_2-x_1}{2}$.
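In plain Python (a minimal sketch using bare tuples, not any particular interval library), these two conversions look like this:

def to_interval(x, dx):
    """Convert a value with a symmetric error into an interval."""
    return (x - dx, x + dx)

def from_interval(x1, x2):
    """Recover value and error as midpoint and half-width."""
    return ((x1 + x2) / 2, (x2 - x1) / 2)

print(to_interval(5, 0.2))      # (4.8, 5.2)
print(from_interval(4.8, 5.2))  # (5.0, 0.2) up to floating point rounding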

The general rule for error propagation with intervals is to calculate the image of the function on the interval:

\[\begin{align*} f[x_I] = \{f(x): x \in x_I\} \end{align*}\]

Then the resulting interval is $y_I = [\inf f[x_I], \sup f[x_I]]$.

In most cases this reduces to something much simpler, where you only have to consider the interval bounds. For example, the multiplication of two intervals is

\[\begin{align*} [a, b] \cdot [c, d] = [\min \{ac, ad, bc, bd\}, \max \{ac, ad, bc, bd\}] \end{align*}\]

which is trivial to calculate.
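As a quick sketch, again with bare tuples:

def mul_interval(a, b, c, d):
    """Multiply the intervals [a, b] and [c, d]."""
    products = (a * c, a * d, b * c, b * d)
    return (min(products), max(products))

print(mul_interval(-1, 2, 3, 4))  # (-4, 8)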

For a function like $\cos(x)$ it is a little more tricky:

  1. Shift the interval by some multiple of $2\pi$, so that its midpoint is in $[0, 2\pi]$. (If the interval is wider than $2\pi$, the result is simply $[-1, 1]$.)
  2. Find the point $x_{max}$ that is nearest to a maximum of $\cos$, i.e. to $0$ or $2\pi$. This might be either of the endpoints of the interval, or the interval might contain $0$ or $2\pi$ itself.
  3. Find the point $x_{min}$ that is nearest to the minimum at $\pi$. Again, this might be either of the endpoints, or the interval might contain $\pi$ itself.
  4. The resulting interval is $y_I = [\cos(x_{min}), \cos(x_{max})]$.

That is not so hard.
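Here is a minimal sketch of these steps in plain Python; cos_interval is my own throwaway helper, not part of any interval package:

import math

def cos_interval(a, b):
    """Image of cos on the interval [a, b], following the steps above."""
    # An interval spanning a full period covers the whole range.
    if b - a >= 2 * math.pi:
        return (-1.0, 1.0)
    # Step 1: shift by a multiple of 2*pi so the midpoint lies in [0, 2*pi].
    shift = 2 * math.pi * math.floor((a + b) / (4 * math.pi))
    a, b = a - shift, b - shift
    # Step 2: the maxima of cos are at 0 and 2*pi; if neither lies in the
    # interval, the maximum is at one of the endpoints.
    if a <= 0 <= b or a <= 2 * math.pi <= b:
        hi = 1.0
    else:
        hi = max(math.cos(a), math.cos(b))
    # Step 3: the minimum of cos is at pi; if pi does not lie in the
    # interval, the minimum is at one of the endpoints.
    if a <= math.pi <= b:
        lo = -1.0
    else:
        lo = min(math.cos(a), math.cos(b))
    # Step 4: assemble the resulting interval.
    return (lo, hi)

print(cos_interval(-2 * math.pi, 2 * math.pi))  # (-1.0, 1.0)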

Our example

Considering the example, we get the interval $x_I = [-2\pi, 2\pi]$. This obviously still covers a full period of our functions, and we get:

\[\begin{align*} y_I = z_I = [-1, 1] \end{align*}\]

Practical considerations: Using Python

No one should ever do error propagation by hand, except to torture junior students. The most popular and sane[3] tool to analyze measurements is Python. I will demonstrate how similar the two approaches are using appropriate packages.

For example, we will use some values $x \in [0, 2]$ with error $\Delta x = \frac{x}{2}$ and calculate $\sin x$.

Gaussian error propagation

Using the package uncertainties, the resulting code is:

from uncertainties import ufloat
import uncertainties.umath as umath

# values x in [0, 2) with a relative error of 50 %
xs = [x/25 for x in range(50)]

# attach the error to each value and apply sin with error propagation
xsu = [ufloat(x, x/2) for x in xs]
ysu = [umath.sin(x) for x in xsu]

Interval arithmetic

Using the package interval, the resulting code is very similar:

from interval import interval, imath

# the same values x in [0, 2) with a relative error of 50 %
xs = [x/25 for x in range(50)]

# build the intervals [x - x/2, x + x/2] and apply sin to them
xsi = [interval[x - x/2, x + x/2] for x in xs]
ysi = [imath.sin(x) for x in xsi]

Plotting it

Now, after the calculation, it is easy enough to plot both results and compare them graphically.

[Figure: plot of the calculation of the two arrays above.]

As before, the Gaussian error propagation has error bars that reach above $1$, and they vanish around $x=\frac{\pi}{2} \approx 1.57$.
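A plotting sketch with matplotlib might look like the following; it assumes the xs, ysu, and ysi lists from above, and uses the nominal_value/std_dev attributes of uncertainties and the inf/sup component bounds of interval:

import matplotlib.pyplot as plt

# Gaussian result: nominal values with symmetric error bars.
plt.errorbar(xs, [y.nominal_value for y in ysu],
             yerr=[y.std_dev for y in ysu], fmt='.', label='Gaussian')

# Interval result: recover midpoint and half-width as value and error.
mids = [(y[0].inf + y[0].sup) / 2 for y in ysi]
errs = [(y[0].sup - y[0].inf) / 2 for y in ysi]
plt.errorbar(xs, mids, yerr=errs, fmt='.', label='Interval')

plt.xlabel('x')
plt.ylabel('sin(x)')
plt.legend()
plt.show()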

Conclusion

When using a computer, interval arithmetic is just as easy as conventional error propagation, and it avoids many strange artifacts of Gaussian error propagation.


  1. The basic idea of Gauss was to assume that errors are very small (which is absolutely not always true), linearize the function that derives the new value, and calculate a spheroid around the value based on the input errors. Nobody ever really explains this, because it is very confusing and clever. 

  2. Except when the interval is infinite on at least one side. This is a good thing, because it makes intervals fundamentally more powerful than a value with an error. 

  3. Julia might be slightly more sane than Python.