Jekyll2024-02-28T14:48:39+01:00https://xn--andreasvlker-cjb.de/feed.xmlAndreas VölkerSome posts on computers, math and physics by Andreas VölkerComparing JPEG, WebP and HEIC images after bit flips2024-02-28T11:00:11+01:002024-02-28T11:00:11+01:00https://xn--andreasvlker-cjb.de/2024/02/28/image-formats-bitflips<p>In the last few months, I have thought a little about long-term archiving of files, the effects
of random bit errors and how they interact with compression. So today, I looked a little at
different image formats.</p>
<p>I am not an expert in modern image compression, so I ran a <em>totally unscientific</em> experiment for fun.
I wrote a <a href="https://aproblemsquared.libsyn.com">terrible
Python script</a> that uses ImageMagick to convert an image into
different formats and flips either one or four random bits. This is repeated nine times,
and the images are converted to PNG and composed into a grid.</p>
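<p>The bit-flipping core of such a script can be sketched in a few lines. This is a minimal, hypothetical reconstruction, not the original script; the ImageMagick invocation and file names in the comments are assumptions:</p>

```python
import random
import subprocess

def flip_random_bits(path, n_bits=1):
    """Flip n_bits randomly chosen bits of the file at path, in place."""
    with open(path, "rb") as f:
        data = bytearray(f.read())
    for _ in range(n_bits):
        byte = random.randrange(len(data))
        bit = random.randrange(8)
        data[byte] ^= 1 << bit  # XOR flips exactly one bit
    with open(path, "wb") as f:
        f.write(data)

# Convert, damage, convert back (assumes ImageMagick's `magick` is installed):
# subprocess.run(["magick", "input.png", "out.webp"], check=True)
# flip_random_bits("out.webp", n_bits=4)
# subprocess.run(["magick", "out.webp", "damaged.png"], check=True)
```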
<p>I look at JPEG, WebP and HEIC (Apple’s format based on H.265). My assumption is that JPEG uses
much older and less efficient compression algorithms that leave some redundancy in the file
and might therefore degrade less, while WebP and HEIC seem rather similar to me.</p>
<h2 id="1-bit-flipped">1 bit flipped</h2>
<p>Click images to enlarge.</p>
<h3 id="jpeg">JPEG</h3>
<p><a href="/assets/images/image-formats-bitflips/1/out.jpg.png">
<img width="600px" src="/assets/images/image-formats-bitflips/1/out.jpg.png" />
</a></p>
<p>Eight of the nine images look fine, while in the middle image, something important seems to be
destroyed. Maybe the flipped bit landed in the file header.</p>
<h3 id="webp">WebP</h3>
<p><a href="/assets/images/image-formats-bitflips/1/out.webp.png">
<img width="600px" src="/assets/images/image-formats-bitflips/1/out.webp.png" />
</a></p>
<p>Here the horrible blocky artifacts start at some line, and everything after it is broken but
retains the basic color palette.</p>
<p>Two images look fine, and two others have major artifacts (to me, comparable to old photographs with major
water damage), but if the motif were important to me, they would still be kind of okay.</p>
<p>The other five images are basically destroyed. You might recognize them when you know the image, but they seem mostly useless to me.</p>
<h3 id="heic">HEIC</h3>
<p><a href="/assets/images/image-formats-bitflips/1/out.heic.png">
<img width="600px" src="/assets/images/image-formats-bitflips/1/out.heic.png" />
</a></p>
<p>This one confuses me. The artifacts only seem to wander down and to the right, and they are incredibly horrible
to look at: blocky geometric shapes in strange colors.</p>
<p>Again, two images look fine. For the other images, I am at a loss how to categorize them.
There are more perfectly good pixels than in the WebP images, but the artifacts are horrible.
Some might be okay as mementos if you blur and fade out the color of the artifacts. Five might be salvageable in this way.</p>
<p>One is a complete loss except for the bird head, but even that might be good in some cases.</p>
<h2 id="4-bits-flipped">4 bits flipped</h2>
<p>Click images to enlarge.</p>
<h3 id="jpeg-1">JPEG</h3>
<p><a href="/assets/images/image-formats-bitflips/2/out.jpg.png">
<img width="600px" src="/assets/images/image-formats-bitflips/2/out.jpg.png" />
</a></p>
<p>Four images are fine. The other ones have a line where the color or the motif shifts a little.
They are not great, but totally acceptable. This time, there was no header damage.</p>
<h3 id="webp-1">WebP</h3>
<p><a href="/assets/images/image-formats-bitflips/2/out.webp.png">
<img width="600px" src="/assets/images/image-formats-bitflips/2/out.webp.png" />
</a></p>
<p>Three images are kind of okay with some major artifacts; five are for bird-head enthusiasts only, and the last one is a complete loss.</p>
<h3 id="heic-1">HEIC</h3>
<p><a href="/assets/images/image-formats-bitflips/2/out.heic.png">
<img width="600px" src="/assets/images/image-formats-bitflips/2/out.heic.png" />
</a></p>
<p>This is a horror show.</p>
<ul>
<li>One image was too damaged to be an image and shows up as a white spot.</li>
<li>One image is almost only an artifact, without any relevance to the source image.</li>
<li>Five images are again for the bird-head enjoyers.</li>
<li>Two images have the main motif kind of undamaged, but there are again absolutely horrible artifacts beside it.</li>
</ul>
<h2 id="bmp-with-1000-bit-flips">BMP with 1000 bit flips</h2>
<p>Now, for fun, let’s look at BMP as an uncompressed file format, but let’s introduce 1000 bit errors
instead of four.</p>
<p><a href="/assets/images/image-formats-bitflips/out.bmp.png">
<img width="600px" src="/assets/images/image-formats-bitflips/out.bmp.png" />
</a></p>
<p>This looks perfectly fine. There are obviously some pixel errors, but they don’t destroy the image.
I think it would be trivial to find the worst of them and simply replace them with an average of
the surrounding pixels.</p>
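<p>As a sketch of this idea (my own illustration, not code from the experiment): replace any pixel that differs strongly from the average of its four neighbours with that average. The <code>threshold</code> parameter is an assumption you would tune by eye.</p>

```python
def despeckle(pixels, threshold=100):
    """Replace interior pixels that differ from the average of their four
    neighbours by more than threshold with that average.
    pixels is a 2D list of grayscale values (0-255)."""
    h, w = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]  # copy so the input stays untouched
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbour_sum = (pixels[y - 1][x] + pixels[y + 1][x]
                             + pixels[y][x - 1] + pixels[y][x + 1])
            avg = neighbour_sum // 4
            if abs(pixels[y][x] - avg) > threshold:
                out[y][x] = avg
    return out
```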
<p>Of course, at this bit-flip rate, we were lucky not to destroy the header fields.
Even if this happens, we might be able to fix the header with a text editor and calculator, because
BMP is a really simple format.</p>
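<p>For illustration, the fixed-layout header fields are simple to read with Python’s <code>struct</code> module. This sketch assumes the common BITMAPINFOHEADER variant; other DIB header sizes exist:</p>

```python
import struct

def read_bmp_header(path):
    """Read the 14-byte BMP file header plus the start of the DIB header
    (assuming the common 40-byte BITMAPINFOHEADER layout)."""
    with open(path, "rb") as f:
        data = f.read(26)
    # File header: magic "BM", file size, two reserved fields, pixel data offset
    magic, file_size, _, _, pixel_offset = struct.unpack("<2sIHHI", data[:14])
    # Start of the DIB header: its own size, then signed width and height
    dib_size, width, height = struct.unpack("<Iii", data[14:26])
    return {"magic": magic, "file_size": file_size,
            "pixel_offset": pixel_offset, "width": width, "height": height}
```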
<h2 id="restrictions">Restrictions</h2>
<p>The comparison with the bitmap in particular is kind of unfair. The uncompressed file is orders of magnitude
bigger and will thus accumulate many more bit errors at the same per-bit error rate.</p>
<p>If I had written less horrible Python code, I might have adjusted the number of flips to the different file sizes.</p>
<h3 id="what-did-we-learn-at-the-end">What did we learn at the end?</h3>
<p>Use redundant storage with checksums, so that there are no bit flips.</p>
<p>JPEG is kind of an okayish compromise. Storing five JPEG copies of every image is still much more
space-efficient than a single BMP.</p>
<p>Why are the artifacts so different between WebP and HEIC? This might be a question for an expert
in image compression, instead of some dubious experiment.</p>Readjusting watch hands on UTS DCF clockworks2023-06-08T19:00:11+02:002023-06-08T19:00:11+02:00https://xn--andreasvlker-cjb.de/2023/06/08/uts-dcf<p><img alt="Image of the backside of the UTS DCF clockwork. The reset pins are marked in the upper left corner and the pinhole in the lower middle." src="/assets/images/uhrwerk.jpg" width="600" /></p>
<p>UTS DCF is a high-quality automatic clockwork based on the German <a href="https://en.wikipedia.org/wiki/DCF77?useskin=vector">DCF77</a> standard. I have a clock using one of these that I like very much, but due to unfortunate circumstances, the watch hands got into the wrong alignment<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>. So I very much wanted to fix this. This is what I did; follow my instructions at your own risk.</p>
<p>First, you need to get the clockwork into the 0:00:00 position. To do this, ensure that the clock has a battery and short the
reset pins. I used a small screwdriver. The clockwork will now go into the position where all hands
are supposed to point directly up<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup>. Then the clockwork stops and does nothing.</p>
<p>The next step is to pull out the battery and put a pin into the pinhole to lock the mechanism.
New clockworks tend to come with a nice pin that fits perfectly for this, but you can use any
small enough pin.</p>
<p>Now you can adjust the hands with careful rotation, or just pull them off and put them back on. Adjust them
so that all of them point directly up.</p>
<p>Now pull out the pin and insert the battery. Don’t do this the other way around. The clock should now start to move to 4:00:00.</p>
<p><strong>Mine did not.</strong> So now comes a strange part that scares me, but it worked perfectly for two of these clockworks.
Put the battery in <em>the wrong way</em>, wait a few seconds, and then put it in normally.</p>
<p>After this, my clock did a full rotation of the second hand and then moved to 4:00:00.
This is a little strange: most clockworks are supposed to go a full round to 0:00:00, but this one goes
to the next 4-hour interval. So if you pull out the battery again and reinsert it, it will go to
8:00:00, and on the next try to 0:00:00.</p>
<p>Now you should wait a few minutes and the clock will show the correct time when it gets a signal.
It will usually go faster if you put the clock next to a window.</p>
<p>I hope this will help you to make your clock work correctly in less than the two hours I needed.</p>
<hr />
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>You should think before you rip out clock hands. I didn’t. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>At least in Germany this is usually the 12. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>Using interval arithmetic for better error propagation2023-03-07T06:00:11+01:002023-03-07T06:00:11+01:00https://xn--andreasvlker-cjb.de/2023/03/07/interval-error-propagation<h2 id="errors-and-their-propagation">Errors and their propagation</h2>
<p>If an experiment measures a number, the number usually contains some uncertainty that is often called the value’s error. This might result from the limited resolution of the measurement device or from the statistical analysis of multiple measurements.</p>
<p>For example, for a value $x=5$ we might have an error $\Delta x=0.2$. This is sometimes written as $x = 5\pm 0.2$.</p>
<p>If we want to derive another value $y = f(x)$, the error in $x$ will result in an error $\Delta y$ for $y$. The process of calculating this new error is called error propagation.</p>
<p>The usual method to do this in the natural sciences is the <a href="https://en.wikipedia.org/wiki/Propagation_of_uncertainty">Gaussian error propagation law</a>. I think it is a badly behaved approximation and should basically never be used. <a href="https://en.wikipedia.org/wiki/Interval_arithmetic">Interval arithmetic</a> is conceptually simpler and always exact.</p>
<h3 id="our-example">Our example</h3>
<p>We will look at a value $x=0$ with $\Delta x = 2\pi$, and derive $y=\sin(x)$ and $z=\cos(x)$ for demonstration. This is kind of an evil example because it uses a transcendental function with an error larger than the value.</p>
<p>A less extreme version of this problem might happen in the physics of resonance. I vaguely remember using an in-principle similar function for a <a href="https://en.wikipedia.org/wiki/Spin_wave">spin wave</a> experiment.</p>
<h2 id="gaussian-error-propagation">Gaussian error propagation</h2>
<p>Gauss had some clever ideas<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup> to approximate the error, but the resulting formulas are easy enough. In this article, we will only need the case of a single input variable, which is very simple:</p>
\[\begin{align*}
\Delta y = |f'(x)|\Delta x
\end{align*}\]
<h3 id="our-example-1">Our example</h3>
<p>The derivatives of $\sin$ and $\cos$ are easy enough to calculate, so we get:</p>
\[\begin{align*}
\Delta y &= |\cos(x)|\Delta x = 2\pi \cos(0) = 2\pi \\
\Delta z &= |-\sin(x)|\Delta x = 2\pi \sin(0) = 0
\end{align*}\]
<p>Both of these are insane. $y = 0 \pm 2\pi$ has an error bigger than the possible range of $\sin(x)\in [-1,1]$. $z = 1 \pm 0$ has no error at all, no matter what.</p>
<p>The value with its error covers the full range of $\sin$ and $\cos$, so the results should be able to take every value in the ranges of these functions; thus they must be $y = z = 0\pm 1$.</p>
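<p>The single-variable formula is trivial to check numerically. A quick sketch of the example above (my own illustration):</p>

```python
import math

def gauss_error(f_prime, x, dx):
    """Single-variable Gaussian error propagation: |f'(x)| * dx."""
    return abs(f_prime(x)) * dx

x, dx = 0.0, 2 * math.pi
dy = gauss_error(math.cos, x, dx)                # error of y = sin(x)
dz = gauss_error(lambda t: -math.sin(t), x, dx)  # error of z = cos(x)
print(dy, dz)  # 6.283... (larger than the range of sin) and 0.0
```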
<h2 id="interval-arithmetic">Interval arithmetic</h2>
<p>With interval arithmetic, we stop calculating with numbers and only consider intervals in which a number must lie. For example, we can get an interval from a value and error as $x_I = [x-\Delta x, x+\Delta x]$. For our initial example of $x = 5\pm 0.2$ we get $x_I = [4.8, 5.2]$.</p>
<p>We can always<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup> recover a value and error from an interval $x_I=[x_1, x_2]$ trivially as $x = \frac{x_1+x_2}{2}$ and $\Delta x = \frac{x_2-x_1}{2}$.</p>
<p>The general formula for error propagation with intervals is to calculate the <a href="https://en.wikipedia.org/wiki/Image_(mathematics)#Image_of_a_subset">image</a> of the function on the interval</p>
\[\begin{align*}
f[x_I] = \{f(x): x \in x_I\}
\end{align*}\]
<p>Then the resulting interval is $y_I = [\inf f[x_I], \sup f[x_I]]$.</p>
<p>In most cases, this reduces to something much simpler, where you only have to consider the interval bounds. For example, the multiplication of two intervals is</p>
\[\begin{align*}
[a, b] \cdot [c, d] = [\min \{ac, ad, bc, bd\}, \max \{ac, ad, bc, bd\}]
\end{align*}\]
<p>which is trivial to calculate.</p>
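<p>A minimal sketch of this rule in plain Python (my own illustration, not using any interval package):</p>

```python
def mul_interval(a, b, c, d):
    """Multiply the intervals [a, b] and [c, d]."""
    # The extremes of the product must occur at products of the endpoints.
    products = (a * c, a * d, b * c, b * d)
    return (min(products), max(products))
```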
<p>For a function like $\cos(x)$ it is a little more tricky:</p>
<ol>
<li>Shift $x$ by some multiple of $2\pi$, so it is in $[0, 2\pi]$.</li>
<li>Find the endpoint $x_{max}$ of the interval that is nearest to either $0$ or $2\pi$.</li>
<li>Find the point $x_{min}$ that is nearest to $\pi$. This might be either of the endpoints of the interval, or the interval might contain $\pi$ itself.</li>
<li>The resulting interval is $y_I = [\cos(x_{min}), \cos(x_{max})]$.</li>
</ol>
<p>That is not so hard.</p>
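<p>The same idea can be written more directly: the extrema of $\cos$ occur only at multiples of $\pi$, so it suffices to check the endpoints and any multiple of $\pi$ inside the interval. A minimal sketch (my own, not from any interval library):</p>

```python
import math

def cos_interval(lo, hi):
    """Image of cos over [lo, hi] as a (min, max) pair, assuming lo <= hi."""
    if hi - lo >= 2 * math.pi:
        return (-1.0, 1.0)  # a full period is covered
    candidates = [math.cos(lo), math.cos(hi)]
    # Check every multiple of pi inside the interval, where cos is +1 or -1.
    k = math.ceil(lo / math.pi)
    while k * math.pi <= hi:
        candidates.append(1.0 if k % 2 == 0 else -1.0)
        k += 1
    return (min(candidates), max(candidates))
```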
<h3 id="our-example-2">Our example</h3>
<p>Considering the example, we get the interval $x_I = [-2\pi, 2\pi]$. This obviously still covers a full period of our functions, and we get:</p>
\[\begin{align*}
y_I = z_I = [-1, 1]
\end{align*}\]
<h2 id="practical-considerations-using-python">Practical considerations: Using Python</h2>
<p>No one should ever do error propagation by hand, except to torture junior students. The most popular and sane<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup> tool to analyze measurements is Python. I will demonstrate how similar the two approaches are using appropriate packages.</p>
<p>For example, we will use some values $x \in [0, 2]$ with error $\Delta x = \frac{x}{2}$ and calculate $\sin x$.</p>
<h3 id="gaussian-error-propagation-1">Gaussian error propagation</h3>
<p>Using the package <a href="https://pythonhosted.org/uncertainties/">uncertainties</a>, the resulting code is:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">xs</span> <span class="o">=</span> <span class="p">[</span><span class="n">x</span><span class="o">/</span><span class="mi">25</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">50</span><span class="p">)]</span>
<span class="kn">from</span> <span class="nn">uncertainties</span> <span class="kn">import</span> <span class="n">ufloat</span>
<span class="kn">import</span> <span class="nn">uncertainties.umath</span> <span class="k">as</span> <span class="n">umath</span>
<span class="n">xsu</span> <span class="o">=</span> <span class="p">[</span><span class="n">ufloat</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">x</span><span class="o">/</span><span class="mi">2</span><span class="p">)</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">xs</span><span class="p">]</span>
<span class="n">ysu</span> <span class="o">=</span> <span class="p">[</span><span class="n">umath</span><span class="p">.</span><span class="n">sin</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">xsu</span><span class="p">]</span>
</code></pre></div></div>
<h3 id="interval-arithmetic-1">Interval arithmetic</h3>
<p>Using the package <a href="https://pyinterval.readthedocs.io/en/latest/">interval</a>, the resulting code is very similar:</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">xs</span> <span class="o">=</span> <span class="p">[</span><span class="n">x</span><span class="o">/</span><span class="mi">25</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">50</span><span class="p">)]</span>
<span class="kn">from</span> <span class="nn">interval</span> <span class="kn">import</span> <span class="n">interval</span><span class="p">,</span> <span class="n">inf</span><span class="p">,</span> <span class="n">imath</span>
<span class="n">xsi</span> <span class="o">=</span> <span class="p">[</span><span class="n">interval</span><span class="p">[</span><span class="n">x</span><span class="o">-</span><span class="n">x</span><span class="o">/</span><span class="mi">2</span><span class="p">,</span> <span class="n">x</span><span class="o">+</span><span class="n">x</span><span class="o">/</span><span class="mi">2</span><span class="p">]</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">xs</span><span class="p">]</span>
<span class="n">ysi</span> <span class="o">=</span> <span class="p">[</span><span class="n">imath</span><span class="p">.</span><span class="n">sin</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">xsi</span><span class="p">]</span>
</code></pre></div></div>
<h3 id="plotting-it">Plotting it</h3>
<p>Now that everything is calculated, it is easy enough to plot and compare the results graphically.
<img src="/assets/images/interval-vs-gauss.svg" alt="Plot of the calculation of the two arrays above." />
As before, the Gaussian error propagation has error bars that go above $1$, and they vanish around $x=\frac{\pi}{2} \approx 1.57$.</p>
<h1 id="conclusion">Conclusion</h1>
<p>When using a computer, interval arithmetic is just as easy as conventional error propagation, and it avoids many strange artifacts of Gaussian error propagation.</p>
<hr />
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>The basic idea of Gauss was to assume that the errors are very small (which is absolutely not always true), linearize the function that derives the new value, and calculate a spheroid around the value based on the input errors. Nobody ever really explains this, because it is very confusing and clever. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Except if the interval is infinite on at least one side. This is a good thing because this makes intervals fundamentally more powerful than value and error. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p><a href="https://julialang.org">Julia</a> might be slightly more sane than Python. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>Release of my first iOS app2023-02-08T06:00:11+01:002023-02-08T06:00:11+01:00https://xn--andreasvlker-cjb.de/2023/02/08/released-first-ios-app<p>I released my first app for iPhone/iPad on the App Store. In principle, it is really simple: it helps you figure out when to ventilate your home without messing up the humidity.</p>
<p>It currently only works in Germany, but you can get it here:</p>
<p><a style="margin: 0 auto;" href="https://apps.apple.com/us/app/lüften-jetzt/id6443673652"><img src="/assets/appstorebadge/en.svg" height="60" /></a></p>
<p>Here come some thoughts on how it went.</p>
<h2 id="programming-language-swift">Programming language: Swift</h2>
<p>These days, <a href="https://swift.org">Swift</a> is the language of choice for native iOS apps. In the end, I also used Swift for the relatively simple server component that gathers the data from the weather service.</p>
<p>I think it is a really nice language. It has all the features I want without being too complicated. The syntax is nice to look at for me and easy enough to read.</p>
<p>Of course, it is a little messy due to its compatibility with some parts of <a href="https://de.wikipedia.org/wiki/Objective-C">Objective-C</a>, which Apple preferred before developing Swift. It is very much a multi-paradigm language, with all the pros and cons of that.</p>
<p>I had to implement some basic data structures like a <a href="https://en.wikipedia.org/wiki/Pairing_heap">Pairing heap</a> and a <a href="https://en.wikipedia.org/wiki/Bounding_volume_hierarchy">Bounding volume hierarchy</a> for arbitrary dimensional data. From my experience, this can get messy fast in most languages, but it was unexpectedly simple.</p>
<h2 id="ui-framework-swiftui">UI Framework: SwiftUI</h2>
<p><a href="https://developer.apple.com/xcode/swiftui/">SwiftUI</a> is Apple’s relatively new UI Framework. It kind of follows the idea of <a href="https://reactjs.org">react</a> and <a href="https://vuejs.org">vue.js</a>, but I like it much more than these two. A big part of this is that there are specific features built into Swift to make it nicer to use, which Javascript misses at the moment. I also think it took some ideas from <a href="https://en.wikipedia.org/wiki/Immediate_mode_GUI">Immediate mode GUI</a> that work very well. I think it is very well designed.</p>
<h2 id="data-source-germany-public-weather-service">Data source: Germany’s public weather service</h2>
<p>I get my data from the <a href="https://www.dwd.de/">DWD</a>. It has an amazing amount of freely accessible data for Germany. There is also a huge amount of data with worldwide coverage, but it is not as nicely preprocessed as the local data.</p>
<p>The documentation and data formats are a little old-school at first, but easy to parse once you have figured out what you need. <strong>Tip</strong>: If nothing works, check whether the data uses <a href="https://de.wikipedia.org/wiki/ISO_8859-1">latin1</a> encoding instead of <a href="https://de.wikipedia.org/wiki/UTF-8">UTF-8</a>. As far as I can tell, this is not documented anywhere…</p>A proof for the closed form of triangular numbers2022-12-31T06:00:11+01:002022-12-31T06:00:11+01:00https://xn--andreasvlker-cjb.de/2022/12/31/triangular-number-proof<p>The triangular numbers</p>
\[T_n = \sum_{i=1}^n i, \text{for }n>0\]
<p>have the generally known closed form</p>
\[T_n = \frac{1}{2}n(n+1)\]
<p>This can be easily proven by <a href="https://proofwiki.org/wiki/Closed_Form_for_Triangular_Numbers/Proof_by_Induction">induction</a> or <a href="https://proofwiki.org/wiki/Closed_Form_for_Triangular_Numbers/Direct_Proof">young Gauss’s regrouping of terms</a>.</p>
<p>Calculating some sums, I found another way using the sum of the squares. Even though I haven’t found it on the web yet, it is probably well-known.</p>
<h2 id="proof">Proof</h2>
<p>We look at the sum of squares</p>
\[S = \sum_{i=0}^{n+1} i^2\]
<p>On the one hand, we can remove the last term from the series:</p>
\[S = (n+1)^2 + \sum_{i=0}^n i^2\]
<p>On the other hand, we can remove the first term and shift indices:</p>
\[\begin{aligned}
S &= 0 + \sum_{i=1}^{n+1} i^2 = \sum_{i=0}^n (i+1)^2 \\
&= \sum_{i=0}^n (i^2+2i+1) = \sum_{i=0}^n i^2+\sum_{i=0}^n 2i+\sum_{i=0}^n 1 \\
&= \sum_{i=0}^n i^2 + 2\sum_{i=0}^n i + (n+1)
\end{aligned}\]
<p>Equating both sides we get</p>
\[\begin{aligned}
&(n+1)^2 + \sum_{i=0}^n i^2 = \sum_{i=0}^n i^2 + 2\sum_{i=0}^n i + (n+1) \\
\Leftrightarrow &(n+1)^2 = n+1+ 2\sum_{i=0}^n i \\
\Leftrightarrow &2\sum_{i=0}^n i = (n+1)^2-(n+1) = n(n+1) \\
\Leftrightarrow &\sum_{i=0}^n i = \sum_{i=1}^n i = \frac{1}{2}n(n+1)
\end{aligned}\]
<p>QED</p>
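<p>Not part of the proof, but the closed form is easy to sanity-check numerically:</p>

```python
def T(n):
    """Triangular number T_n by direct summation."""
    return sum(range(1, n + 1))

# Compare the summation against the closed form for many n.
for n in range(1, 1000):
    assert T(n) == n * (n + 1) // 2
```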
<h2 id="some-thoughts">Some thoughts</h2>
<p>This is kind of similar to the <a href="https://proofwiki.org/wiki/Closed_Form_for_Triangular_Numbers/Proof_by_Telescoping_Sum">proof by telescoping series</a>. A similar idea to remove the first and last element of a series is often used to calculate the partial sums of the <a href="https://en.wikipedia.org/wiki/Geometric_series">geometric series</a>.</p>
<p>This proof does not give as much insight as Gauss’s proof, but it is relatively easy for a proof by manipulation of sums. It also generalizes to the sum of $i^k$ for $k \in \mathbb{N}$ by always using the sum of $i^{k+1}$ and the binomial expansion of $(i+1)^{k+1}$.</p>A simple script for cron Prometheus export2022-11-30T06:00:11+01:002022-11-30T06:00:11+01:00https://xn--andreasvlker-cjb.de/2022/11/30/prometheus-for-cron<p>I like <a href="https://prometheus.io">Prometheus</a> for collecting metrics from servers and programs. Sadly, the <a href="https://en.wikipedia.org/wiki/Cron">cron daemon</a> does not support it. So I wrote a simple Python script to report some metrics:</p>
<ul>
<li>Start time</li>
<li>Duration of the job</li>
<li>Exit code</li>
</ul>
<p>It works with the help of the <a href="https://github.com/prometheus/node_exporter">node exporter</a> that is used in most Prometheus setups.</p>
<h2 id="the-script">The script</h2>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">#!/usr/bin/env python3
</span>
<span class="kn">from</span> <span class="nn">prometheus_client</span> <span class="kn">import</span> <span class="n">REGISTRY</span><span class="p">,</span> <span class="n">write_to_textfile</span><span class="p">,</span> <span class="n">Gauge</span>
<span class="kn">import</span> <span class="nn">prometheus_client</span>
<span class="kn">from</span> <span class="nn">time</span> <span class="kn">import</span> <span class="n">time</span>
<span class="kn">import</span> <span class="nn">sys</span>
<span class="kn">import</span> <span class="nn">subprocess</span>
<span class="c1"># debian/ubuntu
#OUTPUT_DIR = "/var/lib/prometheus/node-exporter"
#freebsd
</span><span class="n">OUTPUT_DIR</span> <span class="o">=</span> <span class="s">"/var/tmp/node_exporter"</span>
<span class="n">jobname</span> <span class="o">=</span> <span class="n">sys</span><span class="p">.</span><span class="n">argv</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="n">command</span> <span class="o">=</span> <span class="n">sys</span><span class="p">.</span><span class="n">argv</span><span class="p">[</span><span class="mi">2</span><span class="p">:]</span>
<span class="n">output_file</span> <span class="o">=</span> <span class="n">OUTPUT_DIR</span><span class="o">+</span><span class="s">"/"</span><span class="o">+</span><span class="n">jobname</span><span class="o">+</span><span class="s">".prom"</span>
<span class="n">prometheus_client</span><span class="p">.</span><span class="n">REGISTRY</span><span class="p">.</span><span class="n">unregister</span><span class="p">(</span><span class="n">prometheus_client</span><span class="p">.</span><span class="n">GC_COLLECTOR</span><span class="p">)</span>
<span class="n">prometheus_client</span><span class="p">.</span><span class="n">REGISTRY</span><span class="p">.</span><span class="n">unregister</span><span class="p">(</span><span class="n">prometheus_client</span><span class="p">.</span><span class="n">PLATFORM_COLLECTOR</span><span class="p">)</span>
<span class="n">prometheus_client</span><span class="p">.</span><span class="n">REGISTRY</span><span class="p">.</span><span class="n">unregister</span><span class="p">(</span><span class="n">prometheus_client</span><span class="p">.</span><span class="n">PROCESS_COLLECTOR</span><span class="p">)</span>
<span class="n">start_time</span> <span class="o">=</span> <span class="n">time</span><span class="p">()</span>
<span class="n">completed</span> <span class="o">=</span> <span class="n">subprocess</span><span class="p">.</span><span class="n">run</span><span class="p">(</span><span class="n">command</span><span class="p">)</span>
<span class="n">end_time</span> <span class="o">=</span> <span class="n">time</span><span class="p">()</span>
<span class="n">exit_code</span> <span class="o">=</span> <span class="n">completed</span><span class="p">.</span><span class="n">returncode</span>
<span class="n">Gauge</span><span class="p">(</span><span class="s">'cron_starttime'</span><span class="p">,</span> <span class="s">'Start time of cron job'</span><span class="p">,</span> <span class="p">[</span><span class="s">'job'</span><span class="p">]).</span><span class="n">labels</span><span class="p">(</span><span class="n">jobname</span><span class="p">).</span><span class="nb">set</span><span class="p">(</span><span class="n">start_time</span><span class="p">)</span>
<span class="n">Gauge</span><span class="p">(</span><span class="s">'cron_duration'</span><span class="p">,</span> <span class="s">'Duration of cron job in seconds'</span><span class="p">,</span> <span class="p">[</span><span class="s">'job'</span><span class="p">]).</span><span class="n">labels</span><span class="p">(</span><span class="n">jobname</span><span class="p">).</span><span class="nb">set</span><span class="p">(</span><span class="n">end_time</span><span class="o">-</span><span class="n">start_time</span><span class="p">)</span>
<span class="n">Gauge</span><span class="p">(</span><span class="s">'cron_exitcode'</span><span class="p">,</span> <span class="s">'Exit code of cron job'</span><span class="p">,</span> <span class="p">[</span><span class="s">'job'</span><span class="p">]).</span><span class="n">labels</span><span class="p">(</span><span class="n">jobname</span><span class="p">).</span><span class="nb">set</span><span class="p">(</span><span class="n">exit_code</span><span class="p">)</span>
<span class="n">write_to_textfile</span><span class="p">(</span><span class="n">output_file</span><span class="p">,</span> <span class="n">registry</span><span class="o">=</span><span class="n">REGISTRY</span><span class="p">)</span>
</code></pre></div></div>
<h2 id="how-to-install">How to install</h2>
<p>This assumes basic familiarity with Prometheus and node_exporter.</p>
<ul>
<li>Install, setup and run the prometheus node_exporter</li>
<li>Install the <code class="language-plaintext highlighter-rouge">prometheus_client</code> package for Python <sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup></li>
<li>Copy the script to <code class="language-plaintext highlighter-rouge">/usr/bin/cron_exporter</code> <sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup></li>
<li>Make it executable</li>
<li>Adjust the <code class="language-plaintext highlighter-rouge">OUTPUT_DIR</code> variable for your system. The script already contains examples for Debian/Ubuntu and FreeBSD.</li>
</ul>
<h2 id="usage">Usage</h2>
<p>Edit your crontab as usual, but put the <code class="language-plaintext highlighter-rouge">cron_exporter</code> and the name of the job in front of every line. So for example</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@daily /usr/bin/certbot renew
</code></pre></div></div>
<p>should become</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@daily cron_exporter certbot /usr/bin/certbot renew
</code></pre></div></div>
<h2 id="access-from-prometheus-with-promql">Access from prometheus with PromQL</h2>
<ul>
<li>Duration of certbot: <code class="language-plaintext highlighter-rouge">cron_duration{exported_job="certbot"}</code></li>
<li>Time since the last run for all jobs (in minutes): <code class="language-plaintext highlighter-rouge">(time()-cron_starttime)/60</code></li>
<li>Exit codes for a job named backup: <code class="language-plaintext highlighter-rouge">cron_exitcode{exported_job="backup"}</code></li>
</ul>
<hr />
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>You can simply run <code class="language-plaintext highlighter-rouge">pip install prometheus-client</code> as root. Most people seem to think this is a bad idea, but it works for me. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>I think in principle it would belong into <code class="language-plaintext highlighter-rouge">/usr/local/bin</code>, but that does not work out with <code class="language-plaintext highlighter-rouge">$PATH</code> in Debian and Ubuntu. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
</li>
</ol>
</div>I like Prometheus for collecting metrics from servers and programs. Sadly the cron daemon does not support it. So I wrote a simple Python script to report some metrics: Start time Duration of time Exit codeSolving a $x^2+\varepsilon x=1$ with perturbation theory for two regimes2022-11-27T06:00:11+01:002022-11-27T06:00:11+01:00https://xn--andreasvlker-cjb.de/2022/11/27/quadratic-equation-perturbation<h2 id="solving-the-equation-directly">Solving the equation directly</h2>
<p>Let’s look at the equation
\(x^2+\varepsilon x=1\)
We can rewrite it as
\(x^2+\varepsilon x-1=0\)
for convenience. As with all quadratic equations, this equation has two solutions. Luckily both solutions of our equation are real numbers for any $\varepsilon$.</p>
<p>In this article, we will assume $\varepsilon > 0$ and we will only look at the bigger solution. So we <a href="https://www.wolframalpha.com/input?key=&i=x%5E2%2Bepsilon*x%3D1">obviously</a> get
\(x(\varepsilon)=\frac{1}{2}\big(\sqrt{\varepsilon^2+4}-\varepsilon\big)\).</p>
<h3 id="limits-for-0-and-infty">Limits for $0$ and $+\infty$</h3>
<p>Let’s look at some limits of the function. At $0$ we trivially get
\(\lim_{\varepsilon\rightarrow 0}x(\varepsilon)=\frac{1}{2}\big(\sqrt{0^2+4}-0\big)=1\).</p>
<p>$\lim_{\varepsilon\rightarrow\infty}x(\varepsilon)=0$ is trickier to prove. The <a href="https://en.wikipedia.org/wiki/Limit_of_a_function#Limits_at_infinity">definition</a> for this is<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup></p>
\[(\forall C > 0) (\exists \varepsilon > 0)(\forall s>\varepsilon) x(s) < C\]
<p>Because $x(\varepsilon)$ is strictly monotonically decreasing<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup> and greater than $0$, this all comes down to finding an $\varepsilon$ so that $x(\varepsilon) < C$ for any $C > 0$. So let’s check:</p>
\[\begin{aligned}
&\frac{1}{2}(\sqrt{\varepsilon^2+4}-\varepsilon) < C\\
\Leftrightarrow &\sqrt{\varepsilon^2+4} < 2C+\varepsilon\\
\Leftrightarrow &\varepsilon^2+4 < (2C+\varepsilon)^2 = 4C^2+4C\varepsilon+\varepsilon^2 \\
\Leftrightarrow & 4 < 4C^2+4C\varepsilon\\
\Leftrightarrow &\frac{1-C^2}{C} < \varepsilon
\end{aligned}\]
<p>QED</p>
<h2 id="pertrubation-theory-for-small-varepsilon">Perturbation theory for small $\varepsilon$</h2>
<p>So let’s assume $\varepsilon$ is small. If we assume $\varepsilon = 0$ we can solve the resulting equation $x^2 - 1 = 0$ instantly. This is nice and we can formalize this by using a <a href="https://en.wikipedia.org/wiki/Power_series">power series</a> for $x$ as</p>
\[x = \sum_{n=0}^\infty x_n\varepsilon^n = x_0 + \varepsilon x_1 + \varepsilon^2 x_2 + \varepsilon^3 x_3 + \varepsilon^4 x_4+\dots\]
<p>with unknown coefficients $x_n$.</p>
<p>So we can plug this in our equation and get
\(\begin{aligned}0&=x^2+\varepsilon x-1\\
&=(x_0 + \varepsilon x_1 + \varepsilon^2 x_2 + \varepsilon^3 x_3 + \varepsilon^4 x_4+\dots)(x_0 + \varepsilon x_1 + \varepsilon^2 x_2 + \varepsilon^3 x_3 + \varepsilon^4 x_4+\dots)\\
&\quad+\varepsilon(x_0 + \varepsilon x_1 + \varepsilon^2 x_2 + \varepsilon^3 x_3 + \varepsilon^4 x_4+\dots)-1\end{aligned}\)</p>
<p>We can multiply this out
\(\begin{aligned}0=&x_0^2+2\varepsilon x_0x_1+2\varepsilon^2 x_0x_2+\varepsilon^2 x_1^2+2\varepsilon^3x_0x_3+2\varepsilon^3x_1x_2+\\
&2\varepsilon^4x_0x_4+2\varepsilon^4x_1x_3+\varepsilon^4x_2^2+\\
&\varepsilon x_0+\varepsilon^2 x_1+\varepsilon^3 x_2+\varepsilon^4 x_3-1+\mathcal{O}(\varepsilon^5)
\end{aligned}\)</p>
<p>Grouping together by powers of $\varepsilon$ we get
\(\begin{aligned}
0=&(x_0^2-1)+\varepsilon(2x_0x_1+x_0)+\varepsilon^2(2x_0x_2+x_1^2+x_1)\\
&+\varepsilon^3(2x_0x_3+2x_1x_2+x_2)+\varepsilon^4(2x_0x_4+2x_1x_3+x_2^2+x_3)+\mathcal O(\varepsilon^5)
\end{aligned}\)</p>
<p>This equation needs to hold for arbitrary $\varepsilon$. This is only possible if the coefficient of each power is equal to $0$. So we will look at each coefficient in turn.</p>
<h3 id="varepsilon01">$\varepsilon^0=1$</h3>
<p>We get $x_0^2-1=0$. Again we only want the larger solution so we take $x_0=1$. After this the higher orders are <a href="https://www.youtube.com/watch?v=SxdOUGdseq4">simpler (but mostly not easier)</a>, due to only being linear equations.</p>
<h3 id="varepsilon">$\varepsilon$</h3>
<p>We have $2x_0x_1+x_0=0$. Solving for $x_1$ we get
\(x_1 = -\frac{1}{2x_0}x_0=-\frac{1}{2}\).</p>
<h3 id="varepsilon2">$\varepsilon^2$</h3>
<p>We have $2x_0x_2+x_1^2+x_1$ and solving for $x_2$ we get
\(x_2 = -\frac{1}{2x_0}(x_1^2+x_1)\).</p>
<p>Now we have a small problem. There are still variables in our formula for $x_2$. Luckily (and by construction) we already know $x_0$ and $x_1$ and can plug them in. A quick calculation gives $x_2 = \frac{1}{8}$.</p>
<h3 id="varepsilon3">$\varepsilon^3$</h3>
<p>\(2x_0x_3+2x_1x_2+x_2=0 \Rightarrow x_3 = -\frac{1}{2x_0}(2x_1x_2+x_2)=-\frac{1}{2}\big({-\frac{1}{8}}+\frac{1}{8}\big)=0\)</p>
<h3 id="higher-orders">Higher orders</h3>
<p>We can continue this schema as long as we want. The equations get increasingly longer but stay in principle simple.</p>
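<p>The schema is also easy to automate. Here is a short Python sketch (my own code, not from the original post) that computes each order from $x_n = -\frac{1}{2x_0}\big(\sum_{i=1}^{n-1} x_i x_{n-i} + x_{n-1}\big)$ with exact fractions:</p>

```python
from fractions import Fraction

def series_coefficients(order):
    # x_0 = 1 is the larger root of x^2 - 1 = 0
    x = [Fraction(1)]
    for n in range(1, order + 1):
        # coefficient of eps^n: sum_{i+j=n} x_i*x_j (from x^2) plus x_{n-1} (from eps*x)
        s = sum(x[i] * x[n - i] for i in range(1, n)) + x[n - 1]
        x.append(-s / (2 * x[0]))
    return x

print(series_coefficients(5))
```

<p>Running it gives $1$, $-\frac{1}{2}$, $\frac{1}{8}$, $0$, $-\frac{1}{128}$, $0$ for the first six coefficients.</p>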
<h3 id="summing-up-the-series">Summing up the series</h3>
<p>So I calculated two more orders before I got bored. Let’s plug them into our definition of the power series again:</p>
\[x = 1-\frac{1}{2}\varepsilon+\frac{1}{8}\varepsilon^2+0\varepsilon^3
-\frac{1}{128}\varepsilon^4+0\varepsilon^5+\mathcal O (\varepsilon^6)\]
<p>To get an approximation we just ignore all orders we did not calculate:</p>
\[x \approx 1-\frac{1}{2}\varepsilon+\frac{1}{8}\varepsilon^2+0\varepsilon^3
-\frac{1}{128}\varepsilon^4+0\varepsilon^5\]
<p><img src="/assets/images/quadratic-equation-perturbation/simple.svg" alt="Plot of the function" /></p>
<p>So for $\varepsilon$ smaller than $1$ our approximation works out well, and it gets worse for larger $\varepsilon$. We assumed $\varepsilon$ is small at the beginning, so all this is as expected.</p>
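<p>A quick numerical sanity check of the first three terms of the series against the exact solution (a sketch, not code from the post):</p>

```python
import math

def exact_root(eps):
    # larger root of x^2 + eps*x - 1 = 0
    return (math.sqrt(eps**2 + 4) - eps) / 2

def series_small(eps):
    # perturbation series truncated after the eps^2 term
    return 1 - eps / 2 + eps**2 / 8

for eps in (0.1, 0.5, 1.0, 3.0):
    print(eps, exact_root(eps), series_small(eps))
```

<p>For $\varepsilon=0.1$ the truncated series agrees to about six digits; at $\varepsilon=3$ it is already far off.</p>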
<p>If we look at the limits, at $\varepsilon\rightarrow0$ everything fits, and at $\varepsilon\rightarrow\infty$ our series diverges.</p>
<p>This is all rather well-known. If you have some formal training in quantum physics or advanced ordinary differential equations you will have heard this before. In the next part, I will show you something quite similar, but it looks funny and it is a little surprising that it works.</p>
<h2 id="pertrubation-theory-for-large-varepsilon">Perturbation theory for large $\varepsilon$</h2>
<p>In the last step, we looked at a power series that works well for $\varepsilon$ small. Now let us construct a power series that works well when $\varepsilon$ becomes very large. In this case, $\varepsilon^{-1}=\frac{1}{\varepsilon}$ becomes very small, so let’s use it for a series:
\(x=\sum_{n=0}^{\infty}x_n\varepsilon^{-n}\)</p>
<p>We can again plug this into our quadratic equation and look at the resulting powers of $\varepsilon$.</p>
<h3 id="varepsilon-1">$\varepsilon$</h3>
<p>Somewhat surprisingly we still have a positive power of $\varepsilon$ (from the $\varepsilon x$ term in the equation):
\(x_0=0\)</p>
<p>Nothing to do here. Turns out we could have started our series at $n=1$.</p>
<h3 id="varepsilon01-1">$\varepsilon^0=1$</h3>
<p>We get $x_0^2+x_1-1=0$, so $x_1=1$.</p>
<p>Another surprise is that we don’t get to choose between two solutions this time. I think this could be because the smaller solution diverges at infinity. <a href="https://www.youtube.com/watch?v=9PYgCN2kIsg">If you work it out tell me what you find</a>.</p>
<h3 id="varepsilon-1">$\varepsilon^{-1}$</h3>
<p>We get $2x_0x_1+x_2=0$ and so $x_2=0$. It turns out all even order terms in the series are zero. If you don’t believe me, try to prove it by induction. It is not that hard, but messy to write down correctly.</p>
<h3 id="varepsilon-2">$\varepsilon^{-2}$</h3>
<p>Now we get to the first order without a surprise. The equation is $2x_0x_2+x_1^2+x_3=0 \Rightarrow x_3 = -x_1^2 = -1$. Now we can go on for the higher (lower) orders.</p>
<h3 id="summing-up">Summing up</h3>
<p>With some more orders, I got</p>
<p>\(x = \varepsilon^{-1}-\varepsilon^{-3}+2\varepsilon^{-5}+\mathcal O(\varepsilon^{-7})\).</p>
<p>In most cases it is a very bad omen when the coefficients in a power series grow, but it is not that bad in this case. For $\varepsilon \rightarrow 0$ this diverges badly anyway.</p>
<p>Due to $x_0=0$, this converges at $\varepsilon\rightarrow\infty$ to $0$ as it should.</p>
<p><img src="/assets/images/quadratic-equation-perturbation/both.svg" alt="Plot of the function" />
If we plot this function, it works well for large $\varepsilon$ (in this case, larger than about $3$), as expected, and does not work for small values.</p>
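<p>Again, the truncated series is easy to check numerically against the exact solution (a sketch, not code from the post):</p>

```python
import math

def exact_root(eps):
    # larger root of x^2 + eps*x - 1 = 0
    return (math.sqrt(eps**2 + 4) - eps) / 2

def series_large(eps):
    # large-eps series truncated after the eps^-3 term
    return 1 / eps - 1 / eps**3

for eps in (1.0, 3.0, 10.0):
    print(eps, exact_root(eps), series_large(eps))
```

<p>At $\varepsilon=10$ the agreement is excellent; at $\varepsilon=1$ the truncated series is useless.</p>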
<h2 id="the-strange-case-of-varepsilonapprox-1">The strange case of $\varepsilon\approx 1$</h2>
<p>So we found a good approximation for small and large $\varepsilon$ with simple calculations, but they both break down at $\varepsilon=1$. That seems strange to me. After some months of thought, I have not found something similar that works there.</p>
<p>Of course, you can calculate more orders and the approximation at $1$ will become good enough for practical applications.</p>
<p>You might try a series of the form
\(x = \sum_{n=0}^\infty x_n(\varepsilon-1)^n\),
but this looks so ugly to me, that I did not even want to work it out for fun.<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup></p>
<h2 id="is-this-useful">Is this useful?</h2>
<p>Not really for quadratic (or other algebraic) equations.</p>
<p>The principle for small $\varepsilon$ is used in <a href="https://en.wikipedia.org/wiki/Feynman_diagram">many calculations in high energy physics</a>. You can think of this quadratic equation as one of the simplest possible model systems for the method.</p>
<p>If one could work out the method for large $\varepsilon$ for quantum mechanics, it could possibly help for some quantum systems with <a href="https://en.wikipedia.org/wiki/Coupling_constant#Weak_and_strong_coupling">strong coupling</a>. Maybe.</p>
<hr />
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>Don’t be confused: the names of the variables look kind of strange because we already used $x$ and $\varepsilon$ <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>For $\varepsilon \ge 0$</p>
\[\begin{aligned}
&x'(\varepsilon) = \frac{1}{2}\big(\frac{\varepsilon}{\sqrt{\varepsilon^2+4}}-1\big) < 0\\
\Leftrightarrow & \frac{\varepsilon}{\sqrt{\varepsilon^2+4}} < 1\\
\Leftrightarrow & \varepsilon < \sqrt{\varepsilon^2+4}
\end{aligned}\]
<p>QED <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>I think it would be beautiful if you can make the combined series
\(\sum_{n=-\infty}^{\infty}a_n\varepsilon^n\)
work, but I just get an infinite number of terms for each order. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>Solving the equation directly Let’s look at the equation \(x^2+\varepsilon x=1\) We can rewrite it as \(x^2+\varepsilon x-1=0\) for convenience. As with all quadratic equations, this equation has two solutions. Luckily both solutions of our equation are real numbers for any $\varepsilon$.Interpolation on earth surface (far from the dateline)2022-10-27T07:00:11+02:002022-10-27T07:00:11+02:00https://xn--andreasvlker-cjb.de/2022/10/27/interpolation-on-earth<h2 id="introduction">Introduction</h2>
<p>If you want to get the current temperature you could go outside, but you can also approximate it with interpolation. There are public weather stations all over the world where you can look up the current temperature at their location.</p>
<p>You could just take the station nearest to your home and believe it, which often is fine, but let’s do better anyway. You can take the stations around you and use their average. Even better, you weight them and give the values nearer to you more relevance. In other words: you interpolate their temperatures at your current location.</p>
<p>In this article, I will discuss how to figure out which stations are around you, how to weight them, and how to do all that reasonably fast with a computer.</p>
<p>“Around you” is surprisingly hard to define and figure out. The simplest way is to take the k nearest stations, but this ignores a central problem: occlusion. If two information sources lie on a line from you, you usually only want to look at the first of them; the ones farther behind and occluded should almost always be ignored.</p>
<p>Now the title of this post says we only do interpolation far from the date line. As you almost surely know, a point on the earth is usually addressed with a latitude and longitude, which are basically angles. As always with angles, they jump somewhere from 360 degrees back to 0 degrees. On the earth, this is defined to happen in the region of the Pacific. This is messy to handle and you don’t always need it. For example, I work with the data from Germany’s public weather service, which is only really good inside of Germany, and Germany is very far from the Pacific. Luckily Germany also does not have colonies anymore.</p>
<h2 id="why-not-do-natural-neighbor-interpolation">Why not do natural neighbor interpolation</h2>
<p>(You can safely skip this paragraph if you don’t know what natural neighbor interpolation is or you know what it is and don’t like it at all)</p>
<p>If you have ever looked at multidimensional interpolation you heard of <a href="https://en.wikipedia.org/wiki/Natural_neighbor_interpolation">natural neighbor interpolation</a>. This is a really clever algorithm based on Voronoi diagrams. It automatically handles occlusion and it gives a kind of natural way to get the weights without having to mess around.</p>
<p>I think it is a brilliant idea and I don’t like it at all. I collaborated on an implementation based on this principle some time ago and I don’t like the algorithm very much anymore. It is horrible to implement fast and even then it is still slow. It is a mess of numerical problems if the points have too much symmetry. And implementation is only the smallest problem.</p>
<p>Naturalness is very nice because you don’t have to tune any parameters. That holds until your definition of naturalness and the algorithm’s are not the same. Then you want to tune parameters and you can’t. For example, on earth, the distance on the surface is not the same as the distance between the coordinates as vectors. There is no direct way to make this work with natural neighbor interpolation.</p>
<p>Someone clever might figure out how to make this work for this special case. I’m sure that will make the algorithm even more horrible to implement efficiently and you can’t reuse it for the next modification. For example, you might want to include the different elevations of the weather stations, because it turns out the temperature on the next mountain is not that applicable in the valley below. Let us use something else.</p>
<h2 id="getting-the-weights">Getting the weights</h2>
<p>Computing the weights at a target location from the source stations is easy once you have their distances: for each station, you take the distance and apply a decreasing function to it.</p>
<p>Classically you use <code class="language-plaintext highlighter-rouge">weight[i] = 1/pow(distance[i], e)</code> for some exponent <code class="language-plaintext highlighter-rouge">e</code>, which is usually at least 2<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>. Now the sum of all weights is not 1 as it should be, so you have to calculate their sum and divide each weight by it.</p>
<p>In swift, this could look like the following<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup></p>
<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">func</span> <span class="nf">interpolate</span><span class="p">(</span><span class="nv">datapoints</span><span class="p">:</span> <span class="p">[</span><span class="kt">Double</span><span class="p">],</span> <span class="nv">distances</span><span class="p">:</span> <span class="p">[</span><span class="kt">Double</span><span class="p">])</span> <span class="o">-></span> <span class="kt">Double</span><span class="p">{</span>
<span class="k">let</span> <span class="nv">weights</span> <span class="o">=</span> <span class="n">distances</span><span class="o">.</span><span class="n">map</span><span class="p">{</span><span class="n">distance</span> <span class="k">in</span> <span class="mf">1.0</span><span class="o">/</span><span class="nf">pow</span><span class="p">(</span><span class="n">distance</span><span class="p">,</span> <span class="mi">2</span><span class="p">)}</span>
<span class="k">let</span> <span class="nv">weightSum</span> <span class="o">=</span> <span class="n">weights</span><span class="o">.</span><span class="nf">sum</span><span class="p">()</span>
<span class="k">let</span> <span class="nv">normalizedWeights</span> <span class="o">=</span> <span class="n">weights</span><span class="o">.</span><span class="n">map</span><span class="p">{</span><span class="n">weight</span> <span class="k">in</span> <span class="n">weight</span><span class="o">/</span><span class="n">weightSum</span><span class="p">}</span>
<span class="k">let</span> <span class="nv">interpolated</span> <span class="o">=</span> <span class="nf">zip</span><span class="p">(</span><span class="n">normalizedWeights</span><span class="p">,</span> <span class="n">datapoints</span><span class="p">)</span><span class="o">.</span><span class="n">map</span><span class="p">{</span> <span class="n">weight</span><span class="p">,</span> <span class="n">data</span> <span class="k">in</span>
<span class="k">return</span> <span class="n">weight</span><span class="o">*</span><span class="n">data</span>
<span class="p">}</span><span class="o">.</span><span class="nf">sum</span><span class="p">()</span>
<span class="k">return</span> <span class="n">interpolated</span>
<span class="p">}</span>
</code></pre></div></div>
<p>You can use any decreasing function of the distance. I kind of like a shifted and mirrored <a href="https://en.wikipedia.org/wiki/Sigmoid_function">sigmoid function</a>, but it is fun and easy to try different functions that come to your mind.</p>
<p>We can also put anything else into this formula. For example, we might add 100m more distance for every 1m of elevation difference.</p>
<p>To get the distance it might work fine to ignore the curvature of the earth, assume longitude and latitude are a flat grid<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup>, and take their distance with the Pythagorean theorem. If your stations are near enough this works fine, but we can do better and use the real distance on the earth’s surface. This is easy to calculate by using the <a href="https://rosettacode.org/wiki/Haversine_formula">Haversine formula</a><sup id="fnref:4" role="doc-noteref"><a href="#fn:4" class="footnote" rel="footnote">4</a></sup>.</p>
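<p>A minimal sketch of the Haversine formula in Python (function and parameter names are my own choice, not from the linked page; it assumes a spherical earth with a mean radius of 6371 km):</p>

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    # great-circle distance on a spherical earth, inputs in degrees
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Berlin to Munich, roughly 504 km as the crow flies
print(haversine_km(52.52, 13.405, 48.137, 11.575))
```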
<h2 id="making-it-faster-with-nearest-neighbors">Making it faster with nearest neighbors</h2>
<p>So until now, we look at every station in (potentially) the world and this sounds like a lot of useless work. So let us just get the nearest ones. In most cases <code class="language-plaintext highlighter-rouge">k=5-10</code> stations are enough.</p>
<p>A simple way to do this is to loop through all stations and put them into a max-heap ordered by the distance. Whenever the heap contains more than <code class="language-plaintext highlighter-rouge">k</code> values you can remove the most distant one to keep the heap small and efficient. So in the end the heap will contain the <code class="language-plaintext highlighter-rouge">k</code> nearest stations.</p>
<p>This can be done much more efficiently by using bounding volume hierarchies<sup id="fnref:5" role="doc-noteref"><a href="#fn:5" class="footnote" rel="footnote">5</a></sup>. If the heap contains at least <code class="language-plaintext highlighter-rouge">k</code> elements, its max is always the farthest a relevant point can ever be. We can use its distance to prune large parts of the bounding volume hierarchy.</p>
<p>We could use the correct distance, but this will get messy real fast when we want to use the bounding volume hierarchy. Just use the simple Pythagorean distance!</p>
<p>The simplest implementation of this might look something like this:</p>
<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">func</span> <span class="kt">KNN</span><span class="p">(</span><span class="n">_</span> <span class="nv">k</span><span class="p">:</span> <span class="kt">Int</span><span class="p">,</span> <span class="nv">at</span><span class="p">:</span> <span class="kt">Vec2</span><span class="p">,</span> <span class="n">from</span> <span class="nv">sources</span><span class="p">:</span> <span class="p">[</span><span class="kt">Station</span><span class="p">])</span> <span class="o">-></span> <span class="p">[</span><span class="kt">Station</span><span class="p">]{</span>
<span class="k">var</span> <span class="nv">heap</span> <span class="o">=</span> <span class="kt">BinaryHeap</span><span class="o"><</span><span class="kt">Station</span><span class="o">></span><span class="p">(</span><span class="nv">withComparision</span><span class="p">:</span> <span class="p">{</span><span class="n">a</span><span class="p">,</span> <span class="n">b</span> <span class="k">in</span> <span class="kt">Vec2</span><span class="o">.</span><span class="nf">distance</span><span class="p">(</span><span class="n">at</span><span class="p">,</span> <span class="n">a</span><span class="o">.</span><span class="nf">position</span><span class="p">())</span> <span class="o">>=</span> <span class="kt">Vec2</span><span class="o">.</span><span class="nf">distance</span><span class="p">(</span><span class="n">at</span><span class="p">,</span> <span class="n">b</span><span class="o">.</span><span class="nf">position</span><span class="p">())})</span>
<span class="k">for</span> <span class="n">point</span> <span class="k">in</span> <span class="n">sources</span><span class="p">{</span>
<span class="n">heap</span><span class="o">.</span><span class="nf">insert</span><span class="p">(</span><span class="n">point</span><span class="p">)</span>
<span class="k">if</span> <span class="n">heap</span><span class="o">.</span><span class="n">count</span> <span class="o">></span> <span class="n">k</span><span class="p">{</span>
<span class="n">heap</span><span class="o">.</span><span class="nf">extract</span><span class="p">()</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="k">var</span> <span class="nv">ret</span> <span class="o">=</span> <span class="p">[</span><span class="kt">Station</span><span class="p">]()</span>
<span class="k">while</span> <span class="n">heap</span><span class="o">.</span><span class="n">count</span> <span class="o">></span> <span class="mi">0</span><span class="p">{</span>
<span class="n">ret</span><span class="o">.</span><span class="nf">append</span><span class="p">(</span><span class="n">heap</span><span class="o">.</span><span class="nf">extract</span><span class="p">()</span><span class="o">!</span><span class="p">)</span>
<span class="p">}</span>
<span class="k">return</span> <span class="n">ret</span>
<span class="p">}</span>
</code></pre></div></div>
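<p>For comparison, the same heap idea fits in a few lines of Python using only the standard library (this is my own sketch, not a translation of the <code class="language-plaintext highlighter-rouge">BinaryHeap</code> used above):</p>

```python
import heapq
import math

def k_nearest(k, at, stations):
    # heapq is a min-heap, so we negate the distances to get max-heap behavior:
    # the root is always the farthest station we still keep
    heap = []
    for station in stations:
        d = math.dist(at, station)
        heapq.heappush(heap, (-d, station))
        if len(heap) > k:
            heapq.heappop(heap)  # drop the current farthest
    # sort the survivors from nearest to farthest
    return [s for _, s in sorted(heap, reverse=True)]

print(k_nearest(2, (0.0, 0.0), [(1.0, 0.0), (5.0, 0.0), (0.5, 0.5), (2.0, 2.0)]))
```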
<h2 id="occlusion">Occlusion</h2>
<p>Now that we are left with a few nearest points, we can handle occlusion. So let’s just do the simplest thing possible. For each point we define an area behind it that is occluded from the view of our target location. So for every station, we check which other stations are in its occluded area and throw them away.</p>
<p><img src="/assets/images/interpolation-on-earth/OccluderDiagram.png" alt="Occlusion" width="500px" /></p>
<p>With some basic linear algebra and trigonometry you can work this out and you might find the following:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>func isOccluded(from source: Vec2, occluder: Vec2, occluded: Vec2, openingAngle: Double = 45) -> Bool{
    let limit = cos(openingAngle*Double.pi/180)
    // a point does not occlude itself
    if occluder == occluded{
        return false
    }
    let normal = (occluder-source).normalize()
    let cosAngle = normal.dot(occluded-occluder)/Vec2.distance(occluder, occluded)
    return cosAngle > limit
}
</code></pre></div></div>
<p>Now we can check each pair of stations, reject every station that is occluded by another one, and we are done. Because we only have a few nearest stations this is fast enough for most practical applications and we don’t need a more clever algorithm.</p>
<p>I use an opening angle of 45°, but you might tune this to your liking and application.</p>
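<p>The pairwise filtering step could look like this in Python (a sketch with my own names, mirroring the function above):</p>

```python
import math

def is_occluded(source, occluder, occluded, opening_angle=45.0):
    # True if `occluded` lies in the cone behind `occluder`, seen from `source`
    if occluder == occluded:
        return False
    limit = math.cos(math.radians(opening_angle))
    nx, ny = occluder[0] - source[0], occluder[1] - source[1]
    vx, vy = occluded[0] - occluder[0], occluded[1] - occluder[1]
    n_len, v_len = math.hypot(nx, ny), math.hypot(vx, vy)
    return (nx * vx + ny * vy) / (n_len * v_len) > limit

def visible_stations(source, stations):
    # reject every station that is hidden behind any other one
    return [s for s in stations
            if not any(is_occluded(source, o, s) for o in stations if o != s)]

print(visible_stations((0.0, 0.0), [(1.0, 0.0), (2.0, 0.0), (0.0, 1.0)]))
```

<p>In the example, the station at (2, 0) is dropped because it hides directly behind the one at (1, 0).</p>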
<h2 id="how-to-incorporate-some-curvature">How to incorporate some curvature</h2>
<p>Until now we ignored the curvature of the earth for the most part, but this is simple enough to fix.</p>
<p>We can take something like 30% more nearest neighbors than we really want. Then we sort these by the correct surface distance and only keep the nearest <code class="language-plaintext highlighter-rouge">k</code> of them. After that, we almost surely have the nearest points considering curvature, with very little extra work.</p>
<h2 id="review">Review</h2>
<p>So let us review what we do</p>
<ul>
<li>Think of a target point for our interpolation</li>
<li>Find a little more than <code class="language-plaintext highlighter-rouge">k</code> nearest stations approximately, maybe with the help of a bounding volume hierarchy</li>
<li>Select the real nearest <code class="language-plaintext highlighter-rouge">k</code> from them</li>
<li>Check which ones are occluded by others and ignore them</li>
<li>Calculate the weights from the distance to the target point and make them sum up to 1</li>
<li>Get the weighted average of the temperature</li>
<li>Go for a walk in temperature-adequate clothing</li>
</ul>
<p>This obviously works for any position on earth other than the stations and for any quantity other than temperature.</p>
<h2 id="how-to-work-near-the-dateline">How to work near the dateline</h2>
<p>So this only works when the latitude and longitude are near enough to normal rectangular coordinates. This definitely does not hold near the date line. So what do we do there?</p>
<p>The trick is not to use latitude and longitude. We can simply use a 3D coordinate system for the earth, which is always well behaved. It is simple to <a href="https://en.wikipedia.org/wiki/Spherical_coordinate_system#Cartesian_coordinates">calculate the positions in real space</a> and it turns out bounding volume hierarchies and our occlusion code generalize without any problems to 3D. It only gets a little slower to calculate.</p>
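<p>A sketch of the conversion in Python (assuming a spherical earth with a mean radius of 6371 km; names are my own):</p>

```python
import math

def to_cartesian(lat_deg, lon_deg, radius_km=6371.0):
    # 3D point on a spherical earth; no coordinate jump at the date line
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (radius_km * math.cos(lat) * math.cos(lon),
            radius_km * math.cos(lat) * math.sin(lon),
            radius_km * math.sin(lat))

# two points facing each other across the date line are close in 3D
a = to_cartesian(0.0, 179.9)
b = to_cartesian(0.0, -179.9)
print(math.dist(a, b))
```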
<p>For the calculation of the weights, we might want to go back to the latitude and longitude because that formula works perfectly well with the date line.</p>
<hr />
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>This is called <a href="https://en.wikipedia.org/wiki/Inverse_distance_weighting">Shepard’s method</a>. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>For this to work sum must be defined something like this</p>
<div class="language-swift highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">extension</span> <span class="kt">Sequence</span> <span class="k">where</span> <span class="kt">Element</span><span class="p">:</span> <span class="kt">BinaryFloatingPoint</span><span class="p">{</span>
<span class="kd">func</span> <span class="nf">sum</span><span class="p">()</span> <span class="o">-></span> <span class="kt">Element</span><span class="p">{</span>
<span class="k">return</span> <span class="k">self</span><span class="o">.</span><span class="nf">reduce</span><span class="p">(</span><span class="mf">0.0</span><span class="p">,</span> <span class="o">+</span><span class="p">)</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div> </div>
<p><a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>This is basically an <a href="https://en.wikipedia.org/wiki/Equirectangular_projection">equirectangular projection</a>. (A <a href="https://en.wikipedia.org/wiki/Mercator_projection">Mercator projection</a>, often used for world maps, additionally stretches the latitude.) <a href="#fnref:3" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
</li>
<li id="fn:4" role="doc-endnote">
<p><a href="https://en.wikipedia.org/wiki/Vincenty%27s_formulae">Vincenty’s formula</a> is even more precise and takes into account that the earth is an ellipsoid rather than a perfect sphere. I’m pretty sure you don’t need this for interpolation. <a href="#fnref:4" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
</li>
<li id="fn:5" role="doc-endnote">
<p><a href="https://jacco.ompf2.com/2022/04/13/how-to-build-a-bvh-part-1-basics/">This article series</a> is a really good overview for conventional bounding volume hierarchies. I like to do this rather differently and will try to write it down one day. <a href="#fnref:5" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>IntroductionSI model stability with unusual perturbation theory2022-04-12T07:00:11+02:002022-04-12T07:00:11+02:00https://xn--andreasvlker-cjb.de/2022/04/12/stability-perturbation<p>The <a href="https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology#The_SIR_model">SI model</a> is one of the simplest models
to describe the spread of infectious disease using differential equations. I figured out a
funny way to look at its steady states. It reproduces the well-known and fairly obvious result.</p>
<h2 id="the-model">The model</h2>
<p>The model is defined by the equations</p>
\[\begin{aligned}
\dot I(t) &= \beta S(t) I(t) - \gamma I(t)\\
\dot S(t) &= -\beta S(t) I(t).
\end{aligned}\]
<p>where $S(t) \in [0,1]$ is the still susceptible part of the population and $I(t) \in [0, 1]$ the currently infected part.
$\beta >0$ and $\gamma>0$ are constants for the rates of infection and resolution of infections.</p>
<p>The idea is that the rate of new infections is proportional to the chance of a susceptible individual meeting an infected individual.
We assume the infected are only infectious for some time, so their number shrinks exponentially in the absence of new infections.
This is a really basic model that ignores reinfections, interventions, inhomogeneous populations, spatial dynamics…</p>
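<p>To get a feeling for the dynamics, the system is easy to integrate numerically. A sketch with a simple forward Euler scheme (my own code, step size chosen ad hoc):</p>

```python
def simulate_si(beta, gamma, i0, s0, dt=0.01, steps=5000):
    # forward Euler integration of I' = beta*S*I - gamma*I and S' = -beta*S*I
    i, s = i0, s0
    for _ in range(steps):
        di = beta * s * i - gamma * i
        ds = -beta * s * i
        i, s = i + dt * di, s + dt * ds
    return i, s

# subcritical: beta*S < gamma, the outbreak dies out quickly
print(simulate_si(0.5, 1.0, 0.01, 0.9))
# supercritical: beta*S > gamma, a large part of the population gets infected
print(simulate_si(3.0, 1.0, 0.001, 0.999))
```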
<p>As with almost any nonlinear equation, there is no practical general solution, so let’s
at least look at the steady states of the system.</p>
<h2 id="steady-states-and-their-stability">Steady states and their stability</h2>
<p>The obvious steady states for this model are $S=\bar S$ arbitrary and $I=0$. If there are no infections, nobody
else can get infected. The slightly unusual thing in this case is that there is a whole continuum of
steady states.</p>
<p>Now we will try to see if these are stable, i.e. if a little perturbation shrinks back to zero or explodes.
The usual tool for this is <a href="https://en.wikipedia.org/wiki/Linear_stability">linear stability analysis</a>. It looks at the eigenvalues of the <a href="https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant">jacobian</a> so let’s calculate it for the steady states:</p>
\[J = \begin{bmatrix}\beta \bar S & \gamma\\\\ 0 & 0\end{bmatrix}\]
<p>The last row of this matrix is $0$ and this implies that $0$ is one of its eigenvalues.
Sadly, in this case, linear stability analysis can’t make any conclusion about the stability so we need another tool: perturbation theory.</p>
<h2 id="perturbation-theory">Perturbation theory</h2>
<p>The idea of <a href="https://pirsa.org/speaker/Carl-Bender">perturbation theory</a> is to transform a hard problem into an
infinite series of easy problems and get an approximation to arbitrary precision by solving these.
As is usual we insert a small parameter $\varepsilon$ into our system
and develop the solution as power series of it.
So let’s do this:</p>
\[\begin{aligned}
I(t) &= \sum_{n=0}^{\infty} \varepsilon^n I_n(t)\\
S(t) &= \sum_{n=0}^{\infty} \varepsilon^n S_n(t)
\end{aligned}\]
<p>Now we can plug these into our equations:</p>
\[\begin{aligned}
\dot I_0 +\varepsilon \dot I_1+ \cdots &= \beta (S_0+\varepsilon S_1+\dots)(I_0+\varepsilon I_1+\dots)-\gamma(I_0+\varepsilon I_1+\dots) \\ &=\beta S_0I_0-\gamma I_0+\varepsilon(\beta I_0 S_1+\beta I_1 S_0-\gamma I_1)+\dots\\
\dot S_0 +\varepsilon \dot S_1+ \dots &= -\beta (S_0+\varepsilon S_1+\dots)(I_0+\varepsilon I_1+\dots) \\
&= -\beta S_0I_0-\varepsilon(\beta I_0 S_1+\beta I_1 S_0)+\dots
\end{aligned}\]
<p>If you have some experience with perturbation theory you might think we should have put an $\varepsilon$ somewhere into our equations.
Even worse: if we look at all terms without an $\varepsilon$, we find we still have the old equations from before
and we still don’t know how to solve them.</p>
<p>Now here comes my unusual trick: We put an $\varepsilon$ into the initial condition of the equations and set $S(0) = \bar S$
and $I(0) = \varepsilon \bar I$ with an arbitrary (and in the end boring) $\bar I$. Now let’s solve our system by order!</p>
<h3 id="varepsilon0-order">$\varepsilon^0$ Order</h3>
<p>If we collect everything without an $\varepsilon$ we get:</p>
\[\begin{aligned}
\dot I_0 &= \beta S_0 I_0 - \gamma I_0 \\
\dot S_0 &= -\beta S_0 I_0 \\
S_0(0) &= \bar S \text{ and } I_0(0) = 0 \\
\end{aligned}\]
<p>Now if we look at the first equation we see an $I_0$ in every term, and with the initial condition it turns out this equation
can trivially be solved as</p>
\[I_0(t) = 0.\]
<p>Plugging this into the second equation, the right side becomes zero, so the solution is a constant.
With the initial condition, we directly get</p>
\[S_0(t) = \bar S.\]
<p>By solving the 0th order we basically showed that the steady-state is steady.</p>
<h3 id="varepsilon1-order">$\varepsilon^1$ order</h3>
<p>Now we come to the interesting terms. Collecting everything with $\varepsilon$ we get</p>
\[\begin{aligned}
\dot I_1 &= \beta S_0 I_1 + \beta S_1 I_0 -\gamma I_1= (\beta \bar S-\gamma) I_1 \\
\dot S_1 &= -\beta S_0 I_1 - \beta S_1 I_0 = -\beta \bar S I_1 \\
S_1(0) &= 0 \text{ and } I_1(0) = \bar I
\end{aligned}\]
<p>The first equation is linear and homogeneous with constant coefficients, so its solution is well known and easy to check.
Plugging in the initial condition we get</p>
\[I_1(t) = \bar I e^{(\beta\bar S-\gamma) t}\]
<p>If we plug this into the second equation it becomes an integrable equation:</p>
\[\dot S_1 = -\beta \bar S \bar I e^{(\beta\bar S-\gamma) t} \Rightarrow S_1 = -\beta \bar I \bar S \int dt\, e^{(\beta\bar S-\gamma) t}\]
<p>and with the initial condition, we get</p>
\[S_1(t) = -\frac{\beta \bar I \bar S}{\beta\bar S-\gamma}\left(e^{(\beta\bar S-\gamma) t}-1\right).\]
<h3 id="higher-orders">Higher orders</h3>
<p>With the same principle we get the equations for each order and could in principle solve them fairly easily (using the substitution $I_n(t) = \tilde I(t) e^{(\beta \bar S-\gamma)t}$).
This gets increasingly messy and I have not figured out their general form.
But with some thought, we can see $I_N$ and $S_N$ have the form</p>
\[\sum_{n=0}^N c_n e^{n(\beta \bar S-\gamma) t}\]
<p>with constants $c_n$. This means the higher orders show similar but faster growth/shrink behavior than the first order.</p>
<p>Now a mathematician should check the radius of convergence which is not obvious at all.
Luckily I come from a physics background and can ignore such “details”.</p>
<h2 id="what-that-means-for-stability">What that means for stability</h2>
<p>So let’s collect what we figured out:</p>
\[\begin{aligned}
I(t) &= \varepsilon \bar I e^{(\beta\bar S-\gamma) t} +\mathcal O (\varepsilon^2)\\
S(t) &= \bar S -\varepsilon\frac{\beta \bar I \bar S}{\beta\bar S-\gamma}\left(e^{(\beta\bar S-\gamma) t}-1\right)+\mathcal O (\varepsilon^2)
\end{aligned}\]
<p>Looking at the exponential functions, they shrink exactly if $\beta \bar S-\gamma < 0 \Leftrightarrow \bar S < \frac{\gamma}{\beta}=:\frac{1}{R_0}$. If the sign is flipped, the number of infections explodes. The newly defined $R_0$ is these days fairly well known as the <a href="https://en.wikipedia.org/wiki/Basic_reproduction_number">basic reproduction number</a>. Our condition reproduces exactly the condition for herd immunity, which is nice.</p>
<p>Going back to our equations, we can see that if this condition holds, the infections fall back to zero and the state seems to be stable.
But we also need to look at the stability of the susceptible population $S(t)$.
It turns out it does not return to $\bar S$ but shrinks somewhat. This is not compatible with the definition of stability!</p>
<p>So in the classical sense there exist no stable steady states. But above the herd immunity threshold, small perturbations are fairly inconsequential and only move us a little down the line of steady states. For practical concerns, I would call this stable.</p>
<h2 id="checking-with-numerics">Checking with numerics</h2>
<p>Now that we have a solution, we should check whether this first order works at all.
The model is easy to solve numerically <a href="https://gist.github.com/a-voel/3dfb7acd5e65606bd51a1c148650ae36">(Sourcecode)</a>.</p>
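A minimal sketch of such a check (with parameters of my own choosing, not necessarily those used for the plots): integrate the full model from a small initial infection and compare against the first-order formula $I(t) \approx \bar I e^{E t}$:

```python
from math import exp

# Full model, forward Euler; parameters chosen so E = beta*Sbar - gamma = -0.02.
beta, gamma = 1.0, 1.02
Sbar, I0 = 1.0, 1e-3
dt, T = 1e-3, 10.0

S, I = Sbar, I0
for _ in range(int(T / dt)):
    dI = (beta * S * I - gamma * I) * dt
    dS = -beta * S * I * dt
    I, S = I + dI, S + dS

# First-order perturbative prediction from above.
I_pert = I0 * exp((beta * Sbar - gamma) * T)
rel_err = abs(I - I_pert) / I_pert
print(f"numeric: {I:.4e}, first order: {I_pert:.4e}, deviation: {rel_err:.1%}")
```

For this small $\bar I$ the two agree to within a few percent; the numeric solution decays slightly faster because $S(t)$ drifts below $\bar S$.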
<p>Overall it seems to work well if the exponent $E = \beta \bar S - \gamma <0 $ is clearly negative and $\bar I$ is small. If $E$ gets near $0$ there
are significant deviations. The explanation is that with a larger $E$ or $\bar I$ the changes of $S(t)$ are bigger,
while the first order basically assumes $S(t) = \bar S$. This might improve in the higher orders.</p>
<h3 id="e-002-bar-i0001">$E=-0.02$ $\bar I=0.001$</h3>
<p><img src="/assets/images/stability-perturbation/peturbation.svg" width="550px" /></p>
<h3 id="e-002-bar-i0001-1">$E=-0.02$ $\bar I=0.001$</h3>
<p><img src="/assets/images/stability-perturbation/peturbation2.svg" width="550px" /></p>
<h2 id="conclusion">Conclusion</h2>
<p>In some cases, you can do perturbation theory by putting your perturbation constant $\varepsilon$ only into the initial conditions.
To my astonishment, it solves the equation, does not produce contradictions, and is, in this case, the most straightforward
method I found to show and analyze the stability.</p>
<p>The great thing with perturbation theory is you can put an $\varepsilon$ in all kinds of creative places and see what happens
and sometimes it even works out.</p>The SI model is one of the simplest models to describe the spread of infectious disease using differential equations. I figured out a funny way to look at its steady states. It reproduces the well-known and fairly obvious result.Writing scientific software by yourself2021-07-25T12:00:11+02:002021-07-25T12:00:11+02:00https://xn--andreasvlker-cjb.de/2021/07/25/writing-scientific-software<p>Today many scientists have problems that are solved by developing new programs. Sadly most of them are not software developers,
so the quality of the resulting programs is mostly down to luck and individual interest. They typically only take
basic programming courses taught by other scientists, and the professors often teach straight from books because they last wrote programs themselves 30 years ago during their own Ph.D.</p>
<p>I was in the situation of writing some software for theoretical physics, except that I had studied computer science before and had done some
work as a programmer on a larger project. In this article I will write about some ideas and techniques I found along the way.</p>
<p>I will only look at programs written by a single researcher working alone, as was done in my community. Projects with many researchers
and programs distributed to end-users are a much more complex problem. I will also only look at program development, not project management.</p>
<h2 id="why-care-about-this">Why care about this?</h2>
<p>Scientists might ask why they should care. They might have some Fortran files they copied from someone else and mess around in until it mostly works. They are here to do science, not software. They might even dismiss their programs as mere “codes”.
I will try to give some reasons why you
might want to care.</p>
<ul>
<li><a href="https://science.sciencemag.org/content/338/6113/1426">In this linked article</a>, Freeman Dyson discusses if tools are as important as ideas to science.
Even if you disagree, it makes clear that tools are important and programs are clearly tools for science.</li>
<li>The programs are a relevant result of the research. They often contain all the messy details and tricks left out in articles and I think they should even be published with the articles. Sadly not even computer science does that so it might take decades for natural sciences to get there.</li>
<li>You will be faster and more comfortable using and modifying your programs for your research.</li>
<li>You will be (a little) more confident in your programs if they are less messy.</li>
<li>These days there are many jobs involving software if you ever leave academia.</li>
<li>Making better programs is fun.</li>
</ul>
<h2 id="an-example-of-a-program">An example of a program</h2>
<p>To make the kind of programs discussed here more concrete, I will explain my own research. I look
at the time evolution of electrons in quantum wires described by differential equations. There
were multiple numerical schemes to solve these equations and many ways to combine them with
different models of electrodynamics. At first, I looked at some aspects only in 1D systems and
later extended them to 2D. These simulations take hours of calculation time and produce up to
a few gigabytes of raw data.</p>
<p>The next step was a program to analyze the raw data and figure out how to get the relevant
information. I needed to do much experimentation to figure out what I could get out of the data.
Here every run might take a few minutes and output less than a megabyte of data.</p>
<p>The last step is to take the results and make good-looking plots with clear labeling. This takes
a few seconds and does not contain any ideas relevant to physics.</p>
<h2 id="infrastructure">Infrastructure</h2>
<p>Let’s begin with something seemingly boring that everyone working in software knows anyway: the
infrastructure around your programming. Let’s look at the most important pieces.</p>
<ul>
<li>
<p><strong>Programming language</strong>: Look at the programming language you use. Spend some hours learning what it can do. There are usually good tutorials on its official website.
I use <a href="https://isocpp.org/">C++</a> for the computationally complex part and <a href="https://www.python.org/">Python</a> for the rest because I know them well.</p>
</li>
<li><strong>Text editor</strong>: You will spend much time programming in a text editor or integrated development environment. Spend an hour to test a few of them and stay with the one you like. Don’t blindly use what everyone else in your group uses; it might be worse than <a href="https://en.wikipedia.org/wiki/Turbo_Pascal">what was used in the 90s</a>. I like <a href="https://code.visualstudio.com/">Visual Studio Code</a> because it works on most operating systems and has many cool features through plugins.</li>
<li>
<p><strong>Version control systems</strong>: Every halfway reasonable software developer uses version control to keep their software in and to have access to all previous changes. You should do this too. I think you should use <a href="https://git-scm.com/">git</a>. It is the industry standard, it works, and it is easy to get help with it.
There are even multiple graphical interfaces for git, but I don’t know much about them.</p>
<p>Git allows you to have a remote server to sync with. This is especially useful if you work from different computers on the same program. You can check if your university provides such a server or use a (free) commercial service like <a href="https://github.com/">github</a>. It might even work as (hopefully another) backup.</p>
</li>
<li>
<p><strong>Build system</strong>: You probably need to compile your software. Copying the command to do that from the first line of your source file or your shell history is a bad idea.
You should use a dedicated system for that, as everyone else in software has done for 40 years. At least use <a href="https://en.wikipedia.org/wiki/Make_(software)">Make</a> or some shell script.</p>
<p>I use <a href="https://cmake.org/">CMake</a> for my C++ programs. It is very flexible, works fine, and even has integration into my editor.</p>
</li>
<li><strong>Compilers</strong>: If there are multiple good compilers for your programming language, make your code work with more than one of them.
This usually finds bugs in your programs and improves the chances that they conform to standards and will continue to work in a few years or on your organization’s cluster. I made sure my programs work with GCC and Clang on Linux and Visual C++ on Windows.</li>
</ul>
<p>I know all of this sounds like tedious work, but it is all useful. You will see it in the end!</p>
<h2 id="modules">Modules</h2>
<p>It is good practice to split the software into modules and reuse these. The idea is to have the components in a library with well-structured implementations and use these to
build multiple programs. This prevents copying of code and enforces good interfaces.</p>
<p><img src="/assets/images/writing-scientific-software/diagram2.svg" width="500px" /></p>
<p>In the diagram above you can see a library <code class="language-plaintext highlighter-rouge">simcore</code>, that is used by multiple executables in the row below in my project. This library contains</p>
<ul>
<li>Multiple implementations of physical systems with
different numerical implementation or dimensionality</li>
<li>Basic mathematical helper functions</li>
<li>Many predefined laser pulses</li>
<li>Helpers for file output</li>
<li>…</li>
</ul>
<p>The library has a few thousand lines of code, while each of the binaries is below 200 lines of code.
If I am doing something in multiple binaries, I search for an abstraction and put it into the library. Trivial examples of this are writing a matrix to disk, showing the progress of the program, or compressing output data.</p>
<p>I build <code class="language-plaintext highlighter-rouge">simcore</code> as a <a href="https://en.wikipedia.org/wiki/Static_library">static library</a> that is linked into all the binaries, so that no time is wasted compiling the same code multiple times. This is very easy with CMake, the build system I use, but there are many more possibilities with different
infrastructure.</p>
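As a sketch of what this looks like in CMake (all target and file names here are made up for illustration, not my actual project):

```cmake
# One static library with all the shared physics/numerics code...
add_library(simcore STATIC src/system_1d.cpp src/pulses.cpp src/output.cpp)
target_include_directories(simcore PUBLIC include)

# ...and many tiny executables that are linked against it.
foreach(prog IN ITEMS basic_1d extended_2d tests)
  add_executable(${prog} programs/${prog}.cpp)
  target_link_libraries(${prog} PRIVATE simcore)
endforeach()
```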
<p>The reason for having multiple binaries is to investigate your system in different environments. The first one is just the most basic setup possible; then you add more interesting components. Of course, you could incrementally improve a single program instead, but then you can’t go back to the basic program when you eventually don’t understand your complex setup and need to get back to basics.</p>
<p>The only hard part is that you must make sure you don’t break your old basic programs while improving the advanced ones. You should build all of them regularly.</p>
<h2 id="automatic-testing">Automatic testing</h2>
<p>These days it is usual to test software automatically.
The classical approach is to test little parts of the software with
so-called unit tests. These can be very helpful and Kent Beck has written an
<a href="https://www.oreilly.com/library/view/test-driven-development/0321146530/">excellent book</a>
about these.</p>
<p>In my experience, there is another component to testing scientific software.
The underlying models often have special cases with known (approximate) solutions,
so you can run your general program on these cases and compare it to the analytic results.</p>
<p>For these tests, perfect is the enemy of good. It is better to have an incomplete
test case than none at all just because a complete one would be hard. Some examples from my work are:</p>
<ul>
<li>I looked at wave packets moving with some speed. So I initialized a Gaussian wave packet, looked where its maximum value was after some time, and checked if it had the expected velocity. There was no fit to get the exact center of the wave or to ensure its shape, just a check that a rough approximation of the velocity is within
10% of the expected value.</li>
<li>There are complex formulas to predict the state of a two-level quantum system excited by different lasers. I just looked at the simplest perfectly tuned pulse with the simplest formula to see
if my numerics reproduced it.</li>
</ul>
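As a sketch of the first kind of test (not my actual code, and using a simple advection equation instead of the real quantum dynamics): move a Gaussian pulse with a crude upwind scheme and check that its peak travels at roughly the prescribed speed:

```python
# Regression-style physics test: advect a Gaussian pulse with a first-order
# upwind scheme and check that its peak moves at roughly the prescribed
# speed. The 10% tolerance mirrors the rough check described above.
from math import exp

v, dx, dt = 1.0, 1.0, 0.5          # CFL number v*dt/dx = 0.5
n, steps = 400, 200
c = v * dt / dx
u = [exp(-((i - 100) * dx / 10.0) ** 2) for i in range(n)]

x_start = max(range(n), key=lambda i: u[i]) * dx
for _ in range(steps):
    u = [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]  # upwind update
x_end = max(range(n), key=lambda i: u[i]) * dx

v_measured = (x_end - x_start) / (steps * dt)
print(f"measured velocity: {v_measured:.3f} (expected {v})")
```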
<p>Now, if you structured your program into modules as shown above, this is all very easy to do. Your tests can just be another binary beside the other programs that tests all the
complicated stuff in the library.</p>
<p>Just run a simplified simulation, check the results in your program, and print the
deviation if something is wrong and a green checkmark or smiley face if everything worked.</p>
<p>If at all possible, make sure your tests run in a few seconds. Otherwise, you might get frustrated waiting and stop running them regularly.</p>
<h2 id="configuration">Configuration</h2>
<p>Almost every scientific program needs many external constants to work. These might be physical constants, material parameters, setup constants, times, or numerical parameters.
Managing these in a well-structured manner can significantly help to keep an overview of everything at the time of development and even years later.
For me, a good configuration system should fulfill the following requirements</p>
<ul>
<li>Options should be named. Otherwise, you have to remember that the 5th line is the speed of light, and you will be confused eventually.</li>
<li>No information should be redundant or copied. If you just want to run another simulation, you should not have to reenter the speed of light.</li>
<li>Move some (simple) logic from your program into your configuration.</li>
</ul>
<p>One solution to this is to use a scripting language like <a href="https://www.python.org/">Python</a>, <a href="https://www.lua.org/">Lua</a> or <a href="https://en.wikipedia.org/wiki/Scheme_%28programming_language%29">Scheme</a> for configuration. In my experience, these are somewhat complicated to integrate with a program, but otherwise
I like this solution a lot and it is <a href="https://github.com/NanoComp/meep/">sometimes used</a>. It might be possible to make exposing your numerics to Python a little easier with something like <a href="https://numpy.org/doc/stable/f2py/">F2PY</a> or <a href="https://github.com/boostorg/python">Boost.Python</a>.</p>
<p>I prefer a simpler approach: defining my own simple configuration format. I base the format on <a href="https://en.wikipedia.org/wiki/INI_file">.ini</a> files, but variables can
contain formulas and it is possible to include base files. A configuration might look like this:</p>
<p><em>Base.conf</em></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>c = 3e8
# space
x = 300
Nx = 100
dx = x/Nx
#time
Nt = T/dt
dt = 0.98/c/sqrt(1/(dx^2)+1/(dy^2))
</code></pre></div></div>
<p><em>Sim.conf</em></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{Base.conf}
T = 30
x = 500
pulse_width = x/5
pulse_energy = exp(5*pi*acos(T/4))
</code></pre></div></div>
<p>In my implementation (<em><a href="https://gist.github.com/a-voel/913f246b2041c5685d564304314ec65b">Sourcecode on github</a></em>) I would use these files as</p>
<div class="language-c highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp">#define AFPCFG_IMPL
#include "avcfg.h"
</span>
<span class="k">typedef</span> <span class="k">struct</span> <span class="p">{</span>
<span class="kt">double</span> <span class="n">pulse_energy</span><span class="p">;</span>
<span class="kt">double</span> <span class="n">pulse_location</span><span class="p">;</span>
<span class="cm">/* ... */</span>
<span class="p">}</span> <span class="n">config</span><span class="p">;</span>
<span class="n">config</span> <span class="nf">load_config</span><span class="p">()</span>
<span class="p">{</span>
<span class="n">config</span> <span class="n">mycfg</span><span class="p">;</span>
<span class="n">avcfg_config</span> <span class="o">*</span><span class="n">config</span> <span class="o">=</span> <span class="n">avcfg_load</span><span class="p">(</span><span class="s">"Sim.conf"</span><span class="p">,</span> <span class="nb">NULL</span><span class="p">);</span>
<span class="n">mycfg</span><span class="p">.</span><span class="n">pulse_energy</span> <span class="o">=</span> <span class="n">avcfg_getf</span><span class="p">(</span><span class="n">config</span><span class="p">,</span> <span class="s">"pulse_energy"</span><span class="p">);</span>
<span class="cm">/* Give default value if the variable is not defined in the configfile */</span>
<span class="n">mycfg</span><span class="p">.</span><span class="n">pulse_location</span> <span class="o">=</span> <span class="n">avcfg_getf_op</span><span class="p">(</span><span class="n">config</span><span class="p">,</span> <span class="s">"pulse_location"</span><span class="p">,</span> <span class="mi">0</span><span class="p">.</span><span class="mi">0</span><span class="p">);</span>
<span class="cm">/* ... */</span>
<span class="n">avcfg_free</span><span class="p">(</span><span class="n">config</span><span class="p">);</span>
<span class="k">return</span> <span class="n">mycfg</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>
<p>A nice trick is that you can now write the processed config (with only numerical values and no formulas, in a single file) to disk together with the output of your simulation.
There you can check whether the values are as expected, and you can rerun exactly the same configuration. Because it is basically a .ini file, there is a parser
for almost any programming language you could want to use for analysis.</p>
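Reading such a processed file back for analysis is then almost a one-liner. A sketch in Python (the section header is my own addition, since the standard-library parser requires one):

```python
import configparser

# A processed config as the simulation might dump it: only plain values left.
processed = """\
c = 300000000.0
Nx = 100
dx = 5.0
"""

cfg = configparser.ConfigParser()
cfg.read_string("[sim]\n" + processed)  # prepend a section header
dx = cfg.getfloat("sim", "dx")
print(dx)
```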
<h2 id="data-storage">Data storage</h2>
<p>Simulations can create huge quantities of data. In most cases it is preferable to save as much
data as possible, so that any imaginable analysis can be done without rerunning the slow simulations.</p>
<p>It took me a long time to figure out a nice structure to store simulation output and
derived analysis. I think in the end it would be optimal to have a versioned database that can run analyses in the background and dynamically move data between fast local storage and large network shares. But that would be a large system with many moving parts that would take many months of design and development by skilled
programmers.</p>
<p>In the absence of that the best structure I found was something like:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/raw-data/ (Many gigabytes)
|- /Simulation run 1/
|- /Simulation run 2/
'- ...
/analysis-results/ (A few Megabytes)
|- /Simulation run 1/
|- /Simulation run 2/
'- ...
/plots/ (Some pictures)
|- /Simulation run 1/
|- /Simulation run 2/
'- ...
</code></pre></div></div>
<p>It might be counterintuitive to split the data of a single simulation run across several top-level folders, but it has some nice properties.</p>
<ul>
<li>The raw data can live on slow (high-latency) storage elsewhere, while it is trivial to copy just the analysis results onto your local computer to make plotting faster.</li>
<li>You can also take the analysis data onto your laptop for a conference or home office where you might not have fast (or any) internet.</li>
<li>If you want to write a paper you can simply copy the whole plots folder without putting too much irrelevant stuff next to your LaTeX documents. If you redo your simulation or analysis it is trivial to replace them without confusion.</li>
</ul>
<p>This exactly replicates my description of time scales from above and also the different iteration counts. You will
presumably do many more iterations on your analysis than on your simulation, and hopefully even more on your plots!</p>
<h2 id="numerics-and-algorithms">Numerics and algorithms</h2>
<p>This is the hard part you have to solve and I can’t help you. Sorry.</p>
<h2 id="neat-tricks">Neat tricks</h2>
<p>At last, I will share a few little ideas you can integrate into your program without too much hassle to make life a little easier.</p>
<h3 id="floating-point-precision">Floating point precision</h3>
<p>Almost any computer you will use has 32-bit (float, single) and 64-bit (double, full) floating-point numbers. In science most people
use 64 bit for more precision and fewer numerical problems. This is mostly a good idea, but in most cases these numbers are
(roughly) twice as slow. This can waste a lot of time while developing your program. Sadly most compilers don’t let you
switch with a simple flag, but in C and C++ you can build that yourself easily. For example, I use:</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#if 0 /* set to 1 for single precision */
typedef float F;
typedef std::complex<F> C;
typedef Eigen::ArrayXf VecF;
typedef Eigen::ArrayXcf VecC;
#else</span>
<span class="k">typedef</span> <span class="kt">double</span> <span class="n">F</span><span class="p">;</span>
<span class="k">typedef</span> <span class="n">std</span><span class="o">::</span><span class="n">complex</span><span class="o"><</span><span class="n">F</span><span class="o">></span> <span class="n">C</span><span class="p">;</span>
<span class="k">typedef</span> <span class="n">Eigen</span><span class="o">::</span><span class="n">ArrayXd</span> <span class="n">VecF</span><span class="p">;</span>
<span class="k">typedef</span> <span class="n">Eigen</span><span class="o">::</span><span class="n">ArrayXcd</span> <span class="n">VecC</span><span class="p">;</span>
<span class="cp">#endif
</span></code></pre></div></div>
<p>And you get very short type names for free!</p>
<p>Also, see if <a href="https://gcc.gnu.org/wiki/FloatingPointMath">-ffast-math</a> or something similar can
speed up your test runs in development.</p>
<h3 id="simulation-progress">Simulation progress</h3>
<p>Sometimes you wait eagerly in front of your running simulation for results. I built myself a little
progress reporter for the command line. It shows the percentage of progress and the extrapolated time
remaining, so you know whether there is enough idle time at the moment for making tea or even lunch.</p>
<p>My system needs to know the total number of simulation steps at the beginning of the program
and has to be notified after every step.
To not use too much computation time, I print the information only if 5 seconds have elapsed or 5% of total progress has been made.
There are many more ways to make this more awesome, but this was enough for me.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>config.ini 0.1% done in 0:06. Expected remaining time: 75:51
config.ini 0.3% done in 0:12. Expected remaining time: 76:34
config.ini 0.4% done in 0:18. Expected remaining time: 76:32
config.ini 0.5% done in 0:24. Expected remaining time: 76:04
</code></pre></div></div>
<p>My (not that advanced) implementation of this idea is on <a href="https://gist.github.com/a-voel/9b0347397db95bc829953079ad047e74">GitHub</a>.</p>
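The core logic fits in a few lines. A sketch of the idea (this is not the code from the gist above; the injectable clock parameter exists only to make the class testable):

```python
import time

class Progress:
    """Report at most every 5 seconds or every 5% of total progress."""
    def __init__(self, total, clock=time.monotonic):
        self.total, self.clock = total, clock
        self.start = clock()
        self.last_time, self.last_frac = self.start, 0.0

    def step(self, done):
        now, frac = self.clock(), done / self.total
        # Skip the report if neither threshold has been crossed yet.
        if now - self.last_time < 5 and frac - self.last_frac < 0.05:
            return None
        self.last_time, self.last_frac = now, frac
        elapsed = now - self.start
        remaining = elapsed * (1 - frac) / frac if frac > 0 else float("inf")
        msg = f"{frac:5.1%} done in {elapsed:.0f}s, ~{remaining:.0f}s remaining"
        print(msg)
        return msg

# Demo with a fake clock so the 5-second rule is deterministic.
fake_now = [0.0]
p = Progress(total=100, clock=lambda: fake_now[0])
fake_now[0] = 2.0
report = p.step(10)  # 10% crosses the 5% threshold, so this reports
```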
<h3 id="program-version-in-data">Program version in data</h3>
<p>At the beginning of this article, I told you about some boring infrastructure stuff and now it is time
to build something cool with it!</p>
<p>If you use version control, your code always has a revision ID (a kind of version number).
I found instructions online for embedding that ID into your code and the resulting executable
with my build system. Then I told my program to always put a file with this revision number next
to the output files.</p>
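The second half, writing the revision next to the output, is a few lines in any language. A Python sketch (the file name <code class="language-plaintext highlighter-rouge">VERSION.txt</code> and the fallback value are my own conventions, not a fixed API):

```python
import subprocess
import tempfile
from pathlib import Path

def current_revision():
    """Ask git for the current commit hash; 'unknown' if that fails."""
    try:
        out = subprocess.run(["git", "rev-parse", "HEAD"],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        return "unknown"

def write_version_file(output_dir, revision):
    """Drop the revision into a small file next to the simulation output."""
    path = Path(output_dir) / "VERSION.txt"
    path.write_text(revision + "\n")
    return path

# Demo with an explicit revision string instead of a real repository.
demo_path = write_version_file(tempfile.mkdtemp(), "deadbeef")
print(demo_path.read_text().strip())  # prints "deadbeef"
```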
<p>If you also saved your configuration file as described above, you now know exactly which program and which configuration you used to generate a simulation output. With this you can trivially, and with confidence, reproduce the output
if you mess it up, or investigate it if it turns out to be strange a few months later.</p>
<p>This might be useless in most cases, but it also might save you from a few sleepless panicked nights before a deadline.</p>Today many scientists have problems that are solved by developing new programs. Sadly most of them are not software developers, so the quality of the resulting programs is mostly up to luck and individual interest. They mostly only have to take basic courses into programming by other scientists. Often the professors mostly teach from books because they only wrote basic programs 30 years ago in their own Ph.D. time.