SI model stability with unusual perturbation theory

12 Apr 2022

The SI model is one of the simplest differential-equation models for the spread of an infectious disease. I figured out a funny way to look at its steady states. It reproduces the well-known and fairly obvious stability result.

The model

The model is defined by the equations

\[\begin{aligned} \dot I(t) &= \beta S(t) I(t) - \gamma I(t)\\ \dot S(t) &= -\beta S(t) I(t). \end{aligned}\]

where $S(t) \in [0,1]$ is the still-susceptible fraction of the population and $I(t) \in [0, 1]$ the currently infected fraction. $\beta > 0$ and $\gamma > 0$ are constant rates of infection and of resolution of infections.

The idea is that the rate of new infections is proportional to the chance of a susceptible individual meeting an infected individual. We assume the infected are only infectious for some time, so in the absence of new infections their number shrinks exponentially. This is a really basic model that ignores reinfections, interventions, inhomogeneous populations, spatial dynamics…

As with almost any nonlinear equation, there is no practical general solution, so let's at least look at the steady states of the system.

Steady states and their stability

The obvious steady states for this model are $S = \bar S$ arbitrary and $I = 0$: if there are no infections, nobody else can get infected. The slightly unusual thing in this case is that there are infinitely many steady states, densely filling a whole line.

Now we will try to see if these are stable, i.e. if a little perturbation shrinks back to zero or explodes. The usual tool for this is linear stability analysis. It looks at the eigenvalues of the Jacobian, so let's calculate it at the steady states:

\[J = \begin{bmatrix}\beta \bar S - \gamma & 0\\\\ -\beta \bar S & 0\end{bmatrix}\]

The last column of this matrix is $0$ and this implies that $0$ is one of its eigenvalues (the other is $\beta \bar S - \gamma$). With a zero eigenvalue the steady state is not hyperbolic, so, sadly, linear stability analysis can't make any conclusion about the stability and we need another tool: perturbation theory.
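As a quick sanity check (my own sketch, not part of the original post), sympy can compute this Jacobian and its eigenvalues symbolically:

```python
# Sketch: symbolic Jacobian of the SI model at the steady state (I, S) = (0, Sbar).
import sympy as sp

I, S, beta, gamma, Sbar = sp.symbols("I S beta gamma Sbar", positive=True)

F = sp.Matrix([beta * S * I - gamma * I,   # dI/dt
               -beta * S * I])             # dS/dt
J = F.jacobian([I, S]).subs({I: 0, S: Sbar})

print(J)               # Matrix([[beta*Sbar - gamma, 0], [-beta*Sbar, 0]])
print(J.eigenvals())   # {0: 1, beta*Sbar - gamma: 1}
```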

Perturbation theory

The idea of perturbation theory is to transform a hard problem into an infinite series of easy problems and to get an approximation of arbitrary precision by solving these. As is usual, we insert a small parameter $\varepsilon$ into our system and develop the solution as a power series in it. So let's do this:

\[\begin{aligned} I(t) &= \sum_{n=0}^{\infty} \varepsilon^n I_n(t)\\ S(t) &= \sum_{n=0}^{\infty} \varepsilon^n S_n(t) \end{aligned}\]

Now we can plug these into our equations:

\[\begin{aligned} \dot I_0 +\varepsilon \dot I_1+ \dots &= \beta (S_0+\varepsilon S_1+\dots)(I_0+\varepsilon I_1+\dots)-\gamma(I_0+\varepsilon I_1+\dots) \\ &=\beta S_0I_0-\gamma I_0+\varepsilon(\beta I_0 S_1+\beta I_1 S_0-\gamma I_1)+\dots\\ \dot S_0 +\varepsilon \dot S_1+ \dots &= -\beta (S_0+\varepsilon S_1+\dots)(I_0+\varepsilon I_1+\dots) \\ &= -\beta S_0I_0-\varepsilon(\beta I_0 S_1+\beta I_1 S_0)+\dots \end{aligned}\]

If you have some experience with perturbation theory you might object that we should have put an $\varepsilon$ into the equations themselves. Even worse: if we look at all terms without an $\varepsilon$, we find we still have the old equations from before, and we still don't know how to solve them.

Now here comes my unusual trick: we put the $\varepsilon$ into the initial conditions instead and set $S(0) = \bar S$ and $I(0) = \varepsilon \bar I$ with an arbitrary (and in the end boring) $\bar I$. Now let's solve our system order by order!
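If you want to see this bookkeeping done mechanically, here is a small sympy sketch (my own, not part of the original derivation) that substitutes the truncated series and collects terms by powers of $\varepsilon$:

```python
# Sketch: substitute I = I0 + eps*I1, S = S0 + eps*S1 into the SI equations
# and read off the equations at each order of eps.
import sympy as sp

t, eps, beta, gamma = sp.symbols("t epsilon beta gamma", positive=True)
I0, I1, S0, S1 = (sp.Function(n)(t) for n in ("I0", "I1", "S0", "S1"))

I = I0 + eps * I1          # truncated ansatz, O(eps^2) dropped
S = S0 + eps * S1

lhs_I, rhs_I = sp.diff(I, t), sp.expand(beta * S * I - gamma * I)
lhs_S, rhs_S = sp.diff(S, t), sp.expand(-beta * S * I)

for k in (0, 1):
    print(f"order eps^{k}:")
    print("  ", sp.Eq(lhs_I.coeff(eps, k), rhs_I.coeff(eps, k)))
    print("  ", sp.Eq(lhs_S.coeff(eps, k), rhs_S.coeff(eps, k)))
```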

$\varepsilon^0$ order

If we collect everything without an $\varepsilon$ we get:

\[\begin{aligned} \dot I_0 &= \beta S_0 I_0 - \gamma I_0 \\ \dot S_0 &= -\beta S_0 I_0 \\ S_0(0) &= \bar S \text{ and } I_0(0) = 0 \end{aligned}\]

Now if we look at the first equation, we see an $I_0$ in every term, so with the initial condition $I_0(0) = 0$ it turns out this equation is trivially solved by

\[I_0(t) = 0.\]

Plugging this into the second equation, the right side becomes zero, so the solution is a constant. With the initial condition we directly get

\[S_0(t) = \bar S.\]

By solving the 0th order we basically showed that the steady state is indeed steady.

$\varepsilon^1$ order

Now we come to the interesting terms. Collecting everything with $\varepsilon$ we get

\[\begin{aligned} \dot I_1 &= \beta S_0 I_1 + \beta S_1 I_0 -\gamma I_1= (\beta \bar S-\gamma) I_1 \\ \dot S_1 &= -\beta S_0 I_1 - \beta S_1 I_0 = -\beta \bar S I_1 \\ S_1(0) &= 0 \text{ and } I_1(0) = \bar I \end{aligned}\]

So the first equation is linear and homogeneous with constant coefficients; its solution is well known and easy to check. Plugging in the initial condition we get

\[I_1(t) = \bar I e^{(\beta\bar S-\gamma) t}\]

If we plug this into the second equation it becomes an integrable equation:

\[\dot S_1 = -\beta \bar S \bar I e^{(\beta\bar S-\gamma) t} \Rightarrow S_1 = -\beta \bar I \bar S \int dt\, e^{(\beta\bar S-\gamma) t}\]

and with the initial condition, we get

\[S_1(t) = -\frac{\beta \bar I \bar S}{\beta\bar S-\gamma}\left(e^{(\beta\bar S-\gamma) t}-1\right).\]
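Since this is easy to get wrong by a sign, here is a quick symbolic check (my own sketch) that these formulas really solve the first-order equations with the right initial values:

```python
# Sketch: verify I1 and S1 against the order-eps equations and initial conditions.
import sympy as sp

t, beta, gamma, Sbar, Ibar = sp.symbols("t beta gamma Sbar Ibar", positive=True)

E = beta * Sbar - gamma                       # the growth exponent
I1 = Ibar * sp.exp(E * t)
S1 = -beta * Ibar * Sbar / E * (sp.exp(E * t) - 1)

print(sp.simplify(sp.diff(I1, t) - E * I1))            # 0: solves I1' = E*I1
print(sp.simplify(sp.diff(S1, t) + beta * Sbar * I1))  # 0: solves S1' = -beta*Sbar*I1
print(I1.subs(t, 0), S1.subs(t, 0))                    # Ibar and 0: initial conditions
```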

Higher orders

With the same principle we get the equations for each order, and we could in principle solve them fairly easily (using the substitution $I_n(t) = \tilde I(t) e^{(\beta \bar S-\gamma)t}$). This gets increasingly messy and I have not figured out their general form. But with some thought, we can see that $I_N$ and $S_N$ have the form

\[\sum_{n=0}^N c_n e^{n(\beta \bar S-\gamma) t}\]

with constants $c_n$. This means the higher-order terms have similar, but faster, growth/decay behavior than the first order.
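For example, collecting the $\varepsilon^2$ terms (and using $I_0 = 0$, $S_0 = \bar S$) gives

\[\begin{aligned} \dot I_2 &= (\beta \bar S-\gamma) I_2 + \beta S_1 I_1 \\ \dot S_2 &= -\beta \bar S I_2 - \beta S_1 I_1 \\ S_2(0) &= 0 \text{ and } I_2(0) = 0, \end{aligned}\]

where the source term $\beta S_1 I_1$ consists of $e^{(\beta \bar S-\gamma)t}$ and $e^{2(\beta \bar S-\gamma)t}$ contributions; that is where the higher exponentials in the sum come from.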

Now a mathematician should check the radius of convergence which is not obvious at all. Luckily I come from a physics background and can ignore such “details”.

What that means for stability

So let’s collect what we figured out:

\[\begin{aligned} I(t) &= \varepsilon \bar I e^{(\beta\bar S-\gamma) t} +\mathcal O (\varepsilon^2)\\ S(t) &= \bar S -\varepsilon\frac{\beta \bar I \bar S}{\beta\bar S-\gamma}\left(e^{(\beta\bar S-\gamma) t}-1\right)+\mathcal O (\varepsilon^2) \end{aligned}\]

Looking at the exponential functions, the perturbations shrink exactly if $\beta \bar S-\gamma < 0 \Leftrightarrow \bar S < \frac{\gamma}{\beta}=:\frac{1}{R_0}$. If the sign is flipped, the number of infections explodes. The newly defined $R_0$ is these days fairly well known as the basic reproduction number. Our condition exactly reproduces the condition for herd immunity, which is nice.
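For example, with $\beta = 0.5$ and $\gamma = 0.2$ we get $R_0 = 2.5$, so perturbations of the infection-free states die out exactly for $\bar S < \frac{1}{R_0} = 0.4$.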

Going back to our equations, we can see that if this condition holds, infections fall back to zero, which looks stable. But we also need to look at the stability of the susceptible population $S(t)$. It turns out it does not return to $\bar S$ but settles somewhat below it. This is not compatible with the definition of asymptotic stability!
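Concretely, for $\beta \bar S - \gamma < 0$ the exponential dies out and, to first order, the susceptible population settles at

\[\lim_{t\to\infty} S(t) = \bar S - \varepsilon\frac{\beta \bar I \bar S}{\gamma - \beta\bar S} + \mathcal O(\varepsilon^2),\]

a little below the original steady state.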

So in the classical sense there are no asymptotically stable steady states. But above the herd immunity threshold, small perturbations are fairly inconsequential and only move us a little down the line of steady states. For practical purposes, I would call this stable.

Checking with numerics

Now that we have a solution, we should check whether this first order works at all. The model is easy to solve numerically (source code).

Overall it seems to work well if the exponent $E = \beta \bar S - \gamma < 0$ and $\bar I$ is small. If $E$ gets near $0$, there are significant deviations. The explanation is that with a large $E$ or $\bar I$ the changes in $S(t)$ are bigger, while the first order basically assumes $S(t) = \bar S$. This might improve at higher orders.

[Figures: numerical solution vs. first-order approximation for $E=-0.02$, $\bar I=0.001$]
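For reference, here is a minimal reconstruction of such a check (my own sketch using scipy, with assumed parameter values chosen so that $E=-0.02$; the post's linked source code is the original):

```python
# Sketch: integrate the SI model numerically and compare with the
# first-order perturbation formulas.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.5, 0.2               # assumed rates
Sbar, Ibar, eps = 0.36, 0.001, 1.0   # gives E = beta*Sbar - gamma = -0.02
E = beta * Sbar - gamma

def rhs(t, y):
    I, S = y
    return [beta * S * I - gamma * I, -beta * S * I]

t = np.linspace(0, 500, 1000)
sol = solve_ivp(rhs, (t[0], t[-1]), [eps * Ibar, Sbar],
                t_eval=t, rtol=1e-9, atol=1e-12)

# first-order perturbation approximation
I_approx = eps * Ibar * np.exp(E * t)
S_approx = Sbar - eps * beta * Ibar * Sbar / E * (np.exp(E * t) - 1)

print("max |I_num - I_1st|:", np.abs(sol.y[0] - I_approx).max())
print("max |S_num - S_1st|:", np.abs(sol.y[1] - S_approx).max())
```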

Conclusion

In some cases, you can do perturbation theory by putting your perturbation parameter $\varepsilon$ only into the initial conditions. To my astonishment, this solves the equations, does not produce contradictions, and was in this case the most straightforward method I found to show and analyze the stability.

The great thing about perturbation theory is that you can put an $\varepsilon$ in all kinds of creative places and see what happens, and sometimes it even works out.