## The adaptive-triggering policy.

On page 12 of a document put out by Imperial College London, which has been very widely read and commented on, and which has had a significant influence on UK policy concerning the coronavirus, there is a diagram that shows the possible impact of a strategy of alternating between measures that are serious enough to cause the number of cases to decline, and more relaxed measures that allow it to grow again. They call this *adaptive triggering*: when the number of cases needing intensive care reaches a certain level per week, the stronger measures are triggered, and when it declines to some other level (the numbers they give are 100 and 50, respectively), they are lifted.

If such a policy were ever to be enacted, a very important question would be how to optimize the choice of the two triggers. I’ve tried to work this out, subject to certain simplifying assumptions (and it’s important to stress right at the outset that these assumptions are questionable, and therefore that any conclusion I come to should be treated with great caution). This post is to show the calculation I did. It leads to slightly counterintuitive results, so part of my reason for posting it publicly is as a sanity check: I know that if I post it here, then any flaws in my reasoning will be quickly picked up. And the contrapositive of that statement is that if the reasoning survives the harsh scrutiny of a typical reader of this blog, then I can feel fairly confident about it. Of course, it may also be that I have failed to model some aspect of the situation that would make a material difference to the conclusions I draw. I would be very interested in criticisms of that kind too. (Indeed, I make some myself in the post.)

Before I get on to what the model is, I would like to make clear that I am not *advocating* this adaptive-triggering policy. Personally, what I would like to see is something more like what Tomas Pueyo calls The Hammer and the Dance: roughly speaking, you get the cases down to a trickle, and then you stop that trickle turning back into a flood by stamping down hard on local outbreaks using a lot of testing, contact tracing, isolation of potential infected people, etc. (This would need to be combined with other measures such as quarantine for people arriving from more affected countries etc.) But it still seems worth thinking about the adaptive-triggering policy, in case the hammer-and-dance policy doesn’t work (which could be for the simple reason that a government decides not to implement it).

## A very basic model.

Here was my first attempt at modelling the situation. I make the following assumptions. The numbers $S, T, \lambda, \mu, \theta, \delta$ are positive constants.

- Relaxation is triggered when the rate of infection is $S$.
- Lockdown (or similar) is triggered when the rate of infection is $T$ (with $T > S$).
- The rate of infection is of the form $Ae^{\lambda t}$ during a relaxation phase.
- The rate of infection is of the form $Be^{-\mu t}$ during a lockdown phase.
- The rate of “damage” due to infection is $\theta$ times the infection rate.
- The rate of damage due to lockdown measures is $\delta$ while those measures are in force.

For the moment I am not concerned with how realistic these assumptions are, but just with what their consequences are. What I would like to do is minimize the average damage by choosing $S$ and $T$ appropriately.

I may as well give away one of the punchlines straight away, since no calculation is needed to explain it. The time it takes for the infection rate to increase from $S$ to $T$ or to decrease from $T$ to $S$ depends only on the ratio $T/S$. Therefore, if we divide both $S$ and $T$ by 2, we halve the damage due to the infection and have no effect on the damage due to the lockdown measures. Thus, for any fixed ratio $T/S$, it is best to make both $S$ and $T$ as small as possible.

This has the counterintuitive consequence that during one of the cycles one would be imposing lockdown measures that were doing far more damage than the damage done by the virus itself. However, I think something like that may actually be correct: unless the triggers are so low that the assumptions of the model completely break down (for example because local containment is, at least for a while, a realistic policy, so national lockdown is pointlessly damaging), there is nothing to be lost, and lives to be gained, by keeping them in the same proportion but decreasing them.
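
The scaling argument can be checked numerically. Here is a minimal sketch in which all the parameter values are illustrative assumptions, not estimates: halving both triggers halves the average damage from infections and leaves the average lockdown damage (and the cycle length) unchanged.

```python
import math

def cycle_stats(S, T, lam, mu, theta, delta):
    """Average damage rates over one relax/lockdown cycle of the basic model.

    Relaxation: rate grows as S*exp(lam*t) until it reaches T.
    Lockdown:   rate decays as T*exp(-mu*t) until it reaches S.
    """
    t_up = math.log(T / S) / lam                 # length of relaxation phase
    t_down = math.log(T / S) / mu                # length of lockdown phase
    infections = (T - S) / lam + (T - S) / mu    # integral of the rate over the cycle
    infection_damage = theta * infections / (t_up + t_down)
    lockdown_damage = delta * t_down / (t_up + t_down)
    return infection_damage, lockdown_damage

inf1, lock1 = cycle_stats(S=50, T=100, lam=0.2, mu=0.1, theta=1.0, delta=30.0)
inf2, lock2 = cycle_stats(S=25, T=50, lam=0.2, mu=0.1, theta=1.0, delta=30.0)
print(inf1, lock1)
print(inf2, lock2)   # first number halves, second is unchanged
```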

Now let me do the calculation, so that we can think about how to optimize the ratio $T/S$ for a fixed $S$.

The time taken for the infection rate to increase from $S$ to $T$ is $\lambda^{-1}\log(T/S)$, and during that time the number of infections is

$\int_0^{\lambda^{-1}\log(T/S)} S e^{\lambda t}\,dt = (T-S)/\lambda$.

By symmetry the number of infections during the lockdown phase is $(T-S)/\mu$ (just run time backwards). So during a cycle of length $(\lambda^{-1}+\mu^{-1})\log(T/S)$ the damage done by infections is $\theta(T-S)(\lambda^{-1}+\mu^{-1})$, making the average damage $\theta(T-S)/\log(T/S)$. Meanwhile, the average damage done by lockdown measures over the whole cycle is $\delta\mu^{-1}/(\lambda^{-1}+\mu^{-1}) = \delta\lambda/(\lambda+\mu)$.

Note that the lockdown damage doesn’t depend on $S$ and $T$: it just depends on the proportion of time spent in lockdown, which depends only on the ratio of $\lambda$ to $\mu$. So from the point of view of optimizing $S$ and $T$, we can simply forget about the damage caused by the lockdown measures.

Returning, therefore, to the term $\theta(T-S)/\log(T/S)$, let us say that $T=(1+\alpha)S$. Then the term simplifies to $\theta\alpha S/\log(1+\alpha)$. This increases with $\alpha$, which leads to a second counterintuitive conclusion, which is that for fixed $S$, $\alpha$ should be as close as possible to 0. So if, for example, $\lambda = 2\mu$, which tells us that the lockdown phases have to be twice as long as the relaxation phases, then it would be better to have cycles of two days of lockdown and one of relaxation than cycles of six weeks of lockdown and three weeks of relaxation.
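
As a quick sanity check on the claim that $\theta\alpha S/\log(1+\alpha)$ increases with $\alpha$, one can tabulate it for a few values. The values of $\theta$ and $S$ below are arbitrary illustrative choices.

```python
import math

theta, S = 1.0, 50.0   # arbitrary illustrative values

def avg_infection_damage(alpha):
    # theta*(T - S)/log(T/S) with T = (1 + alpha)*S
    return theta * alpha * S / math.log(1 + alpha)

vals = [avg_infection_damage(a) for a in (0.01, 0.1, 0.5, 1.0, 2.0)]
print(vals)   # strictly increasing; tends to theta*S as alpha -> 0
```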

Can this be correct? It seems as though with very short cycles the lockdowns wouldn’t work, because for one day in three people would be out there infecting others. I haven’t yet got my head round this, but I think what has gone wrong is that the model of exponential growth followed instantly by exponential decay is too great a simplification of what actually happens. Indeed, data seem to suggest a curve that rounds off at the top rather than switching suddenly from one exponential to another — see for example Chart 9 from the Tomas Pueyo article linked to above. But I think it is correct to conclude that the length of a cycle should be at most of a similar order of magnitude to the “turnaround time” from exponential growth to exponential decay. That is, one should make the cycles as short as possible provided that they are on a timescale that is long enough for the assumption of exponential growth followed by exponential decay to be reasonably accurate.

## What if we allow a cycle with more than two kinds of phases?

So far I have treated $\lambda$, $\mu$ and $\delta$ as parameters that we have no control over at all. But in practice that is not the case. At any one time there is a suite of measures one can take — encouraging frequent handwashing, banning large gatherings, closing schools, encouraging working from home wherever possible, closing pubs, restaurants, theatres and cinemas, enforcing full lockdown — that have different effects on the rate of growth or decline in infection and cause different levels of damage.

It seems worth taking this into account too, especially as there has been a common pattern of introducing more and more measures as the number of cases goes up. That feels like a sensible response — intuitively one would think that the cure should be kept proportionate — but is it?

Let’s suppose we have a collection of possible sets of measures $M_1,\dots,M_k$. For ease of writing I shall call them measures rather than sets of measures, but in practice each $M_i$ is not just a single measure but a combination of measures such as the ones listed above. Associated with each measure $M_i$ is a growth rate $\lambda_i$ (which is positive if the measures are not strong enough to stop the disease growing and negative if they are strong enough to cause it to decay) and a damage rate $\delta_i$. Suppose we apply $M_i$ for time $t_i$. Then during that time the rate of infection will multiply by $e^{\lambda_i t_i}$. So if we do this for each measure, then we will get back to the starting infection rate provided that $\sum_i \lambda_i t_i = 0$. (This is possible because some of the $\lambda_i$ are negative and some are positive.)

There isn’t a particularly nice expression for the damage resulting from the disease during one of these cycles, but that does not mean that there is nothing to say. Suppose that the starting rate of infection is $I_0$ and that the rate after the first $i$ stages of the cycle is $I_i$. Then $I_i = I_0 e^{\lambda_1 t_1 + \dots + \lambda_i t_i}$. Also, by the calculation above, the damage done during the $i$th stage is $\theta(I_i - I_{i-1})/\lambda_i$.
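
A short simulation, with hypothetical growth rates and times, confirms the two facts just stated: the rate returns to $I_0$ precisely when $\sum_i \lambda_i t_i = 0$, and the per-stage damages $\theta(I_i - I_{i-1})/\lambda_i$ accumulate along the cycle.

```python
import math

# Hypothetical growth rates and times for one cycle, chosen so sum(lam_i * t_i) = 0.
lam = [0.2, 0.1, -0.15]
t = [3.0, 3.0, 6.0]
assert abs(sum(l * s for l, s in zip(lam, t))) < 1e-12

theta, I0 = 1.0, 100.0
I = [I0]                # I[i] = infection rate after the first i stages
damage = 0.0
for l, s in zip(lam, t):
    I.append(I[-1] * math.exp(l * s))       # rate multiplies by exp(lam_i * t_i)
    damage += theta * (I[-1] - I[-2]) / l   # damage done during the i-th stage
print(I[-1], damage)   # rate is back at I0; damage is the cycle's disease damage
```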

### In what order should the $M_i$ be applied?

This has an immediate consequence for the order in which the $M_i$ should be applied. Let me consider just the first two stages. The total damage caused by the disease during these two stages is

$\theta I_0\Bigl(\frac{e^{\lambda_1 t_1}-1}{\lambda_1} + e^{\lambda_1 t_1}\,\frac{e^{\lambda_2 t_2}-1}{\lambda_2}\Bigr)$.

To make that easier to read, let’s forget the factor $\theta I_0$ (which we’re holding constant) and concentrate on the expression

$\frac{e^{\lambda_1 t_1}-1}{\lambda_1} + e^{\lambda_1 t_1}\,\frac{e^{\lambda_2 t_2}-1}{\lambda_2}$.

If we reorder stages 1 and 2, we can replace this damage by

$\frac{e^{\lambda_2 t_2}-1}{\lambda_2} + e^{\lambda_2 t_2}\,\frac{e^{\lambda_1 t_1}-1}{\lambda_1}$.

This is an improvement if the second number is smaller than the first. But the first minus the second is equal to

$(e^{\lambda_1 t_1}-1)(e^{\lambda_2 t_2}-1)(\lambda_2^{-1}-\lambda_1^{-1})$,

so the reordering is a good idea if $\lambda_1 > \lambda_2$. This tells us that we should start with smaller $\lambda_i$ and work up to bigger ones. Of course, since we are applying the measures in a cycle, we cannot ensure that the $\lambda_i$ form an increasing sequence, but we can say, for example, that if we first apply the measures that allow the disease to spread, and then the ones that get it to decay, then during the relaxation phase we should work from the least relaxed measures to the most relaxed ones (so the growth rate will keep increasing), and during the suppression phase we should start with the strictest measures and work down to the most relaxed ones.
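
The ordering rule can be tested numerically. The sketch below (hypothetical rates and times, chosen so that $\sum_i \lambda_i t_i = 0$) compares the disease damage when the same four stages are applied in increasing versus decreasing order of $\lambda_i$.

```python
import math

def disease_damage(stages, theta=1.0, I0=100.0):
    """Disease damage over one cycle; stages = [(lam_i, t_i), ...]."""
    I, total = I0, 0.0
    for lam, t in stages:
        I_next = I * math.exp(lam * t)
        total += theta * (I_next - I) / lam   # damage during this stage
        I = I_next
    return total

# Same four hypothetical measures (sum of lam*t is 0), in two different orders.
stages = [(0.05, 4.0), (0.2, 2.0), (-0.3, 1.0), (-0.1, 3.0)]
increasing = sorted(stages, key=lambda s: s[0])                  # smallest lam first
decreasing = sorted(stages, key=lambda s: s[0], reverse=True)    # largest lam first
print(disease_damage(increasing) < disease_damage(decreasing))   # True
```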

It might seem strange that during the relaxation phase the measures should get gradually more relaxed as the spread worsens. In fact, I think it *is* strange, but I think what that strangeness is telling us is that using several different measures during the relaxation phase is not a sensible thing to do.

### Which sets of measures should be chosen?

The optimization problem I get if I try to balance the damage from the disease with the damage caused by the various control measures is fairly horrible, so I am going to simplify it a lot in the following way. The basic principle that there is nothing to be lost by dividing everything by 2 still applies when there are lots of measures, so I shall assume that a sensible government has taken that point on board to the point where the direct damage from the disease is insignificant compared with the damage caused by the measures. (Just to be clear, I certainly don’t mean that lives lost are insignificant, but I mean that the number of lives lost to the disease is significantly smaller than the number lost as an indirect result of the measures taken to control its spread.) Given this assumption, I am free to concentrate just on the damage due to the measures $M_i$, so this is what I will try to minimize.

The total damage across a full cycle is $\sum_i \delta_i t_i$, so the average damage, which is what matters here, is

$\sum_i \delta_i t_i \big/ \sum_i t_i$.

We don’t have complete freedom to choose the $t_i$, or else we’d obviously just choose the measure with the smallest $\delta_i$ and go with that. The constraint is that the infection rate has to end up where it began: this is the constraint $\sum_i \lambda_i t_i = 0$, which we saw earlier.

Suppose we can find $(\epsilon_1,\dots,\epsilon_k)$ such that $\sum_i \epsilon_i = 0$ and $\sum_i \lambda_i \epsilon_i = 0$, but $\sum_i \delta_i \epsilon_i \ne 0$. Then in particular we can find such $\epsilon_i$ with $\sum_i \delta_i \epsilon_i < 0$ (replacing each $\epsilon_i$ by $-\epsilon_i$ if necessary). If all the $t_i$ are strictly positive, then we can also scale the $\epsilon_i$ in such a way that all the $t_i + \epsilon_i$ are still strictly positive. So if we replace each $t_i$ by $t_i + \epsilon_i$, then the numerator of the fraction decreases, the denominator stays the same, and the constraint is still satisfied. It follows that we had not optimized.

Therefore, if the choice of $(t_1,\dots,t_k)$ is optimal and all the $t_i$ are non-zero (and therefore strictly positive — we can’t run some measures for a negative amount of time) it is not possible to find $\epsilon_i$ such that $\sum_i \epsilon_i = 0$ and $\sum_i \lambda_i \epsilon_i = 0$, but $\sum_i \delta_i \epsilon_i \ne 0$. This is equivalent to the statement that the vector $(\delta_1,\dots,\delta_k)$ is a linear combination of the vectors $(1,\dots,1)$ and $(\lambda_1,\dots,\lambda_k)$. In other words, we can find constants $a$ and $b$ such that $\delta_i = a - b\lambda_i$ for each $i$. I wrote it like that because the smaller $\lambda_i$ is, the larger the damage one expects the measure $M_i$ to cause. Thus, the points $(\lambda_i,\delta_i)$ form a descending sequence. (We can assume this, since if one measure causes both more damage and a higher growth rate than another, then there can be no reason to choose it.) Thus, $b$ will be positive, and since at least some $\lambda_i$ are positive, and no measure will cause a *negative* amount of damage, $a$ is positive as well.

The converse of this statement is true as well. If $\delta_i = a - b\lambda_i$ for every $i$, then $\sum_i \delta_i t_i = a\sum_i t_i - b\sum_i \lambda_i t_i = a\sum_i t_i$, from which it follows that the average damage across the cycle is $a$, regardless of which measures are taken for which lengths of time.
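
This identity is easy to verify numerically. In the sketch below the growth rates $\lambda_i$ and the constants $a, b$ are arbitrary illustrative choices; whatever valid times are picked, the average damage comes out as $a$.

```python
import random

a, b = 10.0, 20.0
lams = [0.2, 0.1, 0.05, -0.3]
deltas = [a - b * l for l in lams]   # damage rates on the line delta = a - b*lam

avgs = []
for _ in range(5):
    # Random positive times; the last one is chosen so that sum(lam_i * t_i) = 0.
    t = [random.uniform(1.0, 5.0) for _ in lams[:-1]]
    t.append(sum(l * s for l, s in zip(lams[:-1], t)) / -lams[-1])
    avgs.append(sum(d * s for d, s in zip(deltas, t)) / sum(t))

print(avgs)   # every entry equals a (up to rounding), whatever the times were
```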

This already shows that there is nothing to be gained from having more than one measure for the relaxation phase and one for the lockdown phase. There remains the question of how to choose the best pair of measures.

To answer it, we can plot the points $(\lambda_i, \delta_i)$. The relaxation points (those with $\lambda_i > 0$) will appear to the right of the y-axis and the suppression points (those with $\lambda_i < 0$) to the left. If we choose one point from each side, then they lie on some line $\delta = a - b\lambda$, of which $a$ is the y-intercept. Since $a$ is the average damage, which we are trying to minimize, we see that our aim is to find a line segment joining a point on the left-hand side to a point on the right-hand side, and we want it to cross the y-axis as low as possible.

It is not hard to check that the intercept of the line joining $(-\mu,\delta)$ to $(\lambda,\epsilon)$ is at $(\delta\lambda + \epsilon\mu)/(\lambda+\mu)$. So if we rename the points to the left of the y-axis $(-\mu_i,\delta_i)$ and the points to the right $(\lambda_j,\epsilon_j)$, then we want to minimize $(\delta_i\lambda_j + \epsilon_j\mu_i)/(\lambda_j+\mu_i)$ over all pairs $(i,j)$.
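
The resulting optimization is a simple search over pairs. Here is a minimal sketch with wholly invented damage and growth rates; note that the intercept is exactly the long-run average measure damage of the corresponding two-phase cycle, since the suppression phase occupies a fraction $\lambda/(\lambda+\mu)$ of the cycle.

```python
# Wholly invented rates for illustration.
suppression = [(0.1, 8.0), (0.2, 20.0), (0.3, 40.0)]  # (mu_i, delta_i): decay rate, damage rate
relaxation = [(0.05, 2.0), (0.15, 0.5)]               # (lam_j, eps_j): growth rate, damage rate

def intercept(mu, delta, lam, eps):
    # y-intercept of the line joining (-mu, delta) to (lam, eps), which equals
    # delta * lam/(lam + mu) + eps * mu/(lam + mu), the time-weighted average damage.
    return (delta * lam + eps * mu) / (lam + mu)

best = min(intercept(mu, d, lam, e) for mu, d in suppression for lam, e in relaxation)
print(best)   # the lowest achievable average damage over all pairs
```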

### Can we describe the best choice in a less formal way?

It isn’t completely easy to convert this criterion into a rule of thumb for how best to choose two measures, one for the relaxation phase and one for the suppression phase, but we can draw a couple of conclusions from it.

For example, suppose that for the suppression measures there is a choice between two measures, one of which works twice as quickly as the other but causes twice as much damage per unit time. Then the corresponding two points lie on a line with negative gradient that goes through the origin, and therefore lies below all points in the positive quadrant. From this it follows that the slower but less damaging measure is better. Another way of seeing that is that with the more severe measure the total damage during the lockdown phase stays the same, as does the total damage during the relaxation phase, but the length of the cycle is decreased, so the *average* damage is increased.

Note that I am not saying that one should always go for less severe measures — I made the strong assumption there that the two points lay on a line through the origin. If we can choose a measure that causes damage at double the rate but acts three times as quickly as another measure, then it may turn out to be better than the less damaging but slower measure.

However, it seems plausible that the set of points will exhibit a certain amount of convexity. That is because if you want to reduce the growth rate of infections, then at first there will be some low-hanging fruit — for example, it is not costly at all to run a public-information campaign to persuade people to wash their hands more frequently, and that can make quite a big difference — but the more you continue, the more difficult making a significant difference becomes, and you have to wheel out much more damaging measures such as school closures.

*If* the points were to lie on a convex curve (and I’m definitely not claiming this, but just saying that something like it could perhaps be true), then the best pair of points would be the ones that are nearest to the y-axis on either side. This would say that the best strategy is to alternate between a set of measures that allows the disease to grow rather slowly and a set of measures that causes it to decay slowly again.

This last conclusion points up another defect in the model, which is the assumption that a given set of measures causes damage at a constant rate. For some measures, this is not very realistic: for example, even in normal times schools alternate between periods of being closed and periods of being open (though not necessarily to a coronavirus-dictated timetable of course), so one might expect the damage from schools being 100% closed to be more than twice the damage from schools being closed half the time. More generally, it might well be better to rotate between two or three measures that all cause roughly the same rate of damage, but in different ways, so as to spread out the damage and try to avoid reaching the point where the rate of one kind of damage goes up.

## Summary of conclusions.

Again I want to stress that these conclusions are all quite tentative, and should certainly not be taken as a guide to policy without more thought and more sophisticated modelling. However, they do at least *suggest* that certain policies ought not to be ruled out without a good reason.

If adaptive triggering is going to be applied, then the following are the policies that the above analysis suggests. First, here is a quick reminder that I use the word “measure” as shorthand for “set of measures”. So for example “Encourage social distancing and close all schools, pubs, restaurants, theatres, and cinemas” would be a possible measure.

- There is nothing to lose and plenty to gain by making the triggers (that is, the infection rates that cause one to switch from relaxation to suppression and back again) low. This has the consequence that the triggers should be set in such a way that the damage from the measures is significantly higher than the damage caused by the disease. This sounds paradoxical, but the alternative is to make the disease worse without making the cure any less bad, and there is no point in doing that.
- Within reason, the cycles should be kept short.
- There is no point in having more than one measure for the relaxation phase and one for the suppression phase.
- If you must have more than one measure for each phase, then during the relaxation phase the measures should get more relaxed each time they change, and during the suppression phase they should get less strict each time they change.
- Given enough information about their consequences, the optimal measures can be determined quite easily, but doing the calculation in practice, especially in the presence of significant uncertainties, could be quite delicate.

Point number 1 above seems to me to be quite a strong argument in favour of the hammer-and-dance approach. That is because the conclusion, which looks to me quite robust to changes in the model, is that the triggers should be set very low. But if they are set very low, then it is highly unlikely that the enormous damage caused by school closures, lockdowns etc. is the best approach for dealing with the cases that arise, since widespread testing and quarantining of people who test positive, contacts of those people, people who arrive from certain other countries, and so on, will probably be far less damaging, even if they are costly to do well. So I regard point number 1 as a sort of reductio ad absurdum of the adaptive-triggering approach.

Point number 2 seems quite robust as well, but I think the model breaks down on small timescales (for reasons I haven’t properly understood), so one shouldn’t conclude from it that the cycles should be short on a timescale of days. That is what is meant by “within reason”. But they should be as short as possible provided that they are long enough for the dominant behaviour of the infection rate to be exponential growth and decay. (That does not imply that they should not be shorter than this — just that one cannot reach that conclusion without a more sophisticated model. But it seems highly likely that there is a minimum “reasonable” length for a cycle: this is something I’d be very interested to understand better.)

Point number 3 was a clear consequence of the simple model (though it depended on taking 1 seriously enough that the damage from the disease could be ignored), but may well not be a sensible conclusion in reality, since the assumption that each measure causes damage at a rate that does not change over time is highly questionable, and dropping that assumption could make quite a big difference. Nevertheless, it is interesting to see what the consequences of that assumption are.

Point number 4 seems to be another fairly robust conclusion. However, in the light of 3 one might hope that it would not need to be applied, except perhaps as part of a policy of “rotating” between various measures to spread the damage about more evenly.

It seems at least possible that the optimal adaptive-triggering policy, if one had a number of choices of measures, would be to choose one set that causes the infections to grow slowly and another that causes them to shrink slowly — in other words to fine tune the measures so as to keep the infection rate roughly constant (and small). Such fine tuning would be very dangerous to attempt now, given how much uncertainty we are facing, but could become more realistic after a few cycles, when we would start to have more information about the effects of various measures.

One final point is that throughout this discussion I have been assuming that the triggers would be based on the current rate of infection. In practice of course, this is hard to measure, which is presumably why the Imperial College paper used demand for intensive care beds instead. However, with enough data about the effects of various measures on the rate of spread of the virus, one would be less reliant on direct measurements, and could instead make inferences about the likely rate of infection given data collected over the previous few weeks. This seems better than using demand for ICU beds as a trigger, since that demand reflects the infection rate from some time earlier.

March 28, 2020 at 6:26 pm |

I wonder how growing immunity plays into this. The answer may be obvious to some, but it isn’t to me. Two thoughts:

If we eventually get immunity in a substantial proportion of the population (that is, if you do not succeed in keeping the infection rate small all the way through), then the rates of infection in all phases should drop because before Fred is immune, the virus could pass from Joe through Fred to Brenda, but after Fred has become immune, that path will be closed.

On the other hand that fall in rate of infection may not matter, so that all your conclusions still hold, if we are only trying to decide what to do in the forthcoming cycle (rather than trying to decide at the beginning what the plan will be for all cycles). It will just be that the decision for the forthcoming cycle might be a bit different from the decisions for preceding cycles.

March 28, 2020 at 6:27 pm |

I have been following the information on covid since January, and the thing that has surprised me most is how little reliable data we have during the pandemic. So the most problematic assumption is that we can determine all these values well enough.

March 28, 2020 at 6:31 pm |

I guess what changes the dynamics is that the growth should follow a logistic model, which takes into account that the rate of new infections is also proportional to “population size minus number of infected people”. I would guess that this means that after finitely many cycles one reaches a final state in which (after infinite time) everybody eventually becomes infected.

March 28, 2020 at 6:49 pm

That sort of consideration seems to have been taken into account in the Oxford paper (still provisional and over-hyped by journalists) available here:

https://www.medrxiv.org/content/10.1101/2020.03.24.20042291v1

March 28, 2020 at 6:51 pm

I didn’t state it explicitly, but I was assuming throughout that the total number of infections isn’t high enough to have a substantial effect on the transmission rate. The alternative would require massive overload of the NHS, so I don’t think it should be contemplated.

March 28, 2020 at 7:11 pm

One should distinguish between “currently infected” and “recovered” (and “dead”). The NHS only has to deal with currently infected people, and after 14 days those people have either recovered or not. After that, recovered people are immune (at least that is another assumption). I understood that the aim is to slow down the infection in order for the health system to be able to deal with the “currently infected” people. In order to keep this number low enough, suitable cycles are put into place. Here another factor is that it takes some time until measures are in place and show any effect. But if that weren’t the case it might indeed be best to counterbalance increasing and decreasing effects as often as possible in order to stay as close as possible to the actual capacity of the health system.

March 28, 2020 at 9:41 pm

I see no reason whatever to try to stay close to the capacity of the health system. That capacity is low enough that if we stayed below it, it would take years to reach anything like herd immunity, by which time we would probably have a vaccine anyway (and also it isn’t clear that immunity would last for years). Therefore, achieving herd immunity quickly means a huge number of deaths, so that is a very bad policy. And as I argue in the post, if one is going to have cycles to keep the number of currently infected people at roughly some level $N$, then there is nothing to lose from using precisely the same cycles with a lower starting point, so that the number of currently infected people is kept at a lower level. What is there to gain from staying close to the capacity of the health system rather than staying close to, say, 5% of the capacity of the health system?

March 28, 2020 at 7:21 pm

From what I understand the final state (of the mathematical model that I have in mind) will be that basically everyone who is alive has recovered from a virus infection and is immune. If immunity can be lost (as is the case for various other virus infections) there will be additional groups of “currently infected” and “not immune” people.

One could try to keep the total number of infections lower using the cycles as you describe, but then it would never end.

April 1, 2020 at 10:14 pm

Tim said: “The alternative would require massive overload of the NHS, so I don’t think it should be contemplated.” Let me add that waiting for a large percentage of the population to be infected should not be contemplated also because of the large number of casualties, even without taking the NHS capacity into account. As both China and South Korea realized, the only viable option is to suppress the disease well before herd immunity of any kind.

April 1, 2020 at 10:27 pm

@Gil Kalai: “should not be contemplated” is rather wishful thinking, given that suppressing the virus would require either vaccination (>1 year away, probably much more before it can scale up to whole countries) or a well-functioning contact tracing system (won’t happen any time soon in Europe due to the abysmal slowness and incompetency of government IT; probably won’t happen in the US as long as FDA and CDC are running the show). Not everything that works in Singapore or Israel will work in the West; progress is not a linear order. Of course there is always the possibility of science doing miracles, but before they happen, successful suppression is just science fiction. And if waiting for this science fiction to happen results in a year of mass joblessness, closed inter-European borders and all-around chaos, it’s far from clear it’s a good tradeoff.

April 1, 2020 at 11:00 pm

Hi Darij, that’s an interesting perspective.

Indeed testing and tracing are key points. Regarding testing, I think that the success of South Korea relied largely on very many tests, and here progress can be rapid. As for tracing, there could also be various voluntary tracing systems. See proposal 6 in David Ellis’s post. https://davidellis2.wordpress.com/2020/03/23/nine-proposals-for-combatting-the-coronavirus-pandemic-in-the-uk/

It is also quite possible that wearing masks can be effective.

So I am not sure that suppressing the pandemic is science fiction.

The tradeoff is also an interesting issue. Chaos, massive unemployment and devastating economic consequences are expected both ways. And I am not sure why closing the borders for one year (if it is necessary at all) is a big deal.

If the number of casualties among risk groups is as reported then not taking the harsh steps needed to suppress the pandemic is also morally wrong.

March 28, 2020 at 7:58 pm |

“Returning, therefore, to the term $\theta(T-S)/(\log T-\log S)$, let us say that $T=(1+\alpha)S$. Then the term simplifies to $\theta\alpha S/\log(1+\alpha)$. This increases with $\alpha$, which leads to a second counterintuitive conclusion, which is that for fixed $S$, $\alpha$ should be as close as possible to 1.”

Shouldn’t this be “$\alpha$ must be as close as possible to 0”?

Many thanks — corrected now.

March 28, 2020 at 8:22 pm |

I think it’s worth considering the full consequences of the result that the lockdown-relaxation cycles should be as short as possible. In the limiting case, they happen so quickly that the infection rate is constant in time, i.e. R0 = 1. In more realistic terms, it means that the containment measures are set to some constant level that is milder than full lockdown but harsher than full relaxation.

And then remember the purpose of the exercise to begin with: achieving herd immunity without overwhelming hospitals. I think this is a foolish goal, but suppose it were attempted anyway. This means that there is an end state: 60% (or whatever is the needed fraction) of the population infected with the disease.

We want to get to that end state as quickly as possible so we can go back to normal. That means we want to keep infection rates as high as we can, so that “throughput” is as high as possible, but without them getting so high that they exceed the capacity of hospitals to deal with them. We want to brush as high as possible against that “ceiling”, but no higher.

This has a further implication. As the virus spreads through the population, and people get infected and recover, it will have a harder job of finding new susceptibles to infect. If the containment measures are kept constant, effective R0 will decrease and the infection rate will fall. Therefore, to keep infection rates constant, the containment measures must be gradually loosened. In fact, they should asymptote to zero over time.

March 28, 2020 at 9:44 pm

I disagree very strongly that achieving herd immunity without overwhelming hospitals is the purpose of the exercise. If you do a back-of-envelope calculation for how long it would take to infect 40,000,000 in the UK without overwhelming hospitals (taking account of the fact that only a smallish percentage of them would need to be hospitalized), you find that it takes several years. And in that time all sorts of things could happen: a vaccine may become available, and some of the first people to get the disease may have lost immunity.
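
That back-of-envelope calculation can be sketched with purely illustrative numbers. The capacity C, hospitalization fraction h and stay length d below are assumptions for the sake of the sketch, not data; the point is only that the answer comes out in years.

```python
# Illustrative assumptions: the hospitals can sustain C simultaneous COVID
# patients, a fraction h of infections need hospital care, and a stay lasts d days.
C, h, d = 10_000, 0.05, 10

max_new_infections_per_day = C / (h * d)   # keeps occupancy exactly at the ceiling
target = 40_000_000                        # infections wanted (figure from the comment)
years = target / max_new_infections_per_day / 365
print(round(years, 1))   # several years with these numbers
```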

March 28, 2020 at 10:01 pm

>”I disagree very strongly that achieving herd immunity without overwhelming hospitals is the purpose of the exercise. If you do a back-of-envelope calculation for how long it would take to infect 40,000,000 in the UK without overwhelming hospitals (taking account of the fact that only a smallish percentage of them would need to be hospitalized), you find that it takes several years. And in that time all sorts of things could happen: a vaccine may become available, and some of the first people to get the disease may have lost immunity. ”

Yes, that was exactly my reaction when I heard the government say they were aiming for herd immunity, hence why it’s so foolish. It just can’t be done. But adaptive triggering only makes sense if herd immunity is in fact the goal. I can’t think of any reason whatsoever to try it otherwise.

This is speculation, but from reading between the lines I think the sudden govt U-turn is because nobody actually did this Fermi estimate until several days after it was floated.

If herd immunity is not the goal, the only alternative is to keep cases as low as possible by getting R0 to below 1. Nuke the curve; the series converges. Quarantine arrivals from the outside to prevent re-emergence.
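The “series converges” remark is just the geometric series: if each case causes R < 1 further cases on average, a batch of current cases leads to a finite expected total. A sketch with invented numbers:

```python
# With effective R < 1, expected total cases descended from existing cases
# form a geometric series: total = seed * (1 + R + R^2 + ...) = seed / (1 - R).
# The seed count and R value below are illustrative.

def expected_total_cases(seed, R, generations=1000):
    total, current = 0.0, float(seed)
    for _ in range(generations):
        total += current
        current *= R
    return total

approx = expected_total_cases(1000, 0.7)   # 1000 current cases, R = 0.7
closed_form = 1000 / (1 - 0.7)             # about 3333 total cases
```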

March 28, 2020 at 8:46 pm |

Another big consideration: we only have very imperfect estimates of the true infection rate at any given time. Whether it’s from random samples of testing, or by extrapolation from current patient and death counts, there are huge error bars on the estimates for current infection rate, and on the growth/shrink parameters (lambda and mu).

Then consider the incubation period: it takes time for the effects of the interventions to be seen and measured. You have to forecast what cases will be 1-2 weeks into the future and make sure that they don’t tip over the hospital capacity red line.

Some of those error sources are on the coefficient outside the exponential, and some of them are on the parameters inside the exponential. It’s an explosive combination.
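To see why the errors inside the exponential are the explosive ones, compare the effect of the same 20% relative error on a two-week forecast of cases(t) = C·exp(rt); the numbers are invented:

```python
import math

# Assumed current cases, daily growth rate, and forecast horizon (days).
C, r, t = 1000.0, 0.2, 14.0

true_forecast = C * math.exp(r * t)

# 20% error in the prefactor C: the forecast is off by exactly 20%.
prefactor_ratio = (1.2 * C) * math.exp(r * t) / true_forecast

# The same 20% error in the growth rate r: the error compounds over t.
growth_ratio = C * math.exp(1.2 * r * t) / true_forecast
# growth_ratio = exp(0.2 * r * t) = exp(0.56), roughly a 75% error
```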

March 28, 2020 at 9:46 pm

That’s certainly true now, though as I comment in the post, if one were to adopt this policy, then one could hope that after a few cycles, and after studying what has happened in different countries in response to different measures, these uncertainties would be significantly reduced (though not to zero). But again, as the post makes clear, there is nothing to gain in trying to keep the number of hospitalizations close to the capacity of the hospitals — one might as well aim for a far smaller number.

March 28, 2020 at 9:20 pm |

Perhaps one way to summarize is that the optimal situation is one in which there’s a set of measures that keeps the infection constant and low. One would just use this set of measures all the time, with no switching. This suggests a tweak to the model: the cost should not be constant but an increasing function of the strictness of the measures. That is, drastic measures are likely to be more costly.

Another issue that bothers me about the model is that I don’t understand how robust it is to uncertainties and delays: decisions need to be made based on noisy measurements that lag behind (because of things like incubation period).

March 29, 2020 at 10:48 am

This is the “mitigation” strategy, which seeks a steady state for an extended period of time. The other alternative is “containment”, which seeks to eliminate the virus as quickly as possible.

The “trigger” strategy discussed in the post is a particular instance of “mitigation”, but it does not take into account the fact that there are more than 2 possible sets of measures. Finding the best set of measures is a high-dimensional optimization problem, for which we lack an accurate model. I think that the approach currently taken to it can be mathematically described as stochastic steepest descent.

March 29, 2020 at 5:02 pm

My (first) point is simply that if you look at the graph of infection over time (in the model described above, with 2 sets of measures) in the limit you get a line.

(For what it’s worth, the paper defines mitigation as aiming to reduce R but not necessarily below 1, while suppression aims to reduce R below 1.)

March 28, 2020 at 9:21 pm |

OK: catte said the same but faster, while I was typing. 🙂

Many thanks — corrected it now.

March 29, 2020 at 7:43 am |

You operate under the implicit assumption that the various delays in the system can be neglected. Usually such delays mean that our models have to be delay differential equations, and controlling such infinite-dimensional systems is a notoriously hard problem. Examples are 1) driving while drunk, or 2) trying to steer a robot on Mars. If the delay is large enough, one in general gets larger and larger oscillations until something breaks. Adaptive triggering does work in theory if your delays are small enough. However, the only delay we can influence is the testing delay; the incubation period is out of our control. For me this is an indication that adaptive triggering is not going to work without some justification involving the delays. However, we can use the same strategy as in my two examples above: you stop, you drive a short distance, you stop again, and so on. I guess this is what they now call “the hammer and the dance”.

March 29, 2020 at 11:16 am |

I don’t know the numbers in the UK, but during a flu season in the US about 500,000 people require hospitalization, with up to 50,000 deaths. This means that the number of deaths per week during the peak of flu season is probably close to 1000, with over 10,000 hospitalizations. Also, I’ve heard that it is extremely rare for a flu patient to be treated in ICU. So my question is: right before the moment of death, how is flu different from covid19 that those patients are not treated in ICU, while a quarter of covid19 hospitalizations are treated in ICU? Does anybody know?

March 29, 2020 at 4:48 pm |

I see several other problems with adaptive triggering outside your model (independent of the ones you find, probably reinforcing them):

1. Enforcement of rules gets progressively harder and more error-prone the faster the rules change. This is particularly true when it’s not clear who is in charge (e.g., Germany recently had to deal with competing regulations from towns and from states).

2. When restrictions on (say) restaurants are lifted, people won’t just patronize restaurants as usual; they will flock to restaurants in proportions never seen before. Same holds for holiday resorts, public transportation, … anything apart from schools and workplaces, I guess. Everyone will be rushing to see their old friends plus the new ones they found on Discord and Zoom. I don’t see how to adapt the model for that, but clearly the transmission rate will be higher than predicted.

3. The prospect of restrictions coming back in the near future will additionally strengthen this “yolo effect”. (Case in point: having had my return to Germany brought forward by a month, I went on a hiking binge last week.)

The “dance” part of “the hammer and the dance” (an article that is far too uncritical of the Chinese approach for me to trust it fully — I wish someone would rewrite it with newer and better sourced data) suffers from issue 2 as well, but at least it seems to avoid constant backsliding into lockdown regimes. I have my doubts as to how well European governments can learn the subtle steps of such a “dance” — e.g., I would be very surprised if Germany had a functional contact tracing system established within one year (I’m reminded of the “LKW-Maut” and of the Berlin airport). Also, the prospect of closed borders for a year or two (which I think is inherent in any approach that doesn’t involve herd immunity) makes me deeply queasy. However, Pueyo at least pays some tribute to political considerations and other effects that cannot easily be modeled. I wish politicians relying on epidemiological models would be likewise aware of these.

March 29, 2020 at 5:47 pm |

Listening to Chris Whitty over the last few weeks, my understanding is that the UK science advisers have believed since this virus left China that it will enter the pool of seasonal flu viruses, so they are aiming to manage that transition without overwhelming the NHS, and to avoid a surge in Covid-19 infections in winter. My reading of the Imperial paper is that this “triggering” strategy still involves building up some degree of immunity, as a way to slow transmission: every time we go round the block we get a bit more immunity and the transmission rate slows down. My understanding is that this comes from their SIR model. I don’t know if this is a sensible strategy or not (I’ve not done any calculations, and wouldn’t feel confident in my answers if I attempted to). Allyson Pollock agrees with the people at New England Complex Systems Institute that we should be following a “hammer and dance” strategy but that we aren’t doing so because the NHS has been denuded of resources and expertise (https://www.allysonpollock.com/?p=2901).

March 30, 2020 at 10:51 am

Oh wait, sorry, I misremembered the paper: it looks like under the triggering strategy after the initial surge in infections, we don’t see any significant increase in immunity and we’re in a holding pattern until a vaccine comes along.

March 30, 2020 at 8:01 pm |

Gowers,

Sorry for being off topic, but I have long been desperate to know your opinion on how to develop a good aesthetic sense in mathematical research (such as that of Weyl, Penrose, etc.), the kind that leads our noses to fruitful ideas. I’ll be happy even if you point to some source or a link.

March 31, 2020 at 7:06 pm |


This short paper provides a rationale for cycles of 4 days of work followed by 10 days of lockdown.

April 1, 2020 at 11:57 am

Thanks for letting me know about this.

April 1, 2020 at 5:50 pm |

A paragraph from the paper Eyal linked to provides a very clear rationale for ‘a minimum reasonable length for a cycle’ (in this case, for a minimum lockdown duration of 10 days): ‘We want the average R to be less than one. The main effect for a 4-day work/10-day lockdown schedule relies on the disease timeline. Exposed individuals are non-infectious for about 3–4 days on average, and are then infectious for another 3–4 days on average [5]. Thus, in a 4/10 strategy, *most people who get exposed on workdays will be infectious during lockdown*, limiting the spread of the disease. Those that develop symptoms might be infectious for longer, but these individuals will not return to work.’
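Even ignoring the latency effect the quoted passage emphasizes (which pushes the true average lower still), a crude time-weighted average shows how such a cycle can keep R below 1; the phase-specific R values here are invented:

```python
# Crude arithmetic: time-weighted average R over a 4-day work /
# 10-day lockdown cycle, with illustrative phase-specific R values.
# The quoted mechanism (exposure on workdays -> infectious during
# lockdown) would push the effective average below even this.

work_days, lock_days = 4, 10
R_work, R_lock = 2.0, 0.5     # assumed R during work and lockdown phases

avg_R = (work_days * R_work + lock_days * R_lock) / (work_days + lock_days)
# avg_R = 13/14, just below 1 with these inputs
```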

April 1, 2020 at 6:17 pm |

Three more brief remarks: I agree with conclusion 1, that there is no harm (and a lot of good) in keeping both T and S small. Conclusion 2 is trickier (see my first comment, above, or the Alon et al article which Eyal links to): one would like people who were exposed during the relaxation period to be at their most infectious during lockdown (typically the peak-infectiousness occurs 5 days after infection), and this will be a significant effect as long as the relaxation periods are not too long compared to 5 days. 10-day lockdowns (as Alon et al suggest) would also avoid some of the practical downsides of changing everything too often. A third, minor remark: the ‘rounding off’ you refer to, in Chart 9 of the Pueyo article, may just be an artefact of the lag between true infections and reported infections. It’s also possible that it’s partly due to the virus persisting on surfaces for a few days after the beginning of a lockdown, but biologists I’ve spoken to believe the latter effect would be small.

April 2, 2020 at 8:47 am |

I live in a rural area of the US, with 3 ICU beds and probably only 1 ventilator in the whole county, and 2 of the 4 neighboring counties are even smaller. From here, I can see two potential problems with these kinds of schemes.

The first is that, in reality, infection rates are stochastic. In most of England, there are enough people that the noise term isn’t very significant. Here, we have to worry that, if S and T are small (or even not that small), the noise term will actually be driving the infection rates during relaxation phases, and, what is more, we won’t know what the noise is until it’s too late and we are in for a very long lockdown. The concern here isn’t that we start having too much infection in the population at large; rather it’s that one infectious person at a well-attended wedding could completely overwhelm the hospital.

The second is that, with limited resources and a small population, our measures of infection rates can turn out to be quite discrete. Especially if we go by something crude like ICU usage, the only possible percentages in our county are 0%, 33%, 67%, and 100%. This gets better (though not by that much) if we average across the region as a whole, but there are significant costs, socially, economically, and from a public health perspective, to helicoptering patients to the city 100 miles away.
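A toy branching process illustrates how dominant the noise is at small case counts; every parameter (seed cases, offspring distribution with mean 0.9, trial count) is invented:

```python
import random

# Subcritical branching process (mean offspring 0.9): starting from only
# 3 cases, outcomes range from immediate extinction to sizeable flare-ups,
# even though the mean R never changes.  Parameters are illustrative.

def outbreak_size(rng, seed_cases=3, generations=20):
    # each case infects 0, 1 or 3 others with probabilities 0.5, 0.3, 0.2
    # (mean 0.5*0 + 0.3*1 + 0.2*3 = 0.9)
    current = total = seed_cases
    for _ in range(generations):
        current = sum(rng.choices([0, 1, 3], weights=[0.5, 0.3, 0.2])[0]
                      for _ in range(current))
        total += current
        if current == 0:
            break
    return total

rng = random.Random(42)
sizes = [outbreak_size(rng) for _ in range(1000)]
# Many runs die out immediately at 3 total cases; others grow much larger,
# so the spread of outcomes is wide relative to the mean.
```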

April 6, 2020 at 11:00 am |

Another good reason for considering (and analysing) the adaptive triggering strategy (in addition to the reasons you mention) is that we don’t yet know for sure whether the ‘Dance’ part of the ‘Hammer and Dance’ strategy will work in Western Europe, once life returns to a semblance of normality. How have countries with earlier outbreaks fared? China looks to have been successful in suppressing its outbreak and preventing (so far) a resurgence of new cases, but it does e.g. use mass-surveillance for contact-tracing in a way that would presumably be impossible in Western Europe. Among the other countries with a large outbreak, South Korea looks to have been the most successful in the suppression strategy. (South Korea never imposed a full, UK-style lockdown; instead they used mass-testing and contact-tracing, and strict isolation rules for known cases. Restaurants, cafes and many businesses have remained open.) The trend in South Korea is promising, with a reliable daily decrease in the number of known active cases since 11th March, but they haven’t yet reopened schools, kindergartens or universities for example, all three of which (modelling suggests) are important factors in transmission. Singapore (another success story) also decided to close schools and most workplaces on 3rd April after a recent increase in the number of cases. If South Korea continues to be able to suppress new outbreaks after reopening schools etc, without imposing lockdowns, I would say this would be a good sign that the ‘Dance’ strategy can work in Europe. (One should note that contact-tracing as practised in South Korea is more invasive than what is currently legal in the UK, and also quite manpower-intensive. On the other hand, South Korea hasn’t yet made extensive use of a voluntary contact-tracing app along the lines of the app being developed in the UK by NHSX; their Corona100m app is quite different and might be less effective.)

April 8, 2020 at 8:05 am |

It seems to me that the “short cycle” result comes from a slightly artificial detail of how the problem is interpreted.

The result appears if you fix the lower-threshold and optimize the upper-threshold. However if instead you fixed the upper-threshold and optimized the lower-threshold, then the optimal cycle-length will be long.

If you allow *both* thresholds to be chosen (both T and S) then, as you note, they’ll both be zero. However this isn’t practical because it ignores the cost of getting to zero given that we start with a finite set of cases.

I think instead a better approximation of the problem we face is choosing an optimal time-path of policy given some start-point and end-point. If we solve that problem, the optimal path will be one of gradually decreasing strictness, without any zig-zags.

More details here:

http://tecunningham.github.io/2020/04/05/front-loading-restrictions/

April 11, 2020 at 12:17 am |

In the BBC Four Contagion documentary the simulation resulted in 43 million infections in approximately three months. Enough for ‘herd immunity’? A long way off from ‘several years’.

Admittedly this was at an unacceptable number of fatalities (approximately 900,000).

April 11, 2020 at 11:54 pm |

The comments have buttressed the close fit of the different scenarios/models presented to the classic logistic equation of Verhulst and its later modifications in the predator-prey population dynamics of Lotka and Volterra. The logistic equation (mentioned in an earlier comment) has perhaps been studied most intensively as the logistic map, the deceptively simple nonlinear difference equation so familiar in dynamical systems theory. A specific feature of the logistic map needs special attention for its bearing on models of adaptive triggering and related schemes for dealing with covid over the coming months and years: the complicated range of behaviours that different parameter values lead to, a situation due purely to the mathematical structure of the logistic map and not to what the map is supposedly modelling. Most pronounced is perhaps the period-doubling route to chaos produced, e.g., by changing values of the birth-death parameter studied by Robert May, from which the two Feigenbaum constants were derived. Wikipedia sums up this rich behaviour nicely: “bistability in some parameter range, as well as a monotonic decay to zero, smooth exponential growth, punctuated unlimited growth (i.e., multiple S-shapes), punctuated growth or alternation to a stationary level, oscillatory approach to a stationary level, sustainable oscillations, finite-time singularities as well as finite-time death.” Again, these phenomena have to do purely with the mathematical workings of the logistic map and not with its applications in any practical setting such as adaptive triggering. Hence great care needs to be taken not to confuse what is observed in specific applications with purely mathematical phenomena.
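The parameter sensitivity is easy to exhibit directly; the two r values below are standard illustrative choices on either side of the first period-doubling at r = 3:

```python
# Iterate the logistic map x -> r*x*(1-x) past its transient and record
# the long-run behaviour: r = 2.8 settles to a fixed point, while r = 3.2
# settles onto a period-2 cycle.

def long_run(r, x0=0.5, burn_in=1000, keep=4):
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 6))
    return tail

fixed = long_run(2.8)    # four copies of the fixed point 1 - 1/2.8
cycle = long_run(3.2)    # alternates between two distinct values
```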

April 19, 2020 at 1:08 pm |

[…] see us through the next year or more, we must all prepare for several cycles of a ‘suppress and lift’ policy — cycles during which restrictions are applied and relaxed, applied again and relaxed […]

April 22, 2020 at 4:31 pm |

[…] likely that we’ll see restriction-relaxation cycles for the next year or two as we try to extinguish the remaining pockets of the virus. Or the hammer […]

April 24, 2020 at 1:24 am |

[…] incubation period of two weeks. Some epidemiologists propose rolling lockdowns or cycles of a ‘suppress and lift’ policy around the incubation period that can keep both the pandemic and social costs manageable. […]

May 2, 2020 at 10:41 pm |

I think the ‘breaking down’ of the model over short time frames is due to the methodology above implicitly assuming that an infectious person who infects three people infects them all simultaneously. In reality they may well be more likely to infect one person a day on three consecutive days, which would explain the apparent delay in suppression becoming effective.

But if this is the case, and assuming r = 3 in the absence of suppression, then a set of extreme measures that make r = 0, applied 2 days on and 1 day off, would achieve the goal of r = 1 for ‘the dance’.
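The arithmetic behind this can be checked directly: with a repeating 3-day schedule of two locked days and one open day, every 3-day infectious window contains exactly one open day, whatever its starting phase:

```python
# Daily infections per infectious person: 0 on locked days, 1 on open days
# (the assumption of r = 3 spread over three consecutive days, i.e. one
# infection per unsuppressed day).

schedule = [0, 0, 1]     # repeating: locked, locked, open
window = 3               # assumed infectious period in days

secondaries = [sum(schedule[(start + d) % len(schedule)] for d in range(window))
               for start in range(len(schedule))]
# Every starting phase yields exactly 1 secondary case, i.e. r = 1.
```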

May 14, 2020 at 1:47 pm |

Dear Prof. Gowers,

I know that you and Prof. Terence Tao are not only colleagues but also close friends. By the way, having looked carefully at both your faces, I wonder why you look so alike. It is as if the two of you were brothers.

The 2020 Shaw Prize may be coming up: you could vote for Prof. Tao. I think he deserves to win it after all his efforts in maths.

Thank you, Prof. Gowers

May 27, 2020 at 11:16 pm |

Hey – noticed it’s been a couple of weeks since the last response, and I’d be interested to know if anyone had any further thoughts on this.