Two Plagues? Implausible.
Let me put a marker down. I’m hearing increasing chatter about the idea that part of the plan is to put out another plague here in the relatively near future.
The last “civilization-panicking plague” was the Spanish Flu in 1918. If we range from 1918 to 2021, call that once in the last century. If we look at four-year intervals, you can stick it in the calculator for Poisson events^{1}. With a base rate of 1/25 (one every 25 “4-year intervals”; call it Presidencies if it helps), and an occurrence of 2 in this interval, we can see that the odds of two “civilization-panicking plagues” are about 1-in-1,250.
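If you'd rather not use the linked calculator, the same Poisson arithmetic is only a few lines of Python; `p_two_or_more` is just a name I'm using here for P(X ≥ 2):

```python
from math import exp

def p_two_or_more(lam):
    """P(X >= 2) for a Poisson variable with mean lam:
    1 - P(X=0) - P(X=1) = 1 - e^(-lam) * (1 + lam)."""
    return 1 - exp(-lam) * (1 + lam)

# Base rate: one civilization-panicking plague per 25 four-year
# intervals, i.e. lam = 1/25 = 0.04 expected plagues per interval.
p = p_two_or_more(0.04)
print(f"P(2+ plagues in one interval) = {p:.6f}  (about 1 in {1 / p:,.0f})")
```

That works out to roughly 1-in-1,280, which is where the 1-in-1,250 figure in the text comes from, rounded.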
Impossible? No. But unlikely. Sufficiently unlikely that unless you really have a 1-in-1,250 level of trust in our government(s), and nobody should, it would be sufficient grounds to call bullshit.
I would point out that some of our intuition of odds like that may be skewed by the fact that we are often talking about odds for things that we’re going to draw multiple samples from. If you gamble enough, you’ll hit a 1-in-1,250 chance somewhat often. But for one 4-year period, we get only one draw at the odds. And it’s one draw total, not one per person or anything like that.
“Once is coincidence, twice is enemy action” has never been more true. If this happens, expect that to be worn into cliche even more than it already is, but don’t let that stop you from remembering that it’s true.
As in my vaccine post, while some may be able to quibble with details here and there, I’ve deliberately left some buffer in the other direction, too:
- Starting the time frame I’m getting the base rate from right at the Spanish Flu raises the probability. If you include some of the time before that, the probability goes rapidly down. For instance, at a base rate of once every 133 years, or 0.03 in the calculator, the probability nearly halves, to 1-in-2,250.
- I am reluctant to go back much farther than that, because plagues start becoming more common due to bad hygiene. But that means the base rate of civilization-panicking plagues would be going down, probably quite a lot over the last 100 years. Simply counting these two plagues also ignores all medical advances, hygiene advances, work-from-home, etc.^{2}
- The Spanish Flu wasn’t a bad-hygiene plague. It seems to have been the result of a lot of bad things coming together in a perfect storm. However, few or none of those things are happening today, which means, again, you guessed it, the real base rate would go down.
- Of course, one must question whether COVID really counts as a “civilization-panicking plague” in the first place. Part of the reason I provide a calculator is to easily let the reader come to their own conclusions. If you don’t accept COVID as being such a plague, the base rate plummets, especially given the above.
- As whether or not something becomes a plague is also related to public health response, public awareness, etc., one would expect that any plague trying to get a foothold in the midst of already-active measures against another plague ought to face an uphill battle!
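To see how sensitive the result is to the assumed base rate, here is a quick sketch sweeping a few rates (the year counts beyond 100 and 133 are mine, chosen only to show the trend; `p_two_or_more` is my name for the Poisson P(X ≥ 2)):

```python
from math import exp

def p_two_or_more(lam):
    """P(X >= 2) for a Poisson variable with mean lam."""
    return 1 - exp(-lam) * (1 + lam)

# Sweep the assumed base rate (years per civilization-panicking plague)
# and watch the probability of two in one four-year window move.
for years_per_plague in (100, 133, 200, 400):
    lam = 4 / years_per_plague  # expected plagues per 4-year window
    p = p_two_or_more(lam)
    print(f"one per {years_per_plague:>3} years -> about 1 in {1 / p:,.0f}")
```

Every plausible base rate leaves the two-in-four-years outcome deep in "not happening by chance" territory; lengthening the assumed gap between plagues only makes it worse.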
I can come up with a few objections that would raise the base rate, such as “well, the population density has been increasing so it seems like they would be more likely”. The problems with these objections are twofold:
- They are intrinsically speculative; there’s not a lot of evidence that the base rate of “civilization-panicking plagues” is going up. For this class of event, even an increase in “normal” plagues isn’t that relevant, if there even is such a thing. It could easily be a measurement effect, where better measurement is detecting smaller outbreaks.
- There’s just no way you’re going to nitpick your way to odds of 1-in-2.5 or anything else in the “plausible” range. To get to 1-in-2.5, just to pick an example, requires a base rate of “civilization-panicking plagues” of just over one every three years. Clearly that is not happening. Even to get to 1-in-100 odds of two in four years requires about one every 25 years, also clearly not happening.
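You can also run the problem backwards and ask what base rate would be required to reach a given probability. This sketch bisects for it; `rate_needed` is a helper name of my own, and P(X ≥ 2) is monotonic in the rate, so bisection is safe:

```python
from math import exp

def p_two_or_more(lam):
    """P(X >= 2) for a Poisson variable with mean lam."""
    return 1 - exp(-lam) * (1 + lam)

def rate_needed(target_p, lo=1e-9, hi=10.0):
    """Bisect for the per-interval rate lam at which
    P(2+ events in one interval) equals target_p."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if p_two_or_more(mid) < target_p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for target in (1 / 2.5, 1 / 100):
    lam = rate_needed(target)  # rate per 4-year interval
    print(f"1-in-{1 / target:g} odds needs one plague every {4 / lam:.1f} years")
```

The answers land around one plague every three years for 1-in-2.5 odds and roughly one every 27 years for 1-in-100, which is where the figures in the bullet above come from.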
The result is robust. The hypothesis that two major plagues would randomly occur within one given four-year time period is not plausible. That doesn’t tell us the reason, but it does mean we can reject the random-chance hypothesis very strongly.
Statistical Thinking
In the writing above you see a way of approaching statistics that I think is undertrained and underappreciated. Statistics is as much art as science. How you select the “statistical universe” has a huge impact on the final result. Your initial assumptions have a huge impact on the final result.
When you get a result, such as the one I describe above, one of the ways I approach it is to see how robust it is to small perturbations. If I pick a bit of a different sample space, how much do the probabilities change? If I slightly tweak the borders between two categories, what happens to the result?
A good statistical result ought to be robust against such perturbations. As you see in my analysis above, the idea that two plagues in such quick succession is exceedingly unlikely isn’t just a matter of the exact parameters I chose. I can tweak a wide variety of the parameters in the direction of increasing the probability, and still get the result that two plagues in quick succession is unlikely. And on the flip side, there’s plenty of space to take the assumptions in the other direction and see it as even less likely than my initial analysis suggests. “Two distinct plagues happening in quick succession is very unlikely” is a robust result.
By contrast, one of the reasons I don’t like the p = 0.05 standard for papers is that if you have something that runs right up against it, you’re virtually guaranteed to have a result that is not robust to perturbations. One mouse zigs instead of zags and suddenly you’ve got a p = 0.073 study instead. A slight tweak to the categorization of who is slow vs. fast means you’ve got a p = 0.097 study instead. If you’re producing a worthwhile fact, it ought to be robust.
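To make the one-mouse-zigs point concrete, here is a toy illustration with an exact binomial test on an invented 20-mouse study (the numbers and the 50/50 null are mine, chosen purely for illustration):

```python
from math import comb

def binom_p_one_sided(successes, n):
    """Exact one-sided p-value: P(X >= successes) under a fair 50/50 null."""
    return sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n

# Hypothetical study: 20 mice scored fast vs. slow against a 50/50 null.
# 15 fast mice clears p = 0.05; one mouse "zigging" to slow does not.
p15 = binom_p_one_sided(15, 20)  # about 0.021 -> "significant"
p14 = binom_p_one_sided(14, 20)  # about 0.058 -> not
print(f"15/20 fast: p = {p15:.3f};  14/20 fast: p = {p14:.3f}")
```

A single animal moving between categories carries the study across the 0.05 line, which is exactly the kind of fragility a robust result shouldn’t have.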
It doesn’t help that p = 0.05 is already perilously close to “happened by random chance”, since the definition of p = 0.05 is precisely that such results are expected 1 out of 20 times if you run a random test on something with no effect. With the number of studies out there in the world every year, that’s just an absurdly low, non-robust result to obtain.
So while the examination of alternatives and other possibilities above is there somewhat to preempt ankle-biters, it’s also a valuable way to approach a statistical question to examine and gain some intuition for how strong a result it is. It’s pretty easy for people to give you a song and dance about how inevitable multiple plagues in quick succession are at some point and perhaps even snow you with some numbers, but if you’ve done an analysis like this you’ll know how hard they had to bend the truth to obtain that result.

I’ve updated it a bit since it was originally posted; it now also expresses the result in “1-in-X” probabilities, in case that makes it easier to understand. ↩︎

I realize our opinion of the medical-industrial complex may be crashing rapidly, but don’t forget that there legitimately is more medical knowledge today than there was in 1920, even if one must pick through it quite carefully. ↩︎