
# Coronavirus Calculations: The Perils of Projections

There’s a saying that I learned back in the 1980s while studying probability, statistics, and mathematical modeling: “*Torture numbers enough and they’ll confess to anything.*” It was right up there with “correlation does not imply causation” and “GIGO.” (GIGO stands for *garbage in, garbage out*.)

As most of you know by now, I’ve been skeptical of the catastrophic projections of the expected progression of the WuFlu. I do not think that the figures presented are an intentional “hoax,” though I suspect that some people and institutions, particularly the major media, have an ideological reason to exaggerate the danger. But I suspect that the bigger problem is the limited amount of information presently available, even to the most sophisticated modelers.

Mathematical modeling is tricky. It is sensitive to a number of choices. The first choice is the mathematical formula selected to model the phenomenon. Typically, there are a great many formulas that might be selected. The second choice – or more often, the second and third and fourth and fifth – are the values selected for various parameters of the mathematical formula.

I’m going to present several hypothetical models of WuFlu spread, to demonstrate how difficult it is to differentiate between such models in the early period. These examples are **not** intended to be an accurate prediction of the actual progress of the WuFlu. They are intended to demonstrate how it is possible to select several alternative formulas, adjust the parameters so that each appears to be a good fit in the early period, and yet generate predictions that can vary enormously in as little as 2-4 weeks.

I hope that this will be of interest to some of you.

__I. What We Expect__

A general rule of epidemiology is that disease outbreaks follow an “S-Curve.” This is called *Farr’s Law* (here and here are two papers describing the rule; I’ll add a brief technical discussion in the comments). If you want to impress your friends, you can use the 50-cent term for an S-Curve, which is “Sigmoid Function.”

Another model assumes “exponential growth,” which means a constant rate of daily increase. For example, an exponential growth model may increase by 10% daily, or 33% daily, or more.

It turns out that, in the early period of many S-Curves, the graph is pretty close to an exponential growth curve. Here is an example:

Notice that in this example, the curves look quite similar until Day 36, then diverge sharply. The S-Curve, in this case, is calibrated to reach a final level of 500,000. I had to truncate the exponential growth curve at Day 41, or the S-Curve graph would have ended up looking like a straight line at the bottom of the graph. This is because, by Day 63 in this graph, the exponential growth function would be approximately 477 million, almost 1,000 times greater than this particular S-Curve.
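The early similarity and later divergence are easy to reproduce. The sketch below uses illustrative parameters of my own choosing (a 33% daily growth rate, and a logistic curve calibrated to level off at 500,000), not the exact curves behind the graph:

```python
import math

def exponential(t, n0=1.0, rate=0.33):
    """Constant 33%-per-day growth."""
    return n0 * (1 + rate) ** t

def s_curve(t, k=500_000, midpoint=45, steepness=0.29):
    """A simple logistic S-curve that levels off at k."""
    return k / (1 + math.exp(-steepness * (t - midpoint)))

# similar early on, then the exponential runs away
for day in (10, 20, 36, 63):
    print(day, round(exponential(day)), round(s_curve(day)))
```

With these particular parameters the two curves track each other closely for the first few weeks, while by day 63 the exponential is more than 100 times the S-curve's plateau.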

The S-Curve is a generic term for curves shaped like the one in the graph above. There are several mathematical formulas that follow this general shape (some described at the Wikipedia entry for S-Curves, here).

__II. An Example – Evaluation of Three Models__

I’ve previously posted graphs showing reported WuFlu cases by country, with my most recent posts focusing on cases per million. In this example, I use the actual data for reported cases by country, per million, in Italy and the US, through yesterday’s reporting (Saturday, March 21, 2020). Each graph starts on the day when the country reaches 10 or more cases per million.

I’ve created three separate mathematical models of the increase in cases per million, which will be labeled Model 1, 2, and 3. I will provide further information about their characteristics later.

Note that these are simplified models for illustration purposes. A true model would be much more complex, and would account for variables such as the rate of infection, the degree of contact between the population, the time lag between infection and the onset of symptoms, and others.

A note on logarithmic scale: I’ll be showing some graphs in normal (linear) scale, and some in logarithmic scale. Logarithmic scale is a bit counter-intuitive, until you’re used to it. The trick is to notice that the increment between intervals on the vertical axis (the y-axis) is not fixed, but grows as you look higher up the axis. So instead of the vertical hash-marks being at 1, 2, 3, and so on, they are at 1, 10, 100, and so on.

The logarithmic scale is not meant to mislead, but it can be misleading if you are not accustomed to it. It has two advantages (at least) in examining phenomena with rapid growth: (1) it makes it easier to differentiate the curves in the early period (the left side of the graph), and (2) it makes it easier to discern whether growth is exponential, because exponential growth appears linear in a logarithmic scale. A disadvantage of logarithmic scaling is that it makes large differences in the late period (the right side of the graph) appear smaller than they actually are.
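The "exponential growth appears linear in a logarithmic scale" point can be checked directly: taking log10 of an exponential series gives constant day-to-day steps, i.e. a straight line. (The 33% daily rate here is just an example.)

```python
import math

# daily values of a 33%-per-day exponential
values = [10 * 1.33 ** t for t in range(8)]

# on a log scale, equal ratios become equal vertical steps
log_steps = [math.log10(values[i + 1] / values[i]) for i in range(7)]
# every step equals log10(1.33), about 0.124, so the plot is a straight line
```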

On to the models. Here is the graph for the first week, in logarithmic scale:

Notice that all 3 models seem pretty accurate in this period, with Model 2 notable for underpredicting in the first few days. Here is how the same information looks in linear scale:

The data in this graph is exactly the same as the first, but the scale is different. Notice that: (1) it is harder to differentiate between the curves in the first few days, and (2) the differences at the end of the week look a bit bigger than in the logarithmic graph.

Now let’s move forward to the second week. The data for the US ends after the first week, because we are only 7 days into the time series (*i.e.*, the US first exceeded 10 cases per million on March 15). Here is the graph for the second week, in linear scale:

Notice that all 3 models are significantly over-predicting the course of the disease in Italy (dark blue). Model 2 is the worst, over-estimating by a factor of about 5. Model 1 is the best, but even this model is about twice the actual figure.

*Lesson 1: A model that looks accurate for the first week can be quite inaccurate just one week later.*

On to the third week. Here is the graph, first in logarithmic scale:

Notice how things have changed. All three models continue to significantly over-predict the course of the disease in Italy (dark blue), but Model 2 is curving downward, and is now the most accurate after three weeks, though it was the least accurate after two weeks.

Also notice how all three models don’t appear to wildly overstate the actual case number in Italy – because we’ve switched to a logarithmic scale. Here is how the same data looks in linear scale:

In the linear scale, the huge divergence between Italy’s actual reports, and all three models, becomes clear. Model 2 is now the best, but is about 3 times higher than the actual figure. Model 3 is the worst, approximately 7 times the actual figure, with Model 1 in the middle.

*Lesson 2: The model that looked the worst last week can
look the best just one week later.*

Three weeks is about all of the data that we have for Italy so far. But let’s run the projections forward for another three weeks, to week 6. Here is how it looks in logarithmic scale:

Notice how Model 2 has leveled off, and Model 1 has grown to surpass Model 3 (on day 36). Italy’s trend line is still below all three models, and curving down slightly, though it appears to be on a path to surpass Model 2 in the next few days. Still, in the logarithmic scale, none of the models look wildly incorrect.

But look at it in the linear scale:

In this scale, the extent of the vast over-estimates generated by Model 1 and Model 3 is apparent. Notice that Model 1 is now predicting about *1.2 million cases per million* – in other words, 120% of the population has had the WuFlu after 6 weeks. (This is theoretically possible, I suppose, if it turns out that there are high levels of re-infection, but it seems extremely unlikely.)

Notice further that the projections generated by Model 1 and Model 3 are so huge, by the end of Week 6, that the result of Model 2 appears to be a flat line at the bottom of the graph, and the actual reported figures in the US and Italy are not even visible. By the end of 6 weeks, Model 1 projects about 600 times as many cases as Model 2, and Model 3 projects about 250 times as many cases as Model 2.

*Lesson 3: Projections over periods as short as 6 weeks can lead to drastically different
results, which can easily be wrong by a factor of 100 or even 1,000.*

**III. How The Magic Trick Is Done**

Model 1 is an exponential growth model, assuming a daily growth rate of 33%. This is the number that has been pretty widely used in a number of media sources. It is pretty accurate in the very early period, and the US remains *above* this trend line after the first 7 days. But Model 1 leads to an enormous estimate of the number of cases after just 6 weeks – about 120% of the entire population. Moreover, it is already quite wrong as to Italy – Model 1 projects that Italy would have over 7,000 cases (per million) yesterday, while the true figure is about 885 cases (per million). That is about **8 times** the number of cases that Italy has actually reported, after about 3 ½ weeks.
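Model 1 is simple enough to reproduce in a couple of lines. I am assuming it starts at 10 cases per million on day 0, since each graph starts when a country reaches 10 per million:

```python
def model_1(t, n0=10.0, daily_growth=0.33):
    """Exponential growth in cases per million, 33% per day."""
    return n0 * (1 + daily_growth) ** t

# after about 3.5 weeks (day 23) the projection exceeds 7,000 per million,
# roughly 8x Italy's actual figure of about 885 per million
projection = model_1(23)
```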

Models 2 and 3 are examples of the *generalized logistic function* (Wikipedia entry here), which takes the form:

Y(t) = A + (K − A) / (C + Q·e^(−Bt))^(1/v)

If that makes your eyes glaze over, I don’t blame you. Notice that there are six parameters that can be adjusted (labeled A, B, C, K, Q, and v). Each of these parameters can be adjusted independently. The parameter K is the upper asymptote (when C=1, which I assumed in these two models). Thus, I was able to input, into my formula, a pre-determined maximum number of cases (per million). The independent variable is t (time), and the “e” in the formula is not a parameter, but is the transcendental number e (about 2.7182).
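Here is the generalized logistic function as code, a direct transcription of the six-parameter form described above (the default values are arbitrary placeholders, not the actual Model 2 or Model 3 parameters):

```python
import math

def generalized_logistic(t, A=0.0, K=2000.0, C=1.0, Q=50.0, B=0.25, v=1.0):
    """Richards / generalized logistic curve.

    With C = 1 and A = 0, K is the upper asymptote: the curve
    levels off at K as t grows large.
    """
    return A + (K - A) / (C + Q * math.exp(-B * t)) ** (1.0 / v)
```

Because K enters the formula directly, the modeler can simply dial in the eventual plateau: 2,000 per million for Model 2, 500,000 per million for Model 3.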

Model 2 was designed to have an upper bound of 2,000 cases per million (*i.e.*, 0.2% of the population). Remember that this was the model that gave the greatest *overestimate* of actual cases (in Italy) after the first two weeks.

Model 3 was designed to have an upper bound of 500,000 cases per million (*i.e.*, 50% of the population).

It was pretty easy for me to adjust the parameters of these models to be fairly accurate over the first week or two, compared to actual reported cases in Italy.
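To show how easy that tuning is, here is a toy version: a brute-force search over two parameters (Q and B, with K held fixed) against a week of numbers. The "observed" data below is invented for illustration; it is not Italy's actual series.

```python
import math

def gen_logistic(t, K, Q, B):
    """Generalized logistic with A = 0, C = 1, v = 1."""
    return K / (1.0 + Q * math.exp(-B * t))

# invented "first week" of cases per million (illustration only)
observed = [10, 14, 19, 26, 35, 47, 63]

# grid search: even with K pinned at 2,000, two free parameters
# are enough to fit a week of data almost perfectly
best_err, best_Q, best_B = None, None, None
for Q in range(10, 1001, 10):
    for B100 in range(5, 61):
        B = B100 / 100
        err = sum((gen_logistic(t, 2000, Q, B) - y) ** 2
                  for t, y in enumerate(observed))
        if best_err is None or err < best_err:
            best_err, best_Q, best_B = err, Q, B
```

The same week of data would fit nearly as well with K set to 500,000, which is exactly the point: the early data barely constrains the plateau.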

__IV. What Difference Does It Make__

The problem, at present, is that we are being presented with quite alarming projections of the progress of the WuFlu. It appears that our President, and other leaders, are making major decisions on the basis of such information. Specifically, there was a report released by Imperial College London (here), which included a projection of approximately **2.2 million deaths in the US** (page 7). It predicted “deaths per day per 100,000 population” in a graph (page 7), with this prediction passing 5 around mid-May, and peaking at about 17 in early June (you have to estimate these figures from the graph, which will be shown a bit further on).

With a total US population of about 330 million, this implies **about 16,500 deaths per day** beginning about 8 weeks after release of the report (7 weeks from now), **peaking at about 56,000 deaths *per day*** about 11 weeks after release of the report (10 weeks from now).

Is this at all realistic? We have no idea. I don’t think that the doctors who performed the study have any idea. But – they can plug certain assumptions into a model, and out come the results.
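The conversion from the report's "deaths per day per 100,000" to national totals is simple arithmetic (the 5 and 17 are my readings off the Imperial College graph, as noted above):

```python
US_POPULATION = 330_000_000
scale = US_POPULATION / 100_000   # 3,300 blocks of 100,000 people

mid_may_deaths_per_day = 5 * scale    # 16,500 per day
peak_deaths_per_day = 17 * scale      # 56,100 per day, i.e. "about 56,000"
```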

It would be nice if the Imperial College report provided a prediction about the daily number of new reported cases, which would allow us to assess, over time, whether reality is following the predictions. The report does predict that **81% of the US population** would be infected (page 6), though it did not state when this would occur. The graph shows the number of daily deaths declining by mid-June, so presumably we will be approaching the 81% infection rate by that time – about 90 days from now.

The Imperial College model is much more sophisticated than mine – but it has an even greater number of parameters, none of which are known with confidence. Small changes to any of those parameters might cause huge swings in the predicted spread of the disease.

As an example, I present my Model 4. This is another *generalized logistic function*, tweaked to accomplish 2 things: (1) to match the disease progression in Italy thus far, and (2) to achieve the 81% ultimate infection rate predicted by the Imperial College report in approximately 90-120 days.

Here are the graphs, starting with the first 24 days (through yesterday, March 21, for Italy):

Is that a great fit or what?

Now here is the projection through 14 weeks – which will be around June 21 in the US:

Remember that this graph is total cases *per million*, and the model is designed to reach 810,000. With a population of about 330 million, this predicts **about 267 million cases** over the next 3 months or so. As of yesterday (March 21), there were about 27,000 reported cases in the US and about 54,000 in Italy. Notice that you can’t even see the figures for Italy (data for 24 days) or the US (data for 7 days), because they cannot be distinguished from zero on this scale.

This leads to our final lesson:

*Lesson 4: In most complicated mathematical models, you can tweak the
parameters to show almost any result that you want.*

As a final demonstration that my Model 4 is quite similar to the Imperial College report, I’ve graphed the expected number of daily deaths. Again, their model is more complex than mine, but my simple assumptions are: 14-day lag between case onset and death; 0.82% death rate. Here is my graph, and the one from the Imperial College report (page 7, Fig 1A), side-by-side:

Pretty uncanny, isn’t it? My graph shows peak death rate a bit higher than the Imperial College report for the US (though very similar for the UK). The total number of estimated deaths in both models is the same: 2.2 million (for the US). Note that this graph shows total daily deaths *per 100,000* — so you can multiply that graph by about 3,300 to get the actual projections, which peak at around 75,000 *per day* in my model and about 56,000 *per day* in theirs.
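My two stated assumptions (a 14-day lag from case onset to death, and a 0.82% death rate) translate directly into code. A minimal sketch:

```python
LAG_DAYS = 14
DEATH_RATE = 0.0082  # 0.82%

def daily_deaths(cumulative_cases):
    """Deaths on day t = death rate x new cases 14 days earlier.

    cumulative_cases: list of total cases by day.
    """
    new = [cumulative_cases[0]] + [
        cumulative_cases[i] - cumulative_cases[i - 1]
        for i in range(1, len(cumulative_cases))
    ]
    return [
        DEATH_RATE * new[t - LAG_DAYS] if t >= LAG_DAYS else 0.0
        for t in range(len(new))
    ]
```

Feed it any of the case models above and the death curve inherits the shape of the case curve, shifted two weeks to the right and scaled down.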

__V. Conclusion__

The point of this post is not to make projections about the spread of the WuFlu, or of the ultimate death toll. The point is to demonstrate how easy it is to put together a mathematical model that will show anything that you might want to show, *and *that matches the data collected to date.

I want to emphasize again that the Imperial College model is far more complex than mine. I do not have enough information to evaluate it. However, it is projecting an extraordinary spread for this disease, and I am very skeptical of this projection. As demonstrated in Section II, such a projection could easily be wrong by a factor of 100 to 1,000.

Published in Healthcare
Here’s the technical note about Farr’s Law.

I asserted in the OP that Farr’s Law demonstrates that disease outbreaks follow an S-Curve. Per a 2018 article in *Infectious Disease Modeling* (here):

So wait – the article says that the curve is bell-shaped, and Giordano says that it’s an S-Curve. What’s going on? Both are correct, because we’re talking about different, but related, curves.

The bell-shaped curve is the number of new cases *each day*. This is called the “probability density function.” The S-Curve is the number of *total cases so far*. This is called the “cumulative distribution function.” For a normal distribution, the probability density function is bell-shaped, while the cumulative distribution function is an S-Curve. There’s a good example at Wikipedia (here).

I appreciate all the work you did on these. I have seen so many predictions in my life that have not panned out that I think I am by nature skeptical when they are sensationalized. Sometimes they can look impressive and be junk. A math professor told my son that about 3% of people get to Calc 3. We are basically pretty innumerate, especially when dealing with very large or very tiny numbers. I bet a lot of trees have given their lives for poor math and modeling.

Thanks, Jerry!

It is absolutely crucial to point out that starting on about day 7 of the real-world data, Italy imposed some of the strictest lockdown measures conceivable.

Given that this virus spreads through human interaction, and that both the Imperial College model and other logarithmic/logistic models assume a constant degree of human interaction, a major change in those interactions is of major relevance and must be taken into account when considering this post.

We don’t have any hard data about the actual effects of these draconian measures because we simply don’t have enough data in general. Therefore, the most prudent summary of Jerry’s post should be:

Under conditions of massive restrictions in human interactions, the case growth rates underperform the models.

Reminds me of the models circa 2006 that projected (with authority and great fanfare from the media megaphones) that we would be having 8,429 hurricanes between June 1 and November 30 this year.

A great post. Now do one on Uncertainty Analysis as applied to the current data sets. Please.

As you showed, when it comes to non-linear problems it’s hard to determine the parameters of your distribution. This is a fact of life, not a momentary hardship. When the underlying distribution has “fat tails,” any sample will tend to take on the characteristics of the largest member. Nassim Taleb uses the comparison of weight (linear) and wealth (non-linear). Imagine I take 100 people and calculate the average weight. Then I add the heaviest person in the world. The average will not change much. By contrast, if I take 100 people and calculate average wealth, adding the wealthiest person in the world would completely change the picture. So it is with s-curves: a few examples can dominate the data.
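The weight/wealth contrast is easy to check numerically (all figures below are invented for illustration):

```python
# 100 ordinary people
weights = [70.0] * 100           # kg each
wealths = [100_000.0] * 100      # dollars each

# add one extreme member to each group
weights.append(635.0)                # an extraordinarily heavy person
wealths.append(100_000_000_000.0)    # a centibillionaire

avg_weight = sum(weights) / len(weights)   # barely moves, to about 75.6 kg
avg_wealth = sum(wealths) / len(wealths)   # jumps to ~$990M: one outlier dominates
```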

Expected value is a poor guide when talking about catastrophic risk. If you go bankrupt, you won’t be around for future gains. The analogy here is that a true pandemic may be so consequential that our typical risk and cost/benefit models break down. Is that Wuhan virus? I have no idea. There are people who ought to know who seem pretty worried. But they seem not able to explain why they’re worried in terms that make me worried.

It’s very easy to make an intervention look good (or bad) if you calculate only the benefits (or costs). So far, I see lots of epidemiologists calculating the *benefits* of shuttin’ ‘er down for the duration. I see little by way of anyone calculating the *cost* of doing so — including the cost of lost lives. There is no “conservative” approach, no “better safe than sorry” decision. Decision-makers seem to be stuck in a one-way ratchet of increasing restriction and panic. How could any governor say enough is enough? Once one institutes a lock-down, how can others not follow? If one calls in the National Guard, is there any doubt five will?

At some point you fight an epidemic not with doctors, but with an economy. For a day, it’s easy to figure out whose job is essential and whose isn’t. For a week, probably. A month? I’m not so sure. Three months? I doubt it. It’s just not possible to understand the level of complexity of even seemingly simple products.

Personally, I really dislike how the lock-down in my area has empowered the kind of people who just won’t be happy unless they’re telling other people what to do. My kid’s high school, which is closed, received a complaint from a parent because a few players from the football team were playing catch on the field. This suggests a lack of perspective.

@darinjohnson, I think that should be engraved over the doorway of the National Institutes of Health.

I do enjoy these posts, and I am certainly interested in the numbers, as I am also very interested to hear what other people have to say about them (a particular thanks to @mendel for his expertise). But, at the end of the day, we are left with a certain reality on the ground: Our hospitals are either a) overrun, or b) *not* overrun.

As my wife pointed out, last night: “I don’t care what the prognosis is, right now it would be an absolute nightmare even trying to get care for anything.” So it would.

There is a COVID drive-through clinic here in town. When you pull in, they take your temp. If it’s not *extremely* high (someone said 106, and her daughter works at the clinic, but that seems high), they don’t test you. Yesterday, 99 people went through the clinic, and 1 was tested.

I will be *very* interested to look at all of the charts and numbers and breakdowns when this thing is in the rear-view mirror and we’re trying to figure out what exactly happened. In the meantime, I’m *worried*, because while you can crunch the numbers to see positive signs or negative ones, the one thing you cannot do is create space in a hospital when there is none.

Mendel, I understand your point, but you are speculating. Neither of us knows the extent to which the lock-down measures are helping. I would certainly expect them to help, but we have no empirical evidence of the magnitude.

You are correct that the Imperial College model (at least the part reported) assumes a constant degree of human interaction.

Relying on the Wikipedia description of the Italy lockdowns (here), they did the following:

The 3-8-2020 measure does not appear to be the “strictest lockdown measures conceivable.” They were: a travel restriction; banning funerals and cultural events; and requiring social distancing (1 meter) in public locations such as restaurants, churches, and supermarkets. Restaurants and cafes were allowed to remain open, but with hours limited to 6 am to 6 pm.

So actually, Italy’s severe measures were imposed 10-14 days into the data that I report (not 7 as you stated). We don’t know how much of an effect they would have, but I would expect a 1-2 week lag between implementation and effectiveness, due to the incubation period.

I do not believe that it is accurate to conclude, *yet*, that the measures taken in Italy have had a significant effect.

I don’t think that I have the data to do this — and I actually don’t recall the term “Uncertainty Analysis” from my prior studies. I looked it up, and it seems to apply to experimental studies, and to be similar to “sensitivity analysis,” with which I am familiar.

True. And I guess hospitals are swamped with symptomatic people wanting tests.

It will be interesting. And looking at the stimulus that has grown from $800 billion to $1.8 trillion, I wonder what can be done in the future, for the next crisis.

Well, at the very least, perhaps some local hospitals will develop some sort of better plan for when this happens again, which it undoubtedly will.

A couple trillion preparing for a repeat of this crisis and, rest assured, we will be caught flat footed by whatever the next one turns out to be.

I believe it will be buckets full of unicorn dust.

I am not suggesting that the government can or should do anything. In fact, I’d argue that it should not. But the medical community sure can. And, if I ran a hospital, I’d be taking some pretty serious notes right now… Heck, even as a consumer, I think it is wise enough to slightly revise what sorts of things you keep well supplied in your house. The government doesn’t have to be involved in the way we internalize lessons from this mess.

When I was doing my Masters in Outcomes Research, after I retired from surgical practice, I did a project on dialysis graft survival using Medicare claims data. All patients on dialysis are Medicare patients after 6 months. Since they need dialysis to live, there is 100% followup. As a vascular surgeon, I was concerned about why these grafts, which are necessary to hook the patient to the machine, fail, and whether differences in technique might be responsible. Of course, there are factors like age and sex, but there are also comorbidities like diabetes, smoking, and emphysema, plus arteriosclerosis, for which we used heart disease as an indicator. We ran a logistic regression using those variables plus zip code. The only significant effect was zip code. I assumed that zip code was a marker for the surgeons doing the work.

After I graduated and left, the next group of Masters students used my data base, which included all Medicare claims data for New England renal failure patients, to study mortality using the same variables. Again the only significant variable was zip code.

Ima be a party pooper, and point out that what is going on in the OP is modeling only in the grossest sense. It’s really curve fitting of somewhat arbitrary proposed equations, with highly sensitive parameters, leading to either convergent or divergent extrapolations due to the choice of equation, not to the available data.

This has the same peril that afflicts so-called climate ‘modeling’, which again proposes somewhat arbitrary equations to fit somewhat sketchy data. What they have in common is no proposed *mechanism* for, in one case, the growth of COVID cases, and in the other, climate response to natural or artificial inputs.

This is obviously not back-fitting some arbitrarily chosen equations to partial observed data. They are actually changing a simulated epidemic based on a model of what’s happening. While the OP is interesting in showing the impact of different assumptions, particularly for those who aren’t into math and simulation, it does not have standing equivalent to the IC model when it comes to evaluating potential policies.

Personally, I would suggest Trump adopt the Heritage Foundation proposal and let the Democrats kill the bill. Instead:

> By applying the same paid sick and family leave provisions to workers of large employers—including quick access to refunds through employer credits—policymakers could reduce or eliminate the need for separate, selective bailouts to big business. The credit should also be expanded to all employees of businesses required to shut down or significantly slow down operations per government orders and recommendations for any period of time. It should also apply to businesses needing to reduce the number of employees who perform work or the hours of employees performing work.

And:

> The federal government can inject billions of dollars into the economy at this critical time by pre-purchasing predictable goods and services from the private sector. Purchase agreements should target sectors such as travel and hospitality that are most harmed by the economic effects of the response to the epidemic. The government should make these purchases competitively, or based on a discount for single-sourcing, benefiting taxpayers with reduced future expenditures as well as participating businesses with immediate cash flow. A properly designed pre-purchase program would benefit everyone involved and support the economy at a critical time without being a bailout.

https://www.heritage.org/public-health/report/the-senates-coronavirus-bill-bailouts-missed-opportunities-and-positive

An example of a projection that matters now. It’s Columbia Surgery’s daily update for 3/20/20:

What is so deceptive about the Imperial College study (and others like it) is the phrasing that this is our future “if we do nothing”, since A) they know full well that there is precisely a 0% chance that we’ll “do nothing”, and since B) they know full well that “if we do nothing” will be interpreted by the masses as “if we don’t do everything”.

It therefore discourages a nuanced discussion of the tradeoffs involved in the variety of responses available to us — some of them low hanging fruit, and some of them brutally difficult.

Seems to me “do everything” appears the smart thing in NY and NJ, while “do the low hanging fruit” might be wiser in Houston, say.

Some nuance would be nice.

Jerry,

I think you are being too generous in assuming that the Imperial College knows what they are talking about. I do projections myself, and I am wrong much more often the more different something is from previous studies.

I don’t trust experts when they have not studied the issue in detail, especially when it varies (in important ways) from previously studied phenomena. A subject needs to be studied well before I start trusting the “experts.” The more different something is from previously known trends, the higher your rate of error will be when extrapolating.

Econ talk back in December had a great podcast on how bad complex models are. https://www.econtalk.org/gerd-gigerenzer-on-gut-feelings/

Listen between 24:00 and 30:00, when they talk about seasonal flu predictive models. Google tried to create one, and it failed badly for H1N1. Long story short: a one-variable model looking at the previous two weeks was way more accurate than a 142-variable model. Consider that I do predictive forecasting for a living, where I have pretty good data. I don’t have a lot of faith in models of poorly studied subjects. So until they study the virus well, I think the experts will only think about the worst case.

They also don’t care about cost-benefit. That is why I don’t trust them. They are like lawyers: all about reducing risk, with no concept of costs and trade-offs.

The best way of saying it: public policy and laws should always be based on experience, not what-ifs.

What I want to know is how good the Imperial College models were with things like SARS and H1N1, when they had similarly limited data and info as they do right now. If they can show their model was accurate on critical cases and deaths at this phase of understanding, then I will stand corrected.

I do believe that. Before this all started, a nurse relative said she could guess the zip code by the patient. My husband says that about selling used cars. He’s a numbers guy.

I can’t remember the doctor who wrote the book “The Last Well Patient” (from memory), but he said your socio-economic group is a bigger indicator of your overall lifespan than a bunch of medical tests and interventions. I think he was a professor of endocrinology. I think it was based on (or coincided with?) the short essay by Dr. C. K. Meador: a supervising doctor asks a medical resident, “What is a well person?” With a straight face – evidently – the resident confidently replies: “A well person is a patient who has not been completely worked up.”

I’ve heard that’s why they don’t do full body scans on people as part of an annual checkup – because they’d find too much stuff (most of which is completely innocuous.)

My now deceased father-in-law (an obstetrician) had a full body scan probably two decades before his eventual death but would never discuss it. Unlikely the scan at that time had any indication of the cancer that took him quite a bit later, but whatever he saw I don’t think he was happy about it. He certainly didn’t tout it as revealing what a really fine specimen he was.

My mother had a scan a few years before she died, and they found a brain aneurysm, by accident. After going to a few neurologists, they decided it had probably been there for years, and that if they operated, the odds were higher that something would go wrong than if they left it. But it was one more thing she worried about.

We had a woman (the story is in my book) whose husband had dropped dead with no medical history. After she had made all the arrangements and had some time, she decided to get a checkup. Her workup was normal except for an elevated alkaline phosphatase, which can indicate either liver or bone trouble. Hers was the bone type, and a bone scan lit up like a Christmas tree. After a workup, we found a tiny breast cancer, no more than 5 mm. It had spread everywhere.

When I was a cardiac surgery resident, a well-liked urologist at our hospital had a coronary arteriogram. It was so bad, we were all sworn to secrecy. He lived another 25 years.

Excellent analysis.

I would only add that the devil is in the assumptions and residual analysis.

Epidemiology is essentially statistics, very little biology or medicine involved.

An 81% infection rate in the US? This is simply absurd and laughable.

Regarding Italy: shortages in ICU beds and ventilators/respirators meant doctors ignored anyone age 60 and over.

Italy is also one of the least vaccinated countries in Europe.

These exponential (aka delusional) models also assume that mankind has no immune system and zero brains.