Myths About COVID-19
A number of false ideas are circulating about this disease.
They surveyed 3000 people and found that 1.5% were positive for the antibody to COVID-19. This means that 80 times more people are asymptomatic with COVID-19 than symptomatic. So the lockdown wasn’t necessary.
Nope. A 1.5% positive rate doesn’t exceed the false positive rate for this type of test, so it doesn’t demonstrate any excess of asymptomatic cases at all. If, say, half of the people screened were positive, that would be significant, but not 1.5%. There is other evidence of asymptomatic cases, but nowhere near 80 times as many; perhaps two to seven times as many.
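To make the false-positive arithmetic concrete, here is a minimal sketch. The 98.5% specificity (a 1.5% false-positive rate) is an assumed figure for illustration, not a property of any particular test:

```python
# Hypothetical numbers illustrating why a 1.5% positive rate is hard to
# interpret when the test's false-positive rate may be about as large.
n_tested = 3000
observed_positive_rate = 0.015   # 45 positives out of 3000
false_positive_rate = 0.015      # assumed specificity of 98.5%

observed_positives = n_tested * observed_positive_rate
expected_false_positives = n_tested * false_positive_rate

print(f"Observed positives:       {observed_positives:.0f}")
print(f"Expected false positives: {expected_false_positives:.0f}")
# Under these assumptions every observed positive could be a false positive,
# so the data alone cannot distinguish 1.5% prevalence from 0%.
```

With those assumed numbers the expected false positives equal the observed positives, which is exactly the "swamping" problem.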
The projections for numbers of people infected have been repeatedly wrong and had to be revised repeatedly. We should pay no attention to them.
No, the projections have been reasonably accurate. (How could they miss? The range of uncertainty they published was huge.) It is also normal to revise projections as data come in.

The worst mistake in interpreting these projections occurred early on, when the Imperial College group published projections for various scenarios in the US. The first projection, 2.2 million dead in the US, hit the news and, understandably, captured the imagination. Only later, when the rest of the same study was considered, did people find out that another scenario in the study predicted 60,000 deaths. People took this as a revision of the first, higher projection, but that was wrong: it was the estimate for what would happen if strong measures to suppress the virus were used, while the first number was the estimate for what would happen if nothing was done. (And the do-nothing scenario was academic anyway, since there was no way people would continue to act normally.)

And so it has gone, with one failure to understand what the epidemiologists were saying following another. The epidemiologists said that the stricter the virus-suppression efforts, the more lives saved. What choice did politicians have? There’s no question that the efforts to suppress the virus have been successful: the decline in daily death rates started just about when one would expect it to, 10 to 14 days after suppression efforts began. And nothing will keep the virus from flaring up again if we try to go back to normal.
Efforts to suppress the virus are causing economic harm. We must end these measures to save the economy.
The damage will not entirely end if the economy is suddenly “opened” because people are not going to act normally. Especially not if the virus flares up again. The economic damage is going to continue, hopefully at reduced levels, until the virus is gone or nearly so. To be sure there are a number of unnecessary restrictions that can be dispensed with now, but things are not going back to normal any time soon. You can argue that the government didn’t need to act at all, that people would know to avoid dangerous activities, but people expected the government to act. Most of the work of opening the economy is to convince people that it’s safe to go out again, and to do that we’ve got to make it truly safe.
We’ll have a vaccine in 12-18 months.
Maybe, but don’t count on it. As it turns out, researchers have been trying to produce a coronavirus vaccine since 2003, when SARS broke out. It’s apparently very tricky: so far, vaccines strong enough to confer immunity have killed the test monkeys. We have the advantage right now that far more people and labs are working on it, but it may take more time, or it may never happen at all.
The pandemic will follow a bell-shaped curve.
Maybe, but there is no guarantee of that. It may be like the Spanish Flu, with multiple waves, or like HIV/AIDS, with a long fat tail that simmers on and on.
Hydroxychloroquine…
This is being used in various places. Apart from occasional testimonials, which are hard to interpret, the evidence that this stuff really works is thin. The history of medicine is replete with examples of promising treatments, backed by many testimonials of effectiveness, that turned out not to work when tested carefully. Our ability to fool ourselves that way is huge. Some antivirals, like remdesivir, have been shown to suppress the virus in animals and show more promise.
Did you actually listen to that whole Dr. Mikovits thing? She’s making some wild accusations. Why should we trust what she has to say (including suggesting that Dr. Fauci had someone killed)?
Two out of my four kids have had their tonsils removed, and one of the other two probably will too eventually.
I think this is reasonable. It appears to be the policy the Texas state government is using: relaxing restrictions and trying to figure out how to resume as many normal activities as possible.
Lifting restrictions on gatherings necessarily increases the spread, because spread is a function of contacts. The only thing reopening has to do to avoid costing extra lives is exactly what the shutdown was originally designed to do: prevent the death of anyone who could have been saved but for a lack of medical resources at that point in time. Slowing the spread only saves lives that would have been lost because resources were not available when they were needed. As more people develop immunity and medical resources increase, we can handle more interactions; more importantly, because we know more about the disease, we know more about what resources are necessary and how many cases it would take to overwhelm the system.
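The claim that slowing the spread saves lives only through medical capacity can be sketched with a toy model. All numbers here (capacity, fatality rates, case curves) are invented for illustration:

```python
# Toy "flatten the curve" illustration: excess deaths occur only when the
# daily caseload exceeds treatment capacity. All numbers are made up.
capacity = 100            # patients the system can treat per day
fatality_treated = 0.01   # assumed fatality rate with care
fatality_untreated = 0.05 # assumed fatality rate without care

def extra_deaths(daily_cases):
    """Total deaths over a case curve, penalizing days over capacity."""
    total = 0.0
    for cases in daily_cases:
        treated = min(cases, capacity)
        untreated = max(cases - capacity, 0)
        total += treated * fatality_treated + untreated * fatality_untreated
    return total

sharp_peak = [20, 80, 200, 80, 20]  # 400 cases, overwhelms capacity one day
flat_curve = [80, 80, 80, 80, 80]   # same 400 cases, never over capacity

print(f"Sharp peak deaths: {extra_deaths(sharp_peak):.1f}")
print(f"Flat curve deaths: {extra_deaths(flat_curve):.1f}")
```

Both curves have the same total infections; the sharp peak costs more lives only because of the one day over capacity, which is the comment's point.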
Interestingly, the biggest impediment to a full reopening, after customers and workers who do not want to be in public, is the number of workers who can’t come in because schools are closed.
Further criticism of this model. Probably deserves to be ignored.
Calling out limitations in the model is better than not calling them out. But publishing a model whose limitations make it useless at best — misleading at worst — isn’t really great. Every consultant I know has his own Excel Wuhan Virus model. The temptation to play is overwhelming. They would never publish one.
I don’t believe finding more cases ensures a higher R will be calculated. For example, let’s say the virus started in the US not in February 2020 but in June 2019 (I’m just making this up, but the point is that time t=0 is a variable). Then existing infections today, and thus new infections tomorrow, could be high even with a low R.
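The point that today's case count depends on both R and on when transmission began can be sketched with a crude exponential-growth model. Every number here is hypothetical:

```python
def cases_today(r, serial_interval_days, days_since_start, seed_cases=1):
    """Crude model: each generation multiplies the case count by R."""
    generations = days_since_start / serial_interval_days
    return seed_cases * r ** generations

# Two made-up scenarios that yield case counts of the same order of magnitude:
late_start_high_r = cases_today(r=2.5, serial_interval_days=5, days_since_start=75)
early_start_low_r = cases_today(r=1.4, serial_interval_days=5, days_since_start=200)

print(f"R=2.5, started 75 days ago:  {late_start_high_r:,.0f}")
print(f"R=1.4, started 200 days ago: {early_start_low_r:,.0f}")
```

A high case count observed today is therefore consistent with either a high R and a recent start or a lower R and an earlier start.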
I have looked into this issue a bit further. The Bendavid main paper is here, and the statistical appendix is here.
There are three sources of variance in their observations (each observation being either a positive or negative result of a particular person tested).
I think that Roderic’s comment and criticism was based on specificity, i.e. the risk of a false positive. He is correct that, for example, if you’re using a test with a false positive rate of 5%, and you’re trying to detect something with a prevalence of 2-3%, the number of false positives is essentially going to swamp your ability to estimate the true population proportion.
But based on the Bendavid paper, they took this into account. They used three different measures of sensitivity and specificity: one based on the manufacturer’s reports; one based on independent testing done at Stanford (i.e., testing known positive samples to see if they were correctly detected, and known negative samples to see if they were correctly identified as negative); and a third scenario, probably the best, that combined both the manufacturer’s and Stanford’s testing.
They reported all three scenarios, but I’ll only report the third. They found a sensitivity of 80.3%, with a confidence interval of 72.1-87.0%, and a specificity of 99.5%, with a confidence interval of 98.3-99.9%.
[Cont’d]
I think that Roderic’s criticism is based on looking at the confidence interval of the specificity estimate, which is 98.3%. This means that there could be as many as 1.7% false positives, and since the detected rate of infection (prior to adjustment for demographic variables) was 1.5%, his conclusion was (I think) that you can’t separate the signal (if any) from the statistical noise.
This is a fair enough criticism, as far as it goes, but there is a more sophisticated model used in the paper (see the statistical appendix) that used a technique called the “delta method” to calculate the overall variance. It’s a complex calculation involving a Taylor series approximation (and I have to admit that I didn’t even entirely master Taylor’s theorem when I studied this stuff, 30-35 years ago).
Essentially, Roderic’s criticism assumes something like the “worst case scenario” for the specificity estimate. Bendavid’s team uses the delta method to compute the overall variance as the weighted sum of the three sources of variance (which, again, were sampling variability, error in the sensitivity estimate, and error in the specificity estimate).
This “delta method” allows a more accurate calculation of the ultimate variance, which means that they can better calculate a true confidence interval.
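For intuition, here is a toy one-dimensional version of the delta method, which approximates the variance of a function of a random variable as Var[g(X)] ≈ g′(μ)² · Var[X]. This is only an illustration of the general technique, not the paper's multivariate calculation:

```python
import random
import statistics

# One-dimensional delta method: Var[g(X)] ~ g'(mu)^2 * Var[X].
# Toy example with g(x) = x^2 and X ~ Normal(mu, sigma).
mu, sigma = 3.0, 0.1

def g(x):
    return x ** 2

def g_prime(x):
    return 2 * x

delta_var = (g_prime(mu) ** 2) * sigma ** 2   # 36 * 0.01 = 0.36

# Compare against a Monte Carlo estimate of the true variance.
random.seed(0)
samples = [g(random.gauss(mu, sigma)) for _ in range(100_000)]
mc_var = statistics.variance(samples)

print(f"Delta-method variance: {delta_var:.4f}")
print(f"Monte Carlo variance:  {mc_var:.4f}")
```

The Taylor-series approximation is exactly the linearization step: near μ, g behaves like its tangent line, so the variance scales by the squared slope.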
For the overall estimate under scenario 3 (i.e. using both the manufacturer’s and Stanford’s testing of sensitivity and specificity), the COVID-19 prevalence estimate was 2.75%, with a confidence interval of 2.01-3.49%.
They state a limitation in the delta method: “There is one important caveat to this formula: it only holds as long as (one minus) the specificity of the test is higher than the sample prevalence. If it is lower, all the observed positives in the sample could be due to false-positive test results, and we cannot exclude zero prevalence as a possibility. As long as the specificity is high relative to the sample prevalence, this expression allows us to recover population prevalence from sample prevalence, despite using a noisy test.”
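For a concrete sense of the correction and its caveat, here is the standard Rogan-Gladen estimator applied to the scenario-3 point estimates. This is my own back-of-envelope arithmetic, not the paper's full model, which also reweights for demographics:

```python
def rogan_gladen(apparent, sensitivity, specificity):
    """Correct an observed positive rate for test sensitivity and specificity."""
    return (apparent + specificity - 1) / (sensitivity + specificity - 1)

# Scenario-3 point estimates applied to the raw 1.5% positive rate.
corrected = rogan_gladen(0.015, sensitivity=0.803, specificity=0.995)
print(f"Corrected prevalence: {corrected:.2%}")  # roughly 1.25%, before reweighting

# The caveat quoted above: at the lower end of the specificity confidence
# interval (98.3%), 1 - specificity exceeds the observed 1.5% rate, the
# estimate goes negative, and zero true prevalence cannot be ruled out.
worst_case = rogan_gladen(0.015, sensitivity=0.803, specificity=0.983)
print(f"At specificity 98.3%: {worst_case:.2%}")
```

This shows both sides: at the point estimate of specificity the correction yields a positive prevalence, but at the pessimistic end of the confidence interval the signal vanishes, which is essentially Roderic's intuition.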
I think that Roderic’s criticism was based on a solid intuition, but that it does not necessarily hold in these circumstances. The issue is complicated, and I’m already near the limits of my expertise in evaluating their method.
The delta method, which I had not heard of before (or if I did, I forgot it), appears to be a fairly standard technique. Here’s a Wikipedia entry, and here’s an explanation taken from a statistics text.
https://www.researchgate.net/publication/290474717_Rapid_resolution_of_hemorrhagic_fever_Ebola_in_Sierra_Leone_with_ozone_therapy
You can look up the research paper by Dr. Robert Rowen, where he discusses the very successful results with ozone on Ebola. He also mentions how the WHO pressured the Sierra Leone government to kick them out because “treating Ozone is criminal.” Because, you see, there is no corruption in the medical community, everyone is saintly, and you are all crazy tin-foil-hat conspiracy theorists.
Roderic isn’t saying shut it all down. Jerry isn’t saying “let it all rip and let God sort it out”. Roderic’s post doesn’t contradict Jerry’s posts on C19; all of us know there’s going to be a set of trade-offs. We need that decision point debated as sharply and intelligently as we can, free of mainstream prejudices but also free of wishful thinking. When Ricochet is at its best, that’s what happens.
I agree. Based on Andrew Gelman’s comments, there may be reason to be a bit skeptical, and I regret my “benefit of the doubt” chastisement of Roderic. They did think about false positives, but it sounds like they may have blown it.
Have you looked at the similar USC study?
I think the fundamental error those guys are making is trying to find a signal at the very limits of the assay’s specificity, and that’s always a bad idea.
The only “peer review” I’ve read has been nothing more than people saying this study is bunk because the percentage positive falls within the false positive range… And as you demonstrate, that is not a very persuasive argument, nor is it something that the authors didn’t explicitly consider. Where I’m seeing it is from people who want the study to be wrong because it’s only a bunch of anti-science right-wing dolts who don’t agree with the lockdowns. There’s nothing “peer” about that sort of review.
To OP:
This is a misguided post.
You are wrong on almost everything.
You are the one spreading myths.
MB
Then rebut it.
People without income will return to work
You are being incredibly naive; your entire post is naive.
NY is 0.46?
good links
https://rt.live/
transmission rate < 1 means safe to open
OP doesn’t know what he is talking about.
stay in your lane
Um.
Not necessarily. The lockdown may be the thing keeping the effective reproduction number under 1, if it is under 1.
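A toy calculation of the point, with all numbers invented: the effective reproduction number scales with the level of contact, so an R observed under lockdown says little about R after reopening.

```python
# Hypothetical numbers: effective R scales with the fraction of normal contacts.
r0 = 2.5                         # assumed reproduction number at normal contact levels
contact_fraction_lockdown = 0.3  # assume lockdown cuts contacts to 30% of normal

r_under_lockdown = r0 * contact_fraction_lockdown  # below 1: epidemic shrinks
r_after_reopening = r0 * 1.0                       # back above 1: epidemic grows

print(f"Effective R under lockdown:  {r_under_lockdown:.2f}")
print(f"Effective R after reopening: {r_after_reopening:.2f}")
```

An R below 1 measured during a lockdown is consistent with an R well above 1 once contacts return to normal, which is why the measured value alone doesn't show it is safe to open.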
I’m generally on your side in favoring a relaxation of strict measures, but I think that we have to be realistic about the consequences. There are bad consequences no matter what we do.
Hammer, I think that this is a bit unfair. There are doubtless some people who want the study to be wrong, but there are also legitimate criticisms. Further, even if criticisms turn out to be incorrect, it is a good thing that they be made, discussed, and rebutted.
I’ve been against the lockdown from the start, and favor a major re-opening, though I do not favor a complete return to pre-COVID normal.
Then wouldn’t the question become: Which course would ultimately affect fewer people?
WT*?
I don’t think it can be reduced to that question. You might still have to weigh mildly bad consequences to a large number of people against horrific consequences to a small number. It’s still a complicated judgment call, requiring us to debate and go at each other’s throats.
And we simply don’t know the answer to the question. We will guess, use our best judgment, and make decisions accordingly, but this is not like adding 2+2.
How is that going to work? People go back to work and there’s no work to do because no customers? Restaurants, cruise liners, buses, air travel, events involving mass gatherings, etc., are not coming back to normal until people think it’s safe.
Actually, it is my lane.
It turns out that the Stanford team had no expert in sampling and statistics on it. So it’s not surprising that they messed up. A Columbia U. statistics professor comments on the study here. He didn’t think much of it.
It doesn’t prove the thesis is wrong, just that the data doesn’t prove it.
I think that link was posted earlier.