Need Help With Upcoming Episode on Mask Junk Science

Alex

Administrator
The authors of this study are ghosting me, but I'm still going to do a show. Please limit comments to details about the study.

Massive randomized study is proof that surgical masks limit coronavirus spread, authors say

People affected by the pandemic in Bangladesh eat food distributed by Mehmankhana, a nonprofit organization, on July 26. (Munir Uz Zaman/AFP/Getty Images)
By Adam Taylor and Ben Guarino
September 1, 2021 at 2:09 p.m. EDT

The authors of a study based on an enormous randomized research project in Bangladesh say their results offer the best evidence yet that widespread wearing of surgical masks can limit the spread of the coronavirus in communities.

The preprint paper, which tracked more than 340,000 adults across 600 villages in rural Bangladesh, is by far the largest randomized study on the effectiveness of masks at limiting the spread of coronavirus infections.
Its authors say this provides conclusive, real-world evidence for what laboratory work and other research already strongly suggest: mask-wearing can have a significant impact on limiting the spread of symptomatic covid-19, the disease caused by the virus.

“I think this should basically end any scientific debate about whether masks can be effective in combating covid at the population level,” Jason Abaluck, an economist at Yale who helped lead the study, said in an interview, calling it “a nail in the coffin” of the arguments against masks.

https://www.poverty-action.org/site..._RCT____Symptomatic_Seropositivity_083121.pdf
 
here are some of the comments Andy Paquette and I have made:
https://www.dropbox.com/s/95xn9iz6edkcouh/Mask_shared-comments-yale-junk-science-3.pdf?dl=0
 
I would try to find someone who has more expertise to critique the study. Many of the criticisms from the cat blogger (what is their background - economics?) seem questionable. I'll see if I can find anything (haven't so far - just press release types of reporting).

I didn't see any dropbox comments that were useful criticisms. Most were questions that would be answered by careful reading.

The biggest concern with this study was the lack of blinding. And that lack of blinding could have biased some of the measuring/reporting enough to decrease or increase the effect. They address it in the discussion section, and some of their arguments are persuasive. But in the end, it still makes the results weaker.
 
thx. can you give examples.

isn't the biggest problem the null result. the "intervention" group was 180,000 people and they counted 33 fewer cases of covid???
 
thx. can you give examples.

Examples of questionable criticisms from the cat blogger:

Most of the criticism is about a failure to establish a baseline. Yet reading the study carefully, along with the supporting documents that describe the procedures in greater detail, shows that they did collect much of that information.

A lot of criticism was about variability in the baseline, as though that would bias the results. But the point of randomization is that anything that might influence the outcome one way or the other (such as variations in testing between villages) is equally likely to be found in the control group vs. the intervention group, so it wouldn't bias the results. Instead, increased variability makes the results noisier, which makes it less likely you can find a statistically significant result (the signal gets lost in the noise). And the authors did what they could to reduce the baseline variability, which is exactly what the criticism asks for.
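
A toy simulation makes the point concrete. This is a minimal sketch with made-up numbers (only the village count is borrowed from the study): however variable the village baselines are, the randomized between-arm estimate stays centered on the true effect, but its spread grows with the variability.

[CODE]
import numpy as np

rng = np.random.default_rng(42)

def one_trial(n_villages=600, baseline_spread=0.001, relative_reduction=0.10):
    # Village baseline rates vary (testing practices, local prevalence, etc.).
    baseline = np.clip(rng.normal(0.008, baseline_spread, n_villages), 0.0005, None)
    treated = rng.permutation(n_villages) < n_villages // 2   # randomize half the villages
    rate = np.where(treated, baseline * (1 - relative_reduction), baseline)
    return rate[~treated].mean() - rate[treated].mean()       # control minus intervention

for spread in (0.001, 0.004):
    est = np.array([one_trial(baseline_spread=spread) for _ in range(2000)])
    print(f"baseline spread {spread}: mean estimate {est.mean():.5f}, "
          f"SD of estimates {est.std():.5f}")
[/CODE]
The mean estimate barely moves between the two scenarios; only the spread of the estimates (the noise) changes.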

isn't the biggest problem the null result. the "intervention" group was 180,000 people and they counted 33 fewer cases of covid???

There wasn't a null result. There was a statistically significant decrease in the number of COVID cases. Over 180,000 people, the decrease was about 126 cases in 10 weeks. And that was an undercount because only 40% were tested. So the number is more like 300. And over a year that would be 3000. So extrapolating that to the US population would be a decrease in COVID cases in the millions per year (and that was with 40% mask use). Of course, that's way too much extrapolation. I just wanted to show that it's not a trivial effect, if real.
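
For anyone who wants to see that back-of-the-envelope chain spelled out, here is a rough sketch. The 330 million US population and the straight linear scaling are my own assumptions, and as noted above this kind of extrapolation is too aggressive to take literally.

[CODE]
# Back-of-the-envelope version of the extrapolation described above.
observed_difference = 126     # fewer symptomatic-seropositive cases in the intervention arm (~10 weeks)
tested_fraction = 0.40        # only ~40% of symptomatic people had blood drawn
arm_size = 180_000
us_population = 330_000_000   # assumed
weeks = 10

corrected = observed_difference / tested_fraction      # ~315 cases over the study window
rate_per_person = corrected / arm_size
us_per_year = rate_per_person * us_population * (52 / weeks)

print(f"corrected difference: ~{corrected:.0f} cases over ~{weeks} weeks")
print(f"naive US-scale extrapolation: ~{us_per_year / 1e6:.1f} million fewer cases per year")
[/CODE]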

I would be interested in a more critical look at the study from somebody with expertise. All the other reporting I can find on it treats it like a slam dunk, with quotes from random scientists crowing about how it's the nail-in-the-coffin for mask skeptics. We may not be able to find any thoughtful critiques until it's published, though.
 
can you help point me to the specifics of this claim?

Figure 1 of the preprint (screenshot not reproduced here).
The regression results are also reported in tables A6 (the pre-specified analysis) and A7 in the report (significant values marked at the 1%, 5% and 10% levels).

You could question some of the significant findings, because the confidence intervals for some measures include 1.00 (the numbers in the square brackets). In general, a confidence interval that includes 1 (for a ratio measure) or 0 (for a difference) is consistent with a null effect. This wasn't the case in the pre-specified analysis.
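
If it helps to see how such an interval is read, here is a minimal sketch using the textbook Wald formula for a risk ratio, with made-up counts rather than the paper's village-level regression (which is what actually produces the intervals in tables A6/A7):

[CODE]
import math

def risk_ratio_ci(cases_t, n_t, cases_c, n_c, z=1.96):
    """Point estimate and Wald 95% CI for a risk ratio (computed on the log scale)."""
    rr = (cases_t / n_t) / (cases_c / n_c)
    se_log = math.sqrt(1 / cases_t - 1 / n_t + 1 / cases_c - 1 / n_c)
    lo, hi = rr * math.exp(-z * se_log), rr * math.exp(z * se_log)
    return rr, lo, hi

# Purely illustrative counts:
rr, lo, hi = risk_ratio_ci(cases_t=1090, n_t=160_000, cases_c=1200, n_c=146_000)
print(f"RR = {rr:.3f}, 95% CI = ({lo:.3f}, {hi:.3f}), includes 1.00: {lo <= 1.0 <= hi}")
[/CODE]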


Not a valid comparison. For one, you are using the wrong population size. They had symptom information on 146,783 control and 160,323 intervention people. And you can't make a direct comparison like that, when your population sizes are different. The difference was at a rate of 7 per 10,000.

it's percent so I don't think there is an undercount

It's definitely an undercount. It's like you were put into a room with a hundred people facing away from you, and you were told to count how many of them had glasses. You ask them to turn around so you can see their faces, and 40 of them do so. Of those 40, 4 of them have glasses. So you leave and report that 4 people had glasses, out of the hundred people.
 
Not a valid comparison. For one, you are using the wrong population size. They had symptom information on 146,783 control and 160,323 intervention people. And you can't make a direct comparison like that, when your population sizes are different. The difference was at a rate of 7 per 10,000.

(screenshot of the reported numbers, not reproduced here)

so I realize these #s changed a little bit, but these were published alongside the claims he made in the Washington Post.


It's definitely an undercount. It's like you were put into a room with a hundred people facing away from you, and you were told to count how many of them had glasses. You ask them to turn around so you can see their faces, and 40 of them do so. Of those 40, 4 of them have glasses. So you leave and report that 4 people had glasses, out of the hundred people.

but if you reported the 10% had glasses and then reported that based on that 10 probably had glasses you'd be ok. that's what I did... multiply the population by .68%


====

here's the net net... did this huge ass study with 300,000 people looking for something that they did not find. at the end of the day they had 5 fewer cases. this is a joke any way you cut it.

 

Medical Data Science PhD here chiming in. I wouldn't worry about this study until it actually goes through peer review -- final version likely won't appear for a year at least and it was hurt badly in that regard by the antics of the economist who conducted it. I don't know if it will even be published in a medical journal at all, even with it supporting the broader "consensus" narrative.

I do think you have grasped the essence of the issue with the paper, Alex. The question people are interested in is "does increased mask utilization decrease Covid prevalence at a population level". The question the authors studied was "does increased mask utilization decrease symptomatic seroprevalence at the population level". Now, symptomatic seroprevalence is, to my knowledge, a novel metric created by the authors.

Asymptomatic seroprevalence is something I have seen before in the literature; it is a measure of the presence of antibodies in the blood of the part of the population that didn't report symptoms. It is useful for figuring out how many people are "catching covid" but aren't displaying any symptoms and are therefore likely to be missed by normal testing regimes. Here, the authors chose to screen for symptoms first, and then conduct blood draws and serological tests on those who displayed symptoms and consented.

The first question is, why do this? Why not simply randomly test for antibodies in the populations, i.e. screen for overall seroprevalence? The only answers I can think of are costs -- which seems unlikely since they ended up doing a very large number of tests anyway -- or an added experimental degree of freedom and an attempt to boost the power if they expected a small effect; that is, a method of p-hacking.
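
To be clear about the general mechanism (this is an illustration of researcher degrees of freedom, not a claim about what these authors actually did): if several candidate outcome definitions are available and whichever one comes up significant gets reported, the false-positive rate ends up well above the nominal 5%. A quick Monte Carlo sketch:

[CODE]
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_per_arm, k_outcomes, base_rate = 2000, 5000, 5, 0.01

hits = 0
for _ in range(n_sims):
    significant_somewhere = False
    for _ in range(k_outcomes):
        a = rng.binomial(n_per_arm, base_rate)   # control-arm events
        b = rng.binomial(n_per_arm, base_rate)   # intervention-arm events (no true effect)
        pooled = (a + b) / (2 * n_per_arm)
        se = np.sqrt(pooled * (1 - pooled) * 2 / n_per_arm)
        z = ((a - b) / n_per_arm) / se           # two-proportion z-test
        if abs(z) > 1.96:
            significant_somewhere = True
    hits += significant_somewhere

print(hits / n_sims)   # roughly 0.2 with five tries, versus the nominal 0.05
[/CODE]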

It basically looks like this is what happened: they conduct their analysis on symptomatic seroprevalence and get a statistically significant result in the regression. There are lots of nuts-and-bolts problems with this experiment, but even ignoring those, there's a problem here with generalizing from "symptomatic seroprevalence" to the actual thing we are interested in, i.e. population prevalence of covid-19. Basically, there are two layers of noise added on top to get from symptomatic seroprevalence to reality -- the noise introduced by the symptom reporting and consent/dropout (which was very significant, with something like 60% of those reporting symptoms not going on to get blood drawn), and the noise introduced by the operational error in the antibody testing (also nontrivial with these tests, see: https://www.ncbi.nlm.nih.gov/labs/pmc/articles/PMC7784824/pdf/YACB_0_1861885.pdf)

What does this noise do? Well, essentially it increases the expected error of the estimate, and thus the width of the confidence interval. In the case of this paper, the interval for the cloth mask arm already included zero, which under normal circumstances is interpreted as a statistically insignificant effect or null result. With the surgical masks, it butted right up against zero, and if one were to account for the added error from the symptom filtering and the error of the tests, it almost certainly would include zero as well -- again, a null result.
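
One way to see the direction of that effect: nondifferential measurement error (imperfect test sensitivity/specificity, plus the symptom-screening step) pulls the measured ratio toward 1. A minimal sketch, with prevalence on the scale quoted earlier in the thread and test characteristics that are entirely my own assumptions:

[CODE]
def observed_prev(true_prev, sensitivity, specificity):
    # Expected measured prevalence when the classification step is imperfect.
    return sensitivity * true_prev + (1 - specificity) * (1 - true_prev)

# Assumed true rates (roughly the <1% scale discussed here) and assumed test properties.
p_control, p_intervention = 0.0075, 0.0068
sens, spec = 0.90, 0.995

true_ratio = p_intervention / p_control
obs_ratio = observed_prev(p_intervention, sens, spec) / observed_prev(p_control, sens, spec)
print(f"true ratio {true_ratio:.2f} -> measured ratio {obs_ratio:.2f} (pulled toward 1)")
[/CODE]
Sampling noise then has to be judged against that smaller measured effect, which is why a borderline interval becomes more likely to cover the null.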

Usually, accounting for such extra noise isn't done -- this is out of laziness and the "publish or perish" model, and probably has a relatively large influence on things like the reproducibility crisis you may have heard of. But consider that earlier in the pandemic when Stanford researchers did antibody screening (with a different test, but liable to errors as well), that showed many more people having had Covid than were currently counted at the time (which implied lowering the estimate of scary metrics like infection fatality rate), the "consensus" was that such errors had to be accounted for and the estimates and analysis suitably adjusted, which the authors did before peer review (https://www.medrxiv.org/content/10.1101/2020.04.14.20062463v2).
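
For reference, the standard adjustment in that situation is the Rogan-Gladen correction, which backs out the true prevalence from the apparent one given the test's sensitivity and specificity. The numbers below are illustrative only, not the Stanford study's values:

[CODE]
def rogan_gladen(apparent_prev, sensitivity, specificity):
    # Classic correction of an apparent prevalence for test error.
    return (apparent_prev + specificity - 1) / (sensitivity + specificity - 1)

# Illustrative inputs (assumed):
print(rogan_gladen(apparent_prev=0.015, sensitivity=0.85, specificity=0.995))
[/CODE]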

I think that a similar adjustment is warranted for this paper, and even in today's political climate with regard to things like masks I suspect they will have to provide it if they want to get published in a high-tier medical journal. Of course, such an adjustment would show what we all suspect and was established knowledge prior to April of 2020--that masks don't have a significant impact on slowing the spread of respiratory viruses.
 

awesome. thx. I'm still digesting/ researching a lot of what you've written here... but in the meantime... I was hoping to bounce a couple of things off of you:

1. this looks like a very weak signal / indicator / effect. I mean, I can't imagine they went into this huge study with the expectation that they'd have 5 fewer covid cases. they can slice and dice the data however they want; it's just hard to get away from this number.

2. the hype in the washington post seems way over-the-top for any serious scientist... even considering all the covid craziness we've gone through... who says stuff like this:
- “a nail in the coffin” of the arguments against masks
- should basically end any scientific debate

3. have you seen the final paper:
https://www.nber.org/papers/w28734
-- from what I can tell they're completely moving away from their incredibly weak data regarding reduction in covid... it looks like they're repackaging it as "how to get people to wear masks"
-- this seems incredibly dishonest... but a familiar pattern... lead with big sensational headlines... and then run and hide when the data doesn't back it up.


- provides conclusive, real-world evidence
 


one more thing... I can't believe I'm right about this but my look at the numbers suggests that this wacky 35% figure they've been throwing around represents 1 less covid case in the intervention group versus the control group. this is like a bad joke.

 


1. Yes, I agree. That's why I suspect they will have great difficulty getting this published in a top-tier medical journal (for reference, the DanMask study -- which showed that surgical masks didn't have a significant impact with respect to preventing the wearer from getting covid -- was published in an elite journal, I believe Annals of Internal Medicine). This study has a very sloppy methodology with a lot of questions, and frankly, the pro-mask bent of the current environment may actually work against them, since their results were so weak and showed no statistically significant effect for cloth masks (which is what the CDC has recommended and is a policy flashpoint, etc.).

2. Needless to say I've been disgusted by public facing scientists and scientific journalism throughout this whole ordeal. The hyperbole and frankly bizarre triumphalist interpretation of the study in those mainstream venues is not really surprising anymore, but completely absurd and embarrassing for science.

3. I haven't read that. But this seems more like an attempt to get more than one paper out of a big, expensive study (which scientists are all guilty of), and it is in an economics venue, not a medical one. But for the main author's career (he is an economist) it may be better to publish something in that field than in a low-tier medical journal (or a joke like PNAS).
 

one more thing... I can't believe I'm right about this but my look at the numbers suggests that this wacky 35% figure they've been throwing around represents 1 less covid case in the intervention group versus the control group. this is like a bad joke.



I suspect issues like this are going to be a problem for them in peer-review. This whole notion of dropping a preprint like this with a press release is a bit shocking to begin with.

[edit: I believe they used the entire populations for the denominator (which is also a bit weird). Then you're dealing with raw differences on the scale of a few hundred in their down-sampled datasets, not single digits, but it is still a very questionable difference when factoring in all of the variability induced by the study design.]

[edit 2: Also, the estimates for symptomatic seroprevalence at <1% are hard to square with using that as a surrogate measure for covid generally. I can't really believe that fewer than 1% of the population contracted covid over the study timeline. And even if that is the case, it raises the question of whether you can generalize results from such low-prevalence conditions to actual outbreaks like you have in Europe or America, where infection rates are much higher than 1%].
 
but if you reported the 10% had glasses and then reported that based on that 10 probably had glasses you'd be ok. that's what I did... multiply the population by .68%

Except that percentage is 22% in this study. That is, out of the tens of thousands of people in the room, 10,000 turned around, and 2200 of them had eyeglasses. If we apply that percentage to all the tens of thousands of people in the room (those with symptoms that may be COVID), then we are looking at 6000+ COVID cases.

here's the net net... did this huge ass study with 300,000 people looking for something that they did not find. at the end of the day they had 5 fewer cases. this is a joke any way you cut it.



Except that the 0.68% doesn't come from (number of people with COVID)/(number of people tested for COVID). The 0.68% comes from (number of people with COVID)/(number of people asked about symptoms of COVID).

The number of (people with COVID)/(people tested) in the control group was 1116/4971 = 22%. The number of (people with COVID)/(people tested) in the intervention group was 1106/5006 = 22%. You will notice that the percentages are the same, and they should be. Out of all the people who got ILI symptoms, in both groups, the same proportion had COVID vs. other types of ILI. The difference started earlier. The difference between the control and intervention group was that significantly fewer people had ILI symptoms in the intervention group in the first place.

If we had done what you suggested above (apply the "10%" to the whole group) then we would have been looking at a numerical difference of 20 fewer cases per 10,000 tested. Still seems like a small number, until you look at the population this applies to. You apply this finding to the US and now you are talking about millions of cases. And again, that's just with a mask rate of 40%.
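
Spelling the arithmetic out with the counts quoted above, so the two denominators don't get mixed up (a quick sketch; the preprint's own adjusted figures will differ slightly):

[CODE]
# Counts quoted earlier in this thread.
control_cases, control_tested = 1116, 4971
interv_cases,  interv_tested  = 1106, 5006
control_pop,   interv_pop     = 146_783, 160_323   # people with symptom information

# Share of *tested* symptomatic people who were seropositive: ~22% in both arms.
print(control_cases / control_tested, interv_cases / interv_tested)

# Cases divided by everyone screened for symptoms: ~0.76% vs ~0.69%.
print(control_cases / control_pop, interv_cases / interv_pop)

# Difference on that scale, per 10,000 people screened: ~7.
print((control_cases / control_pop - interv_cases / interv_pop) * 10_000)
[/CODE]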

Big picture - I've had more time to go over this paper and this was a decent study. It was a very good study on the front end. It was much better than the other mask studies in terms of what they actually tested - community masking instead of the essentially useless individual masking. Yeah, earlier RCTs are a bit cleaner in their execution but they ask the wrong question - nobody thinks random individuals wearing masks is going to make a difference to COVID transmission, even if you think they work. So this study is way ahead of the others in terms of how they went about asking the question. Where it falls off is that there were limitations in execution. The researchers seemed to do the best they could within those limitations. But the limitations weakened the results.

Full disclosure - prior to this study, I've been neutral on whether or not it's been shown masks make a difference. As far as I can tell, there's a reasonable chance (in the range of even odds) that they help. Pre-clinical research is fairly solid, and the clinical research hasn't disproved it (asking the wrong questions, not big enough). And it's such a trivial ask, I'm happy to wear a mask. But this study isn't good enough to persuade a skeptic. It's probably good enough to persuade someone who is sitting on the fence, but already leaning to the approval side. It hasn't changed my mind - I'm waiting until it goes through peer-review and I see what others have to say (by "others", I mean people who actually know what they're talking about).

My semi-qualified opinion...it's reasonable to challenge this study as a slam-dunk. It's not "junk" or "dismal" though.
 
In addition to what you and jh1517 say, statistical significance isn't the end of the analysis. Statistical power is important. I posted a beginner-level introduction to that concept below. Hopefully not insulting anyone with its childishness. Just figured that many here may not be familiar with the concept. Statistical significance alone gets way overemphasized.
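
For anyone who wants to put rough numbers on it, here is a back-of-the-envelope power calculation for a simple two-proportion comparison (normal approximation). It reuses the ~0.68% rate and the 7-per-10,000 difference quoted earlier, treats the trial as individually randomized, and ignores the cluster design, which would push the real power lower still:

[CODE]
from scipy.stats import norm

p_control      = 0.0075       # ~0.68% + 7 per 10,000 (figures quoted in the thread)
p_intervention = 0.0068
n_per_arm      = 150_000      # rough arm size
alpha          = 0.05

p_bar   = (p_control + p_intervention) / 2
se_null = (2 * p_bar * (1 - p_bar) / n_per_arm) ** 0.5
se_alt  = (p_control * (1 - p_control) / n_per_arm
           + p_intervention * (1 - p_intervention) / n_per_arm) ** 0.5
z_alpha = norm.ppf(1 - alpha / 2)
diff    = abs(p_control - p_intervention)

power = norm.cdf((diff - z_alpha * se_null) / se_alt)
print(f"approximate power: {power:.2f}")   # roughly 0.6 with these toy inputs
[/CODE]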

Another thing that people always fail to consider is cost as well as benefit (see: global warming is an unmitigated disaster! For everyone! Really?). Mask wearing by the general public is shown to have a number of health downsides. The general public doesn't know how to wear a mask properly and probably couldn't even if they did know. In a hospital, masks, gloves, etc. are worn for a specific purpose and then thrown away immediately. Among the general public, dirty masks are worn multiple times. They are taken on and off by hands that are not clean. Germs are put on the masks and inhaled. They are also applied directly to the mouth and nose by dirty hands fidgeting with masks. That could be covid, but also lots of other pathogens - and those do result in infections other than covid. There are more health hazards.

Even if the results of the experiment could be replicated, and some kind of meaningful statistical power could be demonstrated, the value of masks, if the same results were consistently returned, is very small and probably does not overcome the downsides of masks.

I haven't read the study because, for one, I have a giant hangover this morning and for another, my wife is demanding I go shopping with her. Maybe I'll read it later.

https://www.bing.com/videos/search?...1EC99648227D4E5746F41EC&view=detail&FORM=VIRE
 
Medical Data Science PhD here chiming in. I wouldn't worry about this study until it actually goes through peer review -- final version likely won't appear for a year at least and it was hurt badly in that regard by the antics of the economist who conducted it.

Ooh...dirt...what happened?

Asymptomatic seroprevalence is something I have seen before in the literature; it is a measure of the presence of antibodies in the blood of the part of the population that didn't report symptoms. It is useful for figuring out how many people are "catching covid" but aren't displaying any symptoms and are therefore likely to be missed by normal testing regimes. Here, the authors chose to screen for symptoms first, and then conduct blood draws and serological tests on those who displayed symptoms and consented.

The first question is, why do this? Why not simply randomly test for antibodies in the populations, i.e. screen for overall seroprevalence?

They did do this. Apparently they haven't run those tests yet - they ran the symptomatic seroprevalence first. I'm also very interested in those results, although I didn't notice anything weird in the test results so far.

The only answers I can think of are costs -- which seems unlikely since they ended up doing a very large number of tests anyway -- or an added experimental degree of freedom and an attempt to boost the power if they expected a small effect; that is, a method of p-hacking.

I suspect there must be an element of cost and access? This is Bangladesh, after all. Anyways, like you say, I think we should wait and see what it says when it's published.
 
2. Needless to say I've been disgusted by public facing scientists and scientific journalism throughout this whole ordeal. The hyperbole and frankly bizarre triumphalist interpretation of the study in those mainstream venues is not really surprising anymore, but completely absurd and embarrassing for science.

Yeah, I think that would be a valid challenge to make wrt this study.
 