Need Help With Upcoming Episode on Mask Junk Science

well, I strongly disagree with you and think it is the correct basis.

"I have a headache" does not equal "I am proven to have covid".

If the hypothesis is that masks protect us from covid, then we need to look at the numbers from the control and experiment group that are proven to have, or not have, covid.

I couldn't care less about people in either group merely feeling sick. Maybe they have food poisoning or were bitten by a toxic insect, or whatever. There are all kinds of illnesses in third-world humid environments. People are constantly suffering from one symptom or another. Covid symptoms overlap with many forms of illness.

It's entirely possible that masks lower the incidence of people putting dirty things in their mouths or of breathing much larger mold spores in dust on the road system. Those are just examples of confounds. If you're only looking at those who report symptoms, then this experiment means nothing, zip, zero, nada.
 
Shooting from the hip on Masks. (Not serious, but very serious)

.001 chance of stopping covid in a small room with closed windows/no ventilation, duh

30% chance that if you comply with wearing a mask, you won't insist on informed consent for a vaccine.

80% chance that you will feel like you're "helping" regardless of how far away the proof is.

45% chance you will be offended by those who don't comply.


(obviously joking about the numbers, but serious about the effect)
 
well, I strongly disagree with you and think it is the correct basis.

"I have a headache" does not equal "I am proven to have covid".

If the hypothesis is that masks protect us from covid, then we need to look at the numbers from the control and experiment group that are proven to have, or not have, covid.

I couldn't care less about people in either group merely feeling sick. Maybe they have food poisoning or were bitten by a toxic insect, or whatever. There are all kinds of illnesses in third-world humid environments. People are constantly suffering from one symptom or another. Covid symptoms overlap with many forms of illness.

It's entirely possible that masks lower the incidence of people putting dirty things in their mouths or of breathing much larger mold spores in dust on the road system. Those are just examples of confounds. If you're only looking at those who report symptoms, then this experiment means nothing, zip, zero, nada.

1) At no point in this study was that pre-specified as an outcome, nor was this data collected in any way that would allow you to test that outcome. Suddenly making up a new outcome, and then using data collected for a different reason to serve as a "measure" of your outcome, because you weren't happy with the results of the real outcome, would be the junkiest of all junk science.

2) Using this measure, masks could prevent 99% of COVID cases, and your measure would still show up as statistically insignificant. Try it for yourself and see. Make up some huge number of people who report respiratory symptoms in the no-mask group - let's say 10,000. Applying our seropositivity rate of 22%, that gives us 2200 in the no-mask/proven to have COVID group and 7800 in the no-mask/proven to not have COVID group. Now let's say only 100 people reported respiratory symptoms in the mask group. Applying our seropositivity rate of 22% gives us 22 in the mask/proven to have COVID group and 78 in the mask/proven not to have COVID group. Run your Chi-square test and see what you get... p=1. So masks prevented 2178/2200 cases, but your "outcome" shows that "masks don't make a difference"? What if it was the other way around? What if masks caused COVID and there were 2178 more cases in the mask group, but the researchers went ahead and claimed that masks weren't harmful, because p=1? Would you really be prepared to buy that?

3) If you want to use that as an outcome - people proven to have COVID vs. people proven not to have COVID and mask/no-mask - you need to sample a different population, and you need to take a random sample. And you need a much larger sample. The researchers gave 25,000 as the number needed for this sample (I don't know what the power calculation was). And some of this was collected. But you don't know the results.

4) Alex asked about doing a test on the primary study outcome, and varying the numbers by a few cases up or down. Your analysis tells him nothing about what he wanted to know.
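The toy calculation in point 2 is easy to verify numerically. A minimal sketch in Python (chi2_2x2 is an illustrative helper, not from the study; the counts are the hypothetical ones from the example above):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, p-value) with 1 degree of freedom."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2))  # survival function of chi2 with 1 dof
    return stat, p

# Hypothetical counts from the example: 22% seropositivity in both arms,
# but 100x more symptom reporters in the no-mask arm.
stat, p = chi2_2x2(2200, 7800, 22, 78)  # rows: no-mask, mask
print(stat, p)  # 0.0 1.0 -- the test sees no "difference" at all
```

Because both rows have exactly the same 22% seropositivity, the observed counts equal the expected ones and the statistic is zero, despite the 100x gap in how many people got sick in each arm.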
 
1) At no point in this study was that pre-specified as an outcome, nor was this data collected in any way that would allow you to test that outcome. Suddenly making up a new outcome, and then using data collected for a different reason to serve as a "measure" of your outcome, because you weren't happy with the results of the real outcome, would be the junkiest of all junk science.

2) Using this measure, masks could prevent 99% of COVID cases, and your measure would still show up as statistically insignificant. Try it for yourself and see. Make up some huge number of people who report respiratory symptoms in the no-mask group - let's say 10,000. Applying our seropositivity rate of 22%, that gives us 2200 in the no-mask/proven to have COVID group and 7800 in the no-mask/proven to not have COVID group. Now let's say only 100 people reported respiratory symptoms in the mask group. Applying our seropositivity rate of 22% gives us 22 in the mask/proven to have COVID group and 78 in the mask/proven not to have COVID group. Run your Chi-square test and see what you get... p=1. So masks prevented 2178/2200 cases, but your "outcome" shows that "masks don't make a difference"? What if it was the other way around? What if masks caused COVID and there were 2178 more cases in the mask group, but the researchers went ahead and claimed that masks weren't harmful, because p=1? Would you really be prepared to buy that?

3) If you want to use that as an outcome - people proven to have COVID vs. people proven not to have COVID and mask/no-mask - you need to sample a different population, and you need to take a random sample. And you need a much larger sample. The researchers gave 25,000 as the number needed for this sample (I don't know what the power calculation was). And some of this was collected. But you don't know the results.

4) Alex asked about doing a test on the primary study outcome, and varying the numbers by a few cases up or down. Your analysis tells him nothing about what he wanted to know.
Ok Ellis,
Now your bias and lack of understanding are starting to show through.

How are you making statements like, "So masks prevented 2178/2200 cases, but your "outcome" shows that "masks don't make a difference"?"

Huh? No one knows how many cases the masks did or didn't prevent. You can't make that extrapolation - at least not that way.

Everything else you wrote is, sorry to be harsh, gibberish. Of course you can take a sample from each of the control and experimental groups, do a blood test, and then perform inferential statistical analysis. That's done every day. That's what confidence intervals are used for in the way that most people encounter them.

Why did the researchers look at blood serum if it wasn't the definitive outcome (i.e., the dependent variable)?

If the researchers are using self-reported symptoms as the dependent variable, then the study is junk science. Period. Full stop.

jh1517 stated early in this thread - "The question the authors studied was 'does increased mask utilization decrease symptomatic seroprevalence'" - was jh1517 wrong? Why did you not debate him on that metric?
 
Does anyone have the ability to calculate the P value if there were a couple of more positive covid blood tests in the intervention group.

... or a couple of more negative tests in the control group.

isn't the published p val very close to null effect already... what would it take to push it over?
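One way to get a feel for this (a naive sketch only: the study's actual analysis was a regression with village-level clustering, and the counts below are made up, not the paper's figures) is to perturb an unadjusted two-proportion chi-square test and watch the p-value move:

```python
import math

def two_prop_chi2(x1, n1, x2, n2):
    """Unadjusted chi-square p-value comparing proportions x1/n1 vs x2/n2
    (2x2 table, 1 degree of freedom, no continuity correction)."""
    a, b, c, d = x1, n1 - x1, x2, n2 - x2
    n = n1 + n2
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return math.erfc(math.sqrt(stat / 2))

# Hypothetical counts (NOT the study's figures): 1100 symptomatic-seropositive
# out of 160,000 in the control arm vs 1000 out of 160,000 in the mask arm.
# Add a few extra positives to the intervention arm and watch p creep up.
for extra in range(6):
    p = two_prop_chi2(1100, 160_000, 1000 + extra, 160_000)
    print(f"+{extra} intervention cases: p = {p:.4f}")
```

With counts this small relative to the sample, shifting a handful of cases between arms noticeably shrinks the gap and inflates p, which is the fragility Alex is asking about.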


Alex,
Here are my thoughts now that I've had a breather from work and a few minutes to really look at this.

First, I'm done toying with Ellis. I have long suspected that he is an IO operative; now I am sure he is. He has revealed himself. Whether self-directed or having a handler matters not. IO is IO (ironic b/c some people here think I am an IO stooge with my lack of acceptance of some conspiracy theories that fly around here as truth). So that side mission is out of the way.

Anyhow, the purpose of the study, as stated by the authors was to see if mask wearing reduced the incidence of covid as defined by presence of it in blood serum (or absence of presence).

Regression analysis is, at the end of the day, correlational, not demonstrating causation. I know people who try to say otherwise, some of them knowledgeable, but they are fundamentally wrong. I am not alone in that conclusion. The best this study could have ever hoped to state, given regression analysis, is that mask wearing is associated with reduced incidence of covid in blood serum.

The correct analysis would be something like ANOVA. You introduce the independent variable of mask wearing. You take a random sample of the population of mask wearers and a random sample from the control group (non-wearers) and you test for signs of covid exposure in the blood. Very simple. Run ANOVA on the results.
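A side note on the ANOVA suggestion: with only two groups, a one-way ANOVA is algebraically the same test as a pooled two-sample t-test (F = t^2), and on a binary seropositive/seronegative outcome it amounts to comparing two proportions. A sketch with made-up data (8/100 vs 4/100 positives; the helper functions are illustrative, not from the study):

```python
import math

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA across a list of groups."""
    all_x = [x for g in groups for x in g]
    grand = sum(all_x) / len(all_x)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b = len(groups) - 1
    df_w = len(all_x) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

def two_sample_t(g1, g2):
    """Pooled-variance two-sample t statistic."""
    m1, m2 = sum(g1) / len(g1), sum(g2) / len(g2)
    ss = sum((x - m1) ** 2 for x in g1) + sum((x - m2) ** 2 for x in g2)
    sp2 = ss / (len(g1) + len(g2) - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / len(g1) + 1 / len(g2)))

# Made-up binary outcomes (1 = seropositive): 8/100 no-mask, 4/100 mask
no_mask = [1] * 8 + [0] * 92
mask = [1] * 4 + [0] * 96

F = one_way_anova_F([no_mask, mask])
t = two_sample_t(no_mask, mask)
print(F, t ** 2)  # identical values: two-group ANOVA is just a t-test
```

So "run ANOVA" and "compare the two blood-test rates" are, for two groups, the same proposal in different clothing.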

Why did they want to look at each town separately? Masks either work or they don't. Town of residence should not matter one bit. Actually, I can guess why they did that. They have all kinds of noise in the data, like how much mask wearing and how much social distancing. These are continuous variables, not binary, and they are trying to smooth that out by using regression analysis. I get it. Actually, one of the obvious flaws of the study is that they are trying to handle all of these different continuous variables fairly and, at the same time, claim a causal result. Way too messy! Too much uncertainty! Too much subjective defining!

If someone agrees and wants to pass me agreed upon population sizes (all mask wearers and then all non-mask wearers) and then figures that we all agree on for the outcome of blood tests, I'll run the ANOVA for you (or there's probably an online site for that - just plug in the number kind of thing). I think the results are not going to be statistically significant.

The problem I'm having is that what they did is not clear to me (hence why I ask for agreed upon figures).

They queried both the control and experimental groups for symptoms. Those who reported symptoms (in either group?) were then asked to sit for a blood test. The results being analyzed are of the blood tests of those who agreed to be tested. I believe that's it.

If the above is accurate, then regression is absolutely wrong and ANOVA is easy (or even confidence intervals).

The 7.62% and the 8.62% reporting symptoms are red herrings. They're just there for informational purposes (though Ellis appears to have followed the false scent). They're just telling us what the researchers had to work with when requesting permission and agreement for a blood test. Otherwise, it means nothing. Nothing can be inferred from it and, in fairness, the authors are not asking us to infer anything from it nor inferring anything themselves.

The 10,952 figure (consenting to blood test) is a key figure for the analysis. However, the authors then obscure the ability to assess the results by not telling us the split of the 10,952 between mask wearers and non-mask wearers. We need that split to perform ANOVA. If we had that split, we could reverse-engineer the number in each group (mask/no mask) that blood-tested covid positive. Maybe that info is somewhere deeper in the write-up and maybe someone with more time could dig it out.

So, to be clear, there are three major distillations of the population (N):
1. masks v no masks
2. self-reported symptoms v no self-reported symptoms
3. consent to blood test and actually be tested -> blood test positive v blood test negative

That should cut through the smoke and mirrors in the abstract.

Someone gets me the figures for the distillations and I will give back the P value for this experiment.
 
How are you making statements like, "So masks prevented 2178/2200 cases, but your "outcome" shows that "masks don't make a difference"?"

Huh? No one knows how many cases the masks did or didn't prevent. You can't make that extrapolation - at least not that way.

It's a way to find out if your test is useful. Create a data set where you know that there is a large effect, because you put it there. Then use your test and see if it finds the effect.

Your test does not find the effect. And it's not a surprise, because it compares the number of people who got sick from COVID with the number of people who got sick from something else. Like you said at the beginning, nobody cares about people who got a headache, and nobody cares if there were more or fewer people who got sick from something else. So nobody cares if your test finds a significant difference or not.

We only care about finding the people who got sick from COVID. And the comparison group for that is the people who didn't get sick, as well as the people who got sick, but it wasn't from COVID.

Why did the researchers look at blood serum if it wasn't the definitive outcome (i.e dependent variable)?

We agree that it was the definitive outcome. But "how many people got sick with something else?" was not.

jh1517 stated early in this thread - "The question the authors studied was 'does increased mask utilization decrease symptomatic seroprevalence'" - was jh1517 wrong? Why did you not debate him on that metric?

AFAICT, jh1517 was referring to "does increased mask utilization decrease symptomatic seroprevalence within the population?" not "within people who get sick with something else?"
 
Your test does not find the effect. And it's not a surprise, because it compares the number of people who got sick from COVID with the number of people who got sick from something else. Like you said at the beginning, nobody cares about people who got a headache, and nobody cares if there were more or fewer people who got sick from something else. So nobody cares if your test finds a significant difference or not.

OK. I agree, but that's what is so screwed up with this study. They definitely state that they took the people reporting symptoms and tested their blood and they also state that the blood serum test is the ultimate dependent variable, at least as far as non-mask wearers go.

I was going off this statement from the study, "Outcomes included symptomatic SARS-CoV-2 seroprevalence (primary) and prevalence of proper mask-wearing, physical distancing, and symptoms consistent with COVID-19 (secondary)." That suggested to me that they were testing a sample of mask wearers and non-mask wearers for blood serum levels. What does it suggest to you? Bad writing on their part?

What they use for mask wearers is totally unclear to me. What is the dependent variable for non-mask wearers?

Yeah. They should have randomly selected a sample from both the non-mask wearers and the mask wearers, tested their blood, and been done with it with an ANOVA. That would address asymptomatic covid too. That's what I thought I was writing. I was unclear. My excuse is that I kept referring back to the study, which is a pathetic mishmash of concepts and methods. But, you're correct about what I did write.


We only care about finding the people who got sick from COVID. And the comparison group for that is the people who didn't get sick, as well as the people who got sick, but it wasn't from COVID.

If that's what you think they were doing, then they should have taken the people who got sick from covid, defined by positive blood test, then looked at the proportion of the covid-sick people who wore masks v those who did not. But that is messy, because they would then have to work their way back to how that data relates to the initial population of all mask wearers v all non-mask wearers. Where are they doing that? What are the figures? If you understand it, then spell it out. It eludes me. This study is way too complicated and there is way too much room for numbers manipulation. My suggested way is the right way. The way I originally wrote it, which you correctly pointed out as confirmation bias, seems to me to be what they did (by your description), just in the opposite direction.


We agree that it was the definitive outcome. But "how many people got sick with something else?" was not.

Agree. It was you who introduced the idea that they were looking at the sick v not-sick as self-reported.


AFAICT, jh1517 was referring to "does increased mask utilization decrease symptomatic seroprevalence within the population?" not "within people who get sick with something else?"
Agree and I did not suggest otherwise. Again it was you who confused what was being measured.

You wrote, "If you want to use that as an outcome - people proven to have COVID vs. people proven not to have COVID and mask/no-mask - you need to sample a different population, and you need to take a random sample. And you need a much larger sample. The researchers gave 25,000 as the number needed for this sample (I don't know what the power calculation was). And some of this was collected. But you don't know the results."

I totally agree. That is the only way to do a fair study.

I still do not understand what the hell they did. It's been a long day and a long week already. I'm not inclined to re-read their messy write-up. Please spell out for me exactly what you think they did and why. Thanks.
 
Why did they want to look at each town separately?

There are several reasons - these are just some. Someone who lives in a village where mask wearing only increased to 20% is going to be at greater risk than someone who lives in a village where it increased to 80%. And that applies to all the people who live in the village. Same thing goes for how much COVID is circulating in a particular village. Maybe one village is averaging 1 case a day, and the other is averaging 1 per week. And one village has 500 people in it and the other 2000. If both villages added 10 cases each to the total of symptomatic COVID cases, during the study, you miss out on a lot of useful information if you separate those cases from the information about their baseline risk.

You run into Simpson's paradox.
https://en.wikipedia.org/wiki/Simpson's_paradox

Your variance calculations are inaccurate because people within a village are more correlated than a random selection of people from different villages, which throws off your probability calculations.
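The Simpson's paradox point can be made concrete with two invented villages: masks look better within each village, yet worse in the pooled totals, purely because of who lives where. The numbers below are fabricated solely to produce the reversal:

```python
# (infected, total) per arm, per hypothetical village
villages = {
    "high-risk village": {"mask": (30, 200), "no_mask": (20, 100)},
    "low-risk village":  {"mask": (2, 100),  "no_mask": (8, 200)},
}

def rate(infected, total):
    return infected / total

# Within each village, the mask arm has the LOWER infection rate...
for name, arms in villages.items():
    m = rate(*arms["mask"])
    n = rate(*arms["no_mask"])
    print(f"{name}: mask {m:.1%} vs no-mask {n:.1%}")

# ...but pooled across villages the ordering reverses, because most
# mask wearers happen to live in the high-risk village.
mask_inf = sum(v["mask"][0] for v in villages.values())
mask_tot = sum(v["mask"][1] for v in villages.values())
nom_inf = sum(v["no_mask"][0] for v in villages.values())
nom_tot = sum(v["no_mask"][1] for v in villages.values())
print(f"pooled: mask {rate(mask_inf, mask_tot):.1%} "
      f"vs no-mask {rate(nom_inf, nom_tot):.1%}")
```

This is exactly why pooling everyone and ignoring the village-level baseline risk can flip a real effect on its head.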

The numbers you are looking for are in table A1 in the paper.
 
There are several reasons - these are just some. Someone who lives in a village where mask wearing only increased to 20% is going to be at greater risk than someone who lives in a village where it increased to 80%. And that applies to all the people who live in the village. Same thing goes for how much COVID is circulating in a particular village. Maybe one village is averaging 1 case a day, and the other is averaging 1 per week. And one village has 500 people in it and the other 2000. If both villages added 10 cases each to the total of symptomatic COVID cases, during the study, you miss out on a lot of useful information if you separate those cases from the information about their baseline risk.

You run into Simpson's paradox.
https://en.wikipedia.org/wiki/Simpson's_paradox

Your variance calculations are inaccurate because people within a village are more correlated than a random selection of people from different villages, which throws off your probability calculations.
No argument from me. I even noted as much, in fairness to the researchers.

However, this is confusing/commingling way too many variables. And, again, the best they can say, assuming all is well with the study, is that mask wearing is associated with a lower prevalence of covid infection; not that it prevents it.

Imagine all of the potential confounding variables in those towns, especially at the ends of the mask-wearing spectrum. You are assuming that low mask wearing exposes mask wearers to higher risk. That is pro-mask bias talking. There are a host of other variables that could contribute to more (or less) risk. All kinds of cultural/behavioral/environmental factors. In fact, I call BS on it. It assumes unproven facts. You're doing what you accused me of doing: designing a study to confirm what I want to confirm. You have already decided that masks work and no masks represent a risk, and you're designing the study around that belief, if that's what they did and why.

Again, you have a huge sample of mask wearers and same for non-mask wearers. Randomly sample them and test their blood. If you think you need 25K and can't manage to handle that many tests then don't do the study! Don't settle for junk science just to get published and funded!

That said, I still do not understand what the hell they were measuring as dependent variables for non-symptom reporters. Blood serum for those reporting symptoms. And _____ for those not reporting symptoms.
 
BTW, this, in the study, is also a joke: "All intervention arms received free masks, information on the importance of masking, role modeling by community leaders, and in-person reminders for 8 weeks. The control group did not receive any interventions. Neither participants nor field staff were blinded to intervention assignment."
Like people in these towns/villages don't talk to each other and share info.

And here, Ellis, they say they did what I stated they should, "We attempted to collect blood samples from all symptomatic individuals. Of these, 10,952 (40.3%) consented to have blood collected, including 40.8% in the treatment group and 39.9% in the control group (the difference in consent rates is not statistically significant, p = 0.24). We show in Table A2 that consent rates are about 40% across all demographic groups in both treatment and control villages."

So ANOVA could be and should be used.

I retract all of my retractions and regret having succumbed to Ellis' distractions.

Use ANOVA.

But this study, now that I've wasted a couple hours of my life reading it thoroughly, is classic WHO/World Bank NGO funding-seeking bullshit, full of fluff about interventions and % success at getting people to mask up, blah blah blah. All of that obscures what should be the meat and potatoes of a simple study. Masked v unmasked. Sample both. Test the blood. Run ANOVA. Done.

That it wasn't shows me that they have ulterior motives. The statistics in this study are BS too. If they ran ANOVA significance would disappear. No more Ellis defending this garbage by making up what they did or didn't do.
 
OK. I agree, but that's what is so screwed up with this study. They definitely state that they took the people reporting symptoms and tested their blood and they also state that the blood serum test is the ultimate dependent variable, at least as far as non-mask wearers go.

What they use for mask wearers is totally unclear to me. What is the dependent variable for non-mask wearers?

Same thing for both.

Yeah. They should have randomly selected a sample from both the non-mask wearers and the mask wearers, tested their blood, and been done with it with an ANOVA. That would address asymptomatic covid too. That's what I thought I was writing. I was unclear. My excuse is that I kept referring back to the study, which is a pathetic mishmash of concepts and methods. But, you're correct about what I did write.

Yeah. They took the samples, but they haven't run the tests yet. Very frustrating.

If that's what you think they were doing, then they should have taken the people who got sick from covid, defined by positive blood test, then looked at the proportion of the covid-sick people who wore masks v those who did not. But that is messy, because they would then have to work their way back to how that data relates to the initial population of all mask wearers v all non-mask wearers. Where are they doing that?

That's what they use regression for. Don't ask me to explain the regression to you.

What are the figures? If you understand it, then spell it out. It eludes me. This study is way too complicated and there is way too much room for numbers manipulation.

I still do not understand what the hell they did. I've a long day and a long week already. Not inclined to re-read their messy write-up. Please spell out for me exactly what you think they did and why. Thanks.

Rats.

The main outcomes are fairly straightforward. And what makes or breaks a study is usually the main outcomes anyways. The regressions are mostly there to adjust numbers for some of the main comparisons (equalize the factors that might confound the results otherwise), to test all the secondary interventions they added, and to run tests on how robust or balanced the results are. I would just ignore them.
 
Masked v unmasked. Sample both. Test the blood. Run ANOVA. Done.

That it wasn't shows me that they have ulterior motives. The statistics in this study are BS too. If they ran ANOVA significance would disappear. No more Ellis defending this garbage by making up what they did or didn't do.

What result are you talking about that you think they didn't report or run a statistical test on? They reported on all their pre-specified outcomes, plus significance testing. Including the test you just suggested (except with regression, not ANOVA).
 
What result are you talking about that you think they didn't report or run a statistical test on? They reported on all their pre-specified outcomes, plus significance testing. Including the test you just suggested (except with regression, not ANOVA).
Nope.

You are being an apologist for junk science. I have already stated my opinion of why you are. In fact, it's not even science. It is a naked grab for funding and/or an information operation designed to influence public perception. I know what those look like. It's kind of like the scurrilous and totally phony Steele dossier. These things are written to grab the attention of the media, who then, in turn, disseminate the story into the popular collective mind.

The "study" has all kinds of red herrings. Look how we influenced mask wearing! Aren't we cool. Mask wearers reported fewer symptoms! Go on media, latch onto that! Blood serum tests for those that didn't wear masks, but later for those who did too. Maybe. We won't show those numbers. It's a series of hooks for the media. I read it (Unlike earlier when I was busy at work and just looked at the abstract and comments here). They talk about all kinds of things that aren't what the study is supposed to be. In fact, they contradict themselves in the study's objectives, methods, dependent variables, etc. It is very confusing. That is a red flag for it being exactly what I say it is. They're throwing crap at the wall and seeing what sticks. It's story telling dressed up to look like science.

This is impossible to make sense of because it doesn't make sense. It is internally inconsistent in its methods and objectives. It's not just badly written up. It is deliberately a hot mess. It's smoke and mirrors. Not even you can consistently explain what the study is supposed to be and how it was done. Your answers to my questions are completely vague. That is on purpose to keep this pile of crap alive and swarming with flies.

If you want to continue arguing for the virtues of this thing, go ahead. You're just confirming my opinion of what you're all about. You have contradicted yourself, made false statements and misquotes from the "study", and piled on misdirections. I will not discuss further with you because you are not an honest player, IMO. I have no time for your propaganda.
 
Me: OK. I agree, but that's what is so screwed up with this study. They definitely state that they took the people reporting symptoms and tested their blood and they also state that the blood serum test is the ultimate dependent variable, at least as far as non-mask wearers go.

What they use for mask wearers is totally unclear to me. What is the dependent variable for non-mask wearers?

You: Same thing for both. Means what? Vague.

Me: Yeah. They should have randomly selected a sample from both the non-mask wearers and the mask wearers, tested their blood, and been done with it with an ANOVA. That would address asymptomatic covid too. That's what I thought I was writing. I was unclear. My excuse is that I kept referring back to the study, which is a pathetic mishmash of concepts and methods. But, you're correct about what I did write.

You: Yeah. They took the samples, but they haven't run the tests yet. Very frustrating. But that is the objective: to test blood serum levels. So how is the study concluded if they haven't run the test yet? It's right there in the study (blood serum being the objective). Read it.

Me: If that's what you think they were doing, then they should have taken the people who got sick from covid, defined by positive blood test, then looked at the proportion of the covid-sick people who wore masks v those who did not. But that is messy, because they would then have to work their way back to how that data relates to the initial population of all mask wearers v all non-mask wearers. Where are they doing that?

You: That's what they use regression for. Don't ask me to explain the regression to you. Gotcha! You're easy. They not only contradict that later in the study, but that is shit analysis even if that's what they did. Those results are in no way valid or reliable. It's all mathematical magic that no one understands, that will be destroyed if this hunk of crap ever gets peer reviewed. But they don't care about peer review because by the time that happens, if ever, the "study" will have been cited and assimilated by millions of pro-fascist state covid panic zombies. This is the nature of modern IOs.

Me: What are the figures? If you understand it, then spell it out. It eludes me. This study is way too complicated and there is way too much room for numbers manipulation.
I still do not understand what the hell they did. I've a long day and a long week already. Not inclined to re-read their messy write-up. Please spell out for me exactly what you think they did and why. Thanks.
You: Rats. ?

The main outcomes are fairly straightforward. LOL. Ok, if you say so Ace
And what makes or breaks a study is usually the main outcomes anyways. Duh. So what is the main outcome? Cute to not say. Keep on bobbing and weaving.
The regressions are mostly there to adjust numbers for some of the main comparisons (equalize the factors that might confound the results otherwise), to test all the secondary interventions they added, and to run tests on how robust or balanced the results are. I would just ignore them. Bwahaha ha ha ha ha. A shorter Ellis "don't question the significance finding. Just accept that masks are good!" OK Ace. It's been fun, but I'm done with your nonsense. You must train harder at your trade craft. It is not good. Maybe they sent you here (and no doubt elsewhere) to hone your skills. You need it.

I'm sure you will respond with something about conspiracy theories instead of making an attempt at explaining the study in three detailed yet crisp paragraphs. Skeptiko is indeed and unfortunately plagued with stupid conspiracy theories. I'll agree with you there. But info ops are real. The lesson from MK Ultra etc was not shooting presidents or blowing up buildings full of citizens on US soil or to create sexually abused zombie children Manchurian candidates. No. That's crude risky unpleasant low pay-off stuff. Bad risk/reward matrix. Rather, the lesson learned was that the best mind influence is normal human psychology weaponized to influence people and make them compliant in kinder and gentler ways, like information operations in mass media/social media. Own the message, own reality, own the world. Sick shit, but that's what it is. You're a part of it as is this "study".
 
"Our primary outcome measures symptomatic seroprevalence: this is the fraction of individuals who are symptomatic during our intervention period and seropositive at endline. Some of these individuals may have antibodies from infections occurring prior to our intervention. If so, the impact of our intervention on symptomatic seroprevalence may understate the impact on symptomatic seroconversions occurring during our intervention (i.e. the fraction of symptomatic infections prevented by masks) ... the magnitude of the difference between symptomatic seroconversions and symptomatic seropositives will depend on the fraction of symptomatic seropositives which are pre-existing at baseline."

And they didn't even perform this analysis even though it is the primary outcome (in their own words). WTF?
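As the quoted passage defines it, the primary outcome is just a fraction. A minimal sketch, with counts that are purely hypothetical illustration (not from the paper):

```python
def symptomatic_seroprevalence(symptomatic_and_seropositive: int, population: int) -> float:
    """Fraction of individuals who were symptomatic during the intervention
    period AND seropositive at endline (the paper's stated primary outcome)."""
    return symptomatic_and_seropositive / population

# Hypothetical numbers, chosen only to mirror the ~0.76% scale quoted later in the thread
print(symptomatic_seroprevalence(76, 10_000))  # -> 0.0076, i.e. 0.76%
```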

But they do have a cool analysis regarding mask-wearing and the color of the masks. And they sure do speak to the WHO about how affordable their program would be to fund and how they are really, really good at getting people to wear masks (cart before horse: they have yet to analyze their primary outcome measure).

But the media sure eats it up.
 
Me: OK. I agree, but that's what is so screwed up with this study. They definitely state that they took the people reporting symptoms and tested their blood and they also state that the blood serum test is the ultimate dependent variable, at least as far as non-mask wearers go.

What they use for mask wearers is totally unclear to me. What is the dependent variable for non-mask wearers?

You: Same thing for both. Means what? Vague.

Everyone who had symptoms was asked for a serum sample. Full stop. No "proper mask-wearing" information was collected on individuals, so there were no "proper mask-wearing individuals" and "non-proper mask wearing individuals". There were only "proper mask-wearing in 'x' proportion" villages.

"Outcomes included symptomatic SARS-CoV-2 seroprevalence (primary) and prevalence of proper mask-wearing, physical distancing, and symptoms consistent with COVID-19 (secondary)."

Where is that Oxford comma when you need it, right?

You: Rats. ?

I was making a light-hearted joke. "Please don't ask me to explain something that would take an essay just to cover the bare basics. Oh, you are asking me? Rats."

The main outcomes are fairly straightforward. LOL. OK, if you say so, Ace.
And what makes or breaks a study is usually the main outcomes anyway. Duh. So what is the main outcome? Cute not to say. Keep on bobbing and weaving.
The regressions are mostly there to adjust numbers for some of the main comparisons (equalize the factors that might confound the results otherwise), to test all the secondary interventions they added, and to run tests on how robust or balanced the results are. I would just ignore them. Bwahaha ha ha ha ha. A shorter Ellis: "don't question the significance finding. Just accept that masks are good!" OK, Ace. It's been fun, but I'm done with your nonsense. You must train harder at your tradecraft. It is not good. Maybe they sent you here (and no doubt elsewhere) to hone your skills. You need it.

Research papers aren't directed at people who know little to nothing about the subject and need their hand held on every aspect of the study. And they are not presented in easily-digested 30 second sound-bites. Research papers are directed at peers, and try to pack in any and all relevant information. If you want to read, understand, and criticize the paper, despite all that (like me - I'm definitely not a "peer"), it's on you to put in the work. Even a peer would at least read the whole paper carefully before criticizing it. Someone who is starting at the ground floor cannot just skip that step and expect to understand anything. And you not understanding anything, under those circumstances, is not the fault of the authors. Their only obligation is to make sure their peers can understand it.

I read it (unlike earlier, when I was busy at work and just looked at the abstract and the comments here). They talk about all kinds of things that aren't what the study is supposed to be. In fact, they contradict themselves in the study's objectives, methods, dependent variables, etc. It is very confusing. That is a red flag for it being exactly what I say it is. They're throwing crap at the wall and seeing what sticks. It's storytelling dressed up to look like science.

This is impossible to make sense of because it doesn't make sense. It is internally inconsistent in its methods and objectives. It's not just badly written up. It is deliberately a hot mess. It's smoke and mirrors. Not even you can consistently explain what the study is supposed to be and how it was done. Your answers to my questions are completely vague. That is on purpose to keep this pile of crap alive and swarming with flies.

None of that is about the paper. It's about what you are able to understand on a quick pass through a paper full of methodologies and analyses you have little to no knowledge about. And it turns out the answer is "not much". That's on you, not them.

And I apologize. Based on your demeanor, I assumed a basic level of understanding of some of these methodologies and analyses. I didn't mean to be vague. I thought my answers would be easily understandable. I realize that sounds condescending. I don't mean it that way. I explain things I know about, and I like it when people explain things to me when I don't know something. I just didn't guess right on what I should explain and what I didn't need to.

And all this applies to me too. I don't have enough knowledge of some of the methodologies and analyses to fully understand them. I've put in the time to read and re-read sections of the paper until I understand what they are saying. And I've been careful to direct my criticisms at the parts of the study I do have knowledge about, and to wait for peer review on the parts I don't. None of that should make me the enemy.
 
First off, please accept my apologies for the harsh tone. You've been a good sport about that. I am having a very rough week at work and it's showing.
Everyone who had symptoms was asked for a serum sample. Full stop. No "proper mask-wearing" information was collected on individuals, so there were no "proper mask-wearing individuals" and "non-proper mask wearing individuals". There were only "proper mask-wearing in 'x' proportion" villages.

"Outcomes included symptomatic SARS-CoV-2 seroprevalence (primary) and prevalence of proper mask-wearing, physical distancing, and symptoms consistent with COVID-19 (secondary)."

Where is that Oxford comma when you need it, right?

You are frustrating because I said they measured everyone who agreed to give blood serum, and then you said they didn't and that it would be bad methodology. You had me convinced for a minute. Then I carefully read the study and went back to my original position. Now you're agreeing with me. So, where are the results of the blood serum analysis? Maybe I missed them. I've been asking you to point them out. You have not so far.



Research papers aren't directed at people who know little to nothing about the subject and need their hand held on every aspect of the study. And they are not presented in easily-digested 30 second sound-bites. Research papers are directed at peers, and try to pack in any and all relevant information. If you want to read, understand, and criticize the paper, despite all that (like me - I'm definitely not a "peer"), it's on you to put in the work. Even a peer would at least read the whole paper carefully before criticizing it. Someone who is starting at the ground floor cannot just skip that step and expect to understand anything. And you not understanding anything, under those circumstances, is not the fault of the authors. Their only obligation is to make sure their peers can understand it.

Nope. Science is not supposed to be so arcane. At least the abstract should be clear at a high level, but it's not. It's on them to clearly display the methods and results.

I'm not a peer either in a strict sense. However, I used to read a lot of research papers, most recently for my career in insurance. Normally, I can understand what they did in the study and the results. I have a master's degree in economics, and I have had advanced graduate-level statistics and research design training. I would not call myself a statistical expert, but I understand the concepts and I use statistics in my work, including multivariate regression, ANOVA, chi-squared, survival analysis, small-population/rare-event analysis like Poisson distributions, and much more. Yet I still have no clue what they did in this study, and I stick to my point that these guys are all over the place, for the reasons I've given. This is science as an information operation and/or funding-seeking.


None of that is about the paper. It's about what you are able to understand on a quick pass through a paper full of methodologies and analyses you have little to no knowledge about. And it turns out the answer is "not much". That's on you, not them.
Nope. Science is not supposed to be so arcane. At least the abstract should be clear at a high level, but it's not. It's on them to clearly display the methods and results.

I apologize. Based on your demeanor, I assumed a basic level of understanding of some of these methodologies and analyses. I didn't mean to be vague. I thought my answers would be easily understandable. I realize that sounds condescending. I don't mean it that way. I explain things I know about, and I like it when people explain things to me when I don't know something. I just didn't guess right on what I should explain and what I didn't need to.

Nice try. See above.

And all this applies to me too. I don't have enough knowledge of some of the methodologies and analyses to fully understand them. I've put in the time to read and re-read sections of the paper until I understand what they are saying. And I've been careful to direct my criticisms at the parts of the study I do have knowledge about, and to wait for peer review on the parts I don't. None of that should make me the enemy.

Then spell out what they did. I've asked you many times. You've taken the time to write a whole bunch of comments. Why not take the time to spell it out for us dummies?
 
From Table A1 in the paper. In my copy it's on page 47. It's (row 2) minus (row 3) plus (row 4).



I don't know. It will be explained somewhere in the fine print.

ok, but don't we need to go off of the headline data they presented:

[attached image: 1632321499260.png]




here is the same data from the Stanford part of the team:

The researchers found that among the more than 350,000 people studied, the rate of people who reported symptoms of COVID-19, consented to blood collection and tested positive for the virus was 0.76% in the control villages and 0.68% in the intervention villages, showing an overall reduction in risk for symptomatic, confirmed infection of 9.3% in the intervention villages regardless of mask type.

unless they tell us otherwise I think we have to go with these numbers.

and it looks to me like those numbers show a difference of about 9 cases between the control group and the intervention group.
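For what it's worth, the quoted rates can be checked with back-of-the-envelope arithmetic. A sketch using only the two percentages from the Stanford summary; note the raw ratio gives roughly 10.5%, not 9.3%, which suggests the paper's headline figure comes from its adjusted regression rather than from the raw rates alone:

```python
# Quoted rates of symptomatic, blood-confirmed infection
control_rate = 0.0076       # 0.76% in control villages
intervention_rate = 0.0068  # 0.68% in intervention villages

# Unadjusted relative risk reduction implied by the raw rates
rrr = 1 - intervention_rate / control_rate
print(f"raw relative reduction: {rrr:.1%}")  # ~10.5%
```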

[attached image: 1632323166219.png]
 

and as far as the p-value goes... if there were exactly the same number of positive blood tests in both groups, the p-value would be 1.

so, even though I realize there wouldn't be a straight linear change in the p-value... somehow you've got to get from .043 to 1.0 in 9 steps.
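The intuition above — identical counts in both arms give p = 1, and the p-value shrinks as the gap widens — can be illustrated with a simple pooled two-proportion z-test. This is only a sketch: the arm sizes and counts below are hypothetical (scaled to the ~350,000 total enrollment and the quoted 0.76%/0.68% rates), and the paper itself uses a cluster-adjusted regression, not this test:

```python
import math

def two_proportion_p(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value from a pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Equal positive counts in both arms: z = 0, so p = 1 exactly
print(two_proportion_p(1292, 170_000, 1292, 170_000))  # -> 1.0

# A 0.76% vs 0.68% gap at this sample size gives a much smaller p-value
print(two_proportion_p(1292, 170_000, 1156, 170_000))
```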
 
First off, please accept my apologies for the harsh tone. You've been a good sport about that. I am having a very rough week at work and it's showing.

Thank you. I don't mind a harsh tone, as long as someone is engaging in discussing the points. I do mind using a harsh tone to escape from having to engage in discussing the points.

You are frustrating because I said they measured everyone who agreed to give blood serum, and then you said they didn't and that it would be bad methodology. You had me convinced for a minute. Then I carefully read the study and went back to my original position. Now you're agreeing with me. So, where are the results of the blood serum analysis? Maybe I missed them. I've been asking you to point them out. You have not so far.

The results are in Results, Section 4.3, under Symptomatic Seroprevalence (right where you'd expect them to be). They give you the percentage and tell you the denominator they used. If you want symptomatic seroprevalence as raw numbers, you can do the math using the percentage and the numbers from Table A2. Also, look over all of the recent posts between me and Alex. I repeated that information there and also did the math. I was unaware that you weren't reading any of my posts to Alex.

Nope. Science is not supposed to be so arcane. At least the abstract should be clear at a high level, but it's not. It's on them to clearly display the methods and results.

The abstract is clear to me. And I'm not operating at their level. And the methods and results are clear to me. But I read through them carefully, including the footnotes.

I'm not a peer either in a strict sense. However, I used to read a lot of research papers, most recently for my career in insurance. Normally, I can understand what they did in the study and the results. I have a masters degree in economics and I have had advanced graduate level statistics and research design training. I would not call myself a statistical expert, but I understand the concepts and I use statistics in my work; including multivariate regression, ANOVA, Chi-squared, survival analysis, small population/rare event analysis like Poisson distributions and much more.

Then why were you asking me to explain the regression to you? Why didn't you just read the sections/appendices where they explain in detail what they did and what their results were?

Then spell out what they did. I've asked you many times. You've taken the time to write a whole bunch of comments. Why not take the time to spell it out for us dummies?

Just carefully read the study. I'm happy to write a whole bunch of comments to discuss legitimate issues, like my discussions with jh.
 