Presentiment Paper Discussion

Jay

What I'm disagreeing with is the notion that "the bias is actually due to the averages being undefined for sequences of all calm or all emotional trials."

I think this is an (almost) complete red herring, because unless the total number of trials is very small, the probability of a sequence being all calm or all emotional is infinitesimal. So the problem of the average being undefined could be fixed up in any number of ways, all of which would have virtually no effect on the statistics.
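
To put a rough number on "infinitesimal", here's a quick back-of-the-envelope calculation (the 17 trials per sequence and the 50/50 split are purely illustrative assumptions):

Code:
<?php
// Back-of-the-envelope: probability that a sequence is all calm or all
// emotional (the only cases in which the per-sequence averages are undefined),
// assuming 17 independent trials with P(calm) = P(emotional) = 0.5.
$n = 17;
$p = 2 * pow(0.5, $n);   // P(all calm) + P(all emotional)
printf("P(averages undefined) = %.8f (about 1 in %d)\n", $p, (int) round(1 / $p));
// Prints roughly 0.00001526, i.e. about 1 in 65,536.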

But the bias can't be fixed up like that, because it is intrinsic to the whole distribution of the sequences that come into play during an experiment, not just to the minute tails of the distribution that are practically never observed.

The dog is wagging the tail, not vice versa. And that's just as it should be. :D
 
Jay

What I'm disagreeing with is the notion that "the bias is actually due to the averages being undefined for sequences of all calm or all emotional trials."

Just to be clear, in the sentence above, you are quoting your own post.
 
Yes - because you yourself quoted it, with the comment "Well, that's essentially what I think too"!
 
Yes - because you yourself quoted it, with the comment "Well, that's essentially what I think too"!

Well, "essentially." What I literally think is that the estimator isn't even defined, so there is nothing even to be biased or unbiased. Since the estimator in question doesn't exist, we have to invent one that is, and any one that we invent that involves per-sequence differences between treatment means will be biased by carryover effects.
 
The point is that if the only difficulty were those two extremely rare sequences for which the averages aren't defined, then you could come up with any number of estimators which would have only a minuscule bias (unless the sequences of trials were very short). Almost all of the bias is coming from the correlations I've pointed out, not from those two sequences.
 
The point is that if the only difficulty were those two extremely rare sequences for which the averages aren't defined, then you could come up with any number of estimators which would have only a minuscule bias (unless the sequences of trials were very short).

I agree with that. FWIW, though, as I've said several times, I think the sequences should be very short, fixed, and the number of subjects per sequence large. Truth be told, I cannot imagine using a crossover design when strong differential carryover is plausible. In other words, the only good sequence length in such a situation is 1.
 
Yes, I agree with that. Or at least, the only way to overcome the bias problem in an exact sense would be to take only one observation per sequence - whether it's the first one, the tenth or the twentieth.

But if it's a question of minimising the bias, I wonder how well a strategy of counting only one out of every few trials would work. I suspect the bulk of the bias is coming from the correlation between consecutive trials, or else trials separated by only 1 or 2 others, and that the bias would be greatly diminished if we counted - say - only every 5th or 10th trial. Maybe Laird could modify his script easily to put a number to it (but if it's not easy please don't go to too much trouble!).
 
I'll give it a go, Chris. My gut feeling though is that this won't change the bias much if at all (sorry to be a contrarian). For reference, I've just uploaded the script as it currently exists (i.e. prior to making the additions you suggest) to my GitHub account: expectation-bias-analysis.php.
 
Oh, and I almost forgot, not that I expect anybody to particularly care (I seem to be the only one fascinated by the patterns in the data):

Changing the reset arousal level from 0 to 1 broke the symmetry in the pattern of sums of averages for calm trials (second-last column), and additionally changing the arousal increment from 1 to 2 broke the symmetry in the pattern of sums of averages for emotional trials as well, so in that case it's not possible to extrapolate the missing value for row 1 exactly.
 
(sorry to be a contrarian)

Nothing wrong with being a contrarian. Though I instinctively feel that disagreeing with me is a very bad idea. However, as you disagreed with me about this and it turned out I was wrong, I can't object too strongly ...
 
Well, my gut feel was wrong, and your instinct was right. :-)

The bias quantifications below were produced with the following parameters:

  • Arousal reset of 0.
  • Arousal increment of 1.
  • 17 trials per sequence.
  • Probability of a calm trial = probability of an emotional trial = 0.5.
  • Rounding (for display) to six decimal places.
  • Averaging at sequence level i.e. the method of analysis under which the bias is maximally expressed.

In the table below:
  • "Start" is the trial number (counted from the left) within a sequence at which sampling begins.
  • After "Start", trials are sampled every "Inc" trials within the sequence.
  • "Bias" is calculated as per the formula shared in my post numbered 247.

Code:
Start   Inc      Bias
  1      1    26.649143%
  1      2    17.358205%
  2      2    17.358205%
  1      3     9.5609% 
  2      3     9.106407%
  3      3     9.907532%
  1      4     4.952661%
  2      4     5.142769%
  3      4     5.010884%
  4      4     4.951818%
  1      5     2.639884%
  2      5     2.483599%
  1      6     1.359148%
  2      6     1.267534%
  1      7     0.674554%
  1      8     0.336043%
  1      9     0.130548%

I wonder whether we'd see similar results for pooled averages too.
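
For anyone who wants to experiment with the sub-sampling without digging through the full script, here's a stripped-down sketch. The update rule (record arousal before each trial, then increment on calm or reset on emotional), the function name, and the Monte Carlo approach are my own simplifications, and the statistic printed is just the raw mean per-sequence difference rather than the bias formula from post 247, so the numbers won't match the percentages above, but the decreasing trend should show up.

Code:
<?php
// Rough Monte Carlo sketch of the sub-sampling idea (illustrative only, not
// the actual expectation-bias-analysis.php script). Assumed arousal model:
// arousal starts at the reset level (0), is recorded before each trial, then
// increases by the increment (1) if the trial is calm and resets to 0 if it
// is emotional. Only trials $start, $start + $inc, $start + 2*$inc, ... are
// kept. Returns the mean per-sequence (emotional minus calm) difference in
// recorded arousal, over the sequences for which it is defined.
function meanPerSequenceDifference(int $trials, int $start, int $inc, int $sequences): float
{
    $sum = 0.0;
    $defined = 0;
    for ($s = 0; $s < $sequences; $s++) {
        $arousal = 0;                                   // reset level
        $calm = $emotional = [];
        for ($t = 1; $t <= $trials; $t++) {
            $isEmotional = (mt_rand(0, 1) === 1);       // P(calm) = P(emotional) = 0.5
            if ($t >= $start && ($t - $start) % $inc === 0) {
                if ($isEmotional) { $emotional[] = $arousal; } else { $calm[] = $arousal; }
            }
            $arousal = $isEmotional ? 0 : $arousal + 1; // reset or increment
        }
        if (count($calm) > 0 && count($emotional) > 0) {
            $sum += array_sum($emotional) / count($emotional)
                  - array_sum($calm) / count($calm);
            $defined++;
        }
    }
    return $defined > 0 ? $sum / $defined : NAN;
}

// For example, full sampling versus only every 5th trial:
printf("Inc 1: %.4f\n", meanPerSequenceDifference(17, 1, 1, 200000));
printf("Inc 5: %.4f\n", meanPerSequenceDifference(17, 1, 5, 200000));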

P.S. I've updated the script on GitHub; the commit which adds these changes is ef803ee6525f7b85583591bf89016c686f776478.
 
Nothing wrong with being a contrarian. Though I instinctively feel that disagreeing with me is a very bad idea.

That's because you rarely make any sense. I don't know about the stats stuff, but on other issues that has certainly been the case. You feel this instinctively and thus, on a subconscious level, know that you're incapable of producing any coherent content with which to disagree.
 
That's because you rarely make any sense. I don't know about the stats stuff, but on other issues that has certainly been the case. You feel this instinctively and thus, on a subconscious level, know that you're incapable of producing any coherent content with which to disagree.

Thanks. I'll think about that, and may reply if I can make any sense of it.
 
Well, my gut feel was wrong, and your instinct was right. :)

Thanks for this. It's reassuring that my instinct wasn't too far from the mark.

The reason I expected this is that, with the model of expectation we're thinking about, an intervening emotional trial will reset the response and shield the second trial from any dependence on the first. The only way the dependence can survive is if there are no emotional trials between the first and second trials, and the probability of that is only 1/2^N, where N is the number of intervening trials. And your figures do show the bias approximately halving every time N is increased. Just ignoring chunks of 4 trials reduces it by a factor of 10.
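
To make the "approximately halving" a bit more concrete, here are the successive ratios of your Start = 1 figures (nothing new, just your numbers re-arranged):

Code:
<?php
// Successive ratios of the Start = 1 bias figures from the table above.
// If the dependence really decays like 1/2^N, each ratio should sit near 0.5,
// since each extra skipped trial gives one more independent chance of an
// emotional trial that resets the arousal and breaks the dependence.
$bias = [1 => 26.649143, 2 => 17.358205, 3 => 9.5609, 4 => 4.952661, 5 => 2.639884,
         6 => 1.359148, 7 => 0.674554, 8 => 0.336043, 9 => 0.130548];
foreach ($bias as $inc => $b) {
    if ($inc > 1) {
        printf("Inc %d / Inc %d = %.2f\n", $inc, $inc - 1, $b / $bias[$inc - 1]);
    }
}
// The ratios run from about 0.65 down to 0.39, mostly close to 0.5.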

In other words, at least in terms of the simple model we're discussing, the dependence between different trials is a very "short-range" effect which decays exponentially with distance along the sequence. So I'm sure you'll see the same thing when different sequences are pooled.

The other thing about this is that it's not just tending to screen out the bias - it's tending to screen out all the statistical dependence between different trials. In that sense it's a big improvement on the strategy of considering sums instead of averages, which eliminates the bias (i.e. gives the correct mean, or expected value) but leaves us, in principle, with a pathological and unknown statistical distribution for the deviation from the mean.
 
Just a quick note to say that the calculations of expectation bias in my last post were inappropriate, but that even after correcting them, there's still a decreasing trend for expectation bias. I'm too tired right now to explain more or update the code.
 