1. Paul C. Anagnostopoulos

    Paul C. Anagnostopoulos Nap, interrupted. Member

    Joined:
    Oct 31, 2013
    Messages:
    4,486
    Marilyn Schlitz, Arnaud Delorme, and Daryl Bem have registered a new replication of Bem's experiment 4 at the Koestler registry:

    http://www.koestler-parapsychology.psy.ed.ac.uk/TrialRegistryDetails.html

    It is not only a replication but also an attempt at testing the experimenter effect. I have emailed Schlitz with a couple of suggestions regarding the protocol.

    ~~ Paul
     
  2. JCearley

    JCearley New

    Joined:
    Nov 3, 2013
    Messages:
    512
This should be interesting, and should start putting to rest proponent unease about the KPU registry, at the very least (given that at least two of those names are proponents; I'm not sure about the third).
     
  3. Paul C. Anagnostopoulos

    Paul C. Anagnostopoulos Nap, interrupted. Member

    Joined:
    Oct 31, 2013
    Messages:
    4,486
    malf, Bucky and DarthT15 like this.
  4. Baccarat

    Baccarat New

    Joined:
    Jan 1, 2016
    Messages:
    805
Psi is not lab-reproducible... it's too mechanical. I've had random psi experiences; unless they find the mechanism that causes it, I'm not interested.
     
    Vault313 likes this.
  5. Baccarat

    Baccarat New

    Joined:
    Jan 1, 2016
    Messages:
    805
  6. Bucky

    Bucky Member

    Joined:
    Oct 31, 2013
    Messages:
    1,681
    I tend to agree.
    It would be very difficult to study in the lab something like grief or ecstasy etc, as it is practically impossible to reproduce those states on command.
     
    Vault313 likes this.
  7. Laird

    Laird Member

    Joined:
    Apr 28, 2015
    Messages:
    1,329
I lacked the concentration to read the study that Paul kindly posted (maybe later), but noted another potential follow-up in the article that Baccarat posted (hopefully this inspires Paul to report back once again!):

     
  8. Paul C. Anagnostopoulos

    Paul C. Anagnostopoulos Nap, interrupted. Member

    Joined:
    Oct 31, 2013
    Messages:
    4,486
  9. Bucky

    Bucky Member

    Joined:
    Oct 31, 2013
    Messages:
    1,681
    Me too... I will give it a go in the next days.

    Ouch!
I have to be honest here: I don't know about you, but if researchers (in any field, mind you) have to try all sorts of statistical techniques to obtain the results they "need", I am not sure I am interested in this kind of work! :mad: Which is probably why so much "science" out there is pretty much wrong, but we think it isn't 'cause we trust the process so much...

I mean, it sounds very much like p-hacking, only with the extra annoyance of having to pre-register new studies for every different attempt.

Maybe I misunderstand the comment by Daniel Engber.
     
  10. Paul C. Anagnostopoulos

    Paul C. Anagnostopoulos Nap, interrupted. Member

    Joined:
    Oct 31, 2013
    Messages:
    4,486
    The analysis they tossed on to the end of the study linked above is certainly a form of p-hacking. There is much concern about p-hacking and other analysis tricks in the scientific community these days, especially in the soft sciences. Some people even credit the original Bem study with raising awareness. So that's a good thing.

    They can certainly register new studies in which they use single trial analysis.

    ~~ Paul
     
    DarthT15 likes this.
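To make the p-hacking concern above concrete, here is a minimal null simulation (my own illustration, not from the thread or from Bem's study; the specific "analyses" are hypothetical examples of analytic flexibility). Every experiment below is pure noise, yet letting yourself pick whichever analysis comes out significant inflates the false-positive rate well above the nominal 5%:

```python
import math
import random

def z_significant(heads, n, z_crit=1.96):
    """Two-sided z-test for a fair coin: is the head count unusual at ~5%?"""
    z = (heads - n / 2) / math.sqrt(n / 4)
    return abs(z) > z_crit

rng = random.Random(42)
trials = 10_000
single_hits = 0    # one fixed, pre-registered analysis
flexible_hits = 0  # "significant" if ANY of several post-hoc analyses works

for _ in range(trials):
    # One null experiment: 100 fair coin flips, no real effect anywhere.
    flips = [rng.random() < 0.5 for _ in range(100)]
    analyses = [
        flips,       # pre-registered: all 100 trials
        flips[:50],  # post-hoc: "early trials only"
        flips[50:],  # post-hoc: "late trials only"
        flips[10:],  # post-hoc: "excluding warm-up trials"
    ]
    results = [z_significant(sum(a), len(a)) for a in analyses]
    single_hits += results[0]
    flexible_hits += any(results)

print(f"false-positive rate, one fixed analysis: {single_hits / trials:.3f}")
print(f"false-positive rate, pick-any-analysis:  {flexible_hits / trials:.3f}")
```

The first rate stays near 0.05; the second is noticeably higher, which is why analyses bolted on after the data are in (as in the study discussed above) are treated with suspicion.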
  11. Baccarat

    Baccarat New

    Joined:
    Jan 1, 2016
    Messages:
    805
I think Mark Passio said that science usually studies the effects, not the root causes. Keyword: usually. I know they like to link a lot of diseases to genes, but as science has dug deeper we now have evidence for our environment changing our gene expression.

I find these psi studies disappointing, especially coming from experts in various fields. Psi is not something you can consciously control, like telling yourself to pick up a cup whenever you want. Too many variables go into psi which science doesn't have a grasp on either. Your emotional state, anxiety, fear, energy, etc. can affect psi results. Psi has the deck stacked against it; it can easily be said to be luck or coincidence. You want evidence of psi? Grab a notebook and start recording your dreams, start practicing and documenting. These poorly designed studies will always be debatable. I'm actually disappointed to call these men and women scientists; time to take them off the pedestal.... What happened to innovation and creativity in science?
     
  12. Baccarat

    Baccarat New

    Joined:
    Jan 1, 2016
    Messages:
    805
    I wrote from my phone sorry
     
  13. Max_B

    Max_B Member

    Joined:
    Nov 1, 2013
    Messages:
    3,155
    Home Page:
    Worthwhile reading these comments about pre-registration...

    https://sites.google.com/site/speec...lpre-registrationofstudiesbegoodforpsychology

    Some of them make some really good points...
     
    Bucky and Trancestate like this.
  14. Bucky

    Bucky Member

    Joined:
    Oct 31, 2013
    Messages:
    1,681
  15. Max_B

    Max_B Member

    Joined:
    Nov 1, 2013
    Messages:
    3,155
    Home Page:
    A few interesting quotes that stood out for me as I was reading...

    "On the way to work I began to think that the strategy might be to begin with their acknowledgement that prereg might not be best for all science and then say that rather than just dismissing all those who disagree as self-serving we should ask when it should be used. When do the costs outweigh the benefits?

    Clinical trials: Big investment, high cost of suppressing -ve outcomes. The cost of prereg is relatively small.

    Small scale science: For all the reasons we've mentioned, the cost of prereg is high, but where are the benefits? The process of science is ultimately self-correcting - we'll get there in the end, but will we get there faster with prereg? Highly unlikely.

    Exploratory science: What would be the point? Collect data, analyse it any way you see fit.

    They are really conflating two issues here - public trust in science, and the practice of science.

    Sad to say, but the public neither hears about nor cares about small scale science, and by the time it feeds into large scale science not only will many of the problems be ironed out, but then it will be subject to more stringent control such as prereg."
    and this one (point 7 stands out for me)...

1) I worry that, erroneously, people will think that a priori results are more "truthful" than post-hoc. This is not the case if the statistics are done correctly.

    2) The reason it could be important is that there is a suspicion that scientists currently pretend ideas are a priori when in fact they are post-hoc. This is a problem that does need some form of solution. The current solution is that we let others try and replicate and non-replicable results disappear. I have no problem with this solution and it is the best we have and I think better than pre-reg.

    3) What I think I do not like is that it is saying that we should not trust each other as scientists. I find this quite depressing, even if there is some truth in it. I also feel that this is the thin end of the wedge - why trust people have collected the data! I think we need better education and more trust.

4) Practically, I have no idea how it can work. Who judges the pre-reg? They would have to have so many levels of expertise in each field to know that all the degrees of freedom of any analysis have been pre-registered. A good example of a problem is the number of subjects for fMRI. Some would say we need >20 subjects, perhaps more; others, like Karl, would argue that 16 is more than sufficient, and indeed one may start reporting false negatives if the number increases. How will the power of an fMRI study be assessed in pre-reg? Who polices the pre-reg? Will they be anonymous or not?

    5) Also one can think of examples where it will fail. I pre-reg to run 20 subjects. Do I have to pre-reg all exclusion criteria in advance? I guess I would have to for pre-reg to be valid. Then one subject has different behavior but has a clear reason to be excluded but one I did not specify. What do I do? If I run more subjects then it will look suspicious as I have not stuck to pre-reg. But if I do not exclude the subject then significant results will be reported as null.

    6) I dislike the fact that if you decide not to publish the 'paper' is published as a retraction. This is a very loaded word.

    7) I worry that it will be used only for people who want to find non-replications of studies that they do not like. I confess I have already thought of using it like this! Pre-reg a replication study. Fail to replicate fairly easy to do if you want to. Kill a study in the field because your study has more "truth".
    the closing points of another post...

It's hard to think of a comparison to illustrate the problem of the pre-reg idea, but maybe we can think about it as similar to asking people whether they have any intention of committing a shoplifting crime in the near future. My guess is that not only would most people say "no", but some of the biggest thieves would be the most moralistic.

Post script
Hubel and Wiesel's orientation-sensitive neurons would be top of my list.
Observational studies of neuropsych patients would never get off the ground.

It's even ridiculous to suggest that after running an experiment for a year, one has the same hypothesis as when one began the experiment - the reason, I think, is in order to change my mind.

    and this...

I'm not at all opposed to pre-registration, and I think it'll be an interesting experiment to see whether research practices improve and "scientific quality," or replicability, increases. But I can see the danger in that being viewed as "saintly" research with the rest of it tainted. I think it's important that you've opened up this debate.
     
    malf and Trancestate like this.
  16. malf

    malf Member

    Joined:
    Oct 30, 2013
    Messages:
    4,031
I've only just got round to reading this thread. Crikey! This validates everything that some of the resident skeptics have been saying about Bem's study (and the field in general) for ages. That Slate article is damning, and Bem does himself no favours with his response:

    “I’m all for rigor,” he continued, “but I prefer other people do it. I see its importance—it’s fun for some people—but I don’t have the patience for it.” It’s been hard for him, he said, to move into a field where the data count for so much. “If you looked at all my past experiments, they were always rhetorical devices. I gathered data to show how my point would be made. I used data as a point of persuasion, and I never really worried about, ‘Will this replicate or will this not?’ ”

    Wow. I guess we all owe Linda an apology, hey guys?
     
  17. Paul C. Anagnostopoulos

    Paul C. Anagnostopoulos Nap, interrupted. Member

    Joined:
    Oct 31, 2013
    Messages:
    4,486
    Yeah, his statement does appear to tell us not to trust his experiments.

    ~~ Paul
     
  18. Silence

    Silence Member

    Joined:
    Oct 12, 2016
    Messages:
    362
    That's not how I read the article. Perhaps in 2017, yes, it tells us not to trust his experiments.

    But in 2010 that wasn't so much the case. Many of the credentialed academics praised the rigor of his experimentation.

    I read the article more as a referendum on the nexus of science and scientists. Those pesky humans are so prone to bias, p-hacking and other "veiling" traits. ;)
     
  19. Bart V

    Bart V straw materialist Member

    Joined:
    Oct 31, 2013
    Messages:
    604
    He has previous on that sort of talk.

This, "Feeling The Future, Smelling The Rot", is a review of an article, "Writing the Empirical Journal Article", written by Bem.
    Three quotes:

    Basically a manual in data dredging, P-hacking, and moving goal-posts.
     
