
Re: [AUDITORY] Registered reports



I concur with Massimo's remarks. I would like to add the following, in response to some of the earlier emails:

- I share the concern that not all (good) science is hypothetico-deductive, and pre-registration certainly should not be made mandatory: purely exploratory studies should be allowed, encouraged and publishable. However, they should be published for what they are: exploratory. A result from an exploratory study (or an unexpected result in a pre-registered study) needs to be replicated, even more so than one from a hypothetico-deductive study, so it should carry less weight. Widespread use of pre-registration would help distinguish which results are serendipitous, exploratory or predicted, and I really cannot see what there is not to like in that. Like everyone else here, I have published results that were unexpected or unpredicted, and I have run studies that were framed as questions rather than hypotheses; but I suspect that the way I presented them in the published report could sometimes suggest otherwise, partly because my supervisors encouraged me to write a "story", or narrative, around the results. I think that pre-registration would prevent this to a large extent.

- I can also see that pre-registration might not apply to all types of science (see the computer science example from Nilesh). However, I suspect that it could apply to a lot of experimental studies (except purely exploratory ones).

- I also understand the concern that pre-registration would increase overhead. In my opinion this is unlikely; on the contrary, I would expect it to decrease that burden. If a study is pre-approved by peers, the introduction and part of the methods do not need to be re-reviewed at the second review stage. Also, as reviewers, we would no longer have to wonder whether what is presented as a predicted result was indeed predicted, or whether history was re-written at the same time as the manuscript (as I suspect is too often the case).

- I think we shouldn't underestimate how much we can deceive ourselves (even as scientists) and to what extent the right incentives can change behaviours. We are deceiving ourselves if we think that self-policing, integrity and ethics alone can solve problems such as replicability, biased reporting and p-hacking.

- I am not sure I see what is wrong with accepting a manuscript "provisionally", "in principle", or even with publication being "guaranteed" once the pre-registration is accepted. If the study has been deemed worthy by reviewers at the first stage, I cannot see why it should be rejected at the second unless the agreed protocol has not been followed or factual errors have been made. Since there is a second round of review, reviewers can presumably still ask authors for revisions, and errors can be corrected. Of course, if the agreed experimental protocol has not been followed, the study should not be published. I think what is meant by "provisionally" is that the manuscript is pre-approved whatever results come out of the experiment. This is meant to counter biased reviews from reviewers who might want to reject the study because they disagree with the outcome. Provided that statistical power has been properly evaluated in the first phase, there is nothing wrong with flooding the literature with null results (it needs them).

- If researchers have an a priori hypothesis, test it and find that the data do not support it, they should be able to report both the (wrong) a priori hypothesis and the results. This would better reflect the non-linearity of scientific discovery; it is precisely what pre-registration allows and what the current system discourages. This is not HARKing. HARKing is when one changes one's hypothesis post hoc but writes the manuscript as if the new post-hoc hypothesis had been the a priori hypothesis all along.

- I agree that choosing a specific statistical test in advance would be unnecessarily restrictive. Choosing a statistical test after the data have been collected, or even looked at, is not necessarily p-hacking. If it were, then testing the normality of a distribution before deciding whether to use a parametric or a non-parametric test on a small sample (for instance) would count as p-hacking. It does not; in fact, it is recommended. It is p-hacking only if both tests are run and one reports the better of the two outcomes.
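The a priori power evaluation mentioned above can be sketched as follows. This is a hypothetical illustration (the function name and numbers are my own, using the standard normal-approximation formula for a two-sided two-sample t-test), not a prescription for any particular study:

```python
from scipy.stats import norm

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided two-sample
    t-test, using the normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = norm.ppf(power)           # quantile corresponding to desired power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# A medium effect (Cohen's d = 0.5) at 80% power needs roughly 63 per group.
n = sample_size_per_group(0.5)
```

Committing to a sample size like this in the Stage 1 protocol is what makes a null result interpretable at Stage 2.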
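The parametric-vs-non-parametric decision described above can be sketched in code. This is a minimal illustration with simulated data (assuming scipy; the 0.05 normality threshold is an arbitrary convention): the decision rule is fixed in advance, exactly one test is run, and its result is reported, which is what separates this from p-hacking:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 15)  # two small samples, as in the example above
b = rng.normal(0.5, 1.0, 15)

# Pre-specified decision rule: check normality first, then run ONE test.
normal = (stats.shapiro(a).pvalue > 0.05) and (stats.shapiro(b).pvalue > 0.05)

if normal:
    result = stats.ttest_ind(a, b)      # parametric: two-sample t-test
else:
    result = stats.mannwhitneyu(a, b)   # non-parametric: Mann-Whitney U

# Running BOTH tests and reporting whichever p-value is smaller
# would be p-hacking; following the pre-specified rule is not.
```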

Dr Julien

(My students actually call me Dr Julien, believe it or not).

On 06/06/2018 09:09, Massimo Grassi wrote:
First of all I would like to thank Tim for the initiative.

A few replies and comments:
- registered reports have the results section divided into two parts: the
"planned analyses" (those you discussed with the editor and reviewers) and
the "new exploratory analyses". Therefore, I do not see the problem
raised by Les.

- in my opinion, registered reports raise the standard of current
science. Registered reports (like a pre-registration, but even better)
reveal how limited our ability to predict is. It is difficult to predict
what the data will look like, which data points will be outliers, or whether
the data should be analysed this way or that. We teach students that
the path of science is hypothetico-deductive. In reality we move more
like a carpenter, adapting and adjusting things in real time.

- about the possible "uncontrolled dissemination of null results": I
think that, for science, the current uncontrolled dissemination of Type
I errors is worse.

A nice day to everybody from a summer-sunny Italy,

m

Dear List,

For this topic, I'll violate my rule of not posting replies here.  I
agree with Ms. Rankovic.  I sure did not miss the substance and detail
of Mr. Schoof's email.  I also read over the information in the links.
Indeed, the proposed plan provides for a second review.  It seems to me,
however, that the provisional acceptance is a key aspect of the
process.  If it were the case that manuscripts were rejected upon second
review with substantial frequency, then the philosophy of the registered
report would be violated and the system would collapse.  So, unless
there are egregious errors or flaws in the full manuscript, it seems
that it would be published.  Note that, in this linked reference
<https://orca.cf.ac.uk/59475/1/AN2.pdf>, publication is assumed to be
"guaranteed."

In my opinion, the criticism found within the FAQ here
<https://cos.io/rr/>, that "The Registered Reports model is based on a
naïve conceptualisation of the scientific method." is well-founded! The
reply offered to counter that criticism is quite weak and unconvincing.
I would replace "scientific method" in that criticism with "the way good
science is done."

Question 17 in Chambers et al. (2014-- linked above) provides an apt
example.  In the process of conducting complex experiments, it is very
often the case that unexpected results lead to important follow-up or
control experiments.  Chambers et al. handle this issue by proposing
that in Stage 1 of a registered report, contingencies be stated such
that "If A is observed, then we will..."  That, of course, assumes that
one knows the decision tree in advance!  In my experience, science
simply does not work that way.

While I find the intent of registered reports to be laudable, in my
opinion, it substitutes one potential set of problems with another based
on a narrow view of how science proceeds.  Indeed, one may have a
hypothesis to be tested and gather a set of data to address it only to
find that the results support a substantially altered view.  Is that,
NECESSARILY, the dreaded "HARKing?"  I think not.  Scientific thought
and inquiry do not always proceed in a linear fashion.  One cannot and
should not always know the precise questions or list of contingencies a
priori and be restricted to answering only those.  Then there are
experiments in which there are no specific hypotheses.  They may be of
the form, "What is the effect of variable A on measurements of X?"
Assuming the question is non-trivial, those are often the most revealing
experiments because any outcome is of interest.  There is no "positive"
or "negative."  Sure, one can cast such experiments in terms of
hypotheses but doing so often involves a contrivance.

Then there is the matter of "p-hacking" and what I would call
"statistics shopping."  Indeed, it is a problem.  Unexpected outcomes
and patterns of data in a complex experiment often require one to choose
the appropriate statistic after the fact. It is sometimes the correct
thing to do!  Whether it is proper can and should be judged by reviewers
with the requisite expertise.  Good peer-review should distinguish
between p-hacking and a rational choice that conveys information and
"truth."  The notion that one can and should use only the statistic
decided upon in advance is unnecessarily restrictive.

Finally, there is the matter of archival value.  According to Chambers
et al., "...if the rationale and methods are sound then the journal
should agree to publish the final paper regardless of the specific
outcome."  It is often the case that rationale and methods are sound but
the data provide no substantial advance or archival value.  I'm not sure
that "approving" a method and rationale and virtually guaranteeing
publication will afford the same level of judgment in terms of archival
value that is afforded by the current system.

Les Bernstein

--
*Leslie R. Bernstein, Ph.D. **| *Professor
Depts. of Neuroscience and Surgery (Otolaryngology)| UConn School of
Medicine
263 Farmington Avenue, Farmington, CT 06030-3401
Office: 860.679.4622 | Fax: 860.679.2495








On 6/4/2018 7:51 AM, Christine Rankovic wrote:

Mr. Schoof:

It is beyond ridiculous to accept partial manuscripts for publication.

Christine Rankovic, PhD

Scientist, Speech and Hearing

Newton, MA  USA

rankovic@xxxxxxxxxxxxxxxx

*From:*AUDITORY - Research in Auditory Perception
[mailto:AUDITORY@xxxxxxxxxxxxxxx] *On Behalf Of *Schoof, Tim
*Sent:* Monday, June 04, 2018 4:06 AM
*To:* AUDITORY@xxxxxxxxxxxxxxx
*Subject:* Registered reports

Dear list,

I'm going to try and get hearing science journals to start offering
registered reports. These reports are basically peer-reviewed
pre-registration documents where you outline your methods and proposed
analyses. If this document makes it through peer-review, the
manuscript is provisionally accepted for publication. This process
should reduce certain questionable research practices, such
as selective reporting of results and publication bias. If you're
sceptical about registered reports, the Center for Open Science has
compiled a nice FAQ list that might address some of your concerns:
https://cos.io/rr/

I think this is the direction science is going in now and it would be
great if hearing science joined in. I plan to contact as many hearing
science journals as possible. I'm compiling a list of journals to
contact. Please add to this list if I'm missing anything:
https://tinyurl.com/yaf9r7bk.
I don't think any of these journals offer (or are in the process of
offering) registered reports yet, but correct me if I'm wrong.

If you agree that registered reports are a good idea and want to sign
the letter I intend to send (see here for a template:
https://osf.io/3wct2/wiki/Journal%20Requests/),
let me know and I'll add you to the list. And please spread the word
of course. The more people agree, the more likely it is we can get
some of these journals on board!

Best,


Tim Schoof

--

Research Associate

UCL Speech, Hearing and Phonetic Sciences

Chandler House

2 Wakefield Street

London WC1N 1PF

United Kingdom




-- 
------------------------------------------------------
Julien Besle
Assistant Professor
Department of Psychology
Faculty of Arts and Sciences
American University of Beirut
Riad El-Solh / Beirut 1107 2020
Lebanon

Jesup Hall, Room 103E
Tel: +961 1 350 000 ext. 4927
-----------------------------------------------------