John Ioannidis discusses the potential effects on clinical research of a 2017 proposal to lower the default P value threshold for statistical significance from .05 to .005 as a means to reduce false-positive findings.
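The arithmetic behind that proposal is easy to replay. Here is a minimal Bayes'-rule sketch (mine, not the article's; the 1:10 prior odds that a tested effect is real and the 80% power are assumptions in the ballpark used by the proposal's authors) comparing the false positive risk at the two thresholds:

```python
# Back-of-the-envelope sketch (not from the article): how the false positive
# risk changes when the significance threshold is tightened from .05 to .005.
# Assumptions are mine: 1:10 prior odds that a tested effect is real, 80% power.

def false_positive_risk(alpha, power, prior_real):
    """P(effect is null | test is significant), by Bayes' rule."""
    sig_if_null = alpha * (1 - prior_real)  # false positives among nulls
    sig_if_real = power * prior_real        # true positives among real effects
    return sig_if_null / (sig_if_null + sig_if_real)

for alpha in (0.05, 0.005):
    fpr = false_positive_risk(alpha, power=0.8, prior_real=1 / 11)  # 1:10 odds
    print(f"alpha = {alpha}: false positive risk = {fpr:.1%}")
```

Under those assumptions the risk falls from roughly 38% at α = .05 to about 6% at α = .005, which is the core of the quantitative case for the stricter default.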
A study has revealed a high prevalence of inconsistencies in reported statistical test results. Such inconsistencies render results unreliable, because they cannot be reproduced from the reported statistics, and ultimately erode trust in scientific reporting.
Reproducible research includes sharing data and code. The journal Biostatistics has a reproducibility policy that rewards articles with badges for data and code sharing. This study investigates whether those badges were effective at increasing reproducible research, specifically data and code sharing, at Biostatistics.
Nearly 100 Scientists Spent 2 Months on Google Docs to Redefine the P-Value
A new paper recommends that the label “statistically significant” be dropped altogether; instead, researchers should describe and justify their decisions about study design and interpretation of the data, including the statistical threshold.
Statistical fudging can be buried in the way data are sliced and diced after the fact, or put through tortured analyses in a search for significant results, a practice commonly known as p-hacking.
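To see why after-the-fact slicing is so effective at manufacturing significance, here is a small simulation (my illustration, not from the article): it tests ten post hoc subgroups of pure noise per "experiment" and counts how often at least one comes out "significant".

```python
# Illustrative simulation (my construction): slicing null data into many
# post hoc subgroups and testing each one makes a "significant" result
# likely even though no effect exists anywhere.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_subgroups, n_per_group = 5_000, 10, 20
hits = 0
for _ in range(n_experiments):
    found = False
    for _ in range(n_subgroups):
        # Null data: "treatment" and "control" come from the same distribution.
        a = rng.normal(size=n_per_group)
        b = rng.normal(size=n_per_group)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            found = True
            break
    hits += found

print(f"Chance of at least one 'significant' subgroup: {hits / n_experiments:.0%}")
# With 10 independent looks, roughly 1 - 0.95**10, i.e. about 40%, of these
# null experiments produce a publishable-looking p < .05.
```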
We wish to answer this question: If you observe a ‘significant’ p-value after doing a single unbiased experiment, what is the probability that your result is a false positive?
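One way to attack that question numerically is a Monte Carlo sketch like the one below (my construction, broadly in the spirit of the paper's simulated t-tests rather than its exact setup; the 10% prior on real effects and the effect size are assumptions): simulate many unbiased experiments, only some of which study a real effect, and count what fraction of the significant results are false.

```python
# Monte Carlo sketch of the question: among experiments reaching p < .05,
# how many had no real effect? Assumes 10% of tested hypotheses are true
# and an effect size of 1 SD at n = 16 per group (about 80% power).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials, n, effect = 20_000, 16, 1.0

true_pos = false_pos = 0
for _ in range(n_trials):
    real = rng.random() < 0.10                 # 10% of hypotheses are true
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(effect if real else 0.0, 1.0, n)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        true_pos += real
        false_pos += not real

print(f"False positive risk: {false_pos / (false_pos + true_pos):.0%}")
```

With these inputs roughly a third of the p < .05 results are false positives, in line with the Bayes'-rule figure sketched earlier and far above the 5% that the threshold is often misread as guaranteeing.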
Controversial Software Is Proving Surprisingly Accurate at Spotting Errors in Psychology Papers
When Dutch researchers developed an open-source algorithm designed to flag statistical errors in psychology papers, it received a mixed reaction from the research community.
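The mechanical core of such a checker is simple to illustrate. statcheck itself is an R package that parses APA-formatted results; the Python analogue below (the function name and tolerance are my simplifications) recomputes the p-value from a reported test statistic and degrees of freedom and flags a mismatch:

```python
# Toy illustration of the idea, not statcheck's actual code: re-derive the
# p-value from a reported t statistic and flag it if it contradicts the
# reported p beyond what rounding can explain.
from scipy import stats

def check_t_report(t, df, reported_p, tol=0.005):
    """Return (consistent?, recomputed p) for a reported two-tailed t-test."""
    recomputed = 2 * stats.t.sf(abs(t), df)
    # tol absorbs the rounding in reports like "p = .04".
    return abs(recomputed - reported_p) <= tol, recomputed

# Consistent report: t(28) = 2.20, p = .036
print(check_t_report(2.20, 28, 0.036))
# Inconsistent report: t(28) = 1.50, p = .04  (the recomputed p is ~.145)
print(check_t_report(1.50, 28, 0.04))
```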
Today we’re rolling out new features in Sheets that make it even easier for you to visualize and share your data, and find insights your teams can act on.