The American Statistical Association (ASA) has made it easier for researchers to use p-values correctly. By laying down a few simple principles, ASA aims to improve the understanding of a commonly misused statistic. Most importantly, ASA stresses the importance of rigor in research over mere “p-hacking.” In its recent statement, ASA concludes that “no single index should substitute for scientific reasoning.”
The ASA Board of Directors begins by acknowledging the expansion in the use of data. The recent proliferation of datasets has fostered an unprecedented interest in statistical analysis. Along with this interest has come the need to validate research and analysis through the extensive use of data. The first instrument in such validation is statistical significance, for which the p-value invites an oversimplification of statistical rigor. Misinterpretations of the statistic are so common that “some scientific journals are discouraging the use of p-values.”
ASA explains the concept of a p-value “informally”, by stating what it is and what it is not.
The first principle ASA refers to in its statement addresses the reason behind the oversimplified use of p-values. P-values give an indication of “how incompatible the data are with a specified statistical model”. In ordinary words, they indicate the extent to which the data speak against the null hypothesis. “The smaller the p-value, the greater the statistical incompatibility of the data with the null hypothesis.” The second aspect of p-values that ASA points out concerns their limits with respect to the researcher’s hypothesis.
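That first principle, that a smaller p-value signals greater incompatibility of the data with the null hypothesis, can be sketched with a hypothetical coin-flip example (not from the ASA statement; pure standard-library Python):

```python
from math import comb

def binom_p_value(k, n, p0=0.5):
    """Exact two-sided p-value: the probability, under the null hypothesis
    that each flip lands heads with probability p0, of any outcome no more
    likely than the one observed (k heads out of n flips)."""
    probs = [comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    # Small tolerance so symmetric outcomes are counted despite float rounding.
    return sum(p for p in probs if p <= observed + 1e-12)

# 60 heads in 100 flips: a p-value just above the conventional 0.05 line.
print(binom_p_value(60, 100))
# 70 heads in 100 flips: a p-value orders of magnitude smaller.
print(binom_p_value(70, 100))
```

Sixty heads in a hundred flips is only mildly incompatible with a fair coin, while seventy heads is overwhelmingly incompatible, which is exactly what the shrinking p-value expresses.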
The p-value speaks exclusively to the compatibility of the observed data with the specified hypothesis. The emphasis here is on the p-value’s limits: it does not assert the validity of the hypothesis itself, but only how well or how poorly the data fit it. Nor does the p-value measure the size or the importance of an effect.
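The point that a p-value does not measure the size of an effect can be illustrated with a hypothetical one-sample z-test (an illustration of this summary, not taken from the ASA statement; standard library only): with a large enough sample, even a negligible effect yields a tiny p-value.

```python
from math import erfc, sqrt

def z_test_p_value(mean_diff, sd, n):
    """Two-sided p-value for a one-sample z-test of 'true mean equals 0',
    given the observed sample mean, a known SD, and sample size n."""
    z = mean_diff / (sd / sqrt(n))
    return erfc(abs(z) / sqrt(2))  # two-sided tail area of a standard normal

# The same negligible effect (0.01 standard deviations) at two sample sizes:
small_n = z_test_p_value(0.01, 1.0, 100)        # nowhere near significant
large_n = z_test_p_value(0.01, 1.0, 1_000_000)  # astronomically small p-value
```

The effect is identical in both cases; only the sample size changes, yet the p-value swings from clearly non-significant to vanishingly small. Significance, in other words, is not importance.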
That being said, p-values remain useful for reaching scientific conclusions. However, they are not the only criterion for analyzing data and doing research. Their prominence has derailed researchers into becoming “p-hackers”, in search solely of what ASA calls “mechanical ‘bright-line’ rules (such as ‘p < 0.05’)”. ASA urges researchers to move beyond the p-value as the main criterion for statistical analysis by advocating for contextual factors in research.
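Why a mechanical bright-line rule misleads can be seen with a little arithmetic (a hypothetical illustration, not part of the ASA statement): even when every null hypothesis is true, the chance that at least one of m independent tests slips under p < 0.05 grows rapidly with m.

```python
def chance_of_false_positive(m, alpha=0.05):
    """Probability that at least one of m independent tests of true null
    hypotheses yields p < alpha purely by chance."""
    return 1 - (1 - alpha) ** m

for m in (1, 20, 100):
    print(m, chance_of_false_positive(m))
```

With twenty analyses, the odds of at least one spuriously “significant” result are better than even; with a hundred, it is a near certainty. Hunting through analyses for the one that crosses the bright line is precisely the p-hacking behavior the statement warns against.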
The fourth issue stated by ASA concerns transparency in reporting research findings. For ASA, “p-values and related analyses should not be reported selectively”. It is the researcher’s duty to report and disclose the rationale behind data collection, analysis, and computations.
ASA reaches two main conclusions in its paper. The statement advocates ending the use of a single index as a substitute for scientific reasoning, and it stresses the relevance of good research practice. The Board of Directors emphasizes that scientific research must meet high standards of study design, provide appropriate numerical and graphical summaries of the data, reflect a cogent understanding of the phenomenon being studied, interpret results in context, report completely, and convey what the data summaries actually mean.