People bang on to me about research evidence and educational practice, yet when you present them with published evidence, they often don't want to hear it. Given the quality of the data, or at least the presentation of the data, in most education research, perhaps that's not surprising. This paper is a very clear exposition of how to improve this woeful situation. If you are a reviewer or an editor for education research, please read it and think about your responsibilities.
The Other Half of the Story: Effect Size Analysis in Quantitative Research. (2013) CBE-Life Sciences Education, 12(3), 345-351
Abstract: Statistical significance testing is the cornerstone of quantitative research, but studies that fail to report measures of effect size are potentially missing a robust part of the analysis. We provide a rationale for why effect size measures should be included in quantitative discipline-based education research. Examples from both biological and educational research demonstrate the utility of effect size for evaluating practical significance. We also provide details about some effect size indices that are paired with common statistical significance tests used in educational research and offer general suggestions for interpreting effect size measures. Finally, we discuss some inherent limitations of effect size measures and provide further recommendations about reporting confidence intervals.
"In education research, statistical significance testing has received valid criticisms, primarily because the numerical outcome of the test is often promoted while the equally important issue of practical significance is ignored. As a consequence, complete reliance on statistical significance testing limits understanding and applicability of research findings in education practice. Therefore, authors and referees are increasingly calling for the use of statistical tools that supplement traditionally performed tests for statistical significance. One such tool is the confidence interval, which provides an estimate of the magnitude of the effect and quantifies the uncertainty around this estimate. A similarly useful statistical tool is the effect size, which measures the strength of a treatment response or relationship between variables. By quantifying the magnitude of the difference between groups or the relationship among variables, effect size provides a scale-free measure that reflects the practical meaningfulness of the difference or the relationship among variables."
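To make the quoted point concrete, here is a minimal sketch of one common effect size index, Cohen's d for a two-group comparison, together with an approximate 95% confidence interval. The function name and the exam-score data are hypothetical, invented purely for illustration; this is not the paper's own analysis, and the confidence interval uses a simple normal approximation rather than any particular method the authors recommend.

```python
# Illustration only: Cohen's d with an approximate 95% CI.
# The data and function name are hypothetical examples.
import math
import statistics

def cohens_d_with_ci(group1, group2, z=1.96):
    """Cohen's d for two independent samples, with a normal-approximation CI."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Approximate standard error of d for the confidence interval
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical exam scores for a treatment and a control section
treatment = [78, 85, 90, 74, 88, 82, 91, 79]
control = [70, 75, 80, 68, 77, 72, 74, 71]
d, (lo, hi) = cohens_d_with_ci(treatment, control)
print(f"Cohen's d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The key point the excerpt makes is visible here: a t-test on these data would only say whether the difference is unlikely under the null, whereas d expresses how large the difference is in standard-deviation units, and the interval conveys the uncertainty around that magnitude.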
"Our intent is to emphasize that no single statistic is sufficient for describing the strength of relationships among variables or evaluating the practical significance of quantitative findings. Therefore, measures of effect size, including confidence interval reporting, should be used thoughtfully and in concert with significance testing to interpret findings. Already common in such fields as medical and psychological research due to the real-world ramifications of the findings, the inclusion of effect size reporting in results sections is similarly important in educational literature."