Practice Perfect 662
Effect Size – Don’t Consider the Validity of a Research Study Without It

Talking about research and statistics is a surefire way to make people stop reading today’s editorial.

But wait! Stop! Don’t delete me yet. I have some life left as a useful blog post! Hear me out.


Of all the statistical concepts we learn in school and residency, one of the most important and least discussed is the concept of effect size. You see, we read all those journal articles that list significant P values, and then we think we’re being “evidence-based.” We read excellent journals like The Journal of Foot and Ankle Surgery and think we’re up-to-date on the newest and best practices pertinent to the lower extremity. But we may actually be fooling ourselves.

Are you intrigued? Maybe just a teensy tiny little bit?

For those of you who are, let’s keep the momentum going. For those who are not, feel free to come back later, when all your friends are talking about the importance of effect size and you have no clue what’s going on. Hey, who knows? Maybe this will be the next big dinner party topic!


What is Effect Size?

Let’s first define effect size: the magnitude of some phenomenon between groups. If you punched me in the face (because I roped you into reading this editorial) versus rubbing my face with a feather (because you’re so happy you kept reading), there would clearly be a difference in effect with regard to my facial appearance: black and blue versus my usual face (some would say the former is an improvement over the latter!).

Effect size refers to either the raw difference between groups (termed absolute effect size) or a standardized measure of effect (calculated to transform the effect into an easily understood scale).
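
To make that distinction concrete, here is a minimal sketch (with made-up pain scores, and assuming Python with NumPy is available) that computes both flavors for two hypothetical treatment groups:

import numpy as np

# Hypothetical pain scores (0-100 scale) for two treatment groups
group1 = np.array([62, 58, 70, 65, 61, 68, 64, 59])
group2 = np.array([55, 50, 60, 57, 52, 58, 54, 51])

# Absolute effect size: the raw difference between group means
absolute_effect = group1.mean() - group2.mean()

# Standardized effect size (Cohen's d): the raw difference divided by
# the pooled standard deviation, putting the effect on a unitless scale
n1, n2 = len(group1), len(group2)
pooled_sd = np.sqrt(((n1 - 1) * group1.var(ddof=1) +
                     (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = absolute_effect / pooled_sd

print(f"Absolute effect: {absolute_effect:.1f} points")
print(f"Cohen's d:       {cohens_d:.2f}")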

What’s the “Effect” of Effect Size?

Let’s take a look at a hypothetical example. If a medication improves some physiological measure by 50%, that sounds significant. But what if I told you that same 50% improvement was a jump from a value of 1 to 1.5 on a scale of 100? It was a 50% relative improvement, but in absolute terms the change was only half a point out of 100: the overall effect was minimal.

Herein lies the crux of the problem. A result can be statistically significant but have an effect size so small that it is clinically insignificant. The P value, that commonly reported number we are so used to seeing in almost every research study, speaks only to statistical significance; it says nothing about the magnitude of the effect. When reporting a research study, both the effect size (the substantive difference) and the P value (statistical significance) should be given.


Similarly, when designing a research study, an expected effect size must be estimated before the study begins: it is the input to the power analysis that determines how many subjects are needed to avoid a Type II error (concluding there is no effect when one actually exists) and to ensure, to a chosen degree of certainty, that the study has acceptable power.
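
As a sketch of what that looks like in practice (assuming the statsmodels library, and using a made-up anticipated effect size), a power analysis for a simple two-group comparison might be:

from statsmodels.stats.power import TTestIndPower

# Anticipated effect size, hypothetically drawn from pilot data or prior literature
expected_d = 0.5          # a "medium" Cohen's d
alpha = 0.05              # significance threshold
desired_power = 0.80      # 80% chance of detecting the effect if it exists

# Solve for the number of subjects needed per group
n_per_group = TTestIndPower().solve_power(effect_size=expected_d,
                                          alpha=alpha,
                                          power=desired_power)
print(f"Subjects needed per group: {n_per_group:.0f}")  # roughly 64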


The Big and Small Problem with P-Values

Have you ever read a research study with a very large N (study group size)? Just as very small studies have potential issues, so do overly large ones. If the sample size is large enough, a statistical test will demonstrate a statistically significant difference even when the effect is so small it is clinically meaningless. This is one of the ways researchers can manipulate their data to create so-called significant results: keep adding patients to the study until the result becomes statistically significant. This is a form of “P-hacking.”

Put another way, very small differences between groups, even when statistically significant, are often clinically meaningless. A study with a sample size of 10,000 is likely to produce a significant P value even if the difference between groups is negligible.

The takeaway here is to use P values with caution and to look for studies that report effect sizes because, unlike significance tests, effect size is independent of sample size.
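
To see this concretely, here is a small simulation sketch (assuming Python with NumPy and SciPy, and using synthetic data with a deliberately negligible true difference): as the sample size grows, the P value tends toward significance while Cohen’s d stays put near the trivial true value.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

for n in (50, 500, 10_000):
    # Two groups whose true means differ by a negligible 0.05 SD
    a = rng.normal(loc=0.00, scale=1.0, size=n)
    b = rng.normal(loc=0.05, scale=1.0, size=n)

    t, p = stats.ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = (b.mean() - a.mean()) / pooled_sd

    # P shrinks as n grows; d hovers near the trivial true value (~0.05)
    print(f"n={n:>6}  p={p:.4f}  d={d:.3f}")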


Let’s finish this off with a table of some commonly reported measures of effect size.1 Most research studies contain the numbers needed to calculate an effect size, but in all too many studies it goes unreported (likely because reporting it would reveal how little the findings actually matter). If you take nothing else away from today’s editorial (assuming you made it this far), it’s that you should not trust P values as the be-all and end-all of statistical tests. Look at the P value, and then figure out the effect size.


Commonly Reported Measures of Effect Size

Between-Group Comparisons

Cohen’s d
Description: d = (mean of group 1 − mean of group 2) / standard deviation of either group
Interpretation: small = 0.2; medium = 0.5; large = 0.8; very large = 1.3

Odds ratio
Description: odds of the outcome in group 1 / odds of the outcome in group 2 (1 = equal odds of the outcome in either group)
Interpretation: small = 1.5; medium = 2; large = 3

Relative risk ratio
Description: ratio of the probability of the outcome in group 1 versus group 2
Interpretation: small = 2; medium = 3; large = 4

Measures of Association

Pearson’s correlation (r)
Description: ranges from −1 to 1
Interpretation: small = ±0.2; medium = ±0.5; large = ±0.8
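
If it helps to see these measures side by side, here is a short sketch (with hypothetical 2×2 outcome counts and made-up paired measurements, assuming Python with NumPy) computing each one:

import numpy as np

# Hypothetical 2x2 table of outcome counts:
#            outcome   no outcome
# group 1       30          70
# group 2       15          85
a, b = 30, 70   # group 1
c, d = 15, 85   # group 2

odds_ratio = (a / b) / (c / d)                  # ratio of odds of the outcome
relative_risk = (a / (a + b)) / (c / (c + d))   # ratio of outcome probabilities

# Hypothetical paired measurements for a correlation
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.2, 1.9, 3.4, 3.8, 5.1, 6.3])
pearson_r = np.corrcoef(x, y)[0, 1]

print(f"Odds ratio:    {odds_ratio:.2f}")    # 2.43: "medium" per the table
print(f"Relative risk: {relative_risk:.2f}")  # 2.00: "small" per the table
print(f"Pearson r:     {pearson_r:.2f}")      # strong positive correlation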
Best wishes.

Jarrod Shapiro, DPM
PRESENT Practice Perfect Editor
[email protected]
References
  1. Sullivan GM, Feinn R. Using Effect Size – Or Why The P Value Is Not Enough. J Grad Med Educ. 2012;4(3):279-282.
