When you hear of a statistical finding, you should want to understand that number’s reliability. If the research that produced the number were repeated several times, how much would the results vary?
Consider an example. Let’s make the simplifying assumption that the participants in the GC Metrics benchmark survey make up a reasonably random and representative sample of at least U.S. law departments. The margin of error for findings from a set of normal numbers shrinks in inverse proportion to the square root of the size of the set. Hence, a benchmark finding based on 200 law departments – the participant base of the HBR Consulting (née Hildebrandt Baker Robbins) report – involves the square root of 200, which is roughly 14.1. A finding from the GC Metrics report, based on 800 law departments, four times as many, involves the square root of 800, roughly 28.3 – twice as large. Because the margin of error divides by that square root, it shrinks in half from the smaller survey to the four-times-larger one.
A close approximation of the margin of error is 0.98/√n, where n is the sample size. With 800 law departments (n = 800), the margin of error works out to approximately 3.5 percent. A finding based on that group could vary up or down by 3.5 percent and be just as reliable or likely as the finding given. For 200 law departments, the swing is 6.9 percent – four times more participants cuts the confidence interval in half, so the results from the larger set are more precise and reliable (See my post of Dec. 9, 2005: margin of error and sample size; Aug. 30, 2006: sampling error; April 22, 2007: error; and Oct. 31, 2007: formula for margin of error.). With benchmarks, respondent size matters.
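For readers who want to check the arithmetic themselves, here is a short sketch in Python of the 0.98/√n approximation (which comes from the worst-case 95 percent formula, 1.96 × √(0.25/n)); the function name is mine, not from any survey report:

```python
import math

def margin_of_error(n: int) -> float:
    """Approximate 95% margin of error for a survey proportion.

    Uses the worst-case formula 1.96 * sqrt(0.25 / n), which
    simplifies to 0.98 / sqrt(n).
    """
    return 0.98 / math.sqrt(n)

# The two participant bases discussed above
for n in (200, 800):
    print(f"n = {n}: margin of error is about {margin_of_error(n):.1%}")
```

Running it confirms the figures in the text: about 6.9 percent for 200 departments and about 3.5 percent for 800 – quadruple the sample, halve the margin.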