Statistical significance has applicability to law department managers

With many metrics, you can test mathematically whether the difference from one year to the next, or from one group of law departments to another, is merely measurement error or random fluctuation within normal bounds, or whether the change is evidence of a genuine difference. Tests for what is called “statistical significance” assess, for two sets of data, how unlikely it is that the difference between them occurred by chance.

Take the difference in spending per lawyer between one industry and another. It’s a common benchmark number, but obviously there is some variation simply due to how the data were gathered. If the delta (amount of difference) between two industries’ metrics passes a statistical significance test, such as at what is commonly referred to as the 0.05 level (a 5 percent chance of a fluke), you can responsibly conclude that something meaningful, not haphazard, accounts for the variance.

The calculation for statistical significance depends on the number of data points in each set, the absolute size of the gap, and the level of certainty you want to test for. If there are 20 law departments in each industry group, and the aerospace industry shows $900,000 of spending per lawyer as against the automotive industry’s $960,000, at the common 0.05 level the difference may not be statistically significant. Analysts and journalists should not make anything of the $60,000 gap, as it may be due to factors other than an underlying, reliable difference.
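To make the idea concrete, here is a minimal sketch of that two-industry comparison as a two-sample t-test in Python. The per-department figures are simulated and hypothetical; only the sample size of 20 and the approximate group means of $900,000 and $960,000 come from the example above, and the $150,000 standard deviation is an assumption made purely for illustration.

```python
# Hypothetical illustration of testing whether a $60,000 gap in spending
# per lawyer between two industry groups is statistically significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Simulate spending per lawyer for 20 departments in each industry.
# The $150,000 standard deviation is an assumed figure for this sketch.
aerospace = rng.normal(loc=900_000, scale=150_000, size=20)
automotive = rng.normal(loc=960_000, scale=150_000, size=20)

# Welch's two-sample t-test: is the gap larger than random sampling
# variation alone would plausibly produce?
t_stat, p_value = stats.ttest_ind(aerospace, automotive, equal_var=False)

print(f"Mean gap: ${automotive.mean() - aerospace.mean():,.0f}")
print(f"p-value:  {p_value:.3f}")

# At the conventional 0.05 level, treat the gap as meaningful only if
# the p-value falls below 0.05.
if p_value < 0.05:
    print("The gap is statistically significant at the 0.05 level.")
else:
    print("The gap could plausibly be random fluctuation.")
```

With samples this small and this much spread in the data, the test will often fail to reach significance, which is exactly the point of the caution above: a $60,000 swing by itself is not proof of a real difference between the industries.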
