How statistics, if drawn from a selective pool of data, can mislead managers

Deep in an article about economics expert witnesses in the Wall St. J., Vol. 249, March 19, 2007, at A1, A11, an astonishing factoid appears at first read. “Peter Nordberg, a partner in the Philadelphia law firm of Berger & Montague, counts 87 cases since 2000 where economic or accounting witnesses have come under scrutiny by federal appeals courts. The witnesses’ testimony was knocked out 40% of the time.”

Shocking! The testimony of four out of ten expensive, fancy-resume experts eviscerated? Can’t you see the headline: “40 percent of experts’ money spent by law departments goes down the drain”?

Wrong. Any time an analysis of metrics draws conclusions from a non-representative set of data, the conclusions must be fairly described. The quoted figure came not from all cases where economic or accounting witnesses took the stand, but only from the small subset of those cases that were appealed. The appeals might have been taken precisely because the expert testimony was suspect.

For another example, if plaintiffs prevail in 65 percent of the cases decided by the United States Supreme Court, it would be mistaken to conclude more broadly that plaintiffs win 65 percent of the time in trial courts; the Court hears only a small, carefully selected fraction of cases. Or, if three quarters of the cases with multinational adversaries taken to arbitration result in a resolution within three months, it is important to realize that the cases taken to arbitration are a subset of all such disputes: only the ones eligible for arbitration or where the parties seek that form of resolution. This error resembles the survivor bias, in that the misstep happens when someone draws a conclusion too broad for the data set to support, since the data itself was pre-selected and partial (see my post of April 2, 2005 on the survivor bias).
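The selection effect described above can be made concrete with a small simulation. The sketch below uses entirely invented numbers (the 5 percent weak-testimony rate, the appeal and knockout probabilities are assumptions for illustration, not figures from the article): if weak expert testimony is far more likely to be appealed than solid testimony, the knockout rate measured among appealed cases will greatly overstate the rate across all cases.

```python
import random

random.seed(0)

# Hypothetical assumptions (illustrative only, not from the article):
# 5% of expert testimony is genuinely weak; weak testimony draws an
# appeal focused on the expert 60% of the time vs. 5% for solid
# testimony; appellate courts knock out weak testimony 80% of the
# time vs. 5% for solid testimony.
N = 10_000
appealed = 0
excluded_in_appealed = 0

for _ in range(N):
    weak = random.random() < 0.05
    p_appeal = 0.60 if weak else 0.05
    if random.random() < p_appeal:
        appealed += 1
        p_knockout = 0.80 if weak else 0.05
        if random.random() < p_knockout:
            excluded_in_appealed += 1

rate_among_appealed = excluded_in_appealed / appealed
rate_overall = excluded_in_appealed / N

print(f"Exclusion rate among appealed cases: {rate_among_appealed:.0%}")
print(f"Exclusion rate across all cases:     {rate_overall:.0%}")
```

Under these made-up inputs the exclusion rate among appealed cases lands in the neighborhood of the article's 40 percent, while the rate across all cases with expert testimony is only a few percent. That gap is the whole point: the appealed cases are a pre-selected, unrepresentative slice.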
