Fundamental weaknesses of “casual” benchmarking

In Jeffrey Pfeffer and Robert I. Sutton, Hard Facts, Dangerous Half-Truths & Total Nonsense: Profiting from Evidence-Based Management (Harvard Bus. School Press 2006) at 7, the authors criticize many benchmarking projects, which they deride as “casual benchmarking,” because those projects suffer from two fundamental problems. “The first is that people copy the most visible, obvious, and frequently least important practices.” A law department example might be convergence, the unthinking reduction in the absolute number of law firms retained.

The second problem is that law departments “often have different strategies, different competitive environments, and different business models – all of which make what they need to do to be successful different from what others are doing.” The company’s cultural attitude toward lawyers, its regulatory environment, and its entwinement with key law firms are all reasons why law departments should not casually adopt benchmarking’s so-called best practices (See my post of May 14, 2005 on benchmarks for metrics and benchmarks for practices.).

Pfeffer and Sutton attribute the core cause of sloppy benchmarking to managers’ failure to “ask the basic question of why something might enhance performance” (emphasis in original). They properly point out that “if you can’t explain the underlying logic or theory of why something should enhance performance” (at 8), your benchmarked borrowing will likely founder.
