There’s methodology in that madness of benchmarking

Having devoted so much of my effort over the past five months to the General Counsel Metrics benchmark survey, I reached for a Bard pun to lead into my posts on benchmark methodology.

Several of them grapple with the meanings of key terms (See my post of Dec. 15, 2009: how to define “full-time equivalent”; Feb. 4, 2010: nuances of “lawyer,” as in lawyers per billion; March 2, 2010: “industry”; Feb. 23, 2010: revenue and accounting treatments of insurance companies; March 30, 2010: more “lawyer” nuances; April 16, 2010: are European patent agents “lawyers”?; and June 16, 2010: what do we cover with “litigation”?).

A handful of posts fret about the completeness and accuracy of data submitted on benchmark surveys (See my post of Sept. 29, 2009: needed: independent audits of benchmark methodology; Feb. 10, 2010: varying reliability of benchmark metrics; Feb. 19, 2010: representativeness of participants; Feb. 22, 2010: incomplete revenue figures from privately held companies; May 10, 2010: data confounded by branch offices; May 20, 2010: four methodological challenges when you collect benchmarks; and June 14, 2010: reliability of answers to a question on outsourcing.).

Almost furtively, I returned several times to the topic of respondents who did not want their companies listed (See my post of March 29, 2010: Dynegy bares all; May 24, 2010: the difference between anonymous and confidential responses; May 31, 2010: why a few GCs request anonymity; and June 11, 2010: a good reason to request non-disclosure.).

Other posts are methodology one-offs (See my post of March 8, 2010: innovations in benchmarking methods; April 13, 2010: data on the headquarters country of participants; April 30, 2010: three points about precision; May 25, 2010: response-rate terminology; June 7, 2010: excellent presentation of demographic data; and June 13, 2010: more on explaining the survey population.).
