Reliability of benchmark findings – how representative are the participants?

Do benchmark surveys draw a representative and unbiased pool of participants? All surveys contend with the methodological problem of self-selection. A portion of the people invited to take a survey respond, usually a small one; many decline. The small percentage that elects to complete the survey may be unlike the larger group, so in any voluntary survey it is always to some degree uncertain whether the respondents adequately represent the entire population.

The global survey now underway by General Counsel Metrics, LLC, has garnered in its first month more than 225 respondents from a pool of thousands who have received email invitations or have otherwise learned about the opportunity. As with all surveys, those who study the reports it produces ought to bear in mind that the numbers may be skewed by non-representativeness.

The bias that results from self-selection can alter the results in many ways. It might be that general counsel of relatively well-run legal departments disproportionately fill out benchmark surveys. After all, they know they run a tight ship, they are proud of it, and they welcome quantitative confirmation of their prowess. If respondents fall more into the well-managed camp, then benchmarks look tougher than they really are. It would be like measuring body mass indices only of people at gyms.

On the other hand, maybe general counsel who are under pressure, possibly because their management touch has been less deft, seek guidance from benchmarks. Or those who sense they need to control costs or staffing better gravitate toward taking the survey, in which case benchmark findings look laxer than they really are. It would be like surveying people’s weights at a Weight Watchers convention.

As a general proposition, a general counsel who relegates management to a lesser priority than legal service is more likely to pass up an opportunity to see comparative performance figures than one who knows that just practicing excellent law is not enough.

Or, from a different perspective, new general counsel may tell themselves that they haven’t had time to put their stamp on the department’s performance; others may wish to establish a benchmark baseline so they can show impressive improvement later. Until survey participation is mandatory or penetration rates climb much higher than they are now, it will remain unknown how representative benchmarks are. That said, the larger the number of participants, the closer the metrics come to being solidly representative, to the extent that the different selection biases cancel out.
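The offsetting-bias argument above can be sketched in a short simulation. Everything here is hypothetical and invented for illustration — the spend figures, the response probabilities, and the two respondent camps are assumptions, not real benchmark data. The point is simply that if a "proud, well-run" camp and an "under-pressure" camp over-respond in roughly equal measure, their opposite biases can offset in the combined sample, even though either camp alone would badly skew the benchmark.

```python
# Hypothetical Monte Carlo sketch of two opposing self-selection biases.
# All numbers are invented for illustration, not real benchmark data.
import random
import statistics

random.seed(42)

# Pretend population metric: legal spend as a percent of revenue (hypothetical).
population = [random.gauss(0.40, 0.10) for _ in range(100_000)]
true_mean = statistics.mean(population)

def responds(spend):
    # Well-run departments (low spend) respond out of pride;
    # pressured departments (high spend) respond seeking guidance;
    # mid-pack departments mostly pass the survey by.
    if spend < 0.35:
        return random.random() < 0.30   # proud camp: eager for confirmation
    if spend > 0.45:
        return random.random() < 0.30   # pressured camp: seeking benchmarks
    return random.random() < 0.05       # middling: less motivated

sample = [s for s in population if responds(s)]
proud = [s for s in sample if s < 0.35]
pressed = [s for s in sample if s > 0.45]

print(f"true mean        : {true_mean:.3f}")
print(f"proud camp alone : {statistics.mean(proud):.3f}")    # biased low
print(f"pressured alone  : {statistics.mean(pressed):.3f}")  # biased high
print(f"combined sample  : {statistics.mean(sample):.3f}")   # biases offset
```

In this contrived setup the two camps over-respond symmetrically, so the combined sample mean lands near the true mean while each camp alone is far off. If one camp dominated the responses, the benchmark would stay skewed no matter how many participants joined — which is why the cancellation is a hope, not a guarantee.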

Representativeness has cousins in the world of statistical reliability. Relatives include selection bias in the initial invitations; low participation rates; and weighting (See my post of April 9, 2005: representativeness in surveys and low participation rates; May 14, 2005: representative sample for survey; March 28, 2005: balanced sample for survey; Nov. 19, 2005: D&O claims; March 25, 2005: methodological questions about the solidity of survey responses; March 29, 2005: AIPLA data on costs of patent litigation; March 17, 2006: representativeness of survey respondents; Oct. 16, 2006: self-selection bias as it afflicts surveys; Dec. 1, 2006: selection bias; March 20, 2007: data not representative if drawn selectively from a pool; Aug. 4, 2007: client satisfaction surveys; Feb. 7, 2009: partial views on offshore incidence; and June 15, 2009: weighting findings to make them nationally representative.).


Improve representativeness! Click on the box upper right to take the six-minute, confidential online survey of your fundamental 2009 metrics and get your full report in April.

