Does date of participation in a benchmark survey reflect significantly different metrics?

An open question has been whether the time of participation in a benchmark metrics survey correlates with statistically different samples.  Put differently, do law departments that submit data early in the collection period differ materially on key benchmarks from departments that submit their data months later, in the final stages of data collection?


To research this methodological issue, I took the law departments from U.S. and Canadian companies that signed up for the General Counsel Metrics benchmark survey this year for the first time.  My reasoning was that returning participants from the 2010 or 2011 surveys had likely responded to different factors than newcomers.  There are 191 law departments in the newcomer set, which I divided into thirds by the date they completed the online survey.
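For readers who want to replicate this kind of tercile split on their own data, here is a minimal sketch in Python with pandas. The file name and column names (submitted, lawyers, revenue, total_legal_spend) are assumptions for illustration, not the survey's actual field names.

```python
import pandas as pd

# Hypothetical input: one row per law department, with the survey
# completion date, lawyer headcount, revenue, and total legal spend.
df = pd.read_csv("newcomers.csv", parse_dates=["submitted"])

# Derive the two benchmark metrics discussed in this post.
df["lawyers_per_billion"] = df["lawyers"] / (df["revenue"] / 1e9)
df["spend_pct_revenue"] = df["total_legal_spend"] / df["revenue"] * 100

# Split the newcomers into thirds by completion date.
df["wave"] = pd.qcut(
    df["submitted"].rank(method="first"),
    q=3,
    labels=["Spring", "Summer", "Fall"],
)

# Compare medians across the three waves.
print(
    df.groupby("wave")[["lawyers", "lawyers_per_billion", "spend_pct_revenue"]]
      .median()
)
```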


The earliest third of the participants – very roughly the Spring participants – reported about half a lawyer fewer per billion dollars of revenue than the latest third – think of those as the Fall participants.  The Spring participants also had much lower total legal spending as a percentage of revenue than the Falls (0.35% compared to 0.55%).  In terms of size, the median number of lawyers, three, was dead even for both groups.


I also looked at the middle third, the Summers.  On these medians, they look essentially the same as the Springs.


In short, from this set of data, the early participants have better metrics on two key measures – lawyers per billion dollars of revenue and total legal spending as a percentage of revenue – but they are very much the same size by number of lawyers.  It is hard for me to come up with plausible explanations for the disparity, since all three groups knew their 2011 staffing and spending data for the entire collection period.  More likely, some other factor or factors caused the differences in the metrics, such as a trade group bringing a cluster of its member departments into the survey during one period.
