To compare metrics over time, what matters is not how long a survey has run but how consistent its base of respondents is
To proclaim changes in broad metrics, such as total legal spending from last year to this, a survey needs a fairly consistent core of respondents answering similar questions over that period. If there is churn – last year 100 took part, but this year only 50 of them returned, joined by 50 newcomers – or if the questions and their definitions shift – last year asked about worldwide lawyers and this year about total or domestic lawyers – then year-over-year comparisons of the results are silly.
Hence, the touted “continuity” of a benchmark survey may hold no water. Yes, 25 years ago Equitable’s legal department commissioned Price Waterhouse to do a consulting project, from which emerged the distant ancestor of one of today’s staffing and spending surveys. Later, the survey trundled over to Hildebrandt, continued when Thomson Reuters acquired it, and kept plugging away when BakerRobbins merged in. Now, a survey that places great stock in its lineage perches with a fourth company, the fledgling HBR Consulting (not to be confused with the Harvard Business Review – HBR). Continuity of a survey doesn’t matter; continuity of participants does, so findings ought to disclose turnover in the ranks.