
The methodology of a survey tells you how much you can rely on the data

If someone surveys law departments to collect data and publishes findings, they should provide readers with a minimum set of background facts about their methodology.  The findings have credibility only to the extent that the methodology holds up to scrutiny.  Here are some questions they should answer.


How many law departments responded, out of how many invited, and by what method of invitation?  Good surveys have a sizeable number of respondents: perhaps 100 or more, representing 5 to 10 percent of everyone reached by a broad-based invitation.
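
As a rough illustration of that rule of thumb, here is a minimal sketch with entirely hypothetical figures (the counts and thresholds below are assumptions, not data from any actual survey):

```python
# Hypothetical response-rate check against the rule of thumb above.
# All figures are made up for illustration.
invited = 2000      # departments reached by the broad-based invitation
responded = 120     # departments that completed the survey

response_rate = responded / invited
print(f"Respondents: {responded}")
print(f"Response rate: {response_rate:.1%}")

# Rule of thumb from the text: 100+ respondents, roughly 5 to 10 percent
# of those invited.
meets_rule_of_thumb = responded >= 100 and 0.05 <= response_rate <= 0.10
print("Meets the rule of thumb:", meets_rule_of_thumb)
```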


Were the respondents representative of law departments generally?  If the analysis purports to speak broadly about U.S. law departments, then the respondents to the survey need to roughly match the characteristics of that wide range of departments.
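
One hedged way to probe representativeness, sketched below with hypothetical figures and revenue bands, is to compare the respondents' mix against an assumed mix of the broader population of departments using a chi-square goodness-of-fit test (here with scipy):

```python
# Hypothetical representativeness check: compare respondents' revenue-band
# mix to an assumed mix of U.S. law departments overall. All figures and
# band labels are illustrative assumptions.
from scipy.stats import chisquare

population_share = {"under $1B": 0.60, "$1B-$10B": 0.30, "over $10B": 0.10}
respondent_counts = {"under $1B": 50, "$1B-$10B": 45, "over $10B": 25}

total = sum(respondent_counts.values())
observed = [respondent_counts[band] for band in population_share]
expected = [population_share[band] * total for band in population_share]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.3f}")
# A small p-value suggests the respondents' mix differs from the assumed
# population mix, i.e., the sample may not be representative.
```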


Did the respondents know what they were talking about when they responded?  People will guess or fabricate answers unless they have actual data or experience to back them up.  Don’t ask paralegals about total settlements paid last year.


Were the survey and its protocols effectively designed?  For example: clear wording, a neutral order and presentation of questions, requests for specific numbers rather than ranges, and one point per question rather than conjunctive phrasing (not “How many cases did you lose by jury or resolve by settlement?”).


Were the questions demographic only, or did they ask for numbers or rankings?  “Are you centralized?” is not as useful as “On a scale of 1-7, how centralized are you?”


What were the commercial interests of the surveyor?  Academics strive for neutrality; vendors of software seek results that promote their offerings.


Were comparisons over time based on reasonably similar sets of respondents?  A trend identified over a year or two needs to rest on a reasonably consistent base of respondents.  If a large portion of the respondents to the second survey did not take part in the first, how can you know that the change reflects an underlying shift rather than a different mix of participants?
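
One way to probe that risk, sketched here with hypothetical data and column names, is to recompute the year-over-year change on the matched panel of departments that answered both surveys and see whether the headline trend survives:

```python
# Hypothetical matched-panel check for a year-over-year trend.
# Data, column names, and figures are illustrative assumptions.
import pandas as pd

year1 = pd.DataFrame({
    "department_id": [1, 2, 3, 4, 5],
    "spend_per_lawyer": [500, 520, 480, 510, 495],
})
year2 = pd.DataFrame({
    "department_id": [3, 4, 5, 6, 7],   # only three of five departments repeat
    "spend_per_lawyer": [505, 530, 500, 650, 700],
})

# Headline trend: compare all respondents in each year.
print("All respondents:", year1.spend_per_lawyer.mean(),
      "->", year2.spend_per_lawyer.mean())

# Matched panel: restrict to departments present in both surveys.
panel = year1.merge(year2, on="department_id", suffixes=("_y1", "_y2"))
print("Matched panel: ", panel.spend_per_lawyer_y1.mean(),
      "->", panel.spend_per_lawyer_y2.mean())
# If the headline change shrinks or disappears in the matched panel, the
# "trend" may reflect a different mix of participants, not an underlying shift.
```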