Ranting about methodological shortcomings of surveys doesn’t help me much. If I pick at this survey because it uses misleading averages, that survey because it hides the number of participants, and another survey because its sample size is very small, other collectors of survey data may still not understand, overall, what to do right. So, let me state positively what I believe a report based on data from law departments should describe about its participants (See my post of July 21, 2008: survey methodology with 40 references, 25 internal references.).
- How many people in law departments did the surveyor invite?
- What was the source of the invitation group, such as all law departments of the Fortune 500, members of ACC, or general counsel who belong to the New Jersey General Counsel group?
- How were the responses obtained, such as online, by hardcopy, by telephone, or during a conference?
- Could there be multiple respondents from the same law department?
- How many people or law departments responded, stated as a percentage of the invitee total?
- What were the positions, countries, industries, size of departments – in general, the demographics – of the respondents?
- Is there any reason to think the non-respondents and the respondents differ materially (See my post of Feb.19, 2010: representativeness in survey respondents with 13 references.)?
- What limitations did the analysts note about the participant pool? For example, that it was dominated by pharmaceutical companies or by large French departments.
- How long did data collection run, and with what follow-up?
- How many responses were complete and credible? What controls for accuracy were applied?
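The arithmetic behind two of the items above, stating responses as a percentage of invitees and reporting how many responses survived a completeness screen, is simple enough to sketch. The figures below are purely hypothetical placeholders, not drawn from any actual survey:

```python
# Hypothetical figures for illustration only; a real report would
# disclose its own counts at each stage.
invited = 1200    # law-department contacts invited to participate
responded = 264   # responses received
complete = 241    # responses judged complete and credible

# Response rate: responses as a percentage of the invitee total.
response_rate = responded / invited * 100

# Completion rate: usable responses as a percentage of all responses.
completion_rate = complete / responded * 100

print(f"Response rate: {response_rate:.1f}% ({responded} of {invited} invited)")
print(f"Complete and credible: {completion_rate:.1f}% of responses ({complete})")
```

A report that shows each of these counts, rather than only the final usable number, lets readers judge for themselves how much non-response may have skewed the results.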
Many people seek data from legal departments, but few of them honor the basic obligation of a researcher to lay out their collection methodology fully (See my post of March 2, 2008: surveys of law departments with 72 references.).