
On survey methodology

Many times I have dissected survey results where I suspected poor methodology, so I have pulled together my posts on survey methodology. I have organized them under a framework (in bold) from the Manual for Complex Litigation (See my posts of Jan. 16, 2006: the Manual on trustworthy surveys; and Oct. 26, 2007: only poor methodology could explain bizarre results.).

The population was properly chosen and defined.

The sample reported on needs to have been representative of the population (See my posts of May 14, 2005 and March 28, 2005.). This starting point implicates how the respondents were invited (See my posts of Oct. 16, 2006 and June 22, 2008.). The survey should also have obtained enough respondents for the statistics to be meaningful (See my posts of May 11, 2008: large survey on morale in law departments; April 22, 2007: power tests and sample size; and March 28, 2005: number of respondents.). Low participation rates cast doubt on the attribution of findings to the entire population (See my posts of Dec. 19, 2007 and April 9, 2005: few respondents from a large invitee pool; and May 8, 2007 #4: bill padding example.).
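None of the posts above walk through the arithmetic, but a minimal sketch may help show why the number of respondents matters. The confidence level, target margin of error, and invitee pool below are illustrative assumptions, not figures from any survey, and the formula is the standard one for a proportion under simple random sampling:

```python
import math

def required_sample_size(margin_of_error, population_size, p=0.5, z=1.96):
    """Approximate respondents needed for a proportion at 95% confidence.

    Uses n0 = z^2 * p * (1 - p) / e^2, then applies a finite-population
    correction for a known pool of invitees.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    # Finite-population correction: a smaller invitee pool needs fewer responses.
    n = n0 / (1 + (n0 - 1) / population_size)
    return math.ceil(n)

# Hypothetical example: 1,000 law departments invited, +/-5 points desired.
print(required_sample_size(0.05, 1000))   # roughly 278 respondents
```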

If multiple lawyers from the same law departments respond, there is an issue of representativeness (See my posts of Sept. 5, 2007 and Jan. 30, 2008: probability-weighted surveys.).
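Those posts do not spell out the mechanics, but one common remedy, sketched below with invented department names and answers, is to down-weight departments that supplied several respondents so that each department counts once in the result:

```python
from collections import Counter

# Hypothetical responses: (department, answer on a 1-to-5 scale)
responses = [("Acme", 4), ("Acme", 5), ("Acme", 3), ("Beta", 2), ("Gamma", 4)]

counts = Counter(dept for dept, _ in responses)

# Weight each response by 1 / (responses from that department),
# so every department contributes equally to the average.
weighted_sum = sum(score / counts[dept] for dept, score in responses)
weighted_avg = weighted_sum / len(counts)

unweighted_avg = sum(score for _, score in responses) / len(responses)
print(round(unweighted_avg, 2), round(weighted_avg, 2))  # 3.6 vs. 3.33
```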

As to selection bias, see my post of July 21, 2008 (8 references cited). The questions always remain whether an individual respondent accurately and fairly speaks for his or her law department (See my post of March 17, 2006.) as well as whether the respondent has sufficient knowledge to be credible (See my posts of Jan. 30, 2006: mixture of knowledgeable and less knowledgeable respondents; and Sept. 3, 2006: views likely to be expressed on a survey despite inadequate knowledge.).

The data gathered were accurately reported.

Unless the questions are competently posed, the answers will be less useful and cannot be accurately reported (See my posts of July 3, 2007: asking for ranges of numbers instead of specific figures; April 8, 2007: clear definitions, such as “smaller firms”; Aug. 26, 2007: same order and terminology; and May 11, 2008: similar words and phrases used throughout.).

Accurate interpretation and reporting raise other concerns (See my posts of Feb. 9, 2008: content analysis; Aug. 30, 2006: false precision in numerical findings; and July 27, 2007 and July 20, 2008: year-to-year comparisons need groups whose participants were mostly the same.).

The data were analyzed in accordance with accepted statistical principles.

Statistical robustness is a vast topic, so I have only gathered representative posts from Law Department Management (See my posts of April 22, 2007 and Aug. 30, 2006: sampling error; Oct. 31, 2007: formula for margin of error and two references; Dec. 9, 2005: margin of error and sample size; Sept. 25, 2006 and Oct. 18, 2006: net scores combined; March 28, 2005 and July 20, 2008: averages compared to medians; and May 31, 2006: generally on statistics and references cited.).
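To make the margin-of-error and average-versus-median points concrete, here is a minimal sketch with invented numbers, assuming a 95 percent confidence level and simple random sampling:

```python
import math
import statistics

# Hypothetical finding: 60 of 150 respondents report using e-billing.
n, successes = 150, 60
p = successes / n
margin_of_error = 1.96 * math.sqrt(p * (1 - p) / n)   # about +/- 7.8 points
print(f"{p:.0%} +/- {margin_of_error:.1%}")

# Hypothetical outside-counsel spend (in $ millions): one outlier drags the
# average well above the median, which is why both should be reported.
spend = [1.2, 1.4, 1.5, 1.6, 1.8, 2.0, 14.0]
print(round(statistics.mean(spend), 2), statistics.median(spend))  # 3.36 vs. 1.6
```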

Rankings allow much more analysis, yet don’t require much more effort on the part of respondents (See my posts of May 11, 2008: morale boosters; Oct. 17, 2005: ratings compared to rankings; June 10, 2007: a better way for surveys to rank; March 10, 2005: setting priorities; April 8, 2007: ranking compensation by practice area; and April 4, 2008: rankings and percentages.).
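As a hypothetical illustration of the extra analysis rankings permit, the short sketch below averages each item's rank across respondents; the initiatives and ranks are invented, not drawn from any survey:

```python
# Hypothetical: three respondents rank four initiatives, 1 = highest priority.
rankings = [
    {"e-billing": 1, "knowledge mgmt": 2, "convergence": 3, "metrics": 4},
    {"e-billing": 2, "knowledge mgmt": 1, "convergence": 4, "metrics": 3},
    {"e-billing": 1, "knowledge mgmt": 3, "convergence": 2, "metrics": 4},
]

# Average rank per initiative; forced rankings avoid the bunching of ratings,
# where respondents can score everything as "very important."
items = rankings[0].keys()
avg_rank = {item: sum(r[item] for r in rankings) / len(rankings) for item in items}
for item, rank in sorted(avg_rank.items(), key=lambda kv: kv[1]):
    print(f"{item}: {rank:.2f}")
```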

The questions were clear and not leading.

My particular bugaboo is multiple-choice questions (See my post of July 15, 2008: multiple-choice questions and 15 references.).

The process was conducted so as to assure objectivity.

Everyone who spends time and money to conduct a survey does so for a purpose, so there is no such thing as a completely objective survey. But bias toward a particular direction of findings is rampant in surveys of legal departments (See my post of Aug. 5, 2007: skewed surveys and 13 references cited.).