Methodology and benchmarks: artifacts of collecting or handling the data that undesirably influence results

Scientists use the term artifacts for “observations in their research that are produced entirely by some aspect of the method of research,” as Daniel Kahneman explains in Thinking, Fast and Slow (Farrar, Straus & Giroux 2011) at 110. Artifacts, the weeds in the garden of benchmarks, crop up when the way data is collected or prepared for analysis distorts the findings. This risk of skewed results goes to methodology, not to sample size or analysis (See my posts of Feb. 19, 2010: representativeness of survey respondents; May 20, 2010: four methodological bumps; May 25, 2010: effective response rates; and June 13, 2010: example of well-described methodology.).

Many artifacts lurk around benchmark surveys of law departments:

Delivery: It might make a difference whether the survey was mailed, available only online, or administered by telephone or in person (See my post of Feb. 12, 2009: included telephone solicitation.).

People who fill out the survey: The knowledge of the person who actually completes the survey matters (See my post of April 25, 2011: 65% are GCs for GC Metrics survey.).

Topics covered: The collection of compensation data, for example, might skew the respondent group for other benchmark metrics in some systematic way.

Countries and languages: Which countries, and for that matter which languages, predominate among the respondents, and was the survey translated (See my post of Sept. 20, 2010: many don’t prefer English.).

Questions: The number of questions a survey asks, and more broadly the order, format, and style of the questions themselves, can alter the overall results (See my post of June 23, 2010: 7 posts on clarity of terms.).

Favored population: The benchmark organizer might focus, intentionally or not, on a particular industry or size of prospective participant (See my post of Oct. 28, 2011: drop-off in Hildebrandt survey perhaps due to over-emphasis on very large departments.).

Time period open and covered: A survey that closes its data collection earlier than another could produce a seasonality distortion (See my post of Nov. 26, 2011: shortened period to collect data.).
