The NY Times, Aug. 27, 2006 at 10WK, discusses a smorgasbord of survey methodology risks (See my post of Aug. 29, 2006 on margin of error and subgroups.). False precision and non-randomness deserve comment.
“When a polling story presents data down to tenths of a percentage point, what the pollster demonstrates is not precision but pretension.” An example is BTI’s survey about law department technology (See my post of April 26, 2006 on its survey of “more than 200 lawyers.”). With a margin of error of more than plus or minus ten points, it is misleading to report that respondents most wanted technology solutions to be “user friendly” at the level of 19.3%. BTI should have written much more broadly, such as “somewhere between one in ten and three in ten chose ‘user friendly.’”
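The false-precision point can be made concrete with the standard formula for a survey margin of error, z·√(p(1−p)/n). A minimal sketch in Python, assuming a full sample of 200 (the post's "more than 200 lawyers") and a hypothetical subgroup of 50; the subgroup figure illustrates how slicing a sample can push the margin past plus or minus ten points:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst-case proportion (p = 0.5) for the full sample of roughly 200 lawyers
print(f"n=200: plus or minus {margin_of_error(0.5, 200):.1%}")  # about 6.9 points

# A hypothetical subgroup of 50 respondents -- the margin balloons well past 10 points
print(f"n=50:  plus or minus {margin_of_error(0.5, 50):.1%}")   # about 13.9 points
```

Either way, reporting a figure such as 19.3% to a tenth of a percentage point implies a precision the sample cannot support.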
The Times also emphasizes that respondents to a survey must be randomly selected for the results to be statistically reliable. For example, if a survey of law departments collects data only from an Internet site, randomness evaporates. One reason is that a significant number of in-house lawyers do not venture into the online ether; another is that, of those who do go online, only some choose to respond, which creates a self-selection bias (See my post of May 14, 2005 for an example of selection bias, as well as my fretting about systemic bias in surveys in my posts of April 9, 2005 about a survey by Serengeti and Aug. 27, 2005 about a survey of IT respondents.).
Ironic, isn’t it, how I castigate sloppy surveys but crave more law department data.