Without being a zealot about the value of scoring RFP responses, I still advocate that, as part of the review, evaluators convert responses to comparative scores. If an evaluator reads all of the answers to the same question, one after the other, the evaluator should be able to rank them from best to worst, even if the single-question review is done in isolation from the other questions, even if the answers to the question vary greatly in style and content, and even if the evaluator is not well informed about the topic.
For example, consider a question about how the firm proposes to handle pending matters re-assigned to them from the current firm (See my post of Sept. 12, 2008: transfers of matters to new counsel with 8 references.).
If five law firms describe their approach and economics, a reader of their five answers back to back will be able to rank them from one to five. Do such a ranking for several questions, add up the comparative scores across those questions, and the aggregate scores create at least a platform for discussing one view of the relative strengths of the firms. Another step is to weight those questions, and their scores, according to your sense of the questions' relative importance to you (See my post of April 4, 2008: a better method to rank RFP responses.).
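The arithmetic behind this is simple enough to sketch. The short example below illustrates the idea with made-up firm names, questions, ranks, and weights; nothing here comes from an actual RFP, and the weighting scheme (multiplying each rank by the question's importance) is just one plausible way to do it.

```python
# Hypothetical weighted comparative scoring of RFP responses.
# All firms, questions, ranks, and weights are invented for illustration.

# Each firm's rank on each question (1 = best answer, 5 = worst).
ranks = {
    "transition of pending matters": {"Firm A": 1, "Firm B": 3, "Firm C": 2, "Firm D": 5, "Firm E": 4},
    "staffing and rates":            {"Firm A": 2, "Firm B": 1, "Firm C": 4, "Firm D": 3, "Firm E": 5},
    "relevant experience":           {"Firm A": 3, "Firm B": 2, "Firm C": 1, "Firm D": 4, "Firm E": 5},
}

# Relative importance assigned to each question (higher = more important).
weights = {
    "transition of pending matters": 3,
    "staffing and rates": 2,
    "relevant experience": 1,
}

def weighted_totals(ranks, weights):
    """Sum each firm's ranks, multiplied by the weight of each question."""
    totals = {}
    for question, firm_ranks in ranks.items():
        for firm, rank in firm_ranks.items():
            totals[firm] = totals.get(firm, 0) + rank * weights[question]
    return totals

totals = weighted_totals(ranks, weights)
# Because 1 is the best rank, the lowest weighted total is the strongest showing.
for firm, total in sorted(totals.items(), key=lambda item: item[1]):
    print(firm, total)
```

With these invented numbers, Firm A comes out ahead because it ranked first on the most heavily weighted question, even though another firm beat it elsewhere; that is exactly the kind of discussion platform the aggregate scores are meant to create.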