Rees Morrison’s Morsels #161: posts longa, morsels breva

More law department sales. Vendors, let me know of your successes. I still hope to announce license arrangements. So, to prime the pump, note that LexisNexis CounselLink was chosen by Fannie Mae and Hawaiian Electric, according to Met. Corp. Counsel, Nov. 2011 at 41.

Arguments by analogy are fallacies. “Almost any analogy between any two things contains some grain of truth, but one cannot tell what that is until one has an independent explanation for what is analogous to what, and why.” David Deutsch elaborates on this point in The Beginning of Infinity: Explanations that Transform the World (Viking 2011) at 371. To say “a well-run law department is a sewing machine” requires the reader to step back and know all kinds of fundamentals about both sides of that analogic metaphor (See my post of Oct. 12, 2010: the fundamental cognitive function of metaphors.).

Eigenvectors and matrix mathematics. A matrix could be as simple as a table of the five law firms you paid the most during the past five years. The first column names the firm and the five columns to the right give, for each year, the ranking of the firm, where a 1 means it was paid the most that year, a 2 the second most, and so on. The 5×5 block of rankings is a matrix, and mathematical tools can calculate a “score” for each firm. That score comes from the so-called “first-rank” (principal) eigenvector. Eigenvectors underpin many sophisticated mathematical techniques, we learn from John D. Barrow, 100 Essential Things You Didn’t Know You Didn’t Know: Math Explains Your World (Norton 2008) at 223-24, who gives an example of a matrix (See my post of Oct. 29, 2011: matrices.).
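For the numerically inclined, the scoring idea can be sketched in a few lines of Python with NumPy. The rankings below are invented for illustration, and converting ranks to weights before taking the principal eigenvector is my own simplification, not Barrow’s worked example:

```python
import numpy as np

# Hypothetical data: rows are five law firms (A-E), columns are the
# firm's payment rank in each of five years (1 = paid the most).
rankings = np.array([
    [1, 1, 2, 1, 2],
    [2, 3, 1, 2, 1],
    [3, 2, 3, 4, 3],
    [4, 5, 4, 3, 5],
    [5, 4, 5, 5, 4],
], dtype=float)

# Convert rank to a payment weight so that a bigger number means
# the firm was paid more: weight = 6 - rank.
weights = 6 - rankings

# The principal ("first-rank") eigenvector of the symmetric matrix
# weights @ weights.T yields one score per firm.  np.linalg.eigh
# returns eigenvalues in ascending order, so the last column of the
# eigenvector matrix is the principal one.
_, eigenvectors = np.linalg.eigh(weights @ weights.T)
principal = np.abs(eigenvectors[:, -1])

scores = principal / principal.sum()  # normalize so scores sum to 1
for firm, score in zip("ABCDE", scores):
    print(f"Firm {firm}: {score:.3f}")
```

The firm ranked highest most consistently earns the largest score, so the single eigenvector condenses five years of rankings into one number per firm.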

Strange attractors in phase space. Those who research complexity and chaos have discovered that many systems, while outwardly appearing disordered and chaotic, in fact disclose underlying order. The orderly portions, referred to as “strange attractors,” occur in what researchers call “phase space,” a space in which all possible states of the system are represented. These ideas come from Olivia Parr Rud, Business Intelligence Success Factors (Wiley 2009) at 34. Strange attractors create order from turbulence and can be described by a few simple mathematical equations. All of this esoteric stuff creates a metaphor for forces that bring about effective order in law departments (See my post of Nov. 8, 2010: chaos theory as metaphor.).

Causal thinking with stories compared to statistical thinking with numbers

Our evolution equipped us to create causal explanations for events much more readily than to grasp underlying statistical explanations, to use the terms of Daniel Kahneman, Thinking, Fast and Slow (Farrar, Straus & Giroux 2011). Causal explanations, often in the form of a narrative, explain what has happened as the result of people or understood forces doing explicable things to bring it about. “Chris had a good day in court, so we won.” “Mega Corp.’s CEO had to close the deal before year end to get his bonus, so they conceded several points.”

Statistical thinking, by contrast, derives conclusions about individual cases from properties of categories and ensembles. Chris’s company typically prevails on 75 percent of its cases. Or, more than 60 percent of deals that reach a certain stage go on to close.

Savings attributed to the start of a new process or new software or new training often exemplify a causal explanation. “We did X and Y followed.” Our quick System 1 mind favors neatly recognized patterns and stories that fit snugly together. In fact, it weaves them at the snap of a finger and out of few facts. But they may often be wrong or fanciful. Our slower System 2 mind can turn to probabilities and bigger-picture explanations.

The improvement might well be statistical: the random walk of events, regression to the mean, the Hawthorne effect, selective attention, confirmation or other cognitive bias – the blind stumble of chance.
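To see how readily chance alone generates story-worthy patterns, here is a small simulation (the 75 percent win rate echoes the example above; the 40 cases and the random seed are arbitrary):

```python
import random

random.seed(2011)  # fixed seed for a reproducible illustration

# Simulate 40 independent case outcomes for a company that prevails
# 75 percent of the time -- pure chance, with no causal story behind
# any run of wins.
outcomes = [random.random() < 0.75 for _ in range(40)]

# Find the longest winning streak produced by chance alone.
longest = current = 0
for won in outcomes:
    current = current + 1 if won else 0
    longest = max(longest, current)

print(f"wins: {sum(outcomes)} of 40, longest streak: {longest}")
```

Run it a few times with different seeds and long streaks appear routinely, yet System 1 stands ready to credit each one to a lawyer’s good day in court.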

The typical longevity of a matter management system averages five to seven years

Early in the data collection this year for the GC Metrics benchmark survey, 69 law departments stated how many years they had been using their matter management system. The average number of years those departments had had their system installed was 6.4 years. Later, with 652 law departments in the survey, twice as many (139) had submitted longevity data. The average dropped a year, to 5.4, and the median number of years was 4. The law department with the longest history of a matter management system had just reached the two-decade mark!

Given the cost, diversion of time and energy, and risks that face law departments when they license and install a matter management system, it surprises me to see a five-year average life. But it could be that several of these law departments had another system before the current one, so their average years of using that kind of software is higher. The question asked only about the current system and how many years since its installation date.

For law departments, cyclical and structural changes

Economists often refer to changes that are cyclical, meaning that there is some regularity in the pattern of change and reversion, and changes that are structural. Structural changes are permanent, profound, often take effect gradually, and are sometimes hard to identify at the time. For example, the demographic change of an aging population appears to be a structural shift, one that is much more permanent and deep than a cyclical swing.

For law departments, some management ideas are cyclical. I suspect that decentralization of reporting ebbs and flows in popularity. Convergence may be a cyclical swing of the pendulum as might partnering and a strong focus on cost control.

Structural changes that will affect law departments may include a shift in power and influence away from law firms. Over a number of years, offshoring may prove to be another structural change, as may the unbundling of services and third-party investments in law firms. Perhaps US-style litigation is a structural change that will ripple out from our shores. Some people forecast that technology will transform the practice of law (See my post of Sept. 25, 2008: Cisco’s Mark Chandler with 30 references.).

A method to represent data with more balance: Winsorize it

To lessen the influence of outliers, a set of numbers can be Winsorized. The technique, named after Charles P. Winsor, sets the tail values at the extremes equal to some specified percentile of the data (a variant clamps values beyond, say, four standard deviations from the mean). Here’s how it’s done. For a 90 percent Winsorization, the bottom 5 percent of the values are set equal to the value corresponding to the 5th percentile while the upper 5 percent of the values are set equal to the value corresponding to the 95th percentile. This adjustment is not equivalent to throwing some of the extreme data away, which happens with trimmed means (See my post of May 28, 2007: five percent trimmed mean.). Once you Winsorize your data, your median will not change but your average will.

Using the data from the 652 participants so far in the GC Metrics benchmark survey, I Winsorized the data for number of lawyers. Using a 90 percent process, at the small end I changed 9 departments to 1 lawyer, the 5th percentile value from the bottom (the other 23 in the bottom 5 percent were already at 1). At the high end, I changed the 32 with the most lawyers to the 95th percentile figure, 110 lawyers. After Winsorization, the median of 6 lawyers stayed the same but the average dropped from 25.96 lawyers to 18.03 – a decline of 30 percent, because several very large law departments were drastically Winsorized.
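For those who want to try this on their own numbers, a 90 percent Winsorization takes only a few lines of Python with NumPy. The lawyer counts below are made up for illustration; the survey data itself is not reproduced here:

```python
import numpy as np

def winsorize(values, lower_pct=5, upper_pct=95):
    """Set values below/above the given percentiles equal to those
    percentile values.  Unlike trimming, no observations are dropped."""
    arr = np.asarray(values, dtype=float)
    lo = np.percentile(arr, lower_pct)
    hi = np.percentile(arr, upper_pct)
    return np.clip(arr, lo, hi)

# Illustrative (made-up) lawyer counts with one extreme outlier.
lawyers = [1, 1, 2, 3, 4, 6, 6, 8, 12, 500]
adjusted = winsorize(lawyers)

print("median:", np.median(lawyers), "->", np.median(adjusted))
print("mean:  ", np.mean(lawyers), "->", np.mean(adjusted))
```

As the post notes, the median survives the adjustment untouched while the average falls sharply once the one outsized value is pulled back to the 95th percentile.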

Part LXI in my series of collected metaposts

  1. Bet the company lit (See my post of Oct. 27, 2011: bet-the-company with 8 references.).

  2. Fulbright litigation survey (See my post of Nov. 26, 2011: F&J litigation survey with 21 references.).

  3. Leadership II (See my post of Oct. 19, 2011: leadership with 10 references.).

  4. Legal research online by law firms (See my post of Oct. 10, 2011: online systems for law-related research with 7 references.).

  5. Matrices, matrix (See my post of Oct. 29, 2011: matrices, grids, tables with 9 references.).

  6. Number of lawyers in the US (See my post of Nov. 21, 2011: count of in-house lawyers in the U.S. with 7 references.).

  7. Online legal resources (See my post of Nov. 26, 2011: Internet information for legal guidance with 9 references.).

  8. Percentage of work done inside (See my post of Oct. 30, 2011: work amount inside versus outside and methodology with 9 references.).

  9. Pros and cons of inside career (See my post of Oct. 10, 2011: advantages of practicing law in a company with 10 references.).

  10. Supply and demand (See my post of Nov. 26, 2011: supply and demand with 6 references.).

Chaos theory – non-linear functions in legal departments

Chaos theory studies phenomena where small changes in the initial conditions result in major changes in consequences (a butterfly flapping its wings in Brazil famously results in a hurricane off Bermuda). As used for physical systems, chaotic events are non-linear: they are not wildly unpredictable and mysterious – the popular notion of chaos – but they have emergent properties far different from what our minds easily grasp. Feedback loops account for some of the unexpected outcomes and the unpredictability.
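A tiny simulation makes the sensitivity concrete. The logistic map below is a standard textbook example of a non-linear system, not anything specific to law departments; the parameter and starting values are illustrative:

```python
# The logistic map x -> r*x*(1-x) is a classic non-linear system.
# Two starting values that differ by one part in a million soon
# diverge completely -- sensitive dependence on initial conditions.
# The parameter r = 3.9 puts the map in a chaotic regime.
def logistic_trajectory(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # differs by one part in a million

# The largest gap that opens up between the two trajectories.
divergence = max(abs(x - y) for x, y in zip(a, b))
print(f"initial gap: {abs(a[0] - b[0]):.0e}, max gap later: {divergence:.2f}")
```

A microscopic difference at the start grows into a gap of ordinary size within a few dozen iterations, which is exactly the butterfly-wing point.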

A word of advice from an in-house lawyer early in the deliberations over a pricing decision could have massive consequences years later (antitrust investigation avoided; millions in profits rightfully earned). A well-timed settlement offer takes the company down one path; botched, the bills and vexations pile up for years. If we borrow the metaphor of non-linear systems and apply it to legal teams, we risk mis-applying it, which is the lesson of Stephen E. Kellert, Borrowed Knowledge: Chaos Theory and the Challenge of Learning Across Disciplines (Univ. Chic. 2008). Even so, such a powerful idea, caught as a conceptual metaphor, can help us understand and describe some things that happen to law departments.

The basic economics of demand for legal services and supply of them

Economists would distill the corporate legal market into the DEMAND by corporations for legal services and the SUPPLY of those services.

Demand, what drives legal services, has been a topic returned to more than a few times on this blog (See my post of July 2, 2007: what drives a company to hire its first in-house lawyer; and Dec. 7, 2010: six primary drivers of total legal spending.). Many topics reflect changes in demand for the advice of lawyers: global spread of business, regulatory requirements, complexity, intellectual property’s ascendance.

Supply, what meets the demand for legal services, has many forms. Foremost for organizations with a legal team of employees are inside lawyers, followed by external counsel. Recently, LPOs have muscled in, along with educated clients who practice self-service. Increasingly, online resources will be available to dispense legal guidance and documents (See my post of Oct. 31, 2005: online legal resources; Nov. 15, 2005: online resources; Jan. 10, 2006: lawyers and research online; Jan. 13, 2006: free online information; March 9, 2007: the price of legal information is being driven to zero; April 27, 2007: the internet and four generations of resources; Jan. 25, 2008: Martindale-Hubbell and shared evaluations of law firms; Sept. 9, 2008: economics of information; and Jan. 11, 2010: Ning with 600+ IP blogs.).

Notions of supply and demand have shown up together on this blog in specific contexts (See my post of Feb. 1, 2006: auctions; March 20, 2008: in-house compensation; Sept. 22, 2008: social networks for lawyers, a metapost; Nov. 17, 2008: without fungible sellers, legal economy violates standard economic assumptions; May 11, 2009: Say’s Law that supply creates its own demand; and April 29, 2010: no gap between supply of value by law firms and demand for it according to classical economists.).

The echo-chamber effect when surveys tap similar groups year after year

Many posts on this blog have dipped into the well of the annual Fulbright & Jaworski surveys of litigation data. Each year the firm polls a few more than 400 law departments in the United States and the United Kingdom.

If the responding group year after year remains significantly similar, might not the results reflect some degree of self-reference? If one year’s results show a decline in international arbitrations, to imagine one possibility, might the respondents read that and be influenced in their behavior during the following year, or just in how they remember and answer the next year? They hear the echo of their own voices and answer questions to some degree influenced by their own reported behavior.

My question ranges more broadly than F&J’s admirable investigations. We could surmise the same reverberation and influence from the annual ACC/Serengeti surveys. The echo chamber should have little effect on quantities that are easily counted, such as budget figures or staffing numbers. Its risk rises substantially for qualitative assessments, such as the perceived effectiveness of alternative billing arrangements. When respondents have heard their own collective assessments one year, the bandwagon effect the next year may reinforce beliefs and thus answers in the same direction.

As it happens, this blog has cited the Fulbright survey many times (See my post of March 8, 2005: Fulbright & Jaworski survey; April 8, 2005: lawsuits per billion of revenue; Oct. 19, 2005: measures of law department success; Oct. 27, 2005: median time to resolution of “roughly half a year”; Oct. 27, 2005: litigation costs and settlement rates at trial; Oct. 27, 2005: one quarter of companies did not sue during the year; Oct. 27, 2005: dubious data on litigation costs; Oct. 29, 2005: class action metrics; Oct. 29, 2005: diversity; Oct. 30, 2005: negative qualities of law firms; Oct. 26, 2007: discovery spending; Oct. 29, 2007: litigation matters per inside counsel; Oct. 29, 2007: drop in e-discovery counsel; Oct. 29, 2007: fixed fees and frequency; Oct. 30, 2005 #3: lack of innovation by law firms; Oct. 31, 2007: insurance coverage against nine kinds of litigation; Dec. 6, 2007: discovery costs; Dec. 23, 2008: no data on lawsuits per billion of revenue; Dec. 23, 2008: many sub-$100 million companies have in-house lawyers; Dec. 23, 2008: size and unlisted status; and Dec. 26, 2008: odd data on litigation budgets.).

Methodology and benchmarks: artifacts of collecting or handling the data that undesirably influence results

Scientists call artifacts “observations in their research that are produced entirely by some aspect of the method of research,” as explained by Daniel Kahneman, Thinking, Fast and Slow (Farrar, Straus & Giroux 2011) at 110. Artifacts, weeds in the gardens of benchmarks, crop up when the way the data is collected or prepared for analysis distorts the findings. This risk of skewed irregularity goes to methodology, not to sample size or analysis (See my post of Feb. 19, 2010: representativeness of survey respondents; May 20, 2010: four methodological bumps; May 25, 2010: effective response rates; and June 13, 2010: example of well-described methodology.).

Many artifacts lurk around benchmark surveys of law departments:

Delivery: it might make a difference whether the survey was mailed, available only online, or administered by telephone or in person (See my post of Feb. 12, 2009: included telephone solicitation.).

People who fill out the survey: the knowledge of the person who actually completes the survey matters (See my post of April 25, 2011: 65% are GCs for GC Metrics survey.).

Topics covered: for example, the collection of compensation data might skew the respondent group for other benchmark metrics in some systematic way.

Countries and languages: which countries – and, for that matter, which languages – predominate matters, as does whether the survey was translated (See my post of Sept. 20, 2010: many don’t prefer English.).

Questions: the number of questions the survey asks and, more broadly, the order, format, and style of the questions themselves can alter the overall results (See my post of June 23, 2010: 7 posts on clarity of terms.).

Favored population: The benchmark organizer might focus, intentionally or not, on a particular industry or size of prospective participant (See my post of Oct. 28, 2011: drop off in Hildebrandt survey perhaps due to over-emphasis on very large departments.).

Time period open and covered: a survey that closes its data collection earlier than another could produce a seasonality distortion (See my post of Nov. 26, 2011: shortened period to collect data.).