Articles on Benchmarks

Dispersion of branch offices and number of blogs supported by U.S. law firms

Continuing this series on law firms and their blogs, I hypothesized that firms with more widely dispersed branch offices would support more blogs. My reasoning was that if your footprint of clients and prospective clients is broad, you need marketing efforts that reach broadly. Prospective clients everywhere can read blogs, so blogs inherently reach broadly. On the other hand, firms whose offices cluster around their largest office may rely relatively more on local marketing.

The plot below shows the total number of blogs hosted by groups of firms where the firms are categorized by different average distances of their branch offices from their main office.

[Plot: blogs by average distance from largest office]

Single-office firms are at the left of the X-axis, and the short column tells us that they host only 2 or 3 blogs. What I termed “local” firms have branch offices at an average distance of less than 100 miles from their main office: “local < 100”. As a group they hosted 17 blogs. “State” firms have branch offices at an average distance of between 100 and 200 miles, so I thought of them, roughly speaking, as covering one state. If the average distance of the firm’s branch offices was between 200 miles and 400 miles, the firm was categorized as “contiguous,” since its offices were likely clustered in its home state and bordering states. The category “multi-state < 1000” suggests firms that have established branch offices beyond bordering states, but perhaps not spanning the country, which is what the “national” category represents.
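To make the cutoffs concrete, here is a minimal Python sketch of the binning just described. The function and its inputs are my own illustration, not the code behind the plot:

```python
# Hypothetical sketch of the distance categories described above.
# avg_miles is the average distance of a firm's branch offices from
# its main office; office_count distinguishes single-office firms.
def distance_category(avg_miles, office_count):
    if office_count == 1:
        return "single office"
    if avg_miles < 100:
        return "local < 100"
    if avg_miles < 200:
        return "state < 200"
    if avg_miles < 400:
        return "contiguous 200-400"
    if avg_miles < 1000:
        return "multi-state < 1000"
    return "national"
```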

My hypothesis is borne out for the most local firms, but otherwise the aggregate numbers of blogs by category are fairly close. Admittedly, the category labels are crude, but the average miles are correct. This plot starts to give us at least some idea of the correlation between numbers of blogs and networks of branch offices.

Law firms without blogs and whether number of lawyers might be causal

I have previously described a set of 65 U.S. law firms with 200 to 300 lawyers; the full list of firms and the background appear in that earlier post.

Of that group, fourteen firms support no blog, as best I can determine. Those fourteen firms, identified in the plot that follows by shortened names, are Archer Greiner, Boies Schiller, Cahill Gordon, Chapman Cutler, Honigman Miller, Loeb Loeb, Miller Canfield, Quintairos Prieto, Robins Kaplan, Shutts Bowen, Stroock Stroock, Vedder Price, Wachtell Lipton, and Williams Connolly.

My hypothesis was that smaller firms would predominate, simply because they have fewer lawyers who might choose to start and sustain a blog. To test this hypothesis, I combined my blog data set with the number of lawyers at each firm in 2013, from ALM’s NLJ350 data set. The plot sorts the firms from the smallest at the lower left to the largest at the upper right.
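As a minimal sketch of that merge-and-sort step, with made-up firms and counts rather than the actual data sets:

```python
# Hypothetical sketch (invented firms and figures) of combining the
# blog counts with the NLJ350 lawyer counts and sorting by firm size.
blogs = {"Firm A": 0, "Firm B": 3, "Firm C": 0, "Firm D": 7}
lawyers_2013 = {"Firm A": 205, "Firm B": 290, "Firm C": 245, "Firm D": 260}

# Sort firms from smallest to largest and flag the non-bloggers.
for firm in sorted(blogs, key=lawyers_2013.get):
    label = "no blog" if blogs[firm] == 0 else f"{blogs[firm]} blogs"
    print(f"{lawyers_2013[firm]:>4} lawyers  {firm}: {label}")
```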

From this data set and inspection of the plot, the hypothesis is false. As the plot below shows, the non-blogging firms spread quite evenly across the spectrum of firm size. They do not cluster at the lower left, which is occupied by the relatively smaller firms (those in the 180-to-240 lawyer range).

[Plot: non-blogging firms, June 24]

Spring Release of GC Metrics published; take part now to get the Summer Release!

We sent out the Spring Release last week. It provides benchmark data on staffing and spending from 286 companies. The release shows medians on six fundamental benchmarks, such as total legal spend as a percentage of revenue, along with a range of other results.

If you would like a copy of the Spring Release, complete the confidential online survey. Aside from some demographic questions such as name, email, and industry, the no-cost survey asks for your 2013 numbers of lawyers, paralegals, and other staff; inside and external legal spend; and revenue. Participants will receive the Summer Release in August.

If your external spend is high, consider actions other than outside counsel management

A post on LDO Buzz back in April reprinted an article originally published in InsideCounsel. The article covers the usual points regarding what benchmark data to consider, what drives those metrics, and some steps to take to address problems. Nothing new, nothing objectionable.

One line, however, missed a key point: “High outside counsel spending as a percent of revenues compared to peers and/or increasing spending levels over time indicates (sic) the need to more effectively manage outside counsel.”

True, if your outside counsel spend has run at 0.25 percent of revenue for several years (or is increasing) while your industry’s median is 0.20 percent, you should take a look at how you use outside counsel. But you might also look at whether your internal staff is adequate, because with too few lawyers, paralegals, or other support staff, your department might resort to external counsel more than would be necessary with the right talent. Further, you might look at how clients use the law department, since if they ask for services that consume too much time, or for inappropriate services, there will be more need for outside counsel.

The point is that if benchmarks on external spending deserve attention, outside counsel management is only one of several directions to evaluate.

Poisson analysis can tell us how much data to review to be confident in our findings

Andy Kraftsow wrote a piece for Inside Counsel (February 21, 2014). He explained the mathematics of the Poisson distribution to show how, in discovery, to dramatically reduce the number of documents that must be reviewed to understand what a collection says about the issues.

Most of the piece explains the iterative process of requesting documents and categorizing them by keywords and phrases into what Kraftsow calls an “organizational schema.”

He then highlights the advantage of a Poisson calculation. “Assume that the organizational schema consists of 50 categories and that each category has been populated with 2,000 documents. Do you need to read all 100,000 documents to understand what the collection says about each of the 50 issues? Poisson says “no.” You need only read 15,000.”

Kraftsow explains: “To be 95 percent certain you have seen all of the relevant language that appears in more than 1 percent of the documents in the category (a “rare event”), you need only read 300 documents in that category. In other words, by reading 300 randomly selected documents from each category, you are 95 percent certain to see the relevant language that appears in all but 50 (1 percent) of the 2,000 documents in each category.” Thus, 300 times 50 categories equals 15,000 documents, or 15 percent of the collection.
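The arithmetic behind the 300-document figure is easy to verify. Below is a minimal Python sketch of the standard Poisson calculation (my illustration, not Kraftsow’s tool): the chance of seeing zero instances of language that appears in a fraction p of the documents, after reading n random documents, is approximately e^(-np), so we pick the smallest n that pushes that chance below 5 percent.

```python
import math

p = 0.01           # prevalence of the "rare" language: 1 percent of documents
confidence = 0.95  # desired certainty of seeing that language at least once

# Poisson approximation: P(zero sightings in n reads) ~= exp(-n * p).
# Solve exp(-n * p) <= 1 - confidence for n.
n = math.ceil(-math.log(1 - confidence) / p)
print(n)                 # 300 documents per category

categories = 50
print(n * categories)    # 15,000 documents in all, 15 percent of 100,000
```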

As analysts tackle larger and larger collections of data that managers of lawyers care about, Poisson-based calculations will help them figure out how much they need to analyze to reach a high degree of confidence in their conclusions.

The data perfection syndrome that may hobble some general counsel

It is a mistake to think that your data has to be complete and clean for you to push ahead with analytics. You will leave on the table significant savings and insights that could be realized even from imperfect and provisional models, or from conclusions based on partial or not-fully-scrubbed data. For example, if you did no more than study the distribution of timekeepers who bill time to you from the five firms you use the most, that would be progress.


Lawyers like completeness and tidiness, but neither is a feature of complex data.  Resist the conservative reins!  This idea came from the Deloitte Review (undated) at page 16.  Data is never perfect, so it is better to get your hands dirty and work with what you have than to delay, spend money, grow frustrated, and perhaps never learn anything.  Plunging in will help you figure out what to collect and how to collect it.

Might availability of benchmark metrics cause a “race to the top”?

Let’s assume that in the coming years general counsel who give a thought to law department benchmarks can readily find some of those basic metrics.  If they can find them without submitting their own department’s data, they may decide not to submit if they know they compare unfavorably.  If they foresee, for instance, that their total legal spending is out of line with their industry peers, they may conclude that they should let the sleeping dogs of embarrassing metrics lie.


If that decision happens often, then law department benchmark participants will tip more and more toward those departments that believe they are well situated in comparison to the metrics.  There will be a “race to the top” where relatively poor performers drop out and benchmarks become tougher and tougher.  They will also grow less and less representative.


This would be a shame, because then the entire industry would have nothing but a distorted sense of the typical range of metrics.  After all, you would not want to base your sense of body mass index on BMI figures gathered only from marathon runners.
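A quick simulation illustrates the distortion. This sketch uses invented numbers: generate spend benchmarks for 1,000 departments, let the unfavorable half stop submitting, and compare the medians.

```python
import random
import statistics

random.seed(0)

# Hypothetical benchmarks: legal spend as a percent of revenue for
# 1,000 departments (a made-up, roughly lognormal distribution).
spends = [random.lognormvariate(-1.6, 0.5) for _ in range(1000)]
full_median = statistics.median(spends)

# Suppose departments that compare unfavorably (above-median spend)
# stop submitting; only the "well situated" half remains.
remaining = [s for s in spends if s <= full_median]
survivor_median = statistics.median(remaining)

print(round(full_median, 3))      # the true industry median
print(round(survivor_median, 3))  # the tougher, unrepresentative median
```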

You can get Release 4.0 of the GC Metrics benchmark survey: more than 1,100 participants

Release 3.0 of the General Counsel Metrics benchmark survey of staffing and spending went out two weeks ago.  It covered 1,079 law departments in 28 industries.

You can get Release 3.0 if you take part before December 8th.

There is no cost to complete the quick, confidential survey and receive the Releases.  Aside from some demographic questions such as name, email, and industry, the survey asks for six 2012 figures: numbers of lawyers, paralegals, and other staff; inside and external legal spend; and revenue.

Please comment if you have questions, or write me directly.

A metric proxy for the value of a patent – in how many countries is the patent registered

There is no reliable way to measure a patent’s value. But, according to an article in the Economist, January 5, 2013 at 52, “one can use a rough-and-ready yardstick: in how many places did the inventors seek a patent for the same technology?”


Somewhere the data is available to show that a given patent has been filed for and granted in a given number of countries.  On a parallel track, the more revenue a company earns internationally, the more widespread one would expect its patents to be, which might distort the value proxy.  In the United States, 27% of inventors seek to patent their ideas abroad; in Europe, 40% do.  Each is a rough indicator of the other, although patents may be the forward-looking indicator, since revenue follows.


Some companies routinely register new patents in tiers of countries.  To the degree that companies blanket patent like that, a metric based on value-indicated-by-number-of-countries offers less insight.


Both of these metrics ought to correlate with the percentage of a law department’s lawyers who are based internationally. Thus, when combined, these three metrics ought to point toward an index of globalization.  Any index has the potential for benchmark comparisons.
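As a thought experiment only, such an index could be as simple as an average of the three metrics after each is scaled to a 0-to-1 range; the weights, scaling, and inputs below are mine, purely for illustration:

```python
# Hypothetical globalization index from the three metrics discussed.
# The two share arguments are already expressed as 0-to-1 proportions.
def globalization_index(countries_per_patent, max_countries,
                        intl_revenue_share, intl_lawyer_share):
    # Scale average countries-per-patent against an assumed maximum.
    patent_spread = countries_per_patent / max_countries
    # Equal weights are an arbitrary starting point.
    return (patent_spread + intl_revenue_share + intl_lawyer_share) / 3

# Example: patents in 6 of an assumed 20 countries, 40% international
# revenue, and 25% of lawyers based abroad.
print(round(globalization_index(6, 20, 0.40, 0.25), 2))  # 0.32
```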

Law department benchmarks are not law department opinions counted up

Regarding law departments, we often use the term “benchmark metrics” loosely.   Start with “metrics.”  A metric is something you can count that exists independently of the counting.  The square feet of a law department’s office space is a metric; the amount paid in overtime to secretaries is a metric; the number of law firms retained in Canada is a metric.  Before anyone measured or counted, the feet, pay, and firms existed.


When you collect metrics over time for yourself or from several departments, and calculate the average, median, or some other summary of those metrics, you have created a benchmark metric.  It is a benchmark because you can compare yourself to the pattern of metrics.
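In code, creating a benchmark metric and comparing yourself to it could look like this minimal sketch, with invented figures:

```python
import statistics

# Total legal spend as a percent of revenue reported by several
# departments (invented figures).
peer_spend_pct = [0.18, 0.22, 0.25, 0.31, 0.20, 0.27, 0.24]

# The median of the collected metrics is the benchmark metric.
benchmark = statistics.median(peer_spend_pct)
our_spend_pct = 0.25

print(benchmark)                               # 0.24
print(round(our_spend_pct - benchmark, 3))     # 0.01 above the median
```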


By contrast, if someone asks law department respondents “Will you increase your office square feet next year?” you can calculate the percentage that reply “Yes” and “No” and “Don’t know,” but that finding is not a benchmark metric by my definition.  Nothing existed until you asked the question, tallied the responses, and described the distribution of the responses.  Similarly, “Please rank the following methods for managing outside counsel costs” does not produce what I think of as a metric, let alone a benchmark.  It produces a tally, which gives some insight but no comparison.


To benchmark your department’s metrics is to see how you stand comparatively on metrics about existing, countable things.  To find out distributions of opinions and views is something else.