You can get Release 4.0 of the GC Metrics benchmark survey: more than 1,100 participants

Release 3.0 of the General Counsel Metrics benchmark survey of staffing and spending went out two weeks ago.  It covered 1,079 law departments in 28 industries.

You can get Release 4.0, which covers more than 1,100 law departments, if you take part before December 8th.

Here is the URL: https://novisurvey.net/n/GCMetrics2013.aspx. There is no cost to complete the quick, confidential survey and receive the Releases. Aside from some demographic questions such as name, email, and industry, the survey asks for six 2012 figures: the number of lawyers, paralegals, and other staff; inside and external legal spend; and revenue.

Please comment if you have questions, or write me directly.

Reproducible research as an aspirational goal for legal metrics

In the sciences, a recent movement is often referred to as “reproducible research.”  It espouses a philosophy of transparency regarding data and analysis: share the data you collected, what you did to it, and how you carried out your calculations and graphics.  Those who conduct surveys, for example, should make every step of their work clear to others and available to them for review.  They should explain how they gathered their data, what they did to prepare it for analysis, the steps they carried out in the mathematical analyses and then, of course, their conclusions.

 

In the sciences, reproducible research has gone even further, making the actual data sets available to others.  Unfortunately, scientific findings have too often failed to be corroborated by others.  Indeed, there have been some well-publicized instances of fraudulent research, fake data, and unsupportable conclusions.  That sort of check on quality is possible only if someone else can follow your tracks.

 

To the extent that law department data developed by vendors, consultants, and academics is used to produce findings, reproducible research should be the aspiration.  We may not be able to go so far as to expose the actual proprietary data that is collected, but all of us can go much farther than we do now to explain how the data was collected and what was done with it.  Explain your methodology!  Moving in that direction would improve the quality of findings and the reliability of results.

The “industry” of a company and a way to create an index of diversification with entropy measurements

Mostly for lack of a better way to classify companies, benchmark surveys ask respondents to choose from a list of “industries.”  We see those lists all the time: manufacturing, technology, pharmaceutical, and so on.  In the messy real world, we all realize, companies are not so neatly boxed and defined.  Indeed, almost any company of much size does business in what could be considered more than one industry.

 

One way to measure diversification, and therefore to study more accurately the effects of industry on legal staffing and spending, would be to make use of an entropy calculation.  For a firm with N different four-digit SIC business units, where P_i is the proportion of firm sales in SIC code i,

Total entropy = \sum_{i=1}^{N} P_i \ln(1/P_i)

 

That formula is a weighted average of the segments’ sales share, the weight for each segment being the logarithm of the inverse of its share. The measure thus takes into consideration two elements of diversification: (1) the number of segments in which a company operates; and (2) the relative importance of each segment in the total sales of the company.

 

A hypothetical company, called EntropyCo, has two-thirds of its revenue coming from one four-digit SIC code business unit and the other third from a different SIC code business unit.  Its total entropy would then be 2/3 times the natural logarithm of 3/2 plus 1/3 times the natural logarithm of 3, which comes to roughly 0.64.  A comparable company that earned three quarters of its revenue from one business unit and one quarter from the other would have an entropy figure of roughly 0.56, which is lower, as it is less diversified than EntropyCo.
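For the computationally inclined, here is a minimal sketch of that calculation in Python; the function name and the two revenue splits are mine, taken from the hypothetical EntropyCo example above, not from any survey tool.

```python
import math

def total_entropy(shares):
    """Entropy index of diversification: the sum of P_i * ln(1/P_i)
    over the segments' shares of total revenue (the shares sum to 1)."""
    return sum(p * math.log(1 / p) for p in shares if p > 0)

# Hypothetical EntropyCo: two segments with 2/3 and 1/3 of revenue
print(round(total_entropy([2/3, 1/3]), 3))   # about 0.637
# Less diversified comparison: 3/4 and 1/4 of revenue
print(round(total_entropy([3/4, 1/4]), 3))   # about 0.562, a lower figure
```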

 

With the tool of entropy, if participants in a benchmark survey were to break down their revenue by more than one industry, the benchmark metrics could be more finely calibrated.

 

*******************

Take the GC Metrics 2013 survey: https://novisurvey.net/n/GCMetrics2013.aspx.  The no-cost survey asks for the 2012 number of lawyers, paralegals, and other staff; inside and external legal spend; and revenue.  You will receive the Winter Release, which covers more than 1,100 law departments.

Currency conversion and some methodology decisions for benchmark studies

Some benchmark surveys ask for spending data in U.S. dollars and leave it to the participants to convert their non-dollar spending however they choose.  Other surveys, including GC Metrics, accept data in whatever currency the participant uses and then have to decide on a conversion rate.

 

What I have done is take the approximate average exchange rate of the currency against the U.S. dollar for the calendar year involved.  By approximate I mean that I eyeball the exchange rate over the year and pick a figure that seems as representative as possible.  There are undoubtedly more precise ways to convert currencies, but they would be much more computationally intensive and harder to explain to those who receive the report.
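A bare-bones sketch of that conversion step, in Python, might look like the following; the rates are rough placeholders I chose for illustration, not the figures used in the GC Metrics report.

```python
# Convert each department's reported spend to U.S. dollars using one
# approximate average exchange rate for the calendar year.
# These rates are illustrative assumptions, not the report's actual figures.
APPROX_AVG_USD_RATES_2012 = {
    "USD": 1.00,
    "EUR": 1.29,   # assumed approximate 2012 average, USD per EUR
    "GBP": 1.58,   # assumed approximate 2012 average, USD per GBP
}

def to_usd(amount, currency):
    """Convert a reported amount to U.S. dollars with the year's
    approximate average rate; unknown currencies raise a KeyError."""
    return amount * APPROX_AVG_USD_RATES_2012[currency]

# Example: a department that reported 2,000,000 euros of outside spend
print(to_usd(2_000_000, "EUR"))   # 2,580,000 dollars at the assumed rate
```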

*******************

Take the GC Metrics 2013 survey: https://novisurvey.net/n/GCMetrics2013.aspx.  The no-cost survey asks for the 2012 number of lawyers, paralegals, and other staff; inside and external legal spend; and revenue.  You will receive the Winter Release, which covers more than 1,100 law departments.

 

The data perfection syndrome that may hobble some general counsel

It is a mistake to think that your data has to be complete and clean before you push ahead with analytics.  You will leave on the table significant savings and insights that could be realized even from imperfect and provisional models, or from conclusions based on partial or not-fully-scrubbed data.  For example, if you did no more than study the distribution of timekeepers who bill time to you from the five firms you use the most, that would be progress.
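To make the timekeeper example concrete, here is a rough sketch that assumes a partial e-billing extract with columns named firm, timekeeper, and fees; the file name and column names are hypothetical.

```python
import pandas as pd

# Work with the imperfect data at hand: drop rows with missing names
# rather than waiting for a fully scrubbed extract.
invoices = pd.read_csv("ebilling_extract.csv")             # hypothetical file
invoices = invoices.dropna(subset=["firm", "timekeeper"])  # tolerate gaps

# The five firms paid the most, by total fees
top_firms = invoices.groupby("firm")["fees"].sum().nlargest(5).index

# Distribution of distinct timekeepers across those firms
timekeepers_per_firm = (invoices[invoices["firm"].isin(top_firms)]
                        .groupby("firm")["timekeeper"]
                        .nunique()
                        .sort_values(ascending=False))
print(timekeepers_per_firm)
```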

 

Lawyers like completeness and tidiness, but neither is a feature of complex data.  Resist the conservative reins!  This idea came from the Deloitte Review (undated) at page 16.  Data is never perfect, so it is better to get your hands dirty and work with what you have than to delay, spend money, grow frustrated and perhaps never learn anything.  Plunging in will help you figure out better what to collect and how to collect it.

Might availability of benchmark metrics cause a “race to the top”?

Let’s assume that in the coming years general counsel who give a thought to law department benchmarks can readily find some of those basic metrics.  If they can find them without submitting their own department’s data, they may decide not to submit if they know they compare unfavorably.  If they foresee, for instance, that their total legal spending is out of line with their industry peers, they may conclude that they should let the sleeping dogs of embarrassing metrics lie.

 

If that happens often, then the pool of law department benchmark participants will tip more and more toward departments that believe themselves to be well situated in comparison to the metrics.  There will be a “race to the top,” in which relatively poor performers drop out and the benchmarks become tougher and tougher.  They will also grow less and less representative.

 

This would be a shame, because then the entire industry would have nothing but a distorted sense of the typical range of metrics.  After all, you would not want to base your sense of body mass index on BMI metrics gathered only from marathon runners.

A graph, with nodes and edges, that conveys much about a law department’s use of outside counsel

A network graph, in the mathematician’s sense of the term, is a structure made up of nodes and edges.  For example, a law department could represent the law firms it retained during the previous year by means of such a graph.  The department would be the central node of the graph and each law firm would sit at the end of an edge.

 

The visual depiction would be more informative if the edges were of different thicknesses to indicate the amount paid the law firm during the year.  Likewise, the length of the edge could be proportional to the number of matters handled by the law firm.

 

If the data from a law department were represented this way, you could also show the size of the law firms by the area of their node shape (a circle).  One further use of this graph would be to color or shape the node of each law firm according to some other factor, such as the number of timekeepers used by the firm or the number of areas of law serviced.
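A quick sketch of such a graph, using Python’s networkx library with invented firms and figures, could look like this; the scaling divisors are arbitrary display choices, not part of any standard.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical fees paid to each firm and rough head counts
fees = {"Firm A": 2_400_000, "Firm B": 900_000, "Firm C": 350_000}
head_count = {"Law Dept": 40, "Firm A": 1200, "Firm B": 300, "Firm C": 60}

# Star graph: the department at the center, one edge per firm retained
G = nx.Graph()
for firm, paid in fees.items():
    G.add_edge("Law Dept", firm, weight=paid)

pos = nx.spring_layout(G, seed=1)
edge_widths = [G[u][v]["weight"] / 500_000 for u, v in G.edges()]  # fees -> thickness
node_sizes = [head_count[n] * 2 for n in G.nodes()]                # head count -> area

nx.draw_networkx(G, pos, width=edge_widths, node_size=node_sizes,
                 node_color="lightblue")
plt.axis("off")
plt.show()
```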

 

This style of network graph would represent large amounts of information from matter management software in a visually dramatic and insightful way.

 

A metric proxy for the value of a patent – in how many countries is the patent registered

There is no reliable way to measure a patent’s value. But, according to an article in the Economist, January 5, 2013 at 52, “one can use a rough-and-ready yardstick: in how many places did the inventors seek a patent for the same technology?”

 

Somewhere the data is available to show that a given patent has been filed for and granted in a given number of countries.  On a parallel track, the more revenue a company earns internationally, the more widespread one would expect its patents to be, which might distort the value proxy.  In the United States, 27% of inventors seek to patent their ideas abroad; in Europe, 40% do.  Each is a rough indicator of the other, although patents may be the forward-looking indicator, since revenue follows.

 

Some companies routinely register new patents in tiers of countries.  To the degree that companies blanket patent like that, a metric based on value-indicated-by-number-of-countries offers less insight.

 

Both of these metrics ought to correlate with the percentage of a law department’s lawyers who are based internationally. Thus, when combined, these three metrics ought to point toward an index of globalization.  Any index has the potential for benchmark comparisons.
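One simple way to turn those three metrics into such an index, offered only as a sketch with invented numbers, is to rescale each metric to a 0-to-1 range across the departments being compared and then average the three.

```python
# Invented figures: (countries per patent family, share of revenue abroad,
# share of lawyers based abroad) for three hypothetical departments.
departments = {
    "Dept A": (8, 0.55, 0.40),
    "Dept B": (3, 0.20, 0.10),
    "Dept C": (12, 0.70, 0.65),
}

def rescale(values):
    """Min-max rescale a list of numbers to the 0-to-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

metric_columns = list(zip(*departments.values()))               # one list per metric
scaled_rows = list(zip(*(rescale(c) for c in metric_columns)))  # one row per department

globalization_index = {name: round(sum(row) / len(row), 2)
                       for name, row in zip(departments, scaled_rows)}
print(globalization_index)   # {'Dept A': 0.6, 'Dept B': 0.0, 'Dept C': 1.0}
```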

US legal system costs for liabilities are higher than those of the Eurozone by fifty percent

The U.S. legal system is the world’s most costly, according to a study released this week by the U.S. Chamber Institute for Legal Reform (ILR). The study, conducted by NERA Economic Consulting, shows that the American system costs about one and a half times as much as the Eurozone average.  I would be remiss if I did not mention that the ILR may have a political agenda.

The NERA study compared liability costs as a percentage of a country’s gross domestic product. The 13 countries included in the study have similar levels of regulation and legal protection, leading analysts to conclude that higher costs could be attributed to more frequent and/or costly claims.

According to the NERA study, U.S. costs were about 1.7 percent of GDP.  For our $13 trillion economy, that finding implies that “liability costs” consume on the order of $221 billion.  That amount includes outside counsel costs, but also many other items.

Countries on the low end of the range—the Netherlands, Belgium, and Portugal—had costs around 0.4 percent. Legal liability costs in the U.S. were found to be about 50 percent more than costs in the U.K.

Law department benchmarks are not law department opinions counted up

Regarding law departments, we often use the term “benchmark metrics” loosely.  Start with “metrics.”  A metric is something you can count that exists independently of the counting.  The square feet of a law department’s office space is a metric; the amount paid in overtime to secretaries is a metric; the number of law firms retained in Canada is a metric.  Before someone measured or counted, the feet, the pay, and the firms existed.

 

When you collect metrics over time for yourself or from several departments, and calculate the average or median or whatever of those metrics, you have created a benchmark metric.  It is a benchmark because you can compare yourself to the pattern of metrics.
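As a minimal illustration of that step, with invented figures, the median of a staffing ratio collected from several departments becomes the benchmark against which any one department can compare itself.

```python
from statistics import median

# Invented reports: lawyers on staff and company revenue in U.S. dollars
reports = [
    {"lawyers": 12, "revenue_usd": 3.1e9},
    {"lawyers": 45, "revenue_usd": 9.8e9},
    {"lawyers": 6,  "revenue_usd": 1.2e9},
    {"lawyers": 20, "revenue_usd": 4.0e9},
]

# Lawyers per billion dollars of revenue for each department
ratios = [r["lawyers"] / (r["revenue_usd"] / 1e9) for r in reports]

benchmark = median(ratios)   # the benchmark metric: roughly 4.8
print(round(benchmark, 2), "lawyers per $1 billion of revenue")
```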

 

By contrast, if someone asks law department respondents “Will you increase your office square feet next year?” you can calculate the percentage that reply “Yes” and “No” and “Don’t know” but that finding is not a benchmark metric by my definition.  Nothing existed until you asked the question, tallied the responses, and described the distribution of the responses.  Similarly, “Please rank the following methods for managing outside counsel costs” does not produce what I think of as a metric, let alone a benchmark.  It produces a tally which gives some insight but no comparison.

 

To benchmark your department’s metrics is to see how you stand comparatively, based on metrics about existing and countable things.  To find out distributions of opinions and views is something else.