Spring Release of GC Metrics published; take part now to get the Summer Release!

We sent out the Spring Release last week. It provides benchmark data on staffing and spending from 286 companies. The release shows medians on six fundamental benchmarks, such as total legal spend as a percentage of revenue, along with a range of other results.

If you would like to get a copy of the Spring Release, complete the confidential online survey here (https://novisurvey.net/n/GCM2014.aspx). Aside from some demographic questions such as name, email, and industry, the no-cost survey asks for your 2013 number of lawyers, paralegals, and other staff; internal and external legal spend; and revenue. Participants will receive the Summer Release in August.

Reproducible research regarding legal management surveys – lessons from pharma

Bearing in mind the benefits of more reproducible research regarding legal management, a piece in the Economist (July 25, 2015, at page 8) makes a good point. That short article explains how pharmaceutical companies have not been publishing negative or inconclusive results from clinical trials of their drugs. Without the full results, no one can accurately and comprehensively assess the efficacy of a new drug.

 

Stated differently, someone is not practicing reproducible research if they cherry-pick only the clinical-trial results that show their drug succeeding. It would be akin to a surveyor who asks law firms or law departments a set of questions but then publicizes only the data that puts the surveyor's views, products, or services in a favorable light. In contrast, research that is done with integrity discloses contradictory and unexplained findings as well as favorable ones. Reproducible research implies full disclosure.

Weighting survey responses so that the findings better represent underlying demographics

Surveyors sometimes weight their data to make the findings more representative of some other set of information. This point comes through in an article in the New York Times, July 23, 2015 at 83, regarding political polls. Pollsters may get too few responses from some demographic slice, such as farmers, and want to correct for that imbalance when they present conclusions respecting the entire population. The polling company weights the few farmer respondents more heavily to make up for the imbalance and bring the sample's makeup more in line with reality.

 

How does this transformation of data apply in surveys for the legal industry? Let's assume that we know roughly how many U.S. companies have revenue over $100 million in each major industry. Let's also assume that a benchmark survey of law departments has gathered compensation data regarding the lawyers in the responding law departments.

 

If the participants in the law department survey materially under-represent some industry (the proportions in each industry don't match the proportions that we know to be true), it is not hard to adjust the compensation data. One way would be to count the respondents from the under-represented industries more heavily, by enough to make up the difference. That is what happens when a surveyor weights survey data to present more proportional results.

 

To summarize, you first need some basis for an underlying distribution of data, such as the number of companies above a certain size in each industry. Second, you need a survey data set that you can adjust so that it reflects the proportions of the first data set.
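To make those two steps concrete, here is a minimal Python sketch of the weighting arithmetic; the industry names, company counts, and compensation figures are all invented for the example. Each respondent's weight is the ratio of the industry's share of the population to its share of the sample, and the weighted mean then corrects for the imbalance.

```python
# Hypothetical population: U.S. companies over $100 million revenue, by industry
population = {"Manufacturing": 400, "Tech": 300, "Finance": 300}

# Hypothetical survey respondents: (industry, lawyer compensation)
respondents = [
    ("Manufacturing", 210_000), ("Manufacturing", 190_000),
    ("Tech", 250_000), ("Tech", 270_000), ("Tech", 260_000),
    ("Finance", 300_000),  # Finance is under-represented in this sample
]

pop_total = sum(population.values())
sample_counts = {}
for industry, _ in respondents:
    sample_counts[industry] = sample_counts.get(industry, 0) + 1

# Weight = population share / sample share, so each industry contributes
# in proportion to its real-world prevalence rather than its response rate.
weights = {
    ind: (population[ind] / pop_total) / (sample_counts[ind] / len(respondents))
    for ind in sample_counts
}

weighted_mean = (
    sum(weights[ind] * comp for ind, comp in respondents)
    / sum(weights[ind] for ind, _ in respondents)
)
unweighted_mean = sum(comp for _, comp in respondents) / len(respondents)
print(round(unweighted_mean), round(weighted_mean))  # → 246667 248000
```

Because the lone Finance respondent gets a weight of 1.8 while each over-represented Tech respondent gets 0.6, the weighted mean shifts toward the compensation level of the under-represented industry.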

Nything but trivial – the crucial ubiquity of “N = “ in survey findings

A precept of reproducible research, which calls for survey results that allow readers to understand the methodology and credibility of the findings, is to make generous use of "N = some number". That conventional shorthand for "how many are we talking about" shows up in almost every reproducible-research graphic. Whether in the title of a plot, in the text that relates to it, on the plot itself, or in a footnote, a reader should always be able to learn quickly how many respondents answered each question, how many documents were reviewed, how many law departments had a given benchmark, or whatever pertains to the topic of the plot.

 

The larger the N, the more reliable the averages or medians that result from the data. For example, if the “average base compensation of general counsel” rose 2% from one year to the next, it makes a huge difference whether that change applies to N = 8 [general counsel] or N = 80.  Changes in small numbers of observations have much less credibility than changes in large numbers.
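One way to see why N matters is the standard error of the mean, which shrinks with the square root of N. A small Python sketch, with an invented standard deviation for base compensation:

```python
import math

# Assume base compensation varies with a standard deviation of $40,000 (invented).
sd = 40_000

# Standard error of the mean for the two sample sizes in the text.
se_8 = sd / math.sqrt(8)    # N = 8 general counsel
se_80 = sd / math.sqrt(80)  # N = 80 general counsel

# With ten times the observations, the uncertainty shrinks by sqrt(10), about 3.16.
print(round(se_8), round(se_80), round(se_8 / se_80, 2))  # → 14142 4472 3.16
```

A reported 2% change is therefore far more likely to be noise at N = 8 than at N = 80, which is exactly why the N belongs on every plot.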

Choices on plots that involve flipping axes, using points instead of bars, and axis labels for intervals

We can take one more look at the seminal Winston & Strawn plot, now streamlined and improved as discussed previously. A few graphical design choices deserve comment. We emphasize, however, that graphical design choices are many, which means the permutations and combinations of them are even more numerous. Experience (and some research on how humans perceive and interpret graphs) suggests quite a few well-accepted guidelines, such as simplicity and clarity, but graphical visualization remains in the subjective domain of what feels appropriate to the designer. We could analogize to writing style.

 

A convention in plotting is that the so-called factors run along the x-axis at the bottom and the quantitative values run up the y-axis on the left. With such long axis labels, however, that choice has no appeal here. If we shorten the labels and rotate them, it becomes workable, as the plot below shows.

 

Another choice would have eschewed bars in favor of points.

 

Finally, had there been finer interval numbering on the lower axis, there would have been no need for the obtrusive numbers at the ends. The plot below shows how this would have looked with points, intervals, and short, rotated labels.

[Plot: points with short, rotated labels]
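The blog's own plots were produced in R, but for readers who want to experiment, here is a rough matplotlib equivalent of these choices, with invented short labels and counts standing in for the survey's risk categories:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Invented short labels and counts for illustration only
risks = ["Regulation", "Rogue staff", "Disasters", "Cyber"]
counts = [62, 48, 33, 29]

fig, ax = plt.subplots()
ax.scatter(risks, counts)                    # points instead of bars
ax.tick_params(axis="x", labelrotation=45)   # short, rotated factor labels
ax.set_yticks(range(0, 71, 5))               # finer interval numbering
ax.set_ylabel("Number of respondents selecting the risk")
fig.savefig("points_version.png")
```

With points, the eye reads each value from its position against the finer intervals, so no numeric labels are needed at all.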

Attractive spacing and width of bars on plots; informative labels

Returning once again to the same plot from the Winston & Strawn survey report, but shifting from criticism, we should praise several aspects of the original plot.

[Plot: the original Winston & Strawn chart]

The somewhat-narrow width of the bars makes a more appealing impression than when bars are thick and therefore tightly packed shoulder to shoulder. Compare the version below where thick bars put more ink on the plot, but offer no more insights or clarity.

[Plot: version with thick, tightly packed bars]

Similarly, the spacing between the bars helps a reader take in the message of the plot better than very narrow gaps would. The version above takes away that spacing, although it adds a black frame around each bar to distinguish them individually. This is not an improvement!

 

Third, the labels for each risk element are clearly written and spelled out on the left, vertical axis. An alternative choice could have been placing the text above the bars. The plot below shows labels on top of the bars.

Rplot label over bars

 

Fourth, the plot takes up most of the page and has been placed squarely in the middle of it, and therefore becomes the obvious focus of attention.

Superfluous elements – chart junk – but two useful additions

We revisit the same Winston & Strawn plot, shown here in the improved incarnation from the most recent post. Now, let's take up four more observations.

 

The thick black line on the vertical y-axis adds nothing: it is an example of what is referred to as "chart junk", an element of a plot that adds no useful information but clutters up the plot and makes it that much harder to grasp.

 

Second, neither axis has a label to explain what the axis represents. Labels are generally a good thing so that a plot can stand on its own without explanations in the report text.

 

Third, the plot lacks a title, which also helps make it self-contained. By that term I mean that a reader can understand what the plot has to offer without having to read elsewhere. It is true that the header of the page serves like a plot title, but it is in a different color and font and location than the plot itself. For PowerPoint decks, headers often serve a different purpose than as a surrogate plot title.

 

A final two steps took out ticks and panel borders. The text labels quite adequately match up to the horizontal bars, so the tiny tick marks on the left y-axis can go. And nothing is added by the gray border around the plot, in my opinion. Just the facts, ma'am.

 

Let’s unveil the de-cluttered, self-contained plot!

[Plot: the de-cluttered, self-contained version]
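For anyone who wants to try this kind of clean-up, here is a minimal matplotlib sketch along the same lines (the blog's own version was built in R, and the category names and counts here are invented): it removes the axis lines, ticks, and panel border, keeps the numeric labels at the bar ends, and adds a title and axis label so the plot stands on its own.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Invented categories and counts for illustration only
risks = ["Cyber attack", "Natural disaster", "Rogue employees", "Regulation"]
counts = [29, 33, 48, 62]

fig, ax = plt.subplots()
bars = ax.barh(risks, counts)
ax.bar_label(bars)  # numeric label at the end of each bar

# Strip the chart junk: no thick axis lines, no panel border
for side in ("top", "right", "bottom", "left"):
    ax.spines[side].set_visible(False)
# No tick marks and no redundant x-axis numbering
ax.tick_params(left=False, bottom=False, labelbottom=False)

# Make the plot self-contained with a title and an axis label
ax.set_xlabel("Number of respondents selecting the risk")
ax.set_title("Which risks most concern your company?")
fig.savefig("decluttered.png")
```

The labels at the bar ends carry all the numeric information, so removing the axis scale loses nothing.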

Excessive use of colors in a plot; sorting an axis

Another aspect of the plot that has been discussed previously [Click here for the latest post in this series] should be called out.

Whoever prepared the plot chose to color each bar of the three most-often-selected risks differently. The blue bar represents "geographic locations in which the company operates", a sort-of red bar represents another risk, and a third is yellow. In addition to those color distinctions, the plot also embeds the labels of those three risks in black boxes with white type. Shown below is the plot as it originally appeared.

[Plot: original chart with three colored bars and boxed labels]

Neither of these graphical techniques adds value to the plot or, indeed, makes sense. They make readers work harder to figure them out. Are the choices of colors significant, as in red-yellow-green means something? Is there a linkage between the coloring and the boxing? What do either or both tell us that the length of the bar and the label at its end do not?

To emphasize the three leading risks, this plot could have sorted the risks in decreasing order of selection, as shown below.

[Plot: bars sorted in decreasing order, without color highlighting]

It is conventional to place the largest item at the top and the others in descending order down to the smallest on the bottom. Sorting data by something meaningful makes a clearer point than random coloring and redundant boxing.
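The sorting itself takes one line in most languages. A Python sketch with invented counts:

```python
# Invented selection counts for illustration
counts = {"Rogue employees": 48, "Cyber attack": 29,
          "Regulatory change": 62, "Natural disaster": 33}

# Sort in decreasing order of selection, so the largest item plots at the top
ranked = sorted(counts.items(), key=lambda item: item[1], reverse=True)
for risk, n in ranked:
    print(f"{risk}: {n}")
```

Feeding the categories to the plotting routine in this order, rather than in the arbitrary order they were collected, is what makes the leading risks stand out without any coloring.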

Multiple and superfluous typography used on a plot

We return to the same survey plot and our topic of effective visualization of survey results. To see the previous post that explains the source data and the purpose of this series, click here. The version shown below incorporates the changes recommended previously regarding redundant data and serves as the starting point for the improvements discussed here. Let’s focus on the typography.

[Plot: version with the redundant data removed]

 

A font comes from a font family, such as the familiar Helvetica, Courier, or Times Roman. The face of a font can be normal, italic, bold, upper case, or other formats. Third, within any family and face, the size of a letter, number, or symbol can be small, medium, large, or some specified size. There are other ways to characterize type (such as kerning and left or right alignment), but we will limit ourselves here to the three of family, face, and size. We will use the term "typeset" to summarize family, face, and size.

 

The font on the left-hand y-axis labels is different from the font on the x-axis along the bottom, and both of those fonts differ from the bulky numbers at the ends of the bars. Additionally, on the original plot, but not shown here, there are black rectangles around three of the labels, which also have white type instead of black, so we could say that there are four different typesets employed in this one plot.

 

Compounding the multiplicity of fonts and colors, the typeset comes in at least three sizes.

 

Sometimes the designer of a plot deliberately interjects a different family, face, or size, such as for emphasis, or to bring something important to the reader's attention. But none of the four variations on the original plot conveys any special meaning (although the numbers at the ends of the bars give the gist of the plot and might therefore justify the bold face).

 

To show how one might improve the plot by unifying the typeset, the plot below renders each of the text elements in Helvetica, 12 point, plain, black. Unless there is an informational reason to change fonts, stick with the same set.

[Plot: all text in a single typeset]

Redundant display of data on plots

In this series of blog posts, we will use a survey by the U.S. law firm Winston & Strawn to learn about survey methodology. In 2013 the firm produced a 33-page report based on the survey results entitled “The Winston & Strawn International Business Risk Survey 2013”.  To download a PDF of the report, click here.

The plot in the image below comes from page 18 of the report. The survey asked respondents the question stated in the header and gave them eight choices; this plot presents the results as a graph. Here we will focus on one aspect of that plot: how effectively it presents the number of times respondents selected each of the risk choices.

[Plot: original chart from page 18 of the report]

 

Notice that the plot identifies the number of companies selecting a risk by three methods. One is the horizontal x-axis that ranges from zero on the left to 80 on the right. For example, the "Rogue employees" bar ends just to the left of the 50 marker on the x-axis, so a reader could estimate from that end point that 47-49 respondents chose it.

 

The second method is the numeric label at the end of each bar. “Rogue employees” proclaims a large “48”.

 

Third, light, vertical dotted lines extend upward from each interval on the bottom axis. These vertical "grid lines", as data visualizers call them, are spaced at even intervals of five. If there were a label that explained the intervals, someone could count nine of them from the left, plus a little bit, and estimate that 47-49 respondents chose "Rogue employees".

 

The plot would be less cluttered, less redundant, and more precise if it omitted the superfluous grid lines as well as the unnecessary x-axis. It would leave the numeric labels as the salient, immediately understandable statements of the results.

 

The plot below recreates the original plot using the R programming language. The reproduction does not copy every feature of the original exactly. For example, the multi-line spacing of the left-axis labels does not conform, nor do the width and spacing of the bars, the type fonts, or the black boxes around some left-axis labels. That said, the revised plot improves the chart component discussed above: the redundant representation of the numeric totals.

[Plot: recreated version without redundant data]

 

Lawyer Resistance to Seeking Feedback from Their Clients

by James S. Wilber, Esq., Altman Weil, Inc.

 

As companies continue to pay careful attention to controlling costs, law departments remain under scrutiny. Accordingly, in-house lawyers regularly look for ways to demonstrate value to their clients. One of the easiest and most cost effective ways of doing this is to seek regular feedback from clients.

 

Some companies require support functions, such as law, to conduct annual satisfaction surveys, often in connection with performance evaluations of law department lawyers and staff. In many companies, however, law departments rarely, if ever, seek formal feedback from their clients about whether they are meeting client needs.

 

Lawyer personality data reveal that lawyers generally are averse to seeking feedback. This is not due to a lack of concern for clients, but rather to unique characteristics of the lawyer personality. Personality profile testing shows that most lawyers have particularly low resilience – the quality that determines how well one responds to criticism and rejection. Therefore, it is not surprising that asking for feedback is particularly difficult for lawyers.

 

If your department’s lawyers can overcome this barrier, they will find that merely asking clients for their opinions typically creates an enhanced image of the law department and an improved relationship between the department and its clients. This phenomenon seems to endure for several months to a year following the survey. Conducting satisfaction surveys seems to create a halo effect for the surveyor.

 

A 2002 study conducted by Paul M. Dholakia of Rice University’s Jesse H. Jones Graduate School of Management, and Vicki G. Morwitz of New York University’s Stern School of Business, supports the halo effect theory and concludes that merely conducting satisfaction surveys enhances client loyalty and profitability. [1]

 

Continually assessing and improving the level of service your law department provides will allow it to demonstrate its value to the company and meet the increasingly high demands of clients.

[1] The study, reported in the Journal of Consumer Research in September 2002, was highlighted in the May 2002 Harvard Business Review.