Prior clerkship experience of Supreme Court clerks has changed dramatically in the last 10 and 20 years

David Lat’s tireless efforts to chronicle the hiring of Supreme Court clerks prompted me to look at a trend that’s developed in recent years. It increasingly appears that multiple clerkships are a prerequisite to securing a Supreme Court clerkship. So I looked at the data for this October Term 2023 class, along with comparisons to the credentials of the OT2013 and OT2003 classes. The results were pretty dramatic. (I looked only at the clerks of the active justices: 36 in the more recent terms, and 35 for OT2003, when Chief Justice Rehnquist hired only three clerks instead of the usual four.)

For OT2003, just twenty years ago, 33 clerks came off of one previous court of appeals clerkship, and just two others had multiple clerkships (one of which was not on the federal courts of appeals). In 2013, the number coming off a single prior court of appeals clerkship had dropped to 25. Another nine had two prior clerkships, one of which was not a court of appeals clerkship, and two more had the then-novel profile of two separate court of appeals clerkships. Today, for October Term 2023, just seven of the 36 clerks came from a single prior court of appeals clerkship. Fourteen had two prior clerkships, at least one of which was not on the federal courts of appeals. Eleven had two prior court of appeals clerkships. And four had the novel profile of three prior clerkships.

I’ve lamented that the hoops to jump through for a law school teaching position often involve a series of short-term stints and moves over the course of a few short years. Likewise, I’m not sure this is a particularly welcome development. Admittedly, Supreme Court clerks are a small fraction of career outcomes. But many more law graduates, I think, are chasing similar credentials of serial clerkships even if they do not get a Supreme Court clerkship in the end. I am not sure that it redounds to the benefit of law students, who as fourth- or fifth-year associates have much higher billing rates and expectations, but much less practical experience in the actual practice of law. For judges, I am sure that clerks with experience are beneficial, but in previous eras that role may have been filled by a career clerk. I don’t know what the longer-term ramifications are, but it’s a trend I’m watching.

Law schools say they're "boycotting" the USNWR rankings, but their admissions practices suggest otherwise

Earlier, I pointed out that law schools “boycotting” the USNWR law school rankings really meant that they would not be completing the survey forms circulated to them. Some data, including expenditures per student or law student indebtedness, cannot readily be gathered elsewhere. USNWR responded by modifying its rankings criteria to use only publicly-available data.

I also noted that some schools appeared to still be “participating” in other elements of the rankings. Some, for instance, circulated promotional material to prospective USNWR voting faculty about the achievements of their schools and their faculty.

But I wanted to focus on another mismatch between what law schools are saying and what they are doing. And that’s in admissions.

Yale and Harvard, in their opening salvo, lamented the over-emphasis on the median LSAT and UGPA of incoming students. So, the thought might have gone, we are going to consider admissions based on our own criteria, not dictated by USNWR. As I chronicled a decade ago, USNWR significantly distorts the incentives for law school admissions by driving schools to admit students with either an above-target-median LSAT or an above-target-median UGPA. Higher-caliber students, as measured by the “index score” that is most predictive of law school performance, are not admitted if they fall just below the cusp of both target medians. Lower-caliber students who excel on one of these two measures are admitted. (UPDATE: I added a link and clarified the points made here.)

One might expect other boycotting schools to go their own way on admissions. From UCLA:

The rankings’ reliance on unadjusted undergraduate grade point average as a measure of student quality penalizes students who pursue programs with classes that tend to award lower grades (in STEM fields, for example), regardless of these students’ academic ability or leadership potential.

And from Northwestern:

First, by over-valuing median LSAT and UGPA, it incentivizes law schools to provide scholarships to students at their medians and above rather than to students with the greatest need.

You can find similar statements from Vanderbilt, Fordham, and other schools. But it appears any admissions-related concerns are something of a non sequitur: these schools acknowledge that ABA data already provide median statistics on incoming classes, so USNWR can use those figures whether or not schools participate.

So would boycotting schools simply ignore the consequences of the USNWR formula and instead admit classes less focused on medians? It appears not.

With almost pinpoint precision, you can see that law schools continue to target particular median LSAT and UGPA figures in their admissions. LSD.law, which in various iterations has been a go-to source of self-reported law school admissions information for twenty years, reflects that these law schools, so far, continue to push medians.

Each image is a snapshot of where law school admissions stand today. Each green dot is an acceptance. Each school shown purports to “boycott” the USNWR rankings, which means, in theory, it need not worry about its median LSAT and UGPA. If that were the case, we would expect acceptances to taper gradually from the upper right toward the lower left.

Instead, you can see that each school’s chart splits into four quadrants, with the school strongly disfavoring anyone in the lower-left quadrant, i.e., applicants “below” both targeted medians.

In other words, these law schools are still admitting students principally on the basis of how those students would affect the school’s USNWR ranking.
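
To make the quadrant pattern concrete, here is a minimal sketch in Python of how one might tabulate admit rates by quadrant. The applicant records and the target medians are made up for illustration; they are not drawn from LSD.law or any school’s actual data.

```python
import pandas as pd

# Hypothetical self-reported applicants (illustrative values only).
applicants = pd.DataFrame({
    "lsat": [172, 168, 171, 165, 174, 169, 170, 166],
    "ugpa": [3.92, 3.85, 3.60, 3.95, 3.55, 3.70, 3.88, 3.50],
    "admitted": [True, False, True, True, False, False, True, False],
})

# Assumed target medians for a single school (again, made up).
TARGET_LSAT, TARGET_UGPA = 170, 3.87

# Label each applicant's quadrant relative to the targeted medians.
above_lsat = applicants["lsat"] >= TARGET_LSAT
above_ugpa = applicants["ugpa"] >= TARGET_UGPA
applicants["quadrant"] = (
    above_lsat.map({True: "LSAT at/above", False: "LSAT below"})
    + " & "
    + above_ugpa.map({True: "UGPA at/above", False: "UGPA below"})
)

# If schools are chasing medians, the "below & below" quadrant should show a
# sharply lower admit rate than the other three quadrants.
print(applicants.groupby("quadrant")["admitted"].agg(["mean", "count"]))
```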

Now, many caveats. Of course LSD.law (and Law School Numbers before it) is self-reported and self-selected data. It may not get the schools’ precise medians right, as I’ve outlined in red. But it certainly reflects a wide swath of prospective students, including those who were not admitted, both above and below any median. Some students are admitted below the medians, on the strength of personal statements, for socioeconomic factors, and for a variety of other reasons. But you can see the overwhelming target of admissions remains centered on targeted medians. These medians, of course, could shift by the time the class is enrolled.

But I’ll be watching the schools that purport to “boycott” because USNWR inadequately values what they claim to value in admissions, to see whether they change their own approach to admissions. So far, it looks like they haven’t.

How can we measure the influence of President Biden's court of appeals judges?

Recent media reports have discussed President Joe Biden’s influence on the federal judiciary, including the rapid pace of nominating and confirming federal judges. That pace has become something of a proxy for “influence” or “impact.” It’s true that more judges participating in arguments and voting on panels, particularly judges on the federal courts of appeals, is one way of measuring influence.

But another way to measure influence could be to examine written appellate opinions. And it appears President Biden’s court of appeals judges are publishing opinions (at least, in their names) less frequently than other recent judges.

This is hard to measure comparatively across years, of course. The workloads of a court can change (consider the decline in cases before the Federal Circuit in recent years). The number of filled seats for active judges, and the workload carried by senior judges, can change. Consider, for example, that new appointees to a shorthanded court probably have much more work than new appointees to a court with no vacancies, and a court with many active senior judges may leave less of a workload for new appointees than a court without many such judges. The practices on each circuit vary wildly in terms of how often decisions are issued per curiam or by summary order rather than in the name of a judge. Getting up to speed after a confirmation in the middle of a pandemic (say, summer of 2021) may have looked different than in previous eras. In short, there are myriad reasons for differences.

Regardless of the reason, there may still be changes in output. I dug into the Westlaw database to try to collect some information and make some comparisons. Using the “JU( )” field (and later, the “DIS( )” and “CON( )” fields joined with the “PA( )” field), I looked at the 10 court of appeals judges confirmed in the first year of President Biden’s presidency (really, calendar year 2021). (I excluded now-Justice Ketanji Brown Jackson, who was elevated to the Supreme Court in the middle of this window.) All of these judges were confirmed 14 to 20 months ago. I tried to exclude judges sitting by designation, names shared with other judges, Westlaw’s odd handling of en banc decisions, and so on, with a quick perusal of results and adjustments to the totals.

These 10 Biden-appointed court of appeals judges from 2021 have combined for around 140 majority, named-author opinions (regardless of whether these opinions were “precedential” or “non-precedential”) through mid-February 2023. That’s around 14 per judge. (These 10 judges have also combined for around 31 concurring or dissenting opinions.)

I then went to President Donald Trump’s nominees. They had some similarities: there were 12 court of appeals nominees in 2017, confirmed between 14 and 21 months before February 16, 2019. These 12 judges combined for around 415 majority, named-author opinions. That’s around 34 per judge. (These 12 judges also combined for around 60 concurring or dissenting opinions.)

President Barack Obama had only three federal appellate judges confirmed in his first year. They combined for around 80 majority opinions by mid-February 2011.

As I mentioned, these are rough figures, likely off by a few in one direction or another, as the Westlaw fields are imprecise and I had to cull some data on my own with quick checks. There are probably other ways of looking at the data, including the number of arguments held, the length of time from argument to an issued opinion on a case-by-case basis, and so on. It’s also a very short window so far, and it’s possible that once the years stretch on we’ll see some smoothing out of the trends. But so far, Biden’s court of appeals appointees have been publishing fewer majority opinions in their names. That’s not to say their influence may not be felt elsewhere, particularly in shaping opinions authored by other judges, in per curiam or unsigned opinions, and so on. Nor is it a measure of the influence of any particular opinion; not all opinions are the same, and some have more impact than others. As I mentioned, there are many complexities behind the reasons one could consider. But on this one dimension of frequency, so far, there’s been a different pace.

Modeling and projecting USNWR law school rankings under new methodologies

I mused earlier about the “endgame” for law schools “boycotting” the rankings. It’s apparent now that USNWR will not abandon the rankings, and it’s quite unclear (and I would guess doubtful) that rankings with different metrics will be any less influential to prospective law students or employers than in the past. But the methodology will change. What might that mean for law schools?

I assume before they made a decision to boycott, law schools modeled some potential results from the boycott to determine what effect it may have on the rankings. We have greater clarity now than those schools did before the boycott, and we can model a little bit better some of the potential effects we’ll see in a couple of months. I developed some models to look at (and brace for) the potential upcoming landscape.

I’ve talked with plenty of people at other schools who are privately developing their own models. That’s great for them, but I wanted to give something public facing, and to plant a marker to see how right—or more likely, how wrong!—I am come spring. (And believe me, if I’m wrong, I’ll write about it!)

First, the criteria. USNWR disclosed that it would no longer use privately-collected data and instead rely exclusively on publicly-available data, with the exception of its reputational survey data. (You can see what Dean Paul Caron has aggregated on the topic for more.) It’s not clear whether USNWR will rely on public data other than the ABA data. It’s also not clear whether it will introduce new metrics. It’s given some indications that it will reduce the weight of the reputational survey data, and it will increase the weight of output metrics.

The next step is to “model” those results. (Model is really just a fancy word for educated guess.) I thought about several different ways of doing it before settling on these five (without knowing what the results of the models would entail).

Factor Current weight Model A Model B Model C Model D Model E
Peer score 0.25 0.225 0.225 0.225 0.2 0.15
Lawyer/judge score 0.15 0.125 0.125 0.125 0.1 0.1
UGPA 0.0875 0.09 0.09 0.09 0.1 0.1
LSAT 0.1125 0.12 0.13 0.16 0.17 0.15
Acceptance rate 0.01 0.02 0.03 0.02 0.03 0.03
First-time bar passage 0.03 0.05 0.1 0.07 0.05 0.12
10 month employment rate 0.14 0.3 0.25 0.23 0.25 0.3
Student/faculty ratio 0.02 0.04 0.04 0.03 0.05 0.05
Librarian ratio* 0.01 0.01 0.01 0 0 0
Ultimate bar passage 0 0.02 0 0.05 0.05 0
Other factors 0.19 0 0 0 0 0

Note that at least 19% of the old methodology is being cut out of the new methodology. Note, too, that there’s some diminished weight to, at least, the peer score and the lawyer/judge score. That means these categories have to be made up somewhere else. There’s only so much pie, and a lot of pieces simply have to get bigger. Despite the purported focus on outputs, I think some increased weight on inputs will be inevitable (absent significant new criteria being added).

I added a potential category of “ultimate bar passage” rate in three of the five models. It’s a metric USNWR may adopt, as it is based on publicly-available information and is an output measure, something USNWR has said it intends to rely upon more heavily.

I also added a “librarian ratio” in two of the five models. But it’s a different measure from the existing one. USNWR has indicated it will not use its internal library resources question (which was a kind of proprietary calculation of library resources), but it has not indicated that it would not use an equivalent of the student-faculty ratio for full-time and part-time librarians, so I created that factor in two of the five models.

If I had to guess, I would say more minimal adjustments are most likely, as reflected in Models A and B, but I think there is certainly the possibility of more significant changes, as highlighted in Models C, D, and E.

Crucially, the 10-month employment metric is significantly increased in all models, an assumption that may be wrong, but one that also increases some “responsiveness” (read: volatility) in the rankings, as highlighted below. (I also had to reverse-engineer the weights for the employment metric, which may be in error, and which could change beyond the changes USNWR has presently indicated.) This is one of the most uncertain categories (and the one where I am most likely to have erred in these projections), particularly given how much weight it receives in almost any new model. It is also likely going to be the most significant thing law schools can do to move their rankings year to year. If you are wondering how or why a school moved significantly, it is likely attributable to this factor. Getting every graduate into some kind of employment is crucial for success.

I used last year’s peer and lawyer/judge scores, given how similar they tend to be over the years, but with one wrinkle. On the peer scores, I reduced any publicly “boycotting” schools’ peer score by 0.1. I assume that the refusal to submit peer reputational surveys from the home institution (or, perhaps, the refusal of USNWR to count those surveys) puts the school at a mild disadvantage on this metric. I do not know that it means 0.1 less for every school (and there are other variables every year, of course). I just made it an assumption for the models (which of course may well be wrong!). Last year, 69% of survey recipients responded, so among ~500 respondents, the loss of around 1% of respondents, even if quite favorable to the responding institution, would typically not alter the survey average. But as more respondents remove themselves (at least 14% have suggested publicly they will, with others perhaps privately doing so), each respondent’s importance increases. It’s not clear how USNWR will handle the reduced response rate. This adds just enough volatility, in my judgment, to justify the small downgrade.
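
To illustrate that arithmetic with made-up numbers (and with the simplifying assumption that the published score is a plain mean of 1-to-5 responses), here is a quick sketch of how much the average moves when a school’s own favorable respondents drop out:

```python
def peer_average_after_dropouts(n_respondents, current_mean, n_dropouts, dropout_rating):
    """Recompute a 1-5 peer survey average after some respondents stop participating.

    Assumes the published average is a simple mean of all responses.
    """
    total = n_respondents * current_mean
    remaining = n_respondents - n_dropouts
    return (total - n_dropouts * dropout_rating) / remaining

# ~1% of ~500 respondents drop out, even if each rated the school a 5:
# the average barely moves (well under the 0.1 reporting increment).
print(round(peer_average_after_dropouts(500, 3.0, 5, 5.0), 3))   # ~2.980

# If ~14% drop out (say, 70 respondents who averaged a favorable 4.0 rating),
# the change now exceeds the 0.1 increment USNWR reports.
print(round(peer_average_after_dropouts(500, 3.0, 70, 4.0), 3))  # ~2.837
```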

Next, I ran all the data from the schools, scaled the figures, weighted them, and ranked them. These are five different models, and they led to five different sets of rankings (unsurprisingly). I then chose the median ranking of each school among the five models. (So the median for any one school could come from any one of the models.)
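
For readers who want to see the mechanics, here is a minimal sketch of the scale-weight-rank-median approach in Python. The school data are hypothetical, only a handful of factors from the table above are included (so the weights do not sum to one), and z-score scaling is an assumption about the scaling step; this is an approximation of my process, not USNWR’s actual formula.

```python
import pandas as pd

# Hypothetical inputs for three schools (illustrative, not real ABA data).
schools = pd.DataFrame({
    "peer": [4.8, 4.2, 3.1],
    "lsat": [174, 170, 163],
    "ugpa": [3.94, 3.88, 3.55],
    "employment_10mo": [0.96, 0.93, 0.82],
    "bar_first_time": [0.97, 0.92, 0.80],
}, index=["School A", "School B", "School C"])

# A subset of the weights from two of the five models in the table above.
models = {
    "Model A": {"peer": 0.225, "lsat": 0.12, "ugpa": 0.09,
                "employment_10mo": 0.30, "bar_first_time": 0.05},
    "Model E": {"peer": 0.15, "lsat": 0.15, "ugpa": 0.10,
                "employment_10mo": 0.30, "bar_first_time": 0.12},
}

# Scale each factor to a z-score so different units are comparable.
z = (schools - schools.mean()) / schools.std()

# Weight and rank under each model (rank 1 = best composite score).
ranks = pd.DataFrame({
    name: sum(z[col] * w for col, w in weights.items()).rank(ascending=False)
    for name, weights in models.items()
})

# The projection is the median rank for each school across the models.
ranks["median_rank"] = ranks.median(axis=1)
print(ranks)
```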

Let me add one note. Across these five models, there was very little variance, despite some significant differences in the weighting. Why is that? Well, many of these items are highly correlated with one another, so adjusting the weights actually changes relatively little. The lower you go, however, the more compressed the rankings are, and the more volatile even small changes can be.

You’ll also note little change for most schools, perhaps no more than a typical year’s ups and downs. Unless the factors removed or given reduced weight worked as a group to favor or disfavor a school, we aren’t likely to see much change.

One last step was to offer a potential high-low range among the rankings. For each of the five models, I gave each school a rank one step up and one step down, to suggest some degree of uncertainty in how USNWR calculates, for instance, the 10-month employment positions, or diploma privilege admission for bar passage, among other things. That gave me 15 potential rankings—a low, projected, and high among each of the five models. I took the lowest of the low and the highest of the high for a projected range. With high degrees of uncertainty, this range is an important caveat.
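
Continuing the sketch above (same hypothetical `ranks` table), the range step amounts to taking each model’s rank one step better and one step worse, then keeping the best of the lows and the worst of the highs:

```python
model_cols = ["Model A", "Model E"]  # one column per model in the sketch above

# Each model contributes its projected rank plus one step up and one step down;
# the range is the best of the lows and the worst of the highs across models.
ranks["range_low"] = (ranks[model_cols].min(axis=1) - 1).clip(lower=1)
ranks["range_high"] = ranks[model_cols].max(axis=1) + 1
print(ranks[["median_rank", "range_low", "range_high"]])
```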

Below are my projections. (Again, it’s apparent a lot of schools are doing this privately, so I’ll just do one publicly for all to see, share, and, of course, critique. I accidentally switched a couple of schools in a recent preview of this ranking, so I’ve tried to double-check as often as I can to ensure the columns are accurate!) Schools should not be overly worried or joyful about these rankings (or any rankings), but they should manage expectations about how they might handle any changes with appropriate stakeholders in the months ahead. Because these are projected “median” rankings, they will not add up cleanly into a true rank order relative to one another (e.g., two schools are projected at 10, and one school is projected at 11).


School Median projected rank Projected range (low, high) Current rank
Yale 1 1 3 1
Stanford 1 1 3 2
Chicago 3 1 4 3
Harvard 4 3 6 4
Penn 5 3 8 6
Columbia 6 4 8 4
NYU 6 4 8 7
Virginia 8 5 9 8
Berkeley 9 8 12 9
Duke 10 8 13 11
Northwestern 10 8 13 13
Michigan 11 9 14 10
Cornell 13 10 14 12
UCLA 14 12 15 15
Georgetown 15 14 17 14
Vanderbilt 16 14 18 17
Washington Univ. 17 15 19 16
Texas 18 16 20 17
USC 19 15 20 20
Minnesota 20 19 23 21
Florida 21 19 23 21
Boston Univ. 22 20 25 17
Georgia 22 19 25 29
North Carolina 24 21 27 23
Notre Dame 24 21 27 25
BYU 26 23 31 23
Emory 26 22 33 30
Ohio State 27 24 30 30
George Washington 28 25 34 25
Wake Forest 29 24 33 37
Arizona State 30 26 33 30
Boston College 31 26 34 37
Fordham 33 30 37 37
Irvine 33 30 38 37
Alabama 34 30 37 25
Iowa 36 33 42 28
George Mason 36 30 42 30
Texas A&M 37 30 42 46
Illinois 38 33 42 35
Washington & Lee 39 34 43 35
Utah 39 34 44 37
Wisconsin 42 37 51 43
William & Mary 43 39 47 30
Pepperdine 43 41 49 52
Villanova 43 38 47 56
Indiana-Bloomington 46 41 51 43
Florida State 46 42 53 47
Davis 48 44 58 37
Arizona 48 46 55 45
Maryland 48 44 53 47
Washington 48 44 53 49
SMU 48 43 53 58
Baylor 53 46 56 58
Kansas 53 46 58 67
Colorado 55 47 61 49
Cardozo 55 48 61 52
Temple 56 53 58 63
UCSF (Hastings) 58 56 72 51
Richmond 58 55 63 52
Wayne State 58 51 64 58
Tulane 60 56 69 55
Tennessee 61 54 66 56
Oklahoma 61 58 67 88
Loyola-Los Angeles 62 58 69 67
Houston 64 60 69 58
Miami 66 61 69 73
South Carolina 66 61 72 84
San Diego 68 64 74 64
Northeastern 68 61 74 73
Seton Hall 68 61 77 73
Connecticut 69 66 78 64
Florida International 69 61 80 98
Missouri 73 67 78 67
Drexel 73 62 83 78
Georgia State 73 66 80 78
St. John's 73 68 83 84
Oregon 76 68 83 67
Case Western 76 68 85 78
Penn State Law 77 72 85 64
Kentucky 77 68 85 67
American 77 72 87 73
Denver 80 72 85 78
Marquette 82 72 85 105
Texas Tech 84 74 92 105
Cincinnati 85 81 92 88
Lewis & Clark 85 81 92 88
Penn State-Dickinson 86 83 92 58
UNLV 86 83 93 67
Loyola-Chicago 86 83 92 73
Pitt 86 83 92 78
Stetson 86 78 93 111
Chicago-Kent 92 85 93 94
Nebraska 93 91 99 78
Rutgers 94 91 102 86
Drake 94 91 99 111
St. Louis 95 93 99 98
St. Thomas (Minnesota) 96 92 105 127
West Virginia 97 93 108 118
Michigan State 98 94 106 91
Louisville 99 94 104 94

Although I’ve gone through “winners” and “losers” in previous posts on individual metrics, not all of those shake out the same way in the final rankings, which include far more than just the isolated categories I looked at earlier. But some obvious winners emerge: Georgia, Texas A&M, and Villanova each see significant improvement, regardless of which version of the methodology is used.

You may well ask, “Why is X over Y?” or “How can A be ranked at B?” The answer is, I gave you the model weights and my assumptions, and this is the result it puts out. It’s all publicly available data, and, again, many schools are privately doing this already and know their areas of strengths and weaknesses as to other law schools.

At the end of the day, we’ll see how wrong I am.

But I think it’s also at least some sign that the shakeup, for most schools, may not be nearly as dramatic as one may suppose.

UPDATE 1/18/2023: I had originally thought I made an error with the calculations I used on the bar passage rates, but some schools have created some inconsistencies that I had to go back and check. I was right the first time!

Which law schools are affected the most by the USNWR dropping at-graduation employment rates?

Following up on my post, “Who's likely to benefit from the new USNWR law school rankings formula?,” I’ve continued to look at other changes to the rankings. USNWR is now only using publicly-available data. I went through some of those changes, but I wanted to look at another one: the absence of at-graduation employment outcomes.

USNWR does not publicly disclose precisely how it weighs the various employment metrics, but it does offer some relative cues, and it also says it gives “full weight” to full-time, long-term, bar passage-required or J.D. advantage jobs. How a school places in that category is highly correlated with how it places in the overall employment category. Now, changes to this category are coming, as I noted (i.e., USNWR will give “full weight” to school-funded positions and to students pursuing graduate degrees).

Setting aside that change for the moment, USNWR also bases 4% of its ranking on a school’s at-graduation employment rate. This metric tends to favor the more “elite” law schools that place a significant number of graduates into judicial clerkships or large law firms, because those employers tend to hire before a 3L has graduated. That, however, is not data collected by the ABA, which collects only 10-month employment statistics.

I looked at “elite” employment outcomes for students 10 months after graduation, and compared them to the overall at-graduation employment rate reported to USNWR. (“Elite” being placement in private practice law firms with 101 or more attorneys, and in federal judicial clerkships.) In this first chart, you can see a pretty good relationship between the at-graduation rates and the elite placement rates. (You can also see a number of schools that do not report their at-graduation rate.)

Now here’s the chart for the relationship between those same jobs and the 10-month employment rate. As you see, overall employment rates rise significantly among the schools with the least “elite” employment outcomes. It means that the shift from at-graduation to 10-month may well favor placement into public interest, government, and smaller law firm jobs compared to how those positions have been handled in the past.

On to the change in methodology. I ran the numbers from last year’s figures to see what would happen if the 10-month employment rate were weighted at 18% instead of its present 14% and the at-graduation employment rate were abolished. I only used the top-line full-weight “employment” figures, so those are less precise than trying to use the proprietary blend USNWR uses for its actual ranking; but I did standardize each score and look at where it fell. While imprecise, this should give a “band” of the schools most likely to over-perform and under-perform based on this change alone. It should be noted that many schools do not presently share at-graduation employment statistics with USNWR, and probably all of them would be better off under this change, to some degree or another.
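
Before the lists, here is a minimal sketch of the kind of comparison described above, with hypothetical rates. It treats the old mix as 14% on the 10-month rate plus the 4% at-graduation weight mentioned earlier, and the new mix as 18% on the 10-month rate alone; the real USNWR blend is proprietary, so this is only an approximation.

```python
import pandas as pd

# Hypothetical employment rates (fractions), not the schools' actual figures.
df = pd.DataFrame({
    "at_grad": [0.10, 0.47, 0.89, 0.34],
    "ten_month": [0.79, 0.94, 0.89, 0.48],
}, index=["School W", "School X", "School Y", "School Z"])

# Standardize each measure across schools (z-scores).
z = (df - df.mean()) / df.std()

# Old mix: 14% on the 10-month rate plus 4% on the at-graduation rate.
old_contribution = 0.14 * z["ten_month"] + 0.04 * z["at_grad"]
# New mix: 18% on the 10-month rate, at-graduation dropped.
new_contribution = 0.18 * z["ten_month"]

# Positive = likely to benefit from the change, negative = likely to be hurt.
df["shift"] = new_contribution - old_contribution
print(df.sort_values("shift", ascending=False))
```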

SCHOOLS LIKELY TO BENEFIT

At-graduation rate v. 10-month employment rate

Elon 10.2%, 78.7%

Dayton 30.0%, 87.1%

Willamette 30.6%, 85.2%

Texas A&M 46.9%, 93.8%

Gonzaga 31.7%, 83.7%

Regent 31.2%, 83.1%

Houston 31.9%, 81.9%

Arkansas 39.3%, 86.6%

Northern Illinois 26.3%, 77.5%

Samford 31.4%, 80.7%

Arkansas-Little Rock 7.1%, 64.6%

DePaul 32.3%, 79.3%

Campbell 33.6%, 78.6%

North Dakota 29.9%, 76.1%

Idaho 30.6%, 76.5%

Seattle 32.4%, 76.9%

Liberty 24.6%, 71.9%

LSU 41.9%, 82.0%

Oklahoma 30.2%, 74.5%

Belmont 36.0%, 78.0%

These tend to be schools that do not place an overwhelming number of students into large law firms or judicial clerkships, but that do have a fairly strong 10-month employment rate relative to their peers. Interestingly, there are not any California law schools on the list, a cohort I had assumed might benefit most from the state’s difficult bar examination and perhaps a higher “wait and see” approach from prospective employers.

Now, to schools more likely to be adversely affected.

SCHOOLS LIKELY TO BE ADVERSELY AFFECTED

At-graduation rate v. 10-month employment rate

Massachusetts-Dartmouth 33.9%, 47.5%

Yale 89.2%, 89.2%

Stanford 88.5%, 89.0%

BYU 82.8%, 85.9%

Northwestern 87.9%, 89.5%

CUNY 36.1%, 56.5%

Loyola-New Orleans 52.1%, 66.9%

Vanderbilt 82.2%, 86.1%

Georgetown 83.1%, 86.8%

NYU 86.6%, 89.5%

Berkeley 86.7%, 90.0%

Chicago 94.6%, 95.1%

Columbia 95.3%, 95.6%

USC 76.2%, 83.6%

Virginia 92.7%, 94.3%

Cornell 90.3%, 92.8%

Montana 81.2%, 87.0%

Irvine 58.7%, 72.7%

Connecticut 58.6%, 72.9%

Harvard 88.1%, 91.8%

Recall, of course, that I am looking at the change on this one metric alone. And recall that because the schools’ data are standardized in each category, those likely to gain or lose may look a little different than one might expect from the raw numbers alone. But the adversely affected group is a mix of schools that have a very high at-graduation employment rate and receive a significant boost relative to their peers under the current formula, and schools that are fairly low in both categories but fare comparatively better on the at-graduation measure than on the 10-month measure.

There are many other changes that could help or adversely affect other schools. Note, for instance, that I suggested in an earlier post that BYU could gain significantly in some other categories; here, it appears they could be adversely affected more. Texas A&M, to name another, performs well here, as it did in other places. How much weight USNWR gives to any change matters greatly.

But I think this highlights just how uncertain many changes are in the upcoming rankings. As I pick off different categories, there are schools likely to change their performance in each category. How those shake out in the end—whether they tend to be beneficial or not—remains to be seen.

By knocking off expenditure metrics and devaluing peer reputation scores in the new USNWR formula, did law schools just kill the faculty's golden goose?

As Aesop tells it, there was a goose that laid golden eggs. The greedy farmer saw the goose and thought there must be more gold inside it. The farmer killed the goose and found nothing special inside, and in doing so lost the ability to gather any more golden eggs.

It may not be the same story with the USNWR boycott and subsequent rankings changes. Law schools may well have attacked the goose thinking it was a wolf. But upon its demise, it may well be that law schools have permanently lost one of their most significant bargaining chips with central universities in trying to secure more funding for the law school.

Let me at the outset point out that I’ve long been critical of many aspects of the USNWR rankings, including the expenditure data. It has been opaque, and it has fueled a kind of arms race for schools to figure out which accounting tricks they can use to raise their expenditure figures. And let me add that eliminating it is in many respects a good thing, because the costs often fell on student loan borrowers through tuition hikes. So the analysis below is a small violin for many, indeed!

But a sober look at the change is in order. I posited yesterday about a potential effect of eliminating the expenditures-per-student metric:

By the way, it’s worth considering a new and different incentive for law schools situated within universities right now. Law schools could presently make the case to central administration that high spending on resources, including on law professor salaries, was essential to keeping one’s place in the rankings. No longer. It’s worth considering what financial incentive this may have on university budgets in the years ahead, and the allocation of resources.

From some offline and private conversations, this factor has been one of the most eye-opening to the law professoriate.

In the past, law schools could advocate for more money by pointing to this metric. “Spend more money on us, and we rise in the rankings.” Direct expenditures per student—including law professor salaries—were 9% of the overall rankings in the most recent formula. They were also one of the biggest sources of disparities among schools, which also meant that increases in spending could have higher benefits than increases in other categories. It was a source for naming gifts, for endowment outlays, for capital campaigns. It was a way of securing more spending than other units at the university.

And indirectly, the 40% of the formula for reputation surveys, including 25% for the peer survey and 15% for the lawyer/judge survey, was a tremendous part of the formula, too. Schools could point to this factor to say, “We need a great faculty with a public and national reputation, so let us hire more people or pay more to retain them.” Yes, the “value” proposition here was more indirect, but law faculty rating other law schools may well have been most inclined to vote for, well, the faculties they thought were best.

Now, the expenditure data is gone, completely. And peer surveys will be diminished to some degree, a degree only known in March.

Some increase in the measurement of outputs, including bar passage data and employment outcomes, will replace it.

For law faculty specifically, and for law schools generally, this is a fairly dramatic turn of events.

To go to a central university administration now and say, “We need more money,” the answer to the “why” just became much more complicated. The easy answer was, “Well, we need it for the rankings, because you want us to be a school rated in the top X of the USNWR rankings.” That’s gone now. Or, at the very least, diminished significantly, and the case can only be made, at best, indirectly.

The conversation will look more like, “Well, if you’re valued on bar passage and employment, what are you doing about those?”

A couple of years ago, I had these long thoughts on the hollowness of law school rankings. For schools that lack confidence in their institution and lack the vision to articulate the value of the institution without reference to rankings, rankings provided easy external validation. They have also provided easy justification for these kinds of asks over the years.

Those easy days are over. Funding requests will need to look very different in a very short period of time.

Are there other things in the USNWR rankings that law schools can point to in order to justify a specific investment in law faculty? Well, one such measure may have been citation metrics, about which I had some tentative but potentially positive things to say when USNWR considered them. But law schools mounted a pressure campaign to nix that idea, too.

At the end of the day, then, the rankings formula will have very little to say about the quality of a law school’s faculty or the school’s financial investment in its faculty. An indirect case remains, of course, including through a diminished peer reputation score. Faculty do contribute to bar passage and to employment outcomes. And there will still be a student-faculty ratio.

But I think the financial case for law schools may well look very different in the very near future. This will be almost impossible to measure, and the anecdotes coming from these changes may well be wild and unpredictable. It’s also contingent on other USNWR changes, of course. But it’s a question I’ll be trying to watch closely over the next decade.

Who's likely to benefit from the new USNWR law school rankings formula?

Melissa Korn at the Wall Street Journal dropped the news today that USNWR plans on changing its formula for the law school rankings:

In a letter sent Monday to deans of the 188 law schools it currently ranks, U.S. News said it would give less weight in its next release to reputational surveys completed by deans, faculty, lawyers and judges and won’t take into account per-student expenditures that favor the wealthiest schools. The new ranking also will count graduates with school-funded public-interest legal fellowships or who go on to additional graduate programs the same as they would other employed graduates.

This is a remarkable sea change. As I recently pointed out, I did not anticipate much of a match between the boycotting law schools’ tactics and complaints and the ultimate USNWR response. I pointed out earlier that many of the complaints were about information that was already publicly available. And sure enough, the per-student expenditures, which were not the subject of complaints but were the thing USNWR could not independently collect without voluntary participation from schools, are the first on the chopping block.

So which schools might it benefit or adversely affect the most? Let’s take a look at these three categories, and one other variable I’ll mention at the end.

1. Reduced weight to reputational survey data. Reputational surveys get 40% of the overall weight in the rankings—a lot. That’s 25% for the “peer” survey (i.e., other law school survey respondents), and 15% from the “lawyer/judge” survey.

Let’s start with the “peer” survey. Among the current overall “top 50” schools, here are the schools that would benefit the most from this change, i.e., those whose peer score has lagged their overall score. (And this is just a raw comparison of rank v. rank; there are more nuanced issues dealing with the weighted Z-scores, scaling, and the like for another day. And, of course, differences can be exaggerated the farther down the rankings one goes, a reason I also confined this to the “top 50” for now.)
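
The comparison behind the lists that follow is just a difference in ranks. Here is a minimal sketch with made-up rank pairs in the same survey-rank-versus-overall-rank form as the lists below:

```python
import pandas as pd

# Made-up rank pairs (peer-survey rank v. overall rank); not the real figures.
df = pd.DataFrame({
    "peer_rank": [60, 45, 22, 25],
    "overall_rank": [32, 24, 38, 36],
}, index=["School P", "School Q", "School R", "School S"])

# A school whose peer rank lags (is numerically worse than) its overall rank
# stands to gain if the peer survey's weight shrinks; the reverse stands to lose.
df["gap"] = df["peer_rank"] - df["overall_rank"]
print(df.sort_values("gap", ascending=False))  # biggest potential winners first
```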

POTENTIAL WINNERS, PEER CHANGE

George Mason (65 v. 30)

BYU (52 v. 23)

Alabama (36 v. 25)

Florida (31 v. 21)

Utah (47 v. 37)

Wake Forest (47 v. 37)

Texas A&M (56 v. 46)

Georgia (36 v. 29)

How about the other side? That is, the schools that would be adversely affected the most from the change? Again, focusing just on the existing USNWR “top 50” for now:

POTENTIAL LOSERS, PEER CHANGE:

UC-Irvine (20 v. 37)

Washington (36 v. 49)

Colorado (36 v. 49)

UC-Davis (24 v. 37)

Wisconsin (31 v. 43)

Boston College (27 v. 37)

Emory (20 v. 30)

There are a few others in the middle of some interest, because near the top smaller variations matter more. NYU (3 v. 7) and Georgetown (11 v. 14) are harmed the most.

Now, the degree to which this benefits or harms a school entirely depends, of course, on how much USNWR chooses to reduce the weight of the category.

For schools presently outside the “top 50,” schools that stand to gain the most include Wayne State, Baylor, Penn State-Dickinson, Tennessee, and Penn State-University Park. Schools that stand to be harmed the most include Santa Clara, Howard, Brooklyn, Rutgers, Denver, Georgia State, American, and Hastings.

Now, over to the “lawyer/judge” survey. It’s a smaller percentage, and, again, the effect depends on how far the change in weight goes. For those who stand to gain the most:

POTENTIAL WINNERS, LAWYER/JUDGE CHANGE:

Texas A&M (90 v. 46)

Arizona (77 v. 45)

George Mason (54 v. 30)

BYU (43 v. 23)

Arizona State (49 v. 30)

Alabama (43 v. 25)

Utah (54 v. 37)

Maryland (64 v. 47)

Boston University (28 v. 17)

And who are likely to be adversely affected the most:

POTENTIAL LOSERS, LAWYER/JUDGE CHANGE:

Boston College (24 v. 37)

Washington & Lee (24 v. 35)

Washington (39 v. 49)

Wisconsin (33 v. 43)

William & Mary (20 v. 30)

Emory (20 v. 30)

It should be noted, schools that appear on both lists in the best/worst categories may have much more to be happy/unhappy about.

Outside the top 50, schools most likely to benefit include UNLV, Wayne State, Florida International, Hawaii, Georgia State, Penn State-University Park, and Arkansas. Schools most likely to be harmed include Howard, Oklahoma, Miami, Michigan State, Hastings, Lewis & Clark, Pittsburgh, and Case Western.

2. Eliminating per-student expenditures. Again, this is not publicly-available data, but it doesn’t take much effort to realize that the wealthiest schools, typically (but not always!) private schools, tend to be harmed the most by this change. Public schools (but not all!) are likely to benefit most. It was by far the biggest differentiator among many schools, giving schools like Yale and Harvard nearly insurmountable leads. Of course, it was also notoriously opaque and subject to manipulation.

If you want to consider the schools most adversely affected, it would be also useful to look at private law schools that have risen sharply in the rankings in recent years, or law schools that have had recent naming gifts or building/capital expenditures that allow an influx of spending.

By the way, it’s worth considering a new and different incentive for law schools situated within universities right now. Law schools could presently make the case to central administration that high spending on resources, including on law professor salaries, was essential to keeping one’s place in the rankings. No longer. It’s worth considering what financial incentive this may have on university budgets in the years ahead, and the allocation of resources.

3. Counting graduates with school-funded public-interest legal fellowships or who go on to additional graduate programs the same as they would other employed graduates. A couple of caveats. First, schools do not have to report whether the full-time, long-term, bar passage-required or JD-advantage jobs they subsidize are “public interest” or not, although one could suspect they mostly are. These categories are also very small overall. Among the 35,713 graduates in 2022, 427 were pursuing graduate degrees (1.1%), 276 had school-funded bar passage-required jobs (0.7%), and 56 (0.2%) had school-funded JD-advantage jobs. The change will take pursuing a graduate degree from a lesser weight to full weight, and school-funded positions from a lesser weight to full weight. (This is the only change really directly responsive to some of the elite schools’ earliest complaints.)

But those positions are not all equitably distributed. Who has the most in these three categories, which I’m lumping together for present purposes:

Yale 17%

Stanford 8.7%

District of Columbia 8.3%

San Diego 8.3%

South Dakota 7.7%

BYU 7.6%

Berkeley 7.0%

UC-Irvine 6.6%

Harvard 6.2%

Penn 6.2%

It’s worth considering whether the totals in these positions will climb in future years as the incentives have changed.

4. What fills the gap. USNWR has just announced it would eliminate expenditure data (per-student expenditures are 9%, and “indirect” expenditures another 1%, if those are also on the chopping block; as to library resources, it remains unclear for now). It also announced it would give reduced weight to the reputational survey categories, which total 40%.

How USNWR backfills its formula to get to 100% will matter tremendously to schools. If you are doing well in all other areas of the rankings, maybe not so much. But if you are a school with relatively poor admissions medians and strong employment outcomes (a great value for students, to be frank!), you benefit much more if the new formula gives more weight to the employment outcomes. And if you are a school with really strong admissions medians but relatively poor employment outcomes, you benefit much more if the new formula gives more weight to the admissions statistics. Time will tell. UPDATE: USNWR has disclosed that it intends to give “increased weight on outcome measures.”

In short, there’s much uncertainty about how it will affect many law schools. I feel fairly confident that a handful of the schools identified above as winners in several categories, including Alabama, BYU, Georgia, and Texas A&M, will benefit significantly in the end, but one never knows for sure. It also has the potential to disrupt some of the more “entrenched” schools from their positions, as the more “legacy”-oriented factors, including spending and the echo chamber of reputational surveys, will receive less value. Law schools must increasingly face the value proposition for students (e.g., lower debt, better employment outcomes), with some other potential factors in the mix, in the years ahead.

UPDATE: It’s worth adding that USNWR has indicated, “We will rank law schools in the upcoming rankings using publicly available data that law schools annually make available as required by the American Bar Association whether or not schools respond to our annual survey.” That suggests other data, including the “employed at graduation” statistics and the “indebtedness at graduation” statistics, at the very least, would also disappear.

Annual Statement, 2022

Site disclosures

Total operating cost: $192

Total content acquisition costs: $0

Total site visits: 58,935 (down 10% from 65,305 in 2021)

Total unique visitors: 51,745 (-7% over 2021)

Total pageviews: 70,822 (-12% over 2021)

Top referrers:
Twitter (4396)
Revolver.news (4240)
Leiter’s Law School Reports (1494)
Instapundit (760)
Reddit (518)
TaxProf Blog (469)
How Appealing (386)
David Lat’s Substack (195)

Most popular content (by pageviews):
What does it mean to “render unto Caesar”? (May 3, 2020) (11,606)
Ranking the most liberal and conservative law firms among the top 140, 2021 edition (November 8, 2021) (9426)
Federal judges have already begun to drift away from hiring Yale Law clerks (March 19, 2022) (8911)
Ranking the most liberal and conservative law firms (July 16, 2013) (3143)
Some dramatic swings as USNWR introduces new bar exam metric (March 28, 2022) (3107)
California’s “baby bar” is not harder than the main bar exam (May 28, 2021) (1308)

I have omitted "most popular search results" (99% of search results not disclosed by search engine, very few common searches in 2022).

Sponsored content: none

Revenue generated: none

Disclosure statement

Platform: Squarespace

Privacy disclosures

External trackers: one (Google Analytics)

Individuals with internal access to site at any time in 2022: one (Derek Muller)

*Over the course of a year, various spam bots may begin to visit the site at a high rate. As they did so, I added them to a referral exclusion list, but their initial visits are not disaggregated from the overall totals. These sites are also excluded from the top referrers list. Additionally, all visits from my own computers are excluded.

What does the endgame look like for law schools refusing to participate in USNWR rankings?

Whether it’s Ralph Waldo Emerson or The Wire, the sentiment in the expression, “If you come at the king, you best not miss” is a memorable one. That is, if you seek to overthrow the one in charge of something, you must truly overthrow it. If you don’t, and merely wound or annoy, well, perhaps all you’ve done is make that person angry at you, and perhaps you’re in a worse place than when you began.

I’ve been puzzling over this sentiment over the last week as Yale and Harvard (and now a handful of other elite schools in tow) announce they will not “participate” in the USNWR rankings in the future.

The puzzle is this: so what? Or, what’s the endgame here?


To start, USNWR has announced it will continue to rank law schools. Not much of a surprise here. And, as I mentioned, much data USNWR uses is available from the American Bar Association or from its own internal collection.

Law schools refusing to participate, then, may do one of two things. First, they may “delegitimize” the rankings by refusing to participate and hope that prospective law students, employers, and others take note. A related form of “delegitimization” is asterisks beside “non-participating” schools where USNWR imputes data that it cannot otherwise obtain publicly, suggesting that the rankings are “tainted,” at least as far as these non-participating law schools are concerned.

I think there’s little likelihood of this happening because of the second reason, which I’ll get to in a moment. As more and more schools refuse to participate, I think the sense of the rankings being “tainted” is, well, less and less. Everyone’s doing the same thing (well, not everyone, and more on this a little later). The incentive, then, is for USNWR to treat everyone the same.

So, second, persuade USNWR to change its formula. As I mentioned in the original Yale and Harvard post, their three concerns were employment (publicly available), admissions (publicly available), and debt data. So the only area with any real leverage is debt data. But the Department of Education does disclose a version of debt metrics for recent graduates, albeit one a little messier to compile and calculate. It’s possible, then, that none of these three demanded areas would be subject to any material change if law schools simply stopped reporting them.

Instead, it’s the expenditure data. That is, as my original post noted, the most opaque measure is the one that may ultimately get dropped: if USNWR chooses to go forward with the rankings, it would need to use only publicly-available data. That may mean the expenditure data drops out.

Ironically, that’s precisely where Yale and Harvard (and many other early boycotters) excel the most. They have the costliest law schools and are buoyed in the rankings by those high costs.

So, will the “boycott” redound to the boycotters’ benefit? Perhaps not, if the direction is toward more transparent data.

Paul Caron has blogged about another interesting feature. Many of the schools that have joined the early boycott have fallen in USNWR in recent years, perhaps suggesting a desire to shake up the rankings. And many other schools that have publicly come out against the boycott (or that have been noticeably silent) have had advantageous boosts in recent years, perhaps suggesting a desire to “lock in” the present ranking. So there are different incentives for those who want to change the rankings (by boycotting, forcing the hand) and those who do not (by maintaining the status quo).

Another brief point. While it’s mostly elite law schools that have “boycotted” so far, a few others have joined, including those whose “ranking” is more marginal and those who are the sole flagship law school in a state. In another sense, these schools also have little to “lose,” if you will, much like the most elite schools whose reputations are firmly cemented in the existing hierarchical structure. A few schools (including a couple of HBCUs) have not given USNWR data for many years (to no particular public praise, for what it’s worth). Schools with a more marginal ranking see little value in trying to stay at, say, #110 or #124. And sole flagship law schools often cater to a different market of prospective students and employers.

Instead, we see a number of schools not boycotting that are generally, perhaps, “indistinguishable” to the prospective student or the employer, and that need some understanding of what the school’s “peers” may be as a reference point. USNWR offers this crudely and imperfectly, but it certainly adds this value in some ways.

Finally, what “boycotting” looks like. I openly asked whether schools “boycotting” intended to “boycott” the surveys sent around to law schools. It’s apparent the answer is no. “Boycotting” law schools submitted promotional material to prospective USNWR reputation survey voters (including me) even after announcing the boycott. So it’s a partial boycott, one in which schools intend to maintain as high a ranking as possible while also being recalcitrant in handing over data.