One-point increase in student loan rates could cost new law school graduates tens of thousands of dollars in added debt

CNBC reports the new student loan interest rate figures, and they are pretty dire for higher education in general and law schools in particular:

For graduate students, loans will probably come with an 8% interest rate, compared with 7% now, he said.

Plus loans for graduate students and parents may have a 9% interest rate, an increase from 8%.

From a simple student loan calculator, we can make some estimates on debt and repayment. $150,000 in student loans ($50,000 per year) at 7% interest results in around $90,000 in total interest. That jumps to $105,000 in total interest if the rate is 8%. That’s an extra $15,000 in interest (and debt), hidden from students at the outset of the loan, and that does not redound to the benefit of the law school.

On $75,000 per year in student loans ($225,000 in total), interest jumps from $135,000 to $158,000, an increase of $23,000.

Even more modest debt, like $25,000 per year ($75,000 in total), sees interest jump from $45,000 to $53,000, although $8,000 is a much more manageable increase.
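These figures are straightforward to reproduce. Below is a minimal sketch of the amortization arithmetic; the post does not state a repayment term, so the 15-year term here is an assumption (it roughly reproduces the figures above), and the function is illustrative rather than the actual calculator used.

```python
# Sketch of amortized-loan interest. The 15-year repayment term is an
# assumption (not stated above); it roughly reproduces the figures cited.
def total_interest(principal, annual_rate, years=15):
    r = annual_rate / 12                      # monthly interest rate
    n = years * 12                            # number of monthly payments
    payment = principal * r / (1 - (1 + r) ** -n)
    return payment * n - principal

for principal in (75_000, 150_000, 225_000):
    at_7 = total_interest(principal, 0.07)
    at_8 = total_interest(principal, 0.08)
    print(f"${principal:,}: ~${at_7:,.0f} at 7%, ~${at_8:,.0f} at 8%, "
          f"extra ~${at_8 - at_7:,.0f}")
```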

(These figures of course are exacerbated by another hidden cost, the ending of subsidized graduate student loans in 2011, which allows interest to accrue during law school.)

Student debt loads reported by the Department of Education can factor in these interest figures, and they can be helpful in assessing which programs see some of the highest debt loads upon graduation (and shortly after graduation), along with salary data.

But it makes a robust economy (big law firm starting salaries are $215,000 and big law hiring remains high, for now, though on the downslope) and a school’s robust loan repayment assistance program (LRAP) all the more important for legal education. And because it is a hidden cost, it requires some foresight from law schools to anticipate and prepare for the challenges ahead.

Which law schools saw the biggest changes in employment placement after USNWR gave "full weight" to new categories of jobs?

Back in 2016, I noted how a lot of law school-funded positions “dried up” once USNWR stopped giving those jobs “full weight” in its law school rankings. Yes, correlation does not equal causation. And yes, there were other contributing causes (e.g., changes in how the ABA required reporting of such positions). But the trend was stark and the timing noteworthy.

The trend is likewise stark, at least in one category.

USNWR is now giving weight to full-time, long-term (1 year or longer) bar passage-required and J.D. advantage jobs funded by law schools. It is also giving weight to those enrolled full time in a graduate degree program.

In two categories, graduate degree and bar passage-required, there were not significant variances from previous years. For bar passage-required jobs, that is perhaps understandable. Such positions have hovered between 200 and 300 for several years (239 last year, 212 the year before), and they are really driven by a handful of schools that can sustain a kind of “bridge” program for students interested in public interest work.

For graduate degree, it actually hit an all-time low since reporting began—just 344, down from 375 last year, and down from the record 1231 for the Class of 2010. I had thought schools might be tempted to press students into these programs to give them “full weight” placements for USNWR purposes. Not so.

But the one category that did stand out was J.D. advantage jobs funded by the school. Here, again, we are in an incredibly small category of jobs—just 97 for the Class of 2023, only one quarter of one percent of all jobs. (And again, it’s worth noting, even though these three categories combine for fewer than 700 graduates among 35,000, this was one of the leading charges of the pro-“boycott” law schools.) But there is a marked uptick, returning to a pre-2015 high.

We also know that not all these jobs are randomly distributed. They can be concentrated at some schools. We can also try to identify if some schools saw a marked rise in these three categories of jobs last year. But of course, there can be volatility from year to year in any particular category.

I looked at each law school’s average output into these three categories of previously lesser-weight employment outcomes for 2020, 2021, and 2022. I then compared Class of 2023 placement to that three-year average in these combined categories. The top 15 schools are listed below.

Employment placement in full time, long term, bar passage required or JD advantage jobs funded by the school or in graduate degree programs
School 2020-2022 avg 2023 Delta (pp)
PEPPERDINE UNIVERSITY 2.2% 8.2% 6.0
WASHINGTON UNIVERSITY 0.6% 4.8% 4.2
CATHOLIC UNIVERSITY OF AMERICA 0.0% 3.7% 3.7
ARKANSAS, LITTLE ROCK, UNIVERSITY OF 2.2% 5.7% 3.5
YALE UNIVERSITY 11.7% 15.2% 3.4
WASHBURN UNIVERSITY 0.0% 3.3% 3.3
GEORGE MASON UNIVERSITY 1.2% 4.3% 3.1
SOUTH DAKOTA, UNIVERSITY OF 0.4% 3.4% 3.0
CORNELL UNIVERSITY 1.9% 4.6% 2.7
DUQUESNE UNIVERSITY 0.3% 2.9% 2.7
ARKANSAS, FAYETTEVILLE, UNIVERSITY OF 3.3% 5.9% 2.6
MISSISSIPPI, UNIVERSITY OF 2.5% 5.0% 2.4
LIBERTY UNIVERSITY 1.0% 3.3% 2.3
UNIVERSITY OF BUFFALO-SUNY 0.3% 2.5% 2.3
WISCONSIN, UNIVERSITY OF 1.2% 3.5% 2.3
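For those curious about the mechanics, here is a minimal sketch of the comparison above, assuming a hypothetical CSV with one row per school per year and a column for the combined share of graduates in these three categories (the file and column names are illustrative, not the actual ABA spreadsheets).

```python
# Sketch: compare each school's Class of 2023 share in these categories to
# its 2020-2022 average. File and column names are illustrative.
import pandas as pd

df = pd.read_csv("employment_categories.csv")  # school, year, pct_combined

baseline = (df[df["year"].between(2020, 2022)]
            .groupby("school")["pct_combined"].mean()
            .rename("avg_2020_2022"))
latest = (df[df["year"] == 2023]
          .set_index("school")["pct_combined"]
          .rename("pct_2023"))

out = pd.concat([baseline, latest], axis=1)
out["delta_pp"] = out["pct_2023"] - out["avg_2020_2022"]
print(out.sort_values("delta_pp", ascending=False).head(15))
```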

As I wrote back in 2016, correlation is not causation, and there are of course confounding variables and factors in place at any given institution. But there’s no question the change in “full weight” categories by USNWR comes at a time when some schools are undergoing material changes to their typical employment outcomes in categories that previously did not receive “full weight” but now do. And while many of these figures appear to be small changes, we know that very small changes in the new methodology can yield big differences: “By shifting about 3 percentage points of a class from “unemployed” to a “full weight” job (in a school of 200, that’s 6 students), a school can move from being ranked about 100 in that category to 50.” (Note: this effect is somewhat diluted as it is a two-year employment average, but if the same thing happens year over year, the effects will remain the same.)

As the law firm hiring market slows down, I’ll be watching the overall trends and the individual trends for the Class of 2024 in particular.

Overall legal employment for the Class of 2023 improves slightly, with large law firm and public interest placement growing

That is literally the same headline I had for the Class of 2022, but it’s another year of incremental improvement. Below are figures for the ABA-disclosed data (excluding Puerto Rico’s three law schools). These are ten-month figures from March 15, 2024 for the Class of 2023.

Class Graduates FTLT BPR Placement FTLT JDA
Class of 2012 45,751 25,503 55.7% 4,218
Class of 2013 46,112 25,787 55.9% 4,550
Class of 2014 43,195 25,348 58.7% 4,774
Class of 2015 40,205 23,895 59.4% 4,416
Class of 2016 36,654 22,874 62.4% 3,948
Class of 2017 34,428 23,078 67.0% 3,121
Class of 2018 33,633 23,314 69.3% 3,123
Class of 2019 33,462 24,409 72.9% 2,799
Class of 2020 33,926 24,006 70.8% 2,514
Class of 2021 35,310 26,423 74.8% 3,056
Class of 2022 35,638 27,607 77.5% 2,734
Class of 2023 34,848 27,828 79.9% 2,167

Placement continues to be very good. There was an increase of a couple hundred full-time, long-term bar passage-required jobs year over year, and the graduating class size dipped a bit. Those factors combined for a placement rate of 79.9%. J.D. advantage jobs decreased somewhat, perhaps consistent with a hot law firm market last year.

It’s remarkable to compare the placement rates from the Class of 2012 to the present, from 56% to 80%. And it’s largely attributable to the decline in class size.

Here’s a comparison of the year-over-year categories.

FTLT Class of 2022 Class of 2023 Net % Change
Solo 160 174 14 8.7%
2-10 5,070 4,751 -319 -6.3%
11-25 2,115 2,047 -68 -3.2%
26-50 1,360 1,340 -20 -1.5%
51-100 1,175 1,157 -18 -1.5%
101-250 1,246 1,234 -12 -1.0%
251-500 1,145 1,223 78 6.8%
501+ 6,137 6,360 223 3.6%
Business/Industry 2,797 2,236 -561 -20.1%
Government 3,591 3,766 175 4.9%
Public Interest 2,875 2,991 116 4.0%
Federal Clerk 1,130 1,182 52 4.6%
State Clerk 2,053 2,067 14 0.7%
Academia/Education 375 367 -8 -2.1%

The trend continues a longstanding uptick in public interest placement; this year is not an outlier. Public interest job placement is up over 100% since the Class of 2017, and these eye-popping numbers continue to rise. It is not an overstatement to say that law students are increasingly oriented toward public interest, and that there are ample funding opportunities in public interest work to sustain these graduates. (I include a visualization of the trend of raw placement into these jobs here.)

Sole practitioners continue to slide significantly (they were in the low 300s not long ago in raw placement).

Additionally, large law firm placement continues to boom. Placement is up by thousands of graduates over the last several years. Placement in firms with at least 101 attorneys is around 8,800. A full 25% of all law school graduates landed in a “Big Law” firm, and more than 30% of those employed in a full-time, long-term, bar passage-required job did so. (Charts showing both the raw placement and the percentage of graduates are included here in another visualization, charting both on two different axes to show the similar trends.)
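As a quick back-of-the-envelope check of those shares, using the Class of 2023 figures from the tables above and treating firms of 101 or more attorneys as “Big Law”:

```python
# Back-of-the-envelope check using the Class of 2023 figures above.
big_law = 1_234 + 1_223 + 6_360   # 101-250, 251-500, and 501+ placements
graduates = 34_848
ftlt_bpr = 27_828

print(f"Big Law placements: {big_law:,}")                    # ~8,800
print(f"Share of all graduates: {big_law / graduates:.1%}")  # ~25%
print(f"Share of FTLT BPR jobs: {big_law / ftlt_bpr:.1%}")   # ~32%
```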

How did law schools and law firms survive the purported “death of Big Law”? Well, Big Law seems to be doing better than ever. It’s not clear we’ll have as hot a market for this year’s class, but it’s something to watch.

One slightly interesting observation is a sharp decline in “business” placement. These tend to be JD advantage positions, and if there’s a decline in JD advantage placement we’d expect to see a decline here, too, but the drop seems larger than the decline in JD advantage jobs alone would suggest.

Federal clerkship placement improved a bit but has remained mostly steady.

There’s a lot more to examine in light of USNWR methodology changes, but I’ll save that for another post.

The 2024-2025 USNWR law school rankings: methodology tweaks may help entrench elite schools, but elite schools see reputation decline among lawyers and judges

Hours after the release of last year’s dramatic change to the USNWR methodology, I noted the increase in “compression and volatility” likely coming in this year’s rankings.

USNWR changed a couple of things in its methodology:

There were a couple differences in how the rankings were calculated, described below. In summary, U.S. News averaged its bar passage and employment indicators over two years. Also, the lawyers and judges assessment score had a second source of ratings besides names supplied by law schools.

While it might not be the design—more on that in a moment—its effect may well be to entrench elite schools.

1. Changes to employment (and bar passage)

USNWR decided to use two-year figures for both employment and bar passage. Here’s how it explained the employment changes.

To improve measurement of this indicator – given the common year-to-year fluctuations associated with outcome measures and the small sizes of some graduating J.D. classes – this indicator was derived from the average of the 2021 and 2022 graduating class outcomes 10 months after graduation.

This isn’t entirely accurate, for a few reasons. First, the “small sizes” of some classes are not really the issue; class sizes have been small for the decades that USNWR has used these categories, yet it never thought to use a two-year average until now. And there have always been year-to-year fluctuations over those same decades.

The issue, instead, is a problem about compression in their rankings system with the new methodology and high volatility in the categories given the most weight.

Compare this visualization of schools two years ago to last year, and where the raw scores put the top ~60 schools.

The methodology changes removed or reduced the weight of categories that created a broader spread across schools. That created the compression. Then it gave additional weight to the categories that are the most volatile. That would lead to this year’s projected more dramatic changes among schools—not just volatility, but volatility within a highly compressed rating system.

So why did USNWR decide to change this year? There are two possible explanations for this change, and, tellingly, either explanation looks bad for USNWR.

One explanation is that USNWR was simply unaware of the potential volatility in its rankings and is responding now. That is a bad look for USNWR. It took me minutes to spot this likely problem. If it escaped its entire data team’s months-long vetting, it’s a telling concession.

The other explanation is that USNWR was aware of the potential volatility and has taken a step this year to reduce it. That’s a bad look, too—if it was aware of the problem, why didn’t it address it then? It did, after all, have all of the granular employment data in previous years. And if it was aware last year, what prompted the change this year?

The related answer to both, by the way, is that it saw something problematic in what the outputs would be, and it modified the weighing to avoid undesirable results. This is not something I have proof for, I admit. I can only infer from the actions taken in response to some events of the last year.

But we saw a few schools—notably, as I pointed out, NYU and Cornell—that would disproportionately suffer under the new system. I projected NYU to slide to 11 and Cornell to 18. Instead, with a re-weighing, NYU slid only to 9, and Cornell to 14. Other schools—particularly Washington University in St. Louis, North Carolina, and Texas A&M—were projected to rise much faster. The rankings changes are designed to put a governor on moves down—or up—the rankings.

Now, it’s not possible to prove that USNWR saw that NYU and Cornell would slide much faster than it thought appropriate and changed the methodology in response. But I can simply point out that these arguments were raised publicly for months, and this methodological change is designed to slow down the kinds of dramatic changes that we publicly expected this year. It’s not a good look whatever the motivation was, because it reflects a lack of competence about the changes instituted last year. Relatedly, USNWR is here conceding that too much volatility is a bad thing. That is, it would prefer to see less movement (and more entrenchment) in its final product.

(The lengthening window of data is creating increasingly strange results. For instance, today’s prospective law students are considering what their employment and outcome prospects look like in 2027. The current methodology has data stretching back to the Class of 2019 (two-year average of ultimate bar passage rates for the Classes of 2019 and 2020). That said, perhaps it’s better to think of schools over a longer period of time rather than just one-year data sets each year.)

2. Added value of career development (and bar support) at law schools

Last December, I blogged, “Perhaps the most valuable legal education job in the new USNWR rankings landscape? Career development.” If that was true then, it’s essentially doubly true now.

When I looked at the dramatic opportunity for law schools to rethink how they do admissions, I highlighted how broad the spread was for employment outcomes, and how small fluctuations could affect a school’s place dramatically. (See earlier for NYU and Cornell.) The same was true, to a lesser extent, on bar passage.

Now, in pure mathematical terms, the effect of a given class’s employment output is unchanged. It was 33% of the rankings last year; it’s now (effectively) 16.5% of the rankings for each year of two years. Formally, no difference.

But, I would posit, employment effects have now effectively doubled.

A good year will redound to a school for two years; a bad year will need to be managed across two years. No more opportunities to rip the bandage off and move to the next year; a bad year will linger. And yes, while it receives less weight in a given year, a school is seeking to maximize the effect every year.
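To make the arithmetic concrete, here is a tiny sketch with made-up placement rates. The mapping of graduating classes to ranking years is illustrative; the point is only that each class now appears in two consecutive rankings at roughly half the weight each, so its total exposure is unchanged but its shelf life doubles.

```python
# Illustration with made-up placement rates: under a two-year average, each
# class counts in two consecutive rankings at ~16.5% weight each.
placement = {2022: 0.78, 2023: 0.80, 2024: 0.74}   # hypothetical rates

def employment_component(ranking_year, weight_per_year=0.165):
    classes = (ranking_year - 2, ranking_year - 1)  # two most recent classes
    return sum(weight_per_year * placement[c] for c in classes)

for year in (2024, 2025):
    print(year, round(employment_component(year), 4))
# The strong (hypothetical) Class of 2023 lifts both the 2024 and 2025
# cycles; a weak Class of 2024 would likewise linger into a second cycle.
```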

So what I said before, about career development being the most valuable job in legal education? Doubly true.

The legal profession is witnessing a slowdown in hiring. Tougher times are coming to graduating law classes in the very near future. And you don’t want to be preparing for the storm in the middle of it. Law schools should be in the process of adding to their career development offices—in fact, I’d say, as a rule of thumb, doubling the size. And if you’re not… well, I hate to use the term “academic malpractice” without an individualized assessment, but it’s the term I’m likely to use anyway. And while that may sound like overkill, recall that this isn’t simply a USNWR gimmick. It benefits students to have high quality career advising and mentoring for their future professional careers, particularly as economic challenges arise in the near future.

(The same is true for bar passage, but at many schools, I think, the value will largely be in ensuring that students get over the finish line at the end of the day if they fail the bar exam on the first attempt. The state-specific relative metric of the bar exam makes it tougher to quantify here. So the same is true, I think, just to a smaller degree, of bar support more generally.)

3. Changes to lawyer and judge peer reputation surveys.

One more methodological change of note:

Legal professionals – including hiring partners of law firms, practicing attorneys and judges – rated programs' overall quality on a scale from 1 (marginal) to 5 (outstanding), and were instructed to mark "don't know" for schools they did not know well enough to evaluate. A school's score is the average of 1-5 ratings it received across the three most recent survey years. U.S. News administered the legal professionals survey in fall 2023 and early 2024 to recipients that law schools provided to U.S. News in summer 2023. Of those recipients surveyed in fall 2022 and early 2023, 43% responded. For this edition, U.S. News complemented these ratings by surveying partners at big law firms, sampled based on their size – larger firms were more frequently surveyed – while establishing geographic dispersion. Leopard Solutions, which partnered with U.S. News on its Best Companies to Work For: Law Firms list, provided U.S. News with the contacts from which a sample was drawn.

USNWR recognized that as schools “boycotted” the survey, it would have a smaller universe of lawyers and judges to survey. In the past, schools submitted 10 names each (up to ~2000 names). The response rate was quite low, so USNWR used a three-year average. As schools stopped submitting names, USNWR looked elsewhere.

And it deliberately selected a category: “partners at big law firms, sampled based on their size—larger firms were more frequently surveyed.”

In “Where Do Partners Come From?,” Professor Ted Seto tracked where NLJ 100 law firm partners came from—partners at the largest law firms. The data is from 2012, but we know that partnership in large law firms is not susceptible to rapid fluctuations. Here are the top 20, with the raw number of partners listed:

1 Harvard 946
2 Georgetown 729
3 NYU 543
4 Virginia 527
5 Columbia 516
6 George Washington 447
7 Michigan 444
8 Chicago 426
9 Texas 384
10 Northwestern 365
11 Pennsylvania 329
12 Boston University 317
13 Fordham 306
14 UC Berkeley 287
15 UCLA 257
16 Yale 253
17 Stanford 240
18 UC Hastings 233
19 Duke 219
20 Boston College 213

These 20 schools are nearly all in the “top 20” or just outside of it in the USNWR rankings, and the handful that fall outside (e.g., George Washington, UC Law SF formerly Hastings) have, at varying times, been closer to the “top 20”. We can expect some affinity (or bias) for these partners’ home institutions, and perhaps for “peer” institutions as well (e.g., where their fellow partners at their firms attended school).

With almost clinical precision, then, USNWR has opted for a category to “complement” the survey that is likely to benefit the most elite law schools.

So, did it work? Well, to be fair, perhaps my assumption is wrong.

It’s worth noting that 11 of the “top 14” schools are experiencing all-time lows in the lawyer and judge survey category, either new lows or ties with previous lows, since USNWR began this metric in 1998. Here are the scores in this category (on a 1-5 scale), with each school’s all-time high for comparison.

Stanford: 4.7 (all-time high: 4.9)
Harvard: 4.6 (4.9)
Chicago: 4.6 (4.8)
Columbia: 4.5 (4.8)
Yale: 4.5 (4.9)
Michigan: 4.4 (4.7)
Virginia: 4.4 (4.6)
Duke: 4.3 (4.5)
NYU: 4.3 (4.6)
Berkeley: 4.3 (4.6)
Georgetown: 4.2 (4.5)

Penn saw a decline from 4.4 to 4.3, and Cornell saw a decline from 4.3 to 4.2, but neither was an all-time low. Only Northwestern saw its score hold steady at 4.3 without experiencing an all-time low. UPDATE: I mistakenly had Virginia’s previous high at 4.5 instead of 4.6.

Compare that to the next 86 schools that have been ranked in this category since 1998: just 6 others experienced all-time lows, again either new lows or ties with previous lows.

What could cause this disparity? Causation is tough to identify here, but let me posit two things.

First, we are seeing the slow phase-out of “boycotting” schools’ data inputs. Now two-thirds of the schools’ data is out of the mix. Schools were inclined to include their own supporters, and they are gone. Now, it’s hard to say that this is happening with such clinical precision at only the elite law schools and nowhere else. But perhaps a lot of elite law schools boycotted, and there are some tag-along effects, as elite schools tend to rate elite schools comparably. That said, “complementing” with big law partner data should help shore up these figures, but perhaps there’s not enough of it (indeed, we have no idea how those responses are mixed in with the rest of the data).

Second, it is possible that lawyers and judges—and perhaps in particular big law firm partners—are viewing elite law schools with less and less respect than at any time in recent history, and perhaps more so in the last year than at any other time. It might be law student or university protests about the Gaza conflict, fossil fuels, free speech—pick a cause. And perhaps the brunt of that publicity (and perhaps of actual events) is falling on the most elite schools, which is creating fallout for their reputations in the legal community more generally. But that is very hard to establish and to pinpoint, and one might want to see what happens next year.

Neither is a perfect causal explanation, but both offer some possibilities to consider. Now, again, I would have expected the new methodology to help entrench elite schools, but this year it seems not to have done so.

We shall see what happens next year. Will three-year averages of some categories be in store? Or will USNWR introduce other categories (e.g., if big law firm partners merit special surveys, shouldn’t such outcomes for employment merit special weight?) consistent with concerns that the methodology ought to value certain things more than it has in the past?

There’s much more to discuss, of course, but this is my first take on the methodological changes in particular and the noteworthy decline in reputation scores among the legal profession for a cohort of law schools.

What percentage of law school faculty have recently contributed to political candidates?

My recent post, “Law school faculty monetary contributions to political candidates, 2017 to early 2023,” has garnered a lot of attention and feedback, and I’m grateful for people’s interest in it! Some recurring questions came up.

First, what does this say about the percentage of politically-engaged faculty (an important question raised by Professor Milan Markovic and others)?

I tracked around 3300 faculty. That could double-count some faculty who moved around, so the true number could be smaller. And some could self-identify as a “law professor” but not teach at the law school (e.g., a business law professor teaching undergraduates), another way it could be smaller.

But I sense I am somewhat understating the results, as I know of a non-trivial number of faculty (some of whom followed up with me after this post!) who are not included because they failed to list their occupation; listed some other occupation, like attorney or teacher; or whose title is, say, “professor of legal writing,” “professor of the practice,” or “Williams Chair in Constitutional Law.”

That’s all a hedge, and let’s set it aside for a moment. Based on what we know, what does it say about political engagement?

The window of contributions was a five-plus-year window, 2017 to early 2023. I looked at law school faculties from 2022, which included 9,195 “full time” faculty. That worked out to around 35.5% of full-time faculty contributing in this window.

I looked at schools where at least 50% of the faculty contributed, with, of course, all of the caveats I’ve listed.

School D R Both Pct
American 56 2   81.7%
Barry 19 1   71.4%
Irvine 37 1   67.9%
Widener Commonwealth 9 1   66.7%
Hastings 43     66.2%
NYLS 30     63.8%
Pace 24     61.5%
New Mexico 23     60.5%
Fordham 46 1 1 59.3%
Rutgers 59 2   56.5%
Wake Forest 27     56.3%
CUNY 31     55.4%
Loyola Los Angeles 36   1 55.2%
Indiana-Bloomington 27     55.1%
Catholic 9 3 1 54.2%
George Washington 49 3   54.2%
Cooley 21   1 53.7%
Atlanta's John Marshall 8     53.3%
Montana 8     53.3%
California Western 19     52.8%
Utah 20     52.6%
Chicago Kent 25     52.1%
SMU 21     50.0%
Illinois-Chicago 21     50.0%

One might be hard pressed to see any rhyme or reason for which schools are on or off the list. It’s possible, for instance, that some DC schools (like American, Catholic, and George Washington) attract disproportionately more donors with some hope of service in a future administration. It’s possible that some schools’ faculty members (e.g., Katie Porter at Irvine) or famous alumni (e.g., Kamala Harris at Hastings, now UC Law SF) prompted more donations. Only three of the Princeton Review’s “most liberal law students” schools (American, Irvine, George Washington) appear on the list.

School D R Both Pct
Regent   1   4.2%
Lincoln Memorial 1     4.5%
Faulkner   1   5.9%
North Dakota 1     6.3%
Idaho 3     7.9%
South Dakota 2     10.0%
LSU 4 1   12.8%
Southern 6 1   13.0%
Ave Maria 2 1   13.0%
Widener Delaware 4     13.3%
Indiana-Indianapolis 6     13.6%
Gonzaga 4     14.8%
Liberty 1 2   15.0%
Loyola New Orleans 6     15.0%
Baylor 5     16.7%
St. Thomas (Florida) 7     16.7%
Capital 4     17.4%
Mississippi College 4     17.4%
Villanova 8     17.8%
Tulane 9 1   18.2%
Wyoming 4     18.2%
Elon 7     18.4%
Texas A&M 11     18.6%
Washington University 18     18.8%
St. Thomas (Minnesota) 5     19.2%
Cleveland State 6     19.4%

Again, it’s interesting to return to the Princeton Review rankings of the “most conservative students.” Six of those top ten (Ave Maria, Regent, Faulkner, LSU, Idaho, Mississippi College) make the list of the least politically engaged faculty.

So there are varying things to consider. On the whole, around 1/3 of the reported faculty (perhaps a bit higher, but perhaps not by much) contributed in the last five years. At a handful of schools (perhaps for some of the reasons above), contributions are much higher or much lower as a percentage of overall faculty. It could be that political engagement is happening elsewhere. That said, with 1/3 of faculty contributing and nearly 96% of those contributors giving to Democrats, I am not sure that the data is masking substantial numbers of Republican contributors who are simply sitting on the sidelines—but perhaps a few more, if one sees how political engagement shakes out among those least-active institutions.

There are some other contribution figures to consider, perhaps for later posts.

Analysis of first-time bar passage data for Class of 2023 and ultimate bar passage data for Class of 2021

The ABA has released its new batch of data on bar passage. The data includes the first-time passage data for the Class of 2023 and the “ultimate” passage data for the Class of 2021. As I noted earlier, USNWR has increased the weight on bar passage as a metric (18% of the methodology is for first-time passage, 7% for ultimate), and it is one of the biggest metrics. It is also one of the most volatile metrics.

To offer a snapshot of what the data means, I looked at both the first-time and ultimate passage data. I compared schools’ performance against their Class of 2022 and 2020 metrics. I weighed the data the way USNWR does for a point of comparison.

Note that USNWR has not yet released its latest rankings for spring 2024. Those will include the Class of 2022 and 2020 metrics. This new batch of data will appear in rankings released in 2025.
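To give a sense of what “weighing the data the way USNWR does” might look like, here is a rough sketch. It is not USNWR’s published formula: it simply standardizes each school’s first-time and ultimate pass indicators across schools, weights them 18% and 7%, and compares cohorts. The pass rates below are made up.

```python
# Rough sketch (not USNWR's actual formula): standardize first-time and
# ultimate bar passage indicators across schools, weight 18% and 7%, and
# compare one cohort's weighted score to the prior cohort's.
import statistics

def z_scores(values):
    mean, sd = statistics.fmean(values), statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def weighted_scores(first_time, ultimate, w_first=0.18, w_ult=0.07):
    zf, zu = z_scores(first_time), z_scores(ultimate)
    return [w_first * f + w_ult * u for f, u in zip(zf, zu)]

# hypothetical pass indicators for three schools, prior cohort vs. new cohort
old = weighted_scores([0.80, 0.72, 0.90], [0.95, 0.90, 0.97])
new = weighted_scores([0.85, 0.70, 0.91], [0.96, 0.88, 0.97])
for school, o, n in zip(["A", "B", "C"], old, new):
    print(school, round(n - o, 3))
```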

Here are the schools projected to improve in this metric (which, again, under the current methodology, is 25% of the rankings) relative to the Classes of 2022 and 2020. The numbers below show the change in score; that is, they show how much a school is projected to improve or decline in the scoring. They are not the bar passage rates themselves, which are comparative metrics that are harder to interpret in raw terms. That said, these numbers are, in their own way, meaningless, as they are just one factor among several.

Pontifical Catholic University of P.R. 0.316441
Appalachian School of Law 0.2564
Texas Southern University Thurgood Marshall School of Law 0.230382
Widener University-Delaware 0.225445
Northern Kentucky University 0.201138
Stetson University College of Law 0.188107
Villanova University 0.179214
Miami, University of 0.158228
Kansas, University of 0.152897
Albany Law School 0.141105
Baltimore, University of 0.129179
Texas Tech University 0.127884
Southern Illinois University 0.127633
Cincinnati, University of 0.127493
Saint Louis University 0.122641
North Carolina Central University 0.118442
Pittsburgh, University of 0.116755
Memphis, University of 0.106309
Vanderbilt University 0.104692
Boston College 0.102051

Here are the schools projected to decline in this metric over the Classes of 2022 and 2020.

Willamette University -0.41276
New Hampshire, University of -0.3903
Illinois, University of -0.32793
Case Western Reserve University -0.32763
Florida A&M University -0.31228
Ohio Northern University -0.30423
City University of New York -0.25318
Kentucky, University of -0.20322
Southern University -0.18699
Missouri, University of -0.17881
Puerto Rico, University of -0.166
Seattle University -0.16593
Pennsylvania State-Dickinson Law -0.15274
Regent University Law School -0.14854
Tulsa, University of -0.14119
Colorado, University of -0.1361
Gonzaga University -0.1291
Cleveland State University College of Law -0.12842
California Western School of Law -0.12533
St. Thomas University (Florida) -0.12298

Law school faculty monetary contributions to political candidates, 2017 to early 2023

I’ve done some work looking at law firms and where political contributions from each went among the largest law firms. I thought I’d try my hand at gathering some comparable data among law professors at law schools.

I drew FEC data from 2017 to early 2023 (when I started running data for this study). Contribution disclosure is only required for those who contribute more than $200, but many outlets like ActBlue or WinRed disclose even $1 contributions.

I looked for all faculty who self identified as a “law” “professor” as their occupation. That included professors of law and all potential titles, but it did not include professors with “legal” alone in the title, or those who identified as a law “teacher” or “educator.” Of course, if faculty members primarily self-identified as an “attorney” or some other title, they fell outside the filter. I then screened out anyone with the title “adjunct” or “emeritus/emerita” to return only full-time faculty members. It includes anyone in “doctrinal,” “clinical,” “research,” “writing,” “dean,” or other faculty roles, as long as “law” and “professor” appeared in the title.

The final data set had around 80,000 items. I sorted and standardized the institutions, the law schools where the donors taught. Some were more ambiguous (e.g., was “UM” Michigan, Minnesota, or Maryland? Was “Widener” in Pennsylvania or Delaware?), but I tried to standardize as best I could.

I then coded all contributions as “Democratic,” “Republican,” or “other.” Some, like ActBlue or WinRed, are of course obvious. But I sifted through every label to identify whether they were Democratic- or Republican-leaning. OpenSecrets helped reveal if an ostensibly “neutral” political organization overwhelmingly contributed to candidates of one political party or another. Those whose contributions were at least 25% to each party I labeled “other.” So, too, were contributions to the Green Party or the Libertarian Party. These ended up being a trivial part of the data set.

I cleaned up the names of faculty. For instance, “William O’Connor” might sometimes label himself “Bill O’Connor” in some places, or “William OConnor” elsewhere. Data entry for contributors is often quite sloppy. I created a function that took the first five letters of a donor’s last name and the first letter of the first name to create a unique ID, eliminating any punctuation or spaces. I then spot checked to clean up situations where the “William” v. “Bill” scenario could arise. Undoubtedly, this method cleaned up most things but might have errors.
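Here is a minimal sketch of that de-duplication key; the function name and exact cleanup rules are my own illustration, not the actual script.

```python
# Sketch of the de-duplication key described above: first five letters of the
# last name plus the first letter of the first name, with punctuation and
# spaces stripped. Illustrative only.
import re

def donor_key(first_name, last_name):
    clean = lambda s: re.sub(r"[^a-z]", "", s.lower())
    return clean(last_name)[:5] + clean(first_name)[:1]

print(donor_key("William", "O'Connor"))  # "oconnw"
print(donor_key("Bill", "OConnor"))      # "oconnb" -- still needs a spot check
```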

I then sifted through each school to identify how many faculty at each school contributed to Democratic, Republican, or other candidates. I also separately identified faculty who contributed to both Democratic and Republican candidates in this window. If a faculty member moved from one school to another in this window, it is possible that faculty member is listed twice.
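And a sketch of the per-school tally, assuming a cleaned-up table with one row per unique donor and boolean columns for whether that donor gave to Democrats or Republicans (illustrative names, not the actual data set):

```python
# Sketch of the per-school tally. Column names are illustrative.
import pandas as pd

donors = pd.read_csv("cleaned_donors.csv")  # school, gave_dem, gave_rep

def category(row):
    if row["gave_dem"] and row["gave_rep"]:
        return "Both"
    return "D" if row["gave_dem"] else "R" if row["gave_rep"] else "Other"

donors["category"] = donors.apply(category, axis=1)
summary = donors.groupby(["school", "category"]).size().unstack(fill_value=0)
print(summary.sort_values("D", ascending=False).head(20))
```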

In the end, I identified 3148 law faculty who contributed only to Democrats in this 5+ year span—95.9% of the data set of those identified as contributing to either Democrats or Republicans in this period. Another 88 (2.7%) contributed only to Republicans. And 48 others contributed to both Democrats and Republicans.

The dollar figures were likewise imbalanced but slightly less so. About $5.1 million went to Democrats in this period, about 92.3% of the total contributions to either Democrats or Republicans. About $425,000 went to Republicans. (Around $6000 went to others.)

Of course, there are limitations to this study, like any other. For some law schools, law faculty were running for office (e.g., former Harvard Law professor Elizabeth Warren running for Senate and for President), and contributions could be skewed to support a colleague. Faculty can “contribute” in other ways, such as volunteering for a campaign or even working in an administration. Faculty might be very “political” in a sense but refuse to contribute to candidates.

That said, I was surprised to see very few cross-partisan contributions. Even a $1 contribution to, say, Senator Mitt Romney or Representative Liz Cheney would have put a Democratic-leaning faculty member into the category of contributors to both Democrats and Republicans. But the data reveals very few such contributions.

The first chart breaks down total faculty who gave to Democrats, Republicans, or both at each school in this time period.

Next shows the dollars contributed at each school in this time period.

Raw figures for the faculty donors, and the total dollars contributed, are below.

School D R Both
Stanford 30 2  
Yale 27    
Chicago 23 1 1
Penn 28    
Duke 40 1 1
Harvard 56   1
NYU 50   3
Columbia 32 2  
Virginia 38   2
Berkeley 34   1
Michigan 46    
Northwestern 32   1
Cornell 22    
UCLA 42 1  
Georgetown 74 2 1
Minnesota 19 1  
Texas 37   1
USC 19    
Vanderbilt 19   1
Georgia 13    
Washington University 18    
BYU 6 2  
Florida 22    
North Carolina 22    
Ohio State 19 1  
Wake Forest 27    
Boston University 26    
Notre Dame 14    
Boston College 21   1
Fordham 46 1 1
Texas A&M 11    
Arizona State 16   1
George Mason 11 6 2
Utah 20    
Alabama 11   1
Emory 21 1  
George Washington 49 3  
Iowa 13   1
Irvine 37 1  
Kansas 9    
Washington & Lee 17    
Wisconsin 14    
Illinois 12 1 1
Villanova 8    
Indiana-Bloomington 27    
Pepperdine 8 1 1
SMU 21    
William & Mary 9 1  
Baylor 5    
Washington 15    
Maryland 28 1  
Oklahoma 8   1
Tennessee 9 1 1
Arizona 21    
Temple 28    
Colorado 16 1 1
Florida State 10 1  
Seton Hall 18    
Wayne State 16   2
Davis 17    
FIU 10 1  
Hastings 43    
Houston 21    
Kentucky 11 1  
Loyola Los Angeles 36   1
Richmond 18    
South Carolina 14 1  
St. John's 14 1  
Cardozo 32    
Georgia State 18    
Connecticut 21   1
Marquette 8    
Miami 30   1
Missouri 8 1  
Northeastern 20    
Texas Tech 9   1
Tulane 9 1  
Oregon 11    
San Diego 12 3 1
Case Western 17   2
Denver 25 1  
Drexel 14    
Penn State Law 12    
Cincinnati 10    
Lewis & Clark 16    
Loyola Chicago 25    
Stetson 10 1  
Drake 10    
American 56 2  
Duquesne 11 1  
Nebraska 7    
Penn State Dickinson 7    
Pittsburgh 12 1 1
St. Louis 14   1
UNLV 19    
Montana 8    
New Mexico 23    
St. Thomas (Minnesota) 5    
Chicago Kent 25    
Gonzaga 4    
Indiana-Indianapolis 6    
Louisville 8   1
LSU 4 1  
Mercer 10   1
School D R Oth
Stanford $136,819 $8,205  
Yale $57,735    
Chicago $78,264 $7,904  
Penn $85,283    
Duke $46,535 $2,075  
Harvard $366,949 $1,000  
NYU $215,348 $3,470  
Columbia $68,598 $650  
Virginia $80,013 $21,073  
Berkeley $65,097 $500  
Michigan $103,402    
Northwestern $64,460 $167,245  
Cornell $30,666    
UCLA $62,972 $4,525 $2,724
Georgetown $223,280 $21,325  
Minnesota $37,115 $12,960 $900
Texas $41,912 $500  
USC $11,794    
Vanderbilt $37,174 $1,000  
Georgia $17,684    
Washington University $18,413    
BYU $3,248 $850  
Florida $28,752    
North Carolina $48,831    
Ohio State $32,457 $1,500  
Wake Forest $19,769    
Boston University $37,609    
Notre Dame $42,164    
Boston College $21,498 $2,500  
Fordham $277,494 $745 $30
Texas A&M $4,945    
Arizona State $19,899 $3,000  
George Mason $40,509 $18,932  
Utah $20,736    
Alabama $7,461 $250  
Emory $64,257 $500  
George Washington $103,639 $2,350  
Iowa $10,104 $200  
Irvine $55,211 $356  
Kansas $6,235    
Washington & Lee $18,461    
Wisconsin $17,193    
Illinois $61,570 $1,103  
Villanova $3,223    
Indiana-Bloomington $38,198    
Pepperdine $17,660 $3,476  
SMU $34,839    
William & Mary $7,494 $4,421  
Baylor $5,925    
Washington $19,149    
Maryland $40,920 $415  
Oklahoma $7,535 $2,000 $1,250
Tennessee $10,195 $2,000  
Arizona $9,893    
Temple $32,976    
Colorado $7,160 $3,193  
Florida State $11,286 $50  
Seton Hall $30,759    
Wayne State $16,078 $28,760  
Davis $16,359    
FIU $18,588 $1,565  
Hastings $98,908   $500
Houston $12,523    
Kentucky $8,853 $1,000  
Loyola Los Angeles $30,281 $1,000  
Richmond $18,044    
South Carolina $24,716 $2,000  
St. John's $14,519 $100  
Cardozo $44,321    
Georgia State $10,763    
Connecticut $12,097 $100  
Marquette $4,492    
Miami $40,449 $4,200 $250
Missouri $1,763 $77  
Northeastern $24,891    
Texas Tech $3,805 $1,584  
Tulane $4,396 $463  
Oregon $2,744    
San Diego $52,386 $14,450  
Case Western $15,988 $211  
Denver $51,519 $100  
Drexel $24,604    
Penn State Law $8,116    
Cincinnati $2,764    
Lewis & Clark $6,226    
Loyola Chicago $23,904    
Stetson $4,829 $20  
Drake $15,490    
American $199,403 $1,795  
Duquesne $12,098 $4,750  
Nebraska $10,638    
Penn State Dickinson $8,445    
Pittsburgh $10,160 $2,780  
St. Louis $21,462 $5,100  
UNLV $20,659    
Montana $9,529    
New Mexico $20,443    
St. Thomas (Minnesota) $2,184    
Chicago Kent $64,248    
Gonzaga $232    
Indiana-Indianapolis $6,221    
Louisville $5,480 $1,000  
LSU $4,020 $60  
Mercer $11,161 $500  

Supreme Court analysis: Trump v. Anderson

This is a high-level overview of the decision in Trump v. Anderson, written in the format I’ve been presenting in various ways over the last few days. Disclosure: I filed an amicus brief in support of neither party in this case, and in the court below.

On March 4, 2024, the Supreme Court decided Trump v. Anderson. It issued a per curiam opinion reversing the Colorado Supreme Court and effectively permitting Donald Trump’s name to appear on the Republican primary ballot.

Section 3 of the Fourteenth Amendment provides:

No person shall be a Senator or Representative in Congress, or elector of President and Vice President, or hold any office, civil or military, under the United States, or under any state, who, having previously taken an oath, as a member of Congress, or as an officer of the United States, or as a member of any state legislature, or as an executive or judicial officer of any state, to support the Constitution of the United States, shall have engaged in insurrection or rebellion against the same, or given aid or comfort to the enemies thereof. But Congress may by a vote of two-thirds of each House, remove such disability.

The Colorado Supreme Court, in a divided decision, had held that Donald Trump engaged in insurrection for purposes of Section 3 for his role in the January 6, 2021 riots at the Capitol, and it concluded he could not appear on the Republican primary ballot in that state. But the court stayed its ruling, so he appeared on the ballot as the case was appealed.

The United States Supreme Court expedited review and issued its decision in a little less than a month. The result was mostly unsurprising after listening to oral argument. The sense was that at least eight justices, if not all nine, were inclined to reverse the Colorado Supreme Court on some theory that the State of Colorado, or any single state, didn’t have the power to exclude ineligible presidential candidates from the ballot or to enforce this provision, for varying structural or practical reasons. The only question was how the Court would get there.

Trump v. Anderson is a per curiam decision, which means we do not know the author, although (I shouldn’t speculate) it reads in some respects like the voice of Chief Justice Roberts. The result was unanimous, 9-0, essentially saying that Colorado lacks this power. But there are sharp elbows on the path there, not only to that one holding but over whether other holdings should be reached. Six justices (Chief Justice Roberts and Justices Clarence Thomas, Samuel Alito, Neil Gorsuch, Brett Kavanaugh, and Amy Coney Barrett) agreed with the heart of the reasoning in the per curiam opinion. Justice Barrett wrote separately to explain that she agreed with only part of it. And there was an opinion jointly authored by Justices Sonia Sotomayor, Elena Kagan, and Ketanji Brown Jackson concurring in the judgment only, though they too agreed with the heart of the majority’s reasoning.

I’ll focus on the consensus view of the Court for a moment. That part of the decision focuses on the overall constitutional point: text, structure, context, and so on. It begins with a quotation from U.S. Term Limits v. Thornton, a 1995 case, which held that states had no power to add term limits or additional qualifications for congressional candidates. That case had in turn cited the great Justice Joseph Story in his Commentaries on the Constitution for the proposition that if states are exercising power over federal elections, that power has to come from some source in the Constitution.

So if you are looking at Section 3 in the context of a presidential election, where is the state power? Well, it’s certainly not going to be found in the Fourteenth Amendment, which is a constraint on state power. And Section 5 gives Congress the power to enforce it, but it gives no power to the states. As you run through the rest of the Constitution, you can’t find other provisions empowering states to enforce Section 3 against a presidential candidate. Articles I and II deal with congressional and presidential elections, but it’s not clear that they implicitly contain the power to enforce the later-adopted Section 3.

By the structure of the Constitution, this is a provision designed for congressional enforcement, for national remedies and national mechanisms. As a practical matter, it makes very little sense for states to add these sorts of burdens on presidential candidates. If they want to do it for state candidates, that’s their own business. But doing so for presidential candidates makes far less sense, especially given that Congress can lift the disability by a two-thirds vote. For a state to step in and hold a candidate not qualified, only for Congress to swoop in later and lift the disability, would seem to force Congress’s hand rather than leave the power with Congress.

And at the very end of the opinion is a series of practical concerns: one state’s evidentiary rules or procedures for how these challenges are filed could have ripple effects throughout the country, and we might see inconsistent verdicts across the United States. States also have less of an interest in presidential elections, simply because the presidency is a national office. The notion that states could adjudicate qualifications, resolve contested factual claims, and then reach a kind of patchwork result across the United States is not something that makes a whole lot of sense structurally.

That was Part II-B of the per curiam opinion, joined in full by Justice Barrett, and joined in logic, if not in full, by the concurring opinion of Justices Sotomayor, Kagan, and Jackson. And that could have been it. That would have been easy, in a way, for the Court.

But instead, there is a lot of friction on the Court in a different respect. Part II-A of the opinion, where Justice Barrett peels off along with the other concurring justices, addresses a separate question: not simply whether states have the power to enforce Section 3, but who else, and in what context, has the power to enforce it. And for that, the Court turns to the way that Section 3 is set up.

The five-justice majority speaks about how Congress now has the role of enforcing the provisions of Section 3. Section 5 of the Fourteenth Amendment gives Congress the power to enforce the Amendment through appropriate legislation, and that appropriate legislation must be, in the words of other Supreme Court precedent, including City of Boerne v. Flores, a “congruent and proportional” remedy for the concerns addressed by these provisions of the Constitution.

Because we are dealing with a factual question, whether a class of individuals is barred for engaging in “insurrection,” we must, as Justice Kavanaugh noted at oral argument, ascertain who is covered. That requires a determination. This is something the Colorado Supreme Court recognized was necessary in this case: determining whether someone engaged in insurrection required procedures and factual findings.

And this is also what Chief Justice Chase, riding circuit as a circuit justice in 1869, noted in a case called Griffin’s Case, which has received a lot of attention in the scholarly discourse. There, a federal judge was deciding the case one year after ratification of the Fourteenth Amendment in 1868. Chase heard a habeas challenge from Griffin, who had been convicted in Virginia state court. Griffin challenged that conviction in federal court, arguing it was invalid because it had been issued by a judge who was barred from holding office by Section 3. Chase, writing the opinion, said, in essence: I am not in a position to determine these things; I have to make a determination, and “proceedings, evidence, decisions, and enforcement of decisions are indispensable.” Unless given some guidance, especially from Congress, to figure out what to do, the justice was not in a position to make this adjudication.

So Part II-A of the opinion relies heavily on Congress’s role here, because the Constitution empowers Congress. It enables Congress, subject to judicial review, to pass appropriate legislation, and Congress’s Section 5 power is “critical” when it comes to Section 3. The per curiam opinion offers these sorts of statements before it leads into the argument that the states lack the power.

At the very end, the per curiam opinion says these two things go hand in hand; both are essential. Congress is the one to do these things, and the states lack the power to do so.

Now, Justice Barrett writes separately to say, in effect: I agree that the states lack the power; we don’t need to decide anything else today; and I would not go down the path the majority has.

And then you have the concurring opinion by Justices Sotomayor, Kagan, and Jackson. They seem to agree with the thrust of Part II, essentially agreeing that states don’t have any such authority. But they fracture very badly with the majority’s approach to the congressional role. In the concurring opinion’s words, the majority’s musings about Griffin’s Case and about congressional power are as inadequately supported as they are gratuitous. They go on to suggest that Section 3 is not special and does not require congressional enforcement alone, pointing out that other provisions of the Constitution, including the Reconstruction Amendments’ guarantees of due process and equal protection and the abolition of slavery, don’t require additional congressional implementing legislation. They worry about how this is going to be applied in the future, whether the majority is adding constraints on how Congress goes about enforcing Section 3, and whether it is prohibiting other actors from enforcing Section 3.

The only concrete example they give is the concern that the majority forecloses judicial enforcement of the provision, such as might occur when a party is prosecuted by an insurrectionist and raises a defense on that score. The notion is that, without congressional implementing legislation, if someone who had taken an oath to support the Constitution engaged in insurrection and is now serving as a prosecutor, it could be impossible for a defendant to argue that the prosecutor is not authorized to hold that office. So there were some sharp elbows.

A few things to talk about here.

The first is that the Court doesn’t really touch any factual issues. It doesn’t touch questions about whether January 6 was an insurrection, whether Donald Trump engaged in an insurrection, or whether his speech or his conduct was protected. These are pure legal questions the Court is focused on.

Another is that this really closes the door on any of these ballot challenges going forward, whether in the primary election or the general election. The Court is quite clear that there’s no role for the states in enforcing these provisions.

Another is that the opinion is very much centered on Section 3 of the Fourteenth Amendment. It doesn’t seem to foreclose the possibility that states, exercising their power under Article II of the Constitution, could exclude, say, a 21-year-old or a Nicaraguan national from the ballot; states might continue to be able to do so. Instead, the opinion looks at Section 3 and how the Fourteenth Amendment shifts the balance of power between the federal government and the state governments, foreclosing some authority from the states without giving them any affirmative enforcement authority in return. So the decision seems very much limited to the Fourteenth Amendment and doesn’t really touch other presidential qualifications disputes, election disputes, or ballot access disputes outside Section 3.

It also appears to foreclose some challenges that might arise even after the election. This is the part of the opinion I’m still wrapping my mind around, trying to understand how its different parts interact with one another. But the Court’s emphasis on Congress and legislation, and on how the remedy needs to be adequately tailored to the harm identified, really does seem to say that other challenges would be inappropriate, at least without specific congressional legislation. It is very hard to identify exactly what the Court is doing when it suggests that Congress has a role here with legislation.

What can Congress do apart from enacting legislation, such as judging the qualifications of its own members when seating them? What about general federal statutes, which, as the concurring opinion points out, already exist, such as (though the concurring opinion does not mention them by name) the Administrative Procedure Act or the Electoral Count Reform Act? What kind of deference will be given to Congress, or to courts, when acting pursuant to those rules rather than pursuant to legislation specifically enforcing Section 3? There are myriad questions ahead. And the opinion fails to provide some of the clarity that I think was part of its goal.

Matters are now largely left to the political process. Major questions about presidential immunity are coming in the weeks ahead, as the Supreme Court hears that case, along with a number of criminal cases against Trump in the United States. The public will continue to intensely dispute what an insurrection is, whether Trump engaged in one, and so on, but that will be a matter of debate in the general election. The Court has at least closed that door when it comes to states attempting to enforce Section 3 through their ballot access provisions.

There's not much to change with the USNWR rankings to disrupt the status quo

Last year’s dramatic overhaul of the USNWR law school rankings saw the potential for increased volatility in the new metrics. But not much at the top, and much more beneath. And USNWR can only use publicly-available data.

I worked on creating an alternative set of metrics to stress test the rankings and see what might change. I reduced the 10-month employment score from 33% to 30%, and I subdivided that further into 20% for last year’s graduating class and 10% for the year before, a two-year weighted average. I reduced the bar passage stats a bit. I added a few other statistics at a percentage point or two per metric: median debt among recent graduates; median income among recent graduates; a scholarly citation metric derived from Hein; conditional scholarships revoked; academic dismissals. That’s six new factors and a change in weights.

The result? Not much change. In fact, not much worth even listing in a chart below.

Virtually all factors highly correlate with each other. The top schools have the best admissions metrics and the best employment outcomes and the most citations and the highest bar passage rates and dismiss very few. When you add more factors, you just keep measuring mostly the same things. This isn’t true for all things, of course. You can isolate some schools that have uniquely strong scholarly profiles; stronger employment outcomes or bar passage metrics; low debt. But these can be isolated factors, and it is hard to move unless you’re moving all of them.
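A small simulation illustrates the point: when the underlying indicators are highly correlated across schools, two quite different weighting schemes produce nearly the same ordering. The correlation level and weights below are made up for illustration.

```python
# Simulation: highly correlated indicators make the ordering insensitive to
# the choice of weights. All numbers here are made up for illustration.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_schools, n_metrics = 190, 6
quality = rng.normal(size=n_schools)                 # shared latent "quality"
metrics = quality[:, None] + rng.normal(scale=0.3, size=(n_schools, n_metrics))

weights_a = np.array([0.33, 0.25, 0.18, 0.12, 0.07, 0.05])  # hypothetical
weights_b = np.array([0.20, 0.20, 0.20, 0.15, 0.15, 0.10])  # hypothetical

rho, _ = spearmanr(metrics @ weights_a, metrics @ weights_b)
print(f"Rank correlation between the two weighting schemes: {rho:.3f}")
```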

In short, it’s quite possible that changes, once again, to the USNWR metrics are coming. But more important than whether schools move, I think, is the incentives the metrics create. They certainly affect school behavior. Schools are less inclined, I think, to “chase” LSAT and UGPA medians this cycle, for instance, because they are weighed less, and because employability as a proxy for later employment rates matters more. But it’s just to note that many efforts to rank schools suffer from the issue that most of these factors so closely relate to one another. The more material effect may be how schools alter their behavior in an effort to retain their current position in the USNWR hierarchy.