Indebtedness metrics and USNWR rankings

Years ago, as I began to look at law student indebtedness metrics, I noted a number of reasons why debt figures should be construed with caveats.

Students with low debt loads could be independently wealthy or come from a wealthy family willing to finance the education. They could have substantial scholarship assistance. They could earn income during school or during the summers. They could live in a low cost-of-living area, or live frugally. They could have lower debt loads because of some combination of these and other factors.

Scholarship awards have, in recent years, appeared to be outpacing tuition hikes—a several-year trend that places schools in increasingly precarious financial positions. Students are no longer purchasing health care, because federal law allows them to remain on their parents' health insurance—a significant cost for students a few years ago. Schools have increasingly eased, or abolished, stipulations on scholarships, which means students graduate with less debt. Some schools have slashed tuition prices. We might simply be experiencing a decline in the number of economically poorer law students, resulting in more students who need smaller student loans—or none at all. Students may be taking advantage of accelerated programs that allow them to graduate faster with less debt (but there are few such programs). As JD class sizes shrink, it's increasingly apparent that students who would have paid the "sticker" price are increasingly pursuing options at institutions that offer them tuition discounts.

These debt figures are only an average; they do not include undergraduate debt, credit card debt, or interest accrued on law school loans while in school. The average may be artificially high if a few students took out extremely high debt loads that distorted the average, or artificially low if a few students took out nominal debt loads that distorted the average.
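To illustrate how a few extreme borrowers can distort an average, here is a minimal sketch with hypothetical debt loads (not real school data):

```python
from statistics import mean, median

# Hypothetical debt loads for a ten-student class (illustrative only).
# Most students borrow about $95k, but two borrow far more.
debts = [95_000] * 8 + [350_000, 400_000]

print(f"mean:   ${mean(debts):,.0f}")    # $151,000 — pulled up by the outliers
print(f"median: ${median(debts):,.0f}")  # $95,000 — unaffected by the outliers
```

The average here overstates what the typical student borrowed by more than 50%, which is why a reported mean alone can mislead.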

Some borrowers will be eligible for Public Service Loan Forgiveness programs. That might make their debt loads appear high when they will ultimately have those loans paid off. And some borrowers will take out a high amount of debt only to earn very high salaries upon graduation.

In short? There are a lot of limitations to debt metrics.

Of course, I think that, on the whole, the lower the percentage of students taking out debt, the better; and a smaller debt load is better. Those are obvious points. But as USNWR includes two new indebtedness metrics worth 5% of the overall rankings, it’s worth considering the justification.

One reason is to say that “many new lawyers are postponing major life decisions like marriage, having children and buying houses – or rejecting them outright – because they are carrying heavy student loan debts.” True enough. But it’s also a reason, I think, to look at the debt-to-income ratios of graduates. That is, not all debts look the same—they are much less onerous when income levels are higher. Even this is a limited look, as cost of living matters dramatically, too. And there are other concerns—students may choose jobs they don’t really want, or stay in loan forgiveness programs they’d rather avoid, in order to pay down debt. All important caveats.
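A back-of-the-envelope sketch of why debt-to-income matters more than raw debt, using two hypothetical graduates (the figures are invented for illustration):

```python
# Two hypothetical graduates (illustrative figures, not real data):
# similar debt loads can be very different burdens depending on income.
graduates = {
    "grad_a": {"debt": 160_000, "income": 190_000},  # big-firm salary
    "grad_b": {"debt": 120_000, "income": 55_000},   # public-sector salary
}

ratios = {name: g["debt"] / g["income"] for name, g in graduates.items()}

for name, ratio in ratios.items():
    print(f"{name}: debt-to-income ratio = {ratio:.2f}")
# grad_a borrows more but carries the lighter burden (0.84 vs 2.18).
```

On a raw-debt metric, grad_a looks worse off; on a debt-to-income metric, grad_b plainly faces the heavier load.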

Another is a racial disparity concern: “J.D. graduate debt is impacting Black and Hispanic students the most since they borrow more, according to the ABA.” That’s understating it: White students had average indebtedness of about $101,000 in 2018, compared to $150,000 for Hispanic students and $199,000 for Black students.

But more to the point, why does USNWR think this metric will improve these disparities? The incentives for law schools are currently to maximize the median LSAT and UGPA scores, which means offering “bounty” scholarships to high-performing LSAT scorers—who tend to be, at a given institution, white students. One way to both reduce debt levels and improve admissions metrics—both now valued by USNWR—is to increase these disparities. Independently wealthy students—those who take out zero dollars in loans—are also more attractive to a law school; and, again, socioeconomic status interacts with racial characteristics, which is likely to increase these disparities. It’s strange, then, to cite race as a basis for including a metric, but to include the metric in such a way as to offer opportunities to exacerbate the very concerns raised.

Furthermore, USNWR already incentivizes schools for being expensive (and fails to disclose that data publicly). This means USNWR incentivizes a very expensive school with low reliance on tuition—pressing schools toward reliance on grants, endowments, or central administration support.

And Goodhart’s Law may come to debt metrics. Again, with my caveats above, I think fewer students with debt and smaller debt loads are, of course, on the whole, a good thing. It might be that schools really start to focus on financial health of students and provide greater counseling to students in law school. It might be that schools will take seriously these figures and do their best to reduce them for all students, not simply in ways that manipulate metrics at the margins. But with any newly-introduced metric, it’s not clear how it will play out.

I’ll have another post on the indebtedness metrics and how they are skewed to favor schools based on percentage of students who incurred debt instead of average debt—and why that’s the wrong approach.

The USNWR law school rankings are deeply wounded--will law schools have the coordination to finish them off?

While law schools love to hate the USNWR law school rankings, they have mostly settled for complaining loudly and publicly while internally (and sometimes externally) promoting those same rankings or working like mad to use them as a basis for recruitment. Collective action is a real problem. Furthermore, finding an effective tool to diminish the value of the USNWR law school rankings remains elusive.

But this is, perhaps, the moment for law schools seeking to finish off the USNWR rankings. In the last month, USNWR has had four separate methodological alterations between the preliminary release of the rankings and the final release:

  • It created a new “diversity” ranking of schools that did not include Asian-American law students as a component of a “diverse” law school. After law schools protested in light of recent events, USNWR agreed to include them. This decision alone moved some schools as much as 100 spots in the rankings (among nearly 200 law schools).

  • Its new “diversity” ranking also does not include multiracial students (those who consider themselves members of more than one racial group). USNWR is considering that and has decided to delay the release of these new rankings.

  • A new component of the rankings on library resources added the “number of hours” the law library was available to students, worth 0.25% of the rankings. Methodological errors forced USNWR to recalculate the figures. This component—a 1-in-400th component, mind you—altered the ranking of more than 30 schools, some by as much as six spots.

  • Another new component of the rankings on library resources, also worth 0.25%, added the “ratio of credit-bearing hours of instruction provided by law librarians to full-time equivalent law students.” Errors there resulted in USNWR pulling the metric entirely and adding weight to bar passage rates, from 2% to 2.25% of the ranking. This decision—again, only a 1-in-400th part of the rankings—shifted another 35 schools.
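To see how a component worth just 0.25% can shuffle dozens of ranks when overall scores are tightly clustered, here is a toy simulation (the score distribution is assumed for illustration; it is not USNWR's actual data or method):

```python
import random

random.seed(0)
n = 200  # roughly the number of ranked law schools

# Base overall scores, tightly clustered (an assumed distribution).
base = [random.gauss(50, 5) for _ in range(n)]
# A component worth 0.25% of the total: a 0-100 raw score scaled by 0.0025.
tiny = [random.uniform(0, 100) * 0.0025 for _ in range(n)]

def ranks(scores):
    """Return 1-based ranks, highest score = rank 1."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    r = [0] * len(scores)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

before = ranks(base)
after = ranks([b + t for b, t in zip(base, tiny)])
moved = sum(1 for i in range(n) if before[i] != after[i])
print(f"{moved} of {n} schools changed rank after adding a 0.25% component")
```

When 200 schools are packed into a narrow band of scores, the average gap between adjacent schools is smaller than the maximum nudge from the tiny component, so rank flips are common even though the component is nearly weightless.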

These last two components strike me as strange new metrics. Is a law school better off if its librarians teach more student electives rather than provide research support and assistance to students and faculty? Is a law school better if its students can access the library (not just the law school, the law school library) between 2 and 5 am? That’s what the new metrics reward. UPDATE: For more specific critiques about the library metrics, see here.

This potpourri of new metrics is made even worse by the fact that USNWR can’t even assess its own rankings correctly. It’s issued multiple retractions. So what might law schools do? A few options:

  • Congressional hearing. Congress assuredly has an interest in near-monopolistic behavior from an entity that increases the price of legal education and that serves as a major indicator to students who choose to enter the legal profession. It systematically undervalues public schools that are low-cost institutions by inflating emphasis on expenditures per student; and it routinely undervalues particular institutions like Howard University, one that consistently places in the upper tier of elite law firm placement and remains deeply esteemed by hiring attorneys. These strike me as ripe matters of public concern for investigation. If Congress can call tech companies to the mat, why not the rankings entity?

  • Pay-for-access boycott. USNWR charges law schools $15,000 to see the data they already provide. It strikes me that, given the low value and quality of the data, schools should just stop paying for it. Even cutting out 10 schools deprives USNWR of $150,000 in quasi-extortion cash. Sure, some schools will lose opportunities to “game” the rankings by digging in and comparing figures. But maybe every-other-year access—halving USNWR’s revenue—will stifle it.

  • Survey participation boycott. This is two-fold. The first is a refusal to fill out the survey data requests each fall. Of course, USNWR can independently collect some things if it wants to, like LSAT scores and 10-month employment figures. But it can’t replicate it all. This is, of course, a collective action problem. The second is a refusal to fill out the peer review surveys. That’s a separate problem, but I think there’s a decent solution: spike the survey. That is, rate your own school a 5, and all other schools a 1. That maximizes the value to your own school while incentivizing others to render the survey meaningless. If USNWR wants to start discounting surveys it views as “gaming,” let it articulate that standard.

  • Alternative rankings developments. Law schools, of course, hate to be compared with one another in a single ranking. But schools and students are going to use them. Why not develop metrics that law schools deem “appropriate”—such as a principal component analysis of employment outcomes—with its own separately-administered peer review score, among other things? That strikes me as a better way forward, breaking the monopoly by developing approved alternative metrics.

Of course, I imagine these, like most such projects, would fall to infighting. It’s one thing for law schools to write a strongly-worded letter decrying what USNWR is doing. It’s another thing to, well, do something about it. I confess my solutions are half-baked and incomplete means of doing so.

But if there’s a moment to topple USNWR law school rankings, it is now. We’ll see if law schools do so.

The hollowness of law school rankings

“We’re a top-ranked law school.”

Those words, in their various forms, are found everywhere in legal education marketing materials. They are hollow words. And on reflection, they grow more hollow each year.

It’s hard for me to think of where to begin a post like this one. Maybe I’ll start with what we think makes a great law school: great people, in a great community, doing great things. Others might have different definitions. But let’s start here.

*

Each of these requires a preexisting definition of “great.” Great at what?

The first is great people. A great school draws faculty who excel at writing and speaking, teaching and mentoring, reading and listening—people who are passionate about these aspects of legal scholarship and legal education, who aspire to give their students meaningful guidance as they begin their careers. It draws students who are engaged and active in the classroom, inquisitive, eager for journals and service to the community.

The second is a great community. It’s one thing to have great people working in silos, or great students studying and going their own ways. A great community builds upon those assets: people who support one another to ensure that articles are even sharper in their clarity and argument, that classroom experiences are even more meaningful as students learn from one another, and that employment opportunities for students are supported across faculty, staff, and students to build a culture of commitment to student success.

The third is doing great things. This requires some look at the outputs—the quality of the articles and books from the faculty, the influence of law journals and centers at the law school, the success (more than just “elite placement”) of students in legal careers in the short-term and the long-term. It can take a lot of forms, traditional legal scholarship and engagement with the legislature, bar, and bench; placement in elite law firms and public interest work; advancing interests in the local community and in the nation as a whole.

The broader the pool in each class—more great people, stronger community engagement, higher output of great achievements—the better the institution.

*

From a prospective student’s view, assessing these things is difficult. It can be a challenge for a prospective law student to know exactly what “great” looks like. A student may want to do X or Y kind of law, but not really know what that means when an institution touts its programs or its alumni in that field, or how to weigh that against other competing concerns—or whether it’s all just hype that doesn’t translate into the results one may want. Or a prospective student may not know exactly what she wants to do (particularly true of first-generation law students), and be at a loss as to how to compare these things.

There is a temptation, then, to seek out advice. Undoubtedly, those with attorneys in the family or those in upper-class social strata or education circles get advice of varying types. But many also look for external validation, because it can be difficult to make assessments based on the representations of schools alone.

*

External validation can be rankings. I’ve been highly critical (admittedly, an easy position for a law professor to take!) of most law school rankings—at least, those rankings that purport to be comprehensive, to distill everything about a school into a single measure. But I acknowledge there’s a reason they're out there: prospective students in particular look for help evaluating schools.

I confess, I was particularly attracted to rankings early in my blogging career, even ranking the rankings. (Links, mercifully, herein omitted.) Over time, I realized that was largely a symptom of my desire to generate traffic by ranking something, anything, for someone’s feedback. That’s not to say comparing law schools is unimportant, particularly for prospective students. But it does turn rankings into, well, clickbait. And perhaps the most clickbait-y of all are singular rankings that aggregate a series of factors into one, “true” ranking. Rankings can’t do that.

*

It’s the convergence of a few things, then, that may give rise to this hollowness of rankings. One is the tyranny of metrics: the obsession with measuring everything and evaluating everything on the basis of those measurements. I’m all for the “data-driven” or empirical evaluation of what we do. The tyranny part comes when those measures are used at the expense of all others, or used without proper acknowledgment of their limitations.

The bulk of rankings methodologies are much older than the “analytics” available to us today. Consider, again, USNWR, which includes a significant number of inputs in its rankings—inputs that are not, in my judgment, useful. For instance, law students should worry much less about incoming metrics—essentially, self-congratulatory admissions-oriented metrics—and instead look at student outcomes.

I’ve tried to look more at student outcomes, from institutions’ commitments to reducing debt loads, to debt-to-income ratios of graduates, to employment outcomes at graduation, to federal judicial clerkship outcomes. Others have built on employment outcomes, too, in ways that are more helpful and more lucid than the USNWR figures (the published figures, for what it’s worth, are not the figures it uses in its actual ranking).

But unquestionably, the most alluring rankings are, really, any rankings, good or bad, that put a school in a good light (and may validate a prospective law student’s desire). Free pre-law magazines make them up. Blogs make them up. Clickfarms make them up.

*

Those rankings are everywhere. And that allows schools, with ease, to cite them. But the line, “We’re a top-ranked law school,” reflects two great weaknesses of so many law schools: a lack of confidence and a lack of vision.

Lack of confidence arises from the inability to articulate to others—prospective students, current students, alumni, donors, faculty, staff, and the larger university—what the school is accomplishing. It might be that too many overstatements of a school’s achievements now fall on deaf ears. Or that there’s simply distrust in self-promotional presentations of a school’s accomplishments. And it’s a recognition that these “others” won’t necessarily heed the list of accomplishments without reference to some ranking—as weak or as hollow as the ranking may be—to shore up the chronicles of success about the institution.

Lack of vision arises from an inability to articulate success. Rather than define success to a public audience, they rely on others’ definitions of success as validated through a ranking, and they promote that ranking as the end, as the definition of success.

I admit, it might simply be that these others prefer to have some external validation of the school’s quality, rather than something internal. But schools could readily identify the things I pointed out at the beginning: what makes the school great? It can be data-driven, or it can be a qualitative narrative. Ideally, it’s a combination. Schools should have confidence in their own vision as they’ve articulated and measured it, and they should be able to persuade relevant outsiders about why the law school is succeeding on these terms, not on someone else’s terms.

Maybe that’s all too idealistic. It’s impossible to unring the bell of rankings. But I think schools should be spending much more effort thinking about how to define success and how to communicate that.

*

Here we sit on the eve of yet another USNWR ranking, one that gives weight to inputs, to dated measures like how much money a school spends on its electric bills—to an overall ranking that moderately correlates with some ways that we can think of “good” schools. But it’s time for schools to think about how hollow these rankings are, and to think about how to move beyond them in ways to persuade prospective students, the greater academic community, and the public about the institution’s value.

I’ve had a version of this post drafted in my blog queue for several years. I’ve been tweaking it now and then, and just never got around to posting it. These are my initial thoughts, that of course merit much deeper evaluation in the future!

Federal judges are announcing future vacancies at an extremely high rate

Last fall, I noted that federal judges were announcing future vacancies at a historically low rate ahead of Election Day. I posited several reasons why that might be the case, but recent events suggest it’s attributable to partisan reasons.

Here’s a comparison of future vacancies on November 1 of a presidential election year with future vacancies on February 1 after the election:

November 1, 2000: 11 - February 1, 2001: 9

November 1, 2004: 23 - February 1, 2005: 17

November 1, 2008: 19 - February 1, 2009: 10

November 1, 2012: 19 - February 1, 2013: 19

November 1, 2016: 17 - February 1, 2017: 13

November 1, 2020: 2 - February 1, 2021: 15

That 2021 figure is deceptively low. Another 5 federal judges announced their intention to go senior in the first week of February. Several others took senior status since January 20.

(Maybe unsurprisingly, some judges announce a year-end plan to retire (12/31 or 1/1), which occurs between Election Day and a new presidential administration. I think that’s why a number of announced future vacancies convert to actual vacancies.)

I’m sure there are more precise ways of examining these figures going forward, and it’ll take some time for the full effects to shake out. But we’re witnessing an extremely high rate of announcements from federal judges, timed to a new presidential administration and razor-thin co-partisan control of the Senate.

Disqualifying Trump wouldn't necessarily remove him from the political stage

The House of Representatives has impeached Donald Trump for inciting insurrection at the Capitol when Congress was counting electoral votes. The Senate may convict him of that charge. If it does, it will decide whether to disqualify him from “any office of honor, trust, or profit under the United States,” which likely (I think) includes the office of the president.

There are important reasons for the Senate to consider convicting and disqualifying Mr. Trump, and the stigma of disqualification might prevent a future candidacy. But this penalty isn’t a magic trick that would make Mr. Trump disappear from the political stage. It wouldn’t bar him from fundraising for a future presidential candidacy or from appearing on the ballot in a future election. And the question may ultimately return to Congress one day when it counts electoral votes.

The Constitution requires that the president must be at least 35 years old, a natural born citizen, and a resident of the United States for 14 years. There are other tacit requirements, too—for example, one must be alive instead of dead, and a person instead of, say, a dog or a cat. Disqualification from office after impeachment would be another condition.

These eligibility requirements are usually self-policing. Underage or non-citizen candidates rarely attempt to run for president, and major political parties winnow out ineligible candidates. Other disputes over eligibility, like whether Canadian-born Senator Ted Cruz was a “natural born citizen,” never became serious problems because candidates lost elections.

A candidate who runs for president must file with the Federal Election Commission to disclose campaign contribution and expenditure data. But the FEC doesn’t have power to determine whether candidates are eligible.

In 2011, a naturalized citizen, Abdul Hassan, sought to run for president. He admitted he wasn’t a natural born citizen but asked the FEC if he could still run. An advisory legal opinion from the FEC concluded that campaign finance law allowed him to solicit funds for his campaign, and that he wouldn’t be engaging in fraudulent misrepresentation if he did so.

Any candidate, eligible or ineligible, can run a presidential campaign in the United States. Ineligible candidates would not violate any campaign finance laws by soliciting financial contributions or running for president. Congress would need to amend campaign finance laws to bar ineligible candidates from fundraising for office.

Candidates running for office must also file paperwork in states to assure their names appear on the primary and general election ballots. Many states do not investigate the qualifications of candidates seeking elected office. They trust candidates, voters, and political parties to make those judgments and act appropriately.

It’s up to states to decide whether to enact laws to exclude disqualified candidates. But ineligible presidential candidates do sometimes appear on the ballot. Róger Calero, for instance, is a Nicaraguan who resides in the United States and was the Socialist Workers Party candidate for president in 2004 and 2008. He’s not a citizen, much less a natural born citizen, but he appeared on the ballot in New Jersey, New York, and other states in two elections. Or consider Peta Lindsay, who was just 28 years of age when she ran as the Socialism and Liberation Party nominee in 2012 and earned 7791 votes across nine states.

Some states do exclude unqualified presidential candidates. For instance, Colorado requires candidates to affirm under oath that they are qualified. In 2012, Mr. Hassan couldn’t affirm that, and he sued. Then-Judge Neil Gorsuch wrote a judicial opinion affirming the state’s right to exclude Mr. Hassan from appearing on the ballot.

But others have tended toward mischief. After false rumors swirled in 2008 that Barack Obama was not born in the United States, some state legislatures introduced legislation that would require candidates to show their birth certificate as a condition of appearing on the ballot. None became law.

In the event a disqualified candidate appeared on the ballot, he might receive electoral votes from a state, and those electoral votes would be sent to Congress to count. In 1873, Congress refused to count three electoral votes from Georgia cast for Horace Greeley, a candidate who died after Election Day but before the electors met. Congress rejected votes cast for an ineligible candidate.

Under the Electoral Count Act, a member of the House and a member of the Senate may object to counting votes that were not “regularly given.” Both houses of Congress would then need to agree to reject the votes cast for an ineligible candidate.

That means Congress might ultimately be forced to evaluate Mr. Trump’s eligibility when it counted electoral votes.

Yes, during that meeting.

At graduation employment figures for law school graduates in 2018

One underdiscussed statistic, in my view, is the “at graduation” employment figures at law schools.

Among 145 USNWR-ranked schools sharing data, the median at-graduation employment rate was 56.1%. The median among all USNWR-ranked schools at 10 months after graduation was 83.3%. So there is substantial movement in those 10 months after graduation—passing the bar exam, moving to a new city, employers with limited resources hiring when they have an opening rather than years out, and so on. (The aggregate at-graduation employment metric, which is not the figure reported below, is just 4% of the overall USNWR ranking.)

Nevertheless, the ABA’s required disclosure employment data only includes the 10-month figures. USNWR collects and discloses the at-graduation employment rates, too. For students wondering about job security and likelihood of obtaining a position (and the ability to begin paying down debt promptly), at-graduation is an interesting figure. It’s also a figure that might relate to “elite” employment outcomes, like judicial clerkships (including state court clerkships) and big law firm associate positions—those are the kinds that hire months if not years in advance of a start date. It might be the case that government or public interest positions hire more frequently after graduation, a different way of thinking about these figures.

So below are the at-graduation employment outcomes for the Class of 2018. USNWR includes full-time, long-term, bar passage-required and J.D.-advantage jobs that are not funded by a law school in this category.

School Employed at grad 2018
Columbia 94.2%
Virginia 92.3%
Cornell 91.3%
Penn 90.9%
Chicago 90.3%
Stanford 90.2%
Northwestern 90.0%
NYU 89.5%
Harvard 88.8%
Duke 88.2%
Michigan 84.2%
Seton Hall 84.2%
Berkeley 83.8%
Yale 82.8%
Vanderbilt 81.0%
Minnesota 79.7%
Georgetown 78.5%
Arizona State 77.9%
Washington & Lee 77.0%
Washington University (St. Louis) 76.1%
Fordham 74.4%
Texas 73.1%
Iowa 73.0%
BYU 72.3%
Rutgers 70.5%
UCLA 70.3%
USC 69.8%
George Washington 69.8%
Villanova 69.6%
Boston University 68.1%
Penn State-Dickinson 66.7%
UC-Davis 66.1%
St. Louis 66.0%
Notre Dame 65.9%
North Carolina 65.8%
UConn 65.6%
Kansas 64.7%
Georgia 64.6%
St. John's 64.6%
Florida 64.4%
Illinois 64.4%
Ohio State 64.3%
William & Mary 64.2%
Kentucky 63.9%
Boston College 63.7%
Maryland 63.6%
Baylor 63.6%
Utah 63.0%
Albany 63.0%
Tulane 62.9%
Alabama 61.6%
UC-Irvine 61.4%
Tennessee 61.4%
Emory 59.9%
Temple 59.8%
Richmond 59.7%
Nebraska 59.6%
Wake Forest 59.2%
Louisville 58.7%
Creighton 58.7%
Hofstra 58.6%
Colorado 58.5%
George Mason 58.4%
Baltimore 58.3%
SMU 57.9%
Cardozo 57.0%
Toledo 56.9%
Arizona 56.6%
UNLV 56.6%
Oklahoma 56.3%
Mercer 56.3%
Wisconsin 56.2%
Indiana-Bloomington 56.1%
Maine 56.0%
Washburn 55.0%
Washington 53.8%
Missouri 53.8%
LSU 53.7%
Houston 53.5%
South Carolina 53.4%
Penn State-University Park 52.0%
Drake 52.0%
Wayne State 51.8%
Syracuse 51.7%
Gonzaga 51.5%
Loyola Chicago 50.3%
Brooklyn 50.0%
DePaul 50.0%
American 49.1%
Marquette 48.8%
Pace 48.8%
New York Law School 48.7%
South Dakota 48.7%
Georgia State 48.6%
Miami 48.5%
Denver 48.1%
Cincinnati 47.8%
Howard 47.4%
Buffalo 46.6%
University of St. Thomas 46.5%
Tulsa 45.7%
Hawaii 45.3%
Florida International 45.2%
Loyola Law School-Los Angeles 45.0%
Montana 44.9%
Arkansas 44.2%
Northeastern 44.1%
New Hampshire 43.8%
New Mexico 43.8%
Michigan State 43.5%
Catholic 43.3%
Duquesne 43.3%
Texas Tech 43.2%
Quinnipiac 42.7%
Wyoming 42.4%
Illinois-Chicago (John Marshall) 42.0%
Vermont 42.0%
West Virginia 41.8%
Pepperdine 41.7%
Florida State 41.1%
Drexel-Pennsylvania 41.1%
Stetson 41.0%
Suffolk 41.0%
Case Western Reserve 40.5%
Chicago-Kent 40.5%
San Diego 40.2%
Missouri 40.0%
Akron 40.0%
Texas A&M 39.9%
Memphis 39.8%
Oregon 39.5%
Santa Clara 39.2%
UC-Hastings 38.3%
Lewis & Clark 38.2%
Pittsburgh 37.8%
Indiana-Indianapolis 36.6%
Cleveland State 36.4%
Mitchell Hamline 34.4%
Mississippi 34.1%
Chapman 32.8%
Seattle 30.2%
Dayton 28.2%
Belmont 26.9%
Loyola New Orleans 26.1%
Willamette 26.0%

Algebra and geometry as prerequisites to the bar exam

From the Colorado Supreme Court, 1898:

Applicants who are not members of the bar, as above prescribed, shall present a thirty-count certificate from the regents of the university of the state of New York, or shall satisfy said committee that they graduated from a high school or preparatory school whose standing shall be approved by the committee, or were admitted as regular students to some college or university, approved as aforesaid, or before entering upon said clerkship or attendance at a law school, or within one year thereafter, or before September 13, 1899, they passed an examination before the state superintendent of public instruction, in the following subjects: English literature, civil government, algebra to quadratic equations, plane geometry, general history, history of England, history of the United States, and the written answers to the questions in the above named subjects shall be examined as to spelling, grammar, composition and rhetoric. The said examinations shall be conducted in connection with the regular county examinations of teachers.

Law school inputs, outputs, and rankings: a guide for prospective law students

As we approach another law school rankings season, Dean Paul Caron has compiled a tentative ranking of the “admissions” metrics that USNWR uses as a component of its law school rankings methodology. Median LSAT score of the incoming class, median UGPA of the incoming class, and acceptance rate are 25% of the rankings.

Interestingly, in my judgment, it’s also probably one of the readiest ways for a prospective law student to judge which schools are most overvalued and undervalued by USNWR.

Law school inputs are, I think, probably the very weakest measure of law school quality from a student’s perspective. Prospective students, I think, care far less about the academic credentials of those around them, and far more about, say, their employment outcomes, their debt levels, or even the profession’s perception of their institution. (Above the Law’s rankings long ago focused on outputs over inputs. Professor C.J. Ryan has also looked to “value-added” rankings, examining which law schools add value to the student experience.)

Indeed, it’s remarkable to me that just 20% of the rankings focuses on employment and bar outcomes, while 25% focuses on admissions statistics. We know law schools spend significant resources on distorting admissions practices to meet USNWR metrics.

But if you’re a student, which is better? To be at a law school with a median LSAT of 170 but a 50% high-quality job placement rate? Or at a law school with a median LSAT of 160 but an 80% high-quality job placement rate? One could look at the same figures for students who graduate with a low debt-to-income ratio, too.

Admissions-centered rankings, then, can help a prospective law student discern which schools are overvalued and undervalued by existing rankings, and discount accordingly. If two schools sit beside each other in the USNWR ranking, it might be because one has much better inputs and the other much better outputs. Or if two schools appear quite disparate, it might be only because of a disparity of inputs, not outputs.

This isn’t to say that law school inputs are unimportant. They are important—to law schools, not (mostly) to law students. They are important to predict likelihood of success in law school, so law schools want to admit students with high likelihood of success. (For marginal students admitted to a school, it might be relevant to them as an indicator of the challenges they may face in the first-year curriculum in particular.)

But those figures aren’t generally, in my judgment, useful for prospective law students. Separating the components of the rankings can provide better information in decisionmaking.

Would-be faithless 2016 presidential electors return as electors in 2020, faithful this time

We can look back at the 2016 faithless elector litigation to see what happened to the many electors who attempted (or were successful in their attempts) to vote for someone other than Donald Trump or Hillary Clinton. I anticipated that parties would change how they select and scrutinize presidential electors in 2020. And, on the heels of Chiafalo v. Washington, faithless elector laws are now enforceable. There were no faithless electors this time: in 2020, there were not even attempts at casting faithless votes.

But, despite all that, several would-be faithless electors—and one actual faithless elector—from 2016 were still chosen as electors. We can find out who they are. From the Archivist:

Vinzenz Koller, California: he wanted to cast a vote for someone other than Mrs. Clinton and Tim Kaine in 2016 and filed a lawsuit, but he ultimately voted for them. He was a Democratic elector in 2020, too.

Polly Baca, Colorado: she was one of the three plaintiffs in Baca v. Colorado Department of State who sought to vote for someone other than Mrs. Clinton and Mr. Kaine in 2016. Micheal Baca was the lead plaintiff who attempted to cast a faithless vote and was replaced. Ms. Baca ended up voting consistent with her pledge. (She was also the reason Justice Sonia Sotomayor had to recuse from the case.) She was a Democratic elector in 2020, too.

David Bright, Maine: he attempted to cast a vote for Bernie Sanders for president in 2016, but, upon a re-vote in Maine, cast a vote for Mrs. Clinton. He was a Democratic elector in 2020, too.

Muhammad Abdurrahman, Minnesota: he attempted to cast a vote for Mr. Sanders for president in 2016, and he was replaced. He sued, and he lost. He was selected as an elector again in 2020.

I’m surprised that these individuals made it again as electors. But, maybe added pressure from the party, litigation culminating in Chiafalo, and simply different political circumstances in 2020 (e.g., these Democratic electors felt it more important to vote for Joe Biden in the face of Mr. Trump’s efforts to “decertify” and otherwise contest election results in several states) ensured they’d vote consistent with their promise and their party’s nominee.