No, the MBE was not "harder" than usual

I frequently read comments, on this site and others, asserting that the bar exam was simply harder than usual. Specifically, I read many people, often law faculty (who didn't take the exam this year) or recent graduates (the vast majority of whom are taking the bar exam for the first time), insisting that the bar, especially the Multistate Bar Exam ("MBE"), is "harder" than before.

Let's set aside, for now, and briefly, (1) rampant speculation; (2) the cognitive bias that makes a multiple choice test that counts for something feel "harder" than ungraded practice; (3) erroneous comparisons between the MBE and bar prep companies' materials; (4) the retroactive fitting of negative bar experiences to negative bar results; and (5) the use of comparatives in the absence of any comparison.

Let's instead focus on whether the July 2015 bar exam was "harder" than usual. The answer is, in all likelihood, no--at least, almost assuredly, not in the way most are suggesting, i.e., that the MBE was harder in such a way that it resulted in lower bar passage rates.

I'll explain why this is the right question, and why I include the caveat "almost assuredly," below. First, it might be beneficial to take a moment to explain how the MBE is scored, and how that should factor into an analysis.

I. What the MBE scale is

Many are familiar with a "curved" exam, either from college or law school. The MBE is not curved. (For that matter, neither is the LSAT.)

In college, letter grades are commonly assigned based on converting numeric scores to a letter (e.g., 90-100 is an A, 80-89 is a B, etc., with some gradations for +'s and -'s). A common way of curving the exam is to add points to the top grade in the class to make it 100, and add the same number of points to everyone else's score. If the highest grade on the exam is a 92, then everyone gets an additional 8 points. If the highest grade is a 98, everyone gets an additional 2 points (and most classmates complain that this student "wrecked the curve"). This isn't really a "curve" in the typical use of the term, but it's a common way of distributing grades.
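To make that concrete, here is a minimal sketch of the add-points approach (hypothetical scores; the details are illustrative only, not any particular class's policy):

```python
def add_points_curve(raw_scores):
    """College-style 'curve': add enough points to bring the top raw
    score up to 100, and give every student the same boost."""
    boost = 100 - max(raw_scores)
    return [score + boost for score in raw_scores]

# If the best exam is a 92, everyone gets 8 extra points.
print(add_points_curve([92, 85, 77]))  # [100, 93, 85]
```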

Instead, most law schools "curve" grades based on a pre-determined distribution of grades. Consider the University of California-Irvine: in a class of, say, 80 students, instructors are required to give 3 or 4 A+'s; the next 19%-23% of grades are A's; the next 19%-23% are A-'s; and so on.

But the MBE uses neither of these. The MBE uses a process known as "equating," and then "scales" the test. These are technical statistical procedures, but here's what they're designed to do. (An important caveat: the explanations that follow are grossly oversimplified, but they capture the most basic ideas of the measurement involved.)

Imagine we have two groups of students. They are taking a test, but on different days. And we don't want to give them the exact same test, because, well, that's a bad idea--the second group might get answers from the first group. But we want to be able to compare the two groups of students to each other.

It wouldn't really do to use our law school "curve" above. After all, what if the second group is much smarter than the first group? If we, say, fixed a 75% pass rate for each sitting, why should someone in the second group be penalized for taking the test among a much smarter group, when her chances would have been better the first time around?

Standardized testing needs a way of accounting for this, so it does something called equating. It uses versions of questions from previous administrations of the exam, known as "anchor" questions or "equators," to compare the two different groups: one can tell whether the second group performed better, worse, or about the same on the anchor questions, which allows a comparison of groups over time. The test-maker then examines how the second group did on the new questions, and it can better evaluate performance on those new questions by scaling the score based on performance on the anchor questions.

This is why the bar jealously guards its exam questions and why there is such tight security around the exam. It needs some of the questions to compare groups from year to year. But as the law changes, or simply to keep the test relatively fresh, there are always new questions introduced into the exam.

II. How the MBE scale works

It's one thing to read about the math--yes, you might think, standardized test administrators have some statistical magic--but it's another thing to understand it. How does it work in practice?

Consider two groups of similarly-situated test-takers, Group A and Group B. They each achieve the same score, 15 correct, on a batch of "equators." But Group A scores 21 correct on the unique questions, while Group B scores just 17 right.

We can feel fairly confident that Groups A and B are of similar ability. That's because they achieved the same score on the anchor questions, the equators that help us compare groups across test administrations.

And we can also feel fairly confident that Group B had a harder test than Group A. (Subject to a caveat discussed later in this part.) Because the groups are of similar ability, we would expect Group B's scores on the unique questions to look like Group A's; because Group B performed worse on the unique questions, it looks like Group B received a harder batch of questions.

The solution? We scale the answers so that Group B's 17 correct answers look like Group A's 21 correct answers. That accounts for the harder questions. Bar pass rates between Group A and Group B should look the same.

In short, then, it's irrelevant if Group B's test is harder. We'll adjust the results because we have a mechanism designed to account for variances in the difficulty of the test. Group B's pass rate will match Group A's pass rate because the equators establish that they are of similar ability.
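To illustrate that logic with the Group A and Group B numbers above, here is a crude sketch. (The totals are hypothetical and the adjustment intentionally simplistic; this is not the NCBE's actual formula, which relies on far more sophisticated, item-level methods.)

```python
def scale_to_reference(ref, new):
    """Crude 'equating' sketch: use the shared anchor questions to
    adjust for ability, then treat any remaining gap on the unique
    questions as a difference in question difficulty and add it back.
    Illustration only -- not the NCBE's actual method."""
    ability_gap = new["anchor"] - ref["anchor"]      # same anchor questions, apples to apples
    expected_unique = ref["unique"] + ability_gap    # what this group "should" have scored
    difficulty_adjustment = expected_unique - new["unique"]
    return new["unique"] + difficulty_adjustment

group_a = {"anchor": 15, "unique": 21}
group_b = {"anchor": 15, "unique": 17}

# Same anchor performance, so B's shortfall on the uniques is treated as
# harder questions; B's 17 raw correct scales up to "count like" A's 21.
print(scale_to_reference(group_a, group_b))  # 21
```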

When someone criticizes the MBE as being "harder," that claim has relevance only if the exam was "harder" in a way that caused lower scores--and, as this example demonstrates, that is not what happens under ordinary equating and scaling.

Let's instead look at a new group, Group C.

On the unique questions, Group C did worse than Group A (16 right as opposed to 21 right), much like Group B (17 to 21). But on the equators, the measure for comparing performance across tests, Group C also performed worse, 13 right instead of Group A's 15.

We can feel fairly confident, then, that Group C is of lesser ability than Group A. Their performance on the equators shows as much.

That also suggests that when Group C performed worse on unique questions than Group A, it was not because the questions were harder; it was because they were of lesser ability.

There are, of course, many more nuanced ways of measuring how different the groups are, examining the performance of individuals on each question, and so on. (For instance, what if Group C also got harder questions by an objective measure--as in, Group A would have scored just as poorly had it answered Group C's unique questions? How can we examine the unique questions independently of the equators, in the event that the uniques are actually harder or easier?) But this is a very crude sketch of what the bar exam does. (For all of the sophisticated details, including how to weigh these considerations more precisely, read up on Item Response Theory.)
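(As a pointer for the curious: the heart of IRT is an item response function estimated for each question. In the common two-parameter logistic model, the probability that a test-taker of ability θ answers a given item correctly is

P(correct | θ) = 1 / (1 + e^(−a(θ − b)))

where b is the item's difficulty and a its discrimination. Equating then operates item by item, rather than on crude group totals like those in my example.)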

So, when the MBE scores decline, they decline because the group, as a whole, has performed worse than the previous group. And we can measure that by comparing their performance on similarly-situated questions.
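In code, the crude comparison described above looks something like this (again, hypothetical group totals and an intentionally simplistic rule, not the NCBE's item-level analysis):

```python
def crude_diagnosis(ref, new):
    """Compare a new group to a reference group using the anchor
    ('equator') questions.  A gap on the uniques alone suggests harder
    questions (which scaling cures); a gap on the anchors suggests a
    less able group (which scaling will not cure)."""
    if new["anchor"] == ref["anchor"] and new["unique"] < ref["unique"]:
        return "similar ability; harder unique questions -> scale scores up"
    if new["anchor"] < ref["anchor"]:
        return "worse on the same anchor questions -> group looks less able"
    return "no meaningful difference on this crude comparison"

group_a = {"anchor": 15, "unique": 21}
group_b = {"anchor": 15, "unique": 17}
group_c = {"anchor": 13, "unique": 16}

print(crude_diagnosis(group_a, group_b))  # harder questions; scaling cures it
print(crude_diagnosis(group_a, group_c))  # less able group; scaled scores fall
```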

III. Did something change test-taker performance on the MBE?

The only way the failure rate could have increased because of the test is if there were a problem with the test itself. It might be, of course, that the NCBE introduced an error into its exam by using the wrong questions or scoring it incorrectly, but there has been no such allegation and we have no evidence of that. (Of course, we have little evidence of anything at all, which may be an independent problem.)

But this year, some note, is the first year that the MBE includes seven subjects instead of six: Civil Procedure was added as the seventh subject beginning with the February 2015 exam.

Recall how equating works, however: it relies on anchor questions carried over from previous administrations of the exam. Because Civil Procedure is new to the MBE, its questions were not used to equate the scores. If the Civil Procedure questions were more challenging, or if students performed worse on those questions than on others, we can always go back and see how they did on the anchor questions. Consider Groups A & B above: if they are similarly skilled test-takers, but Group B suffered worse scores on the uniques because of some defect in the Civil Procedure questions, then scaling will cure those differences, and Group B's scores will be scaled up to match Group A's.

Instead, for a "change" in bar pass rates to derive from the exam itself, something must affect how test-takers perform on both the equators and the uniques.

The more nuanced claim is this: Civil Procedure is a seventh subject on the bar exam. Law students must now learn an additional subject for the MBE. Students have limited time and limited mental capacity to learn and retain this information. The seventh subject leads them to perform worse than they would have on the other six subjects. That, in turn, causes a decline in their performance on the equators relative to previous cohorts. And that causes an artificial decline in the score.

Maybe. But there are several factors suggesting (note, not necessarily definitively!) this is not the case.

First, this is not the first time the MBE has added a subject. In the mid-1970s, the 200-question MBE consisted of just five subjects: Contracts, Criminal Law, Evidence, Property, and Torts. (As a brief note, some of these subjects may have become easier; indeed, the adoption of the Federal Rules of Evidence simplified the questions at a time when the bulk of the evidentiary questions were based on common law. But, of course, there is more law today in many of these areas, and perhaps more complexity in some of them as a result.)

By the mid-1970s, the NCBE was considering adding some combination of Civil Procedure, Constitutional Law, and Corporations to the MBE. It ultimately settled on Constitutional Law, though not without significant opposition; some even went so far as to suggest that it was not possible to draft objective-style multiple choice questions on Constitutional Law. (I've read through the archives of the Bar Examiner magazine from those days.) Nevertheless, the NCBE pressed ahead and added Constitutional Law as a sixth subject. There was no great outcry about changes in bar pass rates or about students' inability to handle a sixth subject; there was no dramatic decline in scores. Constitutional Law was, and is, deemed a perfectly ordinary part of the MBE, with no complaints that the addition proved overwhelming. Adding a subject to the MBE, in other words, is not unprecedented.

Furthermore, it's an overstatement to treat Civil Procedure as an entirely new seventh subject when all bar exams (to my knowledge) previously tested it. Yes, the testing occurred in the essay components rather than the multiple choice component, but students were already studying the subject, at least somewhat, anyway. And in many jurisdictions (e.g., California), it was federal Civil Procedure that was tested, not a state-specific version.

Finally, Civil Procedure is a substantial required course at (I believe) every law school in America--the same cannot be said, at the very least, of Evidence or Constitutional Law. To the extent it's something students need to know for the bar, they have, generally, already learned it in law school. (Retention and comprehension, of course, are other matters.)

These arguments are not definitive. It may well be the case that they are wrong and that Civil Procedure is a disruptive subject sui generis. But it points to a larger issue: such arguments are largely speculation, and they require more evidence before we can have confidence that Civil Procedure is (or is not) responsible, in any meaningful measure, for the lower MBE scores.

We do, however, have two external factors that predicted the decline in MBE scores, and that suggest a decline in student quality, rather than a more challenging set of subjects, is responsible. First, law schools have been increasingly admitting--and, subsequently, increasingly graduating--students with lower credentials, including lower undergraduate grade point averages and lower LSAT scores. Jerry Organ has written extensively about this. We should expect declining bar pass rates as law schools continue to admit, and graduate, these students. (The degree of decline remains a subject of some debate, but a decline is to be expected.)

Second, the NCBE has observed a decline in MPRE scores. Earlier and more detailed responses from the NCBE revealed a relatively high correlation between MPRE and MBE scores. And because the MPRE is not subject to the same worries about changes in the subject matter tested, it serves as a useful independent predictor of what one would expect to see on the MBE.

IV. Some states are making the bar harder to pass--by raising the score needed to pass

Illinois, for instance, has announced that it will increase the score needed to pass the exam. When it adopted the Uniform Bar Exam, Montana decided to increase the score needed to pass.

These changes are unrelated to the changes in MBE scores. We might nonetheless expect pass rates in those jurisdictions to decline--and we should attribute that decline to the higher passing score, not to the MBE.

And it actually raises a number of questions for those jurisdictions. Why is the passing score being increased? Why did a generation of lawyers who passed that bar, and who were presumably deemed competent to practice law, conclude that they needed to make it a greater challenge for Millennials? Is there evidence that a particular passing score is more or less effective at, say, excluding incompetent attorneys, or minimizing malpractice?

These are a few of the questions one might ask about why we want a bar exam at all--its function, its role as gatekeeper, and so on. And they are questions about the difficulty of passing the bar, which is a distinct inquiry from the difficulty of the MBE questions themselves.

V. Concluding thoughts

Despite some hesitation and the tentative conclusions offered above, I'll restate something I began with: "Let's instead focus on whether the July 2015 bar exam was 'harder' than usual. The answer is, in all likelihood, no--at least, almost assuredly, not in the way most are suggesting, i.e., that the MBE was harder in such a way that it resulted in lower bar passage rates."

We can see that the MBE uses Item Response Theory to account for variances in test difficulty, and that the NCBE scales scores to ensure that harder or easier questions do not affect the outcome of the test. We can also see that merely adding a new subject, by itself, would not decrease scores. Instead, something would have to affect test-takers' ability to such an extent that they would perform worse on similar questions. And we have some good reasons to think (but, admittedly, not definitively, at least not yet) that Civil Procedure was not the cause; and some good reasons (from declining law school admissions standards on LSAT scores and UGPAs, and from declining MPRE scores) to think that the decline is more closely related to test-takers' ability. More evidence and study are surely needed to sharpen the issues, but this post should clear up several points about MBE practice (in, admittedly, deeply, perhaps overly, simple terms).

Law schools ignore this to their peril. Blaming the exam without an understanding of how it actually operates masks the major structural issues confronting schools in their admissions and graduation policies. And it is almost assuredly going to get worse over each of the next three July administrations of the bar exam.

Bar exam scores hit 27-year low

After my earlier posts detailing the dropping bar pass rates across most jurisdictions, Bloomberg reports that the mean scaled MBE score has dropped yet again, by 1.6 points, to reach its lowest level since 1988.

Last year, I blogged about the drop in scores on the MBE, the 200-question multiple choice test that serves as the primary objective measure for bar passage, noting that it was the single largest drop in MBE history. This year's decline is by no means historic, but the resulting score is among the lowest in the history of the test. I've updated the charts from last year here.

July 2015 bar exam results again show declining pass rates almost everywhere: outliers, or a sign of more carnage?

This post has been updated.

Many speculated that the July 2014 bar passage results were anomalously low on account of some failure in the exam, either because of software glitches or because of some yet-undescribed problem with the National Conference of Bar Examiners and its scoring of the Multistate Bar Exam. Last October, I was among the first to identify the decline in scores, and my initial instinct was to consider that a problem may have occurred with the exam itself. Contrary evidence, however, led me to change my mind, and the final results showed rather significant declines in nearly all jurisdictions--in all likelihood, I concluded, because of a decline in law school graduate quality.*

It's quite early for the July 2015 bar exam results, but they are trickling in. In most of these jurisdictions, only the overall pass rate is available, even though it's usually better to separate first-time test-takers from repeaters (and, better still, first-time test-takers who graduated from ABA-accredited law schools). In other jurisdictions, I use the best available data, which is sometimes second-hand (and I link all sources when available). Worse, some of these jurisdictions only list the identities of those who passed and failed, so I have to do the math myself, which increases the likelihood of error.

But comparing against the NCBE statistics from last year, we can see another overall decline in pass rates almost across the board. And even in places where there was an uptick in pass rates--which, perhaps, suggests that things are not as dire as they appeared last year--rates remain low compared to recent history. Assuming last year's exam was not an anomaly but the beginning of a trend, which I eventually came to agree was the best explanation given the evidence, these results are consistent with that assumption--with no ExamSoft fiasco to blame. The lowering of admissions standards at many law schools that began about four years ago appears to be coinciding with the decline in bar pass rates, which in many jurisdictions have fallen to recent-past lows, with several jurisdictions experiencing double-digit drops.

As with last year, of course, we're looking at only a handful of early-reporting jurisdictions. The final scaled MBE score, when disclosed, should reveal a great deal of information, so projections from the trends of a few states should be treated with appropriate caution (and as speculation).

UPDATE: The MBE scores have been released, and they are the lowest since 1988. You can see details here.

Change in overall bar pass rate, July 2015 over July 2014

Iowa, +5 points (July 2014: 81%; July 2015: 86%)

Kansas, -3 points (July 2014: 79%; July 2015: 76%)

New Mexico, -12 points (July 2014: 84%; July 2015: 72%)

North Carolina, -4 points** (July 2014: 71%; July 2015: 67%)

North Dakota, +6 points (July 2014: 63%; July 2015: 69%)

Oklahoma, -11 points (July 2014: 79%; July 2015: 68%)

Washington, -1 point (July 2014: 77%; July 2015: 76%)

West Virginia, -5 points (July 2014: 74%; July 2015: 69%)

Wisconsin, -10 points*** (July 2014: 75%; July 2015: 65%)

**denotes first-time test-takers, not overall rate. UPDATE: I relied on erroneous data from 2014; I've since updated the data.

***source via comments

It's worth noting that North Carolina's bar appears to have an unusually volatile pass rate. The first-time pass rate in July 2013 was 71%; that skyrocketed to 85% last year; and that plummeted back to 67% this year. UPDATE: This data was in error, see above.

Jurisdictions like North Dakota are incredibly small--just 62 people took the bar, which likely explains some of the great volatility in scores, as each test-taker represents almost 2 points in the overall pass rate. July 2013 had a 76% overall pass rate, which plunged to 63% last year and bobbed back up to 69% this year. But more importantly, their first-time pass rate increased 15 points, from 64% to 79%, which resembles the 81% first-time pass rate from July 2013.
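To put a number on that sensitivity: 1 ÷ 62 ≈ 0.016, so each individual passer or failer moves North Dakota's overall pass rate by roughly 1.6 percentage points.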

I've also added a little historical perspective for these bar exams. I've added charts beside the table showing the overall July pass rate (in North Carolina's case, the first-time pass rate) since 2010. In most jurisdictions, this is the lowest or second-lowest rate in that six-year window of data, and in many it may be the lowest in quite some time. (The charts are slightly deceptive because the axes end near the bottom of the pass rate range rather than going all the way down to 0%; perhaps not obvious to all, most graduates still pass the bar in these jurisdictions, but the charts reflect the relative changes within a small band in recent years.)

*(As an important caveat, I recognize that there are many measures of "student quality" or "law school graduate quality," and that the bar exam is but one measure. But assuming--and this may be too big an assumption for many--that the bar exam serves, very roughly, as a proxy for minimum capability to practice law, then as pass rates continue to decline we can, very roughly, say that there has been a "decline" in "law school graduate quality," at least as evaluated by this one metric. Perhaps there are other metrics, or perhaps there are better metrics, but this is how I use the term here.)


Additional updates to this post will occasionally occur here.

Alabama, -5 points (July 2014: 65%; July 2015: 60%)

Arizona, -11 points (July 2014: 68%; July 2015: 57%)

California, -2 points (July 2014: 49%; July 2015: 47%)

Colorado, -2 points (July 2014: 74%; July 2015: 72%)

Connecticut: -2 points (July 2014: 77%; July 2015: 75%)

Florida, +3 points (July 2014: 66%; July 2015: 69%)

Georgia, -6 points (July 2014: 80%; July 2015: 74%)

Idaho, +4 points (July 2014: 65%; July 2015: 69%)

Indiana, unchanged (July 2014: 72%; July 2015: 72%)

Louisiana, -8 points (July 2014: 70%; July 2015: 62%)

Mississippi, -27 points (July 2014: 78%; July 2015: 51%)

Missouri, -1 point (July 2014: 85%; July 2015: 84%)

Montana, -2 points (July 2014: 64%; July 2015: 62%)

Nevada, +2 points (July 2014: 58%; July 2015: 60%)

New York, -4 points (July 2014: 65%; July 2015: 61%)

Oregon, -5 points (July 2014: 65%; July 2015: 60%)

Pennsylvania, -5 points (July 2014: 76%; July 2015: 71%)

Tennessee, -2 points (July 2014: 66%; July 2015: 64%)

Vermont, -14 points (July 2014: 66%; July 2015: 52%)

California bar votes to cut exam from three days to two

In March, I covered the news that the California bar was considering cutting the length of the bar exam from three days to two. Today, Above the Law reports that the bar's board of trustees has unanimously approved the change, which should take effect July 2017.

The proposal (PDF) called for five one-hour essay questions and a 90-minute performance test on one day, and the 200-question multistate bar exam (MBE) on another day. The essays and the multiple choice component would each receive half the weight in the final score.

This post has been updated.

Here we go again: February 2015 bar pass rates down over last year

For February 2016 information, please click here.

This post has been updated with a visual representation of the decline in the mean MBE score.

In the seemingly endless quest to determine what caused the July 2014 decline in bar pass rates, there's a simple solution: wait and see. Subsequent administrations of the test would reveal whether the July 2014 exam was a one-time aberration or reflected an actual decline in student quality.

As the February 2015 bar exam results start to trickle in, the answer, as I've been inclined to suggest of late, is increasingly likely to be the latter.

It should be noted that some state bars, like Illinois, have begun to pull up the ladder on young Millennials--that is, to increase the score required to pass. That will likely independently increase the failure rate in many jurisdictions in the years to come.

Additionally, the February bar exam is different in kind. It usually includes fewer first-time test-takers, which means that the overall pass rates are usually lower. (People who fail the bar once are much more likely than others to fail it again.) There are also often much smaller pools of test-takers, making a single jurisdiction's pass rate subject to apparently significant fluctuations.

At this stage, too, like last year, most jurisdictions only disclose the overall pass rate, lumping together first-time test-takers and repeaters, and graduates of ABA and non-ABA law schools, which is the least meaningful metric for evaluating performance across administrations.

Then again, if the theory is that the July 2014 results were a one-time aberration, we might see an influx of highly qualified repeaters--test-takers who "ought" to have passed the first time around and are therefore much more likely to pass now. That would mean, all things being equal, that pass rates might increase in the February 2015 administration over February 2014, if the July 2014 results were attributable to factors unrelated to the test-takers.

The preliminary data, however, reflects a decline in pass rates largely across the board (with no ExamSoft debacle to complicate our analysis).

Granted, not only are we dealing with the caveats above, but these jurisdictions are (mostly) smaller than the typical jurisdiction, which makes potential distortions even more likely. Further, the declines are (somewhat) smaller than the ones initially observed last July (and, perhaps, closer to what one would expect given the decline in predictors). And until a jurisdiction discloses the national mean scaled MBE score, we don't have the cleanest comparison. But given that early signs last year pointed toward the ultimate trend--despite most of the same caveats--these might serve as a warning.

Overall bar pass rates, February 2015 v. February 2014

Florida, -8 points* (February 2014: 72%; February 2015: 64%)

Kansas, -4 points (February 2014: 86%; February 2015: 82%)

Kentucky, -7 points (February 2014: 77%; February 2015: 70%)

Illinois, about -5 points (February 2014: 75%**)

Iowa, -14 points (February 2014: 86%; February 2015: 72%)

Missouri, -3 points (February 2014: 81%; February 2015: 78%)

New Mexico, -1 point (February 2014: 81%; February 2015: 80%)

New York, -4 points (February 2014: 47%; February 2015: 43%)

North Carolina, -13 points (February 2014: 56%; February 2015: 43%)

North Dakota, -7 points (February 2014: 62%; February 2015: 55%)

Ohio, unchanged (February 2014: 64%; February 2015: 64%)

Oklahoma, -3 points (February 2014: 70%; February 2015: 67%)

Oregon, -2 points (February 2014: 66%; February 2015: 64%)

Pennsylvania, -4 points (February 2014: 57%; February 2015: 53%)

Tennessee, -10 points (February 2014: 64%; February 2015: 54%)

Vermont, -20 points (February 2014: 68%; February 2015: 48%)

Virginia, unchanged (February 2014: 59%; February 2015: 59%)

Washington, -5 points (February 2014: 71%; February 2015: 66%)

West Virginia, -2 points (February 2014: 70%; February 2015: 68%)

We have a few additional data points suggesting that perhaps it's not quite so bad. North Dakota disclosed its first-time pass rate, which increased 7 points--of course, only 31 test-takers were first-timers last year, which, again, reflects some of the caveats listed above. (UPDATE: Pennsylvania's first-time pass rate was 69%, a 3-point drop. Oregon's first-time pass rate was 69%, an 11-point drop.)

I hope to occasionally update this post in the weeks to come, and we'll see if these jurisdictions are an aberration or a sign of things to come.

*Florida's statistics include only first-time exam takers.

**While Illinois has not disclosed its pass rate, its percentile equivalent chart suggests a drop of about 5 points. A scaled score of 264 is required to pass. A scaled score of 270 was the equivalent of the 40th percentile in February 2014; it's the equivalent of the 46th percentile in 2015. A scaled score of 260 was the equivalent of the 27th percentile in February 2014; it's the equivalent of the 31st percentile in 2015. (Although I confess I don't understand how Illinois disclosed an overall 75% pass rate when it conceded that 27% of test-takers scored at least 4 points below the passing score in February 2014, unless they have extremely generous re-scoring and re-evaluation.)
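For those curious how the "about 5 points" figure can be backed out of the chart, here is a sketch. It assumes simple straight-line interpolation between the two disclosed points on the percentile chart (which may not match the bar's own methodology) and treats the percentile equivalent as the share of test-takers scoring at or below a given scaled score.

```python
def pct_at_or_below(score, chart):
    """Linearly interpolate a percentile equivalent from two
    (scaled score, percentile) points on the published chart."""
    (s1, p1), (s2, p2) = chart
    return p1 + (score - s1) * (p2 - p1) / (s2 - s1)

passing_score = 264
feb_2014 = [(260, 27), (270, 40)]  # scaled score -> percentile equivalent
feb_2015 = [(260, 31), (270, 46)]

pass_rate_2014 = 100 - pct_at_or_below(passing_score, feb_2014)  # ~67.8%
pass_rate_2015 = 100 - pct_at_or_below(passing_score, feb_2015)  # ~63.0%
print(round(pass_rate_2014 - pass_rate_2015, 1))  # ~4.8, i.e., about a 5-point drop
```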

UPDATE: The Pennsylvania bar results reveal that the national mean scaled MBE score for February 2015 was 136.2. That's a 1.8-point drop from February 2014, and, while not the steepest decline or the lowest score of the last decade, it is certainly close to both.

 

Visualizing the grim final numbers from the July 2014 bar exam

Most by now are undoubtedly aware of the significant decline in MBE scores and bar pass rates on the July 2014 bar exam. I've recently been persuaded (though not wholly) by the NCBE's explanations suggesting that the July 2014 cohort had generally worse predictors and performed worse as a result. If true, that suggests a grim reality as predictors worsen over the next several administrations.

I had some data earlier, cobbled together from state-by-state data sets using overall pass rates, suggesting, among other things, that the ExamSoft fiasco was not (primarily) responsible for the decline.

The NCBE has released its statistics for the 2014 administrations of bar exams. That means we have access to complete data sets, and to more precise data (e.g., first-time pass rates instead of overall pass rates). Below is a chart of changes in first-time bar pass rates among all 50 states and the District of Columbia between July 2013 and July 2014, with some color coding relating to the MBE and ExamSoft. Thoughts below.

As noted previously, the only non-MBE jurisdiction, Louisiana, saw a significant improvement in bar pass rates among first-time test-takers. So, too, did North Carolina--an MBE and ExamSoft jurisdiction with its essays on Tuesday. Congrats to the lucky test-takers in the Tar Heel State. Elsewhere, however, you see across-the-board declines among first-time test-takers, with modest improvements in a few jurisdictions.

It's wait and see for the July 2015 administration to determine whether this decline is the start of a trend or, perhaps, a one-off aberration.

California poised to cut bar exam from three days to two

UPDATE: The bar voted in July 2015 in favor of the proposal, to take effect July 2017. See the update here.

Tomorrow, the Committee of Bar Examiners for the State of California meets to consider whether to cut the bar exam from three days to two days.

The proposal would result in one day of essays and one day of the MBE. The essay day would include a morning of three one-hour essays and an afternoon of two one-hour essays plus a 90-minute performance test. As a practical matter, its most significant impact would be on the performance test, which has been a three-hour element of the exam. Each day would be weighted equally.

It would not make the exam any easier--that's a question left for the cutline for scores, which presumably would be recalibrated to reflect a comparable difficulty. Instead, it would make the exam less grueling for test-takers and less expensive for everyone: one fewer day in a hotel, and one fewer day of material to develop and score. Further, it might speed grading, which, given California's glacial pace of scoring--a pace that postpones bar admission ceremonies into December for students who graduated in May--would benefit all parties.

The most intriguing component of the agenda item, in my view, describes the mismatch between critiques of proposed changes and the point of the exam itself:

There continues to be some confusion with regard to what the bar examination is intended to do. The examination is not designed to predict success as a lawyer or even that a lawyer is ready for the practice of law. In fact, one of the best predictors of bar examination scores is the grades an applicant received during law school. So, in one sense, the examination is confirmation that the necessary skills and knowledge were learned during the three or four years of law study, through whatever means, which are needed to show minimum competence as a lawyer. The bar examination is an examination to test minimum competence in the law.

The format of the exam, then, whether through essays or multiple choice, whether three days or two days, is not the point.

An implementation plan would be submitted for review in April 2015 to determine when the two-day bar, if approved, would first take place.

Correcting the National Jurist piece on bar pass rates

In its February 2015 issue, National Jurist published a story about bar pass rates. It quoted my earlier work on the subject. But, apparently, the editing process at the magazine is relatively slow and does not respond well to new information. For some time, I suspected the NCBE had some role in the decline in bar pass rates (which National Jurist notes). But after the NCBE provided additional data in December 2014, I was convinced that much of the decline could be attributed to a decline in student quality. For more on this topic, consider my previous posts about the bar exam.