Does the bar exam adequately test prospective lawyers' minimum competence?
Critiques of the bar exam have grown louder over the last few years on the heels of declining bar pass rates, but the most popular critiques have shifted. At first, external factors--such as the ExamSoft debacle--were a target. Then came charges that the exam was harder than usual. The most recent charge, though, is actually a longstanding one: the bar exam simply isn't a good measure of prospective lawyers' "minimum competence."
Bar examiners have attempted to adjust over the last fifty years. Many states now include a "performance test," a component designed to simulate what lawyers do--test-takers are given some law and some facts and asked to complete a legal task addressing the problem. That said, performance test scores correlate moderately with other components of the bar exam, so the test is perhaps not serving the distinct function some hoped it would.
Regardless, critiques of the bar exam are longstanding, and some of the most popular ones look something like this: why did a state, like California, pick this particular passing score as the threshold for "minimum competence"? And is the bar exam any good at testing the kinds of things lawyers actually do? The bar exam is a three-day (in California, beginning this July, two-day), closed-book test with multiple-choice and timed essay questions that in no way resembles the real world of law practice. Why should we trust this test?
It's a fair point, and it's one best met with a question: what ought the bar test? And, perhaps more subtly: what if the answer to what the bar ought to test aligns quite closely with what the existing bar exam already measures?
A 1980 California study is one of the most impressive I've seen on this subject. And while it's a little old, it's the kind of thing that ought to be replicated before state bars go about making dramatic changes to their exams or scoring methods. I'll narrate what happened there. (For details, consider the two reports on the study and the testimony presented to California lawmakers, who asked these exact questions in 1984 after the particularly poor performance of applicants on the July 1983 bar exam--a historically low score essentially matched in the July 2016 administration.)
After the July 1980 bar exam in California, the National Conference of Bar Examiners teamed up with the California Committee of Bar Examiners to run a study. They selected 485 applicants to the bar who had taken the July 1980 exam. Each of these applicants took an additional two-day test in August 1980.
The two-day test required participants to "function as counsel for the plaintiff in a simulated case" on one day, and as "counsel for the defendant in a different simulated case" on the other. Actors played clients and witnesses. The participants were given oral and written tasks--client interviews, discovery plans, briefs, memoranda, opening statements, cross-examination, and the like. They were then evaluated along a number of dimensions and scored.
In the end, the scores were correlated with the applicants' bar exam scores. The relationship between the simulation scores and the general bar exam scores was fairly strong--"about as strong as the underlying relationship between the Essay and MBE section of the [General Bar Exam]." "In short," the study concluded, the simulation and the bar exam "appear to be measuring similar but not identical abilities."
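For readers who want a concrete sense of what "correlated" means here, a minimal sketch in Python follows, computing a Pearson correlation coefficient--the standard measure of how strongly two sets of scores move together. The scores below are invented for illustration; they are not data from the 1980 study, and the reports don't specify the exact statistic used.

```python
# A minimal sketch of correlating two sets of scores, as the study describes.
# All numbers here are hypothetical illustrations, not data from the 1980 study.
import statistics

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for ten applicants: general bar exam vs. two-day simulation.
bar_scores = [1390, 1445, 1310, 1520, 1480, 1350, 1410, 1555, 1290, 1465]
sim_scores = [66, 71, 63, 80, 72, 69, 67, 78, 61, 70]

# A value near 1.0 means the two tests rank applicants very similarly;
# "fairly strong" in the study's sense would be a high but imperfect value.
print(f"r = {pearson_r(bar_scores, sim_scores):.2f}")
```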
Additionally, a panel of 25 lawyers spent more than two days in extended, in-depth evaluation of 18 of these participants. The panelists--clinical professors, law professors, attorneys, judges, and others--brought a variety of experience. They were asked to evaluate the 18 participants' performance on the various dimensions along a scale from "very unsatisfactory" (i.e., fail) to "borderline" to "very satisfactory" (i.e., pass). The panel's judgments about the pass/fail line were consistent with where the line was drawn on the California bar exam (with the caveat that this was a sample of just 18 applicants).
It might be that there are different things we ought to be testing, or that this experiment has its own limitations (again, I encourage you to read the reports if you're interested in the details). But before anything is done about the bar exam, it might be worth spending some time thinking about how we can evaluate what we think ought to be evaluated--and recognizing that there are decades of studies addressing very similar questions that we ignore at our peril.