A disturbing truth about medical school — and America’s future doctors

News that a federal educational experiment failed to supply evidence in favor of Education Secretary Betsy DeVos’s school choice agenda has undoubtedly elicited schadenfreude in some Democratic circles. Somewhat lost in the story, however, is scrutiny of how students’ educational success or failure is measured.

The trend toward near-exclusive reliance on standardized testing to measure educational achievement now extends all the way to medical school. Many may not realize that the readiness of aspiring doctors to enter the world of clinical medicine is now based overwhelmingly on a single, standardized, closed-book, multiple-choice test. Scores on the test — the U.S. Medical Licensing Step 1 Exam (a.k.a. the Boards) — taken after two intense years of classroom education, will overwhelmingly determine where students do their residency training. And their professional futures.

Such reliance on Board scores wasn’t always the case. About 30 years ago, I took the Boards. I passed, and have absolutely no idea how I scored (even though I am the kind of person who still remembers the exact score I got on my SATs).

But a decade or so ago, residency programs suddenly started caring, a lot, about Board scores — an unintended consequence of a well-intentioned move by medical schools to grade the first two years pass-fail in order to foster student wellness. Residency programs abruptly found themselves in desperate need of a yardstick by which to measure and compare student applicants. Board scores became paramount.

Behold the mismatch: We aim to prepare students for a career characterized by collaboration, complexity, nuance and uncertainty; yet we evaluate them on their ability to select — autonomously and without research — among radio buttons representing a discrete range of right-or-wrong responses.

After 20-odd years in practice, I have yet to see a patient come in with a list of four or five possible diagnoses and ask that I select the most appropriate response. Nor have I, while searching online for current evidence or recommendations, heard a patient cry out, “Stop! This is a closed-book appointment!”

Here’s the thing: Students understand how they’re assessed — they’re all quite brilliant in this way, whether they’re in medical school or high school or third grade. They figure out with lightning speed what they need to do to maximize their performance on the assessment that matters.

As a result, here is my students’ to-do list:

- Do not attend class, unless attendance is specifically required.
- Complain about the (modest) number of class hours requiring attendance.
- Resist discretionary learning opportunities, no matter how interesting.

Their logic is impeccable. Each student’s sweet spot for MCQ mastery involves some combination of lecture videos at double speed, late nights, ear buds, coffee and little human interaction. It works beautifully in achieving the desired outcome of a good Board score.

But what is the desired outcome? My students — and others like them — are the doctors of tomorrow. They’ll care for me — and you — as we age. For our parents facing life-threatening illness and difficult decisions at the end of life. For the children we haven’t yet contemplated. The desired outcome should not be about test scores.
We should hope students will have learned how to find, evaluate and apply knowledge; how to work collaboratively; how to tolerate and manage uncertainty; how to reason; how to walk in someone else’s shoes; how to relentlessly pursue what’s best for each patient; how to debate, be wrong, fail — and embrace and learn from it, each time; how to become who they want to be. It’s tough to do alone. It’s really tough with ear buds in.

To be sure, the medical students I teach believe all these capabilities are genuinely important. But they are keenly aware that these are not what will bring them educational success. The contrast exemplifies the pernicious and corrosive power of standardized metrics of success in any educational setting — their power to transform what we value and how we learn.

“Every system is perfectly designed to get the results it gets”: It has taken me years to fully appreciate this deceptively simple observation by one of the fathers of health-care improvement science. Change the last two words to “by which we choose to measure it,” and the paradigm applies as clearly to education as to the health-care systems Paul Batalden describes.

Clearly, we need objective and reproducible measures of achievement. But when we permit the easy availability and seeming objectivity of one measure to make it sovereign, we become singularly capable of removing the joy from teaching, fragmenting a community of learning, and undermining our commitment to foster curiosity, nourish problem solving and inspire a love of lifelong learning.