
Medical Trainees Must Move Beyond Algorithms

Discussion in 'General Discussion' started by Dr.Scorpiowoman, Apr 18, 2018.


    In The Great Divorce, C.S. Lewis wrote that we live "in a world where every road, after a few miles, forks into two, and each of those into two again, and at each fork, you must make a decision." The processes of diagnosis and treatment often feel like such a forking world: every decision simply leads to another choice.

    When The Great Divorce was published in 1945, medicine had not yet adopted the algorithmic mindset that is ubiquitous today. Back then, an "expert diagnostician" seemed to rely on an encyclopedic knowledge of diseases, a Sherlock-esque power of deduction, and a wizardly ability to recognize subtle symptoms, the same way a psychic can read a palm. Diagnosis was a skill that was part talent, part knowledge, and part experience.

    In that same year, the British philosopher Gilbert Ryle delivered a lecture in London in which he delineated two kinds of knowledge: "knowing that" (understanding a fact) versus "knowing how" (experience-based, ingrained understanding). It might seem that doctors in 1945 knew more "how" than "that" compared with today's physicians.

    The Rise of the Algorithm

    To use Ryle's terms, today's physicians have more "that" to know. We can now view living organs that were once accessible only through the sounds of a stethoscope. We can dissect the body down to granular genetic information. In the 1960s and 1970s, the "language of computers" gained popularity in medicine. Physicians started to consider whether a diagnosis could be made using a series of probabilities. Could medicine be reduced to a game of "input" data—such as patient demographics, symptoms, signs, and lab work—to calculate the "output" of a most likely diagnosis?
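    To make that "input/output" idea concrete, here is a minimal sketch, in Python, of the kind of probabilistic reasoning those researchers imagined. Every disease, finding, and probability below is invented purely for illustration; a real diagnostic model would be built from clinical data.

```python
# Toy probabilistic diagnosis: score each candidate disease by
# prior * product of finding likelihoods (a naive-Bayes-style scheme).
# All diseases, findings, and numbers here are invented for illustration.

PRIORS = {"flu": 0.05, "strep": 0.02, "mono": 0.01}

# P(finding | disease) for findings that are present
LIKELIHOODS = {
    "flu":   {"fever": 0.90, "sore_throat": 0.40, "fatigue": 0.60},
    "strep": {"fever": 0.70, "sore_throat": 0.95, "fatigue": 0.30},
    "mono":  {"fever": 0.60, "sore_throat": 0.70, "fatigue": 0.95},
}

def rank_diagnoses(findings):
    """Rank candidate diseases by normalized posterior probability."""
    scores = {}
    for disease, prior in PRIORS.items():
        score = prior
        for finding in findings:
            # Unmodeled findings get a small default likelihood.
            score *= LIKELIHOODS[disease].get(finding, 0.01)
        scores[disease] = score
    total = sum(scores.values())
    return sorted(((d, s / total) for d, s in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

print(rank_diagnoses({"fever", "sore_throat"}))
```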


    In 1972, Yale published Alvan Feinstein's "An Analysis of Diagnostic Reasoning: 3: The Construction of Clinical Algorithms." Feinstein dreamt of "a unique scientific opportunity to elevate [a clinician's] mode of reasoning from its current state of amorphous 'judgement.'" He was a cartographer, attempting to map medical thought. Although he recognized the shortcomings of purely computational medicine, he was inspired by computing's "graphical notation": the flowchart. As he stated, "...a clinician can now, at long last, specify the flow of logic in his reasoning." Feinstein outlined principles and applications, as if laying out trail markers for future clinicians who would eventually follow his well-laid path.
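    Feinstein's flowcharts translate naturally into ordinary branching logic. As a purely hypothetical illustration (this fragment is invented, not taken from his paper), a few nodes of a seizure-workup flowchart might be encoded like this:

```python
# A hypothetical fragment of a clinical flowchart expressed as branching
# code. The steps and thresholds are invented for illustration only.

def seizure_workup(first_seizure: bool, glucose_mg_dl: float,
                   head_trauma: bool) -> str:
    """Walk one branch of a toy seizure-evaluation flowchart."""
    if glucose_mg_dl < 60:          # metabolic fork
        return "treat hypoglycemia, then reassess"
    if head_trauma:                 # traumatic fork
        return "urgent head imaging"
    if first_seizure:               # new-onset fork
        return "EEG and routine labs"
    return "review anti-epileptic drug levels and adherence"

print(seizure_workup(first_seizure=False, glucose_mg_dl=95, head_trauma=False))
```

    Each `if` is one of Lewis's forks: the chart makes the flow of logic explicit, exactly as Feinstein hoped.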

    A decade later, JAMA published Carmi Margolis's "Uses of Clinical Algorithms," which explored the use of algorithms for teaching and patient care. In an Ikea-esque approach, Margolis laid out five steps for writing a single algorithm, plus seven for writing a set of algorithms.

    After that, flowcharts and risk calculators propagated into every aspect of medicine, across every specialty. Within 20 years, tens of thousands of algorithms were in use. The Journal of the American Medical Informatics Association called for the centralization and automation of medical algorithms, a gap identified by the National Patient Safety Foundation. Algorithms were—and still are—studied: Were they being used correctly? Were they being used at all? And, most importantly, to what extent did they inform trainees, and to what extent did they prevent them from "knowing how"?

    Beyond the Algorithm

    In his 2007 article "The Checklist", Atul Gawande, MD, MPH, underlined a few benefits of having prescribed sets of steps in medicine: "First, they helped with memory recall, especially with mundane matters that are easily overlooked in patients undergoing more drastic events. (When you're worrying about what treatment to give a woman who won't stop seizing, it's hard to remember to make sure that the head of her bed is in the right position.) A second effect was to make explicit the minimum, expected steps in complex processes."

    Neither point is too surprising. Reminders help you remember. When a sleep-deprived resident on night float manages a census of multiple complex cases, it's easy to miss a detail now and then. It's useful, and sometimes lifesaving, to have easy-access references.
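    In software terms, Gawande's "minimum, expected steps" amount to a list that can be checked against what was actually done. A toy sketch, with invented checklist items:

```python
# A toy "minimum expected steps" checklist in the spirit of Gawande's
# article. The items are invented for illustration only.

SEIZURE_ADMISSION_CHECKLIST = [
    "head of bed positioned",
    "IV access established",
    "glucose checked",
    "anti-epileptic drug levels drawn",
]

def missed_steps(completed):
    """Return checklist items not yet documented as done."""
    return [step for step in SEIZURE_ADMISSION_CHECKLIST
            if step not in completed]

print(missed_steps({"IV access established", "glucose checked"}))
# ['head of bed positioned', 'anti-epileptic drug levels drawn']
```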

    Further upstream, algorithms can be useful study aids for learning the steps in the first place. They lay out the logical steps of disease management for trainees, the way children learn their letters by singing the alphabet in order, so that every consonant and vowel is accounted for. With repetition, a diagnostic sequence can become ingrained and automated, and perhaps foster a "knowing how" and not just a "knowing that."


    All that said, in today's medical era, most trainees spend more time in front of a computer than with actual human beings and their physical bodies. It's a common complaint, and algorithms are part of that complicated conversation about medical informatics. To what extent do we rely on thoughtless decisions? Are we training doctors to simply follow orders and click on automated EMR order sets, thus weakening the very rigorous thinking that made them eligible candidates for medical school in the first place?

    Ever the optimist, I would argue no, not if we're careful...

    The Importance of 'Knowing How'

    I have vivid memories of my first experience rounding with a team as a third-year medical student on my neurology rotation. Our team consisted of me, a fourth-year student, an intern, a senior resident, and our attending. On my first morning rounds, we stopped in front of a patient's room. The intern presented the case: A man with a history of poorly managed epilepsy had been admitted through the ED after a seizure. Before seeing the patient, the attending asked the fourth-year, "What are some nonepileptic causes of a generalized seizure?" The student systematically listed traumatic causes, metabolic disturbances, and drug-related possibilities. As he spoke, I remembered charts in my textbook. The intern hurriedly scribbled in the chart. The attending smiled, nodding thoughtfully. "Excellent! Very thorough!" Then he turned to the senior resident as if to imply, "Anyway, back to work."

    They reviewed the labs from the past night and the admission note. The intern piped up from the chart, asking if they should order a different treatment from what had been prescribed, because it wasn't the standard regimen for these types of cases. The attending looked to the senior resident and, in a very different tone from the one he had used with the medical student, asked, "What do you think?" The senior paused. He looked at the chart and then back to the attending. "Well, I know this is nonstandard therapy, but in this case it might be appropriate...," he began.

    For the next several minutes, the two debated. They discussed the standard of care, the patient's diabetes and hypertension, and the clinical unknowns (and when they could become known). For a moment, they even discussed the quality of research that underpinned the recommended regimens versus evidence that supported the alternative that the patient received.

    In the end, they opted to switch to the treatment that an algorithm would have recommended. However, in the process of deciding, they used their own judgement to weigh the pros and cons.

    Algorithms are incredibly helpful tools. They're handed down from experts in the field, most of whom knew both "that" and "how." However, even when "standing on the shoulders of giants," trainees must find balance on their own two feet.
