
I Think I’ve Seen This One Before: Learning to Identify Disease

Discussion in 'Dermatology' started by Egyptian Doctor, Sep 22, 2015.


    Nothing puts more fear into the heart of an internist than a dermatologic chief complaint. And for good reason: we have very little exposure to the breadth of the field. To us, all rashes seem to be maculopapular, all bumps are pustules… or was that nodules?

    It’s not that we internists don’t care about the skin or don’t appreciate its complexity. Rather, we simply haven’t seen enough bumps, rashes, and red spots to sort them all out consistently.

    On the topic of pattern recognition in medicine, an oddly titled NEJM Journal Watch piece, "Quacking Ducks," grabbed my attention recently. The commentary by Mark Dahl summarizes a J Invest Dermatol article by Wazaefi et al. that discusses the pattern identification and other cognitive processes involved in discerning suspicious nevi. I will try to distill the interesting discussion to the main points of Dr. Dahl's summary and the index article:
    • Experienced dermatologists use other cognitive processes besides the “ABCD” method for finding suspicious nevi.
    • Most healthy adults have only two or three dominant patterns of nevi on their bodies.
    • Deviations from the patient’s own pattern usually represent suspicious nevi. These deviations are referred to as “ugly ducklings.”
    • Even untrained practitioners can cluster nevi based on patterns and can identify which nevi deviate from the patterns.
    • However, expert skin examiners tend to cluster nevi more reliably and into a smaller number of groups.
    • Identifying potential melanomas by seeking out “ugly duckling” nevi is both an exceedingly simple and cognitively complex means of finding cancer.
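    To make the "ugly duckling" idea concrete in computational terms, here is a minimal sketch that frames it as outlier detection: each nevus is reduced to a feature vector, and any nevus that deviates sharply from the patient's own dominant pattern is flagged. The features (diameter, darkness), their values, and the threshold are invented for illustration and are not from the Wazaefi et al. article; real systems would use far richer image features.

    ```python
    # Hypothetical sketch of "ugly duckling" detection as simple
    # outlier detection. All feature names, values, and the
    # threshold below are illustrative assumptions.

    def mean(xs):
        return sum(xs) / len(xs)

    def ugly_ducklings(nevi, threshold=1.5):
        """Flag nevi whose features deviate more than `threshold`
        standard deviations from the patient's own pattern."""
        flagged = []
        for i in range(len(nevi[0])):          # each feature dimension
            values = [n[i] for n in nevi]
            mu = mean(values)
            sd = mean([(v - mu) ** 2 for v in values]) ** 0.5 or 1.0
            for j, v in enumerate(values):
                if abs(v - mu) / sd > threshold and j not in flagged:
                    flagged.append(j)
        return sorted(flagged)

    # Five nevi as (diameter_mm, darkness 0-1); the last is atypical.
    nevi = [(4.0, 0.30), (4.2, 0.28), (3.9, 0.32), (4.1, 0.29), (9.5, 0.85)]
    print(ugly_ducklings(nevi))  # the fifth nevus (index 4) stands out
    ```

    The point of the sketch is the article's: the reference pattern is the patient's own moles, not a population norm, which is why even simple per-patient statistics can surface the odd one out.
    
    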
    So, what is the take-home point? To make diagnoses, dermatologists use their visual perception skills, some of which are innate and some of which are honed through practice. While technology threatens to overtake the task of perception — see the MelaFind device, for example — human perceptiveness is still difficult to qualify, quantify, and teach.

    A colleague of mine and a faculty radiologist at my institution, David Schacht, has pondered the very question of visual perceptiveness among trainees in his own specialty of mammography. As you probably realize, computer-aided diagnosis has risen to prominence as a way to improve radiologists' detection of subtle suspicious findings on mammograms. These computerized algorithms lessen the chance of false-negative tests. However, a radiologist ultimately still interprets the study; as such, radiologists still need training in visual perception. But how does a radiologist acquire this "skill"? Dr. Schacht hypothesizes that radiology residents who review a large series of enriched mammograms will have better cancer-detection rates. In other words, he hopes that intensive and directed case review will improve visual perception.

    Clearly, mammographers and dermatologists are not alone in making diagnoses by what they see. Every field relies on some degree of astute observation that often becomes second nature over time. Even something as simple as the general appearance of a patient in the emergency room holds a trove of clues.

    My question is, can these perceptive abilities be better taught to trainees or even be programmed into a computer? Or should we simply assume that experience itself drives some otherwise unexplained improvement in visual diagnosis?

    If the former is true, then we ought to seek a better understanding of how physicians glean these skills. If man should stay ahead of machine, then we clinicians should hone our intuition and our abilities to recognize visual patterns. Moreover, we should design education systems that promote more visual engagement and activate the cortical pathways that underpin perceptiveness.

    On the other hand, if experience itself imbues clinicians with better perceptive skills, then we really ought to maximize the number of clinical exposures for our trainees. No matter what the field, students and residents might simply need to see a lot more cases, either simulated or real.

    As human or computer perceptiveness evolves, even the most expert eyes or finest computer algorithms will still be limited. And ultimately, any homely duckling of a nevus probably deserves a biopsy. But with biopsies, aren’t we trading one set of expert eyes for another — in this case, the pathologist — when we send that specimen to the lab?

    In the end, the prevailing message seems to be that repeated experiences breed keen and reliable observations. We cannot discount the very basic human skill of reading visual cues. We should continue to seek ways to refine, study, and computerize our own perceptiveness.


    Source
     
