
Software Spots Suspicious Skin Lesions On Smartphone Photos

Discussion in 'Hospital' started by The Good Doctor, Apr 20, 2021.

    Melanoma, which accounts for more than 70 percent of all skin cancer deaths, occurs when pigment-producing cells called melanocytes multiply uncontrollably. The cancer is typically diagnosed through visual inspection of suspicious pigmented lesions (SPLs), and early detection of such lesions in a physician’s office is often life-saving. However, this approach has several drawbacks, including the high volume of potentially suspicious lesions that must be biopsied and tested before a diagnosis can be confirmed.

    To overcome these issues, researchers from MIT and several other Boston-area institutions have developed a new deep learning tool that more easily identifies harmful lesions from photographs taken with a smartphone.

    The paper, published in Science Translational Medicine, describes the development of the tool using deep convolutional neural networks (DCNNs), a class of deep learning models well suited to image analysis. The researchers trained their tool on over 20,000 images taken from 133 patients and from publicly available databases. Importantly, the pictures were taken with a variety of ordinary personal cameras, to ensure the system would work on real-life photographs.
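    For readers curious what such a training setup might look like in practice, here is a minimal, purely illustrative sketch of fine-tuning a pretrained convolutional network to classify lesion photos as suspicious or non-suspicious. This is not the authors' code; the folder layout, model choice, and hyperparameters are all assumptions.

[CODE]
# Hypothetical sketch: fine-tune a pretrained CNN to label lesion photos
# as suspicious vs. non-suspicious. Paths and settings are illustrative.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

# Resize and normalize photos consistently, since they come from many cameras.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes a hypothetical folder layout: lesion_photos/train/suspicious and
# lesion_photos/train/non_suspicious, one image per lesion crop.
train_set = datasets.ImageFolder("lesion_photos/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained network and swap in a two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
[/CODE]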


    Once trained on these known examples, the tool demonstrated more than 90.3% sensitivity and 89.9% specificity in distinguishing SPLs from non-suspicious lesions, skin, and complex backgrounds.
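    For reference, sensitivity and specificity can be computed directly from a classifier's predictions. The tiny arrays below are made-up placeholders, not data from the study.

[CODE]
# Sensitivity and specificity from predicted vs. true labels (toy data).
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0])   # 1 = suspicious lesion, 0 = non-suspicious
y_pred = np.array([1, 0, 0, 0, 1, 1])   # model predictions (placeholder values)

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives

sensitivity = tp / (tp + fn)  # fraction of true SPLs the model flags
specificity = tn / (tn + fp)  # fraction of benign cases correctly cleared
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
[/CODE]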

    One interesting aspect that distinguishes this tool from others is its use of the ‘ugly duckling’ criterion to identify lesions. This method, already used by dermatologists, assumes that most moles on an individual look similar to one another and are typically non-suspicious, while different-looking moles are classified as ‘ugly ducklings’ and flagged for further investigation.

    Training the system on features of moles such as circularity, size, and intensity greatly improved the accuracy of its predictions: the algorithm matched the consensus of seasoned dermatologists 88 percent of the time, and matched individual dermatologists’ opinions 86 percent of the time. If the technology is validated, it could lead to significant savings in the clinical time and cost involved in imaging and analyzing individual lesions.
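    As a rough illustration of the ‘ugly duckling’ idea (not the paper's actual algorithm), one can score each of a patient's lesions by how far its features deviate from that patient's typical mole. The feature set and the flagging threshold below are assumptions for the sketch.

[CODE]
# Hypothetical 'ugly duckling' scoring: compare each lesion's features to the
# patient's own average lesion and flag strong outliers for review.
import numpy as np

# Each row is one lesion: [circularity, size_mm, mean_intensity] (made-up values)
patient_lesions = np.array([
    [0.91, 3.2, 120.0],
    [0.89, 3.5, 118.0],
    [0.93, 3.0, 125.0],
    [0.55, 7.8,  80.0],   # visibly different from the rest
])

mean = patient_lesions.mean(axis=0)
std = patient_lesions.std(axis=0) + 1e-8   # avoid division by zero

# Distance of each lesion from the patient's typical mole, in feature z-scores.
ugly_duckling_score = np.linalg.norm((patient_lesions - mean) / std, axis=1)

# Flag lesions whose score stands well above the others (threshold is arbitrary).
for i, score in enumerate(ugly_duckling_score):
    flag = "review" if score > 2.0 else "ok"
    print(f"lesion {i}: score={score:.2f} -> {flag}")
[/CODE]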

    “Our research suggests that systems leveraging computer vision and deep neural networks, quantifying such common signs, can achieve comparable accuracy to expert dermatologists”, said Soenksen, the first author on the paper, in an MIT press release. “We hope our research revitalizes the desire to deliver more efficient dermatological screenings in primary care settings to drive adequate referrals”.

    Source
     
