Artificial Intelligence Is Worse Than The Old Boss

Discussion in 'Hospital' started by The Good Doctor, Feb 21, 2022.

  1. The Good Doctor

    It seems that each positive story I read about the benefits of artificial intelligence (AI) is countered by a negative story.

    One internal medicine physician writes: “Someday, with enough computing power and artificial intelligence, we may be able to have systems that can do some basic medical advice and education about health care that could end up saving doctors a lot of time and helping patients get to a better state of health.”

    However, another physician observes that although AI can make medicine more efficient – particularly AI based on computer algorithms – it can also generate “false flags” that lead to erroneous conclusions when doctors become too dependent on the technology and rely solely on its conclusions. “After all,” the physician reasons, “even though it’s a computer algorithm, it was devised by a human.”

    Prescription drug monitoring programs (PDMPs) are a prime example of AI algorithms gone awry. PDMPs are electronic databases that track controlled substance prescriptions in individual states. In many instances, PDMPs can be integrated into electronic health record systems, permitting physicians to delegate PDMP access to advanced practice providers in their offices.

    PDMPs are designed to monitor changes in prescribing behavior and to detect patients who obtain prescriptions from multiple health care providers, but because they cannot capture the nuances of clinical encounters, even the CDC has acknowledged that PDMPs have shortcomings and that studies of them yield mixed findings. The CDC often describes PDMPs as a “promise” rather than a proven fix (no pun intended) for preventing aberrant prescribing decisions.

    Addiction specialist Maia Szalavitz has chronicled nightmarish stories of patients denied necessary pain medication because of unintelligent systems built on flawed algorithms that lead physicians to believe patients are doctor shopping or somehow at risk of becoming addicted. Women and racial minorities are disproportionately affected by these systems, as are patients with cancer and mental disorders, even though they use controlled substances at rates no higher than would be expected to treat debilitating pain or psychiatric symptoms, respectively.

    A major drawback of AI systems is their failure to account for known risk factors for addiction, such as adverse childhood experiences and mental illnesses. PDMPs may actually reinforce historical discrimination and make the opioid crisis worse by recapitulating inequalities associated with race, class, and gender and targeting patients with legitimate needs, forcing them to obtain controlled substances surreptitiously or go without them.
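
    To make the “false flag” problem concrete, here is a deliberately simplified sketch of the kind of context-free rule critics describe. Everything in it is hypothetical: the thresholds, field names, and logic are illustrative only and are not drawn from any actual PDMP or vendor system, but the blind spot is the same one discussed above.

    # Toy illustration only: a context-free "doctor shopping" flag.
    # Hypothetical thresholds and fields, not any real PDMP's logic.
    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class Fill:
        drug: str
        prescriber: str
        pharmacy: str
        fill_date: date

    def flag_patient(fills, lookback_days=90, max_prescribers=3, max_pharmacies=2):
        # Flag a patient who recently used "too many" prescribers or pharmacies.
        # Note what is absent: diagnosis, cancer status, adverse childhood
        # experiences, mental illness. None of the clinical context is considered.
        cutoff = date.today() - timedelta(days=lookback_days)
        recent = [f for f in fills if f.fill_date >= cutoff]
        prescribers = {f.prescriber for f in recent}
        pharmacies = {f.pharmacy for f in recent}
        return len(prescribers) > max_prescribers or len(pharmacies) > max_pharmacies

    A cancer patient legitimately seeing an oncologist, a surgeon, a palliative care specialist, and a hospitalist within a few months trips this rule just as readily as a genuine “doctor shopper,” which is exactly the kind of false flag described above.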

    This all amounts to the withholding of essential pain medications and other controlled substances – especially those intended to improve mental health and well-being – from individuals who truly need them, either because doctors won’t write the prescriptions once they’ve checked the PDMP database or because pharmacists won’t fill them. New research shows that nearly half of medical clinics in the United States now refuse to see new patients who require opioids. I’ve read many accounts of the harm and humiliation created by PDMPs – even when the patients are health care providers themselves (their relatively high rates of addiction notwithstanding). Here are just a few examples:
    • A physician who specializes in informatics undergoes a complicated tooth extraction. Her pain must be managed with a second, more powerful analgesic. Despite her having a proper prescription, the pharmacist refuses to fill it until he personally verifies it with her PCP. Shoppers stare at her in line, leaving her embarrassed and feeling like a drug addict.
    • A physician assistant requires extensive abdominal surgery. Her surgeon has devised a “fast-track” post-operative program that uses only acetaminophen for pain control. The woman, who has no history of substance use disorder, asks for a short course of opiate medication instead. The surgeon is unwavering in his adherence to the program, and the woman is forced to seek surgery elsewhere.
    • A psychiatrist has an established diagnosis of ADHD. He has been prescribed methylphenidate (Ritalin) for over 30 years, usually filled at his local pharmacy. On one occasion he decides to have the prescription filled at a pharmacy close to where he works. The pharmacist interrogates the psychiatrist in front of customers, refuses to fill the prescription, and insists that the psychiatrist have it filled at his local pharmacy, as is customary.
    Companies that market AI systems tell providers that computer analyses are not intended to be the sole determinants of a patient’s risk of addiction. Still, pharmacists have the “right” to fill a prescription – or not. They can rely on the output of PDMPs and, in addition, inject their own biases and prejudices to deny patients much-needed medication (some states grant pharmacists an absolute right to refuse to provide services). Patients lose when prescribing physicians defer to unwilling pharmacists, capitulate to insurance company bureaucrats, or avoid the company that stands behind the AI algorithm.

    Both pharmacists and prescribing physicians are advised to use computer-generated red flags only as calls to action to review the details of a patient’s prescription history in conjunction with other relevant health information. They are told that red flags are not meant to supplant clinical judgment. But going against AI-generated results puts providers at legal risk should an untoward event occur, and many may not want to accept that risk.

    The entire scenario is somewhat Kafkaesque. It reminds me of The Who’s anthemic song “Won’t Get Fooled Again,” which contains the classic line: “Meet the new boss, same as the old boss.” Given the over-reliance on algorithmic-based AI, will the future of medicine give way to a new boss who is worse than the old boss?

    To be sure, AI has produced some great achievements. But when patients are at the mercy of uncaring and unsympathetic medical decision-makers, aided by predictive algorithms built on public health proxy measures that may or may not be associated with known clinical risk factors and have not been vetted by the FDA or other regulatory authorities, many will continue to suffer needlessly with increased pain and decreased quality of life.

    Source
     
