Tapping into the mindset of one of cinema's most villainous computers, the HAL 9000 of 2001: A Space Odyssey, may offer the introspection we need to save ourselves from the same paranoid meltdown that led HAL (spoiler alert) to embark on a murderous rampage. Homicidal activity should not be a lurking risk in any of us, but burnout is.

Briefly, in case you are unfamiliar with Stanley Kubrick and Arthur C. Clarke's sci-fi masterpiece: a non-naturally occurring monolith is discovered beneath the surface of the moon. There is intelligent life out there. When sunlight hits the monolith for the first time after it is unearthed, it sends a signal toward Jupiter. The spaceship Discovery One is launched to investigate. Aboard are three hibernating astronauts who know of the monolith's existence, two non-hibernating astronauts unaware of it, and one artificial intelligence computer, HAL, who has control of all systems aboard. On the way to Jupiter, HAL murders the hibernating crew by shutting off their life support, kills one of the non-hibernating astronauts during an extravehicular repair mission, and refuses to let the remaining astronaut, stuck helmetless in his EVA pod, back aboard Discovery One.

Why? Why did HAL kill? According to his creator, Dr. Chandra, HAL was constructed for "the accurate processing of information without distortion or concealment," yet the National Security Council had programmed HAL to hide the true mission, which was to study the monolith and the existence of extraterrestrial life, from the non-hibernating crew. These incongruent orders caused an internal conflict that HAL's advanced artificial intelligence decided to resolve by killing the crew. With the crew dead, HAL could continue the mission without disobeying orders.

Conflicting orders ... sound familiar? How about antibiotic stewardship versus the Surviving Sepsis Campaign? Could I be the only physician bombarded with alerts from the sepsis champions over my 19-year-old patient with low-grade fever, mild tachycardia, a non-toxic appearance, and viral-sounding symptoms, a patient who does not need triple parenteral antibiotic coverage within an hour of signing into the emergency department? Of course I am not. My hospital recently mandated that physicians complete a course on antibiotic stewardship addressing the perils of antibiotic misuse and overuse. I complied, only to come on shift in the ED and be challenged by sepsis champions monitoring the EHR, messaging me that I should be activating sepsis protocols on patients I felt were not septic. I feel like HAL.

With a bit of sarcasm, mixed with a hope of easing the stress of the paradox that has become the practice of emergency medicine, I dreamt of the sepsis champions having to complete the same antibiotic stewardship training I had just finished. Or, even better, of the antibiotic stewards and the sepsis champions duking it out. Though I doubt this would truly solve our paradox, at least while they were distracted we could practice true medicine, the kind that takes into account vital signs, labs, patient presentation, and so on, and we providers actually seeing the patients could provide appropriate treatments.

That is not likely to happen. HAL was an advanced artificial intelligence who came up with a solution to his paradox. We are better than HAL. We can understand HAL's paradox, but we are smarter and better than HAL.
We understand that the sepsis champions warn us about well-appearing patients with worrisome labs or vital signs so that our guard stays up, and that the antibiotic stewards warn us of the pitfalls of neglecting our clinical acumen and overprescribing antimicrobials. As humans, we can succumb to the stress this dichotomy brings daily to the practice of emergency medicine, or we can embrace our intelligence. We know that even the most sophisticated artificial intelligence of fiction succumbed to contradictory orders. Yet we are intelligent and can see more than one alternative solution. We can evaluate the best options and argue against opposing views. After all, our intelligence allows us to realize that we all want the best outcomes for our patients.

Rather than feeling hounded toward one view, remember that we all want good outcomes and that those remote people driving patient management are doing so only in a suggestive manner, not a demanding one. We are still the patients' physicians. We are still in control. We still know best what to do. Do not implode like HAL. Rather, take the advice and execute it, or message back informative reasons the advice is not warranted for this patient. Human nature will help patients better than any algorithm or AI.

Remember, we are better than AI. Otherwise, those stewards and champions hiding behind the curtains would be managing patients. We all, stewards, champions, and providers, are on the same page, which is to help the patient. We are better than AI. The patients need us.