According to Dr. Liliya Gershengoren, "Anecdotally, I know we do it all the time." Dr. Gershengoren is a professor of psychiatry at Cornell University. In a survey she presented in 2017, Gershengoren found that "93 percent of staff and 94 percent of residents reported Googling a patient at least once, [and] that 17 percent of staff and 40 percent of residents Googled their patients on a frequent or semi-regular basis in the ER."

Practitioners can give plausible reasons for Googling their patients, especially in the mental health fields. Dr. Paul Appelbaum, a psychiatry professor at Columbia University, explains that sometimes "patients may be psychotic, intoxicated, or suicidal. In these acute settings, social media can provide clinicians with valuable context to make decisions — whether the patient uses drugs or alcohol, has self-harmed, or has family support."

But some of the respondents in Gershengoren's survey gave reasons that appear less relevant to immediate patient care, including:

* "Patient reported being on TV, but I was suspicious that this might not be true"
* "Famous problem in the news"
* "Criminal background"

Furthermore, "when asked if they had informed patients either before or after Googling their names, a majority of both attendings and residents responded 'never.'"

One might ask, "What's the big deal? After all, this information is publicly available." But there are reasons that physicians may wish to be cautious about Googling their patients. First, the information on Google may be inaccurate or unreliable. Second, if physicians act on sensitive information they learned online but that the patient did not tell them, patients could feel their privacy was violated. This could seriously undermine the trust necessary for a doctor-patient relationship. Finally, routine Googling could establish a new "standard of care" that essentially forces all physicians to Google their patients, lest they be found liable for malpractice. For example:

> As more and more providers Google to guide their decisions, they may be shifting the clinical standards to which all practitioners are held. "The standard of care is developed by the clinical community itself," says Appelbaum. "What most people do, or at least what a substantial number of people do, becomes a standard of care." If practitioners neglect that standard, and something preventable goes wrong, they risk accusations of malpractice.

In other words, if patient-targeted online searches become the new standard of care, then clinicians could become liable for information patients post online. If a patient leaves a suicidal message on Facebook and the clinician misses it, there's a future — seemingly more plausible by the day — in which that clinician could be sued for malpractice if the patient then attempts suicide. Do we want routine Googling of patients by physicians to become the "new normal" in doctor-patient interactions?

Another twist occurs when third parties make a business of collecting patient data to sell to physicians, claiming that information about patients' "social determinants of health" will help physicians better care for them:

> A small but fast-growing number of technology companies, including data brokers LexisNexis and Acxiom, sell health care providers detailed analyses of their patients, incorporating criminal records, online purchasing histories, retail loyalty programs and voter registration data...
> "Liens, evictions and felonies indicate that individual health may not be a priority," according to a marketing pitch on the [LexisNexis] website. Voter registration might be relevant as well because "individuals showing engagement in their community may be more likely to engage in their own health."

As Politico writer Mohana Ravindranath notes, this is "not the kind of information that normally populates a patient's medical record." Whether I'm registered to vote or where I buy my groceries should not be part of my medical record without my consent. To the extent that patients feel their physicians are collating this information without their knowledge or consent, it could again cause them to view their physicians with mistrust.

Nor does a human have to collate patients' data. Facebook now uses AI (artificial intelligence) to monitor user posts for suicide risk, even contacting the local police if it believes someone is at imminent risk. According to Facebook's Global Head of Safety, Antigone Davis, the company fielded 3,500 such reports in the past year. In other words, "That means AI monitoring is causing Facebook to contact emergency responders an average of about 10 times a day to check on someone."

Mason Marks of Yale University has expressed concerns over the lack of transparency in this process. He notes that a proprietary "black box" algorithm can trigger police responses that override ordinary Fourth Amendment protections against warrantless government searches. And this is true even if a human moderator has to agree to call the police:

> Facebook has over two billion users, and it continuously monitors user-generated content for a growing list of threats including terrorism, hate speech, political manipulation, and child abuse. In the face of these ongoing challenges, the temptation to automate suicide prediction will grow. Even if human moderators remain in the system, AI-generated predictions may nudge them toward contacting police even when they have reservations about doing so.

Appelbaum similarly warns:

> We are creating a new kind of medical record with all this information... It creates a permanent record that once would not have been accessible, but now can be accessed by insurers or in legal procedures.

Fortunately, there are concrete steps that physicians and patients can take to protect the integrity of their medical records in the age of big data. For example, physicians and hospital systems can adopt a standing policy of not Googling patients unless there is a medically compelling reason to do so. Patients can ask their doctors and local hospitals to put such a policy in writing. And all citizens can ask social media companies to provide greater transparency about any automated algorithms that expose users to forcible interactions with law enforcement.

Overall, I am a big fan of technology as a tool for improving medical care. But physicians must be careful to use these tools responsibly. A patient's medical record should be a vehicle that enhances the doctor-patient relationship, not one that destroys it.