It may have seemed like a logical solution to the opioid crisis: Create an algorithm to identify patients at high risk of overusing opioids, and advise doctors not to prescribe opioid-based painkillers to them. But that strategy may only have made the epidemic worse, as Maia Szalavitz reports for WIRED. The algorithm, NarxCare, is a kind of "secret credit score" for opioid risk. It often flags people it shouldn't, and by denying meds to people in severe pain, it's driving many to seek out illegal drugs that they're more likely to get addicted to. And even as prescribing has fallen sharply over the past decade, overdose deaths have risen to a record high.

As we've been reporting, AI is working its way into ever more spheres of medicine. It's being used to screen for breast cancer. It could predict when viruses will mutate. In the future it may help doctors remotely track patients' vital signs to spot dangerous conditions before they become fatal. There's great promise here. So why does it sometimes go spectacularly wrong, as with NarxCare?

I think part of the answer is in this story by WIRED's Tom Simonite about another risk-assessment algorithm, one used to alert nurses that a patient might be developing sepsis, a frequent killer in American hospitals. It forces changes in routines and workflows, but doctors don't always believe it, or are more receptive to its diagnoses at some times than at others. The algorithm, as Tom writes, has "a complicated social life."

The phrase made me think of the book The Social Life of DNA, by Alondra Nelson, a Princeton sociologist who is now deputy head of the White House's Office of Science and Technology Policy. WIRED's Khari Johnson interviewed her in June, and one of her key points was that the AI field needs to "move from technical standards to … socio-technical standards." That means, among other things, that people developing an AI tool shouldn't just evaluate whether it accurately predicts cancer or sepsis or opioid risk, but also how people will interpret and act on those predictions—and whether those actions could end up doing more harm than good.

Building an AI, these days, can be almost trivial. Figuring out its effects is much harder.

These are just a few of the complex, thought-provoking stories we've published recently; some of my other favorites are listed below.

Gideon Lichfield | Global Editorial Director, WIRED