The algorithm that narcs on you

And some other healthcare news.

Happy Sunday. If you haven’t seen my interview this week with a trans woman in the South about her struggles to afford and access medical care, please check it out. I learned a lot about the medical transition process from her, and I’m glad to pass it on. I had a message from a provider who confirmed a lot of what she said, particularly regarding the painful process of hair removal:

I'm a family medicine provider in a rural area with a high medicaid population serving a lot of trans patients who are, as you note, disproportionately medicaid patients. We have relatively "good" coverage of gender affirming care here but hair removal is explicitly carved out and the justification is "well we don't cover it for any other conditions either so it's not discriminatory" but well that's actually bad for everyone, you jackals. There's a second hair related problem where we "do cover" bottom surgery and "do cover" the hair removal needed for that but I've had multiple patients whose vaginoplasty has been on hold for OVER A YEAR because of the plan refusing to contract with available providers and the providers not being willing or able to jump through piles of paperwork or accept ridiculously low reimbursements


The big broad problem really is the face, because simply put HRT doesn't touch facial hair for most patients, and if it does it takes a very long time. Having a thick beard and being a woman is a big problem. So it sucks for a lot of people. And in a rural area, we don't have lots of laser providers jockeying for position by offering cheap groupons. So it's also harder and more expensive to get laser and electrolysis out of pocket than it would be in a major city.

As always, if you’re a provider, I’d love to hear from you. I am very nice to the doctors I talk to, despite some of the things I’m about to say in this post.

Wired had an incredible story this week about the algorithms and shoddy data behind state prescription drug monitoring programs (PDMPs), which are used by every single US state to track prescriptions for drugs like opioids. (It’s a good companion piece to Tarence Ray’s recent Baffler article about the punitive response to the opioid epidemic, and the cops, prisons, politicians, and prosecutors who benefited from treating the crisis as a law enforcement problem, not a public health problem.) One company, Appriss, dominates this market, with software that purports to identify and flag patients with ‘drug-shopping’ behaviors—those who might be lying to doctors about the pain they’re in, in order to get opioids. It was their product NarxCare that led to a woman interviewed by Wired losing her gynecologist because of her high “scores,” which, after much digging, she found were because of prescriptions she had picked up for her dog. The company said this sort of situation is “very rare,” which is not comforting in the slightest.

The way the algorithm identifies patients leads the software to flag chronically ill people as suspicious, since they usually visit many different doctors. It also reportedly uses the distance a patient travels to see a doctor as a measure of suspiciousness, which sure seems like it could also punish patients in rural areas who lack access to nearby care. I should say it’s not even clear what sources of data the company actually uses, because the statements it provided to Wired about this contradict its own website. From the story:
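NarxCare’s actual model is proprietary, and as I said, even its inputs are in dispute. But here’s a toy sketch—every feature name and weight below is invented, not Appriss’s—of why inputs like these would flag exactly the wrong people:

```python
# Hypothetical sketch of a prescription-monitoring risk score.
# NOT Appriss's actual model: the features and weights are made up
# to show why these kinds of inputs penalize the wrong patients.

def risk_score(num_prescribers, num_pharmacies, miles_traveled):
    """Toy 'drug-shopping' score: higher = more 'suspicious'."""
    return (
        10 * num_prescribers    # chronically ill patients see many specialists
        + 8 * num_pharmacies    # ...and may fill prescriptions at several pharmacies
        + 0.5 * miles_traveled  # rural patients have to travel farther for care
    )

# A chronically ill rural patient with entirely legitimate prescriptions:
chronic_rural = risk_score(num_prescribers=6, num_pharmacies=3, miles_traveled=80)
# A healthy patient with one doctor and a pharmacy down the street:
healthy_nearby = risk_score(num_prescribers=1, num_pharmacies=1, miles_traveled=5)

print(chronic_rural, healthy_nearby)  # the sicker, more rural patient scores far higher
```

The point of the sketch: the sickest patients with the least access to care come out looking the most “suspicious,” by construction.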

For instance, the company told WIRED that NarxCare and its scores “do not include any diagnosis information” from patient medical records. That would seem to suggest, contra the NarxCare homepage, that the algorithm in fact gives no consideration to people’s histories of depression and PTSD. The company also said that it does not take into account the distance that a patient travels to receive medical care—despite a chatty 2018 blog post, still up on the Appriss site, that includes this line in a description of NarxCare’s machine-learning model: “We might give it other types of data that involve distances between the doctor and the pharmacist and the patient’s home.”

Like many industries, healthcare is rife with people making big claims about the value of machine learning and algorithms. You can even pay MIT’s Sloan School of Management $2,800 for an “executive course” in artificial intelligence in healthcare, which will enable you to put that you did that on your LinkedIn profile and tell everyone you meet that you are an expert in the exciting future of AI medicine. Speaking of MIT, the MIT Technology Review recently reported that “many hundreds of predictive tools were developed” to help the fight against Covid-19, and yet “none of them made a real difference, and some were potentially harmful.” That was the conclusion of several major studies which looked at AI tools developed to fight Covid-19—tools that predicted patient risk, for example, or were supposed to help diagnose patients: They were all shit.

Last year, the same author wrote about a Google Health algorithm that was deployed in Thailand, intended to help diagnose patients with potential diabetic retinopathy, which is the leading cause of new blindness in adults in the United States and requires early treatment. The algorithm was… bad. It was not good at its little computer job. The algorithm “had been trained on high-quality scans; to ensure accuracy, it was designed to reject images that fell below a certain threshold of quality,” which turned out to be about 20 percent of real-world cases—so the algorithm just flat-out rejected them. Those patients were “told they would have to visit a specialist at another clinic on another day,” which was especially frustrating “when they believed the rejected scans showed no signs of disease and the follow-up appointments were unnecessary.”
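The failure mode here is simple enough to sketch. The threshold and quality numbers below are invented for illustration—Google has not published them—but the shape of the problem is exactly what the story describes:

```python
# Sketch of the retinopathy screening failure mode: a model trained only
# on pristine scans refuses anything below a quality cutoff.
# The threshold and the quality scores are invented for illustration.

QUALITY_THRESHOLD = 0.8

def screen(scan_quality):
    """Return the toy model's disposition for a scan of given quality (0-1)."""
    if scan_quality < QUALITY_THRESHOLD:
        # In the real deployment, roughly 20% of real-world scans hit this branch
        return "rejected: visit a specialist another day"
    return "graded"

# Real-world clinic scans vary in quality far more than training data did:
clinic_scans = [0.95, 0.6, 0.85, 0.7, 0.9]
results = [screen(q) for q in clinic_scans]
rejected = sum(r.startswith("rejected") for r in results)
print(f"{rejected}/{len(clinic_scans)} patients sent away without an answer")
```

Each rejected scan is a real person told to come back another day, whether or not their eyes showed any sign of disease.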

This is an example of an algorithm that just isn’t very good at what it’s supposed to do, leading to delays and other time-wasting nonsense. But there are other examples of medical algorithms that are actively very harmful, in ways that providers might not be able to catch themselves. In 2019, a high-profile study in Science concluded that an algorithm developed by Optum, part of UnitedHealth Group, and used widely by American hospitals was “systematically discriminating against black people.” The algorithm assigned ‘risk scores’ (algorithms love those, it seems) to patients based on their total healthcare costs, rather than how sick they actually were. Because black people are more likely to lack access to healthcare and to be discriminated against by the medical system, they showed up in the algorithm as less likely to need extra care. But they were actually sicker than white people overall—they were just routinely not getting the care they needed, and therefore having less spent on them. (A doctor might not order an MRI for a black patient when they would order one for the same symptoms in a white patient, for example, lowering the amount spent on them even if they have the same disease.)
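The proxy problem is worth spelling out, because it’s subtle enough that it took researchers years to catch. Here’s a toy version—the numbers are invented, and this is not Optum’s actual model—of how ranking patients by past spending instead of actual sickness reproduces the bias baked into that spending:

```python
# Toy illustration of the proxy bias the Science study described.
# The model ranks patients for extra care by past healthcare COST,
# not by how sick they are. All numbers here are invented.

patients = [
    # (label, true_illness_burden, past_spending_in_dollars)
    ("Patient A", 5, 12000),  # same disease burden, fully worked up and treated
    ("Patient B", 5, 6000),   # same burden, but fewer tests and treatments ordered
]

# Cost-based 'risk score': whoever had less spent on them looks healthier.
ranked_for_extra_care = sorted(patients, key=lambda p: p[2], reverse=True)

print([p[0] for p in ranked_for_extra_care])
# Patient B is exactly as sick as Patient A, but ranks lower for extra
# care, because past undertreatment shows up as lower cost.
```

Same disease burden in, different priority out—the algorithm faithfully learns the discrimination already present in who gets money spent on them.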

Which brings us back to Appriss and the narc algorithm. Appriss insisted to Wired that “a NarxCare score is not meant to supplant a doctor’s diagnosis”—which is a little empty since, as Wired noted, most states legally require doctors to consider these dinky little algorithms when prescribing controlled substances. Similarly, Optum’s defense of their algorithm, according to Nature, was that “The cost model is just one of many data elements intended to be used to select patients for clinical engagement programs, including, most importantly, the doctor's expertise.” So their position essentially boils down to: Our algorithm is great and will revolutionize healthcare, except for when it sucks, and in those cases it’s the doctor’s job to know that.

Doctors are not perfect as a profession, as evidenced by the medical racism problem referenced earlier, for example. They are (often) overpaid and (often) don’t listen to you. Even some of the doctors in the Wired story are huge pieces of shit who defer blithely to the algorithm out of lazy prejudice—like the one who told a kidney stone patient that she couldn’t have IV painkillers, because “both IV drug use and child sexual abuse change the brain. ‘You’ll thank me someday, because due to what you went through as a child, you have a much higher risk of becoming an addict, and I cannot participate in that,’” he told her. But we’re in a very stupid situation when we have algorithms trained on bad data that are clearly supposed to at least influence doctors’ clinical decision-making, if not actively curtail it under threat of legal action—and then have the companies throw up their hands and say look, it’s not our fault you listened to us.

A couple of other healthcare stories from this week:

The Intercept reports on UnitedHealthcare’s role in a 2016 academic study of surprise bills, which found that roughly a fifth of ER visits resulted in a surprise bill (where the hospital is in-network but a particular doctor the patient sees isn’t). The study’s data came from UnitedHealthcare, who, emails reveal, then made suggestions for solutions to be included in the paper. This involvement, the site notes, doesn’t “invalidate the Yale study’s conclusions,” which have been replicated, but it does reveal the extent of corporate influence in academia. The context for this is that insurers and hospitals are constantly fighting about who is grifting the other more. In the case of surprise bills, patients were getting stuck with the bills because neither party could agree on who was being more greedy—hospitals for charging insurance too much, or insurance for demanding too-low rates for procedures? The important thing to remember is that they’re both awful and none of that process should be happening at all. Saved you a click, I guess. (No, do read it anyway.)

Speaking of UnitedHealthcare, they recently had to pay $15.6 million to settle complaints relating to the mental health parity law, which requires insurers to treat mental health claims equally to regular medical claims. The suits claimed they reduced reimbursements for out-of-network mental health services and “flagged participants undergoing mental health treatments for a utilization review, resulting in many denials of payment for those services.” If you ever wondered why it’s so hard to see a damn therapist in this country, stuff like this is part of it!

A good short summary of reasons why Medicare Advantage is bad, from Axios. (More content to come on this from Sick Note in the future!)

And finally: A picture of Digby, taken by my husband, who gets mad if I don’t credit him. See you next week!