A.I. could help predict when blood tests aren’t necessary


An algorithm that can predict whether a given blood test will come back “normal” could help cut needless medical tests, researchers report.

Being thorough in medicine is a must—but doctors concerned about over-testing are raising a new question: Is it possible to be too thorough? Jonathan Chen, assistant professor of medicine at Stanford University, says the answer is yes, particularly in the context of diagnostic blood testing.

Blood testing is a cornerstone of diagnostic medicine, but there's an increasing recognition that too much blood testing, such as repeated tests, yields diminishing returns. Not only do the results of many repeat blood tests remain unchanged, but administering the same test over and over can also harm patients.

“The financial downsides of unnecessary testing are often the most obvious, but there are a lot of other drawbacks, namely, the burden it can have on the patients themselves,” says Jason Hom, clinical assistant professor of medicine. “Patients in the hospital sometimes have to wake up at all hours of the morning to take these tests, making them delirious. And sometimes these tests are done so often the patients become anemic.”


Often, excess blood tests occur as a result of the just-to-be-sure argument—as in, we’ll run another test, just to be sure the result is what we think it is.

The need for reassurance is, in part, likely due to a lack of guidelines on what constitutes grounds for multiple blood tests.

“The general train of thought on re-ordering a lab test is that doctors probably shouldn’t do it—unless it’s ‘clinically appropriate,’” says Chen. “So what’s a doctor supposed to do with that? It’s not at all clear.”

“So what we’re trying to do with our algorithm is empower physicians with quantitative information so that they’re not stuck guessing.” Chen and his team emphasize, however, that the algorithm is not meant to make decisions for doctors or patients. It’s a resource that provides evidence, which should be factored into each individual patient’s case.

Simply put, the algorithm tells the doctor how likely it is that another test will produce a result that’s different than the first one. “If you’re not going to get new information from the test, then there’s no point,” says Chen.

Test runs of the algorithm have already started to reveal oft-repeated tests that doctors could cut back on immediately. Data used in the pilot study showed that some of these tests, such as the blood test for hemoglobin A1c, which measures average blood sugar, are sometimes conducted so close together that it's physiologically impossible for the value to change.

“Those are the low-hanging fruit, the repeat tests we can start to weed out immediately,” says Chen.

To train the algorithm, Chen and his team took de-identified patient data including vitals, medical conditions, symptoms, lab test results, and more, and used it to show how often a blood test reported something abnormal or unexpected, given a person’s medical characteristics.
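The paper's exact model and feature set aren't described here, so the following is only a rough sketch of the general approach: train a classifier on de-identified patient features to estimate the probability that an ordered lab will come back normal. The column names, file name, and gradient-boosting model are assumptions made for illustration.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical de-identified dataset: one row per lab order, with patient
# vitals, conditions, prior results, and a label for whether the result
# came back within the normal range. Column names are illustrative.
df = pd.read_csv("lab_orders_deidentified.csv")
features = ["age", "heart_rate", "temperature", "prior_result_value",
            "hours_since_last_test", "on_anticoagulants"]
X, y = df[features], df["result_was_normal"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Any probabilistic classifier could play this role; gradient boosting is
# simply a common choice for tabular clinical data.
model = GradientBoostingClassifier().fit(X_train, y_train)

# The output the article describes: a probability that a repeat test will
# simply confirm what is already known (i.e., come back normal).
p_normal = model.predict_proba(X_test)[:, 1]
print("AUROC:", roc_auc_score(y_test, p_normal))
```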

They started by training the algorithm on data from Stanford and testing its ability to predict results for patients at Stanford, the University of California, San Francisco, and the University of Michigan. To further test the algorithm's accuracy, and to show that it could be applied at other institutions, they also switched up the training protocol, separately training the algorithm on data from UCSF and then from the University of Michigan.

Although there was some variation, all three versions, regardless of where the training data came from, yielded accurate predictions.
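A rough sketch of that cross-institution check, assuming each site's data is available in the same tabular form as the single-site example above (the site names, file paths, and loading helper are placeholders):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Placeholder site names and file paths; each file is assumed to hold the
# same feature columns and label as in the single-site sketch above.
SITES = ["stanford", "ucsf", "michigan"]
FEATURES = ["age", "heart_rate", "temperature", "prior_result_value",
            "hours_since_last_test", "on_anticoagulants"]

def load_site(site):
    df = pd.read_csv(f"{site}_lab_orders.csv")
    return df[FEATURES], df["result_was_normal"]

# Train on each institution in turn and evaluate on the others, mirroring
# the swapped-training-data protocol the researchers describe.
for train_site in SITES:
    X_train, y_train = load_site(train_site)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    for test_site in SITES:
        if test_site == train_site:
            continue
        X_test, y_test = load_site(test_site)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(f"train={train_site} test={test_site} AUROC={auc:.3f}")
```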

“This is a good first step to show that it’s indeed feasible to use the data in this way to help reduce unnecessary lab testing,” says Chen. “But ultimately, our idea is to have institutions use our method and technology but to develop their own algorithms based on their own data to generate the highest level of accuracy possible.”

A paper describing the algorithm appears in JAMA Network Open.

Source: Stanford University

DOI: 10.1001/jamanetworkopen.2019.10967