RegenerativeMedicine.net

Statistical Approach Able To Distinguish Real From Artifact Alerts

“We found that in this specific example, Random Forest was the most predictive of all the machine learning approaches,” said Dr. Michael Pinsky.

As reported by Michael Vlessides of Anesthesiology News, researchers have used a statistical approach known as Random Forest modeling to distinguish real vital sign events from artifacts in continuous monitoring data. The approach is an important step toward reducing the alarm fatigue that plagues so many health care practitioners.

“We’ve had a long history of using machine learning to assess the instability of patients in the hospital,” said McGowan Institute for Regenerative Medicine affiliated faculty member Michael Pinsky, MD (pictured top), professor of critical care medicine, bioengineering, anesthesiology, cardiovascular diseases, and clinical/translational services at the University of Pittsburgh School of Medicine. “In one such study, we analyzed noninvasive vital signs such that alerts would go off if the monitor value was outside the normal range,” he said. “And using an artificial neural net, we were able to distinguish stable from unstable, good from bad. This current study is an extension of that work.”

In the study, which also included the efforts of McGowan Institute for Regenerative Medicine affiliated faculty member Gilles Clermont, MD (pictured bottom), professor of critical care medicine, industrial engineering, and mathematics at the University of Pittsburgh, noninvasive monitoring data were recorded for patients in the institution’s 24-bed step-down unit over a period of 8 weeks. The data included heart rate (HR), respiratory rate (RR), blood pressure (BP), and peripheral oximetry. Vital sign deviations beyond stability thresholds that persisted for at least 80% of a 5-minute moving window were classified as events. The stability thresholds were defined as HR 40 to 140 beats per minute; RR 8 to 36 breaths per minute; systolic BP 80 to 200 mm Hg; diastolic BP less than 110 mm Hg; and peripheral oxygen saturation (SpO2) greater than 85%.
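To make the event definition concrete, here is a minimal sketch (not from the study itself) of how such windowed threshold detection could be implemented. The threshold values are those quoted above; the 1 Hz sampling rate and all function names are illustrative assumptions.

```python
import numpy as np

# Stability thresholds from the study; a reading outside its range is "abnormal".
THRESHOLDS = {
    "HR":   (40, 140),    # beats per minute
    "RR":   (8, 36),      # breaths per minute
    "SBP":  (80, 200),    # systolic BP, mm Hg
    "DBP":  (None, 110),  # diastolic BP, mm Hg (upper bound only)
    "SpO2": (85, None),   # peripheral oxygen saturation, % (lower bound only)
}

def is_abnormal(sign, values):
    """Boolean mask of samples outside the stability range for one vital sign."""
    lo, hi = THRESHOLDS[sign]
    out = np.zeros(len(values), dtype=bool)
    if lo is not None:
        out |= values < lo
    if hi is not None:
        out |= values > hi
    return out

def detect_events(sign, values, window=300, persistence=0.8):
    """Flag each 5-minute moving window (300 samples, assuming 1 Hz data) in
    which at least 80% of samples are beyond the threshold as an event."""
    abnormal = is_abnormal(sign, values).astype(float)
    kernel = np.ones(window) / window
    frac = np.convolve(abnormal, kernel, mode="valid")  # abnormal fraction per window
    return frac >= persistence

# Example: a sustained run of low SpO2 readings triggers an event window.
spo2 = np.r_[np.full(300, 97.0), np.full(300, 82.0)]
print(detect_events("SpO2", spo2).any())  # True
```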

The researchers, reporting at the 44th Critical Care Congress of the Society of Critical Care Medicine (abstract 42), found that of 1,582 events, 631 were labeled by consensus of four expert clinicians as real alerts, artifacts, or “unable to classify”; another 795 were held out as “unseen” events. Random Forest models, which use an ensemble of decision trees whose aggregated predictions improve accuracy, were then applied to the 631 labeled events; the models were trained to differentiate real events from artifacts and cross-validated to mitigate overfitting. The resulting model was then applied to the 795 unseen events, which were reviewed by the experts for external validation of the Random Forest predictions.
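The article does not detail the features or software the team used, but the workflow it describes — train on labeled events, cross-validate, then score held-out events — might look like the following sketch using scikit-learn’s RandomForestClassifier, with a random placeholder feature matrix standing in for whatever was derived from the waveforms.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder: 631 labeled events, each summarized by hypothetical features
# (e.g., mean, variance, and slope of the signal around the alert).
X_labeled = rng.normal(size=(631, 3))
y_labeled = rng.integers(0, 2, size=631)  # 1 = real alert, 0 = artifact

model = RandomForestClassifier(n_estimators=500, random_state=0)

# Cross-validation on the labeled events to mitigate overfitting, as in the study.
scores = cross_val_score(model, X_labeled, y_labeled, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")

# Fit on all labeled events, then predict the 795 held-out "unseen" events,
# whose predictions would go to the experts for external validation.
model.fit(X_labeled, y_labeled)
X_unseen = rng.normal(size=(795, 3))
predictions = model.predict(X_unseen)
```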

“We found that in this specific example, Random Forest was the most predictive of all the machine learning approaches,” Dr. Pinsky said in an interview with Anesthesiology News. “This is where you need a marriage of clinicians and machine learning science. Neither one on its own would be able to come up with anything of relevance; it has to be a union.”

Experts labeled 418 alerts as real alerts (SpO2 44%, RR 32%, BP 11%, HR 14%), 158 as artifacts (SpO2 59%, RR 16%, BP 25%, HR 0%), and 55 as unable to classify. Of the 510 unseen RR events, experts agreed with 100% of the Random Forest artifact predictions and 99% of the real alert predictions. Of the 55 unseen BP events, agreement was 80% and 76%, respectively. Of the 230 unseen SpO2 events, artifact agreement was 55% and real alert agreement was 92%.

“So our new approach allows us, with a high degree of certainty, to identify almost all alerts that are real or artifact,” Dr. Pinsky said. “This turned out to be the real excitement at the meeting, for which we won the award as best abstract. Clearly, this is really important for the bedside practitioners.”

Yet what proved most exciting for Dr. Pinsky was the model’s ability to look forward. “We’re now going to be using that prospectively to identify instability beforehand, in the operating room and in the intensive care unit,” he noted. Doing this, he explained, requires only a few minutes of vital sign data.

“Once you’ve calculated the best algorithms,” he explained, “then the actual data needed to apply these algorithms is phenomenally sparse. You need very little because you’ve already identified the leading indicators needed to make a diagnosis.

“And what becomes interesting is that you immediately appreciate the potentiality of the monitors we already have,” he added. “You actually end up needing fewer new monitors, just more intelligent use of existing ones. And this has the distinct advantage for hospitals in that they don’t have to buy new hardware, just better software.”
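To illustrate Dr. Pinsky’s point about sparse inference data: once a model like the hypothetical one sketched above is trained, scoring a new patient requires only a short window of vitals reduced to a few summary features. Continuing the earlier sketch (the feature set remains an assumption, reusing the `model` and `rng` defined there):

```python
import numpy as np

def summarize(window):
    """Collapse a short vital sign window into the assumed 3-value feature vector."""
    slope = np.polyfit(np.arange(len(window)), window, 1)[0]
    return np.array([window.mean(), window.var(), slope])

# A few minutes of data suffice: ~5 minutes of heart rate at 1 Hz.
recent_hr = rng.normal(loc=90, scale=5, size=300)
features = summarize(recent_hr).reshape(1, -1)
print(model.predict(features))  # 1 = real alert, 0 = artifact
```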

Illustration: McGowan Institute for Regenerative Medicine.

Read more…

AnesthesiologyNews.com (05/01/15)

Bio: Dr. Michael Pinsky

Bio: Dr. Gilles Clermont

Abstract: Critical Care Medicine, December 2014, Volume 42, Issue 12, p. A1379.