> the odds of the monitor detecting something in the intervening time and the intervention being correct are lower than the odds of the intervention causing some other unwanted side effect.
Huh. So that means, when the doctor decides to intervene based on what the continuous monitor comes up with, the interventions have negative expected value? Which means the doctors are making bad decisions about what interventions to make based on the data they have? I'll believe this is possible, but I want to ask to be sure.
I also would wonder about other explanations. You say "a nurse installs it, takes a reading, and removes it about once an hour"; presumably the nurse also glances at the patient and, if anything seems off, might ask the patient questions or take other appropriate actions. Could that be a significant effect? (In other words, to eliminate this potential difference, the better comparison for "continuous monitoring" would be for a nurse to come by once per hour and give the patient the same level of attention, perhaps going through the same motions that are involved in the monitor process.) Incidentally, as I read your comment, I expected it to conclude that the monitor itself or the process of repeatedly installing it and removing it was harmful (although that would point in the opposite direction).
I just went and checked and I was slightly wrong about the methodology. The comparison was actually between continuous electronic monitoring and manual listening with a stethoscope. I would hope/expect that a nurse is coming to visually inspect a laboring woman at least once per hour regardless.
This [0] is one of the studies cited by the book where I learned about this phenomenon. Continuous monitoring was found to have a higher incidence of C-section and forceps deliveries. The only condition where continuous monitoring came out ahead was in detecting seizures.
It somewhat makes sense to me. Measuring all the time, you're susceptible to odd readings that are essentially false positives. If your device has a 1/1000 false positive rate but you're only taking one reading at a time, that's maybe acceptable. If you're taking a million readings over a week because you're checking every second, then it's almost certain to show the same false positive often enough that it looks real.
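The arithmetic behind this is simple, by the way. A minimal sketch, assuming independent readings and a fixed per-reading false-positive rate (the 1/1000 rate and the reading counts are illustrative, not from the cited study):

```python
def p_any_false_positive(p: float, n: int) -> float:
    """Probability of at least one false positive over n independent
    readings, each with false-positive rate p."""
    return 1.0 - (1.0 - p) ** n

# Single spot check vs. hourly checks for a week vs. one reading per second:
print(p_any_false_positive(1 / 1000, 1))        # ~0.001
print(p_any_false_positive(1 / 1000, 168))      # ~0.15
print(p_any_false_positive(1 / 1000, 604_800))  # ~1.0
```

So even a fairly good per-reading false-positive rate all but guarantees spurious alarms once you're sampling continuously, which is consistent with the higher intervention rate.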