I’ve been aware of this for about 20 years, but here’s an article outlining a fairly serious statistical fallacy on the part of doctors.
The basic premise is this. Suppose doctors order a test that screens for something rare, and the test isn't all that accurate. Do they understand what a positive result actually means?
A good example might be a PSA test or a mammogram. Doctors order a lot of these because they detect serious, and treatable, problems. That's good. But, no matter what the public thinks, the forms of prostate and breast cancer that have positive treatment outcomes make up only a small fraction of the tests given. This is why we have the aphorism that most men die with prostate cancer, not of prostate cancer.
So, here's the math: what if 1% of the tests are true positives, and 5% are false positives? If you give 1000 tests, you will end up with about 10 true positives and 50 false positives. But you can't tell them apart: of the roughly 60 positives that come back, only about 1 in 6 actually has the condition and should be treated. A second test and opinion is about the only way to start down the road towards figuring out which is which.
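The arithmetic above can be sketched in a few lines of Python. The rates here are the illustrative numbers from the paragraph, not real figures for any particular test:

```python
def positive_predictive_value(true_pos_rate, false_pos_rate):
    """Fraction of positive results that are true positives."""
    return true_pos_rate / (true_pos_rate + false_pos_rate)

tests = 1000
true_positives = 0.01 * tests    # 10 people who really have the condition
false_positives = 0.05 * tests   # 50 healthy people flagged anyway

ppv = positive_predictive_value(0.01, 0.05)
print(f"{true_positives:.0f} true positives, {false_positives:.0f} false positives")
print(f"Chance a given positive is real: {ppv:.1%}")  # about 16.7%, i.e. 1 in 6
```

The punchline is that the answer depends on the base rate, not just the test: make the condition rarer (say 0.1% instead of 1%) and the same test's positives become almost all false alarms.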
FWIW: I had this experience about 18 months ago. A routine blood test showed some elevated liver readings. I made a point of insisting on a month off and another test. I've had a few since then that have come back OK. You never know for sure, but I suspect I had a false positive.