Archive for March, 2010

PSA Screening Critically Questioned

Thursday, March 11th, 2010

According to Dr. Richard J. Ablin, Research Professor of Immunobiology at the University of Arizona College of Medicine, in a New York Times Op-Ed article on March 10, “Prostate screening is inaccurate and a waste of money.” Each year 30 million American men undergo testing for prostate-specific antigen (P.S.A.), a prostate enzyme believed, when elevated, to be a reliable marker for the presence of prostate cancer. While a man has a 16% lifetime chance of receiving a diagnosis of prostate cancer, he has only a 3% chance of dying from the disease. Curious, isn’t it? The fact is that infections, over-the-counter drugs like ibuprofen, and the simple prostatic enlargement that occurs in the majority of older men can all falsely elevate the PSA. Moreover, the test detects only a small percentage of cases, and it cannot distinguish between the cancers that kill and the vast majority that grow so slowly that 97% of men will die of something else.

As Dr. Ablin points out, the FDA, in approving the procedure, relied “heavily” on a study showing that testing could detect 3.8% of prostate cancers. Not a very large figure. The result over the past several years has been the subjection of hundreds of thousands of men to radical prostate cancer surgery or radiation, resulting in a tragically high percentage of permanent impotence, urinary incontinence, or both.

Last year the New England Journal of Medicine published the two largest studies of the screening procedure. One, in Europe, showed that 48 men would have to be treated to save one life: “That’s 47 men who, in all likelihood, can no longer function sexually or stay out of the bathroom for long.” The American study showed that over a period of 7 to 10 years, screening did not reduce the death rate in men 55 and over.

Dr. Ablin asks why PSA screening is still used, and answers his own question: “Because drug companies continue peddling the tests and advocacy groups push ‘prostate cancer awareness’ by encouraging men to get screened.” Increasing numbers of early screening proponents, like Thomas Stamey, a well-known Stanford urologist, have come out against routine testing, and the American Cancer Society has urged more caution in using the test. Certain subsets of patients should be tested, of course: those with a family history of prostate cancer, for example, or patients whose PSA levels are rising after treatment. But this is quite different from subjecting a normal population to widespread screening, something I have discussed in a previous blog and in an interview with Dr. Otis Brawley by Maryann Napoli, re-published in my newsletter, Second Opinions, in 2004!

By the way, the good Dr. Ablin, who wrote the Op-Ed article from which this blog borrowed freely, invented the PSA test 40 years ago.

Stay tuned.

Absolute and Relative Risk: Shell Games

Monday, March 8th, 2010

When researchers, reporters, and others use data to compare two or more groups, they may present their results in two very different and often confusing ways to emphasize a point of view. These relations may be expressed as either absolute or relative differences: an absolute difference is a subtraction; a relative difference is a ratio. To emphasize how easily people, and even most physicians, can be fooled, consider the following: which drug would you rather take, one that reduces your risk of cancer by 50 percent, or another that reduces your risk of cancer from two out of 100 to one out of 100? Most people would choose the drug that reduces their risk by 50 percent, but in fact both numbers refer to the same outcome. They’re just two different ways of looking at the same figures. Without any qualification, both statements, “reduced the risk by 50%” and “reduced the risk by 1 in a hundred” (1%), could be construed as representing either an absolute or a relative difference. But note the difference in “feel” between 50% and 1%. Which figure sticks in your mind?
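The arithmetic behind the two framings can be sketched in a few lines. This is a minimal illustration using the hypothetical from the paragraph above (risk falling from 2 in 100 to 1 in 100); the function names are mine, not from any cited study.

```python
def absolute_risk_reduction(control_rate, treated_rate):
    """Absolute difference: a simple subtraction of the two event rates."""
    return control_rate - treated_rate

def relative_risk_reduction(control_rate, treated_rate):
    """Relative difference: the absolute reduction as a ratio of the control rate."""
    return (control_rate - treated_rate) / control_rate

control = 2 / 100   # 2 out of 100 get cancer without the drug
treated = 1 / 100   # 1 out of 100 get cancer with the drug

# Same data, two very different-sounding numbers:
print(f"Absolute risk reduction: {absolute_risk_reduction(control, treated):.1%}")  # 1.0%
print(f"Relative risk reduction: {relative_risk_reduction(control, treated):.0%}")  # 50%
```

A headline writer will almost always reach for the 50% figure; the 1% figure is what actually happens to any given patient.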

The headlines read, “Tamoxifen Cuts Breast Cancer Risk by 33% in Healthy Women!” Yet it turns out that among all the women in the study who took Tamoxifen, less than 2% got breast cancer, and among those who took the placebo, less than 3% got breast cancer. The real difference was 1%. {“How to Lie With Statistics,” Real Health Breakthroughs, Dr. William Campbell Douglass, 2004}

One of the main studies cited in support of a drug for advanced breast cancer, Herceptin©, saw 34 deaths in the control group (2.0% of the participants) and 23 deaths (1.4%) in the group treated with Herceptin. According to the authors, this translates into a 46% relative reduction in cancer deaths (a wrong calculation; it should have been 2.0 minus 1.4, divided by 2.0, or 30%). But the true absolute reduction in deaths is only 0.6% (2.0% - 1.4%), almost certainly not statistically significant in this series. Is this a miracle drug? The number, of course, is pure marketing and statistical spin. As reported some time ago in New Scientist magazine, one of the main cheerleaders for Herceptin is none other than Hortobagyi, a paid consultant of Genentech who received somewhere between $10,000 and $100,000 from the drug company. He’s one of the proponents who calls Herceptin a “cure.”

Keep in mind that headlines promoting a drug will almost always refer to relative risk (“A breathtaking 40% reduction in risk!”), and this numerical shell game is copied in the mainstream media, the press, medical journals, even the FDA. Pharmaceutical companies, marketing reps, and even some physicians, anxious to publish and usually supported by commercial drug interests, are constantly pushing and exaggerating the supposed benefits of their drugs while minimizing their risks.