Monday, December 10, 2012

Sometimes One Article IS Meaningful*



A new meta-analysis published in the latest edition of JNCI provides strong evidence that higher levels of carotenoids in women's bloodstreams may reduce the risk of breast cancer. Here's a pretty accessible article from HuffPo, and another that goes a bit more in-depth on the findings, what carotenoids are, and where you can find them. If you are a woman at risk of breast cancer, it's well worth reading. There's a lot to consider, though, so let's get to it.

In my last post, I said in no uncertain terms that the conclusions of any one study do not represent the full story; at best, a properly done study provides a possible clue. A lot of the time, studies provide little more than evidence that a hypothesis warrants further investigation. There are two potential exceptions to be on the lookout for when reading articles about health and medicine, and most publications are good about saying which type of analysis was performed.

One is called a systematic review, which is pretty much what it sounds like: the researcher surveys all of the evidence out there on a particular subject and lays it out in a single article. The other possibility is a meta-analysis, which is similar except that a new statistical analysis is performed on all of the studies pooled together, essentially expanding the sample sizes of the people exposed to a treatment or pollutant, and of the controls who are not. The latter is what this paper did.
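
To make the pooling idea concrete, here's a minimal sketch of one common approach, fixed-effect (inverse-variance) pooling, in Python. The study numbers are invented for illustration, and I'm not claiming this is the exact method the JNCI authors used:

```python
import math

# Hypothetical inputs: each study's estimated log relative risk (log RR)
# and its standard error. Made-up numbers, not the actual JNCI studies.
studies = [
    (-0.22, 0.10),
    (-0.05, 0.15),
    (-0.30, 0.20),
    (-0.12, 0.08),
]

# Fixed-effect (inverse-variance) pooling: bigger studies have smaller
# standard errors, so they get proportionally more weight.
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled log(RR): {pooled:+.3f} (95% CI {low:+.3f} to {high:+.3f})")
print(f"Pooled RR: {math.exp(pooled):.2f}")  # below 1.0 means protective
```

Notice that the pooled confidence interval comes out narrower than any single study's, which is exactly the point of pooling.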

You don't need much experience with data analysis to know that increasing the sample sizes can make results more representative of the general population, thus potentially allowing you to make a stronger conclusion. Every Royals fan knows by now that our April enthusiasm will be sucked dry by Memorial Day, when more games get played and hot starts level off, or revert to the mean, so to speak.
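
If you want to see that leveling-off in action, here's a quick toy simulation of mine (nothing to do with the paper): a hitter whose true talent is a .260 average looks wildly hot or cold over an April's worth of at-bats, but settles down over a full season:

```python
import random

random.seed(42)

# A .260 "true talent" hitter: simulate 10,000 stretches at each sample
# size and measure how far the observed average typically strays.
true_avg = 0.260
for n_at_bats in (50, 150, 600):
    averages = []
    for _ in range(10_000):
        hits = sum(random.random() < true_avg for _ in range(n_at_bats))
        averages.append(hits / n_at_bats)
    spread = (sum((a - true_avg) ** 2 for a in averages) / len(averages)) ** 0.5
    print(f"{n_at_bats:>3} at-bats: typical deviation from .260 is {spread:.3f}")
```

The typical deviation shrinks roughly with the square root of the sample size, which is the same reason pooled studies give steadier answers than small individual ones.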

The analysis should also smooth out the variations that can happen in smaller-scale results due to bias or chance, and provide insight into the "true" effect. Bias, in the sense I'm using it, does not refer to a Fox News-like investigator deviously "cooking the books" for a preferred outcome, but rather a tendency to under- or overestimate the effect of the exposure due to characteristics of the study subjects. Any single study's result could be thrown off by this kind of bias, and the theory is that looking at the fuller picture minimizes that effect. This particular analysis was done on eight cohort studies, which, suffice it to say for now, means that the subjects were not randomly assigned by the investigators, and such studies are particularly subject to these unintentional biases. This funnel plot, below, provides a visual representation:

[Funnel plot image]

Each little circle represents the result of a single published study. Just by chance, there should be some results that do not show a meaningful effect (negative, left of center), and some that do show a meaningful effect (positive, right of center). This is essentially just another visual representation of the bell curve that we're all familiar with.
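
A toy simulation (mine, with made-up numbers) shows where that funnel shape comes from: every simulated study below measures the same true effect, but the smaller ones scatter much farther to either side purely by chance:

```python
import random

random.seed(0)

# Every study measures the same made-up true effect (log RR = -0.15),
# but smaller studies have noisier estimates -- that widening scatter
# at small sample sizes is what draws the funnel.
true_effect = -0.15
for n_subjects in (50, 200, 1000):
    se = 1.0 / n_subjects**0.5   # rough stand-in for a study's standard error
    estimates = [random.gauss(true_effect, se) for _ in range(12)]
    print(f"n={n_subjects:>4}: estimates span "
          f"{min(estimates):+.2f} to {max(estimates):+.2f}")
```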

The big caveat, and this is the case for all reviews and meta-analyses, is that it's possible that literally every published study out there could be over- or underestimating the effect, so your "true" effect may still be questionable. This occurs because a study that doesn't show the hypothesized effect (a negative result) is less likely to be published. Journals like to publish studies that show something interesting, on the assumption that if nothing interesting happened, it isn't worth reading about. Here's a visual representation of what's called publication bias:

[Side-by-side funnel plots: complete evidence (left) vs. negative studies missing (right)]
When negative results do not get published, you get a funnel plot that skews in a single direction, like the graph on the right. If you look at the circles on both images, you can clearly see that their centers (i.e. the mean, a.k.a. your pooled result) are quite different. The image on the right overestimates the effect of whatever the patients are exposed to. It's certainly plausible that our carotenoids are prone to this situation. How do we know that there aren't 10 studies sitting in various researchers' file cabinets that will never get published because they were negative? They obviously aren't going to be included in a meta-analysis if they aren't published.
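
Here's a crude simulation of that file-cabinet effect (invented numbers again): give an exposure no real effect at all, "publish" only the results that happen to look protective, and watch the pooled estimate drift:

```python
import random

random.seed(1)

# 30 small studies of an exposure with NO real effect (true log RR = 0).
# Protective-looking results (log RR well below zero) get published;
# the rest go in the file cabinet.
true_effect = 0.0
studies = [random.gauss(true_effect, 0.25) for _ in range(30)]
published = [s for s in studies if s < -0.1]

print(f"Pooled estimate, all {len(studies)} studies: "
      f"{sum(studies) / len(studies):+.3f}")      # hovers near zero
print(f"Pooled estimate, published only ({len(published)}): "
      f"{sum(published) / len(published):+.3f}")  # looks protective
```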

So what's the real conclusion to take away from all this?

There seems to be pretty strong evidence that carotenoids may provide some sort of small-ish protective effect in regard to breast cancer, and only breast cancer as far as all of this evidence is concerned. I'll show you how to look at the results section with all the statistics gibberish and gauge the effect for yourself some other time. There are plausible mechanisms for how exactly this would work described in the HuffPo article, so it's not some mysterious shot in the dark. However, we're still well short of definitive proof. Eat fruits with high carotenoid content because they are generally yummy and healthful in many other ways, too. Do not buy a $20 bottle of carotenoid supplements, and beware anyone trying to sell you on them. And look at a headline such as this one and roll your eyes, now that you know better.

I'll come back to publication bias from time to time, because it's everywhere. The pharmaceutical companies run large controlled experimental trials where subjects are randomly assigned to treatment or control (which is why the results are considered more conclusive than a cohort study's), and they have plenty of incentive not to publish when the trials have a negative result. Everyone likes a good Big Pharma bashing sesh, and I'm happy to separate the genuine bullshit they pull from the conspiracies that don't really hold up to much scrutiny.






