Tuesday, February 3, 2015

On the Process

Here we are, in 2015, debating the safety and efficacy of vaccines. I've always shied away from writing about vaccines because there are so many people out there who can write about them so much better, and with a much bigger audience. But here we are, in 2015, and I guess every little bit can help.

What non-scientists often fail to grasp about science is that it's not just a collection of facts you read in a book. It's the only process of gaining information about the universe that is willing, and in fact actively desires, to prove itself wrong. You all know that the process starts with a hypothesis that needs to be tested, and that the results of the test may then lead to peer review, but where I think the understanding diverges is that this peer-review process absolutely does not stop at publication.

Authors have to defend their studies and their data to a collective body of tens of thousands of people around the world who have spent millions of cumulative hours studying the same field, subjecting themselves to some of the harshest criticism in any human endeavor. Sometimes they do this for decades, and if you're only casually paying attention, it seems like it goes back and forth and nobody knows anything, but that's almost never really the case. Sometimes conflicts of interest and fraudulent data are exposed that immediately discredit a study. Sometimes good ideas and compelling data really are suppressed, but the key is that this cannot be done by an entire field. Fifty years ago, scientists were certain that lead in gasoline was an environmental and human health disaster unfolding in slow motion. They knew that tobacco dramatically increased the risk of lung cancer. The "debate" was not scientific; it was entirely political. If a researcher is claiming persecution by some industry or another, see how that researcher's field as a whole views the claim before you assume anything. The power dynamic alone tells you little, and if you're like me, that's a very difficult thing to accept, but it's true.

So yes, the process is messy, but not so messy as to make cynicism and nihilism the appropriate response. If this process eventually results in a consensus among all of these people that is completely overwhelming, then if you "disagree", the problem is your understanding of the topic. It's not Big Pharma. It's not Big Ag. It's not Big Government. It's you. You don't get to bypass this process because you can make a fancy webpage about "toxins". You don't get to bypass this process with superficial catchphrases like "treat the cause, not the symptoms", or by lobbing around terms like "reductionist" or "scientism" that are often just catch-all terms to dismiss empirical evidence out of hand. You don't need to know how a vaccine works on a molecular level to know it works. A randomized controlled trial doesn't depend on a detailed understanding of biochemistry; it depends on simply comparing the final outcomes of two or more options. That's not reductionism, that's not arrogance, that's just the way we can tell if something works. If a new measles prevention therapy continually and undeniably works better than the MMR vaccine, and it's determined at some point that the reason has to do with chi or chakras or praying to Odin, then I guess it's time to unlearn everything I know.

If you watch something like "Cosmos", which is meant to communicate how various fields coalesce into an overall understanding of how the universe works, virtually every single fact mentioned in the show has gone through some version of this process, whether in physics, chemistry, biology, or whatever. Whenever a finding revolutionized a field, it was the result of years of build-up, and years of sometimes bitter debate. People have always tried to bypass this process, but they are always forgotten over time, because the process wins every time.

In every corner of every field you'll find a crank who has the credentials to adequately assess this same evidence, but for one reason or another decides to go against the grain. You might even find a Rand Paul or a Chris Christie to elevate these cranks and force dumbass pundits on cable news to debate things that have been settled for decades. But you'll know better, because now you understand the real meaning behind the phrase "extraordinary claims require extraordinary evidence."


Tuesday, December 24, 2013

Revisiting Lead and Violent Crime: A Year of Marinating On It

I'll just be honest: I've never really cared much for writing. A good post covering the sort of issues I want to tackle on this blog takes a ton of time and effort if I don't want to leave low-hanging fruit that could damage my credibility. I simply value other things I can do with my free time more than that, so it's been months since I've written anything at all. I feel I have to revisit the post that took off, though, and write about what has and hasn't changed in my thinking since.

I started this blog as a place where friends could find evidence-based answers to questions that pop up all the time in the media, and hopefully then they'd direct others here. After 7 posts, that was pretty much out the window, which still sort of blows my mind. It's very easy (and often justified) to be cynical of any institution, even science. It's also easy to instinctively fear something and assume you know enough about it to justify that fear. Critical thinking is something you really have to develop over a long period of time, but done right, it's probably the pinnacle of human ingenuity in history. That or maybe whiskey.

On the other hand, sometimes it's easy to slide closer to nihilism. Knowing something about randomness and uncertainty can lead to being overly critical of sensible hypotheses. There's the famous BMJ editorial mocking the idea that there's "no evidence" for parachutes being a good thing to have in a free fall. You always have to keep that concept in the back of your mind, to remind you that some things really don't need to climb the hierarchy of good evidence. I've spent some of the last year worrying that maybe I didn't do enough to show that's not where my analysis of Kevin Drum's article came from.

I think most people saw that I was trying to illustrate the evidence linking lead and crime through the lens of evidence-based medicine. The response was resoundingly positive, and much of the criticism centered on my half-assed statistical analysis, which I always saw as extremely tangential to the overall point. The best criticism forced me to rethink things and ultimately led to today.

Anyway, anecdotes and case reports are the lowest level of evidence through this lens. The ecological studies by Rick Nevin that Drum spends much space describing (which I mistakenly identified as cross-sectional) are not much higher on the list. That's just not debatable in any way, shape, or form. A good longitudinal study is a huge improvement, as I think I effectively articulated, but if you read through some other posts of mine (on BPA, or on the evidence that a vegetarian diet reduces ischemia), you'll start to sense the problems those may present as well. Nevin's ecological studies practically scream "LOOK OVER HERE!" If I could identify the biggest weakness of my post, it's that I only gave lip service to a Bayesian thought process suggesting that, given the circumstances, these studies might amount to more than just an association. I didn't talk about how simple reasoning and prior knowledge would suggest something stronger, or use this to illustrate the shortcomings of frequentist analysis. I just said that in my own estimation, there's probably something to all of this. I don't know how convincing that was. I acknowledge that one of my stated purposes, pointing out how the present evidence would fail agency review, may have come off as concern-trolling.
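
To make that Bayesian thought process a little more concrete, here's a minimal sketch in Python. Every number in it (the prior, the likelihoods, the three lines of evidence) is an assumption invented purely for illustration, not an estimate from the literature; the point is only that several roughly independent lines of evidence can move a modest prior a long way, which is the kind of statement a lone p-value can't make.

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: posterior probability that the hypothesis is true
    after observing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothesis: leaded gasoline meaningfully contributed to violent crime.
# All numbers below are made up for illustration only.
p = 0.30  # a somewhat skeptical prior, informed by lead's known toxicity

evidence = [
    # (P(seeing this | hypothesis true), P(seeing this | hypothesis false))
    (0.9, 0.4),  # ecological lag correlations replicate across many countries
    (0.8, 0.3),  # a longitudinal cohort shows an association in individuals
    (0.9, 0.5),  # known neurotoxicity affecting impulse control
]

for p_true, p_false in evidence:
    p = update(p, p_true, p_false)
    print(f"posterior = {p:.2f}")
```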

On the other hand, if there is indeed something more to this, it seems reasonable to expect a much stronger association in the cohort study than was found. Going back to Hill's criteria, strength of the association is the primary factor in determining the probability of an actual effect. When these study designs were first being developed to examine whether smoking caused lung cancer, the associations were many times stronger than what was found for lead and violent crime. The lack of strength is not a result of small sample sizes or being underpowered; it's just a relatively small effect any way you look at it. It would have been crazy to use the same skeptical arguments I made in that instance, and history has rightly judged those who did.
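
As a rough, purely hypothetical illustration of why strength matters (the counts below are invented and come from no actual study): a relative risk in the double digits is very hard to explain away with bias or an unmeasured confounder, while a relative risk close to 1 can be.

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Ratio of the outcome rate in the exposed group to that in the unexposed group."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Invented counts for illustration only.
# A smoking-style association: a huge relative risk.
print(relative_risk(200, 1000, 10, 1000))   # 20.0

# A lead-and-crime-style association: a modest relative risk.
print(relative_risk(60, 1000, 50, 1000))    # 1.2

# A confounder that merely doubles risk and is somewhat more common in the
# exposed group could plausibly produce the 1.2 on its own; nothing plausible
# produces the 20.
```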

Ultimately, I don't know how well the lens of evidence-based medicine fits the sort of question being asked here. Cancer epidemiology is notoriously difficult because of the length of time between exposure and onset of disease, and the sheer complexity of the disease. It still had a major success in identifying tobacco smoke as a carcinogen, but this was due to consistent, strong, and unmistakable longitudinal evidence from specific groups of individuals. Here, we're talking about a complex behavior, which may be even more difficult to parse. My motivation was never to display prowess as a biostatistician, because I'm not one. It was never to say that I'm skeptical of the hypothesis, either. It was simply to take a step back from saying we have identified a "blindingly obvious" primary cause of violent crime and we're doing nothing about it.

I think the evidence, along with informed reasoning, tells us that we have identified a "reasonably convincing" contributing cause of violent crime, and we're doing nothing about it. That's not a subtle difference, and whether one's intentions are noble or not, if I think evidence is being overstated, I'm going to say something about it. Maybe even through this blog again some time.

Tuesday, May 28, 2013

Risk, Odds, Hazard...More on The Language

For every 100 g of processed meat people eat, they are 16% more likely to develop colorectal cancer during their lives. For asthma sufferers who took dupilumab in a recent trial, the odds of suffering an attack were reduced by 87% relative to placebo. What does all this mean, and how do we contextualize it? What is risk, and how does it differ from hazard? Truthfully, there are several ways to compare the effects of exposure to some drug or substance, and the only one that's entirely intuitive is the one you're least likely to encounter unless you read the results section of a study.
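
Part of the confusion is that an odds ratio and a risk ratio are not the same thing, and they diverge badly when the outcome is common. Here's a quick Python sketch with made-up numbers (nothing below comes from the actual dupilumab trial):

```python
def odds(p):
    """Convert a probability into odds."""
    return p / (1 - p)

# Made-up event rates for illustration only.
p_placebo, p_drug = 0.40, 0.10

risk_ratio = p_drug / p_placebo              # 0.25 -> "75% lower risk"
odds_ratio = odds(p_drug) / odds(p_placebo)  # ~0.17 -> "~83% lower odds"

print(f"risk ratio: {risk_ratio:.2f}, odds ratio: {odds_ratio:.2f}")
# When the outcome is common, quoting the odds ratio as if it were a risk
# reduction makes the effect sound bigger than it really is.
```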

When you see statistics like those above, and pretty much every story reporting results of a study in public health will have them, each way of comparing risk elicits a different kind of reaction in a reader. I'll go back to the prospective cohort study suggesting that vegetarians are 1/3rd less likely to suffer from ischemic heart disease (IHD) than those who eat meat, because I think it's such a great example of how widely the interpretation can vary based upon which metric you use. According to this study, IHD was a pretty rare event; only 2.7% of over 44,500 individuals developed it at all. For the meat-eaters, 1.6% developed IHD vs. 1.1% of vegetarians. If you simply subtract 1.1% from 1.6%, you might intuitively sense that eating meat didn't really add that much risk. Another way of putting it is that out of every 1,000 people, 16 who eat meat will develop IHD vs. 11 vegetarians. This could be meaningful if you were able to extrapolate these results to an entire population of, say, 300 million people, where 1.5 million fewer cases of IHD would develop, but I think most epidemiologists would be very cautious about zooming out that far based upon one estimate from a single cohort study.

Yet another way of looking at the effect is the "number needed to treat" (NNT), which refers to how many people would need to be vegetarian for one person to benefit. In this case, the answer is 200 (not 20, as I originally wrote; oops!). That means that for every 200 people who decide to change their diet and cut out meat entirely, 199 wouldn't even benefit in terms of developing IHD during their lifetime.
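
For what it's worth, here's that arithmetic laid out as a short Python sketch, using the 1.6% and 1.1% rates quoted above; everything else follows from them.

```python
risk_meat = 0.016        # IHD incidence among meat-eaters
risk_vegetarian = 0.011  # IHD incidence among vegetarians

absolute_risk_reduction = risk_meat - risk_vegetarian   # 0.005, i.e. 0.5%
relative_risk = risk_vegetarian / risk_meat             # ~0.69, i.e. ~1/3 lower
number_needed_to_treat = 1 / absolute_risk_reduction    # 200

print(f"ARR: {absolute_risk_reduction:.1%}")
print(f"RR:  {relative_risk:.2f}")
print(f"NNT: {number_needed_to_treat:.0f}")
# Per 1,000 people: 16 meat-eaters vs. 11 vegetarians develop IHD.
```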


Wednesday, April 24, 2013

The EWG Dirty Dozen: Whoever Said Ignorance is Bliss Definitely Didn't Have Chemistry in Mind

Each year, the Environmental Working Group (EWG) compiles a "Dirty Dozen" list of the produce with the highest levels of pesticide residues on them. The 2013 version was just released this week, framed as a handy shopping guide that can help consumers reduce their exposure to pesticides. Although they do say front and center that it's not intended to steer consumers away from "conventional" produce if that's what they have access to, this strikes me a little as talking out of both sides of their mouth. How can you say that if you really believe the uncertainties are meaningful enough to warrant the list in the first place, and to present it with the context completely removed? I'm pretty certain the Dirty Dozen preaches to the choir and doesn't change many people's behavior, but the underlying message behind it, while perhaps well intentioned, does some genuine harm regardless. The "appeal to nature" fallacy and "chemophobia" overwhelm legitimate scientific debate, have the potential to polarize a nuanced issue, and tend to cause people stress and worry that's just not all that necessary. This is not going to be a defense of pesticides, but a defense of evidence-based reasoning, and an examination of how sometimes evidence contradicts or complicates simplified narratives. You should eat any fruits and vegetables that you have access to, period, no asterisk.

[Image caption: This latte should not be so happy. It's full of toxins.]
Almost 500 years ago, the Renaissance physician Paracelsus established that the mere presence of a chemical is basically meaningless when he wrote, to paraphrase, "the dose makes the poison." The question we should really be asking is "how much of the chemical is there?" Unfortunately, this crucial context is not accessible from the Dirty Dozen list, because context sort of undermines the reason for the list's existence.

When we are absorbing information, it comes down to which sources we trust. I understand why people trust environmental groups more than regulatory agencies, believe me. However, one of the recurring themes on this blog is how evidence-based reasoning often doesn't give the answers the public is looking for, whether it's regarding the ability to reduce violent crime by cleaning up lead pollution, or banning BPA. I think a fair amount of the mistrust of agencies can be explained by this disconnect rather than by a chronic inability to do their job. However true it may be that agencies have failed to protect people in the past, it's not so much because they're failing to legitimately assess risk; it's for reasons such as not sounding an alarm and looking the other way when we know that some clinical trials are based on biased or missing data. Calling certain produce dirty without a risk assessment is sort of like putting me in a blindfold and yelling "FIRE!" without telling me whether it's in a fireplace or whether the next room is going up in flames. When two scientists at UC Davis looked into the methodology used by the EWG for the 2010 list, they determined that it lacked scientific credibility, and decided to create their own model based upon established methods. Here's what they found: