The food world is buzzing about the study that came out last week claiming that organic foods contain higher levels of antioxidants and lower pesticide residues than non-organic foods. It has gotten so much media attention that I just can’t not comment. I don’t necessarily think it’s nonsense, but I am skeptical of its conclusions.
The study is a meta-analysis of previous papers on the topic. The authors read through 300+ studies of organic crops and aggregated their data. The idea is that with a bigger sample size and more data, we should have more power to detect a difference between the compounds in organic and non-organic foods. Frankly, I didn’t read the paper in great detail; I’m generally mistrustful of meta-analyses. In their essay “Statistical Assumptions as Empirical Commitments”, Berk and Freedman criticize meta-analyses, first on the grounds that it doesn’t necessarily make sense to assume that a treatment (organic farming practices, in this case) has the same effect across all studies:
“If we seek to combine studies with different kinds of outcome measures (earnings, weeks worked, time to first job), standardization seems helpful. And yet, why are standardized effects constant across these different measures? Is there really one underlying construct being measured, constant across studies, except for scale? We find no satisfactory answers to these critical questions.”
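To see why that assumption matters, here is a minimal sketch of fixed-effect inverse-variance pooling, the kind of aggregation a meta-analysis leans on. The effect sizes and standard errors below are made up for illustration; the point is that the whole calculation presupposes one common underlying effect across studies.

```python
def fixed_effect_pool(effects, std_errors):
    """Pool per-study effect estimates, assuming every study measures
    the SAME underlying effect (the assumption Berk and Freedman question)."""
    weights = [1.0 / se ** 2 for se in std_errors]   # inverse-variance weights
    total_w = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total_w
    pooled_se = (1.0 / total_w) ** 0.5               # shrinks as studies pile up
    return pooled, pooled_se

# Three hypothetical studies of an antioxidant difference (organic minus conventional):
effects = [0.30, 0.10, 0.25]
ses = [0.15, 0.20, 0.10]
est, se = fixed_effect_pool(effects, ses)
print(f"pooled estimate = {est:.3f}, pooled SE = {se:.3f}")
```

Note that the pooled standard error is smaller than any single study’s — that is the “more power” promise. But if the studies are actually measuring different things, the tidy pooled number is an average of apples and oranges.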
The studies used in the meta-analysis were done in countries all across Europe. Certainly there are regulations about what can be considered organic, but there’s no telling how crop handling differed from farm to farm, or how the outcomes were measured from study to study.
Furthermore, a successful meta-analysis relies on the assumptions of random sampling and statistical independence. Since the “units of analysis” are research studies, these assumptions hardly make sense. The studies clearly were not sampled randomly; the authors carefully read through hundreds of papers and chose the ones that met certain requirements. The assumption of statistical independence is even less justified. Berk and Freedman bring up an interesting point about the human side of research, and why studies simply cannot be independent:
“The assumed independence of studies is worth a little more attention. Investigators are trained in similar ways, read the same papers, talk to one another, write proposals for funding to the same agencies, and publish the findings after peer review. Earlier studies beget later studies, just as each generation of Ph.D. students trains the next. After the first few million dollars are committed, granting agencies develop agendas of their own, which investigators learn to accommodate. Meta-analytic summaries of past work further channel the effort. There is, in short, a web of social dependence inherent in all scientific research. Does social dependence compromise statistical independence? Only if you think that investigators’ expectations, attitudes, preferences, and motivations affect the written word – and never forget those peer reviewers.”
And here’s the kicker: the study was funded by an organization that funds research in support of organic farming practices. They state at the end of the paper that the “design and management” weren’t influenced by the funding organization, but it’s not difficult to imagine biases in how the proposal and research questions were formulated from the get-go.
It’s going to take more than a meta-analysis to get me to go organic.