New York Times | December 9, 2001 | By MICHAEL POLLAN
New technologies can bring mankind great benefits, but they can also cause accidental harm. How careful should society be about introducing innovations that have the potential to affect human health and the environment? For the last several decades, American society has been guided by the "risk analysis" model, which assesses new technologies by trying to calculate the mathematical likelihood that they will harm the public. There are other ways, however, to think about this problem. Indeed, a rival idea from Europe, the "precautionary principle," has just begun making inroads in America.
The problem with risk analysis, which came out of the world of engineering and caught on during the late 70's, is that it hasn't done a very good job predicting the ecological and health effects of many new technologies. It is very good at measuring what we can know - say, the weight a suspension bridge can bear - but it has trouble calculating subtler, less quantifiable risks. (The effect of certain neurotoxins on a child's neurological development, for example, appears to have more to do with the timing of exposure than with the amount.) Whatever can't be quantified falls out of the risk analyst's equations, and so in the absence of proven, measurable harms, technologies are simply allowed to go forward.
In Europe, a different approach has taken hold. When Germany, for example, discovered in the 70's that its beloved forests were suddenly dying, there was not yet scientific proof that acid rain was the culprit. But the government acted to slash power-plant emissions anyway, citing the principle of Vorsorge, or "forecaring." Soon, Vorsorgeprinzip - the forecaring, or precautionary, principle - became an axiom in German environmental law. Even in the face of scientific uncertainty, the principle states, actions should be taken to prevent harms to the environment and public health.
Germany's idea has since gone international. It has popped up in the preamble of the U.N. Treaty on Biodiversity and was written into a slew of protocols and rules issued by the European Union in the 90's. It informs treaties like the 2000 Cartagena Protocol on Biosafety, which allows countries to bar genetically modified organisms on the basis of precaution. The idea has not prevailed over risk analysis, however, at least not yet. The E.U.'s ban on American beef treated with hormones, for example, is based on the precautionary principle. But since world-trade rules are based on risk analysis rather than precaution, and the health risk of eating hormone-treated beef has not been proved, the World Trade Organization has ruled that the ban is illegal.
What explains the W.T.O.'s resistance to the precautionary principle? It doesn't sound like a revolutionary idea. Indeed, it sounds like common sense: better safe than sorry; look before you leap. But, in fact, the precautionary principle poses a radical challenge to business as usual in a modern, capitalist, technological civilization. As things stand, whenever questions are raised about the safety of, say, antibiotics in livestock feed, nothing can be done until someone finds the smoking gun. When President Bush earlier this year challenged the Clinton administration's tougher standards for arsenic levels in drinking water, he did it on the grounds that "the science isn't in yet." (He subsequently relented.) The problem very often is that long before the science does come in, the harm has already been done. And once a technology has entered the marketplace, the burden of bringing in that science typically falls on the public rather than on the companies selling it.
If introduced into American law, the precautionary principle would fundamentally shift the burden of proof. The presumptions that flow from the scientific uncertainty surrounding so many new technologies would no longer automatically operate in industry's favor. Scientific uncertainty would no longer argue for freedom of action but for precaution and alternatives.
Just how revolutionary an idea this really is has only now begun to dawn on thinkers tied to American industry. In April, a fellow at the Hoover Institution published an attack on the precautionary principle, calling it, quite rightly, "a wolf in sheep's clothing." The Bush administration has adopted a hard line in international negotiations. In the spring, its delegates to the Codex Alimentarius Commission, the world body that sets food safety standards for world trade, scuttled an agreement rather than allow precautionary language into a single footnote.
Critics argue that the precautionary principle is "antiscientific." No and yes. No, in the sense that it calls for more science in order to dispel the uncertainties surrounding new technologies and to develop less harmful alternatives. And yet there is a sense in which the idea is "antiscientific," if by scientific we mean leaving it to scientists to tell us what to do. For the precautionary principle recognizes the limitations of science - and the fact that scientific uncertainty is an unavoidable breach into which ordinary citizens sometimes must step and act.
Copyright 2001 The New York Times Company