
Hyping Health Risks: Environmental Hazards in Daily Life and the Science of Epidemiology

Geoffrey C. Kabat


Paper, 272 pages, 12 illus, 12 tables
ISBN: 978-0-231-14149-9
$26.00 / £18.00

July, 2008
Cloth, 272 pages, 12 illus, 12 tables
ISBN: 978-0-231-14148-2
$50.00 / £34.50


INTRODUCTION Toward a Sociology of Health Hazards in Daily Life

“Each society shuts out perception of some dangers and highlights others. We moderns can do a lot of politicizing merely by our selection of dangers.” —Mary Douglas and Aaron Wildavsky, 1982

We are all familiar with what has been referred to as the “hazard du jour” phenomenon. Typically, it starts with media reports of the findings of a new scientific study indicating that some lifestyle behavior, consumer product, or environmental factor is linked to some dire disease. Coffee drinking is linked to pancreatic cancer. Eating chocolate is claimed to predispose women to benign breast disease. Environmental pollution, we are told, may cause breast cancer. Studies appear to show a connection between exposure to electromagnetic fields from power lines and electric appliances and a host of diseases, starting with childhood leukemia. Use of cellular telephones may lead to brain tumors. Exposure to secondhand tobacco smoke, or passive smoking, is linked first to lung cancer, then to heart disease, and most recently to breast cancer. Silicone breast implants are associated with connective tissue disorders. The list could be extended at great length. Following the initial report, a second report may appear soon after, yielding further suggestive evidence of a hazard or, just as often, showing no effect. In this way, over time, a scientific literature develops on each topic marked by weak and inconsistent results, and the perception of a hazard takes on a reality.

Some scares, such as those surrounding coffee and cell phones, may subside fairly quickly, as better studies are published or as the hazard is put in perspective and deflated. In other cases, the hazard can take on a life of its own and persist over years or decades, becoming the focus of scientific research, regulatory action, lawsuits, and advocacy campaigns. In the case of electromagnetic fields from power lines, tens of billions of dollars have been spent to remediate a problem whose very existence is uncertain. But what characterizes all of these scares is that the public’s perception of a hazard was greatly exaggerated and was not counterbalanced by an awareness of the tenuousness of the scientific evidence or of the relatively modest magnitude of the potential risk. Thus, to a large extent, when one examines the public’s response to a high-profile health scare, one is dealing with the dissemination of poor information and appeals to fear. (It sometimes seems that the intensity of the fear is inversely proportional to the actual magnitude of the threat.)

To be sure, reporting of hazards in the media tends to reduce any question to the simplistic message that exposure X may cause condition Y, and the extensive background and necessary qualifications are rarely provided. But media attention to health hazards is only a symptom of a pervasive preoccupation in our society with risks to our health, and to understand their significance, one has to examine both the context and the full range of contributing factors. The distortion and inflation of health hazards is the result of a complex interplay between the consumers of information about health risks (the media, the public, activist groups) and the producers of this information (scientists and regulators), and the influences between these groups flow in both directions.

The processes and mechanisms by which the available scientific evidence on a question can be distorted have received little attention. This is not surprising. For one thing, studies in the areas of epidemiology and environmental toxicology are highly technical, and often there is disagreement among scientists about the available evidence and about the importance of a given agent, making it difficult for nonspecialists to evaluate. Furthermore, practicing scientists may not be inclined to look back and assess their own work critically or to examine the factors that led them to devote attention to a particular question as opposed to another. If one line of work does not pan out, scientists move on to something new, which is, by definition, “exciting” and “promising.” Science, like other disciplines, has its own internal structure of incentives and rewards that keep practitioners focused on the goals of improving methods and conducting more rigorous and powerful studies of questions that are agreed at a given point in time to be important. There is little incentive, therefore, to question the value of research on a specific topic, especially when that topic is of great concern to the public and regulators. But there are other reasons for the failure to examine certain health risks objectively, which we will come to in a moment.

In addition to practitioners directly involved in public health, psychologists and others have devoted attention to the new academic disciplines of “risk perception” and “risk communication.” Their objective has been to describe how people interpret information about various risks and how this calculus influences personal behavior and public policy. While contributing valuable insights, work in this area generally ignores how science is actually carried out, the impact of external factors on science, and the context in which its findings are interpreted and communicated to the public—precisely the issues that are central to this book. It is striking that some of the key collections of papers on risk perception make virtually no mention of epidemiology. This can be explained by the fact that these new disciplines focus on the consumers of information regarding risk rather than on its producers.

The purpose of this book is to elucidate how confusion about what is a threat comes about and what factors—both internal and external to science—contribute to it. How do certain perceived health hazards get blown out of proportion, in spite of well-known facts that should help to keep irrational fear within bounds? In attempting to address these questions, I have set myself three distinct but interconnected aims. First, I try to sketch out the variety of factors and processes that contribute to the manufacture of a hazard. Second, by introducing the reader to the basics of the science of epidemiology, I demonstrate the difference between well-established, important hazards and small, uncertain, or hypothetical hazards. Finally, in four detailed case studies I attempt to show how particular hazards have been inflated.

Health hazards of the kind that get seized on by the media, studied by different scientific disciplines, addressed by regulatory agencies, contested in the courts, and embraced by advocacy groups are what the nineteenth-century father of sociology Émile Durkheim termed “social facts.” That is, they are reflections of a particular society, and they function in specific ways within that society. To understand how certain health risks become greatly exaggerated, what is needed is an approach that incorporates an examination of the context in which science is carried out in the area of public health and how pressures and agendas that are internal as well as external to science can influence both what is studied and what is made of the scientific evidence. In other words, what is needed is a sociology of science in this area. This chapter attempts to describe the landscape in which certain health risks have been selected and distorted. Specifically, I examine the different groups and institutions that have played a role in the manufacture of several prominent hazards, as well as key considerations that have frequently been lost sight of in the public, and even the scientific, discussion.


It is a paradox of contemporary life that, although longevity as well as general health and well-being have increased dramatically in the United States over the past hundred years, we live in a society that is hyperattuned to any threat to our health, no matter how hypothetical or how small. This heightened concern about potential threats to our health stems from at least two major developments dating from the 1960s and 1970s: the rise of the field of epidemiology, the science of the determinants of disease in human populations, and the more or less concurrent rise of the environmentalist movement. Together, these two developments have not only created a new awareness of the determinants of health, but they also continuously reinforce and feed this consciousness with novel findings that command attention. Beginning with the first studies linking cigarette smoking to lung cancer and other diseases in the 1950s, findings from epidemiology have made commonplace the notion that specific behaviors and exposures may affect health. Owing to these discoveries, a new and powerful paradigm has emerged, offering the prospect that many common chronic diseases could be drastically reduced by curtailing the causative exposures. Over the same time period, the recognition of the need to regulate environmental pollution—embodied in the creation of the federal Environmental Protection Agency (EPA)—and the publicity surrounding many environmental issues—from air and water pollution to climate change—have fostered an unprecedented and widespread perception that our actions have effects on the environment and that the environment has effects on human health. While reports of specific findings may be difficult for the lay public to assess, each new report of an environmental hazard or of a putative link between an exposure and disease underscores the message that we are surrounded by myriad agents in the environment that may pose a threat to our health.

Today the science of epidemiology occupies such a prominent place in the health sciences, in regulatory affairs, and in the news that it is easy to forget how recent a development this is. When the landmark studies linking cigarette smoking with lung cancer appeared in the early 1950s, there were no departments of epidemiology, and the methods used in these early studies were untested and were challenged by some of the most respected statisticians. During the 1960s and 1970s, epidemiology proved itself as a science by a series of important successes in identifying preventable causes of disease. Over the past four decades the field has grown at a prodigious rate, whether measured by the number of departments and programs in schools of public health, the number of textbooks devoted to subfields of epidemiology, or the number of research papers appearing in journals.

Since epidemiology is a young science and because, of necessity, it relies for the most part on observational, as opposed to experimental, studies, it is not surprising that many of the associations that are found and reported turn out to be spurious, that is, noncausal, associations. For one thing, chronic diseases like cancer, heart disease, and Alzheimer’s are complex, multistage, “multifactorial” conditions. For example, a large international study has recently shown that ten different factors contribute to the occurrence of heart disease. (Lung cancer, which has one predominant cause—smoking—that accounts for the vast majority of cases, is probably an exception.) Furthermore, the relevant period of exposure for “diseases of long latency” like cancer and heart disease can be decades preceding the appearance of symptoms, making it difficult to establish a causal relationship. Usually, one has only crude information collected at one point in time to represent the actual exposure over this long period. Thus, a major problem confronting epidemiologic studies of low-level environmental exposures that have been the focus of public concern is that it is extremely difficult to know what an individual’s exposure was over a period of decades that may be relevant to his or her risk of developing disease.

Finally, many of the strong relationships—the low-hanging fruit, so to speak—like smoking and lung cancer and alcohol consumption and oral cancer have already been identified. It is much more difficult to reliably identify new factors that play a role in a given disease when these may be subtle, leading at most to a doubling or tripling of the risk in those with the factor compared to those without the factor. Often, we are dealing with even smaller increases of 10–100 percent. (For comparison, people who have ever smoked have a tenfold, or 900 percent, increased risk of lung cancer; current smokers have a 20-fold risk; and heavy current smokers can have as much as a 50- or 60-fold increased risk.) Scientists who have devoted their careers to studying the possible impact of low-level environmental exposures on chronic diseases readily acknowledge the immense difficulty of establishing credible linkages. Thus, it has to be realized that many more potential “risk factors” are studied than turn out in the end to be causes of disease. This is inherent in the process of science, and especially of a young and nonexperimental science such as epidemiology. This basic truth is well known to any scientist, and yet when it is formulated explicitly and vigorously, it comes as a surprise, seeming to conflict with deep, unconscious beliefs.
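For readers who want to check the arithmetic behind these figures, a minimal sketch of the standard conversion between "fold" relative risks and percent figures (the relative risks are the ones cited above; the function names are illustrative, not standard terminology):

```python
def percent_increase(relative_risk):
    """Excess risk over baseline: a relative risk of 2.0 is a 100 percent increase."""
    return (relative_risk - 1) * 100

def percent_of_baseline(relative_risk):
    """The same relative risk expressed as a percentage of the baseline risk."""
    return relative_risk * 100

# Relative risks of 1.1 to 2.0, typical of low-level environmental exposures,
# correspond to increases of 10 to 100 percent over baseline.
print(round(percent_increase(1.1)), percent_increase(2.0))   # 10 100.0

# A tenfold relative risk (ever-smokers and lung cancer) is 1000 percent
# of the baseline risk, that is, a 900 percent increase over it.
print(percent_of_baseline(10), percent_increase(10))          # 1000 900
```

The distinction matters because "a 1000 percent risk" and "a 1000 percent increase in risk" differ by exactly the baseline, which is a common source of confusion in popular reporting.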

For these reasons, whether a hazard exists and, if so, its magnitude is often difficult to resolve. In some instances the vague penumbra of a hazard associated with a given exposure can persist for decades—despite, or because of, inconclusive evidence—contributing to a pervasive sense of distrust regarding the environment. Over time, the succession of media reports raising the possibility of new potential hazards has the contradictory effects of reinforcing a general message—the influence of “lifestyle” behaviors and the environment on health—and at the same time thoroughly confusing the public about what to believe. It is easy to see how, in many people, this torrent of “significant” findings would produce only a deep and generalized sense of anxiety or else apathy and fatalism. And such fatalism may itself be a health risk!

I should note that the vast majority of research on factors influencing health never attracts the attention of the public and usually is of interest only to a small number of specialists working in a particular area. It is only when research pertains to a potential hazard that has become a focus of intense public concern and government action that the kinds of distortion and misrepresentations that I will discuss take place.


Although we tend to think of science as occupying an Olympian realm that is insulated from social, political, and ideological influences, the fact is that science is not conducted in a vacuum. Historians and sociologists of science like Thomas Kuhn, Paul Feyerabend, Robert K. Merton, and Michel Foucault have drawn attention to the role of historical, social, ideological, and personal factors in the formation and acceptance of scientific ideas. Particularly in the area of science dealing with health and the environment, different groups and institutions have very different interests at stake when it comes to the science and policy concerning health risks. The starting point for any discussion of external influences on science in this area has to be industry’s long-standing and well-documented record of efforts to suppress or neutralize credible scientific evidence of adverse health effects due to its products or processes. Tobacco, asbestos, and lead are only some of the more dramatic and better known entries in this record. Powerful industries have routinely used their substantial resources in the legal, regulatory, and public relations spheres to put a favorable spin on the science relating to their products and to thwart regulation. Furthermore, how vigorously regulatory agencies, like the Environmental Protection Agency and the Food and Drug Administration, confront powerful corporate interests varies with the political climate. We have recently witnessed a powerful demonstration of how the party in power can attempt to edit and mold scientific findings to fit its philosophy and the requirements of its political supporters.

Given the inherent difficulty of establishing credible links between low-level exposure to environmental toxins and chronic disease, it is hardly surprising that the assessment of potential environmental health hazards is hotly contested and that there is a sharp antagonism between the proponents of unfettered freedom for commercial enterprise and those concerned with protecting the public’s health and improving occupational safety. Each side tends to cite the evidence that supports its point of view in order to influence public policy.

It needs to be recognized that, in spite of industry’s capacity to manipulate science affecting its interests, those who present themselves as defenders of public health and the environment are not always free of bias and self-serving agendas. But, whereas most of us are well aware of industry’s bias, we are much less on guard against other manifestations of bias from other quarters, especially when these are cloaked in a self-validating rhetoric. The highly charged climate surrounding environmental health risks can create a powerful pressure for scientists to conform and to fall in line with a particular position. Researchers who fail to find evidence of an adverse health effect or who criticize the reigning consensus on a particular topic can be branded as biased in favor of industry. When it comes to questions about the long-term health effects of low-level environmental exposures, the scientific evidence is often ambiguous, and policy decisions have to be made on the basis of incomplete data. In this situation, what is at stake far exceeds the often inadequate evidence on a specific question. Owing to the intensity of the conflicting points of view, a distorted picture of the evidence on a given question can be built up, and few in a position to know will risk the costs of speaking out. Furthermore, few attempts are made to step back and sort out what the scientific evidence has to say, as opposed to ideologically or professionally driven interpretations. For these reasons, things are not always what they appear to be when we are dealing with topics on which deeply entrenched interests clash.

If I have focused on examples of hazards that have been inflated, this is partly because this phenomenon has received little critical attention, in contrast to instances in which evidence of real or probable hazards has been suppressed. The fact that a threat was overblown, and how this came about, is simply not newsworthy in the way that a new alleged hazard is. What was the focus of great concern at one point in time merely fades from view and is replaced by new claims on our attention. However, I believe that these episodes have much to tell us about how scientific evidence can be overplayed and distorted information conveyed to the public. Furthermore, I believe that the tendency to overstate the evidence, for whatever purpose, actually strengthens the opposing party’s hand. It sanctions the partisan use of science that should be rejected, no matter who is engaging in it. Ultimately, by failing to put certain potential hazards in perspective, one confuses the public and diverts attention from issues that may be far more important. Certainly, there is no dearth of environmental and public health problems that need sustained—not rhetorical—attention.


There is a large gap between the way in which the general public perceives environmental risks and the way scientists view these same risks. For example, in a survey conducted by Harvard University, 55 percent of lay women believed that “chemicals” in the environment played a role in causing breast cancer, whereas only 5 percent of scientists did. Other surveys have shown that “technical experts” and the public have different perceptions and “acceptance” of risks from diverse sources of radiation exposure. For example, technical experts assessed the risk associated with nuclear power and nuclear waste as “moderate” and “acceptable,” whereas the public viewed it as “extreme” and “unacceptable.” Scientists also rated food irradiation as “low risk” and “acceptable,” whereas the public rated food irradiation as “moderate to high risk.” In contrast, the public was “apathetic” toward residential radon, which it considered “very low risk,” whereas experts judged it to carry “moderate risk” and to require “action.”

Consideration of benefits, as well as risks, also influences the perception of hazards. Results of a national survey regarding perception of risks and benefits from radiation and from chemicals indicated that risks from nuclear power were viewed as moderately high and the benefits low, whereas the risks from medical X-rays were viewed as low and the benefits moderately high. Similarly, risks from pesticides were seen as moderately high and the benefits low, whereas prescription drugs were viewed as posing a low risk and having large benefits. Researchers have discerned a logic behind these subjective perceptions of risk. Certain risks, such as those associated with X-rays and prescription drugs, are accepted because of their perceived benefits and because of the trust placed in the medical and pharmaceutical professions. In contrast “the managers of nuclear power and non-medical chemical technologies are clearly less trusted and the benefits of these technologies are not highly appreciated, hence their risks are less acceptable.”

In addition to such subjective factors influencing the perception of risk, basic concepts can have one meaning for the specialist and a very different meaning in popular usage. In popular usage, the word “risk” immediately suggests a hazard, whereas for the scientist it denotes a purely mathematical statement about the occurrence of a phenomenon. Similarly, the word “environment” as used by scientists means something very different from what it means in common usage.


Anyone confronted with an inexplicable and serious disease will have a powerful motivation to find an explanation. And, since the mind abhors a vacuum, a possible explanation is preferable to no explanation. When families and advocacy groups focus on a particular condition about which little is known, the drive to come up with an explanation can be so powerful that it can lead people to disregard solid scientific evidence and opinion in favor of a plausible-sounding explanation that has no scientific support. A recent instance of this phenomenon is the widespread conviction among many parents of children with autism that thimerosal, a mercury-containing preservative used in some childhood vaccines, must have played a role in their children’s disease, despite the existence of large studies that show no support for this claim. Similarly, some breast cancer activists have held a strong conviction that environmental pollution must play a major role in explaining breast cancer occurring in their communities, although what is known about breast cancer does not suggest that pollution plays an important role. Popular movies like A Civil Action and Erin Brockovich reflect the powerful drive to identify the cause of a cluster of cases of a rare disease, even though the science is often far from conclusive. The sheer force of belief is also in evidence among antismoking activists who have made questionable claims about the adverse health effects of modern smokeless tobacco.

Depending on the specific characteristics of the hazard (in terms of perceived risks and “acceptance”) and the nature of the available scientific evidence, the existence of a hazard can be more or less widely—and uncritically—accepted by the general population and sometimes by scientists themselves.


Modern science can measure contaminants down to the parts per billion range and below; however, it is much more difficult to determine what levels of exposure are associated with long-term health effects. Extremely low levels of dioxin can be detected in paper towels, and methylmercury has been detected in farm-raised salmon. Of course, we should not be complacent about the presence of detectable toxins in our food and environment, and it is imperative to rigorously monitor the levels of toxins in the environment, in food, and in our bodies. At the same time, it has to be recognized that health scares can be triggered by reports of extremely low levels of a toxin that are well below the level likely to have any biological significance. In the case studies discussed in this book, this basic distinction either tends to get lost or, often, is totally absent. Furthermore, it also has to be realized that the cost of eliminating every trace of toxic compounds from food and the environment may be prohibitive; and even if this could be achieved, such a step might entail other negative consequences, such as making food less affordable.

In public discussions of environmental hazards, the critical principle of toxicology, attributed to the sixteenth-century alchemist and physician Paracelsus, that “the dose makes the poison” is rarely even acknowledged. And yet, it is well established that there are many substances that are toxic at high levels but beneficial at low levels. This is true of essential nutrients, including vitamin A, beta-carotene, and trace elements such as iron, copper, zinc, selenium, cobalt, magnesium, and manganese. It is also true of ethyl alcohol. But whether there is a benefit, no effect, or a harmful effect when we get down to very low levels, often the focus is only on the existence of a risk, without any accompanying awareness of how low the exposure is. Often we are operating in a low-dose region where it is difficult to determine what the effect is, or whether there is any effect. In some instances the lack of definitive resolution regarding the health effects of very low-level exposure to an agent such as ionizing radiation can paradoxically lead concerned groups to ignore just how low the levels actually are.

Few people are aware of the extent to which certain controversies can be about exquisitely small doses and effects. It should be noted that scientists bent on promoting the importance of their work or on furthering an ideological cause can also fail to acknowledge this principle.


Scientists can contribute to the manufacture of a hazard or its inflating in a number of ways and for a number of reasons. Being human, scientists can get excited about a novel result that could have important implications, and they may overstate the significance of their findings. The familiar pattern of a dramatic new finding that is not confirmed when more careful studies are carried out is no doubt partly explained by the desire for novel findings. Furthermore, scientists can develop an attachment to a particular hypothesis or line of inquiry, and this can subtly influence the way in which they evaluate the relevant evidence. Sander Greenland, an epidemiologist at the University of California, Los Angeles, has provided an incisive formulation of what is at issue when interpreting research findings:

“There is nothing sinful about going out and getting evidence, like asking people how much do you drink and checking breast cancer records. There’s nothing sinful about seeing if the evidence correlates. There’s nothing sinful about checking for confounding variables. The sin comes in believing a causal hypothesis is true because your study came up with a positive result, or believing the opposite because your study was negative.”

This applies not only to the individual study but also to the totality of the evidence on a given question. In both cases it is imperative to critically evaluate all of the relevant evidence and to assess what can be concluded from it. But the fact remains that it is difficult to study a new question without in some way privileging those findings that tend to support the hypothesis. There is a built-in disposition to want to believe that what one is studying is important.

In order to obtain funding, a scientist needs to make a convincing case to other scientists—who are his peers and competitors—that his idea is worthy of receiving limited federal dollars. And to be successful, it is crucial for a grant proposal to contain “preliminary data” that lend support to the proposed line of research. This can create a pressure toward emphasizing what is “interesting” in one’s recent results in order to make the strongest case for the importance of the proposed work. Science is no different from other more mundane fields of endeavor—one has to sell one’s idea.

Scientists can also overstate the importance of a body of evidence in order to further a policy objective that they feel is important. Furthermore, once an inflated assessment becomes widely accepted, there is an incentive to emphasize confirmatory evidence and downplay other evidence.

The real-world conditions under which epidemiological research on environmental factors affecting human health is carried out also affect how questions regarding hazards are initially conveyed to the public and pursued. Attention to a new environmental hazard tends to conform to a pattern. An early, provocative study—which is usually small and rudimentary—appears to show evidence that a given exposure is associated with a given disease. In response to this initial finding, scientists undertake new studies to address the question. In some cases, existing data can be used for this purpose, and informative results can be obtained quickly. But often scientists will need to apply for funding to carry out full-fledged studies from scratch that attempt to improve on the initial study. Whereas laboratory scientists can carry out an experiment (or a series of experiments) in a matter of weeks or months and, if it does not pan out, quickly shift to a new line of inquiry, the typical epidemiologic study involving a large population can take nearly a decade from the grant proposal stage to the publication of results. This reality has implications for the life of a putative hazard. If a hazard attracts the attention of scientists, a wave of reports based on more mature studies may appear a number of years following the initial report(s), and new studies may continue to appear for many more years. Sometimes, by the time the results of the later studies appear, the scientific community has all but lost interest in the question. In epidemiology, one is wedded to the data one has collected and has an obligation to publish results that might have been of burning interest ten years earlier but may now be old hat.


When a putative hazard achieves a certain level of prominence, regulatory agencies are likely to become involved in order to demonstrate their responsiveness to a perceived threat. Here, at the interface of science and policy, scientists may be playing a political role as well as a scientific one, and the two roles may be hard to separate. Findings that would diminish the importance of the putative hazard or that would help to put it in perspective may be given short shrift. Certain kinds of evidence may be given more weight, and others may be downplayed in order to fit the conclusion that the committee wants to arrive at. Studies that show the desired association may be rated as superior to those that fail to show an effect, in a travesty of impartiality. Finally, one can exaggerate what may be a small or borderline effect, making it seem more consequential than it is. By such emphases and omissions, one can make the evidence appear more uniform or meaningful than it is in reality. Regulators and health officials may overstate the evidence in order to promote a specific policy (reducing exposure to tobacco smoke) or to foster public consciousness of an alleged hazard (radon in homes, electromagnetic fields). Central to such instances in which hazards are inflated are a failure to be sufficiently critical of the quality of the available evidence and a disposition to accept the evidence as if it were more meaningful than it is. Scientists who are willing to apply a more critical standard and to say outright that a great deal of what is published on certain topics is of poor quality are in a distinct minority.

In some instances, the very composition of an expert committee charged with writing a report—and particularly the leadership of the committee—can influence the conclusions and the tone of the report. If a committee is heavily weighted with scientists who have a stake in a particular area, this can color the assessment of the hazard. Agencies, like individuals, can become wedded to a particular belief, and this can influence their evaluation of the evidence and even what projects are deemed worthy of funding.


When public concern about a perceived health hazard reaches a certain point, politicians may take it up to be responsive to their constituents. For example, in the late 1980s, concern about a danger from radon in homes became a focus of national attention, and legislators and federal agencies had a profound influence on how this scientific question was formulated and on the policies that were adopted. A number of senators and representatives took a vigorous role in addressing the suspected hazard. However, the congressional hearings held regarding radon tended to solicit expert testimony only from those scientists who saw domestic radon as a serious problem and not from equally qualified scientists who had compelling reasons to think otherwise. Legislators also tended to favor the aggressive policies of the EPA, as opposed to the more conservative approach of the Department of Energy. Similarly, in the early 1990s the Long Island senator Alfonse D’Amato became a spokesman for Long Island women who feared that environmental pollutants were contributing to what were thought to be high rates of breast cancer in their communities. Largely as a result of D’Amato’s efforts, Congress allocated thirty million dollars to fund government-sponsored research to address the causes of breast cancer on Long Island.

While politicians can perform a vital role in advocating for increased funding for scientific research on important but neglected topics, their involvement in highly charged issues can also contribute to the inflation of perceived health risks.


The most obvious and most proximate source of the hyping of risks is media coverage of stories involving health risks. Here there is a clear selection process at work since reports of positive findings—that is, of a hazard—are much more likely to be published and prominently featured than reports of no effect. The most conspicuous media reporting of suspected hazards is fundamentally antithetical to the essence of science. Science operates by the continual incremental accumulation of new data and the constant revision of theories to account for new and better data. In contrast, the “newsworthy” item is presented in isolation without the background or qualifications that would help put it in perspective.

Having said this, however, we need to distinguish between news reports that feature a potential threat on the one hand and broader and more critical articles discussing what is known on a particular topic on the other. I have been referring to the former, which give currency to a poorly substantiated hazard. I will cite many examples of the latter, which often do a superb job of explaining relevant considerations and putting a putative hazard in perspective. Sometimes an article combines both features, but in such cases one may have to read to the end to reach the critical commentary.

The various groups and mechanisms discussed above do not exist in isolation but, rather, impinge on and influence each other. Scientists can respond to a public perception of a hazard and can both study a problem and produce findings that add to public concern. They can advocate for the need for further study of an issue when they may stand to benefit from special research programs. Regulatory and health agencies can respond to a perceived threat by setting aside special funding and by issuing reports and advisories. These special programs are then interpreted by the public and the media as confirming the existence of a hazard. It needs to be emphasized that a public aroused by anxiety can be useful to researchers, regulators, and politicians committed to pursuing a suspected hazard. After contributing to public concern regarding a specific hazard, scientists may then cite as a justification for further research the need to allay the public’s fear. Some observers have noted that, as long as there is public concern and money to conduct studies, it is likely that scientists will continue to study an issue.


I have already referred to the fact that certain types of exposures are viewed by the public as carrying a higher risk and as being less acceptable, in contrast to the way scientists view them. In addition, it is significant that not all agents have equal power to evoke the deep terror that is associated with certain types of hazards. For example, major known causes of chronic disease, such as smoking, heavy consumption of alcohol, obesity, and excessive exposure to solar radiation, do not engender the kind of anxiety that attaches to other perceived threats over which people have less direct control. This may provide a clue to one of the necessary conditions for a putative hazard to become a focus of societal concern. Hazards that are nondiscretionary and thus beyond our ability to control and that are invisible—such as electromagnetic fields, ionizing radiation, and chemical pollutants—have a greater potential to inspire terror than everyday, discretionary behaviors like smoking, drinking, weight gain, and exposure to the sun’s rays.

Also, it is much easier for people to focus on external threats than to take responsibility for personal behaviors that are difficult to change but which have a much greater impact on health. Smoking and obesity are responsible for a large proportion of death and disability, and yet people find it very hard to change these behaviors—to quit smoking, change their eating habits, control their weight, and exercise more. Thus, when scientists, regulators, and the media overstate the significance of a potential environmental hazard (like electromagnetic fields from power lines, residential radon, or secondhand tobacco smoke), this plays into the public’s predisposition to focus on certain types of risks. But it also happens that nondiscretionary exposures are often much harder to measure in scientific studies than personal factors, like smoking, alcohol intake, and body weight, and exposure in the general population tends to be at low levels. These realities add another critical element that characterizes such hazards, namely, uncertainty.

When examined in their own right, the distortions and exaggerations of health risks have much to tell us about the interplay between the science of public health and the wider society. The anthropologist Mary Douglas and the political scientist Aaron Wildavsky have argued that the selection of risks is “socially constructed” and gives expression to deep and often unconscious preoccupations of a society. Douglas and Wildavsky identified fears concerning technology and environmental pollution as forming a powerful ideological complex that has a profound effect on the political process. More recently, the biochemist Bruce Ames has provided a vigorous critique of the tendency to assume that synthetic compounds must be harmful and that “natural” compounds must be good and the tendency to give undue weight to minute exposures to synthetic chemicals. These kinds of unquestioned, deeply held beliefs can play a role in what scientific research gets funded and how it is interpreted by groups with a strong vested interest in a particular question, as well as by the public at large. Thus, there is a complex and inescapable, but largely unacknowledged, interplay between the parties and institutions that contribute to the manufacture or inflation of a hazard.


For all that distinguishes them from one another, the diverse parties that contribute to the manufacture of a hazard appear to have one thing in common. They are all susceptible to what has been called the “availability heuristic.” This term, coined by the psychologists Daniel Kahneman and Amos Tversky, refers to the fact that our judgment regarding the likelihood, and, hence, the importance, of an event can be skewed by what our attention is focused on. In essence, what is in front of us tends to blot out consideration of other factors, which may be more important. We are strongly influenced by the framing of an issue and by its vividness, or salience. Professional pollsters and psychologists are well aware of this phenomenon and have developed strategies for avoiding it. For example, if people are asked a single question about whether they think that exposure to “chemicals” causes breast cancer, a higher percentage will say yes than if “chemicals” is only one item in a list of possible causes. To give another example, our feelings about taking an airplane flight may be influenced by a recent airline crash involving hundreds of deaths, even though we know that in general flying is extremely safe. By the very act of focusing on a specific question and isolating it from its context, one is more apt to find evidence supporting its importance.

As a consequence of the kinds of biases, vested interests, and narrow focus of attention described above, on certain questions a sort of para-knowledge can be built up and widely disseminated. By para-knowledge, I mean propositions that are given currency and are presented in such a way as to be beyond questioning. Instead of a rigorously impartial and comprehensive account of the relevant evidence, what we get is a selected account tailored for instrumental or partisan purposes. The propositions disseminated in this way become self-evident “facts” that “everybody knows.” To give just a few examples: breast cancer is asserted to constitute an “epidemic” in certain regions of the country; radon and secondhand tobacco smoke exposure are each claimed to account for about one quarter of lung cancer cases occurring in never smokers; and exposure to secondhand smoke is officially stated to be responsible for about 50,000 deaths from heart disease each year and is labeled a cause of breast cancer in a report by a state environmental agency. This kind of factoid—detached from any sense of the evidence supporting the claim or any sense of how the alleged risk compares with established risks—is the end result of reports that overstate the evidence on a given question. Alongside this instrumental para-knowledge, there are more rigorous and impartial assessments of the evidence. But, not surprisingly, these cannot compete with the much more compelling and satisfying—and often politically correct—claims of an effect. These articles may exist in the literature but will be rarely cited. Overall, outright criticism of the reigning consensus by professionals who are in a position to know is often quite feeble.

The distortion of information about health risks has consequences and costs—at once societal and individual—which are rarely acknowledged. These include tens of billions of dollars in remediation of what appears to have been a pseudo-problem (I am referring to electromagnetic fields from power lines). Clearly, resources devoted to a poorly substantiated threat cannot be devoted to less sensational lines of work, which may prove to be more valuable. Another cost entails the needless anxiety about problems that were minuscule or nonexistent, and the confusion about what are firmly established and important health risks. The amplification of hazards diverts attention from real and consequential dangers. Perhaps most important is the corruption of science that occurs when, for reasons of political expediency and self-interest, responsible institutions and scientists attempt to create a consensus by ignoring divergent evidence. Not only are dissenting voices ignored or maligned, but the very undertaking is contrary to the essence of science, which relies on skepticism and an openness to new evidence and new interpretations.


I have tried to address this book both to the interested general reader and to specialists and students in public health. Chapter 2 is designed to give the nonspecialist “a feel” for what epidemiology is, the basic logic of epidemiologic inquiry, its wide range of application, and its value, when its findings are interpreted in a discriminating way. To prepare the reader for the case studies, I give examples of what substantial and well-established associations look like, in contrast to weak, ambiguous, and inconclusive associations. In view of the confusion caused by the seemingly unending series of news reports of studies often yielding contradictory results concerning threats, an attempt to make some of the central concepts of epidemiology accessible to a general audience seems like a worthwhile undertaking.

The core of the book (chapters 3–6) consists of four case studies of environmental hazards: environmental pollution and breast cancer; electromagnetic fields and a range of diseases; residential exposure to radon as a cause of lung cancer; and passive smoking as a cause of serious chronic disease. For each hazard, I trace the trajectory of the research studies over time and the ways in which factors internal and external to the science influenced the interpretation of the evidence and contributed to shaping the message that was conveyed to the public. Although it was necessary to discuss the nature of the evidence for each hazard, each of the case studies also tells a story that deserves wider attention. In order to recapitulate the main points of the chapters that follow, “major take-home points” are provided at the end of each chapter.

Each of the case studies has its own unique features. For one thing, the nature of the alleged hazard, its perception by the public, the circumstances surrounding exposure, and the available evidence differ in each case. For example, extensive evidence supports radon’s ability to cause lung cancer, whereas decades of research have failed to yield any convincing evidence that exposure to low-frequency electromagnetic fields in everyday life can cause disease. Also, the way in which external forces, including industry, advocacy groups, and government agencies, impinged on the controversy differed in each case. Nevertheless, in spite of these differences, a number of common features are discernible. In all four cases, early evidence of a hazard based on limited or flawed studies was exaggerated, and actions taken by government agencies served to promote the existence of a hazard and to ignore evidence that would have helped present a more accurate picture.

Other controversies could have been included—such as those pertaining to cellular telephones and brain tumors or general air pollution and chronic disease, to name just two; however, the overall conclusions would have been essentially the same. One justification for my selection is that my four topics involve very different types of exposure: residues of chemical compounds in food and water (organochlorine compounds); electromagnetic fields produced by power lines and electric appliances; a naturally occurring element found in the earth that emits high-energy particles (radon); and indoor air pollution produced by a discretionary personal activity, namely, smoking. The very different nature of these hazards makes all the more striking certain parallels in the way the scientific evidence was interpreted and the hazard exaggerated.

Another reason for focusing on these particular hazards is that I have had a direct professional involvement in epidemiologic research on three of the four topics (passive smoking, electromagnetic fields, and risk factors for breast cancer). And while I have not conducted research on radon, I have long been involved in research documenting the effects of the foremost cause of lung cancer, cigarette smoking, as well as in the causes of lung cancer in those who have never smoked. Having done primary research and contributed to the literature on a number of these topics, I have an intimate acquaintance with the relevant bodies of evidence. In following the scientific work on these hazards over the years, I have tried to maintain the dual perspective of the practicing epidemiologist and the skeptical observer.

In addition to a close reading of the literature on my four topics, I have benefited from interviewing a number of colleagues who have done important work on some aspect of the questions I treat, as well as several advocates who have played an important role in making breast cancer a research priority. These interviews and follow-up discussions deepened my understanding of key issues and helped me to clarify both the facts and my argument. What was most instructive about these interactions was how open accomplished scientists and activists were in acknowledging the tenuous support for some of their hypotheses and the wrong paths we are all capable of going down. This kind of frank assessment added a different dimension to what one can glean from the published scientific literature.


COPYRIGHT NOTICE: Copyright © 2008 Columbia University Press. All rights reserved.

About the Author

Geoffrey C. Kabat is a cancer epidemiologist whose research has focused on the effects of smoking, alcohol, diet, hormones, electromagnetic fields, and other factors. He has been on the faculty of the Albert Einstein College of Medicine and the school of medicine of the State University of New York at Stony Brook and has published over 80 scientific papers. Currently he is senior epidemiologist at the Albert Einstein College of Medicine in New York City.