Risk Assessment:
The U.S. Experience

European Policy Centre's Managing Risk: A Balancing Act
April 29, 1998
UK Presidency of the European Union


I'm here to talk about the U.S. experience with risk assessment. It is my hope that the European Union (EU) can learn from the U.S. experience so that actual risks to human health and the environment can be more accurately assessed in the EU.

Accurate assessment of actual risks will allow the EU to better weigh the benefits of regulatory action against its costs. This will lead to better risk management decisions, and better public health and environmental policies.

Risk assessment is often thought of as a scientific process consisting of four steps:

  1. Hazard identification: Scientific studies are reviewed to determine whether a potential hazard may cause harm. And by "potential hazard," I mean something like a chemical, or a condition in the environment, or an activity that people engage in.
  2. Dose-response assessment: A determination is made of the exposure levels at which the potential hazard may cause harm.
  3. Exposure assessment: This answers the question, "How much of the potential hazard are humans or the environment actually being exposed to?"
  4. Risk communication: This is the process of explaining the first three parts to policymakers, the media and the public in an accurate and understandable way.

Now I said that risk assessment is often thought of as a scientific process. And you can easily see where science would and should be used.

But the reality is that there is often little science available for risk assessment. There are two reasons for this.

First, there are tens of thousands of chemicals people are exposed to, with many different dose-response curves and exposure patterns. And there are countless environmental conditions and human activities that could be the subject of risk assessment.

Often, the science in risk assessment is limited because of constraints on time and resources. The resources to apply state-of-the-art science to the many potential hazards that may actually exist or be imagined simply don't exist.

Second, and perhaps more importantly, the questions being asked of risk assessment are simply too demanding of science -- at least as we know science now.

Science is simply incapable of determining whether low-level exposures to substances in the air, water and food have any human health or environmental effects. Often, risks to human health are too small to be seen or measured with the available tools.

And as far as impacts to the environment are concerned, we know even less, by orders of magnitude, than we do about human health risks.

These gaps and uncertainties in scientific knowledge have been recognized virtually since the practice of risk assessment began in the U.S.

But rather than stifle the risk assessment process, solutions were quickly developed and implemented.

Gaps and uncertainties in the risk assessment process have been filled with what I call "science policy."

Science policy is not science, but rather is a set of policy decisions for bridging gaps and uncertainties in scientific knowledge.

For example, let's say we want to know whether a particular chemical can cause cancer in humans.

It would obviously be unethical to test the chemical on humans. And such a test may be impractical because some cancers occur only decades after exposure has ended.

So the only sort-of-scientific way to test whether the chemical can cause cancer is to test it on laboratory animals.

So let's say we run such a test, and the results are that laboratory animals exposed to high levels of the chemical had higher rates of cancer than those exposed to lower levels or not exposed at all.

Does this mean that, automatically and as a matter of science, the chemical causes cancer in humans?

The answer is clearly "No."

First, laboratory animals aren't humans.

While there are many similarities between humans and laboratory animals, there are many differences as well. And it is these differences that likely change the human response to exposure to the chemical.

Second, the laboratory animals would be exposed to far higher levels of the chemical over a relatively short period than humans would be exposed to over the course of a lifetime.

For example, many of you may have heard of the chemical Alar, which used to be applied to apples in the U.S.

When Alar was tested on laboratory animals, the animals were given doses that a human could only get from eating 28,000 pounds of apples every day for ten years.

How is this experiment related to reality?

Now, giving laboratory animals such high doses is not entirely without reason. Animal experiments are typically very expensive and usually involve, at most, several hundred animals.

In order to maximize the probability of seeing an effect of the chemical, the animals are given a dose level called the maximum tolerated dose.

The maximum tolerated dose is essentially the greatest amount of the chemical the animal can be exposed to without becoming sick just from the exposure.

Now although such an experiment is obviously not scientific proof that the chemical can cause cancer in humans, for risk assessment purposes it is usually good enough.

And this is because of that concept I called "science policy."

Science policy says that, if a chemical causes cancer in animals, then it is assumed to cause cancer in humans as well.

Science policy says that, if the maximum tolerated dose causes cancer, then it is assumed that any level of exposure can cause cancer.

So science policy is not science. It is simply a set of assumptions.

In 1993, I was asked by the U.S. Department of Energy to report on the use of assumptions in the risk assessment process.

My group spent more than a year examining risk assessment and compiling the report. We interviewed scientists, regulators, regulated industries, public policy groups, and Congressional staffers to ensure we had a comprehensive report.

Before publication, the report was reviewed favorably even by the science advisor to the administrator of the Environmental Protection Agency, Carol Browner.

The report garnered a wide range of other favorable reviews — from the editorial page of the Wall Street Journal to the president of the Sierra Club. So I am quite confident that the report accurately portrays risk assessment in the U.S.

My report is titled Choices in Risk Assessment: The Role of Science Policy in the Risk Management Process.

Choices in Risk Assessment identifies the 10 most fundamental and commonly used science policy decisions. In the parlance of risk assessment, they are known as the "default assumptions."

I'll briefly go through them, and please keep in mind that they are only assumptions.

  1. A substance that causes cancer in animals also causes cancer in humans.
  2. In laboratory animal experiments, benign tumors are counted as malignant tumors.
  3. Scientific studies that do not show a chemical to be a risk are not to be used in risk assessment.
  4. If a substance causes cancer at very high doses, then it causes cancer at very low doses as well.
  5. The animal species most sensitive to a hazard is the appropriate species for use in risk assessment.
  6. Differences between species in the process of developing cancer are not to be considered in risk assessment.
  7. If a chemical causes cancer by one route of exposure, then it causes cancer by all other routes of exposure — so that if something causes cancer by inhaling it, it is assumed that ingesting the substance will also cause cancer.
  8. If something causes cancer at a high dose, there is no safe level of exposure.
  9. The dose-response relationship for a chemical is linear at low doses.
  10. Estimates of exposure to chemicals should represent the upper-bound limits of potential exposure. (The sketch after this list illustrates how assumptions 4, 9 and 10 combine in practice.)
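
To make the combined effect of assumptions 4, 9 and 10 concrete, here is a minimal sketch in Python of a linear no-threshold extrapolation from a single high-dose animal result. Every number in it (the bioassay dose, the tumor rate and the exposure estimate) is hypothetical and chosen purely for illustration.

    # A minimal sketch (hypothetical numbers) of linear no-threshold extrapolation.

    def lnt_slope_factor(animal_dose_mg_kg_day, excess_tumor_fraction):
        """Risk per mg/kg/day from one high-dose bioassay point, assuming
        linearity all the way down to zero dose (assumptions 4, 8 and 9)."""
        return excess_tumor_fraction / animal_dose_mg_kg_day

    def lifetime_risk(slope_factor, human_dose_mg_kg_day):
        """Extrapolated human lifetime excess cancer risk at a low dose."""
        return slope_factor * human_dose_mg_kg_day

    # Hypothetical bioassay: 20% excess tumors at 100 mg/kg/day (near the MTD).
    sf = lnt_slope_factor(100.0, 0.20)       # 0.002 per mg/kg/day

    # Assumption 10: plug in an upper-bound exposure estimate (hypothetical).
    upper_bound_exposure = 0.05              # mg/kg/day

    print("Estimated lifetime excess risk: %.1e" % lifetime_risk(sf, upper_bound_exposure))
    # -> 1.0e-04, i.e. 1 in 10,000, whether or not any effect actually
    #    exists at such low doses.

The point of the sketch is that the final number is produced entirely by the assumptions; no low-dose measurement enters the calculation anywhere.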

In addition to identifying these 10 default assumptions, we examined their impact on risk assessments and public health and environmental policy. We also examined more scientifically based alternatives to the default assumptions.

Our basic conclusion was that U.S. environmental policy in general, and particularly the process of risk assessment, is largely driven by policy, politics, and value judgments in the form of science policy. Science plays an exceedingly limited role.

And if you like lots of wasteful, unproductive, and intrusive governmental regulation, maybe this is acceptable.

But if you actually want to accomplish something for public health and the environment through risk assessment and environmental policy, and at the same time not cause undue and wasteful burdens on the general population and regulated community, then I suggest that the U.S. model is not what you want.

Before I move on to a recommended paradigm for risk assessment, let me give you an example of what U.S. risk assessment can "accomplish."

Superfund is the U.S. government program under which hazardous waste sites are identified, assessed and cleaned up. Whether a site gets cleaned up and how much it costs depends on the risk assessment done at the site.

There is a site in Montclair, New Jersey consisting of residential homes built over a former industrial area where radium was used.

The industrial site closed down during the 1920s and residences were built there following World War II. Until the 1980s, the residents lived there in peace.

During the 1980s, when the radon panic hit the U.S., someone discovered that parts of the community and some homes had unusually high levels of radiation. EPA soon swooped in and evacuated the entire community.

EPA performed a risk assessment for the community which concluded that the lifetime risk of lung cancer in the community was 40 percent.

This estimate was particularly notable since it was roughly 200 times greater than the lifetime lung cancer risk of a two-pack-a-day smoker.

According to EPA, this risk estimate justified permanent relocation of residents as part of a $250 million cleanup program — a cleanup which is ongoing today.

Now the problem is that there has been no observed excess of lung cancer in the community — let alone the high rates of lung cancer predicted by EPA. Why?

Because when EPA did the risk assessment, EPA's science policy decisions resulted in a risk estimate that was wildly exaggerated, to say the least, and likely fictitious.

It was assumed that low levels of radon cause lung cancer — an assumption not borne out by existing science.

It was assumed that the rate of lung cancer among homeowners could be predicted based on lung cancer rates among underground uranium miners — individuals who were exposed for many years to extreme levels of radiation and other respiratory irritants, and who smoked heavily.

It was assumed that residents would be exposed every day for 30 years to maximum levels of radon.

These assumptions are simply not backed up by any data or scientific knowledge.

Now, there is a right way to do risk assessment and that's what we'll talk about next.

These are my recommendations for risk assessment.

1. Risk assessment guidelines

First, risk assessment guidelines should be established in advance of conducting risk assessments. Risk assessment guidelines are the rules of the road for risk assessment.

They should identify and describe the process of risk assessment -- that is, what triggers a risk assessment, who may conduct it, what opportunities exist for participation by interested parties, who will review it, what its final products are, and how often and through what process it can be updated.

Risk assessment guidelines should also identify and describe the criteria by which data and studies will be evaluated. What types of data and studies are sufficiently reliable for risk assessment?

How should animal bioassays, epidemiologic studies and clinical trials be designed and conducted to be acceptable for risk assessment?

What are the criteria by which studies will be evaluated? For example, shouldn't study results be statistically significant at the 95 percent level?

For epidemiologic studies, bias and confounding should be ruled out as being responsible for observed results.

The magnitude of relative risks should be sufficient to overcome the limitations of the epidemiologic method. Usually, this means relative risks should be 2 or greater.
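
As an illustration of these two screening criteria, here is a minimal sketch in Python that computes a relative risk and its 95 percent confidence interval for a hypothetical cohort study. The counts are invented, and the interval uses the standard log-normal approximation.

    import math

    def relative_risk_ci(cases_exp, total_exp, cases_unexp, total_unexp, z=1.96):
        """Relative risk with an approximate 95% confidence interval."""
        rr = (cases_exp / total_exp) / (cases_unexp / total_unexp)
        se = math.sqrt(1/cases_exp - 1/total_exp + 1/cases_unexp - 1/total_unexp)
        return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

    # Hypothetical cohort: 30/1000 cases among the exposed, 20/1000 among the unexposed.
    rr, lo, hi = relative_risk_ci(30, 1000, 20, 1000)
    print("RR = %.2f, 95%% CI (%.2f, %.2f)" % (rr, lo, hi))  # RR = 1.50, CI (0.86, 2.62)

    # The two screening criteria described above:
    significant = lo > 1.0   # the 95% confidence interval excludes 1
    strong = rr >= 2.0       # magnitude large enough to overcome the method's limits
    print("Significant: %s; RR >= 2: %s" % (significant, strong))  # both False

Under the criteria above, this hypothetical study would be set aside: the association is neither statistically significant nor strong enough to overcome bias and confounding.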

Risk assessment guidelines need to be dynamic. They must keep pace with the state of science. If they don't, the risk assessment process will fall apart -- as it has in the U.S.

The EPA issued its first risk assessment guidelines in 1976. They were soon out of date.

But EPA didn't issue new guidelines until 1986. Again, almost as soon as they were issued, they were out of date.

And although EPA started working on new risk assessment guidelines in 1988, it wasn't until 1996 that EPA even proposed to update the 1986 risk assessment guidelines -- a proposal which still has not been finalized.

Unless risk assessment keeps pace with the state of the science, there is no incentive to develop new scientific knowledge to fill gaps and uncertainties and replace science policy.

When progress in risk assessment stops, confidence in the risk assessment process is lost.

2. Transparency

The process of risk assessment must be transparent to the public and the interested parties.

And by that, I mean that all the science, the gaps and uncertainties in knowledge, and the science policy decisions should be clearly stated and acknowledged -- and this does not mean buried in an obscure footnote in the appendix of a 3,000-page document that is only available from the agency upon special request.

Transparency also means that a risk assessment is conducted, to the extent possible, out in the open. Backroom negotiations and secret editing have no place in an open process.

One way to bring about transparency in risk assessment is to ensure that there is balanced representation of all viewpoints during the entire process.

Transparency may slow down the process of risk assessment somewhat, but it's better to have a risk assessment that is somewhat delayed, rather than an assessment that everyone is suspicious of.

3. Data access

The data underlying major scientific studies relied on in a risk assessment, as well as other key data used in a risk assessment, should be openly available to the public for independent review.

Part of the scientific method is that scientific results must be capable of replication. But unless data is made available for review, such replication is not possible.

Last year, the U.S. EPA issued new standards governing outdoor air quality. These regulations -- which are the most expensive environmental regulations ever issued by EPA, estimated to cost the U.S. economy more than $100 billion per year -- are largely based on a single epidemiologic study.

EPA used this study as a basis for estimating that 15,000 people die prematurely in the U.S. annually because of particulate air pollution.

When the U.S. Congress requested from the researchers the data underlying the study, the researchers refused.

When Congress asked EPA -- the funding source for the research -- to compel the researchers to provide the data, EPA at first refused, and then said it could not compel production of the data.

Now just this month, the National Research Council -- the research arm of the National Academy of Sciences -- warned EPA that there were too many gaps and uncertainties in the science to begin implementing the new regulations.

Of course, you might say this warning came a little late, as EPA issued the new standards more than nine months ago. And, right now, there is no way to turn the regulatory clock back.

There is no good reason to hide data. Risk assessment should not be based upon "secret science."

4. Peer review

My next recommendation concerns peer review.

Scientific studies are peer-reviewed before they are published in the scientific literature, and the EPA has its Science Advisory Board -- a group of scientific advisors who are not EPA employees. But by itself, this is inadequate peer review.

In my view, this setup simply pays lip service to peer review, and it has many faults.

First, many scientific journals are published by organizations that advocate on public policy issues. For example, the epidemiologic study I referred to in the air quality example was published in a journal of the American Lung Association.

Not only did the American Lung Association sue EPA in court to compel EPA to issue new air quality standards, but EPA has also given the American Lung Association millions of dollars in grants.

I don't know about you, but this raises questions in my mind about exactly how independent the peer review of that epidemiologic study was.

Second, when EPA's Science Advisory Board meets to review a risk assessment, the SAB members are often provided with the materials to review -- often several hundred pages -- only a few days before they are to pass judgment on them. Because of their busy schedules, many members can only flip through the material on the airplane to the meeting.

This circumstance makes it possible for a small number of reviewers -- those who perhaps have more familiarity with the material under review, or even the EPA staff itself -- to drive the peer review process.

Third, the EPA Science Advisory Board, although not made up of EPA employees, is in large part made up of people who have some affiliation with EPA -- either as grant recipients or contractors. If nothing else, EPA has final say over who sits on the Science Advisory Board. So EPA selects the reviewers, and the agency doesn't pick troublemakers.

Is this a workable situation? I don't think so.

Peer review only works if the reviewers are completely independent of the agency and the risk assessment they are reviewing.

Now, if peer review can't be made independent for some reason, then I suggest an adversarial process -- open and unfettered debate about the risk assessment by experts and interested parties.

Such a process might not always attain the height of civility, but at least it guarantees that all key issues will be thoroughly explored.

5. Oversight

The next recommendation I have concerns oversight of the risk assessment process.

During the 1980s and until 1993, federal agency risk assessments were reviewed and overseen by the White House Office of Management and Budget (OMB).

OMB was a very effective watchdog, working to ensure that if a risk assessment wasn't of sufficient quality, it was not going to be used to support burdensome regulation. For example, in 1990, OMB single-handedly prevented EPA from pouring gasoline on the EMF controversy by stopping the agency from declaring EMF to be a carcinogen.

In 1993, when President Clinton took office, OMB's watchdog role was ended. The agencies, most notably EPA, were allowed to do risk assessment however the agencies saw fit. It is during this time that U.S. risk assessment experienced its steepest decline.

The most notable symptom of this decline has been the shift from using science policy in risk assessment to using what I call "junk science." In the regulatory context, junk science is poor quality, exaggerated, or over-interpreted science used to justify regulatory action.

The problem with junk science is that, whereas science policy recognizes the existence of scientific uncertainty, junk science does not. Junk science pretends science exists where it does not.

I see the elimination of risk assessment oversight as being primarily responsible for the rise of junk science in risk assessment.

6. Judicial review

My next recommendation concerns judicial review of risk assessment.

Without a doubt, this is the most controversial issue in U.S. risk assessment. It is a major reason why risk assessment reform has been so difficult to achieve in the U.S.

And here's why. Risk assessments reviewed by U.S. courts often get sent back to the relevant regulatory agency to be redone according to a more rigorous legal and scientific process.

In 1980, the U.S. Supreme Court remanded to the U.S. Occupational Safety and Health Administration (OSHA) its risk assessment for benzene, requiring OSHA to provide a quantitative estimate of the actual risk associated with benzene. Merely labeling the chemical as toxic was insufficient as a "risk assessment."

In 1992, a federal appeals court struck down an OSHA regulation covering more than 400 chemicals. The court held that, although OSHA may use assumptions to fill gaps and uncertainties in scientific knowledge, such assumptions — or science policy — must have some basis in science.

It was not enough for OSHA to simply say that, because rats fed mega-high doses of a chemical experienced higher rates of cancer, humans exposed to much lower levels of that chemical were similarly at risk.

Also in 1992, another federal appeals court struck down EPA's ban on the use of asbestos. The Court held that EPA had failed to adequately consider the risks of substitutes for asbestos. The Court stated that an agency is required to regulate on the basis of the known, not the unknown.

In the U.S., the opponents of judicial review of risk assessment — that is, the regulatory agencies and advocacy groups — say that judges should not be able to overrule scientists and that judicial review is only intended to slow down, if not stop, the risk assessment process.

These are weak arguments.

As to judges overruling scientists, you'll note that in none of the three examples I mentioned did a court overrule a scientist. What the courts did was say that how the risk assessments were conducted — the processes — was inadequate. And judges are more than qualified to pass judgment on governmental process.

As to judicial review being used to delay or interfere with risk assessments, this is an easy issue to address: the process can be designed to ensure that risk assessments are not subject to judicial review on frivolous or improper claims.

7. Deal with uncertainty

I'll finish off my recommendations by dealing with the issue I started off discussing — science policy.

To do a risk assessment, addressing the gaps and uncertainties in science is unavoidable. Science policy helps bridge these gaps. But as we know, science policy is not science; it is only a set of assumptions.

U.S. science policy is built on something I like to call the "precautionary principle" — or "better safe than sorry."

Science policy decisions are always made so as to err on the safe or conservative side -- for example, we assume for purposes of a risk assessment that the human response to a low-level chemical exposure is the same as that of laboratory rats exposed to mega-doses.

In this context, science policy is an abject failure. Not only is each such assumption unrealistic, but when many assumptions are chained together — or compounded — the resulting risk estimates are wildly unrealistic.

To the extent science policy is used, it should be tempered with the confidence held in each assumption. For example, if I had 50 percent confidence in the assumption that people react to chemicals like laboratory rats, I would incorporate that level of confidence into the risk assessment.

It is sort of a conditional probability approach to risk assessment. It permits a risk assessment to go forward, but perhaps reduces the effect of assumptions that are of questionable validity.
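
Here is a minimal sketch in Python of what such a confidence-weighted approach might look like, contrasted with the conventional compounding of conservative assumptions. The inflation factors, confidence levels and baseline risk are all hypothetical numbers chosen purely for illustration.

    # (assumption, inflation factor if it holds, analyst's confidence that it holds)
    assumptions = [
        ("animal results apply to humans", 10.0, 0.50),
        ("linearity at low doses",          5.0, 0.30),
        ("upper-bound exposure",            4.0, 0.25),
    ]

    baseline_risk = 1e-6   # hypothetical risk with no conservative assumptions

    # Conventional approach: compound every assumption as if it were certain.
    compounded = baseline_risk
    for name, factor, confidence in assumptions:
        compounded *= factor

    # Confidence-weighted approach: expected inflation, given the stated
    # confidence that each assumption holds (no inflation if it does not).
    weighted = baseline_risk
    for name, factor, confidence in assumptions:
        weighted *= confidence * factor + (1 - confidence)

    print("Compounded upper bound: %.1e" % compounded)   # 2.0e-04
    print("Confidence-weighted:    %.1e" % weighted)     # 2.1e-05

In this toy example, discounting each assumption by the analyst's confidence in it reduces the estimate by roughly a factor of ten, which is exactly the sense in which questionable assumptions carry less weight.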

Conclusion

In conclusion: scientifically up-to-date risk assessment guidelines, a transparent process for conducting risk assessments, public access to key scientific data, independent peer review, access to judicial review of the risk assessment process, and explicit treatment of confidence levels in science policy are the way to make risk assessment work for the EU.

Risk assessment can be useful for setting public policy. It will be up to you to decide if this is the type of risk assessment that you want.

Lastly, we heard this morning about the role and importance of public perception of risk.

In the U.S., we don't have a federal agency to protect us against UFOs -- despite the not-uncommon belief among the public that aliens are real.

Why is there no UFOPA? Because the government acts responsibly and rationally in not acceding to public eccentricities.

And, indeed, this has happened many times in risk assessment history.

EPA decided not to label unleaded gasoline as cancer-causing even though it caused cancer in laboratory animals. EPA, which had been pushing to get rid of leaded gasoline for 20 years and had substantial institutional credibility locked up in unleaded gasoline, sought more science rather than apply standard science policy that would have condemned unleaded gasoline as carcinogenic.

The U.S. Public Health Service chose to use so-called "negative" studies, in contravention of standard science policy, in declaring fluoridated water to be safe, even though some studies had reported it caused cancer.

The National Academy of Sciences recently chose to use science in declaring the EMF scare to be without convincing evidence.

The EPA has participated in efforts to have the gasoline additive MTBE declared safe. And the FDA has refused to start a milk scare despite efforts by some who claim that milk contains harmful hormones, viruses and chemicals.

I suggest that government has the ability to act responsibly. But it must choose to do so.

A government can also make emotional appeals to the public, if it chooses to.

For example, instead of misleading the public about the pesticide DDT, the U.S. EPA could easily have pointed to a 1970 declaration by the National Academy of Sciences that DDT is credited with saving about 500 million lives. DDT is one of the greatest public health measures ever, next to chlorinated drinking water and vaccinations.

Instead, the EPA has presided over the demise of DDT, even though two million people die every year from malaria.

And if that is not "emotional" enough, I would direct you to the Imperial War Museum in London where you can see on display a canister of DDT used to de-louse concentration camp survivors following their liberation. This use of DDT undoubtedly saved countless thousands of lives from typhus.

A government can do the right thing, but it must choose to do so.



Material presented on this home page constitutes opinion of the author.
Copyright © 1998 Steven J. Milloy. All rights reserved.