"Our lives improve only when we take chances -- and the first and most difficult risk we take is to be honest with ourselves."
-- Walter Anderson
Most of us hate the idea of risk. While we
collectively spend a great deal of time and money to reduce it, we can never hope to
eliminate it. The reason is that some amount of uncertainty is "built in" to all
aspects of the world around us. In fact, at the smallest level of physical reality,
quantum physicists must deal only in probabilities, as they cannot predict with certainty
which events will occur, or where or when they might occur.
The very word "risk" tends to make us nervous, as
it portends the probability that something bad may happen to us. The word
"happen" is also a clue to our deep concerns about risk, because it indicates
events that are out of our control.
It is this feeling of helplessness that is at the heart of
why we are concerned about some risks and not others, and why we tend to react emotionally
when confronted with risk-related issues. According to Dr. Robert Scheuplein, former head
of the U.S. Food and Drug Administration's Office of Toxicology:
"When risks are perceived to be
dread[ful], unfamiliar, uncontrollable by the individual, unfair, involuntary, and
potentially catastrophic, they are typically of great public concern, or high outrage.
When risks are perceived as voluntary, controllable by the individual, familiar,
equitable, easily reducible, decreasing, and non-catastrophic, they tend to be minimized
by the public, or low outrage."
Our natural tendency to think in the manner described by Dr.
Scheuplein leads us to be overly concerned about risks over which we feel little control
and to feel little concern for risks where we can exercise significant control. Thus, we
are far more afraid of the potential effects of chemicals we unknowingly ingest than of
the food, drink and other substances we willingly ingest but shouldn't: Studies indicate
that poor diet, smoking and excessive alcohol consumption together account for about
two-thirds of cancers (68%), while food additives account for less than one percent (The Causes of Cancer, Doll and Peto, 1981).
This doesn't mean that risks associated with chemicals,
electromagnetic fields and the like should be ignored. On the contrary! For example, the
potential for a nuclear accident may be relatively small, but the amount of harm that
would be caused by one is so large that the possibility of such a mishap must be taken
very seriously. Thus, risk entails not just the probability of something happening, but
also the magnitude of the problem if it were to happen.
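One simple way to capture this probability-times-magnitude idea is expected harm. The sketch below, using entirely made-up numbers, shows two risks with roughly equal expected harm even though one is routine and the other catastrophic:

```python
# Expected harm = probability x magnitude. All numbers here are made up
# purely for illustration.
p_a, harm_a = 0.01, 100            # Risk A: common but minor (1-in-100, 100 units)
p_b, harm_b = 0.000001, 1_000_000  # Risk B: rare but catastrophic

expected_a = p_a * harm_a      # ~1.0 unit of expected harm
expected_b = p_b * harm_b      # ~1.0 unit as well

# The expected values are the same, yet the catastrophic risk typically
# deserves (and gets) far more attention -- magnitude matters on its own.
print(expected_a, expected_b)
```

The arithmetic explains why a rare nuclear accident and a common fender-bender can both demand attention: neither probability nor magnitude alone tells the whole story.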
Frankly, we humans are not particularly good at assessing
risks in the absolute, or when comparing one risk against another. We hope to provide you
with some reasons as to why this is so, and to highlight some of the ways that scientists,
engineers and other professionals are working to improve our ability to assess, manage and
communicate about risk.
How We Perceive Risk...
When trying to evaluate risk, we are typically forced
to deal with a number of quantitative and qualitative tradeoffs. These include:
Time -- Is the potential negative outcome related to
the risk going to occur now, or sometime in the future?
Physical Distance -- Is the risk nearby or far away?
Personal Distance -- Will the risk affect me or my family?
Cost vs. Benefit -- Which is greater, the benefit
derived from taking the risk, or the costs associated with either reducing it or
remediating its effects?
Probability -- Is the risk in question a rare event, or
something that is likely to occur?
Magnitude -- Is the resulting effect of taking the risk
likely to be large or small?
In general, risks that 1.) appear closest to us in terms of
time, space and kinship; 2.) would seem to cost more than their benefits, and 3.) have a
higher likelihood of occurring or causing serious damage, are logically of greater
importance to us. The problem is that many times we are unable to properly apply these
ideas to the risk at hand.
Sometimes, our inability to judge risk is based on factors
outside of our control. For example, it is virtually impossible to tell if:
the information we receive from the media or other sources is
reliable and well-balanced.
a particular expert is in fact well versed in the subject at
hand or held in high esteem by his or her peers.
the information provided is up-to-date and in agreement with
the currently accepted body of knowledge for that field.
Just as important are the ways in which we can fool ourselves
about risk. There are many mental devices we use every day to help with decision-making
tasks. Known as heuristics, many of these rules-of-thumb can actually reduce
our chances of making good decisions.
The reason why our mental faculties often fail us today
probably has to do with the fact that we are still "programmed" or
"wired" to deal with life-threatening situations that occurred eons ago, when
reflexive reactions were critical to survival. Reaction time was at a premium then, so
unconscious or subconscious decision-making was the fastest way to literally remove
oneself from harm's way.
Today, survival is not generally an everyday concern in the
industrialized world, as the number of immediate negative consequences is dwindling.
Thanks to improvements in sanitation, medicine, nutrition and working conditions, we
unconsciously assume survival, rather than consciously expect to fight for it.
Further, issues are far more complex and cerebral today,
meaning that responses to them need to be more measured and thoughtful. Thus, the
heuristics and biases that ensured success in the past can lead to failure in the present.
Recognizing which heuristics interfere with our ability to
make accurate judgements about risk and uncertainty is the first step in overcoming them.
They generally fall into four categories known as availability, overconfidence,
anchoring and adjustment, and representativeness.
Availability
People using the availability heuristic will judge an event as either frequent or
likely to occur if it is easy to imagine or recall, i.e., if it is readily available to
one's memory. If an event is truly frequent, availability can be a very appropriate cue.
However, availability can be affected by many factors other than frequency of occurrence.
The film Independence Day could convince many people that the risk of alien
invasion is either probable or imminent, when in fact the risk is at best nonexistent, or
at worst no different than before the film was released.
Another effect of the availability heuristic is the mistake
of viewing the future only in terms of the immediate past. The way in which we deal with
natural disasters is particularly prone to this problem:
After a flood, hurricane or earthquake, planners are so
conditioned by the immediate past that they have a hard time taking into account that an
even bigger one may occur next time. (This is why engineers have developed the
"100-year" design concept for buildings, dams and bridges.)
After a natural disaster, catastrophic insurance sales
increase dramatically and then fall off over time. Thus, people tend to forget about the
past, while the odds of an event recurring haven't really changed at all. This is similar
to the old adage "out of sight, out of mind."
Finally, the availability heuristic slants our perceptions
regarding the frequency of, and thus our concern for, low probability events which have
been highly sensationalized. Since we read more about airplane crashes, factory explosions
and volcanoes than we do about asthma or old age, we tend to overestimate the probability
of the former and underestimate the probability of the latter. In reality, the number of
people dying from asthma and old age is far greater than the number succumbing to the more
sensational, but far less frequent, occurrences.
Overconfidence
There are many ways in which people tend to be overconfident about their judgements
regarding risk. Importantly, it isn't just laypersons who are at the mercy of these
heuristics; experts can be just as overconfident in their assessments. An understanding
of the heuristics that lead to overconfidence is therefore critical, since we can't
always rely on the experts to overcome their own biases.
People tend to place great faith in their own judgements. For example, studies indicate
that subjects are extremely confident in their assessment of risks caused by pesticides,
dam failures, nuclear accidents, etc., when in fact they overestimate the true risks by
factors as high as 25 to 1. (This effect may be caused in large part by the availability
of news relating to these factors, as discussed above under "Availability.")
People also tend to be sure that bad things "won't
happen to them." Even though exposure to smoking, radon, etc. is the same for one
person as another, each of us tends to see ourselves as being at a lower risk level than
our neighbors. This effect helps explain the shock we feel when something happens to us,
but the lack of shock felt when it happens to "the next guy." (This is the same
type of situation that leads 85 percent of Americans to rate themselves as "better
than average" drivers!)
Similar to our belief that we are somehow different (and better) than others is the fact
that we tend to crave certainty and abhor uncertainty. This heuristic leads us to see
issues as black and white, rather than gray. The result can be an error in either direction:
After a hurricane, residents might convince themselves that such a storm can never happen
again. The result is that proper safety precautions aren't taken when rebuilding.
Conversely, accidental discharges at a paper mill caused by a
malfunctioning piece of equipment may cause people to try and close down the mill in order
to eliminate any more potential hazards. The result may be a total loss of a productive
facility when what was required was the replacement of one particular system with another
of better design.
When forced to rely on judgement, rather than on data, experts are prone to making the
same types of mistakes that the rest of us make. These include overconfidence in current
scientific knowledge, ignoring the role of human errors when designing technological
systems, and not taking into account ways in which humans may interact with technology.
This is not really surprising. Experts, like the rest of us,
are biased towards believing in the value of what they do, and therefore tend to play up
the benefits of their work while playing down its risks.
In 1976, the Teton Dam collapsed. The Committee on Government Operations attributed the
disaster to overconfidence on the part of the construction engineers, who were certain
that they had solved the many serious problems that occurred while building the dam (Slovic, Fischhoff & Lichtenstein).
Thus, it is important when assessing risk to include outside
investigators whose expertise is in the subject of risk, and not in the specific technical
field that is under consideration (e.g., power plant construction, pharmaceutical
development, etc.). For a more detailed review of the use of experts, please see the
module on reasoning and the specific section on experts and authority.
Anchoring and Adjustment
It is not uncommon for us to make estimates by starting with a value we know (the anchor)
and adjusting from that point. The initial value colors our perception of future events
because we judge the probability of the future by looking at what has occurred in the past.
Two of the more common ways in which we adjust our thinking
by referring to a given starting point occur when we try to predict the future by
examining the present. In one scenario, we will predict the success of an entire project
by looking at the probability of success for each of the steps used to construct it. This
is known as a conjunctive event, because it is based upon constructing a
plan, process or product.
The other common scenario occurs when we start with a
completed project, and try to make predictions about the parts by examining the completed
structure. This is known as a disjunctive event, because it is based upon disassembly.
Conjunctive events are based on chains of occurrences that are supposed to produce an
expected result. Examples include all types of planning processes in which there is a
linear progression of steps. Because the steps are independent and the project succeeds
only if every one of them does, the probability of any single step occurring is higher
than the probability that the entire project will be successfully completed. This leads
people to overestimate the probability that the outcome will succeed.
Let's say you're planning a rally. You map out all of the steps required, from forming a
committee, to creating subcommittees, to finding a site, securing a permit, generating
publicity, to holding the rally. You estimate the probability of success for each of the
30 steps to be 97 percent. Thus, you feel confident that the rally will occur.
The problem is that because each of the steps is independent,
the probabilities must be multiplied together, not averaged. Multiplying 97 percent by
itself 30 times produces a true probability of success closer to 40 percent! Thus, rather
than looking as if the rally is virtually certain to succeed, it now appears that the odds
of success are only 40/60.
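The arithmetic behind the rally example is a one-liner; the only assumption is that the 30 steps succeed or fail independently:

```python
# 30 independent steps, each judged 97% likely to succeed.
p_step = 0.97
n_steps = 30

p_rally = p_step ** n_steps   # the whole chain must succeed
print(round(p_rally, 2))      # ~0.40, far from the intuitive 0.97
```

Each additional step multiplies in another 0.97, so the longer the chain of events, the further the overall probability drops below any single step's.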
Disjunctive events work the opposite way. Because the entire system is functioning
properly, we tend to believe that each of the pieces will continue to function properly as
well. Thus, we underestimate the chances for a system failure. Even if the
probability of failure for each system component is slight, the chance for an overall
failure can be high if it contains many parts or components.
Computers, cars, the human body and nuclear reactors are all examples of complex systems
that usually function smoothly because the components all have low breakdown rates. But
because there are so many parts, it's very possible (even probable) that one will
eventually fail. Thus, a portion of the system, or even the entire system may fail,
depending upon the parts involved.
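The same arithmetic, run in reverse, shows the disjunctive effect. With hypothetical numbers -- 1,000 components, each with a 0.1 percent chance of failing -- the system as a whole is more likely than not to suffer some failure:

```python
# 1,000 components, each with a 0.1% chance of failing (hypothetical numbers),
# assumed to fail independently.
p_part = 0.001
n_parts = 1000

# The system avoids failure only if every part does.
p_any_failure = 1 - (1 - p_part) ** n_parts
print(round(p_any_failure, 2))   # ~0.63: more likely than not, something breaks
```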
You'll note that there is a strong similarity between these
two types of heuristics and two fallacies of logic discussed in the reasoning
module -- errors of composition and division.
Errors of Composition apply standards from a small group to a larger one (if a few
ladybugs help control pests, lots of them will work even better). This is similar to the
Conjunctive heuristic in which our confidence in one step in an event leads to
overconfidence that the entire event will occur as planned.
Errors of Division apply standards of the whole system to the
components of that system (if the Rolling Stones are a great band, each of the members
will have great success as a solo act). This is very much like the Disjunctive heuristic,
in which we are overconfident that each of the components will continue to function properly.
It is thus very possible that our tendency to unconsciously
utilize heuristic devices explains many of the fallacies found in the study of logic. From
this perspective, it becomes clear that errors in logic go way beyond our merely
"making a mistake" -- we may actually be programmed to make these errors! Unless we become conscious of our decision-making processes, our reliance
on heuristics will inevitably lead to mistakes in logic and ultimately to our making poor decisions.
Representativeness
The representativeness heuristic is based on the fact that we tend to judge events
by how much they resemble other events with which we are familiar. In so doing, we
ignore relevant facts that should be included in our decision-making process.
People have a tendency to apply what they know about a large population to a small sample,
or vice versa. The effects of these types of judgements can reduce the opportunity to make
good decisions, since they bias our ability to put information in context.
Using small sample sizes in toxicology studies can lead us to believe that a particular
risk is lower than it really is. The reason is that small samples increase the
probability that researchers won't find a particular relationship when in fact one exists.
Thus, by projecting the results of a small sample to the whole population, we underestimate
the size of the risk to which the larger population is being exposed.
Conversely, using small samples can also lead us to believe
that a particular risk is higher than it really is. Positive correlations found in
very small samples are subject to high error factors which, when projected to whole
populations, overestimate the size of risks to which the larger population is being exposed.
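A small simulation makes the first problem concrete. The disease rates and the test below are all hypothetical -- a 10 percent background rate, a 20 percent rate among the exposed, and a simple one-sided two-proportion z-test standing in for the researcher's analysis:

```python
import math
import random

# Hypothetical rates: 10% background disease rate, 20% among the exposed.
# "Detection" here is a one-sided two-proportion z-test at the 5% level.
def study_detects(n, p_control=0.10, p_exposed=0.20, z_crit=1.645):
    x0 = sum(random.random() < p_control for _ in range(n))  # control cases
    x1 = sum(random.random() < p_exposed for _ in range(n))  # exposed cases
    pooled = (x0 + x1) / (2 * n)
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    if se == 0:
        return False
    return (x1 / n - x0 / n) / se > z_crit

random.seed(0)
trials = 2000
power_small = sum(study_detects(25) for _ in range(trials)) / trials
power_large = sum(study_detects(500) for _ in range(trials)) / trials
# The small study misses the real doubling of risk most of the time;
# the large study almost never does.
print(power_small, power_large)
```

Even though the exposed group's risk is genuinely double the background rate, the 25-subject study usually fails to detect it -- exactly the mechanism by which small samples understate real risks.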
Regression Toward the Mean
Simply put, things have a way of moving toward the middle, which means they "regress
toward the mean." This is why the football team that won by a landslide this week
will not do quite as well next week, and why the team they beat will go on to have a
better game next Sunday. This phenomenon has been studied for over 100 years. (Galton)
The effect of ignoring the statistical tendency for things to
change over time is that we attribute results to specific occurrences when in fact the
results would have happened anyway. Thus, when an industry complains that regulations are
hurting productivity, it is very possible that productivity would have fallen by itself.
Conversely, when government claims that pollution controls are working, it is possible
that pollution would have declined as a result of normal fluctuations or because levels
had been abnormally high during the recent past.
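Regression toward the mean is easy to reproduce in a sketch. If each team's weekly score is stable skill plus random week-to-week luck (all distributions here are illustrative), the teams that looked best in week one will, on average, look more ordinary in week two:

```python
import random

# Each team's weekly score = stable skill + random luck that week.
random.seed(1)
n_teams = 10_000
skill = [random.gauss(0, 1) for _ in range(n_teams)]
week1 = [s + random.gauss(0, 1) for s in skill]
week2 = [s + random.gauss(0, 1) for s in skill]

# Take the teams that looked best in week 1 (the top 10%)...
cutoff = sorted(week1)[int(0.9 * n_teams)]
top = [i for i in range(n_teams) if week1[i] >= cutoff]

avg_week1 = sum(week1[i] for i in top) / len(top)
avg_week2 = sum(week2[i] for i in top) / len(top)
# ...and their week-2 average falls back toward the overall mean,
# even though nothing about the teams themselves changed.
print(round(avg_week1, 2), round(avg_week2, 2))
```

Nothing caused the decline: the top performers were partly skilled and partly lucky, and only the skill carries over to the next week.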
Risk analysis has become the umbrella term for the
process of identifying, controlling and discussing risk-related issues. A brief
description of these three components can be found below, while more detailed discussions
will be included in our module on economics, as well as future
modules on public policy and quantitative methods.
Risk Assessment
As defined by the National Research Council, risk assessment is the estimation of the
probability and extent of harm posed by a human health risk. It includes four major components:
- Hazard Identification -- What is the source of the risk?
- Dose-Response (Toxicity) Assessment -- What is the
increased risk per unit of increased exposure to the hazard?
- Exposure Assessment -- What is the amount and type of
exposure that people have with the hazard? How many people are exposed, and for what period of time?
- Risk Characterization -- What is the risk level for
people at varying levels of exposure to the hazard?
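The four steps above can be illustrated with a back-of-the-envelope calculation. Every number below is hypothetical; only the structure -- an exposure assessment yielding a dose, a dose-response assessment yielding a slope factor, and a risk characterization multiplying the two -- reflects the standard arithmetic:

```python
# All numbers are hypothetical, chosen only to show the structure of the
# calculation for a contaminant in drinking water.
concentration = 0.005   # mg/L of the contaminant (hypothetical)
intake_rate = 2.0       # L of water consumed per day
body_weight = 70.0      # kg, a standard adult assumption

# Exposure assessment: average daily dose
dose = concentration * intake_rate / body_weight   # mg/kg-day

# Dose-response assessment supplies a slope factor (hypothetical value)
slope_factor = 0.1      # extra lifetime cancer risk per mg/kg-day

# Risk characterization: excess lifetime risk at this exposure level
risk = dose * slope_factor
print(f"excess lifetime risk: {risk:.1e} (~{risk * 1e6:.0f} per million)")
```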
A terrific place to find additional information on Risk
Assessment is A Journalist's Handbook on Risk Assessment, published by the Foundation
for American Communications (FACS). (No need for us to reinvent the wheel!)
Risk Management
Risk management is a step up from risk assessment, in that it attempts to develop,
evaluate and implement strategies that can reduce risks. For example,
governments may try to manage the risk of auto injuries by educating the public about the
effects of drunk driving or by passing mandatory seat belt laws. We may also develop
personal strategies to reduce risks -- not driving on Friday nights, appointing a
designated driver, driving no faster than the speed limit, etc.
Risk Communication
Risk communication is the active exchange of information among those involved in the management of
health-related risks. Because of differing values, perceptions, ideologies and
assumptions, clearly and accurately communicating information about risk is extremely difficult.
This difficulty is compounded when risk-related issues are
brought into the political and public arenas, where more emotional reactions and
involvement tend to occur. To help risk communicators with their efforts to work
effectively with the public, legislators and the media, the EPA has established what it calls:
The Seven Cardinal Rules of Risk Communication
- Accept and involve the public as a legitimate partner.
- Plan carefully and evaluate your performance.
- Listen to the public's feelings.
- Be honest, open and frank.
- Coordinate and collaborate with other credible sources.
- Meet the needs of the media.
- Speak clearly and with compassion.
A Final Thought
While risk analysis is a quantitative process, the people conducting the analyses are
subject to all of the cognitive and heuristic biases mentioned in this module. Thus, even
the best studies, conducted by the most careful and well-meaning researchers, are subject
to the errors which make all of us human and help put at least a little bit of risk into
everything we do.
Risk is a fact of
life, thanks to a combination of this being an uncertain world and our
less-than-perfect judgements regarding how best to deal with that uncertainty. The
starting point is to recognize why and how we make poor decisions, and to continually
apply this knowledge to our ever-evolving quantitative decision-making tools and techniques.
Here are some ways to think about risk before making a decision:
Risk Analysis Crib Sheet
- Are there clear benefits involved with the risk in question?
- Are the risks being borne by the same people to whom the
benefits accrue? Or do some people bear risks without having a proportional share of the
benefits, or receive benefits without bearing a proportional share of the risk?
- Has the risk in question been put through an assessment process?
- If research is being quoted, was the sample size large
enough to allow for accurate projections?
- Is the source of the risk-related information a credible
professional or government organization?
- What types of cognitive or heuristic biases could be at work?
References On the Net...
Data Base for Screening Benchmarks for Ecological Risk Assessment, U.S. Department of Energy.
A Journalist's Handbook on Risk Assessment, Foundation for American Communications. A great site!
Risk: Health, Safety & Environment, official journal of RAPA, the Risk Assessment and Policy Association. See also their section on Articles and Titles.
Risk Communication Bibliography, Center for Environmental Communications Studies,
University of Cincinnati.
Risk World, a
publication providing information on identification, analysis and management of risks.
References Off the Net...
Acceptable Risk, B. Fischhoff et al
(Cambridge University Press, 1981).
Against the Gods: The Remarkable Story of Risk, Peter L. Bernstein (John Wiley & Sons, 1996).
Applications of Heuristics and Biases to Social Issues, Heath et al (Plenum Press, 1994).
Life at Risk: Playing the Odds in an Uncertain World, Discover
Magazine (May 1996).
Judgment under Uncertainty: Heuristics and Biases, Daniel Kahneman, Paul Slovic and Amos
Tversky, Editors (Cambridge University Press, 1982).
Reducing Toxics: A New Approach to Policy and Industrial Decisionmaking, Robert Gottlieb,
Editor (Island Press, 1995).
The Seven Cardinal Rules of Risk Communication, EPA, OPPE (EPA
Publication No.: EPA 230K92001, May 1992).
Societal Risk Assessment: How Safe is Safe Enough?, R. Schwing and W. Albers Jr., Editors
(Plenum Press, 1980).
True Odds: How Risk Affects Your Everyday Life, James Walsh (Merritt Publishing, 1995).
Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis, M.
Granger Morgan and Max Henrion (Cambridge University Press, 1992).
What Causes Cancer?, D. Trichopoulos, F.P. Li, D.J. Hunter, Scientific
American (September 1996).