The objective of this research project was to encourage government agencies charged with managing health, safety, and environmental risks to set priorities for addressing the many risks in their domain of responsibility. Traditionally, risk management agencies have handled their prioritization problems without much input from outside parties. In the last decade, however, agencies have begun to experiment with a variety of processes for gathering input from the lay public, external experts, and other stakeholders. Such ranking efforts represent a sea change in the way that risk management agencies engage the public and the way that agencies integrate expert opinion and lay values.
Although these new attempts at risk ranking are watershed events, the practice is still in an early stage of methodological development. Citing the need for more formal and scientifically sound risk-ranking methods, we proposed and have developed a detailed set of procedures for risk management agencies to use in gathering input from the public for risk ranking. Drawing on principles from decision theory, risk analysis, and risk communication, the method involves five steps: (1) iterative refinement of the set of risks to be ranked and a set of attributes used to describe those risks; (2) characterization of risks in terms of each attribute; (3) combination of this information with narrative descriptions to create a set of standardized risk summary sheets; (4) use of these summary sheets in risk-ranking exercises by jury-like groups of laypeople, whose rankings policymakers can use to assess public preferences; and (5) preparation of a description of the deliberations and the resulting rankings for use in risk management decision making.
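The decision-theoretic core of the multi-attribute portion of the method can be illustrated with a simple weighted additive value model. The sketch below is only illustrative: the risk names, attribute names, scores, and weights are hypothetical, not data or parameters from the project.

```python
# Illustrative sketch: ranking risks with a weighted additive value model,
# one standard decision-theoretic way to combine scores on multiple
# attributes. All names, scores, and weights below are hypothetical.

def rank_risks(scores, weights):
    """Rank risks by weighted additive value (higher value = more serious).

    scores  -- dict mapping risk name -> dict of attribute -> score in [0, 1]
    weights -- dict mapping attribute -> importance weight (summing to 1)
    """
    value = {
        risk: sum(weights[a] * attrs[a] for a in weights)
        for risk, attrs in scores.items()
    }
    # Sort from highest (most serious) to lowest weighted value.
    return sorted(value, key=value.get, reverse=True)

# Hypothetical attribute scores for three risks on three attributes.
scores = {
    "asbestos":        {"deaths": 0.2, "illness": 0.3, "dread": 0.8},
    "sports injuries": {"deaths": 0.1, "illness": 0.7, "dread": 0.2},
    "bus accidents":   {"deaths": 0.6, "illness": 0.2, "dread": 0.5},
}
weights = {"deaths": 0.5, "illness": 0.3, "dread": 0.2}

print(rank_risks(scores, weights))
# -> ['bus accidents', 'asbestos', 'sports injuries']
```

In the actual method, participants see the attribute information on standardized summary sheets and deliberate in groups rather than applying a fixed formula; the model above only shows how attribute weights can induce an explicit ranking.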
Although this project builds on theory and empirical results from behavioral social science, there is no way to develop and refine a satisfactory risk-ranking method without a substantial amount of experimentation in a realistic setting. To accomplish this, we developed two experimental test beds that would allow us to easily run large numbers of experiments without a large overhead in "bringing subjects up to speed" on the specifics of the risk domain. During the first phase of our experiments, conducted from 1996 to 2000, we limited the spectrum of risks to hazards that affected only human health and safety. Our test bed for this phase was the fictitious Centerville Middle School, in which students and staff bear risks in 22 different categories, including infectious diseases, sports injuries, asbestos, and hazardous material spills on a nearby highway and railroad.
To make this risk domain more concrete for our experimental subjects, we prepared detailed descriptions of the school and surrounding community including a school site layout, floor plans, and a community map. To help our subjects learn about the risks to be ranked, a standardized risk summary sheet was prepared for each of the 22 school-related risks. Each risk in Centerville Middle School was described in terms of 11 different attributes, which are defined in a separate document. Finally, to make it easier for risk-ranking participants to compare risks across multiple attributes, we prepared a large sheet containing 11 lists, ranking all 22 risks on each attribute. The complete set of our test bed materials describing risks at Centerville Middle School is available online at http://www.epp.cmu.edu/research/EPP_risk.html.
The test bed of school risks was used in a series of group experiments to understand and refine our risk-ranking method. Building on the success of these experiments, we mounted a second phase of research to explore the challenge of risk ranking by laypersons when the risk set includes both risks to humans and risks to the environment. Major components of this second phase of our risk-ranking research included the development of an attribute set to describe both human and ecological risks, the creation of a test bed involving a broad set of risks with which to conduct risk-ranking experiments, and the testing and refinement of the risk-ranking method itself. The test bed created for these experiments was set in the fictitious midwestern DePaul County. The test bed included 11 risks (e.g., air pollution from electric power, genetically modified crops) described using 20 attributes. Briefing materials prepared for the DePaul County test bed included a map and description of DePaul County, risk summary sheets describing each of the 11 risks to be ranked, a document defining each of the 20 attributes used to describe these risks, a large sheet ranking the 11 risks along each of the 20 attributes, and a summary table of these rankings. The complete set of our test bed materials describing risks to humans and the environment in DePaul County is available online at http://www.epp.cmu.edu/research/EPP_risk.html.
Evaluation of Materials and Procedures. In a previous study involving college students, we evaluated the materials and procedures used in our risk-ranking method. Specifically, we assessed the contributions of the text and table portions of our risk summary sheets to individuals' rankings of risks to students at a fictitious middle school (K.M. Morgan, 1999; DeKay et al., 2001). Participants received either text-only materials (a risk label plus a one-paragraph description), table-only materials (the attribute table from the risk summary sheet, with a generic risk label), text-plus-table materials (the risk label, the one-paragraph description, and the attribute table, but none of the additional narrative from the summary sheets), or full-summary-sheet materials. We repeated this study with laypeople rather than students, using the complete set of 22 middle school risks rather than the 12 used in the earlier study.
Evaluation of the Effects of Focusing on the Attributes and Collecting Direct Information on the Ranking Process. In developing and evaluating the procedures described in our overviews (DeKay et al., 2001; Florig et al., 2001), we conducted 11 risk-ranking exercises involving 86 laypeople from the Pittsburgh area and the full set of 22 middle school risks. Under the auspices of a Harvard short course, we also conducted abbreviated risk-ranking sessions involving smaller sets of risks, with 218 risk managers in 43 groups. An article based on these Harvard sessions assesses the validity and replicability of the resulting rankings (K.M. Morgan et al., 2001).
The methods and materials used in these studies evolved as we learned more about how participants ranked the risks (e.g., we used different risk subsets and slightly different attributes and displays). To evaluate the effects of focusing on attributes and to better document the risk-ranking process, we conducted a study (Task CMS 2 in the proposal) with three conditions formed by varying attribute focus. The last 6 of the 11 groups mentioned above (2 groups in each of the 3 conditions), each made up of about 6 laypersons, completed the ranking task for all 22 risks. The three levels of attribute focus were: (1) control condition (our standard nine-step ranking protocol described in the proposal and in Florig et al., 2001); (2) holistic ranking only; and (3) multi-attribute focus (conducting the multi-attribute ranking first, without a preceding holistic ranking). Evaluation measures for this study included the several consistency, satisfaction, and agreement metrics reported elsewhere (K.M. Morgan et al., 2001). We also implemented a coding scheme to capture group deliberations. Detailed data were collected during the group risk-ranking and attribute-ranking phases of the process. Coders interjected clarifying questions to the group when the reasons behind ranking decisions were unclear. In addition, we videotaped the risk-ranking sessions and postranking explanations, and transcribed relevant explanations for analysis.
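One simple way to quantify consistency between, say, a group's holistic ranking and its multi-attribute ranking is a rank correlation. The sketch below uses Spearman's rho; this is only an illustration of the general idea, with made-up rankings, and is not one of the project's actual metrics (those are reported in K.M. Morgan et al., 2001).

```python
# Illustrative sketch: measuring agreement between two rankings of the same
# risks with Spearman's rank correlation. The rankings below are made up.

def spearman_rho(ranking_a, ranking_b):
    """Spearman rank correlation between two rankings of the same items.

    Each ranking is a list of item names ordered from most to least serious.
    Returns rho in [-1, 1]; 1 means identical orderings.
    """
    n = len(ranking_a)
    pos_b = {item: i for i, item in enumerate(ranking_b)}
    # Sum of squared rank differences, then the standard Spearman formula.
    d2 = sum((i - pos_b[item]) ** 2 for i, item in enumerate(ranking_a))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

holistic = ["asbestos", "bus accidents", "influenza", "sports injuries"]
multiattribute = ["bus accidents", "asbestos", "influenza", "sports injuries"]

print(spearman_rho(holistic, multiattribute))
# -> 0.8 (the top two risks swap places; everything else agrees)
```

The same function can compare rankings produced by different groups, or a group ranking against the median of its members' individual rankings.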
Communication of Rankings Results. We prepared materials summarizing the risk-ranking exercises, including both text and graphical summaries designed for review by laypersons. These materials were pilot tested with opportunity participants at the university, and revised versions were then tested with 35 public health officials attending a professional development course at Harvard University, who also completed satisfaction surveys and comprehension tests.
Development of Health and Safety Risks for a New Test Bed. To test individual and group procedures for ranking health, safety, and environmental risks, we developed a test bed in a different domain. Specifically, we expanded the Centerville Middle School test bed to include other hazards in a fictitious county (DePaul County) in the Midwest, where the school was said to be located. We developed risk summary sheets for a new set of 10 risks in this county. Although some of these were revised versions of the risk summary sheets for the middle school test bed, most were completely new. Comparison of health and safety risks and environmental risks on a common set of attributes required that the number of health and safety attributes be reduced to allow for the inclusion of environmental attributes. For example, some illness and injury attributes were merged into more inclusive ones. Despite this minor consolidation and simplification of the health and safety attributes, the creation of new summary sheets for health and safety risks in a new test bed served as an effective dry run for the construction of material needed for the real-world application.
Replicating and Extending the U.S. Environmental Protection Agency (EPA)-Science Advisory Board (SAB) Study of Environmental Risks. Upon closer inspection of the methods and results of the EPA-SAB study, we decided to develop our sets of environmental risks and attributes on the basis of other research, as described below.
Selection of Attributes for Describing Environmental Risks. Because only a handful of studies have addressed perceptions of ecological hazards (Lazo, Kinnell, and Fisher, 2000; McDaniels, Axelrod, Cavanagh, and Slovic, 1997; McDaniels, Axelrod, and Slovic, 1995, 1996), we conducted additional research to inform the choice of attributes for describing such hazards. For our first study (Willis, DeKay, Fischhoff, and Morgan, in review), we assembled a list of 39 attributes of ecological hazards from the literatures on risk perception, comparative risk assessment, environmental psychology, environmental economics, ecology, and conservation. One hundred twenty-five laypeople evaluated 83 hazards on subsets of this attribute set. We used oblique factor analysis to determine a smaller number of dimensions that could be used to describe these environmental hazards, and assessed the relationships between these dimensions and other variables, including judgments of overall and ecological risk. In a second study reported in the same paper, 30 laypeople each evaluated 34 hazards on 17 attributes. Analyses parallel to those used in the first study were performed at the aggregate level and at the individual-participant level to illustrate the importance of distinguishing between the level of analysis (aggregate versus disaggregate) and the focus of analysis (distinctions among risks versus distinctions among participants).
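The dimension-reduction idea behind these attribute studies can be sketched as follows. The studies used oblique factor analysis; the simplified sketch below only shows the extraction step, counting how many eigenvalues of the attribute correlation matrix exceed 1 (the Kaiser criterion). The hazard-by-attribute ratings are simulated, constructed so that six attributes reflect two underlying dimensions.

```python
import numpy as np

# Simplified sketch of the dimension-reduction step in an attribute study.
# The project's analyses used oblique factor analysis; here we only count
# dominant dimensions in the correlation matrix of simulated ratings.

rng = np.random.default_rng(0)

# Simulated ratings: 30 hazards rated on 6 attributes, built so that the
# attributes cluster into two underlying dimensions.
severity = rng.normal(size=(30, 1))
controllability = rng.normal(size=(30, 1))
ratings = np.hstack([
    severity + 0.3 * rng.normal(size=(30, 3)),         # 3 "severity" attributes
    controllability + 0.3 * rng.normal(size=(30, 3)),  # 3 "control" attributes
])

# Correlation matrix of the attributes, and its eigenvalues in descending order.
corr = np.corrcoef(ratings, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser criterion: retain dimensions with eigenvalue > 1.
n_dimensions = int(np.sum(eigvals > 1))
print(n_dimensions)
# -> 2
```

In a full analysis, the retained factors would then be rotated (obliquely, allowing correlated factors) and interpreted from their attribute loadings; regression models can relate the resulting dimensions to judgments of overall and ecological risk, as in the studies described above.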
In our third study in this series (Willis and DeKay, in preparation), we investigated variability in ecological risk perceptions by surveying members of four stakeholder groups commonly involved in environmental policy debates. Fifty-six individuals representing government, industry, environmentalist, and general public groups evaluated 34 environmental hazards on 17 attributes, and also evaluated the riskiness and acceptability of each hazard. In addition, we assessed participants' worldviews with modified versions of Schwartz's personal norms and awareness of consequences scales (Stern, Dietz, and Black, 1986) and Dunlap's New Ecological Paradigm scale (Dunlap, Van Liere, Mertig, and Jones, 2000). Factor analysis and regression models were used to evaluate the underlying dimensions of ecological risk perception and the relationships between these dimensions, group membership, worldviews, and judgments of overall and ecological risk.
Risk-Ranking Exercises Involving Health, Safety, and Environmental Risks. Our most recent study (Willis, et al., in review) reports an extension of our risk-ranking method to incorporate ecological risks and their attributes. On the basis of our earlier research, we identified 20 relevant attributes for describing health, safety, and environmental hazards in standardized risk summary sheets (8 health and safety attributes and 12 environmental attributes). In a series of 3 ranking sessions, 23 laypeople ranked 10 such hazards in a fictitious midwestern county using both holistic and multi-attribute ranking procedures. The results of these risk-ranking sessions were compared to results from sessions involving only health and safety risks in the middle school test bed.
Documentation and Application. All of this work is being published in the refereed literature. We have provided details to a number of practitioners such as Sarah Thorne and Gordon Butte of Decision Partners. We are looking for an opportunity to assist in a real-world application.
Educational Activities. We have used risk-ranking materials as the basis for discussions and assignments on numerous occasions.
For example, in-class risk-ranking exercises and homework assignments were implemented in five different courses, across nine offerings in total: Environmental Decision Making in 1999 and 2001; Decision Analysis and Decision Support Systems in 1999-2002; Quantitative Methods in Policy Analysis in 2001; Theory and Practice of Policy Analysis in 1999 and 2001; and Policy Analysis and Regulation in 2002.
Publications and Presentations:
Florig HK, Morgan MG, Morgan KM, Jenni KE, Fischhoff B, Fischbeck PS, DeKay ML. A deliberative method for ranking risks (I): overview and test bed development. Risk Analysis 2001;21(5):913-922.
Morgan KM, DeKay ML, Fischbeck PS, Fischhoff B, Morgan MG, Florig HK. A deliberative method for ranking risks (II): evaluation of validity and agreement among risk managers. Risk Analysis 2001;21(5):923-938.
Morgan MG, Florig HK, DeKay ML, Fischbeck P. Categorizing risks for risk ranking. Risk Analysis 2000;20(1):49-58.