
Continued Development of Methods for Characterizing and Ranking Health, Safety and Environmental Risks

In the majority of risk-ranking exercises undertaken to date, the specific methods that have been employed to perform the rankings have received little attention. In the case of many of the EPA-sponsored regional ranking exercises, this has not been a serious problem because a primary objective has been to promote dialogue among stakeholders. However, if the results of risk ranking are to be used by regulatory agencies in support of risk-management decision making, they must be based on normatively justifiable and empirically validated procedures. Under support from a previous NSF grant (SBR-95120232), researchers in the Department of Engineering and Public Policy at Carnegie Mellon have begun the development, empirical testing, and refinement of such a method for ranking health and safety risks. The research proposed here is designed to complete that work, and to extend it to include environmental risks.

Approach:
Risk ranking requires both risk analysis and value-based judgments. In the Carnegie Mellon method, experts provide detailed descriptions of the risks in multi-attribute terms in the form of "risk summary sheets." Representative, jury-like groups of laypersons then perform the ranking using two different methods. The first is a "holistic" ranking procedure in which the participants carefully consider all available information on the risks and then, using procedural guidance and supporting materials developed by the investigators, make overall judgments of their relative concern. The second is a "multi-attribute" (MA) ranking procedure in which participants indicate the relative importance they attach to each of the risk attributes described in the risk summary sheets; implicit ranks can then be inferred from these importance judgments. These two procedures provide alternative "framings" and are used to help participants produce their best-considered judgment. Both individual and group rankings are elicited in order to obtain the benefits of group interaction while allowing individuals to express their personal views, and in order to assess the impact of group interaction.

Although development of the method has built upon theory and empirical results from behavioral social science, a substantial amount of experimentation in a realistic setting has also been necessary. An experimental test bed has been developed that involves 22 health and safety risks in a hypothetical middle school.

In the work now proposed, the investigators plan further experimental studies to complete the development of the method. Studies are planned to refine and evaluate the ranking procedures using the middle-school test bed. These studies will investigate the importance of several components of the risk summary sheets, the importance of focusing on the risk attributes in the individual and group ranking tasks, and the credibility and usefulness of the method and the resulting rankings. To date, the method has involved only risks to health and safety, but risks to the environment are also very important in many real-world contexts. Under this proposal, the researchers plan to conduct new theoretical and empirical work to identify how to characterize ecological and other environmental risks. In addition, they plan to develop materials for health, safety, and environmental risks in a new domain, so that those risks may be compared on a common set of attributes and ranked together in a single exercise. Once development of the method is complete, the investigators will undertake a set of preparatory studies that will allow careful, systematic evaluation and selection of one or more real-world application domains.
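The inference of implicit ranks in the MA procedure can be illustrated with a simple weighted-additive model: each risk receives a score equal to the weighted sum of its attribute levels, and sorting those scores yields an implied ranking. The following sketch uses invented risks, attribute names, scores, and weights purely for illustration; it is not the scoring rule or software actually used in the Carnegie Mellon workshops, where the weights are derived from participants' rankings of attribute importance rather than stated directly as numbers.

```python
import numpy as np

# Hypothetical example: 4 risks scored on 3 attributes (0-1 scale, higher = worse)
# and importance weights for those attributes (summing to 1). All values invented.
attributes = ["expected deaths", "catastrophic potential", "dread"]
scores = np.array([
    [0.9, 0.2, 0.4],   # Risk A
    [0.3, 0.8, 0.7],   # Risk B
    [0.5, 0.5, 0.2],   # Risk C
    [0.1, 0.1, 0.9],   # Risk D
])
weights = np.array([0.5, 0.3, 0.2])

# Weighted-additive score for each risk; a higher score means greater concern.
overall = scores @ weights

# Implicit rank: 1 = risk of greatest concern.
order = np.argsort(-overall)
ranks = np.empty_like(order)
ranks[order] = np.arange(1, len(overall) + 1)

for name, s, r in zip("ABCD", overall, ranks):
    print(f"Risk {name}: score = {s:.2f}, implicit rank = {r}")
```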

Expected Results:
In the middle-school test bed, the researchers expect to find conclusive evidence for the usefulness of the tabular and text components of the risk summary sheets and for the usefulness of the MA procedure in the risk-ranking workshops. The credibility and usefulness of the resulting rankings are expected to depend on the quality of the risk materials, the consistency among rankings produced by different procedures, the satisfaction of the participants with the ranking process and its output, and the level of agreement among different groups that rank the same set of risks. Finally, the researchers expect to demonstrate the applicability of the method to ranking health, safety, and environmental risks in a novel test bed in a new domain.

Metadata

EPA/NSF ID:
R827920
Principal Investigators:
Morgan, Granger
DeKay, Michael
Fischbeck, Paul
Technical Liaison:
Research Organization:
Carnegie Mellon University
Funding Agency/Program:
EPA/ORD/Valuation
Grant Year:
1999
Project Period:
December 15, 1999 to December 14, 2001
Cost to Funding Agency:
$235,504
Project Status Reports:
For the Year 2000

Objective: In the majority of risk-ranking exercises undertaken to date, the specific methods that have been employed to perform the rankings have received little attention. In the case of many of the EPA-sponsored regional ranking exercises, this has not been a serious problem because a primary objective has been to promote dialogue among stakeholders. However, if the results of risk ranking are to be used by regulatory agencies in support of risk-management decision making, they must be based on normatively justifiable and empirically validated procedures. Under support from a previous National Science Foundation (NSF) grant (SBR-95120232), researchers in the Department of Engineering and Public Policy at Carnegie Mellon have begun the development, empirical testing, and refinement of such a method for ranking health and safety risks. The research proposed here is designed to complete that work, and to extend it to include environmental risks.

Progress Summary: Excellent progress has been made on all phases of the work. We have completed a critical review of previous research in environmental psychology, ecology, conservation, and comparative risk assessment on attributes for use in characterizing ecological risks. We completed a systematic process to develop candidate attributes and used them to evaluate a comprehensive, multimedia set of activities and stressors. Using these materials, we then designed and conducted a series of experimental studies with lay respondents. The results, which have been analyzed using factor analysis and multi-dimensional scaling, have allowed us to confirm many previous findings in the literature and to extend those earlier results. We currently are conducting additional studies to determine whether the attribute set accurately represents risk perception for other stakeholder groups (for example, environmental risk assessors and industry experts). This work is providing us with a firm basis on which to develop a set of risk-ranking procedures.

In parallel with these efforts, we defined a new testbed for experimental risk-ranking studies. Our previous testbed (the hypothetical Centerville Middle School; see http://www.epp.cmu.edu/research/risk_ranking.html) allowed us to study only risks to health and safety. We now have defined a new testbed (a hypothetical midwestern county in which the school is located), and we are in the early stages of developing a new set of risk summary sheets (or, in some cases, revising and extending previously developed sheets). Creating these sheets is very labor-intensive because a great deal of risk-specific information must be developed, and the sheets must then be subjected to intensive review and refinement.

In addition to the work on extending our risk-ranking method to cover ecological risks, we also have continued work on refining the previously developed method for ranking health and safety risks. We have continued to run experimental studies with participants in the risk-related short course run by the School of Public Health at Harvard. We have long argued that simply reporting the results of a risk ranking is not sufficient and that decision makers need a "thick description" that highlights sources of agreement and disagreement, notes risks that are likely to be problematic, and so on. We now have developed a draft "thick description" of the results of a set of previous risk-ranking exercises we conducted and have pilot-tested it with three volunteers from one of the Harvard classes. The materials now are being revised in preparation for more extensive study.

Finally, a study is now being developed to examine how different types of risk information affect ranking results. Four conditions are being examined: simple verbal descriptions; anonymous (e.g., Risk A, Risk B) numerical attribute tables; verbal descriptions plus numerical attribute tables; and full risk summary sheets.

Standard methods for developing social science research materials and standard statistical methods are being employed throughout the work to assure the quality of the results. Results have been, and will continue to be, submitted to peer-reviewed journals for publication.

Future Activities: In the next year of project activity, the current work on developing empirically valid attributes for characterizing lay concerns about ecological risks will be completed. A new risk-ranking testbed, based on a hypothetical county in the midwestern United States, will be developed using the ecological risk attributes along with the materials, methods, and insights of our previous studies on ranking risks to health and safety. Risk summary sheets will be developed for use in the testbed. Experimental studies using lay respondents will be undertaken to develop, test, and refine risk-ranking methods that include ecological as well as health and safety risks. Individuals will be asked to rank risks both on their own and as part of a small group. The method will be evaluated on the basis of participants' satisfaction with the ranking results, agreement among individuals' rankings, and agreement among groups of lay participants. In addition, the experimental testbed may be used to assess differences in risk-management decisions between different stakeholder groups.

Publications and Presentations: Total Count: 9
Book Chapter: DeKay ML, Florig HK, Fischbeck PS, Morgan MG, Morgan KM, Fischhoff B, Jenni K. The use of public risk ranking in regulatory development. In: Fischbeck PS, Farrow S, eds. Improving Regulation: Cases in Environment, Health, and Safety. Resources for the Future Press, Washington, DC.
Dissertation/Thesis: Willis HH. Development and evaluation of a method for incorporating public perception in ecological risk prioritization. Ph.D. Thesis, Department of Engineering and Public Policy, Carnegie Mellon University, January 2001.
Journal Article: Florig HK, Morgan MG, Morgan K, Jenni K, Fischhoff B, Fischbeck PS, DeKay ML. A deliberative method for ranking risks. I. Overview and testbed development. Risk Analysis 2001.
Journal Article: Morgan KM, DeKay ML, Fischbeck PS, Morgan MG, Fischhoff B, Florig HK. A deliberative method for ranking risks. II. Evaluation of validity and agreement among risk managers. Risk Analysis 2001.
Journal Article: Morgan MG, Florig HK, DeKay ML, Fischbeck P. Categorizing risks for risk ranking. Risk Analysis 2001;20:49-58.
Presentation: Fischbeck PS, DeKay ML, Fischhoff B, Florig HK, Morgan MG, Palmgren CR, Willis HH. Evaluating a risk-ranking methodology. Paper presented at the Annual Meeting of the Society for Risk Analysis, December 2000.
Presentation: Willis HH, DeKay ML, Fischbeck PS, Fischhoff B, Florig HK, Morgan MG, Palmgren CR. Extension of the Carnegie Mellon risk ranking testbed to include environmental and ecological factors. Paper presented at the Annual Meeting of the Society for Risk Analysis, December 2000.
Presentation: DeKay ML, Willis HH. Public perceptions of environmental risks. Paper presented at the Annual Meeting of the Society for Judgment and Decision Making, November 2000.
Presentation: Fischbeck P. Risk-based priority setting. Presented to the Harvard School of Public Health Short Course on Risk, September 2000.
For the Year 2001

Objective:
The specific methods that have been employed in the majority of risk-ranking exercises undertaken to date have received little attention. In the case of many of the U.S. Environmental Protection Agency (EPA)-sponsored regional ranking exercises, this has not been a serious problem because the primary objective has been to promote dialogue among stakeholders. However, if regulatory agencies are going to use risk-ranking results in support of risk management decisionmaking, those results must be based on normatively justifiable and empirically validated procedures. Researchers in the Department of Engineering and Public Policy at Carnegie Mellon have begun the development, empirical testing, and refinement of such a method for ranking health and safety risks. This research project is designed to complete that work and extend it to include environmental risks.

Progress Summary:
We continued to make excellent progress on all phases of the work and completed two studies that extend previous research in environmental psychology, ecology, conservation, and comparative risk assessment on attributes for use in characterizing ecological risks.

In the first study, we used a systematic process to develop candidate attributes to describe a comprehensive, multimedia set of activities and stressors. Aggregate-level factor analysis of data collected from laypeople confirmed many previous findings from the literature about the perception of ecological risks. However, our results suggest that the factors underlying perception of ecological risks are not uncorrelated, as previously assumed. Moreover, multiple-regression results suggest that aesthetic impacts affect judgments of ecological risk, even when impacts on humans, other species, and habitats are held constant.

In the second study, we used a subset of attributes, activities, and stressors to examine these relationships among members of four stakeholder groups (laypeople, environmentalists, and ecological risk assessors from government and industry). Results indicate that these four groups view the risks similarly (the group-level factor structures are very similar), but the groups differ in how the factors are related to risk judgments (e.g., compared to the other groups, laypeople place more emphasis on aesthetics and scientific understanding than on impacts on species and habitats). Compared to group membership, differences in participants' worldviews played only a minor role in determining the relationships between the factors and risk judgments. Differences between aggregate- and individual-level analyses, differences between hazard- and participant-based analyses, and the common confound between these two distinctions also are considered. This research has provided us with a firm basis on which to incorporate ecological risks and their attributes into our risk-ranking method.

In parallel with these efforts, we defined a new testbed for experimental risk-ranking studies. Our previous testbed allowed us to study only risks to health and safety. We subsequently defined a new testbed and developed a new set of 10 risk summary sheets describing health, safety, and environmental risks in a hypothetical midwestern U.S. county. Copies of these are being posted on the Web site. Creating these sheets was very labor-intensive because they required a great deal of risk-specific information and because they were subjected to intensive review and refinement. After developing the risk summary sheets, we used them in a number of risk-ranking exercises with risk managers and laypeople. Specifically, we used the new summary sheets in short courses at Harvard's School of Public Health and at an annual meeting of the Society of Environmental Toxicology and Chemistry (SETAC), and with three groups of laypeople at Carnegie Mellon. Results indicate that our risk-ranking method can be used successfully to compare health, safety, and environmental risks simultaneously. More specifically, measures of the consistency of the rankings produced by the holistic and multi-attribute procedures, explicit and implicit measures of participants' satisfaction with the process and the resulting rankings, and levels of agreement among individuals and groups are similar to results from our earlier studies that used only health and safety risks. Notably, all groups ranked some ecological attributes above some health and safety attributes, and ranked some purely ecological risks above some purely health and safety risks.

In addition to the work on extending our risk-ranking method to cover ecological risks and attributes, we also have continued work on refining the previously developed method for ranking health and safety risks. We have completed a study examining the relative effects of different types of risk information on ranking results. Individual laypeople ranked our full set of 22 health and safety risks in each of four conditions: simple verbal descriptions only; anonymous (e.g., Risk A, Risk B) numerical attribute tables; verbal descriptions plus numerical attribute tables; and full risk summary sheets. The results were consistent with those from an earlier pilot study in which individual undergraduates ranked 12 of the risks in these conditions. Compared to the condition that included only verbal descriptions, there was more consistency among rankings (and between individual rankings and a ranking based only on expected mortality) in the three conditions that included the numerical attribute table. These results strongly suggest that laypeople are very responsive to quantitative risk information when it is presented in a coherent, easy-to-understand format. Risk judgments were much more idiosyncratic (i.e., there was very little consistency among participants) when such information was omitted from the descriptions of the hazards.
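One common way to quantify the kind of consistency described above is a rank correlation (e.g., Spearman's rho) between each participant's ranking and a benchmark ranking based only on expected mortality, or between pairs of participants. The sketch below uses invented rankings purely to illustrate such a computation; it is not the analysis code used in the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Invented data: rankings of 10 risks (1 = highest concern) from three
# participants, plus a benchmark ranking based only on expected mortality.
participant_ranks = np.array([
    [1, 3, 2, 5, 4, 6, 8, 7, 10, 9],
    [2, 1, 4, 3, 6, 5, 7, 9, 8, 10],
    [1, 2, 3, 4, 5, 7, 6, 8, 9, 10],
])
mortality_rank = np.arange(1, 11)

# Consistency of each participant's ranking with the mortality-based benchmark.
for i, r in enumerate(participant_ranks, start=1):
    rho, _ = spearmanr(r, mortality_rank)
    print(f"Participant {i} vs. mortality ranking: rho = {rho:.2f}")

# Mean pairwise agreement among the participants themselves.
pairs = [(0, 1), (0, 2), (1, 2)]
rhos = [spearmanr(participant_ranks[a], participant_ranks[b])[0] for a, b in pairs]
print(f"Mean inter-participant rho = {np.mean(rhos):.2f}")
```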

We also have reconsidered the relative appeal of several multi-attribute models for inferring risk rankings from participants' rankings of the relative importance of the various attributes (e.g., rank-order-centroid, reciprocal-of-the-rank, and rank-sum models). Although we previously suggested that the reciprocal model worked well for the attributes of health and safety risks, recent papers and feedback at presentations have encouraged us to reconsider the rank-order-centroid model. In addition, the larger number of attributes required to describe ecological risks, combined with the constraints of risk-ranking exercises (e.g., the desire for simple spreadsheet implementation, transparency for the participants, and a more even weighting profile), caused us to reconsider the rank-sum model. In pilot studies with health, safety, and ecological risks, these models implied rankings that were reasonably consistent with participants' holistic rankings; we opted for the simpler rank-sum model in our most recent risk-ranking exercises. We are evaluating these models on the basis of the many risk-ranking exercises that we have conducted with risk managers and laypeople.
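The three weighting schemes named above have standard forms for n attributes with importance ranks r = 1 (most important) through n: rank-sum weights are proportional to n + 1 - r, reciprocal-of-the-rank weights are proportional to 1/r, and rank-order-centroid weights equal (1/n) times the sum of 1/k for k = r through n. The sketch below computes normalized weights under each scheme for a hypothetical five-attribute case; it reflects these general textbook formulas, not the project's spreadsheet implementation.

```python
import numpy as np

def rank_weights(n: int, scheme: str) -> np.ndarray:
    """Approximate attribute weights from importance ranks 1..n (1 = most important)."""
    r = np.arange(1, n + 1)
    if scheme == "rank-sum":
        w = (n + 1 - r).astype(float)
    elif scheme == "reciprocal-of-the-rank":
        w = 1.0 / r
    elif scheme == "rank-order-centroid":
        w = np.array([(1.0 / np.arange(k, n + 1)).sum() / n for k in r])
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return w / w.sum()   # normalize so the weights sum to 1

for scheme in ("rank-sum", "reciprocal-of-the-rank", "rank-order-centroid"):
    print(scheme, np.round(rank_weights(5, scheme), 3))
```

For a given number of attributes, the rank-sum scheme yields the flattest weighting profile of the three, which is consistent with the even-weighting rationale noted above.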

Finally, we have long argued that simply reporting the results of a risk ranking is not sufficient and that decisionmakers need a "thick description" that highlights sources of agreement and disagreement and notes risks that are likely to be problematic. Draft materials are currently under development. These will serve as the basis for an experimental study to determine the nature and structure of the summary materials that people find most useful.

To assure the quality of the results, we will continue to use state-of-the-art methods for developing social science research materials, and for analyzing the results of our experiments. Results have been, and will continue to be, submitted to peer-reviewed journals for publication.

Future Activities:
We will focus on analysis of the data that have been collected and on the publication of those results. We also will continue to explore possibilities for a real-world application, either in North America or in Europe.

Publications and Presentations:
Florig HK, Morgan MG, Morgan KM, Jenni KE, Fischhoff B, Fischbeck PS, DeKay ML. A deliberative method for ranking risks (I): overview and test bed development. Risk Analysis 2001;21(5):913-922.

Morgan KM, DeKay ML, Fischbeck PS, Fischhoff B, Morgan MG, Florig HK. A deliberative method for ranking risks (II): evaluation of validity and agreement among risk managers. Risk Analysis 2001;21(5):923-938.

Supplemental Keywords:
risk ranking, ecological risk, attributes of risk, measurement of ecological risk, risk characterization, risk assessment, ecosystem protection, ecosystem indicators, methods/techniques, exposure, public policy, community-based, preferences, psychological, modeling, survey. Economic, Social, & Behavioral Science Research Program, Ecosystem Protection/Environmental Exposure & Risk, RFA, Scientific Discipline, Applied Math & Statistics, Chemical Mixtures - Environmental Exposure & Risk, Ecological Effects - Environmental Exposure & Risk, Ecological Effects - Human Health, Ecological Indicators, Ecological Risk Assessment, Ecology, Ecology and Ecosystems, Economics & Decision Making, Ecosystem Protection, Ecosystem/Assessment/Indicators, Health Risk Assessment, Psychology, Social Science, Sociology, decision-making, exploratory research environmental biology, decision analysis, deliberative policy, ecological exposure, environmental decision-making, environmental policy, environmental risk assessment, environmental risks, health and safety ranking, human health risk, multi-attribute utility, multi-criteria, multi-objective decision making, risk characterization, risk management, risk ranking, risk reduction, stakeholder, valuation.

Project Reports:
Final
Objective:
The broad objective of this research project was to help government agencies charged with managing health, safety, and environmental risks set priorities among the many risks in their domains of responsibility. Traditionally, risk management agencies have handled their prioritization problems without much input from outside parties. In the last decade, however, agencies have begun to experiment with a variety of processes for gathering input from the lay public, external experts, and other stakeholders. Such ranking efforts represent a sea change in the way that risk management agencies engage the public and integrate expert opinion and lay values.

Summary/Accomplishments:
Although these new attempts at risk ranking are watershed events, the practice is still in an early stage of methodological development. Citing the need for more formal and scientifically sound risk-ranking methods, we proposed and have developed a detailed set of procedures for risk management agencies to use in gathering input from the public for risk ranking. Drawing on principles from decision theory, risk analysis, and risk communication, the method involves five steps: (1) iterative refinement of the set of risks to be ranked and of the set of attributes used to describe those risks; (2) characterization of the risks in terms of each attribute; (3) combination of this information with narrative descriptions; (4) creation of a set of standardized risk summary sheets for use in risk-ranking exercises by jury-like groups of laypeople, whose output policymakers can use to assess public preferences; and (5) preparation of a description of the deliberations and the resulting rankings for use in risk-management decision making.

Although this project builds on theory and empirical results from behavioral social science, there is no way to develop and refine a satisfactory risk-ranking method without a substantial amount of experimentation in a realistic setting. To accomplish this, we developed two experimental test beds that allow us to run large numbers of experiments without a large overhead in "bringing subjects up to speed" on the specifics of the risk domain. During the first phase of our experiments, conducted from 1996 to 2000, we limited the spectrum of risks to hazards that affect only human health and safety. Our test bed for this phase was the fictitious Centerville Middle School, in which students and staff bear risks in 22 different categories, including infectious diseases, sports injuries, asbestos, and hazmat spills on a nearby highway and railroad.

To make this risk domain more concrete for our experimental subjects, we prepared detailed descriptions of the school and surrounding community including a school site layout, floor plans, and a community map. To help our subjects learn about the risks to be ranked, a standardized risk summary sheet was prepared for each of the 22 school-related risks. Each risk in Centerville Middle School was described in terms of 11 different attributes, which are defined in a separate document. Finally, to make it easier for risk-ranking participants to compare risks across multiple attributes, we prepared a large sheet containing 11 lists, ranking all 22 risks on each attribute. The complete set of our test bed materials describing risks at Centerville Middle School is available online at http://www.epp.cmu.edu/research/EPP_risk.html.

The test bed of school risks was used in a series of group experiments to understand and refine our risk-ranking method. Building on the success of these experiments, we mounted a second phase of research to explore the challenge of risk ranking by laypersons when the risk set includes both risks to humans and risks to the environment. Major components of this second phase of our risk-ranking research included the development of an attribute set to describe both human and ecological risks, the creation of a test bed involving a broad set of risks with which to conduct risk-ranking experiments, and the testing and refinement of the risk-ranking method itself. The test bed created for these experiments was set in the fictitious midwestern DePaul County. The test bed included 11 risks (e.g., air pollution from electric power, genetically modified crops) described using 20 attributes. Briefing materials prepared for the DePaul County test bed included a map and description of DePaul County, risk summary sheets describing each of the 11 risks to be ranked, a document defining each of the 20 attributes used to describe these risks, a large sheet ranking the 11 risks along each of the 20 attributes, and a summary table of these rankings. The complete set of our test bed materials describing risks to humans and the environment in DePaul County is available online at http://www.epp.cmu.edu/research/EPP_risk.html.

Evaluation of Materials and Procedures. In a previous study involving college students, we evaluated the materials and procedures used in our risk-ranking method. Specifically, we assessed the contributions of the text and table portions of our risk summary sheets to individuals' rankings of risks to students at a fictitious middle school (K.M. Morgan, 1999; DeKay, et al., 2001). Participants received either text-only materials (a risk label plus a one-paragraph description), table-only materials (the attribute table from the risk summary sheet, with a generic risk label), text-plus-table materials (the risk label, the one-paragraph description, and the attribute table, but none of the additional narrative from the summary sheets), or full summary-sheet materials. We repeated this study with laypeople rather than students, and with the complete set of 22 middle school risks rather than the 12 used in the earlier study.

Evaluation of the Effects of Focusing on the Attributes and Collecting Direct Information on the Ranking Process. In developing and evaluating the procedures described in our overviews (DeKay, et al., 2001; Florig, et al., 2001), we conducted 11 risk-ranking exercises involving 86 laypeople from the Pittsburgh area and the full set of 22 middle school risks. Under the auspices of a Harvard short course, we also conducted abbreviated risk-ranking sessions involving smaller sets of risks, with 218 risk managers in 43 groups. An article based on these Harvard sessions assesses the validity and replicability of the resulting rankings (K.M. Morgan, et al., 2001).

The methods and materials used in these studies evolved as we learned more about how participants ranked the risks (e.g., we used different risk subsets and slightly different attributes and displays). To evaluate the effects of focusing on attributes and to better document the risk-ranking process, we conducted a study (Task CMS 2 in the proposal) with three conditions formed by varying attribute focus. The last 6 of the 11 groups mentioned above (2 groups in each of the 3 conditions), made up of about 6 laypersons each, completed the ranking task for all 22 risks. The three levels of attribute focus were: (1) the control condition (our standard nine-step ranking protocol described in the proposal and in Florig, et al., 2001); (2) holistic ranking only; and (3) multi-attribute focus (conducting the multi-attribute ranking first, without a preceding holistic ranking). Evaluation measures for this study included the consistency, satisfaction, and agreement metrics reported elsewhere (K.M. Morgan, et al., 2001). We also implemented a coding scheme to capture group deliberations. Detailed data were collected during the group risk-ranking and attribute-ranking phases of the process. Coders interjected clarifying questions when the reasons behind a group's ranking decisions were unclear. In addition, we videotaped the risk-ranking sessions and post-ranking explanations, and transcribed relevant explanations for analysis.
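Among agreement metrics of the sort reported in K.M. Morgan, et al. (2001), one standard summary of agreement across participants is Kendall's coefficient of concordance (W), which ranges from 0 (no agreement) to 1 (identical rankings). The sketch below computes W for a small invented rank matrix; it assumes untied ranks and is offered only as an illustration, not as the specific metric used in these studies.

```python
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's W for an (m raters x n items) matrix of untied ranks."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                       # total rank for each item
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()     # spread of the rank totals
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Invented example: three participants ranking five risks (1 = highest concern).
ranks = np.array([
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
])
print(f"Kendall's W = {kendalls_w(ranks):.2f}")   # ~0.84, substantial agreement
```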

Communication of Rankings Results. Materials summarizing the risk-ranking exercises, including both text and graphical summaries designed for review by laypersons, were developed. These materials were pilot-tested with an opportunity sample of participants at the university and then tested with 35 public health officials attending a professional development course at Harvard University. Satisfaction surveys and comprehension tests also were completed.

Development of Health and Safety Risks for a New Test Bed. To test individual and group procedures for ranking health, safety, and environmental risks, we developed a test bed in a different domain. Specifically, we expanded the Centerville Middle School test bed to include other hazards in a fictitious county (DePaul County) in the Midwest, where the school was said to be located. We developed risk summary sheets for a new set of 10 risks in this county. Although some of these were revised versions of the risk summary sheets for the middle school test bed, most were completely new. Comparison of health and safety risks and environmental risks on a common set of attributes required that the number of health and safety attributes be reduced to allow for the inclusion of environmental attributes. For example, some illness and injury attributes were merged into more inclusive ones. Despite this minor consolidation and simplification of the health and safety attributes, the creation of new summary sheets for health and safety risks in a new test bed served as an effective dry run for the construction of material needed for the real-world application.

Replicating and Extending the U.S. Environmental Protection Agency (EPA)-Science Advisory Board (SAB) Study of Environmental Risks. Upon closer inspection of the methods and results of the EPA-SAB study, we decided to develop our sets of environmental risks and attributes on the basis of other research, as described below.

Selection of Attributes for Describing Environmental Risks. Because only a handful of studies have addressed perceptions of ecological hazards (Lazo, Kinnell, and Fisher, 2000; McDaniels, Axelrod, Cavanagh, and Slovic, 1997; McDaniels, Axelrod, and Slovic, 1995, 1996), we conducted additional research to inform the choice of attributes for describing such hazards. For our first study (Willis, DeKay, Fischhoff, and Morgan, in review), we assembled a list of 39 attributes of ecological hazards from the literatures on risk perception, comparative risk assessment, environmental psychology, environmental economics, ecology, and conservation. One hundred twenty-five laypeople evaluated 83 hazards on subsets of this attribute set. We used oblique factor analysis to identify a smaller number of dimensions that could be used to describe these environmental hazards, and assessed the relationships between these dimensions and other variables, including judgments of overall and ecological risk. In a second study reported in the same paper, 30 laypeople each evaluated 34 hazards on 17 attributes. Analyses parallel to those used in the first study were performed at the aggregate level and at the individual-participant level to illustrate the importance of distinguishing between the level of analysis (aggregate versus disaggregate) and the focus of analysis (distinctions among risks versus distinctions among participants).
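As a rough illustration of this analysis strategy (not the study's actual code or data), an oblique exploratory factor analysis of a hazard-by-attribute ratings matrix can be run with standard open-source tools. The sketch below uses the Python factor_analyzer package with an oblimin rotation on randomly generated placeholder data whose dimensions simply mirror those of the first study; the number of factors retained is likewise an arbitrary choice for the example.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Placeholder data: mean ratings of 83 hazards on 39 attributes
# (rows = hazards, columns = attributes), built from a made-up latent structure.
rng = np.random.default_rng(0)
latent = rng.normal(size=(83, 4))                   # four underlying dimensions
mixing = rng.normal(size=(4, 39))                   # attribute loadings on those dimensions
ratings = latent @ mixing + rng.normal(scale=0.5, size=(83, 39))

# An oblique (oblimin) rotation allows the extracted factors to be correlated,
# unlike the orthogonal rotations assumed in much earlier work.
fa = FactorAnalyzer(n_factors=4, rotation="oblimin")
fa.fit(ratings)

loadings = fa.loadings_             # attribute-by-factor pattern loadings
print(np.round(loadings[:5], 2))    # loadings for the first five attributes
```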

In our third study in this series (Willis and DeKay, in preparation), we investigated variability in ecological risk perceptions by surveying members of four stakeholder groups commonly involved in environmental policy debates. Fifty-six individuals representing government, industry, environmentalist, and general public groups evaluated 34 environmental hazards on 17 attributes, and also evaluated the riskiness and acceptability of each hazard. In addition, we assessed participants' worldviews with modified versions of Schwartz's personal norms and awareness of consequences scales (Stern, Dietz, and Black, 1986) and Dunlap's New Ecological Paradigm scale (Dunlap, Van Liere, Mertig, and Jones, 2000). Factor analysis and regression models were used to evaluate the underlying dimensions of ecological risk perception and the relationships between these dimensions, group membership, worldviews, and judgments of overall and ecological risk.

Risk-Ranking Exercises Involving Health, Safety, and Environmental Risks. Our most recent study (Willis, et al., in review) reports an extension of our risk-ranking method to incorporate ecological risks and their attributes. On the basis of our earlier research, we identified 20 relevant attributes for describing health, safety, and environmental hazards in standardized risk summary sheets (8 health and safety attributes and 12 environmental attributes). In a series of 3 ranking sessions, 23 laypeople ranked 10 such hazards in a fictitious midwestern county using both holistic and multi-attribute ranking procedures. The results of these risk-ranking sessions were compared to results from sessions involving only health and safety risks in the middle school test bed.

Documentation and Application. All of this work is being published in the refereed literature. We have provided details to a number of practitioners such as Sarah Thorne and Gordon Butte of Decision Partners. We are looking for an opportunity to assist in a real-world application.

Educational Activities. We have used risk-ranking materials as the basis for discussions and assignments on numerous occasions.

For example, in-class risk-ranking exercises and homework assignments were implemented in the following five different courses (they were given nine times): Environmental Decision Making in 1999 and 2001; Decision Analysis and Decision Support Systems in 1999-2002; Quantitative Methods in Policy Analysis in 2001; Theory and Practice of Policy Analysis in 1999 and 2001; and Policy Analysis and Regulation in 2002.

Publications and Presentations:
Florig HK, Morgan MG, Morgan KM, Jenni KE, Fischhoff B, Fischbeck PS, DeKay ML. A deliberative method for ranking risks (I): overview and test bed development. Risk Analysis 2001;21(5):913-922.
Morgan KM, DeKay ML, Fischbeck PS, Fischhoff B, Morgan MG, Florig HK. A deliberative method for ranking risks (II): evaluation of validity and agreement among risk managers. Risk Analysis 2001;21(5):923-938.
Morgan MG, Florig HK, DeKay ML, Fischbeck P. Categorizing risks for risk ranking. Risk Analysis 2001;20:49-58.
Supplemental Keywords:
exposure, public policy, community-based, preferences, psychological, modeling, survey, Economic, Social, & Behavioral Science Research Program, Ecosystem Protection/Environmental Exposure & Risk, RFA, Scientific Discipline, Applied Math & Statistics, Chemical Mixtures - Environmental Exposure & Risk, Ecological Effects - Environmental Exposure & Risk, Ecological Effects - Human Health, Ecological Indicators, Ecological Risk Assessment, Ecology, Ecology and Ecosystems, Economics & Decision Making, Ecosystem Protection, Ecosystem/Assessment/Indicators, Health Risk Assessment, Psychology, Social Science, Sociology, decision-making, exploratory research environmental biology, decision analysis, deliberative policy, ecological exposure, environmental decision-making, environmental policy, environmental risk assessment, environmental risks, health and safety ranking, human health risk, multi-attribute utility, multi-criteria, multi-objective decision making, risk characterization, risk management, risk ranking, risk reduction, stakeholder, valuation

 
Relevant Websites:
http://www.epp.cmu.edu/research/EPP_risk.html
