The Risk & Uncertainty Conference Amsterdam was the follow-up to the successful meeting in Cambridge, UK, in June 2017. How to manage (uncertain) risks is important for, among other things, the acceptance of new technologies. The aim of the conference was to stimulate high-quality international research on risk and uncertainty communication, and to inform and support policy in this field with the latest evidence-based approaches, for example in communicating to the general public about issues involving risks and uncertainties.
This year’s topic was Contested Facts: how to communicate with the public and other stakeholders about controversial issues where competing information exists about risks and uncertainties.
David will be speaking on Tuesday
Prof. dr. Sir David Spiegelhalter is the Winton Professor for the Public Understanding of Risk and a Fellow of Churchill College at Cambridge University. He works to improve the way in which risk and statistical evidence are taught and discussed in society: he gives many presentations to schools and others, advises organisations on risk communication, and is a regular commentator on risk issues. He was elected FRS in 2005, awarded an OBE in 2006, and was knighted in 2014 for services to medical statistics.
Cecile will be speaking on Wednesday
Prof. dr. Cecile Janssens is research professor of translational epidemiology in the Department of Epidemiology at the Rollins School of Public Health, Emory University, Atlanta, USA. Her research concerns the translation of genomics research to applications in clinical and public health practice, and focuses on the genetic prediction of common diseases such as diabetes, cardiovascular disease and cancer. She also studies how the predictive ability and utility of genetic testing can best be measured.
Michael will be speaking on Wednesday
Prof. dr. Michael Siegrist is Professor of Consumer Behavior at ETH Zurich, Switzerland. The aim of the Consumer Behavior group's research is to better understand individual and organizational decision making. The group is dedicated to helping society make better decisions, particularly with regard to the management of technological, environmental and food hazards. Michael's research focuses on risk perception, risk communication, the acceptance of new technologies, and decision making under uncertainty. He is especially interested in food and consumer behavior.
Marjolein will be speaking on Thursday
Prof. dr. ir. Marjolein B.A. van Asselt holds the Risk Governance chair at Maastricht University, the Netherlands. From January 2008 until July 2014 she was a member of the Scientific Council for Government Policy (WRR). Marjolein is a permanent member of the Dutch Safety Board, whose aim is to improve safety in the Netherlands. Its main focus is on situations in which civilians depend on the government, companies or organisations for their safety.
Tuesday - 12 June 2018

| 12:30 | Registration | ATRIUM [Room: D146] |
| 13:25 – 13:30 | Welcome [Room: D146] | Prof. dr. Daniëlle Timmermans (Professor of Public Health Risk Communication, VU University Medical Center, NL) |
| 13:30 – 13:45 | Opening [Room: D146] | Prof. dr. André van der Zande (General Director, National Institute for Public Health and the Environment (RIVM), NL) |
| 13:45 – 14:30 | Keynote [Room: D146] | Prof. dr. Sir David Spiegelhalter (Winton Professor for the Public Understanding of Risk, Cambridge University, UK) |
| 14:30 – 15:30 | Abstract session 1 [Room: D146] | Information bubbles, trust and policy |
| | Abstract session 2 [Room: A415] | Health risk communication and decision making |
| 15:30 – 16:00 | Coffee break | |
| 16:00 – 17:30 | Symposium 1 [Room: D146] | Fake news, uncertainty, and the vaccination controversy |
| | Symposium 2 [Room: A415] | Uncertainty, food safety and health: communicating to public and policy |
| 17:30 – 19:15 | Welcome reception | |
We are faced with claims of both a reproducibility crisis in scientific publication, and of a 'post-truth' society in which emotional responses trump balanced consideration of evidence. This presents a strong challenge to those who value quantitative and scientific evidence: how can we communicate risks and unavoidable scientific uncertainty in a transparent and trustworthy way, keeping in mind Onora O'Neill's requirements that this evidence is accessible, useable and assessable?
Appropriate communication of established risks has been well-studied, but deeper uncertainty about facts, numbers, or scientific hypotheses needs to be communicated without losing trust and credibility. This is an empirically researchable issue, and we are conducting randomised experiments concerning the impact on audiences of alternative verbal, numerical and graphical means of communicating uncertainty.
Available evidence may often not permit a quantitative assessment of uncertainty, and I will also examine scales being used by 'What Works' centres and other agencies to summarise degrees of 'confidence' in conclusions, in terms of the quality of the research underlying the whole assessment.
Andreas Kappes (1) and Tali Sharot (2)
(1) City, University of London, UK (2) University College London, UK
When people receive information that contests their beliefs, they often neglect the unwanted information and the beliefs remain unchanged. This neglect of disconfirming information has been widely documented in policy-relevant areas ranging from politics to science and education. But what is the role of reasoning in this information neglect? Some models assume that reasoning is the mechanism that enables people to reject unwanted information: reasoning is needed to render unwanted information invalid. Other accounts suggest that people neglect information automatically, and that reasoning plays no role in the neglect of unwanted facts. Here, we present two lines of research suggesting that reasoning is not needed for neglecting unwanted information. In one line of research, focusing on policy-relevant risk estimations, we restricted cognitive resources using a cognitive-load and a time-restriction manipulation, and found that while these manipulations diminished learning in general, they did not diminish the neglect of unwanted information. In another line of research, focusing on financial decisions made while people were in a functional magnetic resonance imaging (fMRI) scanner, we examined whether people show enhanced neural responses to disconfirming opinions (as suggested by reasoning-centric models) or diminished responses (as suggested by models assuming automatic neglect of unwanted information). We found reduced neural tracking of information embedded in disconfirming judgments compared to information embedded in confirming judgments. Both lines of research suggest that reasoning is not needed to neglect disconfirming information, findings that suggest new ways to think about the role of reasoning in approaches to overcoming the neglect of contested facts.
Scott Ferson
University of Liverpool, UK
The modern world brings us communication and connectedness unparalleled in the history of humanity, but we are not prepared for this world. Recent research in risk communication has revealed the underlying psychometric reasons that conventional approaches to science education are insufficient today. We are susceptible to fake news, dubious science, and clever advertisers. Our votes can be swayed by duplicitous politicians and terrorist shocks. More and more adults subscribe to irrational conspiracy theories. The number of people who believe the Earth is flat is growing rapidly via the web. This gullibility becomes dangerous in our hyperconnected world, and it is a problem for our society as serious as illiteracy was in previous centuries. We argue that pervasive uncertainty analysis is one of several strategies that can be useful in cultivating, and teaching, healthy skepticism without falling into the trap of losing trust in everything.
Frederic Bouder
SEROS, University of Stavanger, NOR
Salient world political events, such as the election of President Donald Trump in the US or the UK "Brexit" vote to leave the EU, have triggered new worries about whether we are entering a "post-truth" era in which facts and evidence are becoming less relevant. In this context, there has been much speculation about the impact of "fake news" and other alternative facts on people's perceptions, as well as their implications for the relationship between science and policy. On both sides of the Atlantic, populist claims with little scientific basis have been heard, as well as statements that challenge or even dismiss the prevailing scientific view. Do science and evidence play a dwindling role? Are unscientific claims becoming more widespread and influential? And, as a consequence, are we missing or misrepresenting uncertainties? As part of a fellowship project at the Institute for Advanced Sustainability Studies (IASS), we asked over twenty influential players from policy, science and industry to reflect on the changing nature of the relationship between science and policy in the "post-Trump", "post-Brexit" environment. The results offer novel insights into the changing relationship between scientific uncertainty and risk policy.
Jan Stellamanns (1,2,3), Keshav Dahal (2, 4) and Zita Schillmoeller (1)
(1) Department of Health Sciences, Hamburg University of Applied Sciences (HAW Hamburg), GER (2) School of Engineering and Computing, University of the West of Scotland (UWS), SCO (3) Deutsche Krebsgesellschaft (German Cancer Society), GER (4) Nanjing University of Information Science and Technology (NUIST), Nanjing, CHN
Objective: Population mammography screening programs have been implemented in many countries to detect breast cancer early and to decrease mortality. Harms of screening can result from false-positive results and overdiagnosis. There is debate about the quantification of pros and cons, and the interpretation of the statistical information is challenging for women: low numeracy skills and various types of bias can lead to misinterpretation. Recently, a new decision-aid leaflet was designed for the German program. A web-adapted version of this aid with four information visualizations (InfoVis) was developed and pre-tested in order to evaluate the effects of InfoVis on decision making with qualitative and quantitative methods. Preliminary descriptive results from one third of the envisaged sample are presented. Methods: The target group is women aged 30-49 years. The InfoVis and evaluation measures were pre-tested for usability issues using think-aloud protocols. The quantitative evaluation is carried out as a randomized controlled online experiment with four arms. The main outcome measures are knowledge according to fuzzy-trace theory (gist and verbatim) and informed choice. The control conditions show statistical information as text, with or without static graphs; the InfoVis depict the statistics with interactive features, with or without reflective questions. The modifying effect of numeracy and graph literacy is measured. For the qualitative evaluation, think-aloud protocols and focused interviews are applied. Findings: The InfoVis and verbatim knowledge items were simplified after the pre-test. Ninety participants completed the experiment in three months. The overall knowledge score is M=7.96 (SD=1.51; scale 0-12). Gist knowledge scores are higher (M=3.74; SD=0.92; scale 0-5) than verbatim scores (M=2.57; SD=0.92; scale 0-5). An informed choice was made by 38.9% of participants. The mean scores of the moderating factors are 2.86 (SD=1.6; scale 0-6) for numeracy and 2.54 (SD=0.71; scale 0-4) for graph literacy. Numeracy, overall knowledge and verbatim scores are highest with InfoVis plus reflective questions (M-numeracy=3.25 / M-knowledge=8.14 / M-verbatim=2.93). Gist knowledge and informed choice are highest in the InfoVis-only group (M-gist=3.91 / informed choice=43.5%).
Yasmina Okan (1), Eric R. Stone (2), Wändi Bruine de Bruin (1,3)
(1) Centre for Decision Research, Leeds University Business School, UK (2) Department of Psychology, Wake Forest University, USA (3) Department of Engineering and Public Policy, Carnegie Mellon University, USA
Background/objective: Graphs show promise for improving communications about different types of risks, including health risks and climate risks. Yet graph designs that are effective at meeting one important risk-communication goal (promoting risk-avoidant behaviors) can at the same time compromise another key goal (improving risk understanding). We developed and tested simple bar graphs aimed at accomplishing these two goals simultaneously. Method/approach: Participants (n = 1,116) were recruited online via Mechanical Turk and presented with bar graphs depicting the effectiveness of a hypothetical drug for heart-attack prevention. We manipulated two design features: (1) whether graphs depicted both the number of people affected by a risk and those at risk of harm ("foreground+background") versus only those affected ("foreground-only"), and (2) the presence vs. absence of simple numerical labels above bars. Participants were randomly allocated to one of the four bar-graph displays and answered items assessing risk-avoidant behavior (i.e., willingness to take the drug), risk perceptions, risk understanding, and evaluations of the graphs (i.e., trust and liking). Findings: Foreground-only displays led to larger risk perceptions than foreground+background displays, which in turn resulted in increased willingness to take the drug. This effect was observed independently of the presence of numerical labels. Foreground-only displays also hindered risk understanding relative to foreground+background graphs when labels were not present. However, the presence of labels significantly improved risk understanding, eliminating the detrimental effect of foreground-only displays. Additionally, labels led to more positive user evaluations of the graphs but did not affect risk-avoidant behavior. Discussion/conclusion: Previous research in graphical risk communication has recognized a tension between the goals of improving risk understanding and promoting healthy behaviors. Our findings suggest that interventions designed to promote risk avoidance (e.g., foreground-only graphs) do not necessarily come at the cost of reduced risk understanding. These findings have implications for applied projects in health risk communication and decision support. Our results also contribute to a better understanding of the mechanisms underlying the effects of different graph design features.
Linda Douma (1. 2), Ellen Uiters (2) and Danielle Timmermans (1, 2)
(1) Department of Public and Occupational Health. Amsterdam Public Health research institute, VU University Medical Centre, NL (2) National Institute for Public Health and the Environment (RIVM), NL
Introduction: Population-based colorectal cancer (CRC) screening is widely recommended, as it can reduce the incidence and mortality of CRC. However, it also involves potential harms and risks. Whether the potential benefits outweigh the potential downsides for an individual is a complex issue, which has been the topic of an ongoing debate among experts. It is therefore increasingly seen as important that people make an autonomous and well-informed CRC screening decision. However, relatively little is known about what the eligible CRC screening population believes to be important concerning this decision, and how this relates to the concepts of autonomous and informed decision making. The aim of our study was to answer these questions. Methods: We conducted 27 semi-structured interviews with people from the eligible CRC screening population (eighteen CRC screening participants or people intending to participate, and nine non-participants or people intending not to participate). The general topics discussed concerned how and why people made their CRC screening decision, what they needed to make it, and when they considered that they had made a 'good' decision. Results: Most interviewees viewed a 'good' CRC screening decision as one they stand by, that is based on both reasoning and feeling/intuition, and that is made freely. A majority of non-participants experienced their decision as not fully free, as they felt a certain social pressure to participate. All non-participants viewed being informed and making a well-considered decision as essential; this was the case for only a proportion of participants. Conclusion: Considering our findings, the concept of informed decision making as presently defined and operationalised does not appear to fully accommodate the decision-making process concerning CRC screening in practice, nor the realisation of autonomous decision making. More effort could be made to acknowledge the diverse needs, values and factors involved in deciding about CRC screening participation.
Chair: Dr. Sander van der Linden [University of Cambridge]
Dr. Anne Marthe van der Bles & Dr. Sander van der Linden [University of Cambridge] - The effects of communicating uncertainty about contested facts and numbers
Dr. Bastiaan Rutjens [University of Amsterdam] - Exploring the ideological antecedents of science acceptance and rejection
Drs. Philipp Schmid & Prof. dr. Cornelia Betsch [University of Erfurt] - How to respond to persuasive messages of vaccine deniers in public debates
Discussant: Dr. Sander van der Linden [University of Cambridge]
Chair: tba
Prof. dr. ir. Ingeborg Brouwer [VU University, Amsterdam] - What we know about unhealthy and healthy food, in particular about fatty acids.
Dr. Bernadette Ossendorp [National Institute for Public Health and the Environment (RIVM), Bilthoven] - About advising policy makers on food safety issues
Dr. Barbara Gallani [European Food Safety Authority (EFSA), Parma] - About communicating complex issues in food safety to the lay public
Drs. Tom Jansen [VU University Medical Center, Amsterdam/RIVM, Bilthoven] - About perceptions of risk and uncertainty in relation to food in different societal groups.
Discussant: Prof. dr. Ragnar Löfstedt [King's College London]
Wednesday - 13 June 2018

| 9:00 – 9:45 | Keynote [Room: D146] | Prof. dr. Cecile Janssens (Professor of Translational Epidemiology, Rollins School of Public Health, Emory University, Atlanta, USA) |
| 9:45 – 10:45 | Abstract session 3 [Room: D146] | Perceptions of environmental health risk |
| | Abstract session 4 [Room: A415] | Medical decision making: health information |
| 10:45 – 11:15 | Coffee break | |
| 11:15 – 12:45 | Symposium 3 [Room: D146] | Genetic testing for the public: contested and potentially harmful information? |
| | Symposium 4 [Room: A415] | Climate change: uncertainties and controversies |
| 12:45 – 14:00 | Lunch | |
| 14:00 – 14:45 | Keynote [Room: D146] | Prof. dr. Michael Siegrist (Professor of Consumer Behaviour, ETH Zurich, Switzerland) |
| 14:45 – 15:45 | Abstract session 5 [Room: A415] | Interpreting uncertainty in evidence |
| | Abstract session 6 [Room: D146] | Uncertainty in risk communication |
| 15:45 – 16:15 | Coffee break | |
| 16:15 – 17:15 | Abstract session 7 [Room: D146] | Communicating risk |
| | Abstract session 8 [Room: A415] | Risk assessment, governance & communication |
| 19:00 | Conference dinner | Haarlemmermeerstation, Amstelveenseweg 266, Amsterdam https://goo.gl/maps/eszNa8iUbHL2 |
The direct-to-consumer health industry is booming. Consumers are increasingly buying genetic tests to learn about their health risks, so that they can tailor their efforts to improve their health. This interest in genetic testing is remarkable given that the science is still immature and test results are known to vary between companies. It is said that consumers understand the limitations: they know that genetics is not the whole story and that lifestyle matters too. But is that sufficient? What do consumers understand about the risk information? Do they know how the risks should be interpreted, what the numbers mean, and how they were calculated? In this talk, I will outline what there is to know about genetic risks and share thoughts on the question: would consumers still be interested in these tests if they knew more?
Marion de Vries (1), Liesbeth Claassen (1, 2), Marcel Mennen (1), Aura Timen (1), Margreet te Wierik (1), Danielle Timmermans (1, 2)
(1) Dutch National Institute for Public Health and the Environment (RIVM), Bilthoven, NL (2) Department of Public and Occupational Health, Amsterdam Public Health research institute, VU Medical Center, NL
Communicating about risks is often highly complex. This is even more the case when the risk is contentious and there is a mismatch between the risk portrayed in the media and the risk according to experts. For risk communication to be effective, messages should be well adapted to the risk perceptions and information needs of their public. We studied lay risk perception of a contentious risk issue in the Netherlands, namely practicing sports on fields with rubber granulate infill. During a period of about three months in 2016, there was a recurrent debate in the Dutch media regarding the possible severe health risks of practicing sports on artificial fields with crumb rubber infill. In this period, the National Institute for Public Health and the Environment (RIVM) conducted extensive research and concluded that the health risk of exposure to chemicals in crumb rubber is practically negligible, and that it is therefore safe to practice sports on fields with crumb rubber infill. We assessed lay risk perceptions of practicing sports on fields with rubber granulate infill among Dutch citizens. We did this at two moments in time, before and after publication of the RIVM report, to understand how lay perceptions of the risk of rubber granulate changed over time, after an important turn in the public discussion. We studied the risk perceptions of three specific groups in the population: parents of children who practice sports on fields with rubber granulate, parents of children who do not, and individuals without children. Two surveys (N=1031), one in December 2016 and one in January 2017, were conducted via an online survey panel. The results will be presented.
Margôt Kuttschreuter (1) and Femke Hilverda (2)
(1) University of Twente, Enschede, NL (2) VU University, Amsterdam, NL
Transparency about decision-making processes and the communication of scientific uncertainties have been suggested as means to combat the decline in trust that agencies are currently facing. How the public responds to the communication of uncertainty is, as yet, poorly understood. This understanding is, however, very relevant in the context of the introduction of new food and consumer products, where consumers have to make sense of information they find on websites and social media. These days, in the Netherlands, individuals as well as experts use social media such as Twitter to express their views on risk-related topics and to interact with their followers. A substantial number of such posts express uncertainty, and a public discussion among users is not uncommon. This raises the important question of to what extent the expression of uncertainty on social media affects consumers' sense making of risk-related information. Focusing on the use of nanoparticles in food, an online experiment was conducted to investigate the effects of a simulated online chat in which a particular interaction partner (expert vs. similar other vs. anonymous other) expressed his view on the use of nanoparticles in food products (positive vs. negative vs. uncertain). The study examined the effects of this interaction on cognitions and behaviours that contribute to sense making: risk perception, attitude, information need, taking notice of information, information seeking and information sharing. The study was carried out in the Netherlands on a representative sample of Dutch Internet users (n = 300). Participants were recruited through a research agency and randomly assigned to one of the nine conditions. Analysis showed that the expression of a negative viewpoint led to a higher risk perception and a less positive attitude toward the use of nanotechnology in food. No differences were found between the expression of uncertainty and the expression of a positive viewpoint.
Conclusions on the effects of communicating uncertainty will be drawn and implications for evidence-based communication strategies and related policy will be discussed.
Astrid Martens (1, 2), Marije Reedijk (1), Tjabe Smid (2), Hans Kromhout (1), Pauline Slottje (3), Roel Vermeulen (1) and Daniëlle Timmermans (2)
(1) IRAS, Utrecht University, NL (2) Social Medicine, VU Medical Centre, Amsterdam, NL (3) Department of General Practice and Elderly Care Medicine, VU Medical Centre, Amsterdam, NL
Background: When health effects of actual exposure are deemed unlikely, psychosocial mechanisms such as nocebo responses are suspected to play a role in symptom reporting attributed to environmental exposures. Little is known about differences between environmental exposures regarding the relationship between actual and perceived health risks. Understanding the interplay between perceived exposure and risk perception, actual exposure, and symptom reporting is vital for adequate risk communication strategies. Method: We compared longitudinal associations between modeled exposures, the perceived level of these exposures, and reported symptoms (non-specific symptoms, sleep disturbances, and respiratory symptoms) for three different environmental exposures: radiofrequency electromagnetic fields (RF-EMF), noise, and air pollution. Participant characteristics, perceived exposures and risk, and self-reported health were assessed by a baseline (n=14,829, 2011/2012) and a follow-up (n=7,905, 2015) questionnaire in the Dutch population-based Occupational and Environmental Health Cohort (AMIGO). Environmental exposures were modeled at the home address with spatial models. Cross-sectional and longitudinal regression models were used to examine associations between modeled and perceived exposures and reported symptoms. Findings: Correlations between modeled and perceived exposure were substantial for air pollution (rSp=0.34) and noise (rSp=0.40), but less distinct for RF-EMF (rSp=0.11). For participants with increases in modeled exposure levels between baseline and follow-up, we found a corresponding increase in perceived exposure levels, in particular for noise. We found that perceived exposures were consistently associated with increased symptom scores (respiratory, sleep, non-specific). Modeled exposures, except RF-EMF, were associated with increased symptom scores, but these associations disappeared or were strongly diminished when perceived exposure was accounted for. Symptom attribution to environmental exposures was rare (<2%) and transient. Discussion: Exposure perceptions played an important role in symptom reporting. This also applied to exposure-symptom associations where a role for actual exposure is plausible, for example air pollution and respiratory symptoms. The potential impact of risk information on nocebo responses should be considered. Exposure cues in the environment, such as the visibility of nearby roads, apparently provide the public with information about their relative individual exposure level, as evidenced by the exposure-perception correlations. When designing risk communication strategies, it is important to consider how exposure cues can affect exposure perceptions.
Yasmina Okan (1), Samuel G. Smith (2), Wändi Bruine de Bruin (1,3)
(1) Centre for Decision Research, Leeds University Business School, UK (2) Leeds Institute of Health Sciences, University of Leeds, UK (3) Department of Engineering and Public Policy, Carnegie Mellon University, US
Background/objective: Web-based information about cervical cancer screening plays an important role in women’s screening decisions. We investigated whether UK websites about cervical cancer screening (1) contain key information about benefits and harms of screening, possible screening outcomes, and cervical cancer risks, (2) present probabilistic information using formats recommended in the risk communication literature, (3) include appeals for participation and/or informed decision making. Method/approach: We identified 14 UK websites through Google, by entering common search terms pertaining to cervical cancer screening. We coded website content using an established checklist of 16 items, including benefits and harms of screening, possible screening outcomes, and cervical cancer risks. We also examined whether the format of any probabilistic information involved verbal quantifiers, numbers, or graphical displays, and whether risk reduction was communicated in relative vs. absolute terms. Finally, we coded whether websites included appeals for participation and/or statements concerning informed decision making. Findings: We report on three main findings. First, benefits and harms of screening and possible screening outcomes were mentioned frequently: Discussed benefits focused on risk reduction for developing cervical cancer (in 71% of websites), whereas commonly mentioned harms included overdiagnosis and overtreatment (86%), pain/discomfort related to the cytology test (86%), and the possibility of false negatives (71%). Second, risk reduction for developing cervical cancer was typically presented numerically, in all cases in relative terms. In contrast, quantifications of harms were mostly verbal. Graphical displays were only included in two websites, depicting possible screening outcomes (e.g., likelihood of abnormal results). 
Finally, appeals for participation were present in 86% of websites, with 42% of these also referring to informed choice concerning screening. Discussion/conclusion: The existing heterogeneity in the information included in UK websites about cervical cancer screening may compromise women’s informed decision making. Probabilistic information was often not conveyed in formats known to facilitate understanding. Recommendations to avoid use of verbal quantifiers without numbers, to present absolute rather than relative risk reductions, and to use graphical displays were often not met. Designing websites that adhere to recommendations may support informed cervical screening uptake and avoid potentially harmful misunderstandings.
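The recommendation above, to present absolute rather than relative risk reductions, can be made concrete with a small worked example. The figures below are hypothetical and only illustrate the arithmetic; they are not taken from the websites studied:

```python
def risk_reductions(baseline_risk, screened_risk):
    """Return (absolute, relative) risk reduction for two risk levels,
    both expressed as proportions (e.g. 0.02 for 2%)."""
    arr = baseline_risk - screened_risk   # absolute risk reduction
    rrr = arr / baseline_risk             # relative risk reduction
    return arr, rrr

# Hypothetical example: 2.0% risk without screening vs. 0.6% with screening.
arr, rrr = risk_reductions(0.020, 0.006)
print(f"Absolute risk reduction: {arr:.1%}")  # 1.4 percentage points
print(f"Relative risk reduction: {rrr:.0%}")  # 70% of baseline risk
```

The same intervention can thus be described as "a 70% reduction" or "1.4 in 100 fewer cases"; the relative framing sounds far more impressive, which is why the risk-communication literature recommends reporting the absolute figure.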
Inge Dubbeldam (1), José Sanders (2), Wilbert Spooren (2), Frans J. Meijman (3, 4) and Maaike van den Haak (4)
(1) SAG Health Centres Amsterdam, NL (2) Radboud University Nijmegen, Centre for Language Studies, NL (3) VU Medical Centre, Amsterdam, NL (4) VU University Amsterdam, NL
Background Health consumers are increasingly expected to play an active role with respect to their health and make informed decisions. In processes of treatment decision making as well as other healthcare contexts, this may cause feelings of uncertainty. Whether or not treatment options are evidence-based, full of risk or not at all–fear and doubts may arise that guide information needs and (lack of) decisions based on the information that was found. Therefore, it is essential that health information meets the needs and expectations of health consumers. In this paper, we examine online health information from the perspectives of patients with Uterine Cervical Dysplasia undergoing a frequently performed gynecological procedure at an outpatient clinic of a general, top clinical Dutch hospital. Our aim was to gain more insight in the patterns of health information behavior of actual care users by intensively studying the patients’ information needs, motives and search behavior. Approach The information behavior of 40 women was studied qualitatively by means of interviews and observations in three phases from the start of their medical episode: Emergence or lack of information needs (interviews); Choice for particular information sources or channels (interviews); and Searching for information on the internet (both observations and interviews). Uncertainty with respect to the origin (sexually transmitted disease?) as well as the consequences (cancer? childbearing?) of Uterine Cervical Dysplasia and its treatments appeared to be frequent in patients, and lead to various needs for trusted sources of specific, personal information. Findings The results allowed us to identify five patterns of information behavior: inactive/passive (N=5), sensitive/limited (N=7), selective/problem solving (N=5), constructive/explorative (N=8) and assertive/browsing (N=4). 
These patterns, more refined than the traditional dichotomies of need for cognition/affect or monitoring/blunting, are neither restricted to this particular case nor completely generalizable; rather, they provide a starting point for further studies aimed at their completion, differentiation and application. Discussion In our presentation, we will discuss examples of corresponding recommendations that aim to optimize health information: a) instructing only insofar as needed; b) addressing uncertainty in narratives; c) helping to develop positive expectations; and d) offering distinct patient profiles reflecting particular attitudes and needs.
Tim Rakow (1), Emily Blackshaw (1), Emily Jesper (2), Christina Pagel (3), Mike Pearson (4), David Spiegelhalter (4) and Joanne Thomas (2)
(1) King's College London, UK (2) Sense About Science, UK (3) University College London, UK (4) University of Cambridge, UK
Background/objective The mandated public availability of hospital audit data for children’s heart surgery in the UK creates a challenge for communicating these sensitive data. The diverse audiences interested in these data include: the families of children with congenital heart conditions and their support charities, clinicians, hospital administrators, journalists and policy specialists. However, these data are complex (e.g., include sophisticated risk adjustment of outcomes for hospital case mix, use model-generated prediction intervals) and were not previously presented in a format that could be easily understood and interpreted. Consequently, the audit data were confusing to many, and open to misinterpretation. Following a recent update to the risk-adjustment model used for these data, our goal was to improve the public presentation of these data. Method/approach In a research-led initiative, we created a website to present the UK audit data for children’s heart surgery outcomes in a way that could be readily understood (http://childrensheartsurgery.info). Our inter-disciplinary team included specialists in: data analysis; communicating risk, uncertainty and evidence; web programming; and decision psychology. Each stage of website development was informed by workshops with interested groups (e.g., parents of children with heart conditions, healthcare professionals) to determine which ways of describing and presenting the data were feasible, acceptable or desirable. We ran experiments to test alternative information formats, and to determine which information formats promoted understanding and reduced misinterpretation. Findings Initial experiments showed that some presentation formats encourage inappropriate comparisons. 
For example, presenting outcomes as a survival ratio of observed/predicted survival rates encourages a ratio of ‘1’ to act as a comparison point that exerts undue influence on evaluations, and presenting a hospital’s data alongside that of others encourages comparisons that show insufficient regard for each hospital’s case mix. Subsequent experiments showed how to ameliorate these effects, while also improving comprehension of the data. Discussion/conclusion The website has been well received by a range of interested parties. We have therefore demonstrated the feasibility and value of engaging users when deciding how to communicate risk and uncertainty, and of testing alternative forms of communication that aim to meet those users’ needs and goals.
Chair: Dr. Olga Damman [VU University Medical Center, Amsterdam]
Prof. dr. Martina Cornel [VU University Medical Center, Amsterdam] - Genetic knowledge and informing the public in the era of mainstreaming: do non-experts know how?
Dr. Dirk Jan Boerwinkel [Utrecht University] - Genetic literacy: Learning about uncertainty as an educational aim in secondary education.
Dr. Elisa Garcia [VU University Medical Center, Amsterdam] - The Ethical Burden of Genetic Knowledge for People.
Discussant: Prof. dr. Cecile Janssens [Rollins School of Public Health, Emory University, Atlanta, USA]
Chair: Prof. dr. Ann Bostrom [University of Washington]
Prof. dr. Ann Bostrom [University of Washington] - The social evolution of mental models of a contested risk: climate change
Prof. dr. Gisela Böhm [University Bergen] - Social discourse on the Internet about contested risks: The case of climate change
Dr. Helen Fischer [Heidelberg University] - Science meets policy-makers: Assessing the comprehension of IPCC graphs on climate related health risks.
Discussant: Dr. Bart Verheggen [Amsterdam University College]
Decision-making processes differ considerably across situations and individuals. This is a challenge for the communication of risks and uncertainties. In some instances, people rely on simple heuristics when they make decisions under risk and uncertainty. I will show that simple heuristics may sometimes result in good decisions, in which case additional risk communication is not needed. In other situations, however, relying on simple heuristics results in biased decisions. People may also rely on numerical information to make a decision. How risk information expressed as frequencies should be communicated to lay people has therefore been examined in many studies. Less attention has been paid to possible individual differences in how these communication formats (e.g., pictograms) are processed by different people. I will show that people’s processing of simple graphical displays of risk information depends on their skills and abilities (e.g., numeracy). Most people avoid uncertainties if possible. In many important situations, however, a decision must be made under uncertainty. Even more challenging, when it comes to controversial topics we usually do not have frequentist information but only subjective probabilities. There is a lack of research, however, examining how subjective probabilities should be communicated to the public so that they are correctly understood. Some recommended communication formats may work well for frequencies but not for subjective probabilities.
Sander Clahsen (1,2), Holly van Klaveren (1,3), Theo Vermeire (4), Irene van Kamp (1), Bart Garssen (3), Aldert Piersma (2,5), and Erik Lebret (2,6)
(1) Centre for Sustainability, Environment and Health, National Institute for Public Health and the Environment (RIVM), Bilthoven, NL (2) Institute for Risk Assessment Sciences (IRAS), Utrecht University, NL (3) Department of Speech Communication, Argumentation Theory and Rhetoric, University of Amsterdam, NL (4) Centre for Safety of Substances and Products, National Institute for Public Health and the Environment (RIVM), Bilthoven, NL (5) Centre for Health Protection, National Institute for Public Health and the Environment (RIVM), Bilthoven, NL (6) National Institute for Public Health and the Environment (RIVM), Bilthoven, NL
Background: To what extent do substances have the potential to cause adverse health effects through an endocrine mode of action? This question has elicited intense debate among experts on endocrine-disrupting substances (EDS). The pervasive nature of the underlying disagreements justifies a systematic analysis of the argumentation put forward by the experts involved. Method: Two scientific publications pertaining to EDS science were analyzed using pragma-dialectical argumentation theory (PDAT). PDAT's methodology allowed us to perform a maximally impartial and systematic analysis that remains true to the essence of the texts. Using PDAT, the argumentation contained in both publications was structured, main standpoints and arguments were identified, underlying unexpressed premises were made explicit, and major differences in starting points were uncovered. Findings: The five differences in starting points identified were subdivided into two categories: interpretative ambiguity about the underlying scientific evidence and/or normative ambiguity about differences in values. Two differences in starting points were explored further using existing risk and expert-role typologies. Discussion and conclusion: We emphasize that normative ambiguity, unlike interpretative ambiguity, cannot be resolved by additional research but requires multi-stakeholder approaches. Extrapolation of our findings to the broader discussion on EDS science, and further exploration of the roles of EDS experts in policy processes, should follow from further research.
Megan M. Crawford, Fergus Bolger, Gene Rowe, George Wright, Iain Hamlin, Ian Belton, Alice MacDonald
Strathclyde Business School, Glasgow, UK
This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI) and Intelligence Advanced Research Projects Activity (IARPA)
Background/objective: The Delphi method is a well-established technique for eliciting information from experts and achieving convergence of opinion while reducing biases due to social processes. The technique involves repeated facilitated rounds of individual judgment, followed by revision in the light of feedback on the judgments of other, anonymous, group members. Delphi is principally used to elicit simple judgments, such as forecasts. The current research, however, investigates the effectiveness of Delphi for the solution of complex problems requiring the evaluation of hypotheses from uncertain evidence, such as assessments of foreign and domestic risks to national security. In extending the scope of Delphi to these new problems, we investigate whether the most effective strategy is to facilitate Delphi rounds on the full problem, on decomposed sections of the problem, or on a combination of the two. Problem decomposition is itself a well-known technique: we are particularly interested in decomposition into hypotheses and evidence for representation in Bayesian Networks. Method/approach: In a series of experiments, Delphi and decomposition techniques were used in conjunction to facilitate understanding and problem solving in the realm of intelligence analysis. These experiments used novel methods derived from what we term the Simulated Group Response Paradigm (SGRP). SGRP presents individuals with a set of responses (e.g. judgments) that are purported to (or actually do) come from other participants, thereby simulating participation in a real group. SGRP is efficient because each individual stands in for a group, which has logistical advantages and obviates the need for the large samples typically required to obtain acceptable power in group studies. Further, by pre-collecting responses, researchers can control the characteristics of real responses (e.g. validity and reliability) rather than fabricating them, increasing ecological validity. Findings: Results show that decomposition and Delphi independently lead to improvements in problem-solving performance, relative to appropriate controls. Factors moderating these effects, and the nature of their interaction, are being investigated further. Discussion/conclusion: This research brings new insights into the applicability of both Delphi and decomposition techniques to qualitative problem solving under high risk and uncertainty. Further, SGRP offers a new methodological design for studying group information processing.
Scott Ferson (1), Daniel Rozell (2), and Lev Ginzburg (3)
(1) University of Liverpool, UK (2) Stony Brook University, USA (3) Applied Biomathematics, USA
Background/objective Analyses of the environmental effects of industrial activities are especially contentious, often because they pit short-term economic growth against long-term environmental preservation and human health. Typically, debates revolve around technical issues that are quantified imprecisely. Discerning whether these issues mitigate or overwhelm other considerations is difficult. We sought to determine whether bounding analysis is useful in settling disputes in which stakeholders disagree about which issues are important to the analysis. Method/approach We developed quantitative tools consisting of simple but stochastic models in user-friendly software with graphical outputs. We used these tools to mediate debate among stakeholders in environmental assessments. We review two case studies. The first concerns ecological impacts on fish populations from nuclear plants using water for cooling. The second concerns environmental impacts of fracking by the oil and gas industry. We introduced the quantitative tools to contextualise the possible consequences of theoretical phenomena and unseen effects. Discussants were allowed to suggest assumptions and parameter values in live, collaborative experiments that quantitatively explored the possible magnitudes of the various effects and phenomena under debate. Findings Before using the tools, discussants ceaselessly debated phenomena that could theoretically alter impacts but are hard to assess because of uncertainty about parameters. For example, biologists working for the nuclear industry touted the importance of compensation in fish populations, which reduces population-level effects. But compensation is difficult to characterise with sparse data. Likewise, stakeholders worried about possibly cumulative “unseen” effects of an impact. Such concerns are hard to quantify.
The tools allowed even relatively thorny debates to be at least partially resolved by open discussion and, importantly, quantitative calculations based on bounding assessments. Bounding impacts, including risks expressed probabilistically, can be remarkably effective for settling scientific debates when relevant information is sparse and difficult to obtain. Open access to simple but stochastic models implemented in accessible software can be key to resolving long-standing technical disagreements. Discussion/conclusion These findings have implications for other contentious risk assessments such as those concerning global climate change.
Sarah Jenkins and Adam Harris
Department of Experimental Psychology, University College London, UK
Background/objective: The scientific community have raised concerns over the public’s ability to understand and conceptualise uncertainty (Frewer et al., 2003). Misunderstanding the nature of uncertainty can negatively influence perceptions of a communicator’s credibility, though the extent to which this occurs varies according to communication format (Jenkins, Harris & Lark, 2018). For instance, when an ‘unlikely’ event occurs (an ‘erroneous’ prediction), a communicator who uses a verbal format in their prediction is perceived as less credible and less correct than one who uses a numerical format, regardless of whether this is a point (‘20% likelihood’) or a range (‘10–30% likelihood’). This was attributed to directionality: the term ‘unlikely’ is negatively directional, focusing one’s attention on the event not occurring. However, such an effect of format has not been consistently observed for high-probability expressions. This study aims to identify the mechanisms behind the vulnerability of the verbal format to ‘erroneous’ predictions, disentangling the roles of directionality and probability level. Method/approach: In a pre-registered online study spanning the low- and high-probability domains, we examine whether directionality (i.e. the focus of the expression) is responsible for the vulnerability of the verbal format to ‘erroneous’ predictions. We will compare the effects of using positively directional (e.g. ‘a small chance’) and negatively directional (e.g. ‘doubtful’) verbal probability expressions (VPEs) and a numerical range on perceptions of credibility. To do so, we will measure the reduction in credibility after the event either occurs (low probability) or does not occur (high probability). Measures of perceived correctness and surprise will be taken following the actual outcome. Findings / discussion / conclusion: The current study is one of the first to focus on the consequences of a common misunderstanding of uncertainty.
Our results will highlight the (potentially) far-reaching influence of directionality (or probability) on perceptions of a communicator. By gaining an understanding of the mechanisms behind such an influence, risk communications might be designed so as to ‘protect’ communicators from losses of credibility, which is key to improving the effectiveness of such communications.
Marij Hillen (1), Danielle Blanch Hartigan (2), Marceline van Eeden (3) and Ellen Smets (3)
(1) Department of Medical Psychology, Academic Medical Center – University of Amsterdam, Amsterdam, NL (2) Department of Natural and Applied Sciences, Bentley University, Waltham (MA), USA (3) Department of Medical Psychology, Academic Medical Center – University of Amsterdam, Amsterdam, NL
Background/objective: Although physicians are increasingly expected to share uncertain information with their patients, concerns have arisen about the possible negative effects for patients of discussing uncertainty. How the uncertainty is conveyed and by whom may influence how patients respond to an increased awareness of uncertainty. Therefore, we tested the effect of verbally and non-verbally communicating uncertainty by a male versus a female physician on patients’ reported trust. Additionally, we examined whether patients’ experience of uncertainty mediated the relation between communication of uncertainty and trust. Method/approach: Former cancer patients participated as analogue patients in an experimental video vignettes study. Physician communication behavior (verbal vs. non-verbal and high vs. low uncertainty) and gender (male vs. female) were systematically manipulated while the remainder of the videos was kept constant. Analogue patients afterwards reported trust in the observed physician and experienced uncertainty. Data were analyzed with mediation analysis using the PROCESS macro in SPSS 24 (NY: IBM Corp). Findings: Physician gender did not influence analogue patients’ (N=160) trust or experienced uncertainty overall. If the physician communicated a high degree of uncertainty non-verbally, this led to both weaker trust and higher experienced uncertainty. Verbal communication of uncertainty influenced neither trust nor experienced uncertainty. The effect of high non-verbal uncertainty on trust was mediated by an increase in patients’ experience of uncertainty. Interactions between communication of uncertainty and gender on trust were non-significant, indicating that patients do not perceive communication of uncertainty differently when conveyed by a female versus a male physician.
Discussion/conclusion: Our results suggest that by non-verbally expressing uncertainty, physicians may have a much stronger impact on patients’ evaluation and experience than through their verbal behavior. Alternatively, our manipulation of verbal communication of uncertainty may have been insufficient. The finding that physician gender did not influence trust overall is encouraging. More training with regard to physicians’ non-verbal communication behavior is warranted.
Mette Kjer Kaltoft (1,2), Jesper Bo Nielsen (2), Jack Dowie (3,2)
(1) Odense University Hospital Svendborg, DEN (2) University of Southern Denmark, DEN (3) London School of Hygiene and Tropical Medicine, UK
Background/Objective. In the classic research paradigm, researchers establish both the expected average outcomes from interventions and the surrounding uncertainties, and then hand over the task of dealing with the two outputs (e.g. means and Credible Intervals, CIs) to the decision maker to ‘make up their mind’. The decision maker is left analytically unsupported in making the required mean-uncertainty trade-off. This challenge is faced by individuals in the clinical context and by policy-makers in the group context. Our objective is to support the individual. Method/Approach. In our end-of-life example the patient has two options. Palliative care offers a mean expected life of 6 months, with a 95% CI of 3 to 9 months. An operation offers a mean expected life of 9 months, with a 95% CI of 0 to 18 months. Neither option is therefore dominant on this single criterion: the operation has the better average outcome but greater uncertainty, which overlaps with that of palliation. The suggested method (not particularly novel outside health) involves treating the mean and the Credible Interval as separate criteria in a value-based compensatory Multi-Criteria Decision Analysis (MCDA). The decision maker’s relative importance weights for mean and CI are elicited for each criterion separately, so the resulting trade-offs may differ between, for example, life expectancy and quality of life. The expected value score for each option within the MCDA-based decision support incorporates this trade-off. Findings. The approach can be explored interactively online (https://goo.gl/ZPmWiW). Two additional criteria have been added to the life expectancy criterion, each with its own mean and uncertainty ratings. Discussion/Conclusion. On one view, treating mean and CI as separate criteria involves double counting, the mean calculation having already ‘synthesised’ the uncertainty in the distribution.
In the alternative view taken here, the mean calculation, while reflecting the uncertainty in the distribution, has left it as a challenge to the decision maker. A method to support them in the synthesising task, by bringing it inside a decision support tool, is feasible. Their individual preference-based weightings of mean and uncertainty can be entered into a new expected value calculation, leading to a preference-sensitive result, shareable with any stakeholders.
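The mean-uncertainty trade-off described above can be sketched numerically. The following is a minimal illustration, not the authors' tool (their interactive version is linked in the abstract): the option values come from the abstract's example, while the linear value functions and the importance weights are assumptions chosen purely for demonstration.

```python
# Illustrative MCDA sketch: treat the mean and the credible-interval width
# as two separate criteria and combine them with a weighted sum.
# Value functions and weights below are hypothetical, not the authors'.

def rescale(x, worst, best):
    """Linear value function mapping worst -> 0 and best -> 1."""
    return (x - worst) / (best - worst)

options = {
    # (mean months of life, 95% CI width in months), from the abstract
    "palliative care": (6.0, 9.0 - 3.0),    # 95% CI 3-9 months
    "operation":       (9.0, 18.0 - 0.0),   # 95% CI 0-18 months
}

w_mean, w_ci = 0.7, 0.3  # hypothetical elicited importance weights

scores = {}
for name, (mean, width) in options.items():
    v_mean = rescale(mean, worst=0.0, best=18.0)   # longer life is better
    v_ci = rescale(width, worst=18.0, best=0.0)    # narrower CI is better
    scores[name] = w_mean * v_mean + w_ci * v_ci

best = max(scores, key=scores.get)
```

With these particular weights, the narrower credible interval lets palliative care edge ahead of the operation despite its lower mean; a decision maker who weights the mean more heavily (say 0.9) would see the operation come out on top. That sensitivity to the elicited weights is exactly the preference-sensitive result the abstract describes.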
Felix G. Rebitschek
Harding Center for Risk Literacy, Max Planck Institute for Human Development, Berlin, GER
Objective. Risk communication aims to convey probabilistic information. However, communicating the likelihoods of rare events remains a challenge. The research presented here addresses this challenge by providing coin-toss analogies as neutral risk cues to put likelihoods into perspective. It was hypothesised that providing comparative cues would reduce perceived risk and, with it, fear. It was further hypothesised that the cue effect might be stronger if the rarity of an event is experienced beforehand, by prompting participants to toss coin sequences, rather than merely described through pictures of heads sequences translating likelihoods. Method. Lab experiment 1 (N=210) varied presentation formats with and without coin-toss descriptions in a mixed design. Lab experiment 2 (N=182), also a mixed design, aimed to reinforce the risk cue: before the coin-toss description was presented, participants either tossed coins to generate training sequences or tossed them randomly. Findings. Contrary to my hypothesis, providing risk ladders with coin-toss analogies increased perceived risk. Providing verbal descriptions with coin-toss analogies, however, did not affect risk perception. Dread could not be shown to be affected by coin-toss analogies at all. Compared to truly random coin tossing, sequence tossing actually reduced risk perception. Furthermore, although the rarity of an event was experienced through sequence tossing only, the risk-reducing effect of a subsequent coin-toss analogy in a verbal description held across both toss conditions. Discussion. Risk perception cannot be easily modified by merely presenting sequences of toss outcomes. Such pictorial descriptions can even increase risk perception unnecessarily (e.g. in risk ladders). Although only tossing coins to produce specific sequences seems to reduce risk perception, both random and sequence tossing seem to lay the groundwork for subsequent risk communication with toss analogies.
Limitations of the material and the design but also a follow-up study will be presented.
Leela Velautham and Michael Ranney
University of California, Berkeley, USA
Background / objective: In environments in which competing information about risk and uncertainty exists, misinformation is likely to be prevalent. Interventions and public information campaigns that have sought to help people identify misinformation have often emphasized attention to contextual features of information in the public sphere, such as its source. However, such interventions have often failed to exploit the fact that communication about risk and uncertainty is inherently quantitative in nature. In this study, we attempt to better characterize participants’ use of quantitative reasoning as a means of differentiating between misleading and revealing statistics. Methods: Participants (n=372) were assigned to mixed collections of misleading or revealing statistics about global warming that were either displayed in full (statistics-with-numbers condition) or had their numerical portion blanked out (statistics-initially-blanked condition). They then rated how misleading, pointless or revealing each statistic was. After this initial rating, participants in the statistics-initially-blanked condition were asked to estimate the blanked-out quantity, were given feedback comparing their estimate with the actual value, and were asked to re-rate each statistic as misleading, pointless or revealing. Findings: Participants in the statistics-with-numbers condition were significantly better at differentiating between the revealing and misleading statistics than the initial ratings of those in the statistics-initially-blanked condition. They were also better at discounting the misleading information, showing an overall increase in global warming acceptance following exposure to the mixed set of statistics-with-numbers.
Furthermore, the process of estimating and receiving feedback on their estimates led participants in the statistics-initially-blanked condition to become significantly better at discriminating between the misleading and revealing statistics, compared with their pre-test performance. Conclusion: These results point to numerically-driven inferencing as a useful paradigm for improving differentiation between misleading and revealing statistical information related to risk. The policy implications point to a focus on quantitative literacy as a means of enhancing the informational literacy of the general public.
Fred Goede (1) and Gert Jan Hofstede (2)
(1) North-West University, RSA (2) Wageningen University, NL
A review of 70 years of safety management communication illustrates major changes in how risk is approached. The literature shows emerging trends in the way incidents are investigated. The industries used as the basis for this study have a rich history of incident reports and are mainly high-risk sectors such as aviation, space, shipping, nuclear, chemical and petroleum. The focus of safety incident investigations has shifted over time. A century ago the main focus was on finding the technology failures in order to engineer safer systems. Since the 1970s the human element has been brought into investigations as the leading cause of safety incidents, and the focus has shifted towards communication challenges. In some sectors that remains the solution: finding the guilty party to blame for miscommunicating, which immediately terminates further learning from incidents. The 1990s, however, saw the advent of an integrated approach to complex sociotechnical safety system design. Communication between actors, as well as between people and the technology involved, appears to be a critical component in preventing disasters. Recent approaches incorporate learnings from incidents that arise from an organization’s risk culture as a group, combined with individuals’ risk-taking behavior. Despite efforts to integrate sociotechnical systems into safety incident investigations, a fragmented approach persists to this day, and the most common findings remain a “wrong safety culture” of a group, a “risk-taking individual”, or a “technological, procedural or communication failure” as the cause of these incidents. Once the blame has been suitably allocated, the incident investigation stops abruptly. Yet people died as a result of the incident, and these safety incidents recur.
Data are presented to indicate the changing nature of safety incidents: from communication about regular, smaller-scale incidents to infrequent but larger-scale ones. Preventing safety incidents in large, complex systems makes communication ever more challenging. A more sophisticated approach is needed, one that addresses the interactions between actors and technology in sociotechnical systems. Improved communication in large, complex sociotechnical systems remains one of the few major challenges in averting large safety incidents.
Niels B. Lucas Luijckx, Fred J. van de Brug and Hilde J. Cnossen
The Netherlands Organisation for Applied Scientific Research (TNO)
The prediction of what might happen in the future has always been an attractive proposition. Nowadays, in our information- and data-based society, we are on the threshold of benefiting from smart technologies to detect early signals and clues about the development and evolution of scenarios. Still, the unexpected lies in wait, and the amount and quality of information create an opaque landscape. We need to learn about the uncertainty of early signals. Based on past crises and issues in the food safety area, TNO set out to develop a support system for food stakeholders to identify emerging risks. It was found that, very often, early signals for these crises and issues could have been detected in the scientific domain. Constructing the flow and dependency of events is of course easy in hindsight, but doing so builds the experience and knowledge needed for interpretation and evaluation. We use text mining and natural language processing on large sets of scientific abstracts with the aid of an elaborate food (safety) ontology, enabling the identification of early signals of interest. Semantic searching in a matrix-like way allows the combination of individual signals from different sources to develop a scenario. When the source is peer-reviewed science, the signal could be considered a fact that can be contested, but an indication of the uncertainty will often be included as well. The TNO Emerging Research Identification Support System (ERIS) is used by large food industry and food authority stakeholders. We have been able to support them with knowledge and information management to identify relevant hits for emerging risks or issues. It increases the ability to make risk management decisions and scenarios, potentially influencing the course of the near future, with all the uncertainty involved. The approach taken by TNO for food safety issues is being broadened to other domains, e.g. occupational health and safety, pharmacological target safety assessment, and more.
Preparation for what might come and communicating this internally and, if appropriate, externally will help to make the world more transparent and support prevention: both of issues evolving into crises and of intentional issues, such as fraud.
Erik Løhre (1), Agata Sobków (2), Sigrid Møyner Hohle (1), Karl Halvor Teigen (3, 1)
(1) Simula Research Laboratory, Fornebu, NOR (2) SWPS University, Warsaw, POL (3) University of Oslo, NOR
In an increasingly complex society, lay people often need to rely on expert advice when making decisions about a variety of topics, from personal health (which foods should be included in a healthy diet) and finance (which home insurance is best) to complex global issues such as climate change. But experts do not always agree, and they may change their minds over time. Previous research suggests that people prefer consensus between different experts and consistency from one expert over time. However, to our knowledge, previous studies have not investigated which circumstances lead lay people to perceive experts as being in disagreement or as changing their minds. The current studies focus on numerical probability estimates for future events. We propose that the framing of probability estimates, and upper- vs. lower-bound probability estimates (e.g., "less than 50%" vs. "more than 60%"), may influence perceived disagreement between probabilistic statements. In two experiments, people received two statements about air pollution, either from two different experts or from one expert at two points in time. Probability estimates were either expressed in the same frame (45% vs. 55% probability that the smog will influence citizens' health negatively) or in different frames (45% probability that the smog will influence citizens' health negatively vs. 45% probability that the smog will not influence citizens' health negatively). Although these are logically equivalent ways of expressing two opinions, people perceived the use of different frames as indicating greater disagreement between two experts and a larger change of opinion by a single expert. Two other experiments compared so-called single-bound probability statements that differed either in direction (more than 50% vs. less than 50%) or in level of probability (more than 50% vs. more than 70%).
Statements that differed in direction were perceived as indicating greater disagreement between experts and a larger change of opinion by a single expert, suggesting that verbal terms like "more than" and "less than" contain information about the speaker's opinion over and above the numerical probabilities involved. These findings have clear implications for the communication of risk and uncertainty, in particular for controversial or contested topics.
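The logical equivalence of the two framing conditions rests on the complement rule: a 45% probability that the smog will *not* harm health is the same opinion as a 55% probability that it will. A minimal sketch of that arithmetic (variable names are illustrative, not taken from the study):

```python
# Same-frame condition: both experts use the "will harm health" frame.
same_frame = (0.45, 0.55)

# Different-frame condition: the second expert says "45% probability the
# smog will NOT harm health", which by the complement rule P(A) = 1 - P(not A)
# equals a 55% probability in the "will harm" frame.
different_frame = (0.45, 1 - 0.45)

# Converted to a common frame, the two conditions express identical pairs
# of opinions, yet participants perceived the differently framed pair as
# showing greater disagreement.
assert all(abs(a - b) < 1e-9 for a, b in zip(same_frame, different_frame))
```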
James Reynolds, Kaidy Stautz, Mark Pilling, Sander van der Linden and Theresa Marteau
University of Cambridge
Background: Low public support for government interventions in health, environment and other policy domains can be a barrier to implementation. Communicating evidence of policy effectiveness has been used to raise public support, with mixed results. This review provides the first systematic synthesis of findings from these studies. Method: Eligible studies were randomised experiments that included a control group and an intervention group receiving evidence of a policy’s effectiveness or ineffectiveness at achieving a salient outcome, and that measured support for the policy. Databases searched: ASSIA, EconLit, EMBASE, PsycINFO, Public Affairs Information Service, PubMed, Science Direct, Web of Science, and Open Grey (inception to October 2017). The Effective Public Health Practice Project quality assessment tool for quantitative studies was used to assess study quality and bias. Study characteristics and interventions were coded for variables that might influence changes in support for the policy. Findings: We examined 6,498 abstracts and included 36 studies (N = 31,351) that communicated evidence of effectiveness and nine studies (N = 4,552) that communicated evidence of ineffectiveness. A random-effects meta-analysis revealed that communicating evidence of a policy’s effectiveness increased support for the policy (SMD = .10, 95% CI [.06, .13], p < .0001). This is equivalent to public support increasing from 50% to 54% (95% CI [53%, 55%]). This effect did not significantly vary by i. policy domain (i.e., health, environment, other), ii. the use of uncertainty when describing the effectiveness, or iii. other intervention characteristics. Communicating evidence of a policy’s ineffectiveness decreased support for the policy (SMD = -.13, 95% CI [-.22, -.04], p = .003). This is equivalent to public support decreasing from 50% to 45% (95% CI [41%, 48%]). Most of the included studies were of low quality and at high risk of bias.
Discussion: These findings suggest that public support for policies in a range of domains is sensitive to evidence of their effectiveness as well as their ineffectiveness. Uncertainty remains about the most effective ways of communicating such evidence.
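The translation of an SMD into a change in percentage support can be approximated with the standard normal cumulative distribution function (the Cohen's U3 conversion, assuming normally distributed support and a 50% baseline). This is a reconstruction of a plausible method, not a detail confirmed by the abstract, but it reproduces the reported point estimates:

```python
from statistics import NormalDist

def smd_to_support(smd, baseline=0.5):
    """Convert a standardised mean difference into a support proportion,
    assuming normally distributed support around the given baseline."""
    nd = NormalDist()
    # Probit of the baseline, shifted by the SMD, mapped back to a proportion.
    return nd.cdf(nd.inv_cdf(baseline) + smd)

# Effectiveness: SMD = .10 shifts support from 50% to ~54%.
print(round(smd_to_support(0.10) * 100))   # -> 54

# Ineffectiveness: SMD = -.13 shifts support from 50% to ~45%.
print(round(smd_to_support(-0.13) * 100))  # -> 45
```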
Anine C Riege and Gaelle Vallee-Tourangeau
Kingston University London, UK
Government agencies and experts are tasked with giving the public unambiguous advice on risky situations to safeguard people’s health and welfare, such as advice on vaccination behavior. However, scientific knowledge is often subject to uncertainty and disagreement, and advice can also turn out to be incorrect. The present work explores how communication of scientific uncertainty affects people’s risk perception and judgments of responsibility, blame, and trust after recommendations turn out to be incorrect. An experiment explored the effects of three different ways of communicating unknown (epistemic) risk (no information; experts disagreeing about safety; and experts agreeing the product is safe, not knowing it is unsafe) on risk perception, responsibility, blame, and trust. Participants (N = 270 MTurk workers) were given information about two new products (a medication and a phone) and told the products were safe to use. The two experimental groups were also told that because the products were new, no research currently existed on the products, but that based on knowledge about the individual components in the products experts disagreed [agreed] the product would [not] cause side effects. Participants in the control group were not given this information. After assessing the risk, participants were told an adverse outcome occurred and asked to judge the experts’ responsibility, blame, and trust. We predicted that communicating epistemic uncertainty would increase risk perceptions, particularly if the experts disagreed. However, we also predicted that disagreement would lower judgments of responsibility and blameworthiness, while increasing trust. The results showed that information about epistemic uncertainty made the products appear riskier (F(2) = 4.02, p = .019, ηp2 = .029). However, participants deemed the disagreeing experts less responsible (F(2) = 5.11, p = .007, ηp2 = .036).
For the phone scenario, participants deemed the experts who gave no information more blameworthy (F(2) = 8.4, p < .001, ηp2 = .058) and less trustworthy (F(2) = 4.13, p = .017, ηp2 = .031) compared to the other two conditions (the medication scenario showed a similar pattern but did not reach significance). Communication of unknown risk may thus affect risk perception and responsibility judgments in opposite ways.
Thursday - 14 June 2018 | | |
9.00 – 10.30 | Symposium 5 [Room: D146] | The Media and Contested Science: journalists as knowledge brokers between science and the public? |
10.30 – 11.00 | Coffee break | |
11.00 – 11.45 | Keynote | Prof. dr. ir. Marjolein van Asselt (Professor of Risk Governance, Maastricht University, the Netherlands, and permanent member of the Dutch Safety Board) |
11.45 – 12.45 | Panel discussion | Contested Science: implications for science governance and science communication |
12.45 – 13.00 | Closing and farewell | Prof. dr. Daniëlle Timmermans |
13.00 – 13.30 | Lunch (takeaway) | |
Chair: Prof. dr. Daniëlle Timmermans [VU University Medical Center, Amsterdam, NL]
Prof. dr. Peter Achterberg [Tilburg University, NL] - The Science Confidence Gap
Prof. dr. ir. Ionica Smeets [Leiden University, NL / Science columnist] - Reporting and distorting science in the media: same facts, different interpretations.
Hans van Maanen [Science journalist] - How to help journalists deal with scientific uncertainty and contested facts
Discussant: Prof. dr. ir. Erik Lebret [National Institute for Public Health and the Environment (RIVM), NL]
Uncertain risks are potential hazards that can be imagined but, given the current state of knowledge, can neither be fully refuted nor proven. Such risks are often associated with new technologies and/or unprecedented societal developments. As a consequence, they escape statistics, making the traditional approach of assessing risks in terms of probability obsolete. However, in many assessment practices and governance arrangements the probability-based approach is deeply institutionalized. The basic idea of risk assessment has been to balance unsubstantiated fears, in order to prevent a society anxious about innovation and change, and thereby to facilitate intelligible and responsible decision-making. The inherent uncertainty around such risks challenges governance arrangements that have been helpful in dealing with simple risks but are inherently problematic for uncertain risks. This raises the question of how scientific expertise and risk professionals can still provide much-needed competence and critical intelligence to advance responsible politics, while accepting the limits set by uncertainty. Informed both by her academic research and by her experiences at the Council for Environmental and Spatial Research (RMNO), the Scientific Council for Government Policy (WRR) and the Dutch Safety Board (OvV), prof. Marjolein van Asselt will discuss both the need for and the difficulties in the governance of uncertain risks.
While ‘regular’ science can largely be managed using scientific rules and criteria, contested science needs a different, more governance-like approach. Both the approach and the results of contested science may encounter societal resistance and criticism. Some of this contested science is discussed at this conference. One could argue that, in particular for contested subjects, citizens need to be involved in scientific research at an earlier or later stage, and that science communication is essentially a dialogue of scientists with policy makers and the public. Panel members will discuss what is needed for sustainable science governance. Each panel member will defend a statement illustrating his or her position.
Chair: Prof. dr. Daniëlle Timmermans [VU University Medical Center, Amsterdam, NL]
Prof. dr. ir. Erik Lebret [National Institute for Public Health and the Environment (RIVM), NL]
Prof. dr. Peter Achterberg [Tilburg University, NL]
Prof. dr. Cecile Janssens [Rollins School of Public Health, Emory University, Atlanta, USA]
Prof dr. Daniëlle Timmermans, Amsterdam Public Health research institute, RISC research group, VU University Medical Center & National Institute for Public Health and the Environment (RIVM). Organizer
Prof dr. Ragnar Löfstedt, King's Centre for Risk Management, King’s College London. Founder and co-organizer
Dr. Sander van der Linden, Department of Psychology & Winton Centre for Risk and Evidence Communication, Cambridge University. Founder and co-organizer.
Dr. Liesbeth Claassen, Amsterdam Public Health research institute, RISC research group, VU University Medical Center & National Institute for Public Health and the Environment (RIVM). Local organizer
Dr. Olga Damman, Amsterdam Public Health research institute, RISC research group, VU University Medical Center. Local organizer
Tom Jansen, PhD student, Amsterdam Public Health research institute, RISC research group, VU University Medical Center & National Institute for Public Health and the Environment (RIVM). Local organizer
Van der Boechorststraat 7, 1081 BT Amsterdam