Scientific research is intimately bound up with risk. For one thing, scientific findings often reveal significant societal risks, such as risks of catastrophic climate change or dangerous disease outbreaks. For another, scientific findings are themselves subject to risk. It is widely acknowledged that scientific evidence typically does not fully confirm or disconfirm scientific hypotheses (about the severity of climate change, the safety of vaccines, and so on). Scientific findings are therefore exposed, to a greater or lesser degree, to some risk of error (John 2017).
The project will investigate the ethics of communicating such scientific risks. In a democratic society, scientific findings that bear on policy-making must somehow be conveyed to policy-makers and the broader public (Christiano 2012; Pamuk 2021). But communicating about the risks associated with scientific research raises challenging ethical questions. The project is therefore seeking researchers interested in investigating ethical questions surrounding the communication of risk, such as (but not necessarily limited to) the following:
a. What role, if any, should values play in the communication of risk? The evaluation of risk depends importantly on value judgments. Pronouncing a medical treatment safe enough, for example, involves a value judgment about how potential risks due to treatment side effects should be weighed against the risks of not deploying the treatment (Douglas 2009). Similarly, the decision to pronounce climate change an emergency depends on judgments about the ethical significance of the risks posed by climate change. Should scientists therefore let values influence their public statements about the risks posed by their findings? Or should scientific communication be “value-free”? And, supposing that it is permissible to appeal to values when communicating about science, whose values should be appealed to? The scientist’s? The public’s? The “right” values?
b. How transparent should the communication of scientific risk be? Philosophers and political theorists have widely argued that scientists should be wholly transparent about the uncertainty, disagreement—and so, the risk of error—attached to their results (Lane et al. 2014; Pamuk 2021). Yet some common scientific practices run counter to this call for transparency, by masking scientific uncertainty or disagreement. For example, dissenting voices on scientific panels often agree to refrain from expressing their dissent in order to allow the panel to speak with one voice (Beatty 2006). Some defend this lack of transparency by pointing to the real possibility that expressions of disagreement or uncertainty will be misunderstood, or worse, strategically amplified and misinterpreted by parties with vested interests in rejecting scientific findings (e.g., climate sceptics) (John 2018). Is it ever permissible for scientific panels to conceal disagreement or uncertainty, and with it the risk of error? Or is doing so unacceptably deceptive, undemocratic, or otherwise wrong?
The project is primarily situated in the fields of political theory, applied ethics, and philosophy of science. But interdisciplinary approaches are welcome, and applications from candidates with backgrounds in other relevant fields (such as media and communications, or medical ethics) are also encouraged.
For further details, please contact Dr Maxime Lepoutre (email@example.com) and Dr Alice Baderin (firstname.lastname@example.org).
Required qualification: Master's degree (preferably in Political Theory, Philosophy, Applied Ethics, or an adjacent area)