Epistemic quality in public science
Getting enough experts
Some science proceeds in an institutional environment, whether a laboratory or a seminary, and only ventures out in manuscripts read by other institutions. Technology can emerge onto the market without reference to the fundamental research behind its development, sustaining a myth of private-investment-led innovation. But some science is not like that; it is more directly connected to society, from knowledge of our world to public policy decisions. In both cases new knowledge supplants old ideas, showing how our environment is shaped and what our role in our own future might be, which people need to know about. Democracy is based on dialogic participation of the people, arranged around principles of inclusion and epistemic quality, and this ought to extend to such public science.
Sharing science with public audiences of any kind was fairly unusual early in the 20th century, and political decisions rested more on deliberation than expertise. New significance emerged in warfare, and then in health and the environment, yet accounts of that early work focus on individuals, heroes and villains. Physicists and other scientists did remarkable work well beyond nuclear fission experiments, and many accounts reached the public only after the events. Meanwhile, corporate interests resisted evidence of the environmental and health impacts of industrial pollutants and tobacco smoke, and were fought by isolated yet eminent scientists. Much analysis dwells on the political and financial motives of that resistance, but neglects to say what public science should look like, if not uncritical adoption of today’s findings.
Environmental Exposures
The hole in the ozone layer was in the news for some time, but how we knew we could do something about it was obscure even as action was taken. CFCs were identified as the main culprit because they persisted until exposed to certain radiation high in the atmosphere, where they degraded to halogen radicals (really very toxic chemicals). My teacher assured me the safety procedure for a bromine leak in an industrial plant was ‘run’; meanwhile a colleague explained that if you work with hydrofluoric acid (HF) and get some on your skin, it will go straight through and strip the calcium out of your bones. So although the regulation of CFCs was inconvenient, there were other incentives to find alternatives for them in industrial processes, and these are now in use. By contrast, the main active ingredients of tobacco, and their delivery by smoking, were the direct cause of its many health problems, not just lung cancer.
Nuclear power is an example of public response to new technology hindering its adoption, partly tempered by incidents where control was lost. But the actual danger of nuclear accidents, including the estimates from those which happened, is small compared to the dread of the technology. Paul Slovic established that this combination of dread of radiation and unknown threat was what caused the problem, in contrast to the domestic microwave, which became commonplace despite early fears. Indeed the enthusiastic reporting of dreaded cancer epidemiology associated with dietary additives and chemical processing reflects a public fascination with commonplace but unseen risks. The key theme in the public reaction to nuclear power is that the scientists who understood it kept out of the media, leaving unbalanced narratives to play on fear.
Communication Protocol
The UK saw a similar pattern of very confused government statements about the risk of BSE and then about the technology of genetic modification (GM) of food. The point is not that risks are misjudged but that at the outset they are unknown, so public communication sustains a narrative which may be missing some perspectives. There are good ways to approach communication about emergent risk, mostly the avoidance of bad practice:
do not hide information but be open about what is known
do not be complacent but commit to resolving uncertainty
do not be definitive but give interim advice and update it
These are the features of the events which went wrong, but they rely heavily on the main government response being well-informed, and with new technology that is unlikely. Reflecting on the GM debacle, it was clear that some messages were complacent and that specialists had specific uncertainties which needed further research to resolve.
The response which came was to establish the Science Media Centre (SMC), so that those specialist voices could be elicited when they were relevant. Their key contribution was independent comment, because the Mertonian principle of disinterestedness is much more aspiration than practice in science. Everyone is invested in their own interests, their funding, their recognition, their ideas, and everyone finds it hard really to test their own view. Experts add context and an authoritative view, but they also provide scepticism across the whole of a press release, which will contain empirical findings, interpretation and expectations. The system of embargo in publications therefore provides time for these scientists to review the work and give a considered view on its quality and significance.
Limited Perspectives
Including a new perspective which challenges other political views has been successful enough that the independence of scientists is questioned. The SMC discloses competing interests such as funding (both its own and the relevant interests of scientists) and collaborations; however, an example of a more complicated conflict arose with ME (myalgic encephalomyelitis), where scientific integrity was criticised as well as validity. ME has been a difficult condition to rehabilitate, and treatment by behavioural and exercise therapy has been trialled with contested results. The contest proceeded to entrenched positions from patient groups and senior scientists without any clear conclusion on what improves prospects for people suffering. Unfortunately the scientists who did the contested work have subsequently dominated commentary without being clear enough about their personal connections to each other, i.e. compromised independence.
When there are a handful of extremely specialist experts and limited directly relevant evidence, individual comments, and careful management of interests, are key to better dialogue. However, extensive research requires more systematic synthesis, following evaluation by relevant specialists (typically described as ‘peer review’). Although both have protocols to ensure they make a quality assessment of the science, in practice they suffer from a rigid focus on checklists and indicators such as academic credentials (of reviewers and researchers). The sheer volume of papers now published has seen peer review grow less successful at identifying errors (and outright fabrication). More awkwardly, systematic reviews produce answers to questions which can be answered, and these may not be the questions under discussion because they are important to people.
Accountable Evidence
In public debate there is a desire for expertise to be accountable, a frustration Michael Gove MP infamously articulated as people having ‘had enough’ of such behaviour:
I think the people in this country have had enough of experts from organisations with acronyms saying that they know what is best and getting it consistently wrong.
Transparency in making evidence available is one thing; making it evaluable is another, especially for more statistical evidence, where more than provenance is at issue. We need to know about the context, i.e. to distinguish efficacy from effectiveness, and how all of this adds up to what is claimed. That may be tractable for someone doing their own research online, but in debate it is more about exchanging reasons for the views advanced, being prepared to respond to criticism, and updating views as more evidence emerges or errors are found in what has been relied upon.
The foregoing assumes not only that the issues are tractable enough that science can be done, but also that there is time available to do it and to debate its significance. In most emergencies, like a fire, we rely on responders knowing what to do and arriving with the right equipment in time to make us safe. But science is about understanding the unknown, so in an emergency there will be unfamiliar dimensions where the right course of action is not established. Experts can gather, but they also need a process to evaluate the quality of evidence and make recommendations without a reassuring level of certainty. This draws science into a function within the emergency state, outwith duly democratic processes, subjectively excluding inexpert contributions.
Exclusion Principles
Dealing with low-quality interventions is the most contentious difference between scientific and democratic norms. In more technical and abstract science this is not a problem, since such fields are inaccessible to outsiders and have no direct impact on society; it is quite different in medicine and some aspects of consumer technology. There are strict rules around the licensing and advertising of products, and professional regulation of practice, which can still see popular misapprehensions. But the need to respond to and engage with partial understanding, specific concerns and alternative accounts is essential to democratic appreciation. Excluding such interventions out of hand is corrosive of trust, and evidence shows that one-sided presentations which exclude consideration of negative impacts of policy alienate the more sceptical.
An estimable account has emerged that we should ‘inform not persuade’, leaving people with good information to integrate into their own values, context and concerns. This is attractive, but it is naive about the motivation for the dissemination of narratives, which are social more than anything else. Challenging misleading claims and those who make them is acknowledged to be important, but an entirely unfounded community belief is more than a misleading claim. Science is difficult and not knowing means losing control; the consequences of the matters being researched may well be emotive matters of health or future prosperity. Again the advice is to acknowledge how people feel about issues when talking about technical ideas, but a proportionate response may be complex.
Models of Responsibility
Experts have a responsibility to be principled not only in their appraisal of evidence matching their expertise, but also in setting out their review for the public. Where there are few experts they may collaborate in professional scientific bodies - no one person ought to bear the burden alone. But they ought to speak out, not definitively but about the limitations of methods and data, and the relevance of the context in applications of public interest. In this way they can identify other information which enhances the epistemic environment, dispelling the seductive confluence of complexity and concern which official neglect creates an opportunity for. And they should go further, to challenge other views presented for their consideration, recognising that their view is only one and that public discourse is not determined by refutation as science may hope to be.
Coda
This all raises the question of impact on quality. The journalistic aphorism about weather reporting is to ‘look outside and see whether it is raining’ rather than canvassing both sides. Science is empirical, but the data are not readily accessible, so scientists can give their view independently if the question is well constructed. Many others can get to the heart of the question and identify data which may be relevant to the contention. Doing ‘your own research’ has acquired an eccentric overtone in policy communities, but it is as simple as looking for other opinions and evidence. Whether this all adds up to an improvement depends on the participation of those able to discriminate, to compare predictions with outcomes. The biggest inhibitor to quality is not being consistently wrong, but being so woolly as never to be challenged.
