

A New Year’s resolution for science

When problems are complex, expert judgment is appealing. But it can also lead to a sense of exclusion that undermines trust in science by the public.

Dr. Michelle Chester preparing to administer a Pfizer COVID-19 vaccine at Long Island Jewish Medical Center in New Hyde Park, New York, on December 14.
REUTERS/Brendan Mcdermid

2020 was a bad year for science, not in the sense of delivering but in the sense of being accepted. Science delivered spectacularly on COVID-19, identifying its cause and producing treatments and vaccines. Yet a little less than half the popular vote in the 2020 U.S. election went to a president who has been at best ambiguous about the usefulness of science for policy and for the pandemic. Similarly, in the UK, as Brexit draws to a ponderous close, it is worth recalling that it was in part a retreat from the science-driven values and approach of the Eurocrats. So why the distrust of science on both sides of the Atlantic?

At face value, it seems inconceivable that people would not embrace science over made-up explanations, given that it is based on evidence carefully collected under conditions as controlled as possible and has delivered so spectacularly in recent times. One possible explanation is that, in general, the public does not know enough about the process of science to understand its power to deliver reliable solutions. This is an argument for more science education.


But that is unduly dismissive, especially when, thanks to information technology, the achievements of science are there for all to see and enjoy. There is surely more to it, and this, I want to suggest, is fundamentally connected with the populist reaction to elitism.

The populist reaction to elitism

In developing policy to solve problems, most governments are inclined to take the views of experts more seriously than those of the public at large. This has been especially true in the EU. When the problems are complex, as they often are in policymaking, expert judgment is appealing because the experts know more; but it can also lead to a sense of exclusion that undermines the public's trust in science.

Peter Calow
There are a couple of valid reasons for these populist fears about too much technocracy. Both are concerned with bias: One relates to scientists making bad judgments and the other to them making different judgments.

First, individual scientists can be biased and wrong; we are all human. This is not just about fraud. More important, when confronting problems in their early stages, scientists tend to jump to conclusions, and most of these early hypotheses turn out to be, if not wrong, not entirely right. Yet, again for entirely human reasons, individual scientists may want to defend their pet theories. The power of science, the process, is that it recognizes this kind of bias in individual scientists and delivers despite it. It seeks to exclude our personal and political inclinations by insisting not only that the evidence be good, but that it be repeatable under the scrutiny of the community of scientists. The process of science routinely screens out individual bias, and that is why it gets things right. So beware of the advocate who rushes to pronouncements without following due process.

The second point — that the views of scientists, even if not flawed, may well differ from those of the majority of people affected by a policy decision — is more subtle and more difficult to deal with. For complex problems, typical of policymaking, science rarely leads to one answer. What we find out from our experiments generally takes the form of cause-and-effect options. In my own area of risk assessment, for example, I study the adverse effects of different levels of exposure to toxic chemicals, greenhouse gases, viral infections and the like.

Policymaking and values

Making decisions about acceptable options is based on values, and the science is silent on these: What levels of exposure are acceptable, and what should we do about them in terms of managing industrial and agricultural chemicals, moving to renewables, and mandating lockdowns in the face of rising infections? In a democracy, these decisions should reflect the preferences of the public, not those of scientists. When the views of scientists dominate, those views appear political.

What to do? Certainly, as scientists we should resolve to promote more understanding of the process of science. But we should do so with some humility, not on the presumption that if the public knew enough science, its values and preferences would converge on those of the experts. Governments need to resolve to make the process more participatory, for example by promoting citizen forums for discussing options and making choices in open dialogue with experts. These forums should recognize that science is evidence-based and works, but that the policy options suggested by science are many and varied, and that policy decisions need to reflect public preferences.

There are challenges here for all in 2021 and beyond. But, unless we address them, science and scientists may well be considered no less political than the rest in the policy arena and be treated as such by the electorate in future campaigns.

Peter Calow is a professor at the University of Minnesota’s Humphrey School of Public Affairs, in its Science, Technology, and the Environment Area.


