Many governments aspire to evidence-based policy and practice. The predominant approaches to using experts are either to seek the advice of one highly regarded individual or to convene a panel with diverse expertise across relevant areas. For example, quarantine services worldwide routinely rely on expert judgement to estimate the probability of entry, establishment and spread of pests and diseases.

The accuracy and reliability of expert opinions are, however, compromised by a long list of cognitive frailties1. Estimates are influenced by experts' values, mood, whether they stand to gain or lose from a decision2, and by the context in which their opinions are sought. Experts are typically unaware of these subjective influences. They are often highly credible, yet they vastly overestimate their own objectivity and the reliability of their peers3.

Happily, a large and growing body of literature describes methods for engaging with experts that enhance the accuracy and calibration of their judgements4,5. Unhappily, these methods are rarely used to support public policy decisions. All the methods strive to alleviate the effects of psychological and motivational bias; all structure the acquisition of estimates and associated uncertainties; and all recommend combining independent opinions. None relies on the opinion of the best-regarded expert or uses unstructured group consensus.

The cost of ignoring these techniques — of using experts inexpertly — is less-accurate information, and thus more frequent and more serious policy failures.

Knowns and unknowns

For an important subset of questions, expert technical judgements about facts play a part in policy and decision-making. (We appreciate that political context may determine what comprises relevant, convincing evidence, and that such evidence rarely leads directly to policy and action because decision-makers must balance a range of political, social, economic, practical and scientific issues.)

Policymakers use expert evidence as though it were data. So they should treat expert estimates with the same critical rigour that must be applied to data. Experts must be tested, their bias minimized, their accuracy improved, and their estimates validated with independent evidence (see ‘Eight ways to improve expert advice’). That is, experts should be held accountable for their opinions.

Box: Eight ways to improve expert advice

For example, experts who are confident and routinely close to the correct answer provide more information than do experts who regularly deviate from the correct answer or are under-confident. Highly regarded experts are routinely shown to be no better than novices at making judgements. Opinions from more-informative experts can be weighted more heavily, whereas the opinions of some experts may be discarded altogether6. These strategies will illuminate where advice is robust, and where it is contradictory, self-serving or misguided. This will generate evidence for policy decisions that is more relevant and reliable. Roger Cooke, a risk-analysis researcher at Delft University of Technology in the Netherlands, and his colleagues have used this approach effectively to better predict the implications of policy for transport and nuclear-power safety4.
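
To make the weighting step concrete, the sketch below pools expert estimates in proportion to their performance on 'seed' questions with known answers. The expert names, scores, threshold and estimates are invented for illustration, and the simple scoring rule is a crude stand-in for the calibration and information measures of Cooke's classical model, not the model itself.

```python
# Minimal sketch: weight experts by performance on seed questions with
# known answers, discard the least informative, and pool the rest.
# All names and numbers are hypothetical.

# Fraction of true seed values that fell inside each expert's stated
# 90% credible intervals (a crude stand-in for a calibration score).
seed_scores = {"expert_A": 0.9, "expert_B": 0.5, "expert_C": 0.2}

# Each expert's best estimate for the target question, e.g. the
# probability that a pest establishes after entry.
estimates = {"expert_A": 0.12, "expert_B": 0.30, "expert_C": 0.65}

# Discard experts below a performance threshold, then weight the rest
# in proportion to their seed-question scores.
threshold = 0.4
kept = {name: s for name, s in seed_scores.items() if s >= threshold}
total = sum(kept.values())
weights = {name: s / total for name, s in kept.items()}

pooled = sum(weights[name] * estimates[name] for name in weights)
print(f"weights: {weights}")
print(f"performance-weighted estimate: {pooled:.3f}")
```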

Experts themselves must make explicit the sensitivity of their decisions to scientific uncertainty, assumptions and caveats. When invited to advise, they should demand that state-of-the-art techniques are used to harvest and process what they offer. If not, all involved risk wasting substantial time, resources and opportunities.

Importantly, all parties must be clear about what they expect of each other: estimates of facts, predictions of event outcomes or advice on the best course of action. Properly managed, experts can help with the first two. Providing advice assumes that the expert shares the same values and objectives as the decision-makers.

Several processes have been shown to improve experts' performance on estimates of facts and predictions of event outcomes. When using specialists to weigh up the best course of action, the experts themselves, and the policymakers using them, should identify all possible changes, options and threats, a process known as horizon scanning. Policymakers should also list all known candidate solutions, drawing on a wide group of experts and on the literature, to reduce the risk that valuable alternatives are overlooked (known as solution scanning7).

Deliberations should be underpinned by a systematic collection of evidence, an assessment of its relevance, and an identification of the knowledge gaps that might change the decision. The information can be collated in advance so that it is ready for use, rather than assembled in response to each policy need (as is being done for biodiversity8; see www.conservationevidence.com).

Rules of engagement

A few more rules of engagement, routinely applied, will enhance the quality and reliability of expert judgements.

Ensure that questions are fully specified and unambiguous, so that language-based uncertainties do not cloud judgements. For example, a seemingly straightforward question such as 'How many diseased animals are there in the region?' could be interpreted differently by different people. The question does not specify whether to include only those animals that are known to be infectious, or also those that have died, have recovered, are diseased but yet to be identified as such, and so on.

Structured question formats counter tendencies towards over-confidence for individual estimates. For example, Andrew Speirs-Bridge at La Trobe University has shown9 that questions that elicit four responses — upper and lower bounds, a best guess and a degree of confidence in the interval — generate estimates that are relatively well calibrated. Consider a range of scenarios and alternative theories. Ensure that several experts answer each question.
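
As an illustration of the four-step format, the sketch below rescales each expert's interval from their stated confidence to a common level so that answers can be compared and combined. The linear rescaling rule, the 90% target and the elicited numbers are all illustrative assumptions rather than a prescribed procedure.

```python
# Minimal sketch: standardize four-step answers by linearly rescaling each
# expert's interval around their best guess to a common confidence level.
# All elicited values below are hypothetical.

def rescale_interval(lower, upper, best, stated_conf, target_conf=90.0):
    """Stretch or shrink a credible interval around the best guess, from
    the expert's stated confidence to a common target level."""
    factor = target_conf / stated_conf
    return (best - (best - lower) * factor,
            best + (upper - best) * factor)

# (lower bound, upper bound, best guess, stated confidence in %) for a
# question such as 'How many diseased animals are there in the region?'
answers = [
    (200, 900, 400, 80),  # expert A
    (150, 500, 300, 60),  # expert B
    (350, 600, 450, 90),  # expert C
]

for lower, upper, best, conf in answers:
    lo90, hi90 = rescale_interval(lower, upper, best, conf)
    print(f"best={best}, standardized 90% interval: [{lo90:.0f}, {hi90:.0f}]")
```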

Unstructured group interactions are subject to 'groupthink': the group gravitates towards an initial or even an arbitrary estimate; dominant individuals drive the outcome; or individuals are ascribed greater credibility than they deserve because of their appearance, manner or professional background. Structured, facilitated interactions counter factors such as these, which distort estimates3.

Review assumptions, reconcile misunderstandings and introduce new information. Ensure that decision-makers do not rely on experts to choose between options, but rather use an appropriate decision tool. One such tool is structured decision-making, in which experts populate decision tables with estimates of the expected outcomes for each criterion under each policy option, but do not decide the best option.
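
To show what such a decision table might look like, here is a minimal sketch: experts fill in the expected-outcome cells, while the value weights and the final choice remain with the decision-makers. The options, criteria, outcome scores and weights are all hypothetical.

```python
# Minimal sketch of a structured-decision-making table. Experts estimate
# the outcome of each policy option against each criterion (higher =
# better); decision-makers supply the weights and choose the option.
outcomes = {
    "cull":       {"disease_control": 0.8, "cost": 0.3, "animal_welfare": 0.2},
    "vaccinate":  {"disease_control": 0.6, "cost": 0.5, "animal_welfare": 0.8},
    "do_nothing": {"disease_control": 0.1, "cost": 0.9, "animal_welfare": 0.6},
}

# Value weights reflect the decision-makers' objectives, not the experts'.
weights = {"disease_control": 0.5, "cost": 0.2, "animal_welfare": 0.3}

scores = {
    option: sum(weights[c] * value for c, value in criteria.items())
    for option, criteria in outcomes.items()
}
for option, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{option}: {score:.2f}")
```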

For example, an analysis6 of volcano-eruption risks by Willy Aspinall, an Earth scientist at the University of Bristol, UK, used structured interactions. These substantially improved the quality of estimates because well-specified questions were answered by several experts in ways that avoided or mitigated the psychological tripwires that compromise many group interactions.

Similarly, a study led by conservation ecologist Marissa McBride10 at the University of Melbourne in Australia engaged groups of experts remotely, using structured questions and group interactions to assess the conservation status of threatened Australian birds. The team used telephone conferences to outline the context and purpose of the interactions, which was to reassess the International Union for Conservation of Nature's Red List assessments for a suite of threatened species. They then used e-mail to elicit initial judgements, clarify questions, and introduce further data and explanations. Finally, they circulated a spreadsheet and compiled a second round of private, anonymous judgements.
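
Below is a minimal sketch of the final aggregation step in a remote, Delphi-style process of this kind, assuming equal weighting of the anonymous second-round estimates; the expert labels and numbers are invented.

```python
from statistics import mean

# Hypothetical second-round (post-discussion) estimates of the probability
# that a threatened species meets a Red List criterion.
round2 = {"expert_1": 0.70, "expert_2": 0.55, "expert_3": 0.65, "expert_4": 0.60}

# Equal weighting of anonymous final judgements is one simple default;
# reporting the spread alongside the pooled value preserves disagreement.
pooled = mean(round2.values())
spread = max(round2.values()) - min(round2.values())
print(f"pooled estimate: {pooled:.2f}")
print(f"range across experts: {spread:.2f}")
```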

In many cases, incorporating the formal stages described here will improve decision-making. The benefits are substantial improvements in the reliability of judgements, relatively free of personal biases and values. The costs in time and resources are modest.