PhD Research

Environmental decision making relies heavily on the judgements of experts. These judgements often concern system variables that are severely uncertain and data poor, which makes them challenging to produce, and it leaves room for biases to creep in. Experts can interpret data, information and uncertainty in ways that align with their values or motivations. They are prone to the same cognitive biases as laypeople, and previous research shows that they are even more overconfident in their judgements. Cognitive psychology has developed some concrete strategies for minimising judgement biases, but as yet these have seen little application in environmental science and natural resource management.

While it is worthwhile to mitigate unwanted biases in individuals, this should not be taken as a call to eliminate variability in judgement altogether. To do so would be both impractical and myopic. A broader perspective from philosophy suggests that values and diversity can be good for the judgements of groups: they bring together different knowledge and opinions, promote critical thinking, and compel people to consider counter-arguments. Moreover, when estimating quantities, individual biases tend to cancel each other out in the group average. Unfortunately, diversity can also lead to conflict between judges, underscoring the importance of structured decision making and judgement elicitation.
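To make the averaging claim concrete, here is a minimal toy simulation of my own (purely illustrative, not drawn from any particular study): each simulated judge estimates the same quantity with an idiosyncratic bias plus noise, and the group mean typically lands much closer to the truth than the average individual does.

```python
import random

random.seed(1)  # reproducible toy example

TRUE_VALUE = 100.0  # the quantity being estimated (arbitrary)
N_JUDGES = 50

# Each simulated judge has an idiosyncratic bias plus estimation noise.
# Because the biases point in different directions, they tend to cancel
# in the group average.
estimates = [TRUE_VALUE + random.gauss(0, 15) + random.gauss(0, 5)
             for _ in range(N_JUDGES)]

group_mean = sum(estimates) / N_JUDGES
avg_individual_error = sum(abs(e - TRUE_VALUE) for e in estimates) / N_JUDGES

print(f"error of the group mean:  {abs(group_mean - TRUE_VALUE):6.2f}")
print(f"average individual error: {avg_individual_error:6.2f}")
```

On a typical run the group mean misses the true value by only a unit or two, while individual estimates are off by a dozen or more.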

I recently completed my PhD thesis, which explored four considerations for environmental decision making involving experts whose judgements are diverse.

1. It examined three case studies of conflict arising from diverse judgements in environmental science, where people may be motivated to produce different estimates, and identified decision support tools suited to each problem. One case study was extended with Bayesian Networks to explore the practical implications of using one party's judgements over another's.

2. It took a step back from decision making to judgement elicitation, outlining a structured taxonomy of an important bias in expert judgement (overconfidence) and testing interval elicitation techniques that successfully mitigate overconfidence in individual experts.

3. The main experimental work demonstrated that active feedback improves the judgement performance (calibration and accuracy) of individuals in a group, even when group averages are substituted for true values in the feedback procedure. Typically, providing feedback requires true values for experts to learn from; here, I provide a new procedure for harnessing the wisdom of crowds during feedback when the truth is unknown (see the sketch after this list).

4. Through interviews, the thesis explored estimation strategies and statistical cognition: specifically, how people think about the relationship between two measures of uncertainty, interval width (or precision) and confidence.

The thesis concludes by outlining a new research program for improving judgement and estimation.
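As a rough sketch of the feedback idea in point 3 (the function and scoring details below are my own illustration, not the procedure from the thesis): when the true value is unknown, each expert's interval can be scored against the group median of best guesses, which stands in for the truth.

```python
from statistics import median

def feedback_scores(intervals):
    """Score expert intervals against a crowd-derived proxy for the truth.

    intervals: list of (low, best_guess, high) triples, one per expert.
    """
    # The group median of best guesses stands in for the unknown true value.
    proxy_truth = median(best for _, best, _ in intervals)
    scores = []
    for low, _, high in intervals:
        scores.append({
            "captured_proxy": low <= proxy_truth <= high,  # interval contains the proxy?
            "interval_width": high - low,                  # wider = less informative
        })
    return proxy_truth, scores

# Three hypothetical experts estimating the same quantity.
proxy, scores = feedback_scores([(80, 95, 110), (60, 70, 85), (90, 120, 160)])
print(f"proxy truth: {proxy}")
for i, s in enumerate(scores, 1):
    print(f"expert {i}: {s}")
```

Using the median rather than the mean makes the proxy robust to a single wildly overconfident outlier, which matters when the crowd is small.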
