Quantifying uncertainty or making predictions on subjects for which there is an evident lack of data can be challenging. Turning to experts therefore seems reasonable, bearing in mind that they may not agree in their judgments. The Structured Expert Judgment method, or Cooke's Method, named after Roger Cooke, who formulated it, aims to treat expert judgment as scientific data in a methodologically transparent way. According to Cooke & Goossens (2008), structured judgment may pursue three goals:
- Census. Represent the general opinion of a community.
- Political consensus. The opinions of different stakeholders are to be represented in the final decision.
- Rational consensus. Refers to a group decision process. Here a set of conditions is necessary in order to ensure its reliability:
  - Accountability. All data are open to peer review.
  - Empirical control. Quantitative expert assessments are subject to quality controls.
  - Neutrality. The method should not bias experts toward particular final opinions.
  - Fairness. Experts are not pre-judged.
Structured Expert Judgment is therefore a quantitative methodology that tries to bridge between subjective data and predictions by measuring the uncertainty behind such data. While the method itself assesses experts' expertise, an interesting qualitative reading on distinguishing "good" from "bad" experts is provided by Gläser & Laudel (2009).
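To make the "empirical control" idea concrete, below is a minimal sketch of the calibration score used in Cooke's classical model, under the common setup where each expert states 5%, 50%, and 95% quantiles for a set of seed questions whose true values are later revealed. The function names, the three-quantile format, and the example data are illustrative assumptions, not part of the cited papers; the full method also combines calibration with an information score, which is omitted here.

```python
import math

# Theoretical probabilities of the four inter-quantile bins
# defined by an expert's 5%, 50%, and 95% quantiles.
P_THEORETICAL = (0.05, 0.45, 0.45, 0.05)

def bin_index(realization, quantiles):
    """Return which inter-quantile bin (0-3) the realization falls into."""
    q05, q50, q95 = quantiles
    if realization < q05:
        return 0
    if realization < q50:
        return 1
    if realization < q95:
        return 2
    return 3

def calibration_score(assessments, realizations):
    """Calibration score: the p-value of the statistic 2*N*I(s; p).

    s is the empirical distribution of realizations over the bins and
    I(s; p) the relative entropy (natural log) against P_THEORETICAL.
    Under good calibration, 2*N*I(s; p) is approximately chi-square
    with 3 degrees of freedom; a higher score means better calibration.
    """
    n = len(realizations)
    counts = [0, 0, 0, 0]
    for quantiles, x in zip(assessments, realizations):
        counts[bin_index(x, quantiles)] += 1
    s = [c / n for c in counts]
    relative_entropy = sum(
        si * math.log(si / pi)
        for si, pi in zip(s, P_THEORETICAL)
        if si > 0
    )
    stat = 2 * n * relative_entropy
    # Survival function of chi-square with 3 df in closed form,
    # avoiding a SciPy dependency.
    return (math.erfc(math.sqrt(stat / 2))
            + math.sqrt(2 * stat / math.pi) * math.exp(-stat / 2))
```

For example, an expert whose realizations fall into the four bins in exactly the theoretical proportions gets a score of 1, while one whose realizations all land above the stated 95% quantile is heavily penalized; in the full model this score (times an information score, with a cut-off) determines the expert's weight in the combined distribution.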
Courses on Structured Expert Judgment
- Decision Making Under Uncertainty: Introduction to Structured Expert Judgment
- Decision Making Under Uncertainty: Applying Structured Expert Judgment
Cooke, R. M., & Goossens, L. L. H. J. (2008). TU Delft expert judgment data base. Reliability Engineering & System Safety, 93(5), 657–674. https://doi.org/10/c8m5tm
Gläser, J., & Laudel, G. (2009). On Interviewing “Good” and “Bad” Experts. In A. Bogner, B. Littig, & W. Menz (Eds.), Interviewing Experts (pp. 117–137). Palgrave Macmillan UK. https://doi.org/10.1057/9780230244276_6