How can we evaluate experts' performance as dependence assessors?

When experts assess uncertain quantities, it is desirable to have some kind of empirical control over the expert opinions. In practice this often translates into a measure of “statistical accuracy” of the experts’ opinions. This measure is often called calibration, and in Cooke’s classical model it is summarized in a quantity computed from the experts’ answers to “seed” or “calibration” variables. Seed or calibration variables are questions about quantities that resemble the uncertain quantities of interest as closely as possible, but for which the analyst knows (or will know within the time frame of the study) the true value.
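As an illustration, the sketch below computes such a statistical-accuracy (calibration) score in the spirit of Cooke’s classical model, assuming that experts state 5%, 50% and 95% quantiles for each seed variable; the data in the example are hypothetical.

```python
# Sketch of a calibration (statistical accuracy) score in the spirit of
# Cooke's classical model. Assumes the expert gives 5%, 50% and 95% quantiles
# for each seed variable, so each realization falls into one of four
# inter-quantile bins with theoretical probabilities p.
import numpy as np
from scipy.stats import chi2

def calibration_score(quantiles, realizations, p=(0.05, 0.45, 0.45, 0.05)):
    """quantiles: (n_seed, 3) array with the expert's 5/50/95 quantiles.
    realizations: (n_seed,) array with the true values of the seed variables."""
    quantiles = np.asarray(quantiles, dtype=float)
    realizations = np.asarray(realizations, dtype=float)
    n = len(realizations)

    # Empirical distribution s: fraction of realizations in each inter-quantile bin.
    counts = np.zeros(len(p))
    for q, x in zip(quantiles, realizations):
        counts[np.searchsorted(q, x)] += 1  # bin index 0..3
    s = counts / n

    # Relative information I(s; p) of the empirical over the theoretical distribution.
    mask = s > 0
    info = np.sum(s[mask] * np.log(s[mask] / np.asarray(p)[mask]))

    # 2*n*I(s; p) is asymptotically chi-squared with len(p)-1 degrees of freedom;
    # the calibration score is the corresponding tail probability.
    return chi2.sf(2 * n * info, df=len(p) - 1)

# Hypothetical expert answers to 10 seed questions and the realized values.
rng = np.random.default_rng(0)
q = np.sort(rng.normal(size=(10, 3)), axis=1)
x = rng.normal(size=10)
print(calibration_score(q, x))
```

The score is the tail probability of a chi-squared statistic measuring how far the observed bin frequencies are from the theoretical ones; scores close to zero indicate a poorly calibrated expert.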

What about dependence? As shown in section <here a link to section 1>, it is common practice to also elicit estimates of dependence measures from experts. However, an empirical control of these estimates, analogous to the one developed for uncertain quantities, is largely lacking in the field. It is the task and ambition of this WG to develop and advance the instruments required to “calibrate” expert opinions in terms of dependence estimates.

Some progress has been made recently in this direction. The starting point is to summarize the dependence measures provided by experts in correlation matrices. Similarly to what is done with uncertain quantities, a “seed” or “calibration” correlation matrix is needed. Then, by using a measure of “distance” between matrices, one can construct a dependence-calibration or d-calibration score (see the sketch after the list of questions below). Some general questions about evaluating the performance of experts as dependence assessors are:

· Can experts provide sufficiently “calibrated” answers?

· Are good uncertainty assessors also good assessors of dependence?

· Is combining expert opinions about dependence a “desirable” thing to do?
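To make the d-calibration idea concrete, the sketch below uses one possible choice of “distance”: the Hellinger distance between the zero-mean multivariate normal densities determined by the seed correlation matrix and the expert’s correlation matrix, turned into a score between 0 and 1. Both the matrices and this particular score definition are illustrative assumptions, not the definitive formulation.

```python
# Sketch of a possible dependence-calibration (d-calibration) score, assuming
# "distance" between correlation matrices is measured via the Hellinger
# distance between the zero-mean multivariate normals they determine.
import numpy as np

def hellinger_distance(r1, r2):
    """Hellinger distance between N(0, r1) and N(0, r2) for correlation matrices r1, r2."""
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    num = np.linalg.det(r1) ** 0.25 * np.linalg.det(r2) ** 0.25
    den = np.linalg.det((r1 + r2) / 2.0) ** 0.5
    return np.sqrt(1.0 - num / den)

def d_calibration(seed_corr, expert_corr):
    """One possible d-calibration score: 1 minus the Hellinger distance,
    so that a perfect match with the seed correlation matrix scores 1."""
    return 1.0 - hellinger_distance(seed_corr, expert_corr)

# Hypothetical seed correlation matrix and an expert's elicited matrix.
seed = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])
expert = np.array([[1.0, 0.5, 0.2],
                   [0.5, 1.0, 0.5],
                   [0.2, 0.5, 1.0]])
print(d_calibration(seed, expert))
```

Other matrix distances (for example, a simple Frobenius norm of the difference) could be substituted in the same scheme; the choice of distance is part of the open research question addressed by this WG.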

For a recent overview of the topics mentioned here, see <here a link to the presentation in Glasgow>.