Quantifying uncertainty or making predictions on subjects for which there is an evident lack of data can be challenging. Turning to experts therefore seems reasonable, bearing in mind that they may not agree in their judgments. The Structured Expert Judgment Method, or Cooke’s Method, named after Roger Cooke, who formulated it, aims at treating expert judgments as scientific data in a methodologically transparent way. According to Cooke & Goossens (2008), structured expert judgment may pursue three goals:
Census. Represents the general opinion of a community.
Political consensus. The opinions of different stakeholders are to be represented in the final decision.
Rational consensus. Refers to a group decision process, where a set of conditions is necessary to ensure its reliability:
Accountability. All data are open to peer review.
Empirical control. Quantitative expert assessments are subjected to quality controls.
Neutrality. Experts should not be steered towards particular opinions.
Fairness. Experts are not pre-judged.
Structured Expert Judgment is therefore a quantitative methodology which bridges between subjective data and predictions by measuring the uncertainty behind such data. While the method itself assesses experts’ expertise, an interesting reading on the selection of “good” versus “bad” experts from a qualitative point of view is provided by Gläser & Laudel (2009).
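To give a flavor of how the method subjects expert assessments to empirical control, here is a minimal sketch of the calibration component of Cooke’s classical model: experts state 5%, 50% and 95% quantiles for “seed” questions with known answers, and the score measures how often the true values fall in each inter-quantile bin compared to the theoretical frequencies. The function names and the helper for the chi-square tail are ours; the full classical model additionally computes an information score and combines both into performance-based weights.

```python
import numpy as np
from math import erfc, exp, pi, sqrt

def chi2_sf_df3(x):
    """Survival function of the chi-square distribution with 3 degrees of freedom."""
    return erfc(sqrt(x / 2.0)) + sqrt(2.0 * x / pi) * exp(-x / 2.0)

def calibration_score(realizations, assessments, probs=(0.05, 0.45, 0.45, 0.05)):
    """Calibration score of a single expert under Cooke's classical model.

    assessments holds, per seed question, the expert's 5%, 50% and 95%
    quantiles; realizations holds the true values of the seed questions.
    """
    counts = np.zeros(4)
    for x, quantiles in zip(realizations, assessments):
        counts[np.searchsorted(quantiles, x)] += 1  # which inter-quantile bin x hit
    s = counts / counts.sum()          # empirical bin frequencies
    p = np.asarray(probs)              # theoretical frequencies
    mask = s > 0
    divergence = float(np.sum(s[mask] * np.log(s[mask] / p[mask])))  # I(s, p)
    # 2*N*I(s, p) is asymptotically chi-square with 3 degrees of freedom;
    # the score is the probability of seeing at least this divergence by chance
    return chi2_sf_df3(2 * len(realizations) * divergence)
```

A well-calibrated expert, whose realizations hit the bins in roughly the 5/45/45/5 proportions, gets a score near 1; an overconfident expert, whose realizations keep falling outside the stated quantiles, gets a score near 0.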
Cooke, R. M., & Goossens, L. L. H. J. (2008). TU Delft expert judgment data base. Reliability Engineering & System Safety, 93(5), 657–674. https://doi.org/10/c8m5tm
Gläser, J., & Laudel, G. (2009). On Interviewing “Good” and “Bad” Experts. In A. Bogner, B. Littig, & W. Menz (Eds.), Interviewing Experts (pp. 117–137). Palgrave Macmillan UK. https://doi.org/10.1057/9780230244276_6
Last week the Knowledge Transfer Conference, organized by the IESA (CSIC), took place in Córdoba (Spain). We took this opportunity to present for the first time our results on the use of contribution statements to profile researchers, combining Bayesian Networks and Archetypal Analysis. Bayesian Networks are a machine learning technique for developing predictive models. Archetypal Analysis is a non-parametric technique for identifying patterns in multivariate data sets: instead of clustering cases, it defines archetypes, that is, profiles in which cases take extreme values in one or more of the variables introduced.
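The idea behind archetypal analysis can be sketched as follows: the data matrix X is approximated as A @ Z, where the archetypes Z are themselves convex mixtures of observed cases (Z = B @ X) and each case is a convex mixture of archetypes. This is a schematic numpy sketch using projected gradient descent; dedicated implementations (e.g., the R `archetypes` package) use alternating least-squares instead, and the step sizes here are illustrative.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - 1)[0][-1]
    theta = (css[rho] - 1) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def archetypal_analysis(X, n_archetypes, n_iter=300, lr=0.01, seed=0):
    """Approximate X as A @ Z with Z = B @ X, rows of A and B on the simplex.

    Each case is then a convex mixture of archetypes, and each archetype a
    convex mixture of observed cases, hence an 'extreme' profile.
    """
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    A = rng.random((n, n_archetypes))
    A /= A.sum(axis=1, keepdims=True)
    B = rng.random((n_archetypes, n))
    B /= B.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Z = B @ X
        grad_A = (A @ Z - X) @ Z.T            # d/dA of 0.5 * ||X - A Z||^2
        A = np.apply_along_axis(project_to_simplex, 1, A - lr * grad_A)
        grad_B = A.T @ (A @ B @ X - X) @ X.T  # d/dB of 0.5 * ||X - A B X||^2
        B = np.apply_along_axis(project_to_simplex, 1, B - lr * grad_B / n)
    return A, B @ X
```

Because the rows of B lie on the simplex, every archetype stays inside the convex hull of the observed cases, which is what makes the resulting profiles interpretable as extreme but realistic researchers.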
We use these two techniques for the following purposes. First, to predict the probability of performing specific contributions based on bibliometric variables. Second, to identify archetypes or profiles of researchers. Based on our results we can explore research career trajectories, examine potential gender biases, and compare productivity and citation measures by archetype.
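As an illustration of the first step, in a discrete Bayesian network each node carries a conditional probability table (CPT) given its parent nodes, estimated from counts. The sketch below builds one such table for a single node; the variables (seniority, output level) and the toy data are hypothetical and only illustrate the mechanics, not our actual model.

```python
import itertools
import numpy as np

def fit_cpt(parent_data, child_data, parent_levels, alpha=1.0):
    """Conditional probability table P(child = 1 | parents) from counts.

    Laplace smoothing (alpha) keeps unseen parent combinations away from
    zero or undefined probabilities.
    """
    cpt = {}
    for combo in itertools.product(*parent_levels):
        mask = np.all(parent_data == np.array(combo), axis=1)
        hits = child_data[mask].sum()
        cpt[combo] = (hits + alpha) / (mask.sum() + 2 * alpha)
    return cpt

# Hypothetical toy data: columns are seniority (0 junior / 1 senior) and
# output level (0 low / 1 high); the child variable flags whether the
# researcher reported a "data analysis" contribution.
parents = np.array([[0, 0], [0, 0], [1, 1], [1, 1], [1, 1], [0, 1]])
contribution = np.array([0, 0, 1, 1, 1, 0])
cpt = fit_cpt(parents, contribution, [(0, 1), (0, 1)])
```

Note that the unseen combination (senior, low output) falls back to the smoothed prior of 0.5, which is the intended behavior of the smoothing term.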
Yesterday, Nicolas presented the paper ‘Towards a multidimensional valuation model of scientists’ at the Atlanta Conference on Science and Innovation Policy 2019 in Atlanta, GA. The model was previously presented as a poster at the ISSI 2019 Conference. We are now moving forward with the data collection process and have already retrieved the bibliometric data for the six research groups we are analyzing as case studies. As an exploratory analysis to see whether the model we have designed could actually identify different profiles, we ran an archetypal analysis using very limited, admittedly dubious, variables to operationalize each dimension. Although the results must be interpreted with great caution, the fact that we could find distinct archetypes, and even some consistencies between fields, was really surprising and reassuring.
The presentation was followed by a heated debate and conflicting views, showing that the project seems to be tackling a sensitive issue. There were good comments and lots of interest. Hopefully, as we develop our case studies, the results will become more consistent and robust.
Last week Nicolas participated in the Falling Walls Lab Marie Skłodowska-Curie contest, in which MSCA fellows face the challenge of presenting their research in just three minutes. After receiving some training and coaching on public speaking, 30 contestants had the pleasure of participating in this unusual event.
Nicolas took the opportunity to present our first findings from a new study in which we are using contribution data from PLOS journals to predict the contributions of scientists and develop taxonomies of scientists based on their contributing patterns. This study is being done in collaboration with Tina Nane, Rodrigo Costas, Vincent Larivière and Cassidy R. Sugimoto. The whole competition was streamed and the video is available online; check around minute 54 to see the presentation. More on this to come!
Last Tuesday we presented the poster ‘Towards a multidimensional valuation model of scientists’, co-authored with Tina Nane, Rodrigo Costas and Thed N. van Leeuwen, at the ISSI 2019 Conference held in Rome. Here we include a brief summary of its contents:
The use of scientometric indicators for individual research assessment has been severely criticized over the years due to their limited capacity to discriminate between different scientists and capture differences in a statistically reliable manner. Nevertheless, science managers and policy makers use these indicators for the recruitment of scholars, promotion, or allocation of funds. This has provoked strong reactions from the academic community. We argue that the greatest threat of the current use of bibliometric indicators for the assessment of scientists goes beyond technical or methodological decisions, and is more related to the unreflective use of metrics at the individual level. Drawing on the current literature and our own experience of conducting research evaluation, we here present a tentative valuation model which tries to balance a conceptually informed framework with a methodologically viable operationalization. The model is designed so that it can be operationalized using bibliometric indicators, although it is sufficiently broad to accommodate non-bibliometric indicators as well.
The paper accompanying the poster, as well as the poster itself, are openly accessible and available at: