
Preprint out! Task specialization and its effects on research careers

Yesterday we made our work on task specialization and its effects on research careers openly accessible. The paper is available here: doi:10.1101/2020.07.01.181669

A brief summary of its contents is available in this Twitter thread:


Online seminar for the Academic Careers Hub at CWTS

Yesterday I had the privilege of presenting an online seminar for the Academic Careers Hub at CWTS, invited by Inge van der Weijden and Guus Dix. In this presentation I showed the latest updates on the valuation model we are designing, which we will soon be able to test with a multiple case study analysis of five departments of Dutch universities. The presentation is included below. Looking forward to being able to show more definitive results!


A Humanities Ranking of Spanish universities based on experts’ opinion

As part of the MOOC Decision Making Under Uncertainty: Applying Structured Expert Judgment, I had to conduct an actual elicitation with experts. Structured Expert Judgment is designed for and implemented in areas where there is an evident lack of data and we are therefore forced to rely on experts to make predictions. This is not the case in scientometrics, where the volume of data is actually increasing exponentially with the transition to online collaboration, commenting and dissemination of scientific work.

However, we do have a problem when deciding which data should be used in research assessment, and how. An area in which the use of scientometrics is especially troublesome is the Humanities. In fields, or when responding to issues, where there is a lack of consensus on the use of scientometrics, conducting structured expert judgment elicitations may come in handy. For this reason, my elicitation focused on the development of university rankings (another controversial area) in the Humanities, where consensus is particularly scarce. The actual submitted report is here. Of course, this was an exercise, so none of the results should be taken seriously, beware!

If you find it useful in any way and want to reference it, feel free to do so! The paper is uploaded to Zenodo and you can find it here:

Nicolas Robinson-Garcia. (2020, April 27). Using Structured Expert Judgment to predict a university ranking in the Humanities (Version 1). Zenodo. http://doi.org/10.5281/zenodo.377039


Structured Expert Judgment

Quantifying uncertainty or trying to make predictions on subjects for which there is an evident lack of data can be challenging. Hence turning to experts seems reasonable, bearing in mind that they may not agree in their judgments. The Structured Expert Judgment method, or Cooke's Method, named after Roger Cooke, who formulated it, aims to treat expert judgment as scientific data in a methodologically transparent way. According to Cooke & Goossens (2008), a structured judgment study may pursue three goals:

  • Census. Represent the general opinion of a community.
  • Political consensus. Opinions from the different stakeholders are to be represented in the final decision.
  • Rational consensus. Refers to a group decision process. Here a set of conditions is necessary in order to ensure its reliability:
    • Accountability. All data are open to peer review.
    • Empirical control. Quantitative expert assessments are subject to quality controls.
    • Neutrality. Experts should not be conditioned towards particular final opinions.
    • Fairness. Experts are not pre-judged.

Structured Expert Judgment is therefore a quantitative methodology which tries to bridge between subjective data and predictions by measuring the uncertainty behind such data. While the method does, of course, assess experts’ expertise, an interesting read on the selection of “good” versus “bad” experts from a qualitative point of view is provided by Gläser & Laudel (2009).
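
To make the calibration idea more concrete, below is a minimal sketch of the calibration score used in Cooke’s classical model, assuming each expert states 5%, 50% and 95% quantiles for a set of seed questions whose true values are known. The bin probabilities and the chi-square approximation follow Cooke’s formulation; the expert quantiles and realizations in the example are invented purely for illustration.

```python
# Calibration score in Cooke's classical model (sketch).
import numpy as np
from scipy.stats import chi2

# Probability mass expected between the stated 5/50/95% quantiles.
P_BINS = np.array([0.05, 0.45, 0.45, 0.05])

def calibration_score(quantiles, realizations):
    """quantiles: (N, 3) array with an expert's 5%, 50% and 95% quantiles
    for N seed questions; realizations: (N,) array of the true values."""
    quantiles = np.asarray(quantiles, float)
    realizations = np.asarray(realizations, float)
    n = len(realizations)
    # Into which of the four interquantile bins does each realization fall?
    bins = np.sum(realizations[:, None] > quantiles, axis=1)   # values 0..3
    s = np.bincount(bins, minlength=4) / n                      # empirical frequencies
    # Relative information of the empirical frequencies w.r.t. the theoretical ones.
    mask = s > 0
    rel_info = np.sum(s[mask] * np.log(s[mask] / P_BINS[mask]))
    # 2*n*I(s, p) is asymptotically chi-square distributed with 3 degrees of freedom;
    # well-calibrated experts get scores close to 1.
    return 1.0 - chi2.cdf(2 * n * rel_info, df=3)

# Toy example: one expert, five seed questions (values made up).
expert_quantiles = [[2, 5, 9], [10, 20, 40], [0.1, 0.5, 1.0], [3, 6, 12], [50, 80, 120]]
true_values = [4, 35, 0.7, 5, 90]
print(calibration_score(expert_quantiles, true_values))
```

In the full method this calibration score is combined with an information score (how concentrated the expert’s distributions are) to derive performance-based weights for pooling the experts’ assessments.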

Courses on Structured Expert Judgment

References

Cooke, R. M., & Goossens, L. L. H. J. (2008). TU Delft expert judgment data base. Reliability Engineering & System Safety, 93(5), 657–674. https://doi.org/10/c8m5tm

Gläser, J., & Laudel, G. (2009). On Interviewing “Good” and “Bad” Experts. In A. Bogner, B. Littig, & W. Menz (Eds.), Interviewing Experts (pp. 117–137). Palgrave Macmillan UK. https://doi.org/10.1057/9780230244276_6


Presenting specialized profiles based on contribution statements

Last week the Knowledge Transfer Conference, organized by the IESA (CSIC), was held in Córdoba (Spain). We took this opportunity to present for the first time our results on the use of contribution statements to profile researchers, combining Bayesian networks and archetypal analysis. Bayesian networks are a machine learning technique for developing predictive models. Archetypal analysis is a non-parametric technique for identifying patterns in multivariate data sets: instead of clustering cases, it defines archetypes at which cases take extreme values in one or more of the variables introduced.

We use these two techniques as follows. First, we predict the probability of performing specific contributions based on bibliometric variables. Second, we identify archetypes or profiles of researchers. Based on our results we can explore research career trajectories and potential gender biases, and compare productivity and citation measures by archetype.
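
As an illustration of the first step, here is a minimal sketch of fitting a discrete Bayesian network and querying the probability of a contribution given bibliometric evidence. It assumes the pgmpy library; the variable names, discretization and network structure are hypothetical and much simpler than in the actual study.

```python
# Sketch: predict the probability of a contribution from discretized
# bibliometric variables with a hand-specified Bayesian network (pgmpy assumed).
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Toy data: one row per author-paper pair, all variables already discretized.
df = pd.DataFrame({
    "academic_age": ["junior", "senior", "junior", "senior"] * 25,
    "output_level": ["low", "high", "high", "low"] * 25,
    "wrote_paper":  ["yes", "no", "yes", "no"] * 25,
})

# Bibliometric variables are parents of the contribution variable.
model = BayesianNetwork([("academic_age", "wrote_paper"),
                         ("output_level", "wrote_paper")])
model.fit(df, estimator=MaximumLikelihoodEstimator)

# Probability of the writing contribution for a junior, highly productive author.
infer = VariableElimination(model)
print(infer.query(["wrote_paper"],
                  evidence={"academic_age": "junior", "output_level": "high"}))
```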

Below are the slides I used, also available on Zenodo at doi:10.5281/zenodo.3580984


Valuation model presented at the #ATLC2019

Yesterday, Nicolas presented the paper ‘Towards a multidimensional valuation model of scientists’ at the Atlanta Conference on Science and Innovation Policy 2019 in Atlanta, GA. The model was previously presented as a poster at the ISSI 2019 Conference. We are now moving forward with the data collection process and have already retrieved the bibliometric data for the six research groups we are analyzing as case studies. As an exploratory analysis to see whether the model we have designed could actually identify different profiles, we did an archetypal analysis using very limited and dubious variables to operationalize each dimension. Although the results must be interpreted with lots of caution, the fact that we could find distinct archetypes, and even some consistencies between fields, was really surprising and reassuring.

Slides from the presentation at the ATL Conference.

The presentation was followed by a heated debate and conflicting views, showing that the project seems to be tackling a sensitive issue. There were good comments and lots of interest. Hopefully, as we develop our case studies, the results will become more consistent and robust.


The Falling Walls contest

Last week Nicolas participated in the Falling Walls Lab Marie Skłodowska-Curie contest, in which MSCA fellows face the challenge of presenting their research in just three minutes. After receiving some training and coaching on public speaking, 30 contestants had the pleasure of participating in this unusual event.

Presenting our project using contribution data to profile scientists

Nicolas took the opportunity to present our first findings from a new study in which we are using contribution data from PLOS journals to predict the contributions of scientists and develop taxonomies of scientists based on their contribution patterns. This research is being done in collaboration with Tina Nane, Rodrigo Costas, Vincent Larivière and Cassidy R. Sugimoto. The whole competition was streamed and the video is available online; check around minute 54 to see my presentation. More on this to come!

Some pics from the event


Presenting the valuation model at ISSI 2019 in Rome

Last Tuesday we presented the poster ‘Towards a multidimensional valuation model of scientists’, co-authored with Tina Nane, Rodrigo Costas and Thed N. van Leeuwen, at the ISSI 2019 Conference held in Rome. Here is a brief summary of its contents:

The use of scientometric indicators for individual research assessment has been severely criticized over the years due to their limited capacity to discriminate between different scientists and to capture differences in a statistically reliable manner. Nevertheless, science managers and policy makers make use of these indicators for the recruitment of scholars, promotion or allocation of funds. This has provoked strong reactions from the academic community. We argue that the greatest threat of the current use of bibliometric indicators for the assessment of scientists goes beyond technical or methodological decisions, and is more related to the unreflective use of metrics at the individual level. Drawing on the current literature and our own experience in conducting research evaluation, we present a tentative valuation model which tries to balance a conceptually informed framework with a methodologically viable operationalization. The model is designed so that it can be operationalized by making use of bibliometric indicators, although we acknowledge that it is sufficiently broad to accommodate non-bibliometric indicators.

The paper accompanying the poster, as well as the poster itself, are openly accessible and available at:

DOI