Online / 5 & 6 February 2022


On the dissemination/evaluation loop for Research Software

This talk explores the interconnections that link Research Software (RS) dissemination and evaluation issues in the Open Science context, following the guidelines of the CDUR RS assessment protocol.

In our FOSDEM 2021 presentation:

Free/Open source Research Software production at the Gaspard-Monge Computer Science laboratory - Lessons learnt

we analyzed the evolution of several problems that arise when considering the Research Software (RS) production of a laboratory, and we highlighted several issues that should be addressed to deal with these problems within the Open Science context. Among them, we called for the establishment of sound dissemination and evaluation procedures following the guidelines of the CDUR RS assessment protocol presented in:

Gomez-Diaz T and Recio T. On the evaluation of research software: the CDUR procedure. F1000Research 2019, 8:1353.

CDUR comprises four steps that can be succinctly described as follows:

  • Citation, to deal with correct RS identification,
  • Dissemination, to measure good dissemination practices,
  • Use, devoted to the evaluation of usability aspects, and
  • Research, to assess the impact of the scientific work.

In this talk we would like to analyze RS dissemination and evaluation issues and the above-mentioned protocol in more depth, referring in particular to how this protocol can be adapted to the different situations that may appear in evaluation processes, such as different evaluation contexts (career, review...).

We will also highlight the interconnections that link dissemination and evaluation issues: RS dissemination needs to adjust to evaluation rules, and only suitably disseminated RS (perhaps in a restricted context) can be evaluated.

This is a collaborative work with Tomas Recio, Professor at the University Antonio de Nebrija (Madrid).


Teresa Gomez-Diaz