IVA has recently been host to the ACUMEN project. Twenty-one research evaluation experts, project administrators and European Commission evaluators have been developing the future of research evaluation, led by Professor Paul Wouters from CWTS, Leiden University.
Read more about ACUMEN.
The workshops
During the two-day workshop, arranged by Birger Larsen and Lorna Wildgaard, ACUMEN investigated further how research evaluation currently happens at the individual level. The project is two years in, and this final year is decisive for its success or failure. The aim? To use the ACUMEN members' combined expertise to produce a portfolio of both traditional indicators and new (useful) qualitative indices and quantitative web-based and bibliometric measures. These measures will be presented to the researcher as an online enriched CV, which documents their research activities and supports assessments of their expertise, output and influence in the context of their demographic information and career-path narratives. This visualization tool is intended to support the core creativity of research in all disciplines rather than steer research in dull directions, such as publishing in high-JIF journals instead of working on low-prestige but relevant problems. Hence the indicators are not limited to publication and citation counts, or to traditionally measurable forms of scientific communication in journals, as much communication nowadays happens on the web, through popular media channels or via interactive installations.
The philosophy behind the project is to address the gap between creating research, evaluating research and promoting excellence. Current systems of research evaluation have a problem, and the problem is complicated. Researchers are evaluated within narrow frameworks and with limited technology. In these systems the societal role of their research is secondary, and methods of evaluation such as peer review can be biased and subjective, concentrate power in a scientific elite, and reinforce existing gender power structures. To understand the effect of evaluation, we need to be aware of differences between disciplines, genders and cultures. Thus, to obtain consistency between the mission of the researcher and the mission of evaluation, ACUMEN will also develop guidelines for Good Evaluation Practice, in the hope that evaluation can be implemented in a way that does not undermine the researcher's authority over the quality of their own work, and that supports their craftsmanship without granting them unlimited freedom or taking their freedom away.
What difference will ACUMEN make?
ACUMEN is investigating how evaluation plays out across a diverse labour force and across genders, calling into question how neutral and straightforward evaluation really is. In cooperation with the European Commission, ACUMEN will contribute to policies and help put research evaluation on a better track. The goal remains to promote excellence and tools that can solve societal problems while keeping space for creativity. The connection between the analysis of individual careers and evaluation, and the interaction between the evaluation process and career advancement, will be strengthened. The measures created will enrich CVs and point to activities in a systematic way that is acceptable to evaluators. The ACUMEN Portfolio is the link between knowledge evaluation and how it is embedded in the evaluation of research careers.
The Open Seminar
The ACUMEN members were joined on the third and final day of the
workshops by forty researchers and professionals not involved in
the ACUMEN project but with a shared interest in research
evaluation.
The theme of the seminar was how the performance of individual researchers is currently assessed, and the discrepancies between the criteria used in performance assessment and the broader social and economic function of scientific and scholarly research. The seminar also confronted problems in the current evaluation system, such as the applicability of quantitative measures at the individual level and the lack of recognition for the new types of work that researchers need to perform. As a result, the broader social functions of the scientific system are often not included in its quality control mechanisms. Solutions to these challenges were suggested.
Invited speakers included Daniel Spichtinger,
European Policy Officer evaluating the ACUMEN project, who
introduced Horizon 2020 focusing on the funding rate and position
of the social sciences and humanities. How the 70 billion euros
designated for research investment will be distributed between
disciplines is under discussion between the EU parliament and EU
advisors. How this will play out is still unknown. One person with
a clear opinion on this was Sune Auken, Leader of the PhD school at
the Faculty of the Humanities. In his presentation, he reflected on the differences between the humanities and the hard sciences, and how, in evaluation and in subsequent funding, humanists can be treated as failed scientists.
The comparatively small resources invested in humanities research mean that the effort of measuring it may not be worth the time or the money - a footnote for the EU evaluators present.
Clearly, evaluation measures must be designed specifically to account for the differing perspectives on quality and influence in the humanities and in the hard sciences. The theme of disciplinary (mis)use of measures was continued by Fredrik Åström, a bibliometrician from Lund University. He questioned the use and interpretation of bibliometrics in evaluation, taking as his case the h-index as a performance indicator in awarding funds. He found that reviewers are assumed to know and understand the differences between fields, yet this assumption is not in any way regulated or monitored. Dr. Kayvan Kousha, a visiting fellow from the
University of Wolverhampton, introduced a method for harvesting
book citations from Google Books as a source of evidence for
research impact while Dr. Andrea Scharnhorst, head of research at
Data Archiving and Networked Services at the Royal Netherlands Academy of Arts and Sciences and active in the e-humanities group, discussed
the challenges of providing a globally interoperable and expressive
data infrastructure for research information.
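For readers unfamiliar with the indicator Åström discussed: the h-index is the largest number h such that a researcher has h publications each cited at least h times. As a minimal illustration (not part of the ACUMEN toolkit), it can be computed in a few lines of Python:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:  # the paper at this rank still has enough citations
            h = rank
        else:
            break
    return h

# Five papers with these citation counts give an h-index of 4:
# four papers are cited at least four times each.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

The simplicity of the calculation is part of the problem Åström pointed to: the same h-value means very different things in fields with very different citation rates, and nothing in the number itself signals that.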
With the workshops and seminar brought to a
close, bibliometricians, researchers and evaluators alike
were in agreement that there is inconsistency between the mission
of the researcher and the mission of evaluation. Evaluation of the
individual researcher is the cornerstone of the scientific and
scholarly workforce and shapes the quality and relevance of
knowledge production in science, technology and innovation. Not all of the individual's activities and efforts to communicate their research are measured, but this does not mean that what is not visible is not important.
By Lorna Wildgaard