Table 2 Characteristics for good research assessments

From: Rethinking success, integrity, and culture in research (part 1) — a multi-actor qualitative study on success in science

Columns: Characteristic — Sample quote — Actor

Characteristic: Diversity of indicators

  EP: “With metrics I think there is an important rule to keep in mind is that if you’re going to use metrics, you need to use many of them. And you need to really understand what they mean and whether they answer what you’re looking for.”

  FA: “And maybe also, and I think that the idea of taking other impacts into account can be helpful. I have no one solution, but I think this can be helpful.”

  RIO: “I think that you have to have different parameters. I think that’s important. Not to focus on just one or a few, but have different parameters that focus on different aspects and put these together together with alternatives.”

  EP: “...you need to use [metrics] in combination in terms of other indicators, you need to use what we say ‘a basket of metrics’, you cannot evaluate people just based on one single metric. I would argue that you need several, and then of course you have different metrics evaluating different things. One thing is valuating excellence. Impact factor is going to be among that one. But then when you evaluate the education part, the capacity of someone to be a good professor, you need different educators also. So that’s the first one. You need … It’s never in isolation.”

Characteristic: Human input

  EP: “I don’t think you can rely on one or several indicators without human input. You can’t make sense of a number on its own.”

  PMI: “That’s also why there is not one penny of research money allocated in the university that is not based on peer-review. Everything is based on peer review. So every proposal submitted is based on peer review.”

  FA: “So you can have indicators, publications of course is a good indicator, but it’s an indicator. You should also do, let’s say an holistic, what we call an holistic approach, have an holistic approach.”

Characteristic: Quality over quantity

  EP: “I think they should evaluate again true quality in terms of … And that isn’t done easily via metric, one metric, you know. It’s actually … You know I would suggest, it’s just up the top of my head … That for any appointment, the people say to a researcher ‘OK, please choose your best two or three papers in the past five years, and then two additional ones if you want, where there’s no time limit. You know, something that you might have published 20 years ago, but is really important. And you submit that with your application. And then people … And you say to us why it’s important. And then people need to of course evaluate that. So that would get away to, that would take into account the more longer term strategy of someone, but also … But it needs a qualitative.”

Characteristic: Transparent (robust and valid) indicators

  EP: “I think that it needs to be transparent, robust, validated, etc. [...] Everybody can see the methodology, they can reproduce it if they want. [...] As long as you describe really well what the rankings take into account, and why you are first or hundredth … What would be bad is if you rank and you don’t tell people on what basis you rank. There you go. You might disagree with the indicator, you might disagree with the ranking, but as long as it’s transparent, well validated, robust, etc. There you go.”