Physicists like numbers. We measure them, calculate them and use them to test experiment against theory.
Yet we also respect numbers. We appreciate their limitations and take extreme care to teach students that numbers are open to misinterpretation. From the first years of university-level physics teaching we stress how measurements are affected by systematic and random errors and uncertainties, and we explain the difference between accuracy (“truth”) and precision (“reproducibility”).
We view numbers that suggest new physics with great suspicion and, indeed, the recent reanalysis of the Gran Sasso neutrino velocity results highlights just how hard we work to revisit data and methodology in such cases to identify (and in this case, find!) sources of error.
As physicists, we are proud to insist on such rigour and we are prone to look less than kindly on other disciplines where such attention to the role of errors in measurements is not as widespread.
Yet something strange seems to happen when it comes to interpreting the numbers that describe scientific publications and citations, and their role in assessing the quality of researchers and their output. I cannot help feeling that we do not subject bibliometric indicators – such as the Impact Factor and the H-index – to the same level of scrutiny as the other numerical data that pass across our desks.
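Part of the seduction of such metrics is how trivially they can be computed. The H-index, for instance, is simply the largest number h such that h of a researcher's papers have at least h citations each. A minimal sketch (the citation counts below are invented for illustration):

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers:
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # prints 3
```

The ease of the calculation is precisely the point: a single loop collapses an entire career into one integer, discarding everything about *why* each paper was cited.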
Of course, this is something of a generalization. Many researchers take the interpretation of bibliometrics extremely seriously: citation data are, after all, an important part of the modern research landscape.
The quantitative analysis of publications can be valuable in many different circumstances. Tools such as Google Scholar and ResearcherID help others see what we are working on now, and what we worked on in the past. And it is always nice to see that others find our results useful: even if the papers we are most proud of do not always attract the (numerical) attention we feel they deserve.
Whilst it may be easy to calculate bibliometric data, however, it does not follow that such data are easy to interpret. This is especially the case in the evaluation and assessment of individual researchers, laboratories and research proposals, which brings us to the title of this editorial, “Assessment Assessed”.
The European Physical Society has just released a statement – entitled “On The Use Of Bibliometric Indices During Assessment” – which gives clear recommendations on how bibliometric data should be used during such evaluation processes.
Whilst publication and citation measures can certainly provide useful information as part of an evaluation process, they need to be complemented by a wide variety of other considerations, such as the publication habits of the discipline under study, the local and national context, the many other tasks that physicists perform depending on their career stage, and so on.
This is of course just common sense, but sometimes we forget the things that are most sensible; we should not, just as we should never forget the lessons of first-year physics.