
Assessment assessed

Published on 28 June 2012 in: Editorial

Physicists like numbers. We measure them, calculate them and use them to test experiment against theory.

Yet we also respect numbers. We appreciate their limitations and we take great care to teach students that numbers are open to misinterpretation. From the first years of university-level physics teaching we stress how measurements are affected by systematic and random errors and uncertainties, and we explain the difference between accuracy (“truth”) and precision (“reproducibility”).

We view numbers that suggest new physics with great suspicion and, indeed, the recent reanalysis of the Gran Sasso neutrino velocity results highlights just how hard we work to revisit data and methodology in such cases to identify (and in this case, find!) sources of error.

As physicists, we are proud to insist on such rigour and we are prone to look less than kindly on other disciplines where such attention to the role of errors in measurements is not as widespread.

Yet something strange seems to happen when it comes to interpreting the numbers which describe scientific publications and citations, and their role in the assessment of the quality of researchers and their output. I cannot help feeling that we do not subject bibliographic metrics – such as the Impact Factor and the H-index – to the same level of scrutiny as the other numerical data that passes across our desks.
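To make the contrast concrete, here is a minimal sketch (in Python, purely illustrative, with invented citation counts) of how the H-index is commonly defined: the largest number h such that h of a researcher’s papers have each received at least h citations. The calculation is trivial; deciding what the resulting number means is not, since very different citation profiles can share the same value.

    # Illustrative sketch: the H-index is the largest h such that
    # h papers have at least h citations each. The data are invented.
    def h_index(citations):
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(counts, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    steady  = [6, 6, 5, 5, 5, 4]        # consistent, modest impact
    one_hit = [400, 5, 5, 5, 5, 1, 0]   # dominated by a single landmark paper
    print(h_index(steady), h_index(one_hit))  # both give 5

The point is not the arithmetic but the interpretation: a single number of this kind hides exactly the sort of context that careful evaluation has to restore.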

Of course, this is something of a generalization. There are many researchers who take the interpretation of bibliometrics extremely seriously: citation data is, after all, a significant part of the modern research landscape.

The quantitative analysis of publications can be valuable in many different circumstances. Tools such as Google Scholar and ResearcherID make it easy for others to see what we are working on now and what we have worked on in the past. And it is always nice to see that others find our results useful, even if the papers we are most proud of do not always attract the (numerical) attention we feel they deserve.

Whilst bibliometric data may be easy to calculate, however, it does not follow that such data is easy to interpret. This is especially true in the evaluation and assessment of individual researchers, laboratories and research proposals, which brings us to the title of this editorial, “Assessment Assessed”.

The European Physical Society has just released a statement – entitled “On The Use Of Bibliometric Indices During Assessment” – which gives clear recommendations on how bibliometric data should be used during such evaluation processes.

Whilst publication and citation measures can certainly provide useful information as part of an evaluation process, they need to be complemented by a wide variety of other considerations, such as the publication habits of the discipline under study, the local and national context, the many other tasks that physicists undertake depending on their career stage, and so on.

This is, of course, just common sense, but the most sensible things are sometimes the easiest to forget. We should not forget them, just as we should never forget the lessons of first-year physics.



