Metrics for the REF

The HEFCE consultation document [H] states that assessment and funding for the ‘other disciplines’ (arts, social sciences, mathematics) will be via light-touch peer review ‘informed by metrics’. The likely single panel for mathematics (or for mathematics and statistics) will thus clearly have to give considerable weight to metrics, while not being ‘driven by’ them as is planned for the ‘science-based disciplines’ (science, engineering and medicine, but not mathematics). We therefore need to put in our views as to what metrics should be used for mathematics, or we will find ourselves subject to those decided on for everyone else. The AHRC has put in its own views [A] on metrics, and I understand that something similar is brewing in the social sciences under the aegis of the Academy for the Learned Societies in the Social Sciences. Mathematics is on its own and would seem to need to speak up for itself.

Behind the consultation document lie the interdisciplinarity report [I] by Evidence Ltd and the scoping study [C] on citation analysis by the Leiden group. The former probably need not concern us much, except that in several places its authors expand STEM as science, technology, engineering and medicine (rather than mathematics), so we must get that corrected. The citation study [C] runs to 130 pages, but there is more: it refers to a report [D] giving a full comparative evaluation of all Dutch mathematics departments and institutes based on citation indicators. This 103-page report is in English and is highly relevant to any views we may formulate on metrics as applied to mathematics.

HEFCE's proposed metrics will be based on citation analysis [H, §31], presumably as developed by the Leiden group, which is already working for HEFCE. This will certainly be better than Thomson Scientific's ‘impact factors’, with their grotesquely short two-year citation window, but it will carry its own problems. The Leiden researchers defend their indicators [C, §3] but can provide only partial solutions to some potential problems, especially, in my view, those stemming from the data being a ‘given’, provided and to some extent constructed by the Thomson Corporation. The Leiden authors mention the possibility of using additional, non-Thomson, journal data [C, §3.3, paragraphs 3 & 4], but that would need a very considerable amount of work and does not seem a realistic prospect.
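To recall the definition (a sketch of the standard construction; Thomson's own documentation may differ in detail): for a journal $J$ and a census year $y$,

\[
\mathrm{IF}_y(J) \;=\; \frac{\text{citations received in year } y \text{ by items published in } J \text{ in years } y-1 \text{ and } y-2}{\text{number of citable items published in } J \text{ in years } y-1 \text{ and } y-2}.
\]

In mathematics, where citations typically accrue over many years, most of a paper's citation life falls outside so short a window.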

I concentrate in what follows on Thomson Scientific's classification of the world of research into individual topics. For our purposes this comes down [C, §4.1, subsection ‘Definition of fields’] to the 172 Subject Categories of the Science Citation Index Expanded 2007, called ‘fields’ in [C]. Key Leiden indicators involve a ‘field-specific international reference level’ [C, §2.3], allowing a comparison of citation rates with an average over a whole field. The validity and coherence of any such indicator hinge on there being a sensible classification into fields, and on the allocation of each journal to a specific field. The list of Subject Categories, with a brief ‘scope note’ on each, is at [F]; having read all the scope notes, I extract the following Subject Categories, with their scope notes, as those likely mainly to figure in the intended ‘mathematics & statistics’ area of the REF. (Individuals in our area can be expected also to publish in a scatter of other areas.)

The first thing to strike one about this list is that there are fields ‘Mathematics’ and ‘Mathematics, Applied’ but no ‘Mathematics, Pure’. As the Leiden authors note [C, §4.1], Thomson sometimes allocates a journal to more than one field. Indeed, the ‘Mathematics’ field has 215 journals, the ‘Mathematics, Applied’ field has 174, and 55 journals are common to the two. For a journal to be compared to a field-specific reference level it has to be allocated to one field, so a further allocation of journals to fields, beyond that done by Thomson, must have been carried out by Leiden before calculating their indicators. They do not make that very clear, or give any details. They say [C, §4.1] that “Field-normalisation always occurs at the level of the 250 fields” (i.e. the 172 from the Science Citation Index Expanded, plus those from the arts and social sciences), so aggregation of fields does not obviate the need for each journal to be allocated to one and only one field.
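To see why the one-journal-one-field requirement matters, here is a minimal sketch, in Python, of a field-normalised indicator in the spirit of Leiden's ‘internationally standardised impact indicator’ [C, §2.3]. All numbers, the journal name and the journal-to-field map below are invented for illustration; the actual Leiden methodology also normalises by document type and publication year, among other refinements. The point is simply that the reference level, and hence the score, depends entirely on which single field each journal is assigned to.

```python
# Illustrative sketch only; all figures below are invented.
# A field-normalised citation indicator compares a group's actual citations with
# the 'field-specific international reference level': the world-average citations
# per paper in the field to which each paper's journal has been assigned.

def field_normalised_impact(papers, field_of_journal, field_world_average):
    """papers: list of (journal, citations) pairs for one research group.
    field_of_journal: map from each journal to exactly ONE field.
    field_world_average: map from field to world-average citations per paper."""
    actual = sum(citations for _, citations in papers)
    expected = sum(field_world_average[field_of_journal[journal]]
                   for journal, _ in papers)
    return actual / expected   # > 1 means above the world average for the field(s)

# A pure-mathematics paper in a hypothetical journal shared between the two fields:
papers = [("Shared Journal X", 3)]                                   # invented citation count
averages = {"Mathematics": 2.0, "Mathematics, Applied": 4.0}         # invented reference levels

print(field_normalised_impact(papers, {"Shared Journal X": "Mathematics"}, averages))           # 1.5
print(field_normalised_impact(papers, {"Shared Journal X": "Mathematics, Applied"}, averages))  # 0.75
```

On these invented figures the same paper scores well above or well below the world average purely according to the field its journal is allocated to.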

Presumably, the 55 journals common to the ‘Mathematics’ and ‘Mathematics, Applied’ fields are meant to be those carrying a mix of applied mathematics and non-applied (pure!) mathematics. A crude solution to the allocation problem would count all these journals under ‘Mathematics, Applied’, but a pure mathematics paper in one of them would then be compared for citation count with a field largely made up of applied mathematics, which is hardly appropriate. Were the 55 journals instead allocated to ‘Mathematics’, the applied papers in them would be compared with papers across that wider field, while the papers in the 119 journals (174 less the 55 shared) left in ‘Mathematics, Applied’ would be compared only with other applied papers, an equally arbitrary outcome.

Arbitrariness is inevitable, given the headings chosen to make up the various fields and the vagueness of the scope notes attempting to distinguish them. One notes that analysis does not appear as such in the scope notes of either ‘Mathematics’ or ‘Mathematics, Applied’, but differential equations appears under the latter. The Journal of Differential Equations nevertheless appears under ‘Mathematics’. Arbitrary, indeed perverse, allocation thus becomes inevitable. Other instances: Algebra Colloquium is allocated to ‘Mathematics, Applied’, as is Annals of Combinatorics. The allocation of each journal to one or more fields is done by anonymous Thomson employees, whose qualifications for the task are unknown. Thomson makes no claims on its website about the allocation process, being responsible to no-one for it.

A similarly vague boundary, between one field and another that might be considered a subset of it, occurs between ‘Biology’ and ‘Mathematical & Computational Biology’. The first of these is made up of 87 journals, the second of 27, and there are 10 journals common to both, including the IMA Journal of Mathematical Medicine and Biology. Which of these two fields a journal in the area of mathematical biology should be allocated to is completely unclear. Similar issues arise over the boundaries of the other fields listed above. My conclusion is that the fields for the mathematical sciences, and the allocation of journals among them, have so little validity as to cast serious doubt on the worth of Leiden's field-specific indicator that makes use of them. Unfortunately, this is their favoured indicator: “We regard the internationally standardised impact indicator as the most appropriate research performance indicator” [C, §2.3]. Leiden's other main indicator gives undue merit to groups who publish in unambitious journals: “although one research group might publish in prestigious (high impact) journals, and another group in more mediocre journals, the citation rate of articles published by both groups might be equal relative to the average citation rate of their respective journal sets. But one would generally argue that the first group evidently performs better than the second” [C, §2.3].
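A toy illustration of that weakness, with all numbers invented and the group labels purely hypothetical: a journal-normalised indicator divides a group's citations per paper by the average citation rate of the journals it publishes in, so the two groups below come out identical even though one is cited ten times as often.

```python
# Invented figures illustrating the quoted weakness of a journal-normalised indicator:
# each group's citations per paper divided by the average citation rate of its own
# journal set.  Both groups score 1.0, although group A's papers are cited ten times
# as often as group B's in absolute terms.
groups = {
    "A (prestigious journal)": {"cites_per_paper": 10.0, "journal_set_average": 10.0},
    "B (mediocre journal)":    {"cites_per_paper": 1.0,  "journal_set_average": 1.0},
}
for name, g in groups.items():
    print(name, g["cites_per_paper"] / g["journal_set_average"])   # 1.0 in both cases
```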

Before being allocated to one or more fields, a journal must first be selected for inclusion at all. Thomson's ‘Journal Selection Process’ is described at [J], and there is a link there to an essay on the process by James Testa, ‘Senior Director, Editorial Development & Publisher Relations, Thomson’, with acknowledgements to eight named colleagues. Again, the selection process is proprietary to Thomson, and the qualifications of those undertaking it are unknown.

The last few paragraphs suggest one general point, not specific to mathematics, that I hope the CMS response can take up: the citation studies planned by HEFCE as its main indicators depend on data from a private overseas corporation with no responsibility to the UK whatsoever. The way the data are organised by the Thomson Corporation (choice of fields, selection of journals for inclusion, allocation of journals to fields) has considerable prior consequences for what it is feasible to do with the data, and hence for what indicators HEFCE or its agents might wish to employ. For the research future of this country to be determined to a large extent in this way is absolutely craven, and seems to me simply shameful.


[A] Use of research metrics in the arts and humanities. Report of the expert group set up jointly by the AHRC and HEFCE, October 2006.

http://www.hefce.ac.uk/research/assessment/reform/expert.htm


[C] Scoping study on the use of bibliometric analysis to measure the quality of research in UK higher education institutions. Report to HEFCE by the Centre for Science and Technology Studies, Leiden University, November 2007.

http://www.hefce.ac.uk/pubs/rdreports/2007/rd18_07


[D] T.N. van Leeuwen, M.S. Visser, A.J. Nederhof, L.J. van Wurff, Bibliometric Study on Mathematics Research in the Netherlands, 1993-2002. Research Report to the Exact Science Division of NWO, February 2007.

http://www.nwo.nl/nwohome.nsf/pages/NWOA_753BTU


[F] Science Citation Index Expanded, Scope Notes 2007.

http://scientific.thomson.com/mjl/scope/scope_scie.html

(Care needed to avoid Scope Notes 2005, also on the web with a similar but slightly different URL.)


[H] Research Excellence Framework. Consultation on the assessment and funding of higher education research post 2008. HEFCE November 2007/34.

http://www.hefce.ac.uk/pubs/hefce/2007/07_34


[I] Bibliometric analysis of interdisciplinary research. Report to HEFCE by Evidence Ltd, November 2007.

http://www.hefce.ac.uk/pubs/rdreports/2007/rd19_07


[J] Journal Selection Process.

http://scientific.thomson.com/mjl/selection


Charles Goldie

December 2007

Reproduced here with permission.
