The aim of this page is to give links to some relevant web pages or other information.
Open access: question of quality, by Richard Poynder
Why does the impact factor persist?, by Richard Poynder
The International Mathematical Union report on citation statistics may be found at the link just given.
Metrics for the Research Assessment Formula, by Charles Goldie: a report for the LMS.
The H-index: this Wikipedia article gives a well-referenced survey of this proposed citation index for an individual researcher.
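As a concrete illustration (not taken from the Wikipedia article; the citation counts are invented), the h-index of a publication list can be computed in a few lines:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have at least h citations each."""
    # Sort citation counts in descending order, then take the last
    # rank at which the count still meets or exceeds the rank.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Invented example: five papers with these citation counts.
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations
```

Note how coarse the measure is: the paper with 10 citations and the one with 5 contribute to h exactly as much as the one with 4.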
There is material below on Thomson's ISI. Their evaluation of journals in our area for listing is under the control of a Mr Rodney Chonka, Editor, Physical, Chemical & Earth Sciences, Healthcare & Science, Thomson Reuters; requests for information on his qualifications have been ignored.
If you can look into the seeds of time,
And say which grain will grow and which will not,
Speak then to me, who neither beg nor fear
Your favours nor your hate.
Banquo in Macbeth, Act 1, Scene 3
`Hans Christian Andersen', a film with Danny Kaye, in which he sings a song: "Inchworm, inchworm, measuring the marigold."
Are there parallels with the general obsession with `measurement'? In connection with social services this has been called `Stalinist'!
Is the obsession with numbers related to the dominance of traditional set theory in mathematics? Category theorists know that abstract sets form a useful category, but less rich than many others, since discrete sets have no structure. Even categories of graphs (pdf link) have more interest.
From: "John Ewing" <email@example.com>
To: "'jim stasheff'" <firstname.lastname@example.org>
Subject: RE: citation indices
Date: Sat, 6 Dec 2008 11:09:27 -0500
The IMU-IMS-ICIAM report was widely circulated, pushed out to government and university bureaucracies, and republished in many places throughout the world. (The AMS Notices only wanted to publish the executive summary -- not my call, of course, because the Notices has an independent editor-in-chief.)
There is really not much more one can do other than educate people. None of these organizations, nor any societies, have the power to force people to use common sense.
I have talked about citation statistics to several groups, one of which was the NIH. Surprisingly, they were very receptive to the idea that one had to use citation statistics with care. People in the biological sciences, with a citation culture, seem to understand this.
On the other hand, the otherwise mild report has drawn intense negative reaction from many other places, with headlines to stories along the lines of "Confused mathematicians" and "Mish-Maths Statistics" and so forth. There is a huge enterprise behind citation statistics, and it includes a large part of the scientific community -- people who enthusiastically promote the use of citation data as a substitute for peer review. Parts of the mathematical sciences are included in this effort (most prominently, statistics itself). I've learned a lot from the reaction to the report, and in some ways I've learned more from the reaction than from the work itself.
The original hope was that a sensible report from a respected international body would help to persuade people to use common sense. In some places, that worked. In many others, it's clearly had little or no effect.
Of course, the misuse of statistics in a world gone mad to quantify every aspect of life extends far beyond citation statistics. I sometimes yearn for the better days of the past ... a sure sign I'm growing old.
Exec Dir, AMS
From: "R Brown" <email@example.com>
Sent: Saturday, December 06, 2008 3:19 PM
Subject: Science Citation Index
It should be emphasised that the ethics and practice of citation for an individual paper are unclear and probably untaught, except possibly through the admonishments of editors. Certainly scholarship in itself is generally unrewarded. What gets the most fame is a solution to a famous problem; and this is partly because the judgement of the achievement is easy, and could almost be set up as a computer program, as for tennis rankings. Opening new areas, or problem formulation, gives a more difficult task to assess: as they say, predicting the future has its problems. And it may take many years or decades for the true implications to sink in.
Should a citation be to the original paper, or to the most recent and possibly best exposition (the latest author has the advantage of someone else doing the spadework)? There is always an attraction in citing a famous author, which gives a certain cachet, even if the idea came from someone relatively unknown. There is the practice of changing terminology, so that the original paper looks old fashioned, and in any case dealt with oomla when `everyone' nowadays calls it bamloo.
How far back in the history of an idea or technique should citations go?
There is no established framework for good practice in citations dealing with all these matters.
Thus the idea of using citations as a basis for assessment of importance is hazardous in the extreme. This is emphasised in the IMU report.
From: "Garfield, Eugene" <firstname.lastname@example.org>
Subject: RE: Open access
Date: 02 May 2004 04:14
Eugene Garfield, PhD. email email@example.com
tel 215-243-2205 fax 215-387-1266
President, The Scientist www.the-scientist.com
Chairman Emeritus, ISI www.isinet.com
home page: www.eugenegarfield.org
Past President, American Society for Information Science and Technology
We have had students who have had excellent papers published in Theory and Application of Categories, but who have had to swap to the Indian J. of Mathematics, as otherwise their work would not be recognised by their countries' assessment bodies.
[Garfield, Eugene] I DON'T UNDERSTAND HOW THIS IS RELATED TO SCI.
The general resentment among academics is compounded by the high prices of many academic journals, often those highly rated by assessment boards, and the difficulty academic libraries have in affording them.
[Garfield, Eugene] THIS IS A SEPARATE MATTER THAT I CANNOT CONTROL.
I cannot have much sympathy with ISI's suggested difficulty in coping.
[Garfield, Eugene] ARE YOU SAYING THAT ISI MAKES TOO MUCH PROFIT? YOU SURELY CAN'T BELIEVE THEY HAVE AN UNLIMITED BUDGET. EVEN THE NATIONAL LIBRARY OF MEDICINE HAS A BUDGETARY LIMIT.
It surely cannot compare with the difficulties academics have in producing the papers on which the prosperity of the publishing world depends, including the assumed high rewards to their executives, in comparison with rewards to academics, particularly in the UK.
[Garfield, Eugene] I FAIL TO SEE WHAT THIS HAS TO DO WITH THE ISSUE OF JOURNAL COVERAGE.
There certainly needs to be more debate on the value of citation indices, and of inclusion in ISI lists. Governments find it simpler to assume that these have the most value.
[Garfield, Eugene] THERE IS NO SHORTAGE OF LITERATURE CRITICIZING CITATION ANALYSIS. YOU CAN SEE DOZENS OF THESE ON THE SIGMETRICS LISTSERV OF THE ASIS&T WHICH IS FREE.
I find that assessment bodies focus on rewarding `world class research', as that sounds good, but have few mechanisms for rewarding those who start new lines of research.
[Garfield, Eugene] ARE YOU SUGGESTING ISI COVER THE LOWEST IMPACT JOURNALS AND PASS LESS ATTENTION TO THE HIGHEST? (I failed to respond to this by suggesting the IMPACT FACTOR was nonsense!)
There is little attention paid to the implications of work of Thomas Kuhn.
[Garfield, Eugene] AGAIN THERE IS PLENTY OF LITERATURE ON KUHNIAN THEORY.
A problem is that Governments want some measure of success in order to dole out funds, and will not recognise that this is analogous to advice always to invest your money in those businesses which are most likely to succeed. Or for a rose grower to discard the seedlings which are unlikely to grow into champion roses. Diversity is not encouraged, as it costs; a narrow base is preferred, though it could also lead to sterility.
[Garfield, Eugene] AGAIN I FAIL TO SEE HOW ISI SHOULD DEAL WITH THIS. WE CANNOT CONTROL HOW THE DATA IS USED. I HAVE DONE MY BEST TO PREVENT ITS ABUSE BUT I HAVE NO POWER TO CONTROL IT.
If Governments behave in this way, it is not unreasonable for commercial organisations to make money out of it.
[Garfield, Eugene] SCI WAS NOT DESIGNED FOR THIS PURPOSE.
Of course you will understand, if you examine my web page, that this is written from the point of view of a confirmed maverick!
[Garfield, Eugene] I WILL TRY TO READ YOUR WEB PAGE.
[Garfield, Eugene] I HOPE YOU CAN BE MORE SPECIFIC IN LAYING OUT YOUR OBJECTIONS.
DID YOU EVER HAVE ANY CORRESPONDENCE WITH ISI ABOUT SPECIFIC TITLES YOU RECOMMENDED FOR COVERAGE?
Completeness of MR citations? I did a quick test on the Nonabelian tensor product of groups, initiated by the papers
1. R. Brown, J.-L. Loday, `Excision homotopique en basse dimension', C. R. Acad. Sci. Sér. I Math. Paris 298 (1984) 353-356.
2. R. Brown, J.-L. Loday, `Van Kampen theorems for diagrams of spaces', Topology 26 (1987) 311-335.
3. R. Brown, D. L. Johnson, E. F. Robertson, `Some computations of non-abelian tensor products of groups', J. Algebra 111 (1987) 177-202.
My list of references (http://www.bangor.ac.uk/~mas010/nonabtens.html) has 92 items after these three, of which 3 are by me and 6 are preprints or theses. Math Reviews gives 7 citations for reference 1, 35 citations for reference 2, and 15 for reference 3, with some overlap.
Giuseppe Longo informs me that Mathematical Structures in Computer Science (MSCS), a Cambridge UP journal, is soon going to publish an Editors' note: Bibliometrics and the curators of orthodoxy.
Impact factor is indeed a poor metric. Carl Bergstrom, a biologist, has implemented a better metric which is essentially the dominant eigenvector algorithm for the directed graph whose vertices are journals and whose edges are citations in (articles within) those journals. This gives you the "eigenfactor", and when you divide by the number of articles you get an "influence score." This takes care of the differences between citation culture in different fields as well as the rigging of the system by such journals as that silly chaos theory journal with the editor who cites himself hundreds of times. See the results for math journals at:
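The dominant-eigenvector idea can be sketched with a short power iteration on an invented three-journal citation matrix. This is a minimal illustration of the general technique, not Bergstrom's exact algorithm, which also applies damping and discards journal self-citations:

```python
def eigen_rank(cite, iters=200):
    """cite[i][j] = citations from journal i to journal j.
    Returns scores summing to 1: a journal scores highly when it is
    cited by journals that themselves score highly (the dominant left
    eigenvector of the row-stochastic citation matrix)."""
    n = len(cite)
    # Row-normalise so each journal distributes one unit of influence
    # across the journals it cites. (A journal citing nothing would
    # need special handling; none occurs in this toy example.)
    P = [[cite[i][j] / sum(cite[i]) for j in range(n)] for i in range(n)]
    v = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        # One power-iteration step: redistribute influence along citations.
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
        total = sum(v)
        v = [x / total for x in v]
    return v

# Invented citation counts between three journals.
scores = eigen_rank([[0, 2, 1],
                     [1, 0, 3],
                     [4, 1, 0]])
```

Dividing each journal's score by its number of articles then gives a per-article figure along the lines of the "influence score" mentioned above.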
On the issue of citation indices, the quality of the data probably has more influence than the precise algorithm used. Carl Bergstrom's eigenfactor metric may be a better algorithm but it still uses Thomson SCI data, which IMHO is seriously flawed.
I did a troll through their data a few months ago, when the 2007 SCI came out with a precipitous slide for G&T (MathSciNet's citation index does not show any such slide). The slide seems to be mostly because for the "impact factor" they only count 2007 citations to 2005 and 2006 issues of the journal, but for some reason they seem to have misclassified a large number of 2007 citations to G&T as 2008. I also checked a paper at random (a paper by Namazi) that they should have listed as a citing paper but didn't, and discovered that of the 18 bibitems in the paper, Thomson ISI had omitted one and garbled 10 of the rest sufficiently that only 7 were actually used. I don't know how general this sort of thing is, but this particular problem also struck me when I first investigated SCI data about ten or more years ago.
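The two-year impact factor computation described above is simple enough to write down, which makes the sensitivity to data errors easy to see (all figures below are invented):

```python
def impact_factor(cites_to, items_published, year):
    """Two-year impact factor for `year`: citations received in `year`
    to items from the two preceding years, divided by the number of
    citable items published in those two years.
    cites_to[y] = citations received in `year` to items published in y;
    items_published[y] = citable items published in year y."""
    cites = cites_to.get(year - 1, 0) + cites_to.get(year - 2, 0)
    items = items_published.get(year - 1, 0) + items_published.get(year - 2, 0)
    return cites / items if items else 0.0

# Invented example for a 2007 impact factor.
f = impact_factor({2005: 120, 2006: 90}, {2005: 60, 2006: 70}, 2007)
```

A 2007 citation misfiled as 2008 simply vanishes from the numerator, which is exactly the kind of misclassification blamed for the G&T slide.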
Even when they don't make errors, if a paper cited as a preprint then appears, it is not updated by Thomson (as opposed to MathSciNet), so many of the most relevant citations are ignored.
Another issue for the SCI may be that Thomson uses a quite limited list of citing journals for their SCI index. When I last looked at the list, Topology, which is basically dead, was still on the list and G&T, the leading journal in the field, was not. Any field-specific journal in a field poorly represented on their list will suffer.
Since ISI data is used by many third world universities (and many non third-world ones) for personnel purposes, perhaps more troubling than inaccurate SCI indices is Colin Rourke's observation that Thomson sometimes omits some of the authors of multi-author papers.
I suspect the Thomson data entry people may be paid on a piecework basis, encouraging shortcuts. And Thomson probably has no interest in checking the accuracy of their data, since doing so is not cost-effective; people buy the data anyway.
Mathematics is lucky to have MathSciNet as an alternative. So the bottom line would seem to be that if you are at an institution whose administration insists on using metrics, at least ensure that they are not using bad data, by pointing them to MathSciNet rather than Thomson for decisions that affect mathematics.
--walter neumann 16/12/08
But the following shows there is a possible blockage to getting on to the relevant part of Math Reviews.
to Jim Stasheff
Reference lists appear in MathSciNet from journals included in the Mathematical Reviews Citation Database. These journals are selected through an editorial process that includes approval by the Mathematical Reviews Editorial Committee (MREC). Citation Database reference list journals are reexamined each year in October. The list of current Citation Database reference list journals can be found at: http://www.ams.org/mrcitations/journal_list.html.
Please feel free to send me specific examples of journals you might have in mind and I will forward them to MREC for their consideration. Let me know if you have further questions.
416 Fourth St.
Ann Arbor, MI 48107
There is a blog on the rating of some Elsevier journals.
Back to Popularisation and teaching page
Back to Home Page
6 January, 2011