Wang Hansen's blog http://blog.sciencenet.cn/u/王汉森 Welcome to my blog!

Blog Post

How to Understand the Journal Impact Factor

4,572 reads | 2012-6-25 02:19 | Personal category: Science & Technology Window | System category: Paper Sharing | Keywords: scholar

Recently there have been quite a few articles online about journal impact factors. One of them, from the Scientific American site, struck me as well written, fairly objective, and comprehensive. It is titled Understanding the Journal Impact Factor, and its author, Hadas Shema, is an information science graduate student from Israel; so far only the first part has been published. I am reposting it here to share with everyone.

 

http://blogs.scientificamerican.com/information-culture/2012/05/07/understanding-the-journal-impact-factor-part-one/

 

Understanding the Journal Impact Factor – Part One

 

The journals in which scientists publish can make or break their careers. A scientist must publish in “leading” journals with a high Journal Impact Factor (JIF), a number you can see displayed proudly on high-impact journals’ websites. The JIF has become popular partly because it gives an “objective” measure of a journal’s quality and partly because it’s a neat little number that is relatively easy to understand. It’s widely used by academic librarians, authors, readers and promotion committees.

Raw citation counts emerged in the 1920s and were used mainly by science librarians who wanted to save money and shelf space by discovering which journals made the best investment in each field. The method had modest success, but it didn’t gain much momentum until the sixties, perhaps because those librarians had to count citations by hand.

In 1955, Eugene Garfield published a paper in Science in which he discussed, for the first time, the idea of an Impact Factor based on citations. In 1964, he and his partners published the Science Citation Index (SCI). (This is, of course, a very short, simplistic account of events; Paul Wouters’ PhD thesis, The Citation Culture, gives an excellent, detailed account of the creation of the SCI.) Around that time, Irving H. Sherman and Garfield created the JIF, intending to use it to select journals for the SCI. The SCI was eventually bought by the Thomson Reuters giant (TR).

To calculate the JIF, one takes the overall number of citations the journal received in a given year to items from the two previous years and divides it by the number of items published in those two years that the Journal Citation Reports (JCR) considers “citable.” TR offers 5-year JIFs as well, but the 2-year JIF is the decisive one.

Example:
JIF= (2011 citations to 2010+2009 articles)/(no. of “citable” articles published in 2009+2010)
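The arithmetic above can be sketched in a few lines of Python; only the formula comes from the text, and the numbers below are made up purely for illustration:

```python
def journal_impact_factor(citations, citable_items):
    """2-year JIF: citations received in year Y to items published in
    years Y-1 and Y-2, divided by the number of "citable" items the
    JCR counted for those two years."""
    return citations / citable_items

# Hypothetical journal: 1,200 citations in 2011 to its 2009+2010
# articles, and 300 "citable" items published in 2009+2010.
print(journal_impact_factor(1200, 300))  # 4.0
```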

The JIF wasn’t meant for comparisons across disciplines, because every discipline has a different size and different citation behavior (e.g., mathematicians tend to cite less, biologists to cite more). The journal Cell has a 2010 JIF of 32.406, while Acta Mathematica, the journal with the highest 2010 JIF in the Mathematics category, has a JIF of 4.864.

Due to limited resources, the JCR covers about 8,000 science and technology journals and about 2,650 social science journals. It’s a large database, but it still covers only a fraction of the world’s research journals. If a journal is not in the JCR database, not only are all citations to it lost, but so are all the citations its articles give to journals that are in the database. Another coverage problem is that, having been created in the US, the JCR has an American, English-language bias.

Manipulating the impact factor

Given the importance of the IF for prestige and subscriptions, it was only to be expected that journals would try to influence it.

In 1997, the journal Leukemia was caught red-handed trying to boost its JIF by asking authors to cite more Leukemia articles. This is a very crude (but, had they not been caught, very effective) way of increasing the JIF. Journal self-citations can be completely legitimate – if one publishes in a certain journal, it makes sense that said journal has published other articles on the same subject – but when done deliberately it’s less than kosher, and it messes with the data (if you want to stay on an information scientist’s good side, do NOT mess with the data!). Part of the reason everyone has been trying to find alternatives to the JIF is that it’s so susceptible to manipulation (that, and the fact that finding alternatives has become our equivalent of sport).

A subtler method of improving the JIF is to eliminate sections of the journal that publish items the JCR counts as “citable” but that are rarely cited. This way the number of citations (the numerator) stays almost the same, while the number of citable items (the denominator) goes down considerably. In 2010, the journal manager and the chair of the journal’s steering committee of The Canadian Field-Naturalist sent a letter to Nature titled “Don’t dismiss journals with low impact factor,” in which they detailed how the journal’s refusal to eliminate a rarely cited ‘Notes’ section lowered its JIF. Editors can also publish more review articles, which are better cited, or longer articles, which are usually better cited as well; if the journal is online-only, they won’t even have to worry about the thickness of the issues. The JIF doesn’t count letters, editorials, etc. as citable items, but citations to them are still counted as part of the journal’s overall citation count, while the number of the journal’s citable items remains the same.
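A toy calculation (with hypothetical numbers, not from the article) shows why cutting a rarely cited section works: the denominator shrinks far faster than the numerator:

```python
# Hypothetical journal: 100 regular articles plus 60 rarely cited
# "Notes" items, which together drew 500 citations, of which only
# 10 went to the Notes.
citations, articles = 500, 100
notes, citations_to_notes = 60, 10

jif_with_notes = citations / (articles + notes)                  # 500/160
jif_without_notes = (citations - citations_to_notes) / articles  # 490/100

print(jif_with_notes)     # 3.125
print(jif_without_notes)  # 4.9
```

Dropping the Notes section costs only 10 citations but removes 60 citable items, so the JIF jumps from about 3.1 to 4.9.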

The JIF can also rise without deliberate manipulation. The journal Acta Crystallographica Section A had rather modest IFs until 2009, when its IF skyrocketed to 49.926, and climbed even higher in 2010 (54.333). For comparison, Nature’s 2010 IF is 36.104. The rise followed the publication, in January 2008, of a paper called “A short history of SHELX,” which has been cited 26,281 times since (all data are from Web of Knowledge, retrieved in May 2012). The article’s abstract says: “This paper could serve as a general literature citation when one or more of the open-source SHELX programs (and the Bruker AXS version SHELXTL) are employed in the course of a crystal-structure determination.”

[Figure: Acta Crystallographica Section A Journal Impact Factor, years 2006–2010]

All this doesn’t mean that the JIF isn’t a valid index, or that it should be discarded, but it does mean it must be used with caution and in combination with other indices, as well as peer review.

Note: I assumed the writers of The Canadian Field-Naturalist letter were the journal’s editors, which turned out to be a wrong assumption (see the comment below by Jay Fitzsimmons). I have fixed the post accordingly.

Note 2: My professor, Judit Bar-Ilan, read through the post and noted two mistakes. First, the JIF is, of course, calculated by dividing the citations received in a given year to the two previous years’ articles by the number of citable items published in those two years, and not the way I originally wrote it. Second, while the first volumes of the SCI contained citations to 1961 articles, they were published in 1964, not 1961. I apologize for the mistakes.

https://m.sciencenet.cn/blog-5414-585516.html
