
Why you should not use the journal impact factor to evaluate research

Editage Insights | November 4, 2013
Series: Impact factor
Eugene Garfield [1], the founder of the Journal Impact Factor (JIF), originally designed it as a means to help choose journals. Unfortunately, the JIF is now often used inappropriately, for example, to evaluate the influence of individual pieces of research or even the prestige of researchers. The metric has recently come under considerable criticism owing to its inherent limitations and misuse [2-4].

The impact factor of a journal is a simple average: the citations received in a given year by the articles the journal published in the two preceding years, divided by the number of citable items it published in those years [5]. A previous article, "The impact factor and other measures of journal prestige," touched upon its calculation and features. This article delves a little deeper into the fallacies of the impact factor and the points you should consider when using it.
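In symbols, for a given year Y the calculation is as follows; the journal and all figures in the worked example are hypothetical:

\[
\mathrm{JIF}_{Y} = \frac{C_{Y}}{N_{Y-1} + N_{Y-2}}
\]

where \(C_{Y}\) is the number of citations received in year \(Y\) by items the journal published in the two preceding years, and \(N_{Y-1}\) and \(N_{Y-2}\) are the numbers of citable items published in those years. For example, a journal that published 100 citable items in 2011 and 120 in 2012, which together received 440 citations in 2013, would have a 2013 impact factor of 440 / (100 + 120) = 2.0.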

How the JIF should be used:

  • As a measure of journal prestige and impact
  • To compare the influence of journals within a specific subject area
  • By librarians, to manage institutional subscriptions
  • By researchers, to identify prestigious field-specific journals to follow and possibly submit to
  • By journals, to compare expected and actual citation frequency and to benchmark themselves against other journals in their field
  • By publishers, to conduct market research [6]

How the JIF should not be used:

  • To evaluate the impact of individual articles and researchers
  • To compare journals from different disciplines
  • By funding agencies, as a basis for grant allocation
  • By authors, as the sole criterion for journal selection
  • By hiring and promotion committees, as a basis for predicting a researcher’s standing
  • By authors, to compare themselves with one another

Characteristics of the JIF 

Listed below are some features and shortcomings of the JIF that should be well understood in order to prevent misuse of this metric:

 

  • The JIF is a measure of journal quality, not article quality. The JIF counts the citations accrued by all the articles in a journal, not by any individual article. Echoing the well-known 80-20 rule, the top 20% of articles in a journal receive 80% of the journal’s total citations; this holds true even for the most reputed journals, like Nature [8]. So an article published in a journal with a high JIF has not necessarily had high impact: it is entirely possible that the article itself has never been cited. Conversely, a few highly cited papers within a particular year can produce anomalous swings in a journal’s impact factor over time [9]. (A short numerical sketch after this list illustrates how such skewed citation counts distort the average.)
  • Only citations within a two-year time frame are considered. The JIF for a given year counts only the citations made that year to articles the journal published in the two preceding years. However, different fields exhibit variable citation patterns: some, such as the health sciences, receive most of their citations soon after publication, while others, such as the social sciences, garner most of their citations outside the two-year window [11]. Thus, the true impact of papers cited beyond that window goes unnoticed.
  • The nature of a citation is ignored. As long as a paper in a journal has been cited, the citation contributes to the journal’s impact factor, regardless of whether the cited paper is being credited or criticized [8,11]. This means that papers being refuted, or held up as examples of weak studies, can also inflate a journal’s impact factor. In fact, even retracted papers can increase the impact factor because, unfortunately, citations to them cannot be retracted.
  • Only journals indexed in the source database are ranked. Thomson Reuters’ Web of Science, the source database for the calculation of the JIF, contains more than 12,000 titles. Although this figure is reasonably large and the list is updated annually, several journals, especially those not published in English, are left out. Thus, journals not indexed in Web of Science have no impact factor and cannot be compared with indexed journals [12].
  • The JIF varies with the article types a journal publishes. Review articles are generally cited more often than other types of articles because they compile earlier research. Thus, journals that publish review articles tend to have higher impact factors [13].
  • The JIF is discipline dependent. Because citation patterns vary widely across disciplines, the JIF should only be used to compare journals within a discipline, never across disciplines [14]. For example, even the best journals in mathematics tend to have low impact factors, whereas molecular biology journals have high ones.
  • The data used for JIF calculations are not publicly available. The JIF is a product of Thomson Reuters, a private company that is not obliged to disclose its underlying data and analytical methods. In general, other groups have not been able to predict or replicate the impact factor values that Thomson Reuters releases [8].
  • The JIF can be manipulated. Editors can manipulate their journal’s impact factor in various ways. To increase the JIF, they may publish more review articles, which attract many citations, and stop publishing case reports, which are infrequently cited. Worse still, cases have come to light wherein journal editors have returned papers to authors, asking that more citations to articles within the same journal (self-citations) be added [15].
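To see concretely why an average over skewed citation counts can mislead, here is a minimal sketch (in Python, with made-up per-article citation counts) contrasting the mean, which is what a JIF-style average reflects, with the median, which is closer to a typical article:

```python
from statistics import mean, median

# Hypothetical citation counts for the articles in one journal's two-year
# window; a few heavily cited papers dominate, echoing the 80-20 pattern.
citations = [95, 40, 12, 3, 2, 1, 1, 0, 0, 0]

print(f"mean   = {mean(citations):.1f}")  # 15.4 -- the JIF-style average
print(f"median = {median(citations)}")    # 1.5  -- a more typical article
```

Here two of the ten papers account for roughly 88% of all citations, so the average says little about how often a randomly chosen article in the journal is cited.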

These are some of the reasons you should not look at the JIF as a measure of research quality. It is important to explore other, more relevant indicators for this purpose, possibly in combination. If the JIF is used by a grant-funding body or your university, it might be a good idea to also list your h-index and the citation counts of your individual articles alongside the impact factors of the journals you have published in. This will help strengthen your case for the quality and impact of your papers, regardless of the prestige of those journals.
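For reference, an author’s h-index is the largest number h such that h of their papers have each been cited at least h times. Below is a minimal sketch of the computation; the function name and the citation counts are made up for illustration:

```python
def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations."""
    h = 0
    # Walk papers from most to least cited; rank r is the r-th best paper.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Five papers cited 10, 8, 5, 3, and 0 times yield an h-index of 3:
# three papers have >= 3 citations each, but there is no fourth with >= 4.
print(h_index([10, 8, 5, 3, 0]))  # -> 3
```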

 

Concluding remarks

Finally, remember that the nature of research is such that its impact may not be immediately apparent to the scientific community. Some of the most noteworthy scientific discoveries in history were recognized only years later, sometimes even after the lifetimes of the researchers who made them. No numerical metric can substitute for actually reading a paper, or trying to replicate its experiments, to determine its true worth.

 
References
  1. Garfield E (2006). The history and meaning of the journal impact factor. The Journal of the American Medical Association, 295: 90-93.
  2. Brumback RA (2009). Impact factor wars: Episode V - The Empire Strikes Back. Journal of Child Neurology, 24: 260-262.
  3. Brischoux F and Cook T (2009). Juniors seek an end to the impact factor race. BioScience, 59: 238-239.
  4. Rossner M, Van Epps H, and Hill E (2007). Show me the data. The Journal of Cell Biology, 179: 1091-1092.
  5. Adler R, Ewing J, and Taylor P (2008). Citation statistics. Joint Committee on Quantitative Assessment of Research, International Mathematical Union. [http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf]
  6. Garfield E (2005). The agony and the ecstasy: the history and meaning of the journal impact factor. Presented at the International Congress on Peer Review and Biomedical Publication. [http://garfield.library.upenn.edu/papers/jifchicago2005.pdf]
  7. Saha S, Saint S, and Christakis DA (2003). Impact factor: a valid measure of journal quality? Journal of the Medical Library Association, 91: 42-46.
  8. Neylon C and Wu S (2009). Article-level metrics and the evolution of scientific impact. PLoS Biology, 7: 1-6.
  9. Craig I (2007). Why are some journal impact factors anomalous? Publishing News, Wiley-Blackwell. [http://blogs.wiley.com/publishingnews/2007/02/27/why-are-some-journal-impact-factors-anomalous/]
  10. EASE. EASE statement on inappropriate use of impact factors. [http://www.ease.org.uk/publications/impact-factor-statement]
  11. West R and Stenius K. To cite or not to cite? Use and abuse of citations. In: Babor TF, Stenius K, and Savva S (eds), Publishing Addiction Science: A Guide for the Perplexed. International Society of Addiction Journal Editors. [http://www.who.int/substance_abuse/publications/publishing_addiction_science_chapter4.pdf]
  12. Katchburian E (2008). Publish or perish: a provocation. Sao Paulo Medical Journal, 202-203.
  13. The PLoS Medicine Editors (2006). The impact factor game. PLoS Medicine, 3(6): e291.
  14. Smith L (1981). Citation analysis. Library Trends, 30: 83-106.
  15. Sevinc A (2004). Manipulating impact factor: an unethical issue or an editor’s choice? Swiss Medical Weekly, 134: 410.

 
