
The advance and decline of the impact factor

Sneha Kulkarni | January 8, 2014 | 92,088 views

The impact factor is one of the most discussed topics in the publishing and scientific community. Thomson Reuters assigns most journals a yearly impact factor (IF), which is the mean citation rate during that year for the papers published in that journal during the previous two years. Last month, Thomson Reuters released the much-awaited Journal Citation Reports (JCR), with the new journal impact factors for 2013. According to Thomson Reuters, the latest JCR features 10,853 journal listings across 232 disciplines and 83 countries. A total of 379 journals received their first impact factor. Additionally, 37 journals were suppressed due to questionable citation activity. Suppressed journals are re-evaluated after two years to determine whether they should be included in the JCR again.
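
As a rough sketch of the calculation implied by that definition (the symbols and the sample numbers below are purely illustrative, not figures from the JCR), the two-year impact factor for a year \(Y\) can be written as:

\[
\mathrm{IF}_{Y} \;=\; \frac{C_{Y}}{N_{Y-1} + N_{Y-2}}
\]

where \(C_{Y}\) is the number of citations received in year \(Y\) by items the journal published in years \(Y-1\) and \(Y-2\), and \(N_{Y-1}\) and \(N_{Y-2}\) are the numbers of citable items published in those two years. For example, a hypothetical journal that published 200 citable articles across 2011 and 2012, and whose articles from those two years were cited 800 times during 2013, would have a 2013 impact factor of \(800 / 200 = 4.0\).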

Here are some attention-grabbing highlights from the JCR: 66 journals were banned from the 2013 impact factor list because of excessive self-citation or ‘citation stacking,’ wherein journals cite themselves or each other excessively. According to Thomson Reuters, 55% of journals show an increase in impact factor this year, whereas 45% show a decrease. One journal whose impact factor declined is PLoS ONE, the world’s largest journal by number of papers published. PLoS ONE’s impact factor has dropped by 16%, from 4.4 in 2010 (when it published 6,749 articles) to 3.7 in 2012 (when it published 23,468 articles). Interestingly, while many in the publishing industry are discussing the details of the new JCR, some journals and researchers are not bothered by it. Why is this so?

Researchers and publishing professionals are well aware of the growing criticism of the impact factor. Last year, a new initiative was launched to make academic assessment less dependent on it. In December 2012, a group of editors and publishers of scholarly journals gathered at the Annual Meeting of The American Society for Cell Biology in San Francisco to discuss current issues related to how the quality of research output is evaluated and how scientific literature is cited. They also wanted to find ways to ensure that a journal’s quality matched the impact of its individual articles. At this meeting, they came up with a set of recommendations that is now referred to as the San Francisco Declaration on Research Assessment (DORA). These recommendations focus mainly on practices relating to research articles published in peer-reviewed journals and seek to improve the way in which the quality of research output is evaluated. The themes addressed in DORA are as follows:

  • The need to eliminate the use of journal-based metrics, such as journal impact factors, in funding, appointment, and promotion considerations
  • The need to assess research on its own merits rather than on the basis of the journal in which the research is published
  • The need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).

Although DORA does not propose methods to achieve all of this, it tries to set out clearly the problems with the impact factor and provide a path to overcoming them. It discourages the use of impact factors to measure the impact of individual research articles; assess a researcher’s scientific contribution; or decide on a researcher’s promotion, hiring, and funding. DORA suggests the use of other metrics that provide a clearer picture of a journal’s performance, such as the 5-year impact factor, EigenFactor, SCImago, the h-index, and editorial and publication times.

DORA has received a positive response from many in the academic and scientific community worldwide. More than 8,000 individuals and 300 organizations have signed it. Of the signatories, 6% are in the humanities and 94% in scientific disciplines; 46.8% are from Europe, 8.9% from South America, and 5.1% from Asia. However, some critics have pointed out that DORA criticizes the impact factor harshly without providing a better alternative for assessing the impact of journals and authors. They feel that impact factors have remained influential for so long because of their reliability, which DORA does not acknowledge. Thomson Reuters has released a statement in response to DORA. While Thomson Reuters accepts that the impact factor does not, and is not meant to, measure the quality of individual articles in a journal, it says that the impact factor does correlate with a journal’s reputation in its field.

As authors and researchers, do you think DORA would bring about a change in the scientific world? Would you support it? Please share your views.

Also read about why the journal impact factor should not be used to evaluate research impact and other prevailing debates on the impact factor.

 
