If only there was a way to tell if articles are worth the time you put into reading them…

Updated: Oct 19, 2019

Apparently there is a whole evaluation system that allows you to see how important a scientific journal is, before you even read the articles!

Wait, what?!

Now calm down. Because I’m going to explain to you exactly why I don’t think this is actually going to help you work out which papers are worth reading before putting in that time investment.

I was provided with an excellent (and by excellent, I mean easy to follow and reasonably concise) link to an article titled "Impact factors and their significance; overrated or misused?" (Scully & Lodge, 2005) to glance over as essential reading for my MSc Animal Welfare, Science, Ethics and Law. I quickly found out that this system of evaluating research papers was not only widespread and well known, but surrounded in controversy. *rubs hands together* Controversial topics are my favourite type!

Go on then, what is an impact factor?

So, Scully and Lodge (2005) have summarised key points beautifully and I’m here to paraphrase to save you even more time. You’re welcome.

As mentioned at the start, impact factors (IFs) are measures applied to journals that allow the reader to see how many times the ‘average article’ has been cited in a given period. In this sense, the more times it is cited, the higher the IF, the better and more important it must be, right? Well before I answer that question, I’m going to give you a bit more information on how this all works.

Calculating an impact factor – skip if you hate simple algebra.

IFs are calculated by looking at the number of times an article is cited, typically within a year. You can work it out by taking the number of citations in year 3 and dividing by the number of articles published in years 1 and 2. Well, I say you; there are actually computer algorithms and systems in place that do this automatically.

So, to calculate the IF, you need to divide the number of times the journal is actually cited (a) by the number of articles published in the last two years (b).

Want an example? Go on then.

Journal X and journal Y both publish about research in human-animal interactions.

So if journal X published 100 articles available for citation in 2011, 70 articles available for citation in 2012, and there were 50 citations from the journal in 2013, the IF will be a (50) divided by b (100 + 70).

a/b = IF, 50/(100+70) = 0.29

The impact factor for journal X in 2013 is 0.29.

Journal Y published 180 articles in 2011, and 85 articles in 2012. Journal Y had 45 citations in 2013. a/b = IF

IF for journal Y = 45 / (180 + 85) = 0.17
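The two worked examples above can be sketched in a few lines of Python. This is just a hypothetical helper to make the arithmetic explicit, not any official tool that journals actually use:

```python
def impact_factor(citations, articles_year1, articles_year2):
    """Two-year impact factor: citations received in year 3,
    divided by the number of articles published in years 1 and 2."""
    return citations / (articles_year1 + articles_year2)

# Journal X: 100 articles in 2011, 70 in 2012, 50 citations in 2013
print(round(impact_factor(50, 100, 70), 2))  # 0.29

# Journal Y: 180 articles in 2011, 85 in 2012, 45 citations in 2013
print(round(impact_factor(45, 180, 85), 2))  # 0.17
```

Same numbers, same answers: 0.29 for journal X and 0.17 for journal Y.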

Journal X has the higher impact factor, making it the more useful journal, and should be held in higher esteem among the anthrozoological research community, right? WRONG.

So why am I not convinced?

First of all, this system takes no account of how much impact a publication actually has on a community. You can cite something over and over to make a point, but you can't tell whether anyone is actually influenced by it. This works the other way too; a really important, heavily factual and groundbreaking report may be used everywhere by leading professionals, but not necessarily cited over and over in future papers.

Another point: if you are working out IFs, you are effectively evaluating relative importance, compared to other journals in the field, and this can be misleading. Two journals in the education field can be compared when one is about teaching maths in sixth form colleges and the other is about teaching vocational subjects in sixth form colleges. These are superficially similar topics, but with very different subject matter, research and regulatory bodies behind them.

Furthermore, availability will have an impact on the impact factor of journals. A smaller, lesser-known journal containing important research with solid methods may not be freely available. It is therefore going to be cited far less than a vague, generalised article splashed all over the internet for free. Does this automatically make the free one better? No. Does it increase the IF? Yes.

My final, personal issue with impact factors lies with my personal vendetta against computers, automatic systems and the inevitable human apocalypse when we perfect artificial intelligence and the robots take over. Well, maybe not the last one, but without a shadow of a doubt, no one can deny that computers do not work 100% perfectly 100% of the time.

Now, regardless of whether it's a super clever computer algorithm or a very patient person endlessly scrolling recent publications to highlight citations, there will be errors, there will be missed citations, and changes to titles, names etc. are not going to be picked up. This ultimately means that IFs are never going to be 100% accurate, and that some journal IFs are more accurate than others, with no way of knowing by how much or which ones.

Now, I can see the appeal of trying to highlight the importance of journals overall, and IFs are clearly used in marketing (WOW! Look at our IF! We are head and shoulders above other journals in our field!) as well as for removing bias towards better-known and larger journals (all citations are treated equally, regardless of their origins). But I feel I've argued why this is a controversial area, and why I can't always trust a method 'just because' it's widespread, well known and a long-standing adopted system.

I can also apply that same principle to many UK animal welfare practices, but that’s a post for another day.

Please check the article out yourselves! I have based all of my writing in this post on this exact article, as cited and mentioned throughout:

Scully, C. & Lodge, H. (2005) Impact factors and their significance; overrated or misused? British Dental Journal, 198, 391-393. Available at: http://www.nature.com/bdj/journal/v198/n7/full/4812185a.html?cookies=accepted [Accessed: 7th October 2016].

Also, although I did not use the information directly, and therefore have not cited them, I read the following articles to help get a better understanding of impact factors.

Garfield, E. (2005) The Agony and the Ecstasy – The History and Meaning of the Journal Impact Factor. International Congress on Peer Review and Biomedical Publication, Chicago, September 16, 2005. Philadelphia, Thomson ISI. Available at: http://garfield.library.upenn.edu/papers/jifchicago2005.pdf [Accessed: 7th October 2016].

Saha, S., Saint, S. & Christakis, D. A. (2003) Impact Factor: a valid measure of journal quality? Journal of the Medical Library Association, 91(1), 42–46. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC141186/ [Accessed: 7th October 2016].

#masters #journals #research #scientificwriting #explained #undergraduate #impactfactors #scienceblogger #referenced #controversial #scienceblog