As mentioned in my previous post, the amount of science being published has increased exponentially over the last couple of decades. But how much impact is all this work having, and how is that impact measured?
Traditionally, the impact factor (IF) of the journal that publishes an article has been used as a proxy for the impact that article is likely to have. Impact factors are calculated by dividing the total number of times articles in a journal are cited by the number of articles the journal published. The problem is that, because the IF is an average calculated at the journal level, even negative citations, such as those highlighting flaws in a piece of research, increase it. This is just one defect of the impact factor metric. Another is that some articles in a journal will inevitably be cited far more often than others, and therefore have more impact.
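To make the averaging problem concrete, here is a minimal sketch (the numbers and function name are my own, invented for illustration):

```python
# Sketch of the impact factor as a plain average (illustrative numbers).
# Every citation counts the same, whether the citing paper praises the
# work or points out its flaws.

def impact_factor(total_citations, articles_published):
    """Citations per article; an average, so it hides how unevenly
    citations are spread across individual articles."""
    return total_citations / articles_published

# A journal where one article attracts 90 of the 100 citations gets the
# same IF as one whose 100 citations are spread evenly over 50 articles.
print(impact_factor(100, 50))  # 2.0 in both cases
```

The same value of 2.0 thus describes two very different journals, which is exactly the article-level information the average throws away.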
Is it fair that highly cited articles are credited with the same impact factor as articles that perform poorly, simply because they appear in the same journal? Probably not! A further catch of the current system is that only citations in other scientific papers count towards a journal's impact factor. It would be naïve to think that the impact of science can only be measured by how many times the work is cited in academic papers. Tweets and blogs now rip apart scientific papers within days of publication, whereas it may take months or years before the work is cited, for good or bad reasons, in fresh original research articles.
The question, therefore, is whether publishers should react to the tweets and blogs that comment on papers, or simply wait for other academic papers to support or contradict them. The ideal scenario would be to recognise incorrect content and remove it from the journal website, or retract it, before more readers read and cite the original work. I will discuss the issues with retractions in more detail in my next blog post, but the main complications publishers would face here are deciding which tweets to trust, and the laborious task of keeping track of every tweet about every article.
Before a journal receives an IF, it needs to be tracked by Thomson Reuters (ISI) for three consecutive years, the idea being to give the articles published in the journal time to accumulate enough citations for a reasonable average. The IF is also always at least a year out of date: citations received in 2012 by articles published in 2010 and 2011 determine the impact factor the journal gets in 2013. This seems like a rather long-winded process for a value that is vague in what it aims to achieve. The problem becomes greater when scientific research is judged on the IF of the journals it is published in. In China, for instance, it is common for researchers to be judged on the journals they publish in when applying for grants and senior academic positions.
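The two-year window behind this lag can be sketched as follows (all citation and article counts here are invented for illustration):

```python
# Citations received in 2012, grouped by the cited article's publication
# year, and the number of citable articles published in each of those
# years (hypothetical counts).
citations_in_2012 = {2010: 240, 2011: 180}
citable_items = {2010: 100, 2011: 110}

# The 2012 impact factor, which the journal only receives the following
# year, once the 2012 citations have been counted.
if_2012 = sum(citations_in_2012.values()) / sum(citable_items.values())
print(if_2012)  # 2.0
```

Nothing published or cited in 2012 itself affects this number until a later year's calculation, which is why the figure on a journal's website always trails the journal's current performance.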
In 2010, altmetrics (article-level metrics) came onto the scene with the aim of measuring impact at the article level rather than the journal level. Another key difference from the IF is that, in addition to academic citations, altmetrics count tweets, Facebook 'likes', blog posts, downloads, bookmarks and other media towards an article's score. The altmetric value would, however, need to be used carefully: like the IF, it does not distinguish between positive and negative impact, so a higher score is not necessarily a good thing. Controversial articles, for example, are likely to score highly simply because more people talk about them.
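As a rough sketch of the idea, an article-level score can be built as a weighted sum over mention sources. The weights below are entirely made up for illustration; real providers such as Altmetric use their own proprietary weightings:

```python
# Toy article-level score: a weighted sum over mention sources.
# These weights are invented for illustration, not any provider's real ones.
WEIGHTS = {"citations": 5.0, "blogs": 3.0, "tweets": 1.0,
           "bookmarks": 0.5, "facebook_likes": 0.25, "downloads": 0.1}

def article_score(counts):
    """Weighted sum of the counts supplied. Note that, like the IF,
    this cannot tell praise from criticism: a storm of critical tweets
    raises the score just as much as enthusiastic ones."""
    return sum(WEIGHTS[source] * n for source, n in counts.items())

print(article_score({"citations": 4, "blogs": 2, "tweets": 30}))  # 56.0
```

A heavily criticised paper with 30 tweets scores well above a quietly solid one with none, which is the caveat described above.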
A recent study has shown that the number of tweets about an article correlates with the number of citations the article eventually receives. Twitter may therefore be a way of predicting the scientific impact of articles at a much earlier stage, rather than waiting months for citations to accumulate. Following on from this, the world's first twitter journal, the 'Twitter Journal of Academic Research', was launched yesterday; you can follow it @TwournalOf. The journal clearly has many issues to consider, such as how researchers are supposed to condense work they have been conducting for years, and would usually describe in a 5,000-word article, into 140 characters. Other obvious problems concern peer review: how would it be done reliably? Can tweets, made in so open a manner, be trusted as a basis for judging work? Would reviewers be influenced by previous tweets on the same article? Further problems may arise around copyright and intellectual property.
One possible route for the twitter journal would be to become part of the chain of events leading up to publication. Currently a paper gets tweeted about after it has been reviewed and published, but the twitter journal could reverse that process, so the paper gets tweeted about before it is published. Of course, this would bypass the embargo periods usually set by journals such as Nature and Science, which underlines the essential point that the twitter journal will need backing from publishers if it is to be incorporated into the publishing process. Another interesting initiative is Mozilla Open Badges, through which individuals can gain digital badges for the skills they have. This is a useful and interactive way of showing skills that may not be easy to incorporate into a CV. The applications of Open Badges have so far been largely educational; however, it is not hard to imagine an application related to science publishing emerging sooner or later.
Social media metrics have developed significantly since 2010, and four main companies now work with publishers to provide article-level metrics to readers: Altmetric, ImpactStory, PLOS Article-Level Metrics and Plum Analytics. Some of the big names in the industry, including Nature Publishing Group, Cell Press, PLOS and BioMed Central Ltd, have already jumped on the altmetrics bandwagon. In the years to come, as altmetrics become more common amongst publishers worldwide, the IF may well be scrapped for its multiple flaws. The twitter journal model needs careful analysis, and clear policies need to be set up, before publishers and researchers will take it seriously.