Twimpact & Altmetrics


As mentioned in my previous post, the amount of science being published has increased exponentially in the last couple of decades. But how much impact is this having and how is this measured?

Traditionally, the impact factor (IF) of the journal that publishes an article has served as a proxy for the impact the article itself is likely to have. A journal’s impact factor for a given year is calculated by dividing the number of citations received that year by articles the journal published in the previous two years by the number of citable articles it published in those two years. The problem is that, because the IF is an average calculated at the journal level, even negative citations, such as papers highlighting flaws in the research, increase it. That is just one defect of the impact factor metric. Another is that citations are unevenly distributed: some articles in a journal are cited far more often than others, and therefore have far more impact.
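
To make the calculation concrete, here is a minimal sketch in Python; the citation and article counts are invented purely for illustration.

```python
# Two-year impact factor: citations received in year Y by articles
# published in Y-1 and Y-2, divided by citable articles from Y-1 and Y-2.
# All counts below are made up for illustration.

citations_received_2012 = {
    2010: 1200,  # 2012 citations to articles the journal published in 2010
    2011: 950,   # 2012 citations to articles the journal published in 2011
}
citable_articles = {
    2010: 310,   # citable articles published in 2010
    2011: 290,   # citable articles published in 2011
}

if_2012 = sum(citations_received_2012.values()) / sum(citable_articles.values())
print(f"2012 impact factor: {if_2012:.2f}")  # 2150 / 600 = 3.58
```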

Is it right that heavily cited articles are credited with the same impact factor as articles that do far less well, simply because they appear in the same journal? Probably not! A further catch of the current system is that only citations in other scientific papers count towards a journal’s impact factor. It would be naïve to think that the impact of science can be measured only by how many times a piece of work is cited in academic papers. Tweets and blogs now pull apart scientific papers within days of publication, for good or bad reasons, whereas the same work may take months or years to be cited in fresh original research articles.

The question is therefore whether publishers should react to the tweets and blogs that comment on papers, or simply wait for other academic papers to support or contradict them. The ideal scenario would be to recognise incorrect content and remove it from the journal website, or retract it, before more readers read and cite the original work. I will discuss the issues with retractions in more detail in my next blog post, but the main complications publishers would face here are deciding which tweets to trust and the laborious task of keeping track of every tweet about every article.

In order for a journal to receive its first IF, it needs to be tracked for three consecutive years by Thomson Reuters (ISI). The idea is to give the articles published in the journal time to accumulate enough citations for a reasonable average. The IF is also at least a year out of date by the time it appears on a journal’s website: the citations received in 2012 by articles published in 2010 and 2011 determine the impact factor the journal gets around mid-2013. This seems like a rather long-winded process for a value that is vague about what it actually measures. The problem becomes greater when scientific research is judged on the IF of the journals it is published in. In China, for instance, it is common for researchers to be judged on the journals they publish in when applying for grants and senior academic positions.

In 2010, altmetrics (article-level metrics) came on the scene with the aim of measuring impact at the article level rather than the journal level. Another key difference from the IF is that altmetrics count tweets, Facebook ‘likes’, blog posts, downloads, bookmarks and other media towards an article’s score, in addition to academic citations. The altmetric score, however, needs to be used carefully: like the IF, it does not distinguish between positive and negative impact, so a higher score is not necessarily a good thing. For example, controversial articles are likely to score higher simply because more people talk about them.
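
To see how such a composite score might be assembled, here is a hypothetical sketch; the source weights below are assumptions chosen for illustration, not the weights used by Altmetric or any other real provider.

```python
# Hypothetical weighted attention score for a single article.
# The weights are invented; real providers use their own (often proprietary) ones.

WEIGHTS = {
    "tweet": 1.0,
    "facebook_like": 0.25,
    "blog_post": 5.0,
    "bookmark": 0.5,
    "download": 0.05,
}

def attention_score(mentions):
    """Sum each source count multiplied by its assumed weight."""
    return sum(WEIGHTS.get(source, 0.0) * n for source, n in mentions.items())

article = {"tweet": 140, "blog_post": 3, "bookmark": 20, "download": 400}
print(attention_score(article))  # 140 + 15 + 10 + 20 = 185.0
```

Note that a score like this measures attention, not approval: a flurry of critical tweets raises it just as much as praise does.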

A recent study has shown that the number of tweets about an article correlates with the number of citations the article eventually receives. Twitter may therefore be a quicker way of predicting the scientific impact of articles, rather than waiting months for citations to accumulate. Following on from this, the world’s first twitter journal, the ‘Twitter Journal of Academic Research’, was launched yesterday; you can follow it @TwournalOf. The journal clearly has many issues to consider, such as how researchers are supposed to condense work they have been conducting for years, which would usually fill a 5,000-word article, into 140 characters. Other immediate problems concern peer review: how would peer review be done reliably? Can work be judged on tweets made in such an open forum? Would reviewers be influenced by previous tweets about the same article? Further problems arise around copyright and intellectual property.
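
The analysis behind the tweets-versus-citations claim boils down to a rank correlation between early tweet counts and eventual citation counts. A minimal sketch follows; the data points are fabricated purely for illustration.

```python
# Correlating early tweets with eventual citations (fabricated toy data).
from scipy.stats import spearmanr

tweets_first_week = [2, 15, 40, 7, 90, 23, 5, 60]   # tweets shortly after publication
citations_2yrs    = [1, 9, 25, 4, 48, 10, 6, 31]    # citations after two years

rho, p = spearmanr(tweets_first_week, citations_2yrs)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")     # strong positive rank correlation
```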

One possible route for the twitter journal would be to become part of the chain of events leading up to a paper’s publication. At present a paper gets tweeted about after it is reviewed and published, but the twitter journal could reverse that process, so that the paper is tweeted about before it is published. Of course, this would bypass the embargo periods usually set by journals such as Nature and Science, which underlines the essential point that the twitter journal will need backing from publishers if it is to be incorporated into the publishing process. Another interesting initiative is Mozilla Open Badges, through which individuals can earn digital badges for the skills they have. This is a useful and interactive way of showing skills that may not be easy to convey on a CV. The applications of Open Badges have so far been largely educational; however, it would not be difficult to imagine an application related to science publishing emerging sooner or later.

Social media metrics have developed significantly since 2010, and four main providers now work with publishers to offer article-level metrics to readers: Altmetric, ImpactStory, PLOS Article-Level Metrics and Plum Analytics. So far, some of the big names in the industry, including Nature Publishing Group, Cell Press, PLOS and BioMed Central Ltd, have jumped onto the altmetrics bandwagon. In the years to come, as altmetrics become more common among publishers worldwide, the IF may well be scrapped for the multiple flaws it suffers from. The twitter journal model needs to be analysed carefully, and clear policies need to be set up, before publishers and researchers will take it seriously.
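
For readers who want to inspect such article-level numbers themselves, here is a rough sketch of querying Altmetric’s public v1 REST API for a single article; the endpoint and field names reflect its documented interface at the time of writing and may change, and the DOI is only an example.

```python
# Fetch article-level metrics for one DOI from Altmetric's public API.
import requests

doi = "10.1038/nature12373"  # example DOI; substitute any article's DOI
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)

if resp.ok:
    data = resp.json()
    print("Altmetric score:", data.get("score"))
    print("Tweeters:", data.get("cited_by_tweeters_count"))
else:
    # A 404 simply means Altmetric has no record of attention for this DOI.
    print("No Altmetric record (HTTP", resp.status_code, ")")
```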

IMAGE: Wikicommons

8 thoughts on “Twimpact & Altmetrics”

  1. The great mistake made by altmetrics advocates is that they fail to consider individual papers. As soon as you do that, you see that altmetrics has even more potential to corrupt the scientific enterprise than other forms of metrics. See, for example, ‘Why you should ignore altmetrics and other bibliometric nightmares’ at http://www.dcscience.net/?p=6369

  2. I have read your piece at the link you provided above. I agree with some of the points you make about the irregularities of these metrics, but I think you have to think rationally and practically. Who has time to sit and read (and understand) each scientific paper and rate it on its soundness? We need a metric to quantify how much attention a particular paper is getting. Even if it is bad attention, that is still useful, as it will bring the paper to the attention of scientists and publishers so that they can retract it or take it off their website. Of course, I am not trying to justify this weakness of altmetrics; I am saying that we need to improve it further so that it can pick up negative comments and perhaps reflect them as negative numbers. Perhaps articles should receive negative scores to highlight the negative comments about them?

    You also make some very valid points about people citing papers without reading them. Again, how relevant or eye-catching a paper is to the public is something for the researchers to consider. Of course, people are going to care more about a potential treatment for cancer than about some physics topics, even though those topics are fundamental to all science. The funding system always has this issue: should physics be funded as much as some areas of molecular biology? The discovery of the Higgs boson is now benefitting scientists working in all areas, including medical treatments.

    But I do think we need a metric to measure the impact of science, instead of blindly conducting research, as a science paper is now published every 20 seconds. There needs to be a way of distinguishing how much more attention some science papers are getting, and why. We need a way to show taxpayers that something valid and important is being done with their money. The general public does not have enough interest in science, or spare time, to go to PubPeer and check out comments on papers; we have to find a quick and easy way for them to learn about science and its impact and applications. Metrics do have a long way to go, but I think altmetrics is a step in the right direction, as it measures impact from blogs, tweets and other social media, unlike the outdated impact factor. The next challenge will be making the scores distinguish between negative and positive publicity.

  3. I can see that I haven’t persuaded you at all. You say

    “Who has time to sit and read (and understand) each scientific paper and rate it on its soundness?”

    Well, if you haven’t read and understood a paper then you shouldn’t be tweeting about it or commenting on it. I see from the web that you have done an MSc in science communication. I hope they didn’t teach you to write about work that you haven’t read properly, or even to retweet the title. People who do that just end up looking foolish if the work turns out to be poor (as in the examples we cite).

    You say also

    “We need a way to show tax payers that something valid and important is being done by their money”

    How can you imagine that retweeting unread, overhyped papers shows taxpayers anything worthwhile at all? That’s sheer nonsense. You mistake advertising for reality.

    Perhaps you should have declared that you have a financial interest in your argument: you appear to work for BioMed Central and have a vested interest in promoting metrics.

    Andrew Plested and I are scientists, not sci comms people. It makes us angry that a bunch of people who aren’t scientists have grown up round the fringes of science where they prey on the work we do. In the process they harm and corrupt science, by promoting ill-understood gimmickry.

    As it happens, I’m all for communicating with the public. I blog about things that interest me, and which I understand. If you haven’t read and understood then at least get off our backs.

  4. First of all, let’s clarify a few things:

    I am currently studying for a Science Communication MSc and have not graduated yet. Secondly, yes, I am a part-time employee of BioMed Central, but the views represented here are my own, as this is my university’s science magazine – nothing to do with BioMed Central. Also, it would be good if future comments could address the topic in question rather than me, because the comment above reads rather like a personal attack instead of a constructive debate.

    Turning to the substance: I agree that people should not tweet about things they have not read properly; however, you can’t convince everyone to do that. The most difficult thing is to get a large group of people to follow a procedure. Next, as you point out in your article, people don’t have access to the articles that subscription journals such as Nature and Science tweet about (in the first six months, as they become open access after that). However, you can’t take away the level of trust that people have in these publications. This may indeed contribute to messing up the metrics, and we have to figure out a way around it. You also have to consider that the mainstream media get a lot of their stories from these journals, because they trust these publications. People then tweet about things they have heard or read in the mainstream media, even though they may not have read the original science papers. In other words, you can’t trust people to read everything they tweet or retweet, as people do not have the time to read, analyse and understand everything! What you suggest seems too idealistic. Metrics are not perfect, as I have already pointed out in this blog.

    I am not backing any company or any metric; I am simply stating my point of view on this blog. If you disagree, that is your point of view. My aim is not to convince you, but I am free to write what I believe, as per freedom of speech.

  5. I’m not intending to be personal, but when reading things it is always relevant to know the background of the writer (mine is at http://www.dcscience.net).

    Of course we both have the right, even the duty, to say what we believe. That’s what we are doing now.

    You are right about the problem of people not having access to glamour journals (and many others too). That’s one reason why they have to be competed out of business. Open access is essential, and it’s affordable if we don’t allow ourselves to be hijacked by Elsevier and NPG. As we pointed out, 60% of the articles in the top 100 for 2013 from altmetrics.com were not available unless you work in a university. That’s appalling if one wants to engage the public – it’s saying “trust me”, and that sort of argument from authority is the antithesis of science.

    The problem starts with the journals, which issue tweets and press releases that are often, as in our examples, wildly hyped up (this requires the connivance of the authors, who are not always blameless). The vanity journals are the most efficient at gaming the system in this way (our examples were from Science and NEJM). This ‘gaming of the system’ is one of the reasons why metrics do harm. ‘Gaming’ is a euphemism for dishonesty. That is why metrics can be not merely “imperfect” but actively harmful.

    You say “you can’t take away that level of trust that people have in [Nature and Science]”. I disagree. The general public probably know little about journals, and among scientists the reputation of the glamour journals is on the wane. They have the highest retraction rates, the lowest quality structures (and they aren’t open access).

    I’m quite glad you described my ideas as idealistic. What else should we be striving for in life if not ideals? Every ideal is described by its opponents as impractical, before it happens.

    Good luck with your MSc.

  6. Just to let you know, the bios of all of this year’s bloggers, including mine, are at this part of the website: http://www.isciencemag.co.uk/blogs/.

    I agree with open access. I think it’s essential, but we need to be careful about how open we make it, as sharing sensitive datasets with everyone may not be the right way to go. My views on open access are highlighted in a previous post: http://www.isciencemag.co.uk/blog/openness-closedness-or-both/. I agree that these journals need to adapt to the open access model, but I think it would be better to target the funders to make changes to their policies. For example, as highlighted in my previous post, the Wellcome Trust and the UK research councils have changed their policies to make sure research funded by them is published as open access. This needs to happen worldwide, and at a government level, for these publishers to adapt to this model. I don’t think journals such as Nature or Science can be boycotted completely, but I do think they need to change their business models.

    I see your point about ideals as well. However, I think there is an important reality check that needs to be done at every step of the way.

    All the best to you as well!

  7. The author seems to miss a lot of points, and seems to have little understanding of the metrics used for quality assessment of articles or of how the scientific community behaves.

    In the first place, the Impact Factor is only a measure of a journal’s influence and cannot be used at the article level or for individuals. All quality measures are usually lagging in nature. In this day and age of quick results and fast turnaround, views, clicks and downloads may give us some information, but what exactly are we attempting to measure by following these scores?

  8. I am aware that impact factors are only a measure of a journal’s influence and not an article’s. In fact, that is exactly my point: some articles in a journal are more popular and receive more attention than other articles published in the same journal. The calculation behind the impact factor is also flawed, as outlined above.

    Next, I go on to talk about ‘altmetrics’, which are article-level metrics used to measure the influence of specific articles (not journals). These draw on blogs, tweets and other forms of social media. I would argue this is a better way of measuring impact than the traditional impact factor. I don’t think you’ve understood my blog post properly; it might be worth having another read through it!
