Demands for open access

Crowdsourcing has brought fresh impetus to scientific discovery, in terms of both raw processing power and public engagement. Anyone with a computer can get involved in projects ranging from mapping galaxies to logging visits to their bird feeder. What would once have been a PhD student’s dreary summer of collecting raw data is now an opportunity for anyone with a bit of spare time to get involved, even in a small way, with a genuine research problem.

Although members of the public are more engaged with the acquisition of data, they still face a problem in getting to the results: one-off access to a single peer-reviewed journal article costs around $25. Considering that much scientific research is publicly funded, and that the peer review process is conducted gratis by researchers, many academics are calling loudly for citizens to be freed from the knowledge hegemony of the major journals. The public should not have to pay once for research to be undertaken and then again for it to be communicated to them, or so the argument goes. With this increasing demand there is perhaps a chance to remould the publishing process and remove the accusations of bias and manipulation that have dogged the traditional model of journals as ‘gatekeepers’ to scientific knowledge.

These are certainly issues that need addressing. Numerous reviews in journals such as the British Medical Journal have found systematic evidence of publication bias, and with funding becoming increasingly scarce and dependent on visible outcomes, scientists are turning to less traditional, more open ways of publicising their work. A blog and a Twitter profile can generate far more public visibility for a department than journal articles that will never be read.

So could open access really challenge traditional publishing, or will the respectability and prestige of Nature and Science always dominate? And how useful is crowdsourcing – are there any disadvantages to entrusting research to these new ‘citizen scientists’? I, Science spoke to an expert in the field to gain some insights.

Dr Mark Mulligan is a Reader in Geography at King’s College London and has used crowdsourcing on a range of projects, from digitising the locations of mines and dams to building policy support tools. He is also the founding editor of the open access e-journal Advances in Environmental Monitoring and Modelling. Here he talks about the practicalities of using crowdsourcing, issues of quality control, and the future of peer review and open-access journals.

Could you outline a project where you’ve used crowdsourcing?

I have been involved in a number of such projects, including the development of a global database of mines and dams using a Geo-wiki tool that we built in Google Earth and, more recently, crowdsourcing the validation of environmental models by making them very easy to apply to real-world problems at sites throughout the world.

Was there a lot of interest?

There was actually relatively little interest in the dams project, partly because it was developed in the early days of the technology (2006) and, being based in Google Earth, was not as easy to use as it could have been. Most of the 36,000 dams in the database were digitised by my PhD students and me. The web-based models are more widely used, since they solve a problem for their users by providing information for conservation, water resources and other environmental projects. We have more than 1,000 users of the Co$ting Nature ecosystem services modelling tool and the WaterWorld water resources tool. These users – members of conservation NGOs, international development organisations, academics and students – help us to improve the associated datasets, models and systems by testing them against their own on-the-ground knowledge of the sites to which they are applied.

Did you come across any issues of quality control?

Even the specialists and dedicated PhD students developing the dams database still had to carry out a significant process of comparison and quality control. Any dataset that has more than one collector is open to differences in interpretation and thus inconsistency. Crowdsourced data cannot usually be used ‘as is’ but require a strong validation step.
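
To give a flavour of what such a validation step can look like, here is a minimal, purely illustrative Python sketch – not code from Mulligan’s project – that flags features where different contributors’ submitted coordinates disagree by more than a set tolerance. The record format, contributor names and one-kilometre threshold are all assumptions made for the example.

```python
from collections import defaultdict
from math import radians, sin, cos, asin, sqrt

# Hypothetical submissions: (contributor, feature_id, lat, lon).
# The record format and the 1 km tolerance are illustrative
# assumptions, not details of the actual dams database.
records = [
    ("alice", "dam_001", 51.5007, -0.1246),
    ("bob",   "dam_001", 51.5009, -0.1241),
    ("carol", "dam_002", 48.8584,  2.2945),
    ("dave",  "dam_002", 48.9100,  2.3500),  # disagrees with carol
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Group submissions by feature, then flag any feature whose contributors
# disagree by more than the tolerance, so a human can review it.
TOLERANCE_KM = 1.0
by_feature = defaultdict(list)
for contributor, feature, lat, lon in records:
    by_feature[feature].append((lat, lon))

for feature, points in sorted(by_feature.items()):
    worst = max(
        (haversine_km(*a, *b) for i, a in enumerate(points) for b in points[i + 1:]),
        default=0.0,
    )
    status = "needs review" if worst > TOLERANCE_KM else "ok"
    print(f"{feature}: max disagreement {worst:.2f} km -> {status}")
```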

Do you see crowdsourcing as a way to engage with the public or is it preaching to the converted?

A bit of both. There are notable successes in crowdsourcing as an engagement process. Our geodata portal provides a range of environmental datasets to a wide audience: because these geographic data are visualised in Google Earth and Google Maps, users do not have to be specialists in Geographical Information Systems to use and understand them. As a result we have a wide range of users, from folks at NASA through to artists and schoolchildren.

You are a founding editor of an open access journal. Do you see crowdsourcing playing more of a role in peer review in the future?

Yes, I hope so. I would like to see scientific publications made more accessible to a wide range of audiences and peer reviewed by a broad community of interested stakeholders rather than by two or three anonymous reviewers. I would like to see journals evolve so that they ‘publish’ online all the papers they receive that pass basic editorial and communication-effectiveness checks.

As it is, much research – some of which later turns out to be useful and correct – remains unpublished because it does not agree with the consensus views of the reviewers. There are thus inbuilt disincentives against some types of innovation, because of the difficulty of getting such work published.
