John Haynes is chief executive officer of AIP Publishing. John received a PhD in chemistry from the University of British Columbia, followed by postdoctoral research at the University of Oxford. He moved from academe to academic publishing and has almost thirty years' experience in the STM publishing industry. He started out as a commissioning editor with Taylor and Francis and Academic Press, took up increasingly senior roles with IOP Publishing and the Royal Society of Chemistry, and joined the American Institute of Physics in 2009 as vice president, publishing. Dr Haynes was appointed AIP Publishing's first chief executive officer in 2013, when the new company was created as a wholly owned subsidiary of AIP. Paula Gantz met up with John Haynes recently to talk about preprints and how they are affecting the STM sector.

Please give us a little background about preprints.

Preprints are early versions of scientific articles. Many will later be submitted to a traditional journal and go through a peer review process before being published. Preprints complement the peer-reviewed literature and provide an early view of research.


The first real electronic preprint service, called XXX, was established by Paul Ginsparg at the Los Alamos National Laboratory. Now called arXiv and based at Cornell University, it was created to serve the high-energy physics community, enabling researchers based at the large accelerator labs to share articles. Before this electronic means of distribution, scientists circulated their papers as hard copies in the regular mail; Ginsparg designed an electronic system to do the same thing, to save time and be more efficient. From those small beginnings 26 years ago, arXiv has grown to over one million articles, and preprints have been adopted by other academic communities as well. There's bioRxiv for biomedicine, and there's ChemRxiv, which has just been launched by the American Chemical Society and some chemistry publishers.

Preprint services are typically free for the researcher to post to and free to read.

Who supports all this?

That is one of the challenges for the service providers: to find a sustainable and viable business model. arXiv, which is the biggest preprint server, is supported by a diverse group of stakeholders. It receives support from Cornell University itself and funding from the Simons Foundation. It receives pledges from about a hundred libraries as well, but it is looking to raise funds to accomplish the necessary upgrades to its technology.

Are you involved with arXiv?

There are opportunities for publishers and preprint services to work together. Authors have pain points in submitting an article first to the preprint server and then later to a journal. If publishers could reduce that workload with a single point of submission, that's to the author's advantage. For the reader, the question is different: a reader might find a version of a paper on a preprint server while that same paper, in peer-reviewed form, appears in a journal. How are these two linked?

A published article has a DOI, but the preprint may or may not have a DOI. How do you create the scholarly record in a better way for readers so that they’re aware of both the peer reviewed version and the preprint? We are talking with arXiv about ideas like that.

What are the top one or two benefits of preprints?

There are two main ones. If you ask authors why they submit their paper to a preprint service, a predominant reason is that it establishes priority; it's their claim, because they know that it might take weeks or months or, in some subject disciplines, even years for the paper to be peer reviewed and published in a journal. The preprint is the early view and documents the discovery.

For the reader, the benefit of the preprint service is that it’s an open resource, with no access barriers and no subscription tolls. The preprint server becomes an alerting service or early view.

How does the quality control process work?

arXiv has a team of volunteers, experts in their fields, who serve as moderators. They get a batch of papers from the daily postings and provide a rapid go/no-go filter. It's a coarse filtering process, with the large majority passing. There's no peer review in the sense of the rigorous, constructive feedback from referees that a conventional journal provides. That was one of the initial fears publishers had: that the preprint services would develop a peer review process and hence become directly competitive. That hasn't happened, and publishers, particularly society publishers, understand that the two serve different purposes.

What about the other preprint servers?

The life science community has bioRxiv, which was founded at Cold Spring Harbor Laboratory. The chemistry community has typically been opposed to the idea of preprints, partly because some journals considered posting to a preprint server to be prior publication, ruling the paper out from being published in the journal. So chemistry and physics, although close in many of their publishing habits, have some different norms, and it will be interesting to see how ChemRxiv plays out; it is currently in beta.

Do you see preprints expanding into something else? Do you think this will change the nature of peer review in any way, or do you see anything more futuristic involving this whole area?

I think when you start to create such a large corpus of content that is freely and openly available, you need to consider how you can start to add value and services on top of it by taking some of the new technologies (artificial intelligence, natural language processing, machine learning) and applying them to that corpus. How can you start to extract data or extract meaning? Paul Allen, the Microsoft co-founder, is mining aspects of the computer science literature on arXiv in a service called Semantic Scholar, which has the stated aim to "cut through the clutter". There are opportunities for publishers to really think about how to add value across the whole corpus of content using these data mining techniques. Another forward-looking effort is to develop services that make things easier for authors: "I can submit to a preprint server and submit to a journal, and I don't have to do it twice because it's a bloody nuisance and it takes up too much of my precious time."

Paula Gantz consults for learned societies in the U.S., Europe and China. Her focus is on new products, new technologies and innovative business models. She has over 35 years' experience in scientific and consumer publishing and an MBA from The Wharton School.

Paula Gantz Publishing Consultancy

Join the conversation about rights and licenses by following us on Twitter @IPRLicense