Riding the Tectonic Plates 2: Disruptive Effects

Photo: The Grand Canyon, USA (image: Tobias Alt)

Report from the Third Semantico Online Publishing Symposium

Technology is driving disruptive change in scholarly publishing – as well as altered expectations and behaviours among scholars, researchers, students, librarians and those who set institutional and governmental policy. This symposium was held recently in London to discuss how publishers can survive and thrive within this fast-changing landscape.

An invited audience of publishing industry leaders debated the issues under the Chatham House Rule. Delegates came from organisations including Beilstein-Institut, BioScientifica, CABI, CrossRef, eLife, Mendeley, Nature Publishing Group, Palgrave Macmillan, Sage, SIPX and Springer.

The discussion was in three parts, covering the following themes:

  1. The changing user
  2. Changing business models
  3. Future tech trends

This post covers the disruptive effects of changing business models: what are the important new business models to focus on and what implications do they have for the future of publishing?

Part 2. Changing business models

In Part 1 of this series we looked mainly at the disruptive effects of Open Access (OA) on publishers, particularly on the production side, but these are relatively minor headaches compared to the effects it is having within institutions.

OA is a different business model, under which the money that currently pays for library subscriptions will in future fund the authors of research papers in publishing their results. Under this new, author-pays model, funding is transferred from readers to authors in the form of APC (article processing charge) revenues. But who is going to be in charge of the pot: who will administer these APCs?

There is a widespread assumption that the job will ultimately fall to librarians, and in certain institutions this is what they are being told. Some are trying to ‘retool’ themselves for that role, and when people talk about what the librarian of the future is going to be, the conversation quite often goes in that direction.

It is acknowledged that this is quite a radical step for them. However, there is a fear that if librarians don’t get that block grant, what else will they have to do? With all of its resources online, will the library become just somewhere that students go for coffee?

Where’s the money?

The introduction of the new model is also causing confusion and a degree of consternation among academics.

One publisher delegate gave an example to illustrate this bafflement. An author from a famous old UK university emailed her to say he was going to withdraw his article from her journal because he was being asked to pay an APC, and there wasn’t any money within his institution for APCs until April 2013, when block grants were to be given. In fact, a seven-figure sum had been allocated by RCUK for just this purpose, but it proved impossible to track down exactly who had the money. The academic was at a loss even to know who to ask.

Not only is there no process in place in many cases, but there is no existing context for this type of funding, which makes matters harder when the change is progressing at such speed. Open access is moving faster than the culture of academia can currently accommodate.

OA in the sciences and the humanities

OA began in the sciences, and the humanities have been slower to come to terms with it. ‘Implementing Finch’, a two-day workshop organised by the Academy of Social Sciences that took place in the week before our symposium, was felt by more than one of our delegates to mark the moment when HSS (Humanities and Social Sciences) woke up to the implications of OA. Essentially it had been ‘asleep at the switch’ until then. This has proved a rude awakening for some – witness the outrage vented in letter columns and elsewhere in the media since our symposium. ‘Pay-to-say’, as it has been dubbed, is attacked as a threat to academic freedom, to academic control of research outputs, to author copyrights, to equality between and within institutions, and to research funding in general. Nor has this resistance been confined to HSS.

In some cases this resistance will come slightly after the fact, since RCUK (88% of whose disbursements go to STEM) has largely set its direction on OA; HEFCE, however, has yet to complete its consultation over the 2014 Research Excellence Framework, the results of which are eagerly anticipated.

From impact factor to impact cases

Accompanying the move to Open Access is a change in the way research grants are allocated. This change addresses a different, but linked part of the process of scholarly publishing, the criteria by which research applications are judged.

Up until now, ‘impact factor’ has ruled in helping to decide which institutions are producing quality research, based on the quality of the journals in which research results are accepted for publication.

What we are beginning to see is a shift towards ‘impact cases’, where researchers submitting grant applications are called on to make a case for the likely impact of the proposed research on the wider world. The difference is that academics get to argue their own case, with no reference to journals or impact factor. In the UK, academics at Nottingham University, for example, are already being asked to submit impact cases for every piece of research they publish.

The eLife experiment

If there were any doubt at all that OA is a contentious area in scholarly publishing right now, one would only have to look at some of the attacks levelled at the OA journal eLife since it began publishing last year.

In our symposium, whose delegates covered a spread of opinions, eLife’s representative also came in for a degree of good-natured sniping from some long-established publishing brands. However, most of the questioning of their business model was motivated by a desire to understand how it will work.

Delegates were particularly interested in the question of selectivity. eLife aspires to be a journal where researchers will want to publish, and the normal means of accomplishing that end under the old model would be to operate with a high rejection rate, in order to build a prestige brand. eLife aims to be selective, but how does that square with the open access model?

Comparisons are inevitable with PLoS, which launched in 2003 but was notably less selective; however, in those ten years the context has changed beyond recognition. eLife will start with a high rejection rate, but is keen to stress that its real concern is the quality of the science it publishes.

The reason why it is being funded (by Howard Hughes Medical Institute, the Max Planck Society and the Wellcome Trust) arises from a perception that the way journals work now is sub-optimal for scholarship and for science. The point is therefore not to create an OA version of what we have now, but to build something new – not to replicate the existing model but to change it.

However, getting such a start-up going within an existing market context means building an attractive brand and – in some ways – playing essentially the same game as the established players.

The starting point with eLife is to attract the right kind of science and to refine and streamline the process by which articles are accepted and peer reviewed – something of great interest to academics.

The journal is run by academics and a lot of effort goes into thinking about how content is presented from the point of view of the user.

The long-term goal, paradoxically, is for eLife to undermine its own existence, in that what it is trying to get away from is the idea that researchers are judged on the basis of where they publish. Right now, eLife is playing the same game, on the basis that it isn’t currently possible to do things any other way.

Mendeley’s firehose

Another disruptive start-up was represented at our table, and again the questioning about business model was intense, and took us into areas of altmetrics and Big Data.

Mendeley is a free reference manager and academic social network that helps users organize research, collaborate and discover the latest research online.

However, all free-to-user platforms have to be paid for somehow, and what particularly interested our delegates was how Mendeley is monetizing the valuable data its platform generates. It was assumed that this could be extremely attractive to large companies in scholarly publishing.

(It should be mentioned that since the Symposium, Mendeley has gone into advanced talks with Reed Elsevier.)

While maintaining standards of privacy, Mendeley aggregates some information anonymously, and has recently started monetizing parts of its data and sending it to libraries. This is called Mendeley Institutional Edition.

Here’s a brief example of how the company’s data can be useful to a university.

Say Mendeley has 7,000 users on an institution’s campus. The institution can see which of the journals it subscribes to are used most by that population, and what sort of usage they are getting. In addition, it can see which journals it doesn’t subscribe to are nevertheless being used. It is estimated that something like 30-50% of the articles you might need as a scholar can’t be accessed through your institution’s subscriptions, so this is useful information to drive acquisition policy.
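
Purely by way of illustration, here is a minimal sketch of that kind of aggregation in Python. The journal names, reader counts and record fields are invented for the example and do not reflect Mendeley’s actual data model or API.

```python
# Hypothetical sketch: aggregating anonymised readership records for one campus.
from collections import Counter

# (journal, readers) pairs drawn from the institution's ~7,000 users
readership = [
    ("Journal of Cell Biology", 412),
    ("Acta Metallurgica", 35),
    ("Nature Physics", 290),
    ("Journal of Cell Biology", 128),
]
subscribed = {"Journal of Cell Biology", "Nature Physics"}

# Total on-campus readership per journal
totals = Counter()
for journal, readers in readership:
    totals[journal] += readers

# Journals with heavy use that the library does not subscribe to are
# candidates for acquisition; heavily used subscriptions justify renewal.
unsubscribed_demand = {j: n for j, n in totals.items() if j not in subscribed}
for journal, readers in sorted(unsubscribed_demand.items(), key=lambda kv: -kv[1]):
    print(f"Not subscribed, {readers} readers on campus: {journal}")
```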

This is just a simple example, and represents a drive to monetize the data that is still in its early stages. Mendeley has information on demographics, academic status, institutional affiliation, and the whole clickstream of use. Only readership data is given out through the API at the moment, but the rest is possible.

To see where this process of monetizing user data might be headed, it is interesting to look at the general consumer market.

Facebook, which IPO’ed so spectacularly last year, earns an average of less than $5 per user annually, but aspires to increase that metric twentyfold (Google, apparently, wants $20 per user).

To give a comparison, the $80+ per user that Facebook aspires to is the sort of spend that would put it in cable subscription territory. Can this be achieved, purely through monetizing big data, without some massive invasion of privacy? Selling user information in this way is something the publishing industry has always been careful about. With Mendeley you get a very interesting insight into what scholars are actually doing in the process of scholarly thinking, but the question one delegate posed to the company was: how much like Facebook are you planning on being?

On a more positive note perhaps, it’s interesting to see how Twitter is faring with the business of monetization. This other social media behemoth hasn’t got into the messy business of monetizing its data piece by piece itself, but instead licenses it to third parties who do the job of managing that complexity.

One of these, a data company, has made a business out of building filters for a further tier of third-party developers, allowing them to tap into the ‘firehose’ and extract targeted subsets of the data. If, for instance, you wanted to monitor tweets posted in a specific timeframe, from branches of a well-known coffee shop chain, containing specific keywords and with certain user demographics, there is a company that can provide that for you.
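
To make the idea concrete, here is a toy sketch of that kind of filtering in Python. The message fields, branch identifier and demographic attribute are invented for the example; they do not correspond to any real Twitter or data-provider API.

```python
# Illustrative only: filter a stream of messages by time window, location,
# keywords and a demographic attribute. All field names are hypothetical.
from datetime import datetime

def matches(msg, start, end, place, keywords, age_range):
    posted = datetime.fromisoformat(msg["posted_at"])
    return (
        start <= posted <= end
        and msg["place"] == place
        and any(k in msg["text"].lower() for k in keywords)
        and age_range[0] <= msg["author_age"] <= age_range[1]
    )

stream = [
    {"posted_at": "2013-02-01T09:15:00", "place": "coffee-shop-branch-42",
     "text": "Trying the new flat white", "author_age": 27},
]
hits = [m for m in stream if matches(
    m, datetime(2013, 2, 1, 8), datetime(2013, 2, 1, 12),
    "coffee-shop-branch-42", ["flat white", "latte"], (18, 35))]
print(len(hits))  # -> 1
```

The value such intermediaries add is precisely this layer of filtering and management, so that downstream developers never have to handle the full firehose themselves.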

Mendeley’s future could lie in building a similar type of ecosystem relevant to scholarly publishing.

Reputation management

At the root of many anxieties about OA is a concern that something extremely valuable – the means of establishing and recognizing academic reputation – might be lost in the change of model.

There are not that many mechanisms for achieving a reputation in academe at the moment: it’s largely about where you publish your work. That’s the way the system works now, but it is changing.

Altmetrics proposes a richer mix – an extended dataset – than the citation counts we currently use as the mechanism for quantifying academic reputation. The change from journal-level metrics to article-level metrics is a symptom of this move to greater granularity. However, the worry is that something is lost along the way.

It is argued that we need to maintain a clear distinction between the concept of popularity and the concept of authority in order to understand how authority is gained in different sectors.

In the consumer domain you can build a brand through sheer popularity. The music industry has been undermined because ways have been found to harness the wisdom of crowds and enable breakthrough voices without their needing prestige conferred on them by some intermediary – a music publisher, a critic or a record label.

But in academic publishing you need a mechanism for establishing a sustainable, authoritative voice – which is a different thing from popularity. Your route to prestige is not quite the same as what altmetrics can tell you – assuming that to be information such as how widely discussed you are in the twittersphere, or how many downloads you have achieved.

Scientists, when they publish their results, are not trying to impress the world at large; they are trying to maintain or grow their credibility within a tightly limited group of their peers. If the Dunbar number is taken as an accurate measure, this could be as few as 100-150 people.

In music, an album might need millions of sales to become a best-seller. The audience for academic research within a particular discipline is far smaller.

The best-read article on Nature is about mutant butterflies in Japan, but that is by no means the best piece of science in that famous publication. Huge popularity is not an indicator of authority in scholarly publishing. The biggest number is not the best.

A shout-out for the citation

There is still considerable support for the citation as a key metric of reputation – but there are also those who would like to see it become more meaningful and more meaning-rich, i.e. not just a numerical count of how often a paper is cited.

Not recorded under the present system is what the person doing the citing actually said about a particular piece of research in the sentence in which it was cited. The reference could be positive or negative, but that wouldn’t make any difference: it is still a citation. Semantics and data mining could lead to an enrichment of citations, giving a clearer picture of true impact, but that is a technical problem yet to be cracked.
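
As a toy illustration of what such enrichment might look like – nothing more than a crude keyword heuristic, nowhere near a solution to the semantic-mining problem described above:

```python
# Toy sketch of an 'enriched' citation count: classify the sentence in which a
# work is cited as supportive, critical or neutral using a crude keyword list.
from collections import Counter

SUPPORT = {"confirms", "consistent with", "builds on", "as shown by"}
CRITICISM = {"contradicts", "fails to", "in contrast to", "overestimates"}

def classify(sentence: str) -> str:
    s = sentence.lower()
    if any(cue in s for cue in CRITICISM):
        return "critical"
    if any(cue in s for cue in SUPPORT):
        return "supportive"
    return "neutral"

citing_sentences = [
    "Our results are consistent with Smith et al. (2010).",
    "This model contradicts the estimates of Smith et al. (2010).",
    "Smith et al. (2010) measured the same effect.",
]
print(Counter(classify(s) for s in citing_sentences))
# Counter({'supportive': 1, 'critical': 1, 'neutral': 1})
```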

At the moment it faces a number of obstacles. Who will do the tagging, for a start (if tagging is needed)? Authors would probably find it too complicated.

Then there is the complexity of the cultures and conventions of academic discourse within different disciplines, described as ‘a weird sort of theatre where you write the paper to accrue reputation without explicitly saying what it is you’re actually saying’.

In some areas this can manifest itself as appearing to disagree with someone but actually agreeing with them; a violent objection that actually turns out to be a veiled compliment.

Will any semantic engine we can currently imagine be sophisticated enough to parse that level of meaning?

Alternatively, would it require a change in the way people write papers to make citations more meaningful, and would there be strong enough incentives for authors to comply?

It is always the case that more questions are raised than answered at these symposiums, so this post has to end here, at the frontiers of what we can presently say with any certainty.

