Category Archives: Open Access and Institutional Repositories

Cancelling Wiley?

Because they were spaced almost a full year apart, I really did not connect the dots when two Canadian universities announced that they were cancelling their “Big Deals” with the publisher John Wiley & Sons.  The Times Higher Education reported on the decision at the University of Montreal back in January 2014, while the announcement made by Brock University came only a few weeks ago.  I would not have considered this a trend worth commenting on had it not been for conversations I had last week at the Fall CNI Membership meeting.  During that meeting, two different deans of large university libraries told me, unbidden and in separate conversations, that they were also considering ending their deals with Wiley.  I was struck by the coincidence, which caused me to remember these two announcements from Canada and to begin to ponder the situation.

Two different questions occurred to me when I thought about these four significant cancellations or potential cancellations, all directed at a single publisher.  First, why was Wiley the focus of this dissatisfaction?  Second, what is the next step?

As for what the complaints are about Wiley, the answer is pretty much what it always is — money.  The THE article and the Brock University report both tell us that exchange rates have made the annual “higher than the inflation rate” price increases for these packages even harder to bear than usual.  They also point to another problem.  Pricing is based on the large number of titles included in these package deals, but many of those titles are not very useful.  The Brock post notes that the Wiley package has a significantly higher cost per use than does their Elsevier package, which presumably reflects the fact that many of the titles the University is paying for in the package simply do not get used.  The same reality is probably behind the fact noted by THE that Montreal would subscribe to less than 25% of the titles that had been included in the package it was cancelling.  It would be interesting to find out, a year on, how much those other titles have been missed.
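
(As a rough illustration of the cost-per-use arithmetic behind these decisions, here is a minimal Python sketch.  The prices and download counts are invented for the example; they are not Brock's, Montreal's, or any publisher's actual figures.  The point is simply that a package padded with little-used titles stands out immediately on this metric, even when its sticker price looks comparable.)

    # Hypothetical cost-per-use comparison; all figures below are invented.
    def cost_per_use(annual_price, annual_downloads):
        """Price paid per article download for a journal package."""
        return annual_price / annual_downloads

    packages = {
        "Package A (well-used titles)": {"price": 900_000, "downloads": 450_000},
        "Package B (padded big deal)": {"price": 1_100_000, "downloads": 110_000},
    }

    for name, figures in packages.items():
        print(f"{name}: ${cost_per_use(figures['price'], figures['downloads']):.2f} per download")

    # Package A works out to $2.00 per download and Package B to $10.00,
    # even though their sticker prices are not far apart.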

In my conversations with the two library deans, much the same thing was said about Wiley — demanding a large price increase, being inflexible in negotiation, and selling “a lot of junk that I don’t need” in the package.  Libraries are beginning to discover that they do not need to put up with those tactics.  Publishers often tell us that they are publishing so many more articles, which justifies their price increases, and they tell us how selective their flagship journals are.  But when we look at these big deals, it is clear that selectivity is not an across-the-board approach; many articles that are not very useful just slide down the hierarchy to get published in journals whose main purpose is to pad out a “big” deal.

To me the more important question is “what now?”  Unfortunately, many times when a library makes this kind of decision there is actually little money saved, since the funds simply go into re-subscribing to a smaller, selected list of titles from the same publisher.  But presumably some of these cancellations result in dollars saved.  And when they don’t, I propose that libraries ought to reexamine their approach.  When you have cancelled a dross-laden package, think twice before reinvesting all of that money in as many individual subscriptions from the same publisher as possible; make a careful decision about where the division between useful titles and unnecessary ones really lies.  Because here is the thing — money that can be saved and reinvested in open access projects will give us a higher return on our investment, because those projects will provide greater access.

It seems clear that, over time, libraries will need to move more and more of their spending away from the consumption side of scholarly production and do much more to support the creation and dissemination of knowledge directly.  Commercial publishers hope to capture those dollars as well, but one of the real benefits of supporting open access can and should be more freedom from businesses addicted to 30% profits.  I would like to challenge libraries to consider, when they have to cancel, using the money to support non-profit or lower profit open access projects.  Work with a society to provide subvention for a scholarly journal to become OA.  Work with your university press to support OA monographs.  Finally, even if not compelled by immediate budget realities, think about making some strategic cancellations in order to take these kinds of steps.  We know that open access is our future, and it is vital that we take control of that future before others take it from us.

I don’t know if Wiley is the worst offender amongst the large commercial publishers, or whether there is a real trend toward cancelling Wiley packages.  But I know the future of scholarship lies elsewhere than with these large legacy corporations.  The process of weaning ourselves from them will be slow and drawn-out.  But especially when the cancellations are going to happen anyway, we should have the idea of using the funds to advance the transition to open access foremost in our minds.

For a similar, but likely better informed, perspective on the idea of cutting subscriptions to support open access, please read Cameron Neylon’s post on “Letting it go — Cancelling subscriptions, funding transitions,” which ties the idea in his title to the discussion going on in the Netherlands about Elsevier’s big deal.

Public access and protectionism

By now many folks have commented on the announcement from Nature Publishing Group early this week about public access to all of its content and most have sussed out the fairly obvious fact that this is not open access, in spite of the rah-rah headline in the Chronicle of Higher Education, nor even public access as it is defined by many national or funder mandates.  Just to review quickly the major points about why this announcement actually gives the scholarly community so much less than is implied by either of those terms, consider these limitations:

  1. A putative reader can only get to an article if they are sent a link by a subscriber, or the link is present in a news article written by one of the 100 news organizations that NPG has chosen to “honor.”
  2. Articles can only be read inside NPG’s proprietary reader.
  3. No printing or downloading is possible, so a non-subscriber hoping to use one of these articles to further her own research better have a darn good memory!
  4. No machine processing will be possible; no text or data mining.

In short, all of the inconveniences of print journals are preserved; what NPG is facilitating here is essentially a replica of loaning a colleague your copy of the printed magazine.  Or, at best, the old-fashioned system whereby authors were given paper “off-prints” to send to colleagues.  Although, honestly, off-prints had more utility for furthering research than this “now you see it, now you don’t” system has.

If this is not open or public access, what is it?  I like the term “beggar access,” which Ross Mounce applied to NPG’s scheme in a recent blog post, since it makes clear that any potential reader must ask for and receive the link from a subscriber.  Some suggest that this is a small step forward, but I am not convinced.  There is nothing public or open about this “ask a subscriber” model; all it really does is prevent scholars from downloading PDFs from their subscription access to NPG journals and emailing them to colleagues who lack a subscription.  In short, it looks like another stage in the ongoing comedy of fear and incomprehension about the way digital scholarship works on the part of a major publisher.  But Mounce’s post suggests that the move is more than that; he points out ways in which it may be designed to prop up digital businesses that Nature and its parent Macmillan have invested in — specifically ReadCube and Altmetric.com.  The byzantine scheme announced by Nature will drive readers to ReadCube and will generate data for Altmetric.com, helping ReadCube compete with, for example, Elsevier and their proprietary reading and sharing tool, Mendeley.

That is, this looks like another move in the efforts by the large commercial publishers to buy up and co-opt the potential of open access. On their lips, open access does not mean greater potential for research and the advancement of science; it means a new market to exploit.  If administrators, researchers and librarians allow that to happen, they will have only themselves to blame.

My colleague Haley Walton, who recently attended OpenCon 2014, told me about a presentation made by Audrey Watters that included the idea of “openwashing,” which Watters defines like this:

Openwashing: n., having an appearance of open-source and open-licensing for marketing purposes, while continuing proprietary practices.

This is exactly what is happening in this announcement from NPG; old business models and awkward exploitation of new markets are being dressed up and presented as a commitment to access to scholarship, but the ruse is pretty transparent.  It may quack like a duck, or be quacked about, but this plan is really a turkey.

If NPG really were committed to better access for scientific research, there is a simple step they could take — put an end to the six-month embargo they impose on author self-archiving.  Much of their competition allows immediate self-archiving of an author’s final manuscript version of articles, but Nature does not.  Instead, they require a six-month embargo on such distribution.  So this new move does very little to ameliorate the situation; the public still cannot see Nature-published research until it is old news.

Speaking of news, at Duke we have a relationship between the office of Scholarly Communications and that of News & Communications whereby we are notified of upcoming articles about research done at Duke.  In many cases, we are able to work with authors to get a version of the article in question into our repository and provide an open link that can be included in the article when it is released, or added shortly after release.  Our researchers find that putting such links in news stories leads to much better coverage of their discoveries and increased impact on their disciplines.  We always do this in accordance with the specific journal policies — we do not want to place our authors in a difficult position — which means that we cannot include Nature-published articles in this program.  To be frank, articles published in Nature remain highly valued by promotion and tenure committees, but relatively obscure in terms of their ability to advance science.  NPG seems to understand this problem, which is why they have selected a small number of news outlets to be allowed to use these tightly-restricted, read-only links.  They want to avoid increasing irrelevance, but they cannot quite bring themselves to take the necessary risk.  The best way they could advance science would be to eliminate the six-month embargo.

It is interesting to consider what might happen if Nature embraced a more comprehensive opportunity to learn what researchers think about open access by pairing their “get a link from a subscriber” offer with an announcement that they were lifting the six-month embargo on self-archiving.  That would demonstrate a real commitment to better access for science, and it would set up a nice experiment.  Is the “version of record” really as important to researchers as some claim?  Important enough to tolerate the straitjacket created by NPG’s proprietary links?  Or will researchers and authors prefer self-archiving, even though an earlier version of the article must be used?  This is not an obvious choice, and NPG might actually win its point, if it were willing to try; they might discover that their scheme is more attractive to authors than self-archiving.  NPG would have little to lose if they did this, and they would gain much more credit for facilitating real openness.  But the only way to know what the real preference among academic authors is would be for Nature Publishing to drop its embargo requirement and let authors choose.  When they make that announcement, I will believe that their commitment to finding new ways to promote research and learning is real.

Attention, intention and value

How should we understand the value of academic publications?  That was the question addressed at the ALA Annual Conference last month during the SPARC/ACRL Forum.  The forum is the highlight of each ALA conference for me because it always features a timely topic and really smart speakers; this year was no exception.

One useful part of this conversation was a distinction drawn between different types of value that can be assigned to academic publications.  There is, for example, the value of risk capital, where a publication is valued because someone has been willing to invest a significant amount of money, or time, in its production.  Seeing the value of academic publications in this light really depends on clinging to the scarcity model that was a technological necessity during the age of print, but which is increasingly irrelevant.  Nevertheless, some of the irrational opposition we see these days towards open access publications seems to be based on a myopic approach that can only recognize this risk value; because online publication can be done more inexpensively, at both the production and consumption ends, and therefore does not involve the risk of a large capital investment, it cannot be as good.  Because the economic barrier to entry has been lowered, there is a kind of “they’ll let anyone in here” elitism in this reaction.

Another kind of value that was discussed is the cultural value that is supposedly infused into publications by peer-review.  In essence, peer-review is used as a way to create a different, artificial type of scarcity — amongst all the material available in the digital age, peer-review separates and distinguishes some as having a higher cultural value.

Of course, there is another way to approach the task of winnowing valuable material from the blooming, buzzing confusion; one could look at how specific scholarship has been received by readers.  That is, one could look at the value created by attention.  We are especially familiar with attention value in the age of digital consumerism because we pay attention to Amazon sales figures, we seek recommendations through “purchased together” notes, and we look at consumer reviews before booking a hotel, or a cruise, or a restaurant.  Some will argue that these parallels show that we cannot trust attention value; it is only good for inconsequential decisions, the argument goes.  But figuring out how to use attention as a means to make sound evaluations of scholarship — better evaluations than we are currently relying on — is the focus of the movement we call “alt-metrics.”

Before we discuss attention value in more detail, however, we need to acknowledge another unfortunate reminder that the cultural value created by peer-review may be even more suspect and unreliable than we thought.  Last week we saw a troubling incident that provokes fundamental doubts about peer-review and how we value scholarly publications when Sage Publishing announced the retraction of sixty articles due to a “peer-review ring.”  Apparently a named author used fake e-mail identities, and maybe some cronies, in order to review his own articles and to cite them, thus creating an artificial and false sense of the value of these articles.  Sage has not made public the details, so it is hard to know exactly what happened, but as this article points out, the academic world needs to know — deserves to know — how this happened.  The fundamental problem that this incident raises is the suggestion that an author was able to select his own peer-reviewers and to direct the peer-review requests to e-mails he himself had created, so that the reviewers were all straw men.  Although all the articles were from one journal, the real problem here is that the system of peer-review apparently is simply not what we have been told it is, and does not, in fact, justify the value we are encouraged to place on it.

Sage journals are not inexpensive.  In fact, the recent study of “big deal” journal pricing by Theodore Bergstrom and colleagues (subscription required) notes that Sage journal prices, when calculated per citation (an effort to get at value instead of just looking at price), are ten times higher than those for journals produced by non-profits, and substantially higher even than Elsevier prices.  A colleague recently referred to Sage journals in my hearing as “insanely expensive.”  So it is a legitimate question to ask if we are getting value for all that money.  One way high journal prices are often justified, now that printing and shipping costs are mostly off the table, is based on the expertise required at publishing houses to manage the peer-review system.  But this scandal at the Journal of Vibration and Control raises the real possibility that Sage actually uses a kind of DIY system for peer-review that is easily gamed and involves little intervention from the publisher.  How else could this have happened?  So we are clearly justified in thinking that the value peer-review creates for consumers and readers is suspect, and that attention value is quite likely to be a better measure.

Attention can be measured in many ways.  The traditional impact factor is one attempt to analyze attention, although it only looks at the journal level, measures only a very narrow type of attention, and tells us nothing about specific articles.  Other kinds of metrics, those we call “alt-metrics” but ought to simply call metrics, are able to give us a more granular, and hence more accurate, way to assess the value of academic articles.  Of course, the traditional publication system inhibits the use of these metrics, keeping many statistics proprietary and preventing cross-platform measurements.  Given the Sage scandal, it is easy to see why such publishers might be afraid of article-level measures of attention.  The simple fact is that the ability to evaluate the quality of academic publications in a trustworthy and meaningful way depends on open access, and it relies on various forms of metrics — views, downloads, citations, etc. — that assess attention.
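
(For readers who want a concrete picture of what article-level measurement means, here is a small, purely hypothetical Python sketch.  The categories and weights are invented for illustration; no real altmetrics service uses this formula.  The point is only that attention can be counted, and combined, at the level of the individual article rather than the journal.)

    # Hypothetical article-level attention record; categories and weights are invented.
    from dataclasses import dataclass

    @dataclass
    class ArticleMetrics:
        views: int
        downloads: int
        citations: int
        social_mentions: int

    def attention_score(m, weights=None):
        """Combine raw counts into a single illustrative attention score."""
        weights = weights or {"views": 0.1, "downloads": 0.5,
                              "citations": 3.0, "social_mentions": 1.0}
        return (m.views * weights["views"]
                + m.downloads * weights["downloads"]
                + m.citations * weights["citations"]
                + m.social_mentions * weights["social_mentions"])

    article = ArticleMetrics(views=5400, downloads=820, citations=12, social_mentions=35)
    print(f"Attention score: {attention_score(article):.1f}")  # 540 + 410 + 36 + 35 = 1021.0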

But the most important message, in my opinion, that came out of the SPARC/ACRL forum is that in an open access environment we can do better than just measuring attention.  Attention measures are far better than what we have had in the past and what we are still offered by toll publishers. But in an open environment we can strive to measure intention as well as attention.  That is, we can look at why an article is getting attention and how it is being used.  We can potentially distinguish productive uses and substantive evaluations from negative or empty comments.  The goal, in an open access environment, is open and continuous review that comes from both colleagues and peers.  This was an exciting prospect when it was raised by Kristen Ratan of PLoS during the forum, where she suggested that we should develop metrics similar to the author-to-author comments possible on PubMed Commons that can map how users think about the scholarly works they encounter.  But, after the Sage Publishing debacle last week, it is easier to see that efforts to move towards an environment where such open and continuous review is possible are not just desirable, they are vital and very urgent.

Publishing ironies

Would Karl Marx have waived his copyright on principle?  I don’t know for sure, but I rather doubt it.  Marx was not entirely in sympathy with Proudhon’s famous assertion that “property is theft,” and in any case probably expected to make at least part of his living from his intellectual property.  Nevertheless, there is something rather odd about a left-wing press asserting its own copyright to prevent the digital distribution of the Collected Works of Marx and Engels.  Marx’s interests are not being protected, of course; his works have been in the public domain for many years.  But Lawrence & Wishart Publishing wants to protect its own income from this property by asserting a copyright in new material that is contained in the volumes, including notes, introductions and original translations, and it has demanded that the Marxists Internet Archive remove digital copies of the works.

It is interesting to consider who is being hurt by the distribution and by the take down demand.  The distribution, as I say, does no harm to Marx or his descendants, since the copyright has already expired.  The party harmed, of course, is the publisher, which can continue to collect revenue from public domain works, and is entitled to enforce exclusivity if, as in this case, there is new material that is currently protected by copyright.

So we have the irony of Marxist literature being protected by that most capitalist of business structures, a monopoly, and a left-wing press asserting that monopoly to limit dissemination of Marxist ideas.

Does the take down demand harm anyone?  Much of this literature is available in other forms on the Internet, owing to its public domain status.  Potential readers will presumably be harmed, to a degree, because English versions of some more obscure works by Marx and Engels will become unavailable if the translations in the Collected Works were the first of their kind.  But I can’t help thinking that the folks who are really harmed by this decision are the contemporary scholars who contributed to the volumes published by Lawrence and Wishart.  Perhaps they thought that by contributing to a collected works project they had the opportunity to offer a definitive interpretation of some particular essay or letter.  Perhaps they hoped to make an impact on their chosen field of study.  But those opportunities are greatly reduced now.  Potential readers will find the works they are looking for in other editions that remain available in the Archive, or they will not find them at all.  They will look to other scholars to help them understand those works, scholars whose writings are more accessible.

While I cannot dispute the right of Lawrence and Wishart to demand exclusivity, it is a clear reminder about how poorly the traditional system of publishing, based on state-enforced exclusivity, serves scholars in an age when there are so many opportunities in the digital environment to reach a much larger audience.  I suspect that the price of the Collected Works set is high, and the publisher is quite obscure (a colleague here just shrugged when I mentioned the name), so its distribution will be quite limited.  It is a sad illustration of how traditional publishing that relies on subscriptions for digital material is inextricably mired in the print model, trying desperately to reproduce the scarcity of print resources in defiance of the abundance possible in the digital environment.  The losers in that effort are the scholars whose ability to impact their field is deliberately reduced by this effort — beyond their control — to preserve exclusivity and scarcity.

“Beyond their control” leads directly to the other irony from the publishing industry that I want to share in this post.  A colleague recently sent me a PDF of the preliminary program for the conference being held in Boston next month of the Society for Scholarly Publishing.  It was the description of the very first seminar that caught both her eye and mine:

Seminar 1: Open Access Mandates and Open Access “Mandates”: How Much Control Should Scholars Have over Their Work?

Many universities now mandate that faculty authors deposit their work in Open Access university repositories.  Others are developing this expectation, but not yet mandating participation.  This seminar will review various mandatory and non-mandatory OA deposit policies, the implementation of different policies, and the responses of faculty members to them.  Panelists will discuss the degree to which academic institutions ought to determine the disposition of publications originating on their campus.

It is hard to believe that the SSP could print this session description with a straight face.  Surely they know that the law deliberately gives scholars a great deal of control over their work, in the form of copyright.  Scholars exercise that control in a variety of ways, including when they vote to adopt an open access policy, as many have done.  So where is the threat to scholars’ control over their own works?  Perhaps at the point where they are required to relinquish their copyright as a condition of publication.  If the SSP were really concerned about scholars having control over their own writings, the panel for this session would be discussing how to modify copyright transfer policies so that scholarly publishers would stop demanding that faculty authors give up all of their rights.

The SSP has carefully written the session description to make it sound like open access policies are imposed on faculty against their will.  But every policy I am aware of was adopted by the faculty themselves, usually after extensive discussions.  And the majority of policies have liberal waiver provisions, so that faculty who do not wish to grant a license for open access do not have to do so.  On the other hand, publishers almost never provide a similar way for authors to opt out of mandatory copyright transfer, other than paying a significant fee for an author-pays OA option, which offers authors a chance to buy what they already own.  Perhaps this concern about authorial control could be channeled into a discussion about the new models of scholarly publishing that are developing that do not require copyright transfer and that seek alternate ways to finance the improved access so many university faculties are indicating they want.

There is a lot to talk about here, especially in terms of authorial control.  Consulting the authors whose material is published in the Collected Works of Marx and Engels might have engendered discussion of a solution to the issue about the Marxists Archive other than simply demanding removal.  Maybe those authors should have resisted the demand to transfer copyright wholesale to Lawrence and Wishart in the first place. But publishers continue to think in terms of total control over the works they publish; that is the real threat to authors and that is the problem that the SSP ought to be addressing.

Attacking academic values

A new thing started happening here at Duke this week: we began getting inquiries from some faculty authors about how to obtain a formal waiver of our faculty open access policy.  We have had that policy in place for over three years, but for the first time a single publisher — the Nature Publishing Group — is telling all authors at Duke that they must obtain a waiver of the policy before their accepted articles can be published.  It is not clear why NPG suddenly requires these waivers after publishing many articles by Duke authors over the past three years, while the policy was in force, without requiring waivers.

Indeed, the waivers are essentially meaningless because of the way Duke has implemented its open access policy.  When the policy was adopted unanimously by our Academic Council in March 2010, the statement in favor of openness was pretty clear, but so was the instruction that implementing the policy not become a burden to our faculty authors.  So throughout the ensuing years we have tried to ensure that all archiving of published work in our repository be done in compliance with any publisher policies to which our authors have agreed.  NPG allows authors to archive final submitted manuscripts after a six-month delay, so that is what we would do, whether or not the author sought a policy waiver.  But suddenly that is not good enough; Nature wants a formal waiver even though it will have no practical effect.  The demand seems to be an effort to punish authors at institutions that adopt open access policies.

There are some comical aspects to this sudden requirement for waivers.  As I said, it seems to have taken NPG three years to figure out that Duke has an open access policy, even though we have made no secret of the fact.  Even more oddly, the e-mail that our faculty authors are getting from NPG lists nine schools from whose faculty such waivers are being required; apparently it was only four schools until recently.  But there are over thirty institutions with faculty-adopted OA policies in the U.S. alone.  Some of the largest schools and the oldest policies have not yet shown up on Nature’s radar; one wonders how they can be so unaware of the scholarly landscape on which their business depends.  NPG looks silly and poorly-informed, frankly, in the eyes of the academic authors I have spoken to.

In addition to making NPG look foolish, this belated demand for waivers has had positive effects for open access on our campus.  For one thing, it simply reminds our authors about the policy and gives us a chance to talk to them about it.  We explain why Nature’s demand is irrelevant and grant the waivers as a matter of course, while reminding each author that they can still voluntarily archive their work in compliance with the rights they have retained (which is the same situation as without the waiver).  I suspect that this move by NPG will actually increase the self-archiving of Nature articles in our repository.

Another effect of these new demands is that open access and the unreasonable demands of some commercial publishers have gotten back on the radar of our senior administrators.  Our policy allows the Provost to designate someone to grant waivers, and, in figuring out who that would be, we had a robust conversation that focused on how this demand is an attack on the right of our faculty to determine academic policy.

This last point is why I have moved, in the past few days, from laughing at the bumbling way NPG seems to be fighting its battle against OA policies to a sense of real outrage.  This effort to punish faculty who have voted for an internal and perfectly legal open access policy is nothing less than an attack on one of the core principles of academic freedom, faculty governance.  NPG thinks it has the right to tell faculties what policies are good for them and which are not, and to punish those who disagree.

As my sense of outrage grew, I began to explore the NPG website.  Initially I was looking to see if authors were told about the waiver requirement upfront.  As far as I can tell, they are not, in spite of rhetoric about transparency on the “information for authors” page.  The need for a waiver is not even mentioned on the checklist that is supposed to guide authors through the publication process.  It seems that this requirement is communicated to authors only after their papers have been accepted.  I suspect that NPG is ashamed of their stratagem, and in my opinion they should be.  But as I looked at NPG policies, and especially its License to Publish, my concern for our authors grew much deeper.

Two concerns make me think that authors need to be carefully warned before they publish in an NPG journal.

First, because this contract is a license and tells authors that they retain copyright, it may give authors a false sense that they are keeping something valuable.  But a careful reading shows that the retention of copyright under this license is essentially a sham.  The license is exclusive and irrevocable, and it encompasses all of the rights granted under copyright.  It lasts for as long as copyright itself lasts.  In short, authors are left with nothing at all, except the limited set of rights that are granted back to them by the agreement.  This is not much different than publishing with other journals that admit up front that they require a transfer of copyright; my concern is that this one is dressed up as a license, so authors may not realize that they are being just as completely shorn of their rights as they are by other publishers.

My bigger concern, however, is found in clause 7 of the NPG “license,” which reads in its entirety:

The Author(s) hereby waive or agree not to assert (where such waiver is not possible at law) any and all moral rights they may now or in the future hold in connection with the Contribution and the Supplementary Information.

I don’t think most publishers require authors to waive moral rights (I have actually added them into a publication contract), and insisting that authors do so is a serious threat to core academic values.  Moral rights are recognized by most countries of the world (including the UK, where NPG has its corporate offices) and usually include two basic rights — the right of attribution and the right to preserve the integrity of one’s work.  The United States is something of an outlier in that we do not have a formal recognition of moral rights in our copyright law, although we always assert that these values are protected by other laws.  But my point here is to wonder why NPG requires all of its authors to waive their right of attribution.  This is not an incidental matter; the clause is carefully structured so that even authors from countries that do not allow the waiver of moral rights (they are considered that important) still promise not to assert those rights, whether or not that promise would be enforceable in those countries.  Nature actively does not want its authors to be able to insist that their names always be associated with their work.  Why?  Does NPG imagine reusing articles it is given to publish in other ways, without providing proper attribution?  Remote as that possibility may seem, it is the only conceivable reason I can see for NPG to insert this bizarre clause.

So this week I discovered two ways in which Nature Publishing Group is actively attacking core academic values.  First, by trying to interfere in academic policy decisions made through independent faculty governance processes.  Second, by trying to exempt itself from the core principle of scholarship that scholars should get credit for the work they do.  Authors publish with Nature because they believe that the brand enhances their reputation.  But by giving NPG the ability to disassociate their work from their name, that value of the Nature brand is lost.  Why would any author agree to that?

Starting with those silly demands for a waiver of the open access policy, I discovered a much deeper threat being posed to our faculty authors.  With each waiver request I now have to have a conversation with all authors who publish with NPG.  I will use those conversations as an opportunity to encourage self-archiving.  But I now know that I also must warn authors that by signing the NPG license they are giving up the most precious thing they have — the right to get credit for their work.  I hope many of our authors will reconsider signing that license unaltered.  Since NPG has singled us out, I will now be singling out NPG for its especially egregious attack on fundamental academic values.

Walking the talk

All of the presentations at the SPARC Open Access meeting this week were excellent.  But there was one that was really special: an early-career researcher named Erin McKiernan brought everyone in the room to their feet to applaud her commitment to open access.  We are sometimes told that only established scholars who enjoy the security of tenure can “afford” to embrace more open ways to disseminate their work.  But Dr. McKiernan explained to us both the “why” and the “how” of a deep commitment to OA on the part of a younger scholar who is not willing to embrace traditional, toll-access publishing or to surrender her goals of advancing scholarship and having an academic career.

Erin McKiernan holds a Ph.D. from the University of Arizona and is now working as a scientist and teacher in Latin America.  Her unique experience informs her perspective on why young scholars should embrace open access.  Dr. McKiernan is a researcher in medical science at the National Institute of Public Health in Mexico and teaches (or has taught) at a couple of institutions in Latin America.  For her, the issue is that open access is fundamental to her ability to do her job; she told us that the research library available to her and her colleagues has subscriptions to only 139 journals, far fewer than most U.S. researchers expect to be able to consult.  Twenty-two of that number are only available in print format, because electronic access is too expensive.  This group includes key titles like Nature and Cell.  A number of other titles that U.S. researchers take for granted as core to their work — she mentioned Nature Medicine and PNAS — are entirely unavailable because of cost.  So in an age when digital communications ought to, at the very least, facilitate access to information needed to improve health and treat patients, the cost of these journals is, in Dr. McKiernan’s words, “impeding my colleagues’ ability to save lives.”  She made clear that some of these journals are so expensive that the choice is often between a couple of added subscriptions and the salary of a researcher.

This situation ought to be intolerable, and for Dr. McKiernan it is.  She outlined for us a personal pledge that ought to sound quite familiar.  First, she will not write, edit or review for a closed-access journal. Second, she will blog about her scientific research and post preprints of her articles so that her work is both transparent and accessible.  Finally, she told us that if a colleague chose to publish a paper on which she was a joint author in a closed-access journal, she would remove her name from that work.  This is a comprehensive and passionately-felt commitment to do science in the open and to make it accessible to everyone who could benefit from it — clinicians, patients and the general public as well as other scholars.

Listening to Dr. McKiernan, I was reminded of a former colleague who liked to say that he “would rather do my job than keep my job.”  But, realistically, Dr. McKiernan wants to have a career as a teacher and research scientist.  So she directly addressed the concerns we often hear that this kind of commitment to open access is a threat to promotion and tenure in the world of academia.  We know, of course, that some parts of this assertion are based on false impressions and bad information, such as the claim that open access journals are not peer-reviewed or that such peer-review is necessarily less rigorous than in traditional subscription journals.  This is patently false and really makes little sense — why should good peer-review be tied to a particular business model?  Dr. McKiernan pointed out that peer-review is a problem, but not just for open access journals.  We have all seen the stories about growing retraction rates and gibberish articles.  But these negative perceptions about OA persist, and Dr. McKiernan offered concrete suggestions for early-career researchers who want to work in the open and also get appropriate credit for their work.  Her list of ideas was as follows (with some annotations that I have added):

1. Make a list of open access publication options in your particular field.  Chances are you will be surprised by the range of possibilities.

2. Discuss access issues with your collaborators up front, before the research is done and the articles written.

3. Write funds for article processing charges for Gold open access journals into all of your grant applications.

4. Document your altmetrics.

5. Blog about your science, and in language that is comprehensible to non-scientists.  Doing this can ultimately increase the impact of your work; it can sometimes even lead to press coverage, and to better press coverage.

6. Be active on social media.  This is the way academic reputations are built today, so ignoring the opportunities presented is unwise.

7. If for some reason you do publish a closed-access article, remember that you still have Green open access options available; you can self-archive a copy of your article in a disciplinary or institutional repository.  Dr. McKiernan mentioned that she uses FigShare for her publications.

The most exciting thing about Erin McKiernan’s presentation was that it demolished, for many of us, the perception of open access as a risky choice for younger academics.  After listening to her expression of such a heartfelt commitment — and particularly after seeing the pictures of the people for whom she does her work, which put a more human face on the cost of placing subscription barriers on scholarship — I began to realize that, in reality, OA is the only choice.

So what about self-archiving?

There is a persistent problem with polemics.  When writing to address someone else’s position with which one disagrees, it is easy to lose sight of the proverbial forest for the trees.

In my previous two posts, I was addressing a misunderstanding that I am afraid might lead authors to be less attentive and assertive about their publication contracts than they should be.  The specific issue was whether or not it is feasible to maintain that a copyright is transferred only in a final version of a scholarly article, leaving copyright in earlier versions in the hands of the author.  I argued that this was not the case, that the distinction between versions is a construct used by publishers that has little legal meaning, and that author rights that do persist in earlier versions, as they often do, are created by the specific terms of a copyright transfer agreement (i.e., they are creatures of a license).  These points, which I believe are correct, prompted a number of people to get in touch with me, concerned about how these specific “trees” might impact the overall forest of self-archiving policies and practices.

So now I want to make several points that all address one conclusion: this argument about the nature of a copyright transfer does not necessarily have any significant impact on what we do to enhance and encourage self-archiving on our campuses.  Most of the practices I am aware of already take account of the argument I have been making, even if they are not explicit about it.

On the LibLicense list today, Professor Stevan Harnad, who is a pioneer in the movement to self-archive scholarly papers, posted a 10-point strategy for accomplishing Green open access.  Essentially, he points out that a significant number of publishers (his number is 60%) allow authors to self-archive their final submitted versions of their articles, and that those who have retained this right should exercise it.  Elsevier is one such publisher, about which more later.  Harnad argues that there are other strategies available for authors whose copyright transfer agreements do not allow self-archiving of even the final manuscript.  One option is to deposit the manuscript in a repository but embargo access to it.  At least that accomplishes preservation and access to the article metadata, and it facilitates fulfillment of individual requests for a copy.  Another option is to deposit a pre-print (the version of the article before peer-review) in a pre-print repository, which is a solution that has long worked well in specific disciplines like physics and computer science.

All of these strategies are completely consistent with the point I have been making about copyright transfer agreements.  Harnad’s model recognizes that copyright is transferred (perhaps improvidently) to publishers, and is based on authors taking full advantage of the rights that are licensed back to them in that transaction.  This makes perfect sense to me and nothing I have written in my previous two posts diminishes from this strategy.

One of the questions I have received a couple of times involves campus open access policies and how they affect, or are affected by, copyright transfers.  These policies often assert a license in scholarly articles, so the question is essentially whether that license survives a transfer of copyright.

It is a basic principle of law, and common sense, that one cannot sell, or give away, more than one owns.  So if an author has granted a license to her institution before she transfers her rights to a publisher, it seems clear that the license should survive, or, to put it another way, that the rights that are transferred to the publisher are still subject to this prior license.  There was an excellent article written in 2012 by law professor Eric Priest about this situation, and his conclusion is “that permission mandates can create legally enforceable, durable nonexclusive licenses.”  The article provides an extensive analysis of the legal effect of this “Harvard-style” license, and is well worth being read in its entirety by all who are interested in the legal status of Green open access.

An additional wrinkle to the status of a prior license is provided by section 205(e) of the copyright law, which actually addresses the issue of “priority between conflicting transfer of ownership and nonexclusive license.”  This provision basically affirms what I have said above, that a license granted prior to a transfer of copyright survives the transfer and prevails over the rights now held by the transferee, IF it is evidenced by a written instrument.  Because of this provision, some schools that have a license that is created by an open access policy also get a document from the author at the time of OA deposit that affirms the existence of that license.  Such documentation helps ensure the survival of a policy-based license even after the copyright is later transferred to a publisher.

Even when we decide that a license for Green open access exists and has survived a copyright transfer, however, we still have a policy decision to make about how aggressively to assert that license.  Many institutional practices look to the terms of the copyright transfer and try to abide by the provisions found therein, usually relating to the version that can be used and when it can be made openly accessible.  They do this, I think, to avoid creating an uncomfortable situation for the authors.  Even if legally that license they granted would survive the transfer of rights, if a conflict with the publisher developed, the authors (whom we are, after all, trying to serve) would be in a difficult place.  So my personal preference is to conform our practice to reasonable publisher policies about self-archiving and to work with authors to get unreasonable policies changed, rather than to provoke a dispute.  But this is a policy matter for specific institutions.

Finally, I want to say a couple of things specifically about Elsevier, since it was Elsevier’s take down notices directed against author self-archiving that began this series of discussions.

Elsevier’s policies permit authors to self-archive the final manuscript version of an article but not the published version, and, as far as I know, all of its take down notices were directed against final published versions on institutional or commercial websites.  So it is true, in my opinion, based on the analysis I have presented over the past week, that Elsevier is legally justified in this take down campaign.  It may well be a stupid and self-defeating strategy — I think it is — but they have the legal right to pursue it.  Authors, however, also have the legal right, based on Elsevier’s policies that are incorporated into their copyright transfer agreements, to post an earlier version of the articles — the author’s final manuscript — in place of these final published versions.  So I hope that, every time a take down notice from Elsevier directed against the author of the work in question is received, the article that is taken down is replaced by a final manuscript version of the same content.

As many know, Elsevier also has a foolish and offensive provision in its current copyright transfer agreement that says that authors are allowed to self-archive a final manuscript version of their article UNLESS there is an institutional mandate to do so.  As I have said before, this “you may if you don’t have to but not if you must” approach is an unjustifiable interference with academic freedom, since it is an attempt to tie faculty rights to specific policies that the faculty themselves adopt to further their own institutional and academic missions.  Elsevier should be ashamed to take this stance, and our institutions that value academic freedom should protest.  But based on what has been said above, we can also see how futile this approach really is.  If the institution has a policy-created license, that license probably survives the copyright transfer, as Eric Priest argues.  In that case, the denial of a self-archiving right only in cases where a license exists is meaningless precisely because that license does exist; authors could self-archive based on the license and do not need the grant of rights that Elsevier is petulantly withholding.  I said above that institutions should consider whether or not they want to provoke disputes by relying on the prior existence of a license to self-archive.  Elsevier, however, seems to have decided to provoke exactly that dispute with this provision, and they are even more unwise to do so since it is likely to be a losing proposition for them.

Half-lives, policies and embargoes

Over the holidays I was contacted by a writer for Library Journal asking me what I thought about a study by Phil Davis, commissioned and released by the Association of American Publishers, that analyzed the “article half-life” for journals in a variety of disciplines and reported on the wide variation in that metric.  The main finding is that this idea of half-life — the point at which an article has received half of its lifetime downloads — varies a great deal from discipline to discipline.  The writer also asked what I thought about a blog post on the Scholarly Kitchen in which David Crotty argues that this study shows that the experience of the NIH with article embargoes — that public access after a one-year embargo does not harm journal subscriptions — cannot be generalized because the different disciplines vary so much.  I sent some comments, and the article in LJ came out early last week.
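
(Since “article half-life” may be an unfamiliar metric, here is a minimal Python sketch of how it could be computed from a monthly download series.  The download numbers are invented for illustration; the point is simply that an article whose use is concentrated in the first months has a short half-life, while one whose use is spread over years has a long one.)

    # Sketch of the "article half-life" calculation described above; the data are invented.
    def article_half_life(monthly_downloads):
        """Return the month (1-indexed) by which half of all recorded downloads occurred."""
        total = sum(monthly_downloads)
        running = 0
        for month, count in enumerate(monthly_downloads, start=1):
            running += count
            if running >= total / 2:
                return month
        return len(monthly_downloads)

    fast_decay = [300, 180, 120, 80, 60, 40, 30, 20, 15, 10, 8, 5]        # use concentrated early
    slow_decay = [40, 38, 36, 35, 33, 32, 30, 30, 29, 28, 27, 26] * 3     # use spread over three years

    print(article_half_life(fast_decay))  # 2  (months)
    print(article_half_life(slow_decay))  # 18 (months)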

Since this exchange I have learned that the Davis study is being presented to legislators to prove the point Crotty makes — that public access policies should have long embargoes on them to protect journal subscriptions.  It is worth noting that Davis does not actually make that claim, but his study is being used to support that argument in the on-going debate over implementing the White House public access directive.  That makes it more important, in my opinion, to be clear about what this study really does tell us and to recognize a bad argument when we see it.

Here is my original reply to the LJ writer, which is based on the fact that this metric, “article half-life,” is entirely new to me and its relevance is completely unproved.  It certainly does not, in my opinion, support the much different claim that short embargoes on public access will lead to journal subscription cancellations:

I have to preface my comments by saying that I was only vaguely aware of Davis’ study before you pointed it out.  So my comments are based on only a very short acquaintance.

I have no reason to question Davis’ data or his results.  My question is about why the particular focus on the half-life of article downloads was chosen in the first place, and my issue is with the attempt to connect that unusual metric with the policy debate about public access policies and embargoes.

As far as I can tell, article half-life tells us something about usage, but not too much about the question of embargoes.  The discussion of how long an embargo should be imposed on public access is supposed to focus on preventing subscription cancellations.  What I do not see is any connection between this notion of article usage half-life and journal cancellation.  It is a big leap from saying that a journal retains some level of usefulness for X number of years to saying that an embargo shorter than X will lead to cancelled subscriptions, yet I think that is the argument that is being made.

Here are two paragraphs from Crotty’s Scholarly Kitchen post:

[snip]“As I understand it, the OSTP set a 12-month embargo as the default, based on the experience seen with the NIH and PubMed Central. The NIH has long had a public access policy with a 12-month embargo, and to date, no publisher has presented concrete evidence that this has resulted in lost subscriptions. With this singular piece of evidence, it made sense for the OSTP to start with a known quantity and work from there.

The new study, however, suggests that the NIH experience may have been a poor choice for a starting point. Clearly the evidence shows that by far, Health Science journals have the shortest article half-lives. The material being deposited in PubMed Central is, therefore, an outlier population, and many (sic) not set an appropriate standard for other fields.”[end quotation]

What immediately strikes me is the unacknowledged transition between the two paragraphs.  In the first he is talking about lost subscriptions, which makes sense.  But in the second he is talking about this notion of download half-life.  What neither Davis nor Crotty gives us, however, is the connection between these half-life numbers and lost subscriptions.  In other words, why should policy decisions about embargoes be made based on this half-life number?  At best the connection between so-called article half-life and cancelled subscriptions is based on a highly speculative argument that has yet even to be made, much less proved.  At worst, this metric is irrelevant to the debate.

My overall impression is that the publishing industry is unable to show evidence of lost subscriptions based on the NIH public access policy (which Crotty acknowledges), so they are trying to introduce this new concept to cloud the discussion and make it look like there is a threat to their businesses that still cannot be documented.  I think it is just not the right data point on which to base the discussion about public access embargoes.

A second point, of course, is that even if it were proved that there would be some economic loss to publishers with 6 or 12 month embargoes, that does not complete the policy discussion.  The government does not support scientific research in order to prop up private business models.  And the public is entitled to make a decision about return on its investment that considers the impact on these private corporate stakeholders but is not dictated by their interests.  It may still be good policy to insist on 6 month embargoes even if we had evidence that this would have a negative economic impact on [some] publishers.  Government agencies that fund research simply are not obligated to protect the existing monopoly on the dissemination of scholarship at the expense of the public interest.

By the way, Crotty is wrong, in the passage quoted above, to say that there is no evidence that short embargoes do not impact subscriptions other than the NIH experience.  The European Commission did a five-year pilot study testing embargoes across disciplines and concluded that maximum periods of six months in the life sciences and 12 months for other disciplines were the correct embargoes.

In addition to what I said in the long quote above, I want to make two additional points.

First, it bears repeating that Davis’ study was commissioned by the publishing industry and released without any apparent peer-review.  Such review might have pointed out that the actual relevance of this article half-life number is never explained or defended.  But the publishing industry is getting to be in the habit of attacking open access using “data” that is not subject to the very process that they tell us is at the core of the value that they, the publishers, add to scholarship.

The second point is that I have never heard of any librarian who used article half-life to make collecting or cancellation decisions.  Indeed, I had never even heard of the idea until the Davis study was released, and neither had the colleagues I asked.  We would not have known how to determine this number even if we had wanted to.  It is not among the metrics, as far as I can determine, that publishers offer to us when we buy their packages and platforms.  So it appears to be a data point cooked up because of what the publishing industry hoped it would show, which is now being presented to policy-makers, quite erroneously, as if it were relevant to the discussion of public access and embargoes.  Crotty says in his post that rational policy should be evidence-based, and that is true.  But we should not accept anything that is presented as evidence just because it looks like data; some connection to the topic at hand must be proved or our decision-making has not been improved one bit.

We cannot say it too often — library support for public access policies is rooted in our commitment to serve the best interests of scholarship and to see to it that all the folks who need or could use the fruits of scholarly research, especially taxpayer-funded research, are able to access them.  We are not supporting these policies in order to cancel library subscriptions, and the many efforts in the publishing industry to distract from the access issue and to claim, on the basis of no evidence or irrelevant data, that their business models are imperiled are just so many red herrings.

NB — After this was written I discovered the post on the same topic by Peter Suber from Harvard, which comes to many of the same conclusions and elaborates on the data uncovered by the European Commission and the UK Research Councils that are much more directly relevant to this issue.  You can read his comments here.

Taking a stand

When I wrote a blog post two weeks ago about libraries, EBSCO and Harvard Business Publications, I was attending the eIFL General Assembly in Istanbul, and I think the message I wanted to convey — that librarians need to take a stand on this issue and not meekly agree to HBP’s new licensing fee — was partly inspired by my experiences at the eIFL GA.

Having attended two eIFL events in the past four years, I have learned that many U.S. librarians are not aware of the work eIFL does, so let me take a moment to review who they are.  The core mission of eIFL is to “enable access to knowledge in developing and transition countries.”  They have a small, distributed staff that works on several projects, including support for the development of library consortia in their partner countries, negotiating licenses for electronic resources on behalf of those consortia, building capacity for advocacy focused on copyright reform and open access, and encouraging the use of free and open source software by libraries.  The key clientele for eIFL are academic libraries, and all of the country coordinators and delegates that I met at the General Assembly were from colleges and universities.  But eIFL also has a project to help connect communities to information through public libraries in the nations they serve.

The delegates at the General Assembly came from Eastern Europe, Central Asia and Africa.  These librarians face a variety of local conditions and challenges, but they share a common commitment to improving information access and use for the communities they serve.  It was the depth and strength of that commitment that I found so inspiring at the event.  I wanted to encourage U.S. librarians to take a stand because these librarians from all over the world are themselves so consistently taking a stand.

One way these librarians are taking a stand is in negotiations with publishers.  There were lots of vendor and publishing representatives at the General Assembly, and time for each delegation to speak with each vendor (“speed dating”) was built into the schedule.  Although these meetings were short, they were clearly intense.  One vendor rep told me that they were difficult because the librarians had diverse needs and came well-prepared to negotiate.  He also told me that he enjoyed the intensity because it went beyond “just selling.”  And that is the key.  These librarians are supporting each other, learning from each other and from speakers at the event what they can expect and what they can aspire to with their electronic resources, and taking those aspirations, along with their local needs, into negotiations.  They are definitely not “easy” customers because they are well-informed and willing to fight for the purchases that best serve their communities.  Because they cannot buy everything, they buy very carefully and drive hard bargains.

Another area in which these eIFL librarians are taking a stand is copyright in their individual nations.  I saw several presentations, from library consortia in Poland, Uzbekistan, Mongolia and Zimbabwe, about how those consortia had become recognized stakeholders in national discussions of copyright reform.  One consortium is offering small grants for librarians to become advocates for fair copyright; another has established a copyright “help desk” to bring librarians up to speed.  One of the eIFL staff emphasized to me the importance of this copyright work.  Copyright advocacy is part of the solution, I was told, to the problem of the burdensome licensing terms that have so often been seen in those parts of the world.

One story was particularly interesting to me.  An African librarian told how publishers in her country often view libraries and librarians as a major “enemy” because it is believed that they encourage book piracy.  Through the consortium of academic libraries, librarians have now become actively involved in a major publishing event that is held annually in her country, and recently the libraries were asked to nominate a board member to that group.  As a result of these efforts, the relationship between librarians and publishers has improved, and there is much more understanding (thus less suspicion) about library goals and priorities.

eIFL librarians are also taking a stand amongst their own faculties by advocating for open access.  There were multiple stories about new open access policies at different universities, and about the implementation of repository software.  There were also multiple presentations about the advantages that open resources offer to education.  These presentations discussed MOOCs (that was me), open data, alternative methods of research assessment and text-mining.  If these sound familiar, they should.  In spite of difficult conditions and low levels of resourcing, these librarians are investigating the same opportunities and addressing the same challenges as their U.S. counterparts.  Just to illustrate the breadth of the interest in the whole topic of openness, when I spent an hour fielding questions about MOOCs I wrote down the home countries of the librarians who grilled me: Azerbaijan, Lesotho, Kyrgyzstan, Lithuania, Malawi, Maldives, Macedonia, Fiji, China, Thailand, Ghana, Belarus, Armenia, Uzbekistan, Swaziland and Mongolia.  They came with questions that challenged my assumptions (especially about business models) and deepened my own understanding of the international impact of open online courses.

There is one last conversation I had that I want to report on, both for its own sake and because of how it illuminates the eIFL mission.  Mark Patterson, the editor of the open access journal eLife, was at the GA to talk about research assessment.  Later I sat and talked with him about eLife.  He told me that the most exciting thing about eLife was its model, whereby scientists reclaim the decision about what is important to science.  While the editors of subscription-based journals must always strive for novelty in order to defend and extend their competitiveness, eLife (and, presumably, other open access journals) has scientists making decisions about what is important to science, whether or not it is shiny and new.  Sometimes an article is really important because it refines some idea or process in a small way, or because of its didactic value.  Such articles would escape the notice of many subscription journals, but the editors at eLife can catch them and publish them.  And the reason this seems to fit so well into the eIFL context is that it is about self-determination.  Whether I was talking with Mark about open access journals or with the country delegates at the GA, this was the dominant theme: the need to put self-determination at the center of scholarly communications systems, from publishing to purchasing.

Can we “fix” open access?

The later part of this past week was dominated, for me, by discussions of the article published in Science about a “sting” operation directed against a small subset of open access journals, which purports to show that peer-review is sometimes not carried out very well, or not at all.  Different versions of a “fake” article, which the authors tell us could easily be determined to be poor science, were sent to a lot of different OA journals, and a large number of them accepted it.  This has set off lots of smug satisfaction amongst those who fear open access — I have to suspect that the editors of Science fall into that category — and quite a bit of hand-wringing amongst those, like myself, who support open access and see it as a way forward out of the impasse that is the current scholarly communications system.  In short, everyone is playing their assigned parts.

I do not have much to say in regard to the Science article itself that has not been said already, and better, in blog posts by Michael Eisen, Peter Suber and Heather Joseph.  But by way of summary, let me quote here a message I posted on an e-mail list the other day.

My intention is not so much to minimize the problem of peer-review and open access fees as it is to arrive at a fair and accurate assessment of it.  As a step toward that goal, I do not think the Science article is very helpful, owing to two major problems.

First, it lacked any control, which is fundamental for any objective study.  Because the phony articles were sent only to open access journals, we simply do not know whether they would have been “caught” more often in the peer-review process of subscription journals.  There have been several such experiments with traditional journals that have exposed similar problems.  With this study, however, we have nothing to compare the results to.  In my opinion, there is a significant problem with the peer-review system as a whole; it is over-loaded, it tends toward bias, and, because it is pure cost for publishers, no one has much incentive to make it better.  By looking only at a sliver of the publishing system, this Science “sting” limited its ability to expose the roots of the problem.

The other flaw in the current study is that it selected journals from two sources, one of which was Jeffrey Beall’s list of “predatory” journals.  By starting with journals that had already been identified as problematic, it pre-judged its results.  It was weighted, in short, to find problems, not to evaluate the landscape fairly.  Also, it only looked at OA journals that charge open access article processing fees.  Since the majority of OA journals do not charge such fees, it does not even evaluate the full OA landscape.  Again, it focused its attention in a way most likely to reveal problems.  But the environment for OA scholarship is much more diverse than this study suggests.

The internet has clearly lowered the economic barriers to entering publishing.  In the long run, that is a great thing.  But we are navigating a transition right now.  “Back in the day” there were still predatory publishing practices, such as huge price increases without warning and repackaging older material to try to sell it twice to the same customer.  Librarians have become adept at identifying and avoiding these practices, to a degree, at least.  In the new environment, we need to assist our faculty in doing the same work to evaluate potential publication venues, and also recognize that they sometimes have their own reasons for selecting a journal, whether toll-access or open, that defy our criteria.  I have twice declined to underwrite OA fees for our faculty because the journals seemed suspect, and both times the authors thanked me for my concern and explained why they wanted to publish there anyhow.  This is equally true in the traditional and the OA environments.  So assertions that a particular journal is “bad” or should never be used need to be qualified with some humility.

At least one participant on the list to which I sent this message was not satisfied, however, and has pressed for an answer to the question of what we, as librarians, should do about the problem that the Science article raises, whether it is confined to open access or a much broader issue.

By way of an answer, I want to recall a quote (which paraphrases earlier versions) from a 2007 paper for CLIR by Andrew Dillon of the University of Texas – “The best way to predict the future is to help design it.”  Likewise, the best way to avoid a future that looks unpleasant or difficult is to take a role in designing a better one.

That the future belongs to open access is no longer really a question, but questions do remain.  Will open access be in the hands of trustworthy scholarly organizations?  Will authors be able to have confidence in the quality of the review and publication processes that they encounter?  Will open access publishing be dominated by commercial interests that will undermine its potential to improve the economics of scholarly communications?  If libraries are concerned about these questions, the solution is to become more involved in open access publishing themselves.  If the economic barriers to entering publishing have been lowered by new technologies, libraries have a great opportunity to ensure the continuing, and improving, quality of scholarly publishing by taking on new roles in that enterprise.

Many libraries are becoming publishers.  They are publishing theses and dissertations in institutional repositories.  They are digitizing unique collections and making them available online.  They are assisting scholars in archiving their published works for greater access.  And they are beginning to use open systems to help new journals develop and to lower costs and increase access for established journals.  All these activities improve the scholarly environment of the Internet, and the last one, especially, is an important way to address concerns about the future of open access publishing.  The recently formed Library Publishing Coalition, which has over 50 members, is testament to the growing interest that libraries have in embracing this challenge.  Library-based open access journals and library-managed peer-review processes are a major step toward addressing the problem of predatory publishing.

In a recent issue brief for Ithaka S+R, Rick Anderson from the University of Utah calls on libraries to shift some of their attention from collecting what he calls “commodity” works, which many libraries buy, toward making available the unique materials held in specific library collections (often our “special” collections).  This is not really a new suggestion, at least for those who focus on issues of scholarly communication, but Anderson’s terminology makes his piece especially thought-provoking, although it also leads him astray, in my opinion.  While Anderson’s call to focus more on the “non-commodity” materials, often unique, that our libraries hold is well-taken and can help improve the online scholarly environment, I do not believe that this is enough for library publishing to focus on.  Anderson’s claim that focusing on non-commodity documents will allow us to “opt out of the scholarly communication wars” misses a couple of points.  First, it is not just publishers and libraries who are combatants in these “wars;” the scholars who produce those “commodity” documents are themselves frustrated and ill-served by the current system.  Second, there is very little reason left for those products — the articles and books written by university faculty members — to be regarded as commodities at all.  The need for extensive investment of capital in publishing operations, which is what made these works commodities in the first place, was a function of print technology and is largely past.

So libraries should definitely focus on local resources, but those resources include content created on campuses that has previously been considered commodities.  The goal of library publishing activities should be to move some of that content — the needs and wishes of the faculty authors should guide us — off of the commodity market entirely and into the “gift economy” along with those other non-commodity documents that Anderson encourages libraries to publish.

If libraries refocus their missions for a digital age, they will begin to become publishers not just of the unique materials found in “special” collections, but also of the scholarly output of their constituents.  This is a service that will grow in importance over the coming years, and one that is enabled by technologies that are being developed and improved every day.  Library publishing, with all of the attendant services that really are at the core of our mission, is the ultimate answer to how libraries should address the problem described only partially by the Science “sting” article.