Category Archives: Scholarly Publishing

Open Access at the tipping point

[Image: Open Access Day bookmark, used under a CC-BY license]

[ guest post by Paolo Mangiafico ]

As readers of this blog almost certainly know, this week was Open Access Week, and it’s been heartening to see all of the stories about how open access is creating new opportunities for scholarship, and transforming scholarly communication.

It’s also been interesting to see organizations that one might not think of as being open access proponents proclaiming their OA bona fides this week. On Tuesday this press release from Nature came across my Twitter feed. I shared it with my colleagues Kevin and Haley, joking that our job was done and we could go home, now that even in Nature over 60% of published research articles were open access under Creative Commons licenses.

Even though Nature neglects to mention in this release that they are bringing in a lot of money from open access through high article processing charges (they aren’t doing this just to be nice), I still think it’s an important milestone because it shows that open access is becoming the norm, even in mainstream, high visibility journals. I’m optimistic that this is another indicator that we’re on our way to some kind of tipping point for open access, where other effects will come into play.

One of the statistics given in the press release is that the percentage of authors choosing CC-BY licenses in Nature Publishing Group’s open access journals rose from 26% in 2014 to 96% in September 2015. Just last year, a study by Taylor & Francis indicated that, when asked (or at least when asked with the leading questions in the T&F study), authors were more likely to choose other CC variants, yet in Nature open access journals the choice of CC-BY is now nearly unanimous. Maybe “choice” is too strong a word – they appear to have achieved this primarily by setting CC-BY as the default. Just as in the past when signing over all your rights to a publisher was the default (and, unfortunately, in many journals still is), it seems that few authors realize they can make a change, or see a strong reason to do so. What this signals is the power of setting a default.

When we were working toward an open access policy for Duke University faculty in 2010, we talked about setting the default to open. As we discussed the proposed open access policy with Duke faculty, we never called it a mandate, and we haven’t treated it as one: the policy doesn’t force anyone to do something they are disinclined to do. But absent any expressed desire to the contrary (via an opt-out), the policy enabled the faculty and the University to make as much of the scholarship produced at Duke as widely available as possible. We approached the policy as a default position, and built services to make it easy for Duke authors to make their work open access via an institutional repository and have it appear on their University and departmental profile pages, so there are now few reasons not to do it. It will still take time, but I think this “green” open access option is something authors will increasingly be aware of and see as a natural and easy step in their publishing process. They’ll see open access links showing up on their colleagues’ profiles, being included in syllabi, getting cited by new audiences around the world, and being linked from news stories, for example, and word of mouth will tell them that it’s really easy to get that for themselves too.

What makes me optimistic about the figures in the Nature press release is that they point to an environment where even in high visibility journals open access is no longer that thing only your activist colleague does, but is something that many people are doing as a matter of course. And as the percentage of authors making their work open access grows, suddenly various decision-making heuristics and biases start to tip in the other direction. Pretty soon the outlier will be the scholar whose work is not openly available, either via “green” repositories or “gold” open access journals, and I think momentum toward almost universal OA will increase.

Our work isn’t done, of course. Even with open access as a default, the next challenge will be to manage the costs. So far the shift to OA has mostly been an additional cost, and the big publishers who made big profits before are continuing to make big profits now via these new models. Even as OA becomes prevalent, and scholars see it as the norm, we’ll still have to work hard to find ways to exert downward pressure on article processing charges and other publishing costs, so that open access doesn’t just become another profit center that exploits scholarly authors and their funders and institutions. We need to do a better job of surfacing these costs, to put mechanisms in place that keep costs down, and perhaps to shift our support to other publishers and other models.

But for now let’s call this a victory. Recognizing there’s still a lot to do, let’s pop the champagne, celebrate Open Access Week, and then get back to work on the next round of creating a better scholarly communication ecosystem.


What happens when there is no publication agreement?

Scholarly communication discussions and debates usually focus, quite obviously, on the terms of publication agreements and the licenses those agreements often give back to authors to use their own work in limited and specific ways.  This is such a common situation that it is hard to realize that it is not universal for scholarly authors.  But recently it has come to my attention that some authors actually never sign any agreement at all with their publishers, and in one situation that I will explain in a moment, that led to a dispute with the publisher about whether or not the author could place her article in an institutional repository.  The issue, broadly speaking, is when an implied license can be formed and what such licenses might permit.

In a couple of previous posts, I have discussed the idea of implied licenses: licenses that are formed without an explicit signature, usually because someone takes an action in response to a contractual offer, and the action is clear enough to manifest acceptance of that offer.  One of the most common implied licenses that we encounter underlies the transaction every time we open a web page.  Our browsers make a copy of the web page code, of course, and that copy implicates copyright.  But our courts have held that when someone makes a web page accessible, they are offering an implied license that authorizes the copying necessary to view that webpage.  No need to contact the rights holder each time you want to view the page, and no cause of action for infringement based simply on the fact that someone viewed a page and therefore copied the code, temporarily, in their browser cache.

It is important to recognize that such licenses are quite limited.  An implied license can, at best, be relied upon when doing the obvious acts that must have been anticipated by the offeror, such as viewing a web page.  An implied license would not, for example, authorize copying images from that website into a presentation or brochure; that would be well beyond the scope of a license implied by merely making the site available.  For those sorts of activities, either permission (an explicit license) or an exception in the copyright law would be needed.

So how might implied licensing help us untangle the situation where an author has submitted her work to a journal, and the journal has published it without obtaining an explicit transfer of rights or a license?  As I said, this is a reversal of the normal situation, and it caught me by surprise.  But I have heard of it now from three different authors, all publishing in small, specialized journals in the humanities or social sciences.

The way the question came to me most recently was from an author who had published in a small journal and later asked, because she had no documentation that answered the question, if she could deposit her article in an open repository.  The publisher told her that she could do so only after obtaining permission from the Copyright Clearance Center, and she came to me, through a colleague, asking how the publisher could insist on her getting permission if she had not signed a transfer document.  Could the publisher, she asked, claim that the transfer had taken place through some kind of implied contract?

The answer here is clearly no; the copyright law says explicitly, in section 204, that “A transfer of copyright ownership… is not valid unless an instrument of conveyance, or a note or memorandum of the transfer, is in writing and signed by the owner of the rights conveyed or such owner’s duly authorized agent.”  So an implied transfer of rights is impossible; all that can be conveyed implicitly is a non-exclusive license (as in the web site example).

In the case of my author with no publication agreement, she remains the copyright holder, whatever the publisher may think.  At best, she has given the publisher a non-exclusive license, by implication from her act of submitting the article, to publish and distribute it in the journal. This is not really all that unusual. I have written opinion pieces for several newspapers in the past and never signed a copyright transfer; the pressure of daily publication apparently leads newspapers to rely on this kind of implied license quite frequently.  But it is unusual in academia, and requires some unpacking.  No transfer of copyright could have occurred by implication, so the rights remain with the author, who is free to do whatever she likes with the article and to authorize others to do things as well.  The publisher probably does have an implied license for publication, but that license is non-exclusive and quite limited.

As we worked through this situation, three unanswered questions occurred to me, and I will close by offering them for consideration:

  1. Are authors always correct when they tell us they did not sign a publication agreement?  Sometimes an agreement may have been forgotten amidst all the paperwork of academic life, or the agreement might have been online, a “click-through” contract at the point of submission.  We need to probe these possibilities when confronted with the claim that no agreement was signed, but those are very delicate conversations to have.
  2. Returning for a moment to the possibility of a click-through agreement that the author could have forgotten, we might also ask whether this type of arrangement, increasingly common among academic publishers, is really valid to transfer copyright.  I am well aware that courts are becoming quite liberal in accepting online signatures and the like, but is there a limit?  Where a statute explicitly requires a signed writing for a specified effect, as Title 17 does for assignment of copyright, could an author challenge the sufficiency of a (non-negotiable) click-through agreement?  I expect that this issue will eventually come before a court (if any readers know of such cases, please add the information in the comments), and I will be very interested in that discussion.
  3. Finally, what do we make of the journal’s claim, in the situation I was asked about, that the author must purchase permission to use her own work from the Copyright Clearance Center?  If there was no transfer of rights, the journal has no right to make such a demand and the CCC has no right to sell a license.  This is one more situation where it seems that the CCC is sometimes used to sell rights that are not actually held by the putative licensors, and it renews my concern about whether, and when, we actually are getting value for the money we spend on licensing.

Can this gulf be bridged?

Litigants in court cases often disagree sharply about the law and its application to the facts, so it is probably not a surprise that the briefs filed in the District Court’s re-examination of its ruling in the Georgia State copyright infringement trial should see the issues in such starkly different terms.

If you read the publishers’ brief, the 11th Circuit decision that sent the case back to the District Court changed everything, and every one of those 70 excerpts found to be fair use at trial now must be labeled infringement.  This is absurd, of course, and I don’t actually believe that the publishers expect, or even hope, to win the point.  They want a new ruling that they can appeal.  In my opinion the publisher strategy has now shifted from an effort to “win” the case, as they understand what winning would mean, to a determination to keep it going, in order to profit from ongoing uncertainty in the academic community (and, possibly, to spend so much money that GSU is forced to give up).

On the other hand, the brief from Georgia State, filed last Friday, argues that all 70 of those challenged excerpts are still fair use.  It seems likely that the actual outcome will be somewhere in the middle, and, to be fair to them, GSU does recognize this, by making a concession the publishers never make.  For a number of excerpts where a digital license was shown to be available at the time of the trial, GSU argues that the available licenses were not “reasonable” because they force students to pay based on what they are getting access to, whether or not the specific excerpt is ever actually used.  This is an interesting argument, tracking a long-standing complaint in academic libraries.  If the court accepts it, it would dramatically restructure the licensing market.  But GSU also seems to recognize that this is a stretch, and ends several of its analyses of specific excerpts by saying that the specific use “should be found to be fair if the Court finds the licensing scheme unreasonable, and unfair if the Court finds the licensing scheme reasonable.”  So it seems GSU is prepared for what I believe is the most likely outcome of this reconsideration on remand — a split between fair uses and ones that are not fair that is different than the original finding — probably with some more instances of infringement — but still a “split decision.”

The availability of licenses is one of the interesting issues in these briefs.  The publisher plaintiffs now argue that licenses were available, back in 2009, for those excerpts where the judge said no licenses were “reasonably available.”  They are continuing to try to introduce new evidence to this effect, which GSU vigorously opposes.  But those of us who have been involved in e-reserves for a while remember clearly that such licenses were not available at all through the CCC from Cambridge University Press and only occasionally from Oxford.  So what is this new evidence (which the publishers’ brief says was not offered before because they were so surprised that it was being requested)?  It is an affidavit from a VP at the CCC, and my best guess is that it would argue that licenses were “reasonably available” because it was possible, through the CCC system, to send a direct request to the publisher in those instances where standard licenses for digital excerpts were not offered.  GSU argues that the evidence-gathering phase of the case is over, that a ruling about licenses has been made and affirmed by the Court of Appeals, and that the issue is settled.  A lot will depend on how Judge Evans views this issue; so far she has ruled against admitting new evidence.

Another controversy, about which I wrote before, involves whose incentive is at stake.  The Court of Appeals wrote a lengthy discussion of the incentive for authors to write, and its importance for the fundamental purpose of copyright.  To this they appended an odd sentence that says they are “primarily concerned… with [publisher’s] incentive to publish.”  The publishers, of course, hang a lot of weight on this phrase, and take it out of context to do so.  GSU, on its side, makes a rather forced argument intended to limit the impact of the sentence.  Neither side can admit what I believe is the truth here: that this one sentence was inserted into an opinion where it does not fit because doing so was a condition of the dissenting judge for keeping his opinion as a “special concurrence” rather than the dissent it really was.  If I am right, this compromise served the publishers well, since they can now cite the phrase from the actual opinion of the Court; it is seldom useful to cite a dissent, after all.  So the publishers quote this phrase repeatedly and use it to argue that all of the factors really collapse into the fourth factor, and that any impact at all, no matter how small, on their markets or potential markets effectively eliminates fair use.  Authors, and the reasons that academic authors write books and articles, do not appear in the publishers’ analysis, as, indeed, they could not if the argument for publisher hegemony over scholarship is to be maintained.

GSU, as we have already seen, takes a more balanced approach.  For the first factor, they discount the publishers’ attempt to make “market substitution” a touchstone even at that point in the analysis, and focus instead on the 11th Circuit’s affirmation that non-profit educational use favors fair use even when transformation is not found.  The GSU brief fleshes this out nicely by discussing the purpose of copyright in relationship to scholarship and teaching.  On the second factor, GSU discusses author incentives directly, which in my opinion is the core of the second factor, even though courts seldom recognize this.  GSU also points out that the publishers have ignored the 11th Circuit’s instruction, both here and in the third factor analysis, to apply a case-by-case inquiry to those factors; instead, the publishers assert that since every book contains some authorial opinion, the second factor always disfavors fair use, and that no amount is small enough to overcome the possibility of “market substitution.”  For their part, GSU introduces, albeit briefly, a discussion of the content of each excerpt (they are often surveys or summaries of research) under factor two, and of the reason the specific amount was assigned under factor three.

As I said, these differences in approach lead to wildly different conclusions.  Consider these paragraphs by which each side sums up its fair use analysis for each of the excerpts at issue:

The publishers end nearly every discussion of a specific passage with these words — “On remand, the Court should find no fair use as to this work because: (1) factor one favors fair use only slightly given the nontransformativeness of the use; (2) factor two favors Plaintiffs, given the evaluative/analytical nature of the material copied; (3) factor three favors Plaintiffs because even assuming narrow tailoring to Professor _____________’s pedagogical purpose, it is counterbalanced by the threat of market substitution, especially in light of the repeated use; and (4) factor four “strongly favors Plaintiffs,” and is entitled to “relatively great weight,” which tips the balance as to this work decidedly against fair use.”

On the other side, GSU closes many discussions (although there is more diversity in their analysis and their summations than in the publishers’) this way — “Given the teaching purpose of the use, the nature of the work and the decidedly small amount used, the fact that this use did not supplant sales of the work, and the lack of digital licensing, the use of this narrowly tailored excerpt constituted fair use.”

These are starkly contrasting visions of what is happening with these excerpts, and with electronic reserves as a whole, as practiced at a great many universities.  It will be interesting, to say the least, to see how Judge Evans decides between such divergent views.

Who pays, and what are we paying for?

[ guest post by Paolo Mangiafico ]

I wasn’t at the Society for Scholarly Publishing’s annual meeting in Virginia last week, but was able to follow some of the presentations and discussions via the #SSP2015 hashtag on Twitter and some followup blog posts. Something that caught my eye yesterday was a post on Medium by @CollabraOA titled “What exactly am I paying for?” that summarized a panel discussion at SSP titled “‘How Much Does it Cost?’ versus ‘What are you Getting for/doing with the Money?’ An Overview and Discussion of the Open Access Journal Business Model, (lack of) Transparency, and What is Important for the Various Stakeholders.”

The post has summaries (and links to slides) of the presentations by panelists Dan Morgan (University of California Press), Rebecca Kennison (K|N Consultants), Peter Binfield (PeerJ), and Robert Kiley (The Wellcome Trust), as well as links to other readings on the topic, such as this article from a couple of years ago titled “Open access: The true cost of science publishing” by Richard Van Noorden in Nature.

A few things from the summary of the panel discussion that stood out to me (excerpted or paraphrased here):

  • From Robert Kiley’s discussion of the Wellcome Trust’s experience with paying article processing charges (APCs) on behalf of their funded authors: the average APC levied by hybrid journals (which publish both subscription and OA [open access] articles) is 64% higher than the average APC charged by wholly OA, or “born OA”, journals. Despite these higher prices, some of the problems the Trust have encountered, such as articles not being deposited to Europe PubMed Central, incorrect or contradictory licenses appearing on articles, and confusion as to whether the APC has been paid, were almost exclusively related to articles in hybrid journals. Robert asked: “Are we getting what we pay for?”
  • From Rebecca Kennison’s discussion on transparency of publishing costs, and how the initial APC for PLOS Biology was set when it was launched: it was based on the average price paid by authors publishing in that era’s top science journals, for page and color charges, etc. The thinking was that if biology authors are used to paying around $3000 USD to get published in a subscription journal, they will be able to transfer this to pay the APC for PLOS Biology instead. She noted how much of a role this $3000 price point has played in OA price-setting since the early 2000s. This is fascinating when you consider that it was a “What the Market Will Bear” price point, and not based on publishing costs. The desire for transparency is not so much to make publishers reveal all costs, or push publishers to offer services “at cost”, but to ensure that librarians and funders, or anyone paying an OA charge, are simply more aware, and sure, of what they are paying for, and whether it is the best use of funds. It is not a matter of caveat emptor, but emptor informari.
  • From Pete Binfield’s discussion of the relationship between cost and prestige: despite the fact that “born OA” publishers can be much more efficient, authors still seem to be willing to pay for things like “prestige” and “the best venue for discoverability,” where more traditional publishers are still perceived to have an advantage because of established “brands.”

This discussion resonated with a different one that has been playing out among anthropologists in the past few weeks, regarding whether and when to transition the long established journals of the American Anthropological Association (AAA) to open access, a process that has already begun with the high profile Cultural Anthropology journal.

In an editorial in the February 2015 issue of American Anthropologist, the editor, Michael Chibnik, argued that while he “cannot disagree with the rhetoric of those advocating open access for American Anthropologist,” he also could not see how to make the finances work without continuing to rely on the existing subscription model via a publisher like Wiley Blackwell. While admitting “I do not know all the details of the financial arrangements between AAA and WB” (see the discussion about the lack of transparency explored in the panel mentioned above), he briefly outlines why several alternative funding models he has heard about are unlikely to work, concluding “The obstacles to AA becoming open access in the near future may be difficult to overcome.”

This elicited several responses, from Martin Eve, who challenged many of the assertions in the piece, one by one; from the Board of the Society for Cultural Anthropology, who argued in a commentary titled “Open Access: A Collective Ecology for AAA Publishing in the Digital Age” that open access was the right thing to do despite the difficulties; and from Alex Golub, who wrote a blog post titled “Open access: What Cultural Anthropology gets right, and American Anthropologist gets wrong.”

The Society for Cultural Anthropology commentary points out that research libraries are key stakeholders in the emerging OA landscape, and potential partners with scholarly societies for new models of scholarly publishing. Both SCA and Golub reference some new projects like Collabra, Open Library of the Humanities, Knowledge Unlatched, and SciELO that, in Golub’s words, “blur the distinction between journal, platform, and community the same way Duke Ellington blurred the boundary between composer, performer, and conductor” and are examples of “experiments to move beyond cold war publishing institutions.”

It’s not clear yet what financial models will ultimately prove successful and sustainable for scholarly publishing and scholarly societies going forward, but simply maintaining the status quo with its hidden and inflated costs and frequently vestigial practices is almost certainly not the answer. As Alex Golub concludes in his post:

The AAA wasn’t always structured the way it is today, and it may not be structured this way in the future. The question now is whether the AAA can change quickly enough to be relevant, or whether institutions like the SCA are the true future of our discipline. These are issues tied up with a lot more than just publishing: The shrinking of academe, the growing role of nonacademic stakeholders in academic practices, and much besides. Does Cultural Anthropology face a lot of issues down the road? Absolutely. Is complete and total failure on the menu? Yes. But I reckon that in ten years when I sit down to reblog this post, we will look back on this debate and say: The people who did the right thing and took a leap of faith fared far better than the ones who clung to a broken solution. Cultural Anthropology acted like Netflix, while American Anthropologist acted like Blockbuster. Except, of course, no one will remember what Blockbuster was.

A distinction without a difference

The discussion of the new Elsevier policies about sharing and open access has continued at a brisk pace, as anyone following the lists, blogs and Twitter feeds will know.  On one of the most active lists, Elsevier officials have been regular contributors, trying to calm fears and offering rationales, often specious, for their new policy. If one of the stated reasons for their change was to make the policy simpler, the evidence of all these many “clarifying” statements indicates that it is already a dismal failure.

As I read one of the most recent messages from Dr. Alicia Wise of Elsevier, one key aspect of the new policy documents finally sank in for me, and when I fully realized what Elsevier was doing, and what they clearly thought would be a welcome concession to the academics who create the content from which they make billions, my jaw dropped in amazement.

It appears that Elsevier is making a distinction between an author’s personal website or blog and the repository at the institution where that author works. Authors are, I think, able to post final manuscripts to the former for public access, but posting to the latter must be restricted only to internal users for the duration of the newly imposed embargo periods. In the four-column chart that was included in their original announcement, this disparate treatment of repositories and other sites is illustrated in the “After Acceptance” column, where it says that “author manuscripts can be shared… [o]n personal websites or blogs,” but that sharing must be done “privately” on institutional repositories. I think I missed this at first because the chart is so difficult to understand; it must be read from left to right and understood as cumulative, since by themselves the columns are incomplete and confusing.  But, in their publicity campaign around these new rules, Elsevier is placing a lot of weight on this distinction.

In a way, I guess this situation is a little better than what I thought when I first saw the policy. But really, I think I must have missed the distinction at first because it was so improbable that Elsevier would really try to treat individual websites and IRs differently. Now that I fully understand that intention, it provides clear evidence of just how out of touch with the real conditions of academic work Elsevier has become.

Questions abound. Many scientists, for example, maintain lab websites, and their personal profiles are often subordinate to those sites. Articles are most often linked, in these situations, from the main lab website.  Is this a personal website? Given the distinction Elsevier makes, I think it must be, but it is indicative of the fact that the real world does not conform to Elsevier’s attempt to make a simple distinction between “the Internet we think is OK” and “the Internet we are still afraid of.”

By the way, since the new policy allows authors to replace pre-prints on arXiv and RePEc — those two are specifically mentioned — with final author manuscripts, it is even clearer that this new policy is a direct attack on repositories, as the Chronicle of Higher Education perceives in this article.  Elsevier seems to want to broaden its ongoing attack on repositories, shifting from a focus on just those campuses that have an open access policy to inhibiting green self-archiving on all university campuses.  But they are doing so using a distinction that ultimately makes no sense.

That distinction gets really messy when we try to apply it to the actual conditions of campus IT, something Elsevier apparently knows little about and did not consider as they wrote the new policy documents.  I am reminded that, in a conversation unrelated to the Elsevier policy change, a librarian told me recently that her campus Counsel’s Office had told her that she should treat the repository as an extension of faculty members’ personal sites.  Even before it was enshrined by Elsevier, this was clearly a distinction without a difference.

For one thing, when we consider how users access these copies of final authors’ manuscripts, the line between a personal website and a repository vanishes entirely. In both cases the manuscript would reside on the same servers, or, at least, in the same “cloud.” And our analytics tell us that most people find our repositories through an Internet search engine; they do not go through the “front door” of repository software. The result is that a manuscript will be found just as easily, in the same manner and by the same potential users, if it is on a personal website or in an institutional repository. A Google or Google Scholar search will still find the free copy, so trying to wall off institutional repositories is a truly foolish and futile move.

For many of our campuses, this effort becomes even more problematic as we adopt software that helps faculty members create and populate standardized web profiles. With this software (VIVO and Elements are examples that are becoming quite common), the open access copies that are presented on a faculty author’s individual profile page actually “reside” in the repository. Elsevier apparently views these two “places” (the repository and the faculty web site) as if they really were different rooms in a building, and as if they could control access to one while making the other open to the public. But that is simply not how the Internet works. After 30 years of experience with hypertext, and with all the money at their disposal, one would think that Elsevier would have gained a better grasp of the technological conditions that prevail on the campuses where the content they publish is created and disseminated. But this policy seems written to facilitate feel-good press releases while still keeping the affordances of the Internet at bay, rather than to provide practical guidelines or address any of the actual needs of researchers.
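To make that point concrete, here is a minimal sketch of the kind of wiring a profile system uses. The repository endpoint and field names below are hypothetical, invented purely for illustration (real VIVO or Symplectic Elements integrations rely on their own APIs and harvesting mechanisms), but the structural point is the same: the profile page is just a view onto objects the repository stores and serves.

```python
# Illustrative sketch only: a hypothetical profile page pulling its open access
# links straight from the institutional repository. The endpoint and field names
# are invented for this example; real systems (VIVO, Symplectic Elements, DSpace,
# and so on) each expose their own APIs.
import requests

REPO_API = "https://repository.example.edu/api/items"  # hypothetical endpoint


def manuscript_links(author_id: str) -> list[str]:
    """Return the repository URLs a faculty profile page would display."""
    resp = requests.get(REPO_API, params={"author": author_id}, timeout=10)
    resp.raise_for_status()
    # Whichever page displays the link, the file itself is stored and served
    # by the repository.
    return [item["manuscript_url"] for item in resp.json().get("items", [])]


if __name__ == "__main__":
    # The "personal" profile page and the repository point at the same objects.
    for url in manuscript_links("jane-doe"):
        print(url)
```

However the integration is built, the file a reader downloads from the “personal” page and the file the repository serves are one and the same, which is why trying to impose different access rules on the two “places” cannot work.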

From control to contempt

I hope it was clear, when I wrote about the press release from Elsevier addressing their new approach to authors’ rights and self-archiving, that I believe the fundamental issue is control.  In a comment on my original post, Mark Seeley, who is Elsevier’s General Counsel, objected to the language I used about control.  Nevertheless, the point he made, about how publishers want people to access “their content,” but in a way that “ensures that their business has continuity,” actually reinforced that the language I used was right on the mark.

My colleague Paolo Mangiafico has suggested that what these new policies are really about is capturing the ecosystem for scholarly sharing under Elsevier’s control.  As Paolo points out, these new policies, which impose long embargo periods on do-it-yourself sharing by authors but offer limited opportunities to share articles when a link or API provided by Elsevier is used, should be seen alongside the company’s purchase of Mendeley; both provide Elsevier an opportunity to capture data about how works are used and re-used, and both  reflect an effort to grab the reins over scholarly sharing to ensure that it is more difficult to share outside of Elsevier’s walled garden than it is inside that enclosure.

I deliberately quote Mr. Seeley’s phrase about “their content” because it is characteristic of how publishers seem to think about what they publish.  I believe it may even be a nearly unconscious gesture of denial of the evident fact that academic publishers rely on others — faculty authors, editors and reviewers — to do most of the work, while the publisher collects all of the profit and fights the authors for subsequent control of the works those authors have created. That denial must be resisted, however, because it is in that gesture that the desire for control becomes outright disrespect for the authors that publishing is supposed to serve.

Nowhere is this disrespect more evident than in publisher claims that the works they publish are “work made for hire,” which means, in legal terms, that the publisher IS the author.  The faculty member who puts pen to paper is completely erased from the transaction.  To be clear, as far as I know Elsevier is not making such a claim with its new policies.  But these work made for hire assertions are growing in academic publishing.

Three years ago I wrote about an author agreement from Oxford University Press that claimed work made for hire over book chapters; that agreement is still in use as far as I am aware.  At the time, I pointed out two reasons why I thought OUP might want to make that claim.  First, if something is a work made for hire, the provision in U.S. copyright law that allows an author or her heirs to terminate any license or transfer after 35 years simply does not apply.  More significantly, an open access license, such as is created by many university policies, probably is not effective if the work is considered made for hire.  This should be pretty obvious, since our law employs the legal fiction that says the employer, not the actual writer, is the author from the very moment of creation in work made for hire situations.  So we should read these claims, when we find them in author agreements, as pretty direct assaults on an author’s ability to comply with an open access policy, no matter how much she may want to.

As disturbing as the Oxford agreement is, however, it should be said that it makes some legal sense.  When a work is created by an independent contractor (and it is not clear to me if an academic author should be defined that way), there are only selected types of works that can even be considered work made for hire; one of them is “contribution[s] to a collective work.”  So a chapter in an edited book is at least plausible as a work made for hire, although the other requirement — an explicit agreement, which some courts have said must predate the creation of the work — may still not be met.  In any case, the situation is much worse with the publication agreement from the American Society of Mechanical Engineers (ASME), which was recently brought to my attention.

ASME takes as its motto the phrase “Setting the Standard,” and with this publication agreement they may well set the standard for contemptuous maltreatment of their authors, many of whom are undoubtedly also members of the society.  A couple of points should be noted here.  First, the contract does claim that the works in question were prepared as work made for hire.  It attempts to “back date” this claim by beginning with an “acknowledgement” that the paper was “specially ordered and commissioned as a work made for hire and, accordingly, ASME is the author of the Paper.”  This acknowledgement is almost certainly untrue in many, if not most, cases, especially since it appears to apply even to conference presentations, which are most certainly not “specially commissioned.”  The legal fiction behind work made for hire has been pushed into the realm of pure fantasy here.

What’s more, later in the agreement the “author” agrees to waive all moral rights, which means that they surrender the right to be attributed as the author of the paper and to protect its integrity.  Basically, an author who is foolish enough to sign this agreement has no relationship at all to the work, once the agreement is in place.  They are given back a very limited set of permissions to use the work internally within their organization and to create some, but not all, forms of derivative works from it (they cannot produce or allow a translation, for example).  Apparently ASME has recently begun refusing to allow some students who publish with them to use the published paper as part of a dissertation, since most dissertations are now online and ASME does not permit the writer to deposit the article, even in such revised form, in an open repository.

To me, this agreement is the epitome of disrespect for scholarly authors.  Your job, authors are told, is not to spread knowledge, not to teach, not to be part of a wider scholarly conversation.  It is to produce content for us, which we will own and you will have nothing to say about.  You are, as nearly as possible, just “chopped liver.”  It is mind-boggling to me that any self-respecting author would sign this blatant slap in their own face, and that a member-based organization could get away with demanding it.  The best explanation I can think of is that most people do not read the agreements they sign.  But authors — they are authors, darn it, in spite of the work for hire fiction — deserve more respect from publishers who rely on them for content (free content, in fact; the ASME agreement is explicit that writers are paid nothing and are responsible for their own expenses related to the paper).  Indeed, authors should have more respect for themselves, and for the traditions of academic freedom, than to agree to this outlandish publication contract.

Stepping back from sharing

The announcement from Elsevier about its new policies regarding author rights was a masterpiece of doublespeak, proclaiming that the company was “unleashing the power of sharing” while in fact tying up sharing in as many leashes as they could.  This is a retreat from open access, and it needs to be called out for what it is.

For context, since 2004 Elsevier has allowed authors to self-archive the final accepted manuscripts of their articles in an institutional repository without delay.  In 2012 they added a foolish and forgettable attempt to punish institutions that adopted an open access policy by purporting to revoke self-archiving rights from authors at such institutions.  This was a vain effort to undermine OA policies; clearly Elsevier was hoping that their sanctions would discourage adoption.  This did not prove to be the case.  Faculty authors continued to vote for green open access as the default policy for scholarship.  In just a week at the end of last month the University of North Carolina at Chapel Hill, Penn State, and Dartmouth all adopted such policies.

Attempting to catch up to reality, Elsevier announced last week that it was doing away with its punitive restriction that applied only to authors whose institutions had the temerity to support open access. They now call that policy “complex” — it was really just ambiguous and unenforceable — and assert that they are “simplifying” matters for Elsevier authors.  In reality they are simply punishing any authors who are foolish enough to publish under these terms.

Two major features of this retreat from openness need to be highlighted.  First, it imposes an embargo of at least one year on all self-archiving of final authors’ manuscripts, and those embargoes can be as long as four years.  Second, when the time finally does roll around for an author to make her own work available through an institutional repository, Elsevier now dictates how that access is to be controlled, mandating the most restrictive form of Creative Commons license, the CC-BY-NC-ND license, for all green open access.

These embargoes are the principal feature of this new policy, and they are both complicated and draconian.  Far from having life made simpler, authors must now navigate through several web pages just to find the list of different embargo periods.  The list itself is 50 pages long, since each journal has its own embargo, but an effort to greatly extend the default expectation is obvious.  Many U.S. and European journals have embargoes of 24, 36 and even 48 months.  There are lots of 12-month embargoes, and one suspects that that delay is imposed because those journals are deposited in PubMed Central, for which 12 months is the maximum embargo permitted.  Now that maximum embargo is also being imposed on individual authors.  For many others an even longer embargo, which is entirely unsupported by any evidence that it is needed to maintain journal viability, is now the rule.  And there is a handful of journals, all from Latin America, Africa, and the Middle East as far as I can see, where no embargo is imposed; I wonder if that is the result of country-specific rules or simply a cynical calculation of the actual frequency of self-archiving from those journals.

The other effort to micromanage self-archiving in this new policy is the requirement that all authors who persevere and wish, after the embargo period, to deposit their final manuscript in a repository must apply a non-commercial and no derivative works limitation on the license for each article.  This, of course, further limits the usefulness of these articles for real sharing and scholarly advancement.  It is one more way in which the new policy is exactly the reverse of what Elsevier calls it; it is a retreat from sharing and an effort to hamstring the movement toward more open scholarship.

The rapid growth of open access policies at U.S. institutions and around the world suggests that more and more scholarly authors want to make their work as accessible as possible.  Elsevier is pushing hard in the opposite direction, trying to delay and restrict scholarly sharing as much as they can.  It seems clear that they are hoping to control the terms of such sharing, in order both to restrict its putative impact on their business model and ultimately to turn it to their profit, if possible.  This latter goal may be a bigger threat to open access than the details of embargoes and licenses are. In any case, it is time, I believe, to look again at the boycott of Elsevier that was undertaken by many scholarly authors a few years ago; with this new salvo fired against the values of open scholarship, it is even harder to imagine a responsible author deciding to publish with Elsevier.

Listening to Lessig

Like many other attendees, I was pleased when I saw that the closing keynote address for this year’s Association of College and Research Libraries Conference was to be given by Professor Larry Lessig of Harvard.  But, to be honest, my excitement was mingled with a certain cynicism.  I have heard Lessig speak before, and I am afraid I worried that I would be listening to essentially the same lecture again.

My suspicion was not wholly unwarranted.  In part I think it is the fault of Lessig’s instantly recognizable lecture style.  It is energetic and entertaining, but because its rhythms and conventions are so idiosyncratic, I think it may flatten the message a little bit.

In any case, I sat down in the ballroom of the Oregon Convention Center on Saturday with somewhat mixed expectations.  But what I did not expect was for Lessig to begin his talk by acknowledging that all his public lectures were really the same.  Had he read my mind?  No, his point was a little different.  Over the years, he told us, he has had three major themes – political corruption, net neutrality, and copyright/open access.  But, he told his audience of attentive librarians, those three themes are fundamentally just one theme.  Each is about equality.  Not three themes, but only one — equality.  Equality of access to the political process is the heart of his current campaign against the corruption of our political system by the endless pursuit of money.  Equality of access to the means of communication and culture is key to the fight for net neutrality.  And equality of access to knowledge is what animates the open access movement.

So it turns out that my worry, prior to the talk, was both unfair and, in a sense, correct.  All Lessig’s lectures are very much the same, because the underlying value he is asking us to focus on is the same.

Thinking about this unity-behind-diversity in the messages about political corruption, net neutrality and open access set me thinking about the way my colleagues and I frame our advocacy for the last of those items, open access to scholarship.  Our messages, I think, tend to focus on incremental change, on the benefits to individual scholars, and on not rocking the academic boat too much.  Lessig reminded me that there are good reasons to rock a little bit harder.  Publishing in toll access journals and neglecting open access options or additional means of dissemination is not just short-sighted.  It is dumb, and it is harmful.  We need to say that occasionally.

Publishing exclusively through closed access channels is dumb because it ignores huge opportunities that can, quite simply, make the world a better place.  And such publishing fails to take full advantage of the greatest communications revolution since the printing press.  Indeed, online toll-access publishing deliberately breaks network technology in order to protect its outmoded and exclusionary business model.  Doing this is simply propping up the buggy whip manufacturers because we are afraid of how fast the automobile might travel.  The academy is not usually this dumb, but in this case we are wasting vast amounts of money to support an obsolete technology.  I know that the promotion and tenure process is often cited as the reason for clinging to the old model, but this is simply using one outdated and inefficient system as an excuse for adhering to another such system.  Traditional modes of evaluation are breaking down as fast as traditional publishing, and for the same reasons.  Hiding our heads in the sand is no solution.

More to the point, however – more to Lessig’s point – is the fact that this traditional system we are so reluctant to wean ourselves from actually hurts people.  It fosters ignorance and inequality.  It makes education more difficult for many, retards economic progress, and slows development worldwide.  As academics and librarians who by inclination and by professional responsibility should be committed to the most universal education possible, it is shameful that we cling to a system where only the rich can read our scholarship, only the privileged gain access to the raw materials of self-enlightenment.  How can a researcher studying the causes and treatments of malaria, for example, be satisfied to publish in a way that ensures that many who treat that disease around the globe will never be able to see her research?  How can an anthropologist accept a mode of publishing that limits access for the very populations he studies, so they will never be able to know about or benefit from his observations?  Why would a literary scholar writing about post-colonialist literature publish in a way that fosters the same inequalities as earlier forms of colonialism did?

In this wonderful column from Inside Higher Ed, the ever-insightful Barbara Fister writes about what we really mean when we talk about serving a community, and what we might mean by it.  She comments on the “members-only” approach to knowledge sharing that has become an accepted practice, and challenges us to rethink it.  Like Lessig, Fister is calling us to consider our core values of equality and the democratization of knowledge.  She also reminds us of how dumb – her word is wasteful – the current system is.

Perhaps the most vivid example of how subscription-based publishing fosters, and even demands, inequality is found in the ongoing lawsuit brought against a course pack publisher in India by three academic publishers.  Two of the “usual suspects” are here – Oxford University Press and Cambridge University Press (joined, in the Delhi University suit, by Taylor and Francis) – and this lawsuit is even more shameful than the one brought against Georgia State.  The problem, of course, is that the books published by these modern-day colonialists are too expensive for use in India.  I once was told by a professor of IP law in India that a single textbook on trademark law cost over a month’s salary for his students.  Photocopying, whether it is authorized by Indian law or not (and that is the point at issue), is a matter of educational survival, but these publishers want to stop it.  Their rule – no one should ever learn anything without paying us – is a recipe for continued ignorance and inequality.  It is disgraceful.

I use the word colonialists in the paragraph above quite deliberately.  What we are seeing here is the exploitation of a monopoly that is imposed on a culture with the demand that people pay the developed world monopoly holders in order to make progress as a society.  We have seen this too many times before.

The thing I like best in the article linked above – the whole thing is well worth a careful read – is the brief story of how a student group in India began handing out leaflets about the lawsuit at a book fair where CUP representatives were hawking their wares.  They wanted to let people know that buying books from Oxford and Cambridge is supporting a worldwide campaign of intimidation that is aimed at reducing access to knowledge and culture.  Publishing with these presses is a form of colonial occupation that extorts from whole populations a high price to obtain the means of cultural and intellectual growth.  The reaction, of course, was predictable; the publisher summoned the police to protect themselves and others from these unpleasant truths.  But the technique has merit; perhaps we can also find ways to shame these publishers when they attend our academic or professional conferences, when they send salespeople to our campuses, and when they recruit our colleagues to write and review for them.  A commitment to equality demands no less.

Copyright, Open Access, and Human Rights

The United Nations Human Rights Council is holding its 28th session this month, and one item on the agenda is discussion about a report from Farida Shaheed, who is a “Special Rapporteur” in the area of “cultural rights.”  Ms. Shaheed is a well-known Pakistani sociologist and human rights activist.  Her report is a remarkable document in many ways, with a lot of things to like for those who are concerned about the overreach of copyright laws.  There are also some points that are troubling, although, on balance, I would love to see this report get attention and action from the U.N.

In some sense, the most remarkable thing about this report is its frank recognition that intellectual property laws are in tension with the fundamental human right of access to science and culture. In only its third paragraph, the report reminds us that since at least 2005, the World Intellectual Property Organization, a U.N. agency, has been mandated to give “renewed attention to alternative policy approaches to promote innovation and creativity without the social costs of privatization.”  In short, WIPO is charged, whether effectively or not, with finding ways to facilitate open access to science and culture.  This charge is made explicit in the recommendations, where the Special Rapporteur directly suggests that “[p]ublic and private universities and public research agencies should adopt policies to promote open access to published research, materials, and data on an open and equitable basis, especially through the adoption of Creative Commons licenses” (para. 113).

When librarians and other open access advocates discuss OA policies with their faculties, perhaps we should recognize that there is a compelling argument to be made that this is not just a “what’s best for academia and for my interests” issue, but a true human rights issue.  Ms. Shaheed’s report makes this case in a concise and compelling way.  And this point also reminds us of why open access that is achieved simply by paying the commercial publishers to release articles is not a solution, because it does not really promote equitable access.  The fees charged are too high for many authors, they are not administered in a transparent way, and, frankly, some of the publishers cannot be trusted to fulfill their end of the bargain.  Barbara Fister discussed some of these problems in more detail in her recent Inside Higher Ed blog post, called “New Predatory Publishing in Old Bottles.”  If we take open access seriously as a step toward a more democratic and equitable culture, we must embrace a wider variety of “flavors” of OA, and not assume that the “usual suspects” can do it for us.

To return to a reading of the Human Rights Council report, there are strong endorsements of the idea that cultural and scientific development depends on restraining the reach of IP protections.  The section on “Copyright policy and cultural participation” is structured around three themes that all begin with “promoting cultural participation through…” and then go on to discuss copyright limitations and exceptions, international cooperation, and open licensing.  Here are some specific recommendations that I found very encouraging:

  • In regard to negotiations that are already underway, the report endorses ratification of the Marrakesh Treaty to Facilitate Access to Published Works for Persons Who Are Blind, Visually Impaired, or Otherwise Print Disabled.  On the other hand, Ms. Shaheed expresses concern (para 19) about the Trans-Pacific Partnership Agreement, which is currently being negotiated in secret; that secrecy is the problem the report focuses on, calling for all such multinational agreements to be discussed in a transparent way (para 92).  Since the TPP is often defended with the claim that it will benefit developing countries, it is fascinating to see it cited as an example of a “democratic deficit in international policy making.”  Separately, this editorial about the TPP by a U.S. Senator raises the same type of concern, and together the two make a strong case against the agreement.
  • In paragraph 22 the report discusses the potential that a pervasive licensing environment can inhibit artistic self-expression and slow cultural development.
  • The report calls for the adoption of international treaties on copyright limitations and exceptions for libraries and education.  Given the current climate in the WIPO, this seems like a long shot, but is valuable in part because it calls attention to that climate, which is dominated by representatives of commercial interests (including, unfortunately, the U.S. Trade Representative).
  • On the issue of copyright limitations and exceptions, the report specifically points to fair use as a tool for allowing a more “comprehensive and adaptable” approach to unlicensed uses (para 73).  The report notes that most countries take the route of adopting exceptions for specific types of use, which provides more certainty, but adds that that approach may be inadequate in the current environment.  In general the report calls for a flexible approach to “uncompensated use of copyrighted works, in particular in contexts of income disparity, non-profit efforts, or under-capitalized artists” (para 106).  It specifically asserts, in this regard, that member states should not take a rigid approach to the so-called “three-step test” for copyright exceptions that is found in the Berne Convention (para 104), which is often used as a weapon by commercial interests against broadly applicable exceptions.
  • One of the recommendations I like best in this report, found in paragraph 107, is that states should enact laws that would prevent copyright limitations and exceptions from being overruled by contractual provisions, and protect such exceptions from excessive technological interference as well.  The UK has recently adopted the contractual part of this idea, stating that certain uses that are allowed by the law cannot be prohibited by contracts.  As I have said before, this is an idea we need to incorporate into U.S. copyright law, and it is good to see the U.N. special rapporteur endorse it so firmly.

The report’s overall emphasis is on copyright as an authors’ right, and it is this focus that leaves me somewhat ambivalent about the document.  On the one hand, I agree that a focus on authors and on supporting authorship will help re-balance our approach to copyright.  Where we have most often gone wrong in this area is when we have allowed copyright discussions to focus on supporting the business models of intermediary organizations, regardless of whether or not those models really helped incentivize authors and creators.  Throughout its history, copyright has been called an authors’ right but treated like a publishers’ privilege.  Re-focusing on authors is part of restoring copyright to its proper function, and it makes sense in a document about human rights.  And yet it is also true that too much stress on author rights can become unbalanced.  Copyright cannot benefit society unless it weighs the rights of both users and creators, especially since the former often aspire to become the latter.  Authorial control is an important part of the creative incentive, but it can easily go too far.

One troublesome area where this is a real danger is protection for “traditional knowledge and traditional cultural expressions” — the cultural creations of indigenous peoples.  This is an area where there have certainly been abuses, and it is not surprising to find a concern over TK and TCE, as the international community abbreviates them, in a report about IP and human rights.  Unfortunately, protection for these cultural products raises as many problems as it solves.  Should there be a public domain for TCEs?  If not, why not?  And who is the legitimate rights holder in traditional knowledge?  The national government?  In an article back in 2011, David Hansen explored some of these issues and found real incompatibility between traditional knowledge protections and the values that animate IP law.

So my final attitude toward this report is mixed, but still strongly positive.  I think it recommends many of the right steps toward restoring copyright and other IP rights to their proper scope and function.  It rightly places the focus on authors and on economic and cultural development.  It reminds us that all high-level copyright conversations should have a human rights perspective.  Where I have concerns, I see a chance for continued conversation.  But at least those conversations would take place with the proper grounding, if the report is taken as seriously as it deserves.

Resistance is Futile

This is a guest post by Jeff Kosokoff, the Head of Collection Strategy & Development for the Duke University Libraries.

In an outstanding example of Buzzword Bingo, EBSCO’s Friday press release announcing their acquisition of YBP from Baker & Taylor (B&T) says that they are assembling the tools “to truly streamline and improve administrative (‘back end’) services in ways that optimize the impact these services have on the end user experience.”

If B&T was trying to move YBP, then there were likely multiple bidders, perhaps ProQuest, perhaps one or more of the major publishing concerns. After all, YBP is one of a small handful of comprehensive book jobbers still standing, and they have well-established relationships with a large percentage of libraries. At its core, this is a business decision, and EBSCO is not a charity. The acquisition is a way to maximize profits at a company that has been very good at doing so. There is real value in creating more seamless and streamlined workflows within libraries, and this is especially true where libraries are facing staffing challenges. This acquisition, along with the recent demise of Swets, has certainly allowed EBSCO to extend their customer base and increase engagement.

After the initial shock, I suppose not many in the library community were particularly surprised. YBP faces increasing financial challenges as library print book acquisition expenditures fall. As we would expect, the narrative from the parties was upbeat and filled with promises that nothing would change beyond some wonderful synergies to come. EBSCO is continuing to position itself as the closest thing we have to a soup-to-nuts library information content and services vendor. EBSCO claims to be the largest library discovery service provider (6,000+ customers) and the leading e-content provider (360,000+ serials, 57,000+ e-journals, 600,000+ e-books). Over the past 20 years, they have grown from a serials jobber into a reseller of abstracting and indexing services, a major serials and book jobber, a comprehensive and state-of-the-art discovery service, a contract publisher and aggregator of a massive amount of full-text content through a diverse portfolio of subject databases, and an e-book aggregation platform. EBSCO is also strongly engaged with development of the Koha open source ILS. While the material outcomes are unclear, EBSCO is even a partner in the Open Access Scholarly Publishers Association. Adding YBP also brings in another major library analytics tool, Collection HQ, to join EBSCO’s Plum Analytics.

So, what’s the problem?

  1. Not Enough Competition?

What are the risks associated with giving too much of your business to a single company? What if things go sour? Many librarians have been frustrated with the dwindling choices we have in the jobber market. In the halcyon summer of 2014, my institution felt it had three choices for serials jobbing. When Swets went under last fall, we were left with only two. The book jobber market has seen so much consolidation over the past two decades that those of us who are dissatisfied with our book jobbers do not feel we have many options.

  2. Too Big/Monolithic To Fail?

I tend to agree that EBSCO can leverage this merger to enable efficiencies. Those efficiencies will certainly be best realized if a library commits more resources to EBSCO’s family of library services and systems. Business risk assessment professionals can offer comment here, but doing too much business with a single corporate entity is fraught with obvious risks. The movement by EBSCO and others to promote end-to-end library systems flies in the face of efforts to modularize library systems and services. Many have been working to build interoperable components instead of single systems. Modularity allows one both to mitigate risk and to promote flexibility and evolution of the local set of systems over time. How limited will we feel when it becomes difficult or impossible to replace an unsatisfactory component of a monolithic implementation? The more singular one’s commitment to a small number of providers, the greater the impacts of failure and obsolescence. And what happens if EBSCO’s corporate parent gets into trouble? Admittedly, EBSCO is probably the most diversified company in our space. In fact, it is a little surprising not to see EBSCO’s Vulcan Industries competing with Demco and Brodart in the lucrative library fixtures space.

  3. Can a wholly-owned subsidiary really remain agnostic?

For those of us in the trenches of emerging e-book workflows, a continuing challenge is metadata supply and alignment. This is an emerging area of practice, and while ProQuest, perhaps the major aggregated e-book competitor to EBSCO, says it will be business as usual, it seems like they might be more than a little nervous. Often a book is available on multiple platforms, and one wonders if long-standing metadata workflow issues between YBP and EBL will receive the attention they deserve. If I were a smaller e-book publisher that did not want to join EBSCO’s platform, I think I would be very anxious about the acquisition.

  4. Can we learn anything from EBSCO’s past behavior?

During my 20-year professional career in libraries, I have witnessed a lot of very aggressive behavior from vendors. In my experience, EBSCO has been one of the most aggressive. In the 1990s, at a time when most discovery providers were content to resell existing databases, EBSCO brought new A&I products to market that directly competed with existing ones in business, health care, and the social sciences. While one could argue this drove improvements, the outcome was essentially a doubling of our subscription loads. These moves, combined with exclusive agreements with minor and major periodical publishers such as Time-Warner and Harvard Business Publishing, have sometimes left libraries feeling that EBSCO was forcing the issue and gaining something of an unproductive advantage. In the discovery space, there are those who feel EBSCO takes an all-or-nothing approach, where their systems work great if and only if you also bring all your subscriptions to their platform.

Consolidation and further amalgamation in the library information services market have been the trend for a few years now. We have fewer ILS vendors, fewer jobbers, bigger players, and lots and lots of financial capital trying to make lots and lots of money in our industry. Perhaps resistance is futile; perhaps it is the only rational choice.