The title of Nimmer’s lecture was “Infringement 2.0,” and his overall framework involved the changing role of copyright and infringement in the current environment, where copyright protects every scrap of original expression, whether the creator needs or wants that protection, and where copying and widespread distribution can be accomplished with the click of a mouse. I want to try to outline several points from the lecture that seemed especially interesting to me (fully recognizing that I alone am responsible for any misrepresentations of Prof. Nimmer’s meaning).
Nimmer began with a definition of infringement more qualified than the one, in my opinion, we normally have in mind — the unauthorized wholesale copying of works of high authorship. Not just unauthorized copying, but wholesale copying of works of high authorship. This definition seems to suggest that courts should not spend time worrying about copyrights in family photos and other ephemera; Nimmer even raised the question of whether we should protect pornography, although he immediately recognized the First Amendment issues that such a stance would raise.
With this qualified definition of infringement as a starting place, Nimmer took us on a tour of some recent copyright rulings. What I found really interesting was his suggestion that courts are using fair use in the digital environment to approximate the qualified definition of infringement that he proposed. Two examples will have to suffice. In the case involving the anti-plagiarism software Turnitin (A.V. v. iParadigms), the Fourth Circuit rejected an infringement claim based on the copying of entire student papers that are submitted to the service and stored to be used for comparison against later submissions. The court reached this result by finding that Turnitin’s use was a fair use, and Nimmer suggested that this use did not meet his qualified definition of an infringement because it did not copy works of “high authorship.” More significantly, perhaps, Nimmer also approved of the District Court decision in last year’s Google Books case, which held that Google’s scanning of millions of titles was a fair use. In his framework, Google’s scanning did not amount to “wholesale” copying; even though entire works are scanned into the database, users see only “snippets,” and those very short excerpts serve important social purposes.
Whatever one may think of the individual cases, this was a fascinating approach. The copyright law tells us that a fair use is not an infringement, so it makes perfect sense, for one sufficiently learned and bold, to try to understand fair use jurisprudence by looking at the limits on infringement that are thus defined by implication.
Another topic Nimmer addressed at length was the doctrine of first sale, and he was highly critical of the Ninth Circuit decision in Vernor v. Autodesk, which found that Mr. Vernor committed copyright infringement when he resold legal copies of CAD DVDs in apparent violation of licensing terms. The Ninth Circuit spent a lot of time examining those license terms, but Nimmer suggested that they were asking the wrong question. The proper question here, he suggested, was not “was this a sale or a licensing transaction,” as the Court assumed, but rather “who was the legitimate owner of the material substrate that made up this copy?” He pointed out that in both the foundational Supreme Court case about first sale, from 1908, and in last year’s decision in the Kirtsaeng case, the Court was dealing with legal copies where an attempt had been made, through a license, to restrict downstream resale of those copies. In both cases the Supreme Court ignored those attempts at licensing and allowed the legitimate owner of the material copies to resell the works on whatever terms he could negotiate. Based on those precedents, Nimmer suggested that the Ninth Circuit erred when it found that Vernor had infringed copyright with his resale, based on provisions in the purported license.
Another place where Nimmer suggested a radical way to rethink the copyright environment was on the international front. He asserted that the foundational principles of international copyright agreements — the prohibition of formalities and so-called “national treatment” — simply do not make sense in the Internet age, where potential copyright infringements nearly always cross national borders, and copyright owners are often impossible to locate. He suggested that this out-dated approach be replaced by something the U.N. and the W.I.P.O could do very well — a searchable, worldwide registry for copyright owners that Nimmer called a “panopticon.” His idea is that if a copyright owner has registered his or her work in the panopticon, they would be entitled to significant remedies for any act of infringement that is found. If they do not register, however, a action for infringement could only result in an award based on actual losses, not the much more substantial “statutory damages” that are often available.
This idea is nothing if not ambitious, but its foundations are commonsensical. If copyright protection is going to be completely automatic, and no notice on individual works is required, it is unfair to insist that users must have authorization for their uses if the rights holders have done nothing at all to make their claims known or to facilitate asking for permission. Lots of other property rights regimes have notice or registration rules (think of buying a house or a car) and those rules are in place to protect the owner. Why not a similar regime for international copyright, with an incentive, in terms of potential recovery available, for participation?
Finally, I want to end with Nimmer’s prediction about the prospect of a new copyright act in the United States. He does seem to believe that Congress will seriously undertake such a thoroughgoing revision of the law, and he suggested a betting pool on when the new copyright act would pass. For himself, he wanted to reserve a date in May of 2029. So we have that to look forward to.
Indeed, the waivers are essentially meaningless because of the way Duke has implemented its open access policy. When the policy was adopted unanimously by our Academic Council in March 2010, the statement in favor of openness was pretty clear, but so was the instruction that implementing the policy not become a burden to our faculty authors. So throughout the ensuing years we have tried to ensure that all archiving of published work in our repository is done in compliance with any publisher policies to which our authors have agreed. NPG allows authors to archive final submitted manuscripts after a six-month delay, so that is what we would do, whether or not the author sought a policy waiver. But suddenly that is not good enough; Nature wants a formal waiver even though it will have no practical effect. The demand seems to be an effort to punish authors at institutions that adopt open access policies.
There are some comical aspects to this sudden requirement for waivers. As I said, it seems to have taken NPG three years to figure out that Duke has an open access policy, even though we have made no secret of the fact. Even more oddly, the e-mail that our faculty authors are getting from NPG lists nine schools from whose faculty such waivers are being required; apparently it was only four schools until recently. But there are over thirty institutions with faculty-adopted OA policies in the U.S. alone. Some of the largest schools and the oldest policies have not yet shown up on Nature’s radar; one wonders how they can be so unaware of the scholarly landscape on which their business depends. NPG looks silly and poorly informed, frankly, in the eyes of the academic authors I have spoken to.
In addition to making NPG look foolish, this belated demand for waivers has had positive effects for open access on our campus. For one thing, it simply reminds our authors about the policy and gives us a chance to talk to them about it. We explain why Nature’s demand is irrelevant and grant the waivers as a matter of course, while reminding each author that they can still voluntarily archive their work in compliance with the rights they have retained (which is the same situation as without the waiver). I suspect that this move by NPG will actually increase the self-archiving of Nature articles in our repository.
Another effect of these new demands is that open access, and the unreasonable demands of some commercial publishers, have gotten back on the radar of our senior administrators. Our policy allows the Provost to designate someone to grant waivers, and, in figuring out who that would be, we had a robust conversation that focused on how this demand is an attack on the right of our faculty to determine academic policy.
This last point is why I have moved, in the past few days, from laughing at the bumbling way NPG seems to be fighting its battle against OA policies to a sense of real outrage. This effort to punish faculty who have voted for an internal and perfectly legal open access policy is nothing less than an attack on one of the core principles of academic freedom, faculty governance. NPG thinks it has the right to tell faculties what policies are good for them and which are not, and to punish those who disagree.
As my sense of outrage grew, I began to explore the NPG website. Initially I was looking to see if authors were told about the waiver requirement upfront. As far as I can tell, they are not, in spite of rhetoric about transparency in the “information for authors” page. The need for a waiver is not even mentioned on the checklist that is supposed to guide authors through the publication process. It seems that this requirement is communicated to authors only after their papers have been accepted. I suspect that NPG is ashamed of their stratagem, and in my opinion they should be. But as I looked at NPG policies, and especially its License to Publish, my concern for our authors grew much deeper.
Two concerns make me think that authors need to be carefully warned before they publish in an NPG journal.
First, because this contract is a license and tells authors that they retain copyright, it may give authors a false sense that they are keeping something valuable. But a careful reading shows that the retention of copyright under this license is essentially a sham. The license is exclusive and irrevocable, and it encompasses all of the rights granted under copyright. It lasts for as long as copyright itself lasts. In short, authors are left with nothing at all, except the limited set of rights that are granted back to them by the agreement. This is not much different from publishing with other journals that admit up front that they require a transfer of copyright; my concern is that this one is dressed up as a license, so authors may not realize that they are being just as completely shorn of their rights as they are by other publishers.
My bigger concern, however, is found in clause 7 of the NPG “license,” which reads in its entirety:
The Author(s) hereby waive or agree not to assert (where such waiver is not possible at law) any and all moral rights they may now or in the future hold in connection with the Contribution and the Supplementary Information.
Let me start by saying, however, that even though I work for Duke and specialize in copyright issues, I have absolutely no involvement in this case and no “insider” information. As Will Rogers said, “all I know is what I read in the papers.” What I say here is my own opinion based on those reports and nothing more.
The basic facts are easy to restate. The SSHA has informed DUP that it wants to end its long-standing association and look for a different publisher for its flagship journal, Social Science History. The Press, however, asserts that language in their original contract means that the SSHA can stop participating in the journal, but cannot remove it from the control of DUP.
The first point I would make about this dispute is that it involves a specific piece of contract language that is probably unusual. For that reason I think some of the comments about how this case might determine the future for lots of other society journals are exaggerated. My guess is that this case will eventually be resolved through an interpretation of that specific language and thus, even if there is a court decision instead of a settlement, it will not have too much of a ripple effect.
Most of the comments I have seen (from librarians and academics) have assumed that Duke University Press is the bad guy here, trying to wrest control of a scholarly journal out of the hands of scholars. While I do not intend to mount a defense of DUP here, I do want to suggest that there is more to this case than a good v. evil argument about academic freedom. This is a business disagreement, and regardless of which side wins, in my opinion, it will probably not be a good outcome for scholarship.
To understand a lawsuit, it is always a good practice to ask why the plaintiff decided to file the litigation in the first place. A commenter on the Chronicle story linked above suggests that it is hard to understand in this case, since he believes the results could not justify the costs. But surely the SSHA thought it was worthwhile, and a little deeper reflection suggests why. In their statement, the SSHA explains that they wanted to “assess the open market” to find out what Social Science History is “worth.” That should tell us pretty clearly that the SSHA wants to shop the journal around to the big commercial publishers in order to get a big payout to the Association. Other societies have done it before, and it seems clear that the SSHA wants to cash in on its journal. So the reason the lawsuit is worth the expense is that the SSHA has expectations of perhaps hundreds of thousands of dollars in profit if they can sell their “brand name” journal to Wiley or Sage.
By the way, this is why the society would not be satisfied to simply walk away and start up a new journal to compete with a continuing Social Science History published by DUP, as that same commentator suggests. As I said, this is a business dispute, and the key asset at issue is the brand of the journal, its “goodwill” as it is called in economic valuations. Presumably no large commercial press would be willing to pay big money for a new start-up that the SSHA might launch if it lost control over Social Science History. The marketing and the value for a new purchaser depends on that well-established title being on the auction block.
So what will be the result if the SSHA wins this lawsuit or gets control over the title in a settlement? I think the inevitable result will be a sale to a large commercial publisher and subsequently much higher subscription rates for libraries and other subscribers. In previous cases where a society has sold its journal or journals to a big publisher, we have seen prices increase by as much as three or four hundred percent, especially if the journal has had a fairly moderate price to begin with (I have no idea about the current subscription price for Social Science History). Nor do I think that a win by the SSHA would necessarily be a win for academic freedom or for keeping scholarship in the hands of scholars. If the journal is sold to a large publisher, experience suggests that the scholars who make up the society will have less control over it than they did before and that the quality of the journal will decline. But lots of money will potentially flow into the SSHA coffers.
What happens if, on the other hand, Duke University Press wins the case or otherwise retains control over the journal? In my opinion, DUP has taken the stand that it has in large part to defend the value of its journal list against the loss of one of its premier titles. So if they win, at least their journal package will not lose value. But this is still probably not a good outcome, since the journal will be divorced from its roots and from the scholarly community that has made it a valuable brand in the first place. As several people have suggested, DUP might have a hard time finding top scholars in the field who would be willing to edit the journal or review for it if it is severed from its connection to their scholarly society. So while its value for DUP would be preserved initially, I think a decline in quality and in value over time is inevitable.
This is why I find it impossible to root for either side in this dispute (even though I usually root for Duke teams). To me, this case is an object lesson in why scholarship should not be treated as a commodity around which commercial value, and the disputes that accompany such value, accrues. The radical distinction between the “gift economy” in which individual scholars work, giving away their most precious intellectual assets to publishers without remuneration for the sake of the scholarly mission, and the commercial economy in which publishers work, and to which some societies aspire, was never more clear. Whoever wins this case, the scholars who donate their labor as authors, editors and reviewers for this journal will be the long-term losers. And the only way to change that situation is to radically rethink the way scholarship is supported and disseminated. We need new business models, focused on open access and better ways to support the scholarly mission, while all this dispute offers us is a fight over the way the same old traditional pie of subscription money is sliced.
Erin McKiernan holds a Ph.D. from the University of Arizona and is now working as a scientist and teacher in Latin America. Her unique experience informs her perspective on why young scholars should embrace open access. Dr. McKiernan is a researcher in medical science at the National Institute of Public Health in Mexico and teaches (or has taught) at a couple of institutions in Latin America. For her, the issue is that open access is fundamental to her ability to do her job; she told us that the research library available to her and her colleagues has subscriptions to only 139 journals, far fewer than most U.S. researchers expect to be able to consult. Twenty-two of that number are available only in print format, because electronic access is too expensive. This group includes key titles like Nature and Cell. A number of other titles that U.S. researchers take for granted as core to their work — she mentioned Nature Medicine and PNAS — are entirely unavailable because of cost. So in an age when digital communications ought to, at the very least, facilitate access to information needed to improve health and treat patients, the cost of these journals is, in Dr. McKiernan’s words, “impeding my colleagues’ ability to save lives.” She made clear that some of these journals are so expensive that the choice is often between a couple of added subscriptions or the salary of a researcher.
This situation ought to be intolerable, and for Dr. McKiernan it is. She outlined for us a personal pledge that ought to sound quite familiar. First, she will not write, edit or review for a closed-access journal. Second, she will blog about her scientific research and post preprints of her articles so that her work is both transparent and accessible. Finally, she told us that if a colleague chose to publish a paper on which she was a joint author in a closed-access journal, she would remove her name from that work. This is a comprehensive and passionately-felt commitment to do science in the open and to make it accessible to everyone who could benefit from it — clinicians, patients and the general public as well as other scholars.
Listening to Dr. McKiernan, I was reminded of a former colleague who liked to say that he “would rather do my job than keep my job.” But, realistically, Dr. McKiernan wants to have a career as a teacher and research scientist. So she directly addressed the concerns we often hear that this kind of commitment to open access is a threat to promotion and tenure in the world of academia. We know, of course, that some parts of this assertion are based on false impressions and bad information, such as the claim that open access journals are not peer-reviewed or that such peer-review is necessarily less rigorous than in traditional subscription journals. This is patently false and really makes little sense — why should good peer-review be tied to a particular business model? Dr. McKiernan pointed out that peer-review is a problem, but not just for open access journals. We have all seen the stories about growing retraction rates and gibberish articles. But these negative perceptions about OA persist, and Dr. McKiernan offered concrete suggestions for early-career researchers who want to work in the open and also get appropriate credit for their work. Her list of ideas was as follows (with some annotations that I have added):
1. Make a list of open access publication options in your particular field. Chances are you will be surprised by the range of possibilities.
2. Discuss access issues with your collaborators up front, before the research is done and the articles written.
3. Write funds for article processing charges for Gold open access journals into all of your grant applications.
4. Document your altmetrics.
5. Blog about your science, and in language that is comprehensible to non-scientists. Doing this can ultimately increase the impact of your work and can sometimes even lead to press coverage, and to better press coverage.
6. Be active on social media. This is the way academic reputations are built today, so ignoring the opportunities presented is unwise.
7. If for some reason you do publish a closed-access article, remember that you still have Green open access options available; you can self-archive a copy of your article in a disciplinary or institutional repository. Dr. McKiernan mentioned that she uses FigShare for her publications.
The most exciting thing about Erin McKiernan’s presentation was that it demolished, for many of us, the perception of open access as a risky choice for younger academics. After listening to her expression of such a heartfelt commitment — and particularly the pictures of the people for whom she does her work, which put a more human face on the cost of placing subscription barriers on scholarship — I began to realize that, in reality, OA is the only choice.
Last year the Andrew W. Mellon Foundation funded a three-year project to continue the long-running Scholarly Communications Institute, which was previously held at the University of Virginia. Starting in November, the new SCI will be hosted by Duke in close collaboration with UNC Chapel Hill, NC State University, North Carolina Central University and the Triangle Research Libraries Network. This new iteration of the SCI will benefit, we believe, from the extraordinary depth and diversity of resources related to innovation in scholarly communications here in the Triangle, and it will also take on a new format, in which participants will have a major role in setting the agenda each year.
Starting this year — starting right now! — the SCI invites applications from working groups of 3 – 8 people that are organized around a project or problem that concerns scholarly communications. These working groups can and should be diverse, consisting of scholars, librarians, publishers, technologists and folks from outside academia (journalists? museums? non-profits?). We hope that proposals will be very creative about connections, and include people who would like to work together even if they have not previously been able to do so.
The SCI Advisory panel will select 3 to 5 of these working group proposals and cover the costs for those teams to travel to the Triangle and spend four days together in Chapel Hill in a setting that is part retreat, part seminar, part development sprint and part un-conference. We want these groups to work together and to interact. The groups will, we hope, jump-start their own projects and “cross-pollinate” ideas that will advance and challenge each other’s projects and discussions.
The theme for the 2014 SCI is Scholarship and the Crowd. It will be held November 9-13 at the Rizzo Center in Chapel Hill, NC. Proposals are due by March 24.
The goal of the SCI is not to schedule breakthroughs but to create conditions that favor them. The Working Groups selected will set the agenda and define the deliverables. The Institute will offer the space, the environment and the network of peers to foster creative thinking, with the hope of both advancing the specific projects and also developing ideas and perspectives that can give those projects a broader potential to influence the landscape of publishing, digital humanities and other topics related to scholarly communications.
If you or someone you know might be interested in developing a proposal for this first Triangle-based SCI, you will find the call for proposals and an FAQ at trianglesci.org.
In the poster above, Fair Use Week at Harvard is connected directly to the origins of this unique aspect of American copyright law, through a statue of Justice Joseph Story, who first defined what we came to call fair use in an 1841 case involving the letters of George Washington. It is fitting that that case involved a scholarly work, a life of Washington, because fair use was then and still is today one of the most important underpinnings of scholarship. We argue about its scope sometimes, but we rely on it every day. The most basic relationship in academic writing, the quotation of a scholar in another scholar’s work, is a form of fair use that is so central and natural to scholarship that we forget what it really is and the body of law that it depends on. Fair Use Week is worthy of celebration on our campuses because it is a reminder that this aspect of copyright law is a sine qua non for scholarship and has been for a great many years.
Two institutions that I know of will be featuring fair use information and opinion in blogs, and I wanted to draw these resources to the attention of readers.
Ohio State University hosts a “Copyright Corner” blog that has been providing basic information about fair use all month long. During the next week it would be worthwhile for readers to review what has been written there.
At Harvard, a new blog called Copyright at Harvard Library will feature posts from invited guests for Fair Use Week. I hope readers will keep up with that blog, partly because one of those posts — on Tuesday, I am told — will be a contribution from me. And I thought I would offer here a quick summary of what I will say in some detail over there.
My post focuses on a case decided by the Second Circuit Court of Appeals last month. It was an odd case, involving a lawsuit brought over a surreptitious recording of a conference call made by the business news service Bloomberg. The recording was distributed to Bloomberg subscribers, and the company that held the call sued, claiming copyright infringement. There are two fascinating issues in the case, I think. The first involves that fundamental requirement for a copyright, fixation in a tangible medium of expression. Since Bloomberg recorded the phone call live, not from some prior fixation, it tried to defend itself by arguing that there was no copyright in the call for it to infringe. The second issue, of course, was fair use. Both the lower court and the Second Circuit ultimately decided the issue based on fair use, and the analysis that the appellate court applied, especially, is really fascinating. In my post I try to explain how extraordinary the analysis is, and why it has potential implications for the still-pending appeal in the Georgia State University copyright case and its fair use defense.
I hope that is enough to whet your appetite and send you to the Harvard Fair Use Week blog repeatedly this week, to read my contribution and those from Kenny Crews, Krista Cox and others.
In my previous two posts, I was addressing a misunderstanding that I am afraid might lead authors to be less attentive and assertive about their publication contracts than they should be. The specific issue was whether or not it is feasible to maintain that a copyright is transferred only in a final version of a scholarly article, leaving copyright in earlier versions in the hands of the author. I argued that this was not the case, that the distinction between versions is a construct used by publishers that has little legal meaning, and that author rights that do persist in earlier versions, as they often do, are created by the specific terms of a copyright transfer agreement (i.e., they are creatures of a license). These points, which I believe are correct, prompted a number of people to get in touch with me, concerned about how these specific “trees” might impact the overall forest of self-archiving policies and practices.
So now I want to make several points that all address one conclusion: this argument about the nature of a copyright transfer does not necessarily have any significant impact on what we do to enhance and encourage self-archiving on our campuses. Most of the practices I am aware of already take account of the argument I have been making, even if they are not explicit about it.
On the LibLicense list today, Professor Stevan Harnad, who is a pioneer in the movement to self-archive scholarly papers, posted a 10-point strategy for accomplishing Green open access. Essentially, he points out that a significant number of publishers (his number is 60%) allow authors to self-archive the final submitted versions of their articles, and that those who have retained this right should exercise it. Elsevier is one such publisher, about which more later. Harnad argues that there are other strategies available for authors whose copyright transfer agreements do not allow self-archiving of even the final manuscript. One option is to deposit the manuscript in a repository but embargo access to it. At least that accomplishes preservation and access to the article metadata, and it facilitates fulfillment of individual requests for a copy. Another option is to deposit a pre-print (the version of the article before peer-review) in a pre-print repository, which is a solution that has long worked well in specific disciplines like physics and computer science.
All of these strategies are completely consistent with the point I have been making about copyright transfer agreements. Harnad’s model recognizes that copyright is transferred (perhaps improvidently) to publishers, and is based on authors taking full advantage of the rights that are licensed back to them in that transaction. This makes perfect sense to me, and nothing I have written in my previous two posts detracts from this strategy.
One of the questions I have received a couple of times involves campus open access policies and how they affect, or are affected by, copyright transfers. These policies often assert a license in scholarly articles, so the question is essentially whether that license survives a transfer of copyright.
It is a basic principle of law, and common sense, that one cannot sell, or give away, more than one owns. So if an author has granted a license to her institution before she transfers her rights to a publisher, it seems clear that the license should survive, or, to put it another way, that the rights that are transferred to the publisher are still subject to this prior license. There was an excellent article written in 2012 by law professor Eric Priest about this situation, and his conclusion is “that permission mandates can create legally enforceable, durable nonexclusive licenses.” The article provides an extensive analysis of the legal effect of this “Harvard-style” license, and is well worth being read in its entirety by all who are interested in the legal status of Green open access.
An additional wrinkle to the status of a prior license is provided by section 205(e) of the copyright law, which actually addresses the issue of “priority between conflicting transfer of ownership and nonexclusive license.” This provision basically affirms what I have said above, that a license granted prior to a transfer of copyright survives the transfer and prevails over the rights now held by the transferee, IF it is evidenced by a written instrument. Because of this provision, some schools that have a license that is created by an open access policy also get a document from the author at the time of OA deposit that affirms the existence of that license. Such documentation helps ensure the survival of a policy-based license even after the copyright is later transferred to a publisher.
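For readers who like to see the moving parts, here is a toy encoding of that priority rule as I have described it. It simplifies section 205(e) down to the single scenario discussed in this post; real disputes, of course, turn on facts that a few booleans cannot capture.

```python
# A toy encoding of the section 205(e) priority rule, simplified to the
# scenario discussed above; not a substitute for reading the statute.

def license_survives_transfer(evidenced_in_writing: bool,
                              signed_by_owner: bool,
                              granted_before_transfer: bool) -> bool:
    """A prior nonexclusive license prevails over a later transfer of
    copyright if it is evidenced by a written instrument signed by the
    owner and was taken before the transfer was executed."""
    return evidenced_in_writing and signed_by_owner and granted_before_transfer

# An OA-policy license, documented at deposit and predating the
# publication agreement, should survive the later transfer:
print(license_survives_transfer(True, True, True))   # True
# An undocumented license is vulnerable:
print(license_survives_transfer(False, True, True))  # False
```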
Even when we decide that a license for Green open access exists and has survived a copyright transfer, however, we still have a policy decision to make about how aggressively to assert that license. Many institutional practices look to the terms of the copyright transfer and try to abide by the provisions found therein, usually relating to the version that can be used and when it can be made openly accessible. They do this, I think, to avoid creating an uncomfortable situation for the authors. Even if, legally, the license they granted would survive the transfer of rights, the authors (whom we are, after all, trying to serve) would be in a difficult place if a conflict with the publisher developed. So my personal preference is to conform our practice to reasonable publisher policies about self-archiving and to work with authors to get unreasonable policies changed, rather than to provoke a dispute. But this is a policy matter for specific institutions.
Finally, I want to say a couple of things specifically about Elsevier, since it was Elsevier’s take down notices directed against author self-archiving that began this series of discussions.
Elsevier’s policies permit authors to self-archive the final manuscript version of an article but not the published version, and, as far as I know, all of its take down notices were directed against final published versions on institutional or commercial websites. So it is true, in my opinion, based on the analysis I have presented over the past week, that Elsevier is legally justified in this take down campaign. It may well be a stupid and self-defeating strategy — I think it is — but they have the legal right to pursue it. Authors, however, also have the legal right, based on Elsevier’s policies that are incorporated into their copyright transfer agreements, to post an earlier version of the articles — the final author’s manuscript — in place of these final published versions. So I hope that every time Elsevier sends a take down notice directed against the author of the work in question, the article that is taken down is replaced by a final manuscript version of the same content.
As many know, Elsevier also has a foolish and offensive provision in its current copyright transfer agreement that says that authors are allowed to self-archive a final manuscript version of their article UNLESS there is an institutional mandate to do so. As I have said before, this “you may if you don’t have to but not if you must” approach is an unjustifiable interference with academic freedom, since it is an attempt to tie faculty rights to specific policies that the faculty themselves adopt to further their own institutional and academic missions. Elsevier should be ashamed to take this stance, and our institutions that value academic freedom should protest. But based on what has been said above, we can also see how futile this approach really is. If the institution has a policy-created license, that license probably survives the copyright transfer, as Eric Priest argues. In that case, the denial of a self-archiving right only in cases where a license exists is meaningless precisely because that license does exist; authors could self-archive based on the license and do not need the grant of rights that Elsevier is petulantly withholding. I said above that institutions should consider whether or not they want to provoke disputes by relying on the prior existence of a license to self-archive. Elsevier, however, seems to have decided to provoke exactly that dispute with this provision, and they are even more unwise to do so since it is likely to be a losing proposition for them.
Here is part of Richard’s post, which summarizes the discussion:
Last week, the Scholarly Communications Officer at Duke University in the US, Kevin Smith, published a blog post challenging a widely held assumption amongst OA advocates that when scholars transfer copyright in their papers they transfer only the final version of the article.
This is not true, Smith argued.
If correct, this would seem to have important implications for Green OA, not least because it would mean that publishers have greater control over self-archiving than OA advocates assume.
However Charles Oppenheim, a UK-based copyright specialist, believes that OA advocates are correct in thinking that when an author signs a copyright assignment only the rights in the final version of the paper are transferred, and so authors retain the rights to all earlier versions of their work, certainly under UK and EU law. As such, they are free to post earlier versions of their papers on the Web.
And here is the response that I just sent to the LibLicense list, in which I focus on copyright as protection over expressive content, rather than arbitrary distinctions between different versions of that content:
I had really hoped I could ignore this rather muddled controversy, mostly due to a lack of time to address it. But a tweet from Nancy Sims, of the University of Minnesota, made me realize that my original post used slightly careless language that may contribute to the confusion. So I feel I should set that straight, and respond to the whole business.
I wrote that different versions of an article were derivatives of one another. That is probably a defensible position, but Nancy made the point much clearer — the different versions are still the same work, so subject to a single copyright.
Throughout this discussion, the proponents of the position that copyright is transferred only in a final version really do not make any legal arguments as such, just an assertion of what they wish were the situation (I wish it were too). But here is a legal point — the U.S. copyright law makes the difficulty with this position pretty clear in section 202, where it states the obvious principle that copyright is distinct from any particular material object that embodies the copyrighted work. So it is simply not true to say that version A has a copyright and version B has a different copyright. The copyright is in the expressive content, not in different versions; if all embody substantially the same expression, they are all the one work, for copyright purposes, because the copyright protects that expressive content. Hence Nancy’s perfectly correct remark that the different versions are the same work, from a copyright perspective.
Part of the point I wanted to make in my original post is that this notion of versions is, at least in part, an artificial construction that publishers use to assert control while also giving the appearance of generosity in their licensing back to authors of very limited rights to use earlier versions. The versions are artificially based on steps in the publication process (pre-submission, submission, peer review, publication), not on anything intrinsic to the work itself that would justify a change in copyright status. If we look at how articles are really composed — usually by writing one file and then editing it repeatedly — it is easy to see how artificial, in the sense of unrelated to content, the distinctions are. How much time must elapse before a revision is a different version? If I do some revisions, then go have a cup of tea before returning to make other revisions, have I created two different “versions” entitled to separate copyright protection? The question is absurd, of course, and shows how unworkable the idea of different copyrights in different versions of the same work would be.
It has been said that no publisher makes the claim I am here suggesting. But if we look at actual copyright transfer agreements it is easy to see that they do. The default policies for Wiley, for example, tell authors that they can archive a pre-print and archive a post-print, subject to certain conditions, including rules about the types of repositories in which the archiving can take place and a limitation to non-commercial reuse. If an author transfers rights only in the final version, how can Wiley place restrictions on the use of these earlier versions? The better — indeed the only logical — interpretation is that the copyright that is transferred covers the work as a whole, which is the nature of copyright, and that Wiley then licenses back to authors certain rights to reuse different versions. Those version rights are based on what Wiley wants to allow and to hold on to, not on any legal distinction between the versions. Elsevier’s policies are similar — they allow the preprint to be used on any website, allow the post-print to be self-archived on a scholarly website ONLY if the institution does not have a mandate and with acknowledgement of the publisher, and do not allow any archiving of the final version. Again, all of this is grounded on the claim that a copyright inclusive of the different versions, because they are the same work, has been transferred to Elsevier.
Let’s imagine what would happen if a dispute ever arose over a use of an earlier version of an article after the copyright had been transferred. A court would be asked to determine if the use of the earlier version was an infringement of the copyright held by the assignee. Courts have a standard for making this determination; it is “substantial similarity.” So if the re-used version of the work was substantially similar to the work in which the copyright was assigned — that language is itself bound up in the misunderstanding I am trying to refute — a court would probably find infringement. This has been the case in situations where the works were much more different than two versions of a scholarly article. George Harrison, for example, was found to have infringed the copyright in the song “He’s So Fine” when he wrote “My Sweet Lord,” even though the court acknowledged that it was probably a case of unconscious borrowing (see Bright Tunes Music v. Harrisongs Music, 420 F. Supp. 177, S.D.N.Y. 1976). And the author of a sequel novel to “Catcher in the Rye” was held to have infringed copyright in Salinger’s novel even though they told very different stories, due to similarities in characters and incidents (Salinger v. Colting, 607 F. 3d 68, 2d Cir. 2010). If these very different “versions” of the same work were held to be copyright infringement, how is it possible that two versions of the same scholarly article could have separate and distinct copyrights?
In many ways I wish it were true that each version had a distinct copyright, so that transfer of the rights in one version did not impact reuse of the earlier version. That situation would make academic reuse much easier, and it would conform to a basic sense that most academics have that they still “own” something, even after they assign the copyright. But that position is contrary to the very foundations of copyright law (and not just U.S. law), which vests rights in the content of expression, not in versions that represent artificial points in the process of composition or publication. And much as this mistaken idea may be attractive, it has dangerous consequences; it gives authors a false sense that the consequences of signing a copyright transfer agreement are less draconian than they really are. Instead of plying our faculty with these comforting illusions, we need to help them understand that copyright is a valuable asset that should not be given away without very careful thought, precisely because, once it is given away, all reuse of the expression in the article, regardless of version, is entirely governed by whatever rights, if any, are licensed back to the author in the transfer agreement.
This is a difficult distinction for faculty authors to understand. My colleagues and I talk about it all the time with our faculty authors, but they persistently do not see much difference between the two versions, so they sometimes believe that there is little reason to observe the distinction. Publishers think (or at least say publicly) that they add a lot of value to submitted manuscripts, but a great many authors do not see it that way.
Unfortunately, some of the attention that this new strategy from Elsevier has garnered has made explaining what is going on to faculty authors a little more difficult. This article from The Economist called “No Peeking…” is a case in point. The article correctly suggests that this is going to prove a self-defeating tactic for Elsevier, whose desperation to stem the movement toward open access often leads it into foolish decisions. The Economist, however, misstates the copyright law in its article in a way that will unfortunately reinforce a common misconception on campuses.
Here are three sentences from The Economist article that embody the misconception I am concerned about:
Like journalists writing for a newspaper, academics submitting an article to a journal usually sign contracts which transfer copyright to the publisher…. As the University of California, Irvine, which was on the receiving end of some of the takedown notices, points out in advice to its staff, it is usually only the final version of an article, as it appears in a journal, that is covered by publisher’s copyright. There is nothing to stop scientists making earlier versions available.
The problem with the first sentence is that academic authors are really not like journalists. Many journalists are full-time employees of their newspapers, so that their articles are owned by the newspaper from the start, as works made for hire. On the other hand, academic authors are not employees of publishers and their writings are not work for hire. Their rights (as well as those of some free-lance journalists) are entirely governed by the contracts they sign. The important implication of this is that academic authors have much more control over the rights they surrender and retain than do journalists; faculty members can simply refuse to transfer copyright (because they own it unless and until it is transferred in writing) or they can negotiate the exact terms of publication, transferring or licensing some rights and holding on to others.
The bigger issue in this article, however, is in the second and third sentences quoted above, about how the copyright that is transferred to publishers only “covers” the final version of the article. This is a common misconception that is both wrong and dangerous. It is the same misconception that leads some people to believe that if they re-draw an illustration, chart or table from a copyrighted publication, they do not implicate copyright. But the truth is that a copyright includes any work that is derived from the copyrighted work and is “substantially similar.” When someone wants to use a figure from a published work, they may well be able to do so under fair use, but redrawing the figure, unless it is redrawn into something quite different (which would undermine the purpose), does not alter the copyright situation.
When we turn to the issue of article versions, the situation is the same. Each version is a revision of the original, and the copyright is the same for all these derivatives. When copyright is transferred to a publisher, the rights in the entire set of versions, as derivatives of one another, are included in the transfer. It is not that authors are allowed to use their post-prints because the rights in that version are excluded from the transfer; they are allowed to use post-prints only because the right to do so, in specified situations, is licensed back to them as part of the publication agreement.
Once a copyright transfer has been signed, all of the rights that the author may still have exist because of specific contractual terms, which are usually contained in the transfer document itself. In short, these agreements usually give all of the rights under copyright to the publisher and then license very small, carefully defined slivers of those rights back to the author. One of those slivers is often, but not always, the right to use a submitted version, or post-print, in carefully limited ways. For example, many publishers allow posting of the submitted version only on defined websites, usually a personal site or institutional repository. Often the contracts also allow posting of the submitted version only after some lapse of time. These restrictions would not make sense or be enforceable IF the author retained some kind of copyright in earlier versions, as The Economist implies. But they do not; they have only, and exactly, what the contract gives back to them.
One important lesson to be gained from this correction of the language of The Economist article is that publication contracts are extremely important. They entirely determine what an author can do with his or her own work in the future. For many academics, signing such agreements is a very bad idea; they should be negotiated, either to make them licenses to publish, which allows the author to retain her copyright, or to be certain that the rights that are licensed back are broad enough and flexible enough to permit the future uses the author wants. Before the transfer, the author has a good deal of leverage to negotiate these agreements, but afterward she has very little. So it is vital to pay attention to the agreement itself and not rely on a false sense of security based on a misconception of how copyright works.
Another point to learn from this situation is that the whole idea of article “versions” is artificial. It has been developed primarily by publishers in order to make a claim that they add substantial value to the final published version, which may or may not be true, depending on the article and the publisher. Another marketing advantage that publishers get from this fabricated distinction is the ability to claim that they support author rights and reuse of articles to promote better access, while still retaining the ability to slap down authors who use their own articles in ways the publishers have not pre-approved. As my colleague Will Cross has put it, “This versioning is a creation of publishers to reinforce the sense that they are following the academic ‘gentleman’s agreement’ that Elsevier has been breaking here.”
[Hat tip to Will Cross and to Lisa Macklin of Emory, who discussed the implications of this particular mistake with me by e-mail and provided some ideas incorporated herein. Will qualified his statement quoted above by acknowledging that pre-prints, especially, have a longer history, but the use of these distinctions as contractual dividing lines is related to recent pressures on publishing.]
Since this exchange I have learned that the Davis study is being presented to legislators to prove the point Crotty makes — that public access policies should have long embargoes on them to protect journal subscriptions. It is worth noting that Davis does not actually make that claim, but his study is being used to support that argument in the ongoing debate over implementing the White House public access directive. That makes it more important, in my opinion, to be clear about what this study really does tell us and to recognize a bad argument when we see it.
Here is my original reply to the LJ writer, which is based on the fact that this metric, “article half-life,” is entirely new to me and its relevance is completely unproved. It certainly does not, in my opinion, support the much different claim that short embargoes on public access will lead to journal subscription cancellations:
I have to preface my comments by saying that I was only vaguely aware of Davis’ study before you pointed it out. So my comments are based on only a very short acquaintance.
I have no reason to question Davis’ data or his results. My question is about why the particular focus on the half-life of article downloads was chosen in the first place, and my issue is with the attempt to connect that unusual metric with the policy debate about public access policies and embargoes.
As far as I can tell, article half-life tells us something about usage, but not too much about the question of embargoes. The discussion of how long an embargo should be imposed on public access is supposed to focus on preventing subscription cancellations. What I do not see is any connection between this notion of article usage half-life and journal cancellation. It is a big leap from saying that a journal retains some level of usefulness for X number of years to saying that an embargo shorter than X will lead to cancelled subscriptions, yet I think that is the argument that is being made.
Here are two paragraphs from Crotty’s Scholarly Kitchen post:
[snip]“As I understand it, the OSTP set a 12-month embargo as the default, based on the experience seen with the NIH and PubMed Central. The NIH has long had a public access policy with a 12-month embargo, and to date, no publisher has presented concrete evidence that this has resulted in lost subscriptions. With this singular piece of evidence, it made sense for the OSTP to start with a known quantity and work from there.
The new study, however, suggests that the NIH experience may have been a poor choice for a starting point. Clearly the evidence shows that by far, Health Science journals have the shortest article half-lives. The material being deposited in PubMed Central is, therefore, an outlier population, and many (sic) not set an appropriate standard for other fields.”[end quotation]
What immediately strikes me is the unacknowledged transition between the two paragraphs. In the first he is talking about lost subscriptions, which makes sense. But in the second he is talking about this notion of download half-life. What neither Davis nor Crotty gives us, however, is the connection between these half-life numbers and lost subscriptions. In other words, why should policy decisions about embargoes be made based on this half-life number? At best the connection between so-called article half-life and cancelled subscriptions is based on a highly speculative argument that has yet even to be made, much less proved. At worst, this metric is irrelevant to the debate.
My overall impression is that the publishing industry is unable to show evidence of lost subscriptions based on the NIH public access policy (which Crotty acknowledges), so they are trying to introduce this new concept to cloud the discussion and make it look like there is a threat to their businesses that still cannot be documented. I think it is just not the right data point on which to base the discussion about public access embargoes.
A second point, of course, is that even if it were proved that there would be some economic loss to publishers with 6 or 12 month embargoes, that does not complete the policy discussion. The government does not support scientific research in order to prop up private business models. And the public is entitled to make a decision about return on its investment that considers the impact on these private corporate stakeholders but is not dictated by their interests. It may still be good policy to insist on 6 month embargoes even if we had evidence that this would have a negative economic impact on [some] publishers. Government agencies that fund research simply are not obligated to protect the existing monopoly on the dissemination of scholarship at the expense of the public interest.
By the way, Crotty is wrong, in the passage quoted above, to suggest that the NIH experience is the only evidence about whether short embargoes impact subscriptions. The European Commission did a five-year pilot study testing embargoes across disciplines and concluded that maximum periods of six months in the life sciences and 12 months for other disciplines were the correct embargoes.
In addition to what I said in the long quote above, I want to make two additional points.
First, it bears repeating that Davis’ study was commissioned by the publishing industry and released without any apparent peer-review. Such review might have pointed out that the actual relevance of this article half-life number is never explained or defended. But the publishing industry is getting to be in the habit of attacking open access using “data” that is not subject to the very process that they tell us is at the core of the value that they, the publishers, add to scholarship.
The second point is that I have never heard of any librarian who used article half-life to make collecting or cancellation decisions. Indeed, I had never even heard of the idea until the Davis study was released, and neither had the colleagues I asked. We would not have known how to determine this number even if we had wanted to. It is not among the metrics, as far as I can determine, that publishers offer to us when we buy their packages and platforms. So it appears to be a data point cooked up because of what the publishing industry hoped it would show, which is now being presented to policy-makers, quite erroneously, as if it were relevant to the discussion of public access and embargoes. Crotty says in his post that rational policy should be evidence-based, and that is true. But we should not accept anything that is presented as evidence just because it looks like data; some connection to the topic at hand must be proved or our decision-making has not been improved one bit.
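For what it is worth, here is one plausible way such a number might be computed, assuming (and this is only my inference, since no methodology has been shared with librarians) that “half-life” means the time by which half of all recorded downloads of an article have occurred. The usage curve below is purely hypothetical.

```python
# One plausible computation of an article download "half-life": the month
# by which cumulative downloads first reach half of the recorded total.
# Both the definition and the usage curve here are hypothetical.

def download_half_life(monthly_downloads):
    """Return the 1-indexed month in which cumulative downloads first
    reach half of the total recorded downloads."""
    total = sum(monthly_downloads)
    cumulative = 0
    for month, count in enumerate(monthly_downloads, start=1):
        cumulative += count
        if cumulative * 2 >= total:
            return month
    return len(monthly_downloads)

# Hypothetical curve: heavy early use followed by a long tail.
usage = [400, 250, 150, 100, 80, 60, 50, 40, 35, 30, 25, 20]
print(download_half_life(usage))  # 2 -- half the downloads occur in two months
```

Even granting a computation like this, the leap from “half the downloads happen after month X” to “an embargo shorter than X will cause cancellations” remains exactly the unproven step I have been describing.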
We cannot say it too often — library support for public access policies is rooted in our commitment to serve the best interests of scholarship and to see to it that all the folks who need or could use the fruits of scholarly research, especially taxpayer-funded research, are able to access it. We are not supporting these policies in order to cancel library subscriptions, and the many efforts in the publishing industry to distract from the access issue and to claim, on the basis of no evidence or irrelevant data, that their business models are imperiled are just so many red herrings.
NB — After this was written I discovered the post on the same topic by Peter Suber from Harvard, which comes to many of the same conclusions and elaborates on the data uncovered by the European Commission and the UK Research Councils that are much more directly relevant to this issue. You can read his comments here.