
Fair Use is for Innovation

[cross-posted from the Copyright at Harvard Library Blog and written for Fair Use Week]

Remember Betamax? I do, but mostly for the fair use case that it precipitated, Sony Corp. v. Universal Studios, Inc. That case was decided by the Supreme Court in 1984. Among other things, it stands for the proposition that fair use allows for copying of copyrighted works for personal, non-transformative purposes, such as in-home recording of television shows to view at a later time. Betamax machines aren’t particularly relevant anymore, but the case’s approach to how courts should apply fair use in light of technological change is as relevant now as ever.

Fair use and the purpose of copyright

At this point, personal home copying is commonplace; we do it all the time with home DVRs, when we back up our computers and phones, and when we transfer mp3s from an old device to a new one. It’s worth remembering that the legality of this sort of everyday copying, and the legality of the technology that supports it, wasn’t always accepted.

One of the issues that the Betamax case brought to a head was what courts should do when faced with new technology that makes following the literal terms of the Copyright Act result in legal outcomes that don’t match up with copyright’s underlying purpose. The Betamax court framed the issue this way:

“The immediate effect of our copyright law is to secure a fair return for an ‘author’s’ creative labor. But the ultimate aim is, by this incentive, to stimulate artistic creativity for the general public good. ‘The sole interest of the United States and the primary object in conferring the monopoly,’ this Court has said, ‘lie in the general benefits derived by the public from the labors of authors.’  When technological change has rendered its literal terms ambiguous, the Copyright Act must be construed in light of this basic purpose.” (citations omitted).

Fair use is one of the tools that gives courts some flexibility in construing the terms of the Copyright Act in light of its basic constitutional purpose. It is an “equitable rule of reason” that requires courts to “avoid rigid application of the Copyright Statute when on occasion it would stifle the very creativity which that law was designed to foster.” In that role, fair use has, through the years, facilitated all sorts of technological advancements, from video game development to plagiarism detection software to search engines to image search.

ReDigi and digital resale

One area where we are seeing some interesting emerging innovation is in technology that facilitates secondary markets for digital copies. As a result of so much invention in personal copying and storage devices (and distribution mechanisms to get content to those devices), we now find ourselves in a situation where users have legitimately purchased copies of works for which they never obtained a physical copy. iTunes is the most prominent example, where $0.99 will buy you an mp3 and a variety of other files.

What users can do with those copies is an interesting question. In the past, purchasers of physical copies of books or records could resell, lend, or even destroy the copies they purchased. Congress and the courts recognized that it was desirable for such secondary uses to go unimpeded by copyright, and so crafted a limit on the ability of copyright holders to control downstream distributions of their works after the “first sale” of the copy.

For digital copies, however, the question is a bit more complicated. Users who want to resell or lend their digital copies may be free to “distribute,” but reselling or lending digital copies also, technically, requires a reproduction of the file from one device to the next. The first sale rules, at least as codified in the statute, only address distribution and not reproduction, so technically these resales don’t fall within its scope.

This seems like a prime opportunity for fair use to jump in and bridge the gap between the strict terms of the Copyright Act and the underlying purpose of what the Act is trying to achieve. That’s precisely the issue currently being argued on appeal by a company called ReDigi, which has set up its own online marketplace for reselling your unused iTunes files. ReDigi lost before the lower court, but it is now taking up its case before the Second Circuit Court of Appeals.

As with the Betamax case, the implications for other applications of fair use extend far beyond the immediate uses that ReDigi seeks to make in reselling mp3s. Among other things, it could facilitate library lending of e-books (as argued in this excellent amicus brief from ALA, ACRL, ARL, and the Internet Archive), and could relieve all sorts of legal concerns about transferring and providing access to born-digital archival materials.  It’s the kind of case that could also fuel the vision outlined by the Internet Archive in its ambitious $100 million, 4 million book digital-lending project.

Whether or not ReDigi wins this particular battle, I think it’s worth celebrating that fair use has provided the flexibility to pursue these sorts of innovations in the past that help fulfill the Copyright Act’s Constitutional purpose of promoting progress.

How to Restrict Access to the Law (and Make Money Doing It!)

Standardization is really important. Huge parts of modern life—everything from sending an email to the structural integrity of your car—depend on standards. Among other things, standards make sure we’re all on the same page. When I say “2017-02-07” you might have some clues about what I mean, but if I tell you that this string of numbers is expressed according to ISO 8601, you’d know for sure that I’m referring to today’s date.
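As a small illustration of the point (a sketch of my own using Python’s standard library; the standard itself is what matters, not the code), ISO 8601’s fixed year-month-day ordering is exactly what removes the ambiguity from a string like “2017-02-07”:

```python
from datetime import date

# ISO 8601 specifies the ordering YYYY-MM-DD, so there is no question
# whether "2017-02-07" means February 7 or July 2.
d = date(2017, 2, 7)
print(d.isoformat())  # -> 2017-02-07

# Without the standard, "02/07/2017" could be parsed either way
# depending on local convention.
```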

Standardization is so important, in fact, that a large number of standards are made part of the law. On Friday the Federal District Court for the District of Columbia issued an opinion in ASTM v. Public.Resource.Org, addressing some hard questions about the extent to which copyright applies to standards, and in particular standards that have the force of law by virtue of their official adoption by regulatory agencies. The court concluded that the standards at issue in that case—a variety of technical and education standards developed by ASTM, APA, and several other groups—are protected by copyright and that their incorporation into binding law through regulations does not affect that copyright-protected status. I find that conclusion troubling.

A Standards Business Model

First, why do organizations like ASTM care? Imagine that you’re developing a new standard and you think you need to make some money to recoup your costs. One way to do that is to charge people who want to use the standard. To make that work you’d probably try to obtain some form of intellectual property protection so that you have leverage when asking for your fee. What kind of IP protection do you want? There might be some ways that you could try to work your standard into something patentable, but patents are expensive and hard to obtain. Another option is copyright. Copyright lasts much longer, is easier to obtain, and has some hefty enforcement provisions (statutory damage awards up to $150,000 per work infringed). So, you go with asserting copyright.

Next, you need to get people using this standard. Of course, voluntary adoption is great. But mandatory compliance is even better. So, you lobby some government agencies to adopt your standard as binding law itself.  That way, anyone who is obligated to follow the law will have to also follow your standard. It’s important, though, that the text of your standard not be reproduced in the regulation itself. Regulations are generally freely available, but you need to sell the thing. Instead, you aim to have the standard “incorporated by reference” into the regulation; the regulation says that the public must comply with Standard X and gives a reference to it, but if a member of the public wants to know what Standard X actually says in order to comply with the law, they’ve got to go buy a copy from you.

Now, I don’t mean to say that incorporation of standards into law is a bad thing or is only done to make money. It isn’t, but the restricting-access part of this model seems problematic. It’s also at the core of the business model staked out by ASTM and the other plaintiffs in ASTM v. Public.Resource.Org. Posting free copies of those standards to the web for public access, as Public.Resource.Org does, poses a threat to that model.

Should Standards Receive Legal Protection?

First, the business model is premised on copyright protection of standards, but there are persuasive arguments for why standards should be excluded from copyright protection. The text of the Copyright Act is a good place to start. Section 102(b) states specifically that “systems,” among other things, are not protected. That and a variety of other theories for why copyright protection should not apply were raised in the ASTM case.  This post from TechDirt does a good job working through the copyright-related arguments. The court rejected them all, but I imagine we will hear more about them on appeal.

Beyond copyright, though, the main reason I find the ASTM decision troubling is that it gives relatively little attention to fundamental questions about due process, the public’s right to access the law, and earlier caselaw on the subject.  Rather than write out my own ideas on this, I’ll leave you with this good quote from the 1980 First Circuit case Bldg. Officials & Code Adm. v. Code Tech., Inc., which outlines those concerns and raises some good questions that I hope will be addressed on appeal in the ASTM case:

“[Earlier Supreme Court cases hold that] the public owns the law not just because it usually pays the salaries of those who draft legislation, but also because . . . ‘Each citizen is a ruler, –a law-maker.’ The citizens are the authors of the law, and therefore its owners, regardless of who actually drafts the provisions, because the law derives its authority from the consent of the public, expressed through the democratic process.

Along with this metaphorical concept of citizen authorship, the cases go on to emphasize the very important and practical policy that citizens must have free access to the laws which govern them. This policy is, at bottom, based on the concept of due process. . . . Due process requires people to have notice of what the law requires of them so that they may obey it and avoid its sanctions. . . . But if access to the law is limited, then the people will or may be unable to learn of its requirements and may be thereby deprived of the notice to which due process entitles them. [Defendant] points out that the holder of a copyright has the right to refuse to publish the copyrighted material at all and may prevent anyone else from doing so, thereby preventing any public access to the material. . . . We cannot see how this aspect of copyright protection can be squared with the right of the public to know the law to which it is subject.”

Rebels in the Campus Bookstore

A guest post by Will Cross, Director of Copyright and Digital Scholarship at North Carolina State University

As the semester winds down most normal people are sweating through final projects, scheduling visits with family and friends, or looking forward to a well-deserved holiday break by the fire (or at least the warming glow of the new Star Wars movie).  I can’t stop thinking about textbooks.

Several recent events have kept this topic on my mind.  First, Kevin and I are preparing to teach a class in the spring and we’re currently putting the finishing touches on our assigned readings.  Sitting at the breakfast table working through the syllabus, I was struck by a seemingly-unrelated comment from my wife, Kimberly, who is finishing her first semester in a doctoral program.  Making her own plans for the spring, she noted “I need to decide if I’m going to renew my statistics textbook.”

Readers who have been out of school for a few years might be surprised that many students like Kimberly rent, rather than purchase, their more expensive textbooks.  If textbook rental companies like Chegg and College Book Renter are not familiar names, you may also be surprised by how quickly textbook prices have spiraled out of control in the past decade.  Increasing at nearly triple the rate of inflation, textbook costs have outpaced rises in health care and housing prices, leaving students with an expected bill of more than $1,200 a year.

Faced with these unsustainable costs, students like Kimberly find themselves in an arms race, seeking alternative channels to acquire textbooks while publishers work to plug leaks in their captive marketplace.  Indeed, one of the largest copyright cases decided by the Supreme Court in recent years resulted from publishers’ attempt to create a “super-property” right in order to quash the sale of less expensive international textbooks.  The following year a casebook company attempted something similar using license provisions to strip property rights from students who “purchased” (ironically) their property law textbook.

While prices have gone up, student spending has not always followed suit, with many students renting, borrowing, or pirating textbooks.  Many more simply choose their courses and majors based on the costs of textbooks or delay their purchases to determine the extent to which a title is used in class, setting them back days or weeks in assigned readings.  Of greatest concern, a recent PIRG survey revealed that more than 65% of students simply muddle through with no textbook, even though the majority recognized that this presented a “significant concern” for their ability to successfully complete the course.  As a result, more than 10% of students fail a course each year because they simply cannot afford the book.

Textbook costs have priced many students out of equal participation in higher education, and colleges and universities should regard this as a social justice issue that threatens students’ academic progress.  Students have written powerfully about these issues on social media, using hashtags like #textbookbroke to document the burdens created by high prices.  For example, tweets from Kansas’ #KUopentextbook project have documented the harm done by students’ lost opportunities to travel to conferences, take unpaid internships, and compete on equal footing in the classroom.  As one student put it, “my wage shouldn’t determine my GPA.”

Closed, commercial textbooks also do significant harm to instructional design and academic freedom, forcing instructors to use one-size-fits-all books rather than diverse, tailored course materials.  This issue received national attention in November when an instructor was formally reprimanded for refusing to assign a $180 algebra book written by the chair and vice chair of his department.  As SPARC’s Nicole Allen notes, the well-intentioned practice of assigning a single book for multiple sections was designed to support a strong local used-book market but in practice it often entrenches a system of static commercial works.  It can also homogenize educational materials, limiting them to publisher-approved narratives that inhibit an instructor’s ability to bring her own voice and experience into the classroom.  Indeed, many publishers include value-added materials like test banks and pre-made assignments designed to create textbooks that are fully “teacher-proof.”

Students are often caught in the crossfire of a broken textbook market where books are sold by a small group of for-profit publishers who control 80% of the market, and purchasing decisions are made by faculty instructors but students are asked to pick up the bill.  This situation – where for-profit publishers leverage faculty incentives to exploit a captive academic market – should sound familiar to anyone working to bring open access to scholarly publishing.  The scale, however, is quite different: the textbook market exceeds the scholarly journal market by roughly $4 billion each year.

As they have with open access, academic stakeholders have begun to rebel, designing open materials that are not just cheaper than closed works but are positively better.  These open educational resources (OERs) may be peer-reviewed Creative Commons-licensed textbooks like those found in Rice University’s OpenStax program or the University of Minnesota-led Open Textbook Network. They also encompass modular learning objects like those found in the MERLOT repository or even full courses like those offered through MIT’s OpenCourseWare.  Community colleges and system-wide efforts like Affordable Learning Georgia have been particularly effective in this space, with programs like Tidewater’s “Z-Degree” that completely remove student textbook costs from the equation.

In the past several years, academic libraries have joined the fray, raising awareness, offering grants, and collaborating with faculty authors to create a diverse body of open educational resources.  In the NCSU Libraries, we have followed the outstanding examples of institutions like Temple and UMass-Amherst by offering grants for faculty members to replace closed, commercial works with open, pedagogically-transformative OERs.  These projects create massive efficiencies for libraries – spending a few thousand dollars to save students millions – and a growing body of empirical data indicates that student learning and retention are improved by open materials.

It’s no surprise that an open textbook would be more effective than one that a third of students can’t afford to buy.  The greatest potential for OERs, however, comes from the way they empower instructors and engage with library expertise.  The “teacher proof” books offered today frequently reduce instructors to hired hands, reciting homogenized narratives approved by for-profit publishers.  In contrast, as one recent study concluded, an OER “puts ownership of curriculum directly back into the hands of teachers, both encouraging them to reflect on how the materials might be redesigned and improved and empowering them to make these improvements directly.”  Combined with support from libraries for instructional design, copyright and licensing, and digital competencies, OERs have the potential to transform pedagogy at the deepest levels.

For today’s students, textbook prices mean more than just a few extra days of subsisting on ramen noodles.  Too often, students have to choose between adding another thousand dollars to an already historic debt load or trying to get by without essential resources, and closed, commercial textbooks often leave faculty instructors with no choice at all.  These, to borrow a phrase, aren’t the books we’re looking for.

Happy Birthday and extended collective licensing

I had not intended to write about the case decided last month involving the claim by Warner/Chappell Music that they owned the copyright in the song “Happy Birthday To You.”  I figured that it would be so widely covered that I would have little to contribute.  Obviously I have changed my mind, and it is partly because of the nature of the coverage I have seen.

Consider this story from Reuters, which says that the judge in the case ruled that Warner/Chappell Music does not hold a valid copyright in the song, and that that ruling puts “Happy Birthday To You” in the public domain.  Unfortunately, this is only half right.  The decision is complicated and careful, making a number of important distinctions, as legal arguments must always do.  The court held, in fact, that the tune to “Happy Birthday,” originally written for a song called “Good Morning to All,” is in the public domain.  But about the lyrics to “Happy Birthday,” the court remained uncertain. Judge George King held that there is no evidence that Warner/Chappell Music ever received a valid transfer of rights in the lyrics, so they are not the legitimate copyright holder, but he lacked sufficient evidence to determine the status of the lyrics with certainty.  The decision lists several possible scenarios, including that the lyrics are themselves in the public domain, but it also acknowledges the possibility that the song is owned by someone who is either unknown or not able to be found.

In short, the lyric to “Happy Birthday to You” was found to be an orphan work, although the court didn’t put it that way.  This is a scenario familiar to most folks in the library community, where we can gather some evidence about ownership of a work, but cannot arrive at a definitive conclusion about the existence of rights, or the ownership of them if they do exist.  There is a nice discussion here of the problems involved in proving that one owns an older copyright, as in this case, by Laura Quilter from U. Mass Amherst.  But what really interests me about the situation uncovered in this decision is how it reflects on the proposals being floated by the Copyright Office to address the orphan work problem through extended collective licensing.

The Copyright Office scheme would require users to pay a set licensing fee to a collective rights management organization (CRO) if they wanted to use a putative orphan work.  The CRO would then be responsible to make reasonably diligent efforts to find a rights holder.  If a rights holder was found, licensing revenue would be disbursed by the CRO.  If an owner could not be found, the money would eventually be dedicated to some fund for the benefit of creative artists and, of course, to the maintenance of the CRO’s own bureaucracy.

Two things we can be sure of about this proposal.  One is that it would create a bureaucracy which would inevitably take its own maintenance and support as a top priority.  It is well-documented that such agencies have high overhead and pay out relatively low amounts to artists, even in situations where those artists are known or easy to identify.  The second thing I think we can be sure of is that this scheme would reduce the role of fair use in mass digitization projects.  Even if the scheme included a so-called savings clause for fair use, the existence of a licensing scheme, even when the purported licensor does not actually hold the rights, would chill efforts to apply fair use to many projects.

With that background in mind, what does the “Happy Birthday” decision add to our thinking about this ECL proposal?

First, let’s think about the situation that led to the case, where Warner/Chappell Music was collecting licensing fees without any valid claim of ownership in the “Happy Birthday” song.  I am not asserting that Warner/Chappell Music was necessarily acting in bad faith; they had some reasons to believe they were valid rights holders, but those were dismantled in the court’s opinion.  Nevertheless, for many years users paid fees that we now believe were unnecessary, and fair use was badly curtailed, especially in things like documentary films, by litigation threats.  This situation would be replicated under the Copyright Office’s ECL plan.  Users would be paying fees to a licensor that did not actually hold rights, so those fees would be a pure loss in the economic realm of copyright (although they would support the creation of an otherwise unnecessary bureaucracy).  In the same way as with “Happy Birthday,” the availability of a putative licensor would have a chilling effect on fair use, especially where the use would be publicly accessible, as with documentary films, in the case of “Happy Birthday,” or mass digitization projects, in the case of libraries.  The upshot of both situations is the same — economic loss without any real public benefit.

The second consideration we can glean from the “Happy Birthday” case is about just how hard it is to determine the rights holders for many orphan works.  With all the powers of discovery and subpoena that were available to the court, it was unable to determine if the “Happy Birthday” lyrics are in the public domain, or, if they are still owned by someone, who that owner might be.  In an ECL scheme this problem would exist both for users and for the CRO, and it is severe enough to render the whole plan inefficient and unworkable.  If users were required to make some determinations before applying for such a license, that would increase the cost of every project without producing much in the way of useful results; many situations, as with “Happy Birthday,” could require tremendous investment of resources without bearing any fruit.  On the other side, the CRO would be unlikely, in similar situations, to ever actually find a rights holder to pay.  Here too, lots of resources (provided by the users) could easily be wasted in fruitless quests for rights holders.  And, of course, any leftover money would equally be wasted by providing for the expenses of an organization that would be unneeded and unable to fulfill its purpose.

If we take a serious look at what happened in the “Happy Birthday” case, where a putative rights holder was found to be collecting fees for something it did not own, and the actual rights situation proved impossible to determine even after all of the discovery powers of the court were exhausted, we should see an object lesson in what a bad idea, economically, an extended collective licensing scheme for orphan works would be.  Fair use is a workable and economically much more efficient approach to digitization projects that involve orphan works.

Where does FERPA fit?

When I wrote my last blog post about contract law and the issue of licensing student work for public distribution, several people asked me about FERPA, the Family Educational Rights and Privacy Act.  Basically, the questions amounted to this: Don’t we need to think about more than just the copyright licensing issue when we put student work in public, or when we require students to do their work in public?  And the answer, of course, is “yes.”  That previous post was focused on the licensing question, and made only a passing reference to potential privacy issues.  Here, I would like to look more closely at the issue of FERPA and student work in public, while acknowledging that I am not a FERPA expert.

The examples I gave about the types of assignments that might be made public, or might be done in public from the start, offer an interesting hierarchy regarding FERPA, I think.  So I want to address them in three categories.  First, things like theses, final papers and honors projects that an institution might want to put in its repository, then the issue of art exhibitions, and finally FERPA concerns when students work directly in a public forum.

The conversation started around the idea that a school might want to put final papers from a class, or perhaps capstone or honors papers, into its open access repository.  I noted that, as a matter of copyright law, it was probably enough to inform the students of this intent in the syllabus, so that subsequently handing in the work forms an implied license.  I still think this is enough to deal with the issue of copyright, but it is not enough from the perspective of FERPA.  Rather, FERPA requires a written, dated, and signed waiver for educational records, such as these types of assignments, that are “in our keeping,” to be made public.  So for a final paper or project that is handed in to faculty and then released to the public through a repository, a document waiving FERPA in regard to that paper or project must be obtained.  This written document could also serve as an explicit copyright license, but, as I say, it is necessary as a waiver of FERPA, while copyright can be licensed by implication.

The situation is more ambiguous when we ask about art projects that are handed in with the expectation that they will be part of a public exhibition.  Once again, what I said about an implied copyright license for public display applies, but it seems like the FERPA waiver is also often implied.  On the surface there seems to be no difference between the art project handed in for a grade and the final paper or honors thesis.  We know we cannot put out a stack of papers and invite others, even other students, to look through them.  Yet, with art exhibitions, schools seldom obtain FERPA waivers; they simply hang the works, which surely are also “educational records in our keeping,” on the walls of a gallery and invite the public in to look.  My friend Stephen McDonald, who is General Counsel for the Rhode Island School of Design and one of higher education’s foremost authorities on FERPA, often speaks of an “implied pedagogical exception” to FERPA, and I think that construct might apply here.  The Family Policy Compliance Office at the Department of Education, which oversees FERPA compliance, has said fairly often that FERPA is not intended to interfere with ordinary pedagogical practice.  And they have been clear that putting a thesis or dissertation on a library shelf, which also involves public access to an educational record, is not a problem.  Perhaps the art works can be thought of in a similar way.  And it may be important that art is made for display; that is what its creators expect, and inclusion in an institutional art exhibit is desirable for them.

So in this example, it seems that both a copyright license, for public display of an art work, and a waiver of FERPA privacy rights are being implied.  I think it is important that we separate copyright and FERPA in our minds, and think about each of them carefully.  But it is interesting to see how they diverge and converge.

Finally, what do we make of those assignments where students are asked to do their work in public right from the start, either by creating a web page, developing a blog, posting a video to YouTube, or having a class discussion on Twitter?  This is not an abstract question; our faculty are making such assignments all the time.  A library school student I know recently told me that she elected not to sign up for a class because of a requirement that each student post a specified number of tweets.  Once again, I think that the copyright situation is addressed by an implied license.  But what of FERPA? The first question is really whether these kinds of materials are educational records as defined by FERPA in the first place.  The issue is whether the “records” are ever “in our keeping” when the student creates them directly on a public platform.  It may well be that FERPA is not implicated at all in this scenario, based on a strict reading of its definition of an educational record.  And, of course, that reading is entirely congruent with what student expectations must be in this situation; they would hardly expect privacy when their work is a webpage or a set of tweets. In my opinion, however, this does not entirely settle the matter.

Even if FERPA does not apply, some of its principles, based on the idea of protecting our students, are important and should be accounted for.  A couple of years ago I was asked precisely this question, and those interested can read this blog post on the HASTAC site that was written by Professor Cathy Davidson, formerly of Duke and now at the CUNY Graduate Center, based on advice I gave about this type of assignment.  To summarize that advice here, I suggested three important steps to respect student privacy even if FERPA was not implicated by the assignment to work in a public forum.  These steps are, I believe, congruent with what I have been saying about implied licenses throughout these two posts.  First, students should know about the requirement in advance; they should be informed by the syllabus while they still have an option not to take the class.  Second, provision should be made for students to participate pseudonymously, a step that would clearly resolve any FERPA problem that might exist.  And, finally, I suggested that provision be made, at least in the instructor’s own mind, for an alternative assignment that could be available to the student who really needs to take the course but, for whatever reason, does not want to do his or her work in public.  Of course, instructors are entitled to assess the validity of those reasons, consider the pedagogical benefit from public work, and evaluate any proffered reasons why a pseudonym would not be a sufficient solution.

I actually suspect that the recalcitrant student who simply does not want class work done in public is a vanishing breed.  Most of our students today are very comfortable with having their writing, art works, and opinions on the Web.  But when they are not, we should take steps to accommodate their discomfort without compromising the pedagogical value we believe is behind the assignment.  Indeed, the fundamental conviction behind all of this extended discussion about student copyright and FERPA rights is to suggest that these legal regimes can be managed in such a way that students are respected while still taking advantage of the pedagogical opportunities that the digital environment offers.  Neither of these legal structures needs, or should be allowed, to make the Internet a “no go zone” for student work; they just call on us to think carefully and respectfully about that work, and the students who create it.

A win, oddly

Because I am on vacation this week and have very intermittent Internet access, I am hardly the first to announce that the Second Circuit Court of Appeals affirmed the lower court decision (mostly) in the Authors Guild v. HathiTrust lawsuit. I am a bit paranoid about major decisions coming down on days when I am out of touch, but that is another matter. The important point is that the decision is another important win for libraries and fair use, brought to us by the foolishly litigious Authors Guild. It is the first of three major appeals in fair use cases that academic libraries should be watching carefully, and it may help cause a domino effect in those other two (the Georgia State and Google Books cases).

This potential for impact on decisions currently being written by other judges is increased by the fact that the Second Circuit, in discussing transformation as a major element in fair use, cited precedents not only from its own previous cases but also from the Ninth Circuit and two other Circuit Courts of Appeals.  The judges seem to be deliberately rejecting the idea that the circuits are split about transformative fair use.

This decision is very good news for libraries, and the ARL Public Policy Notes description of the decision is well worth reading. But for all its positives, it has to be admitted that there are some oddities in this decision.

Basically, the Court did three different things in this decision:

  1. It affirmed the lower court ruling that the Authors Guild did not have standing – the right to bring the lawsuit – on behalf of its members. This is another reminder of the oft-repeated rule that only a rights holder may sue to defend those rights, and that associations which claim to represent rights holders but do not own any rights are not proper plaintiffs. A simple lesson the Authors Guild declines to learn.
  2. The court also affirmed that mass digitization for the purpose of creating a searchable index of full-text materials, as well as to provide access to those materials for persons with disabilities, is fair use. There is a lot of language in this opinion that reinforces the ARL Code of Best Practices for Fair Use in Academic Libraries.
  3. Finally, the judges remanded the case back to the lower court in regard to its opinion about fair use for preservation. This is one of the oddities in the decision, so let’s address that one first.

The oddity about this remand is that it does not actually question the conclusion that digitization for preservation can be fair use.  Instead, the Court sent this portion of the case back to the lower court to decide if there was any plaintiff remaining in the case, once it was determined that the AG lacked standing, who was at any real risk of having a preservation copy of their book released by HathiTrust while there were still copies commercially available.  In short, the Court of Appeals suggested that any ruling about fair use might have been premature because there was no plaintiff in a legally-recognizable position to raise the challenge.  It is still entirely possible that, if such a plaintiff is found in the remaining group of named authors, fair use could nevertheless be affirmed.  And, because of the rest of the ruling, it would be hard to see what difference even a ruling against fair use for preservation would make to the actual practice of the HathiTrust.  So this was really a technicality, and quite strange.

By the way, in regard to the key argument raised by the Authors Guild that the library-specific exception in section 108 precludes libraries from relying on fair use, the court paid almost no attention. It dismissed this silly argument in a footnote (footnote 4 on page 13). This was a losing argument from the start, and the reliance placed on it by the AG shows just how out of touch they are in their approach to copyright.

I think three points are important about the fair use decision favoring HathiTrust in this case (the factor-by-factor analysis is handled well in the ARL post).

First, the Second Circuit accepted the same broad approach to the issue of transformation as has become common in other decisions. It is not just actual changes to the original work that can support a finding of transformation, but a “different purpose… new expression, meaning or message.” And, as I said, the Court appealed to a broad consensus across the country in defining transformation this way.

Second, the Second Circuit held that the lower court was wrong to find that digitization for the purpose of facilitating access for persons with visual or print disabilities was transformative, but found that it was fair use nevertheless.  This is important, because in the Georgia State appeal the plaintiffs are arguing that because Judge Evans found that copying for electronic reserves was not transformative, she was in error to still find fair use.  But in the HathiTrust case the Second Circuit recognizes what is there for all who read Supreme Court opinions to see: when a use is transformative it is very likely to be fair use, but when it is not transformative, it can still be fair use if a careful analysis of the factors indicates that conclusion.  That is what the Second Circuit finds in regard to HathiTrust and its copies for the disabled, and it is what Judge Evans found in GSU.  Both were correct decisions in keeping with the clear precedent from the Supreme Court.

Finally, there is the oddity of the Second Circuit panel’s treatment of the fourth fair use factor when it is analyzing the indexing function of HathiTrust.  First, the appellate panel calls the fourth factor the most important consideration, and cites the case of Harper & Row v. The Nation for that proposition.  But the Supreme Court really renounced that position 20 years ago in the “Oh Pretty Woman” case (Campbell v. Acuff-Rose), so this is the first part of the oddity.  The Second Circuit then goes on to define the idea of market harm very narrowly, saying that the only harm to a market that is recognized for the purpose of the fourth fair use factor is when “the secondary use serves as a substitute for the original work.”  This seems to be how the court aligns itself with the ruling in “Pretty Woman,” but it is a strange way to get there.  The effect of this proposition is to rule out consideration of almost all licensing markets when looking at the fourth factor.  That is a conclusion that must be causing serious heartburn in the publishing community.  Even as the Authors Guild continues to make fair use easier and more inclusive with its absurd litigation campaign, it cannot be winning itself many friends among rights holders.

The bottom line is that this decision is very good for libraries and others who depend on fair use. It adds another precedent and some additional bits of analysis to our claims of fair use. But we should recognize that it grows out of what was a very dumb lawsuit to begin with. As is so often the case, we should be emboldened by this ruling, but not too much. The best protection the library community has against aggressive litigation is still, as it always has been, careful and responsible reflection. In that context, fair use is an increasingly safe option for us.


A significant number of subscribers got spammed by this list today. Routine maintenance of the development server at Duke triggered a mistaken torrent of hundreds of old posts. The biggest problem was that there was a partial subscriber list as part of the development instance of the blog. That list has been removed, so this particular problem should never happen again. There was no hacking involved, and subscriber e-mails did not get harvested or released to anyone.

I am very sorry this happened. I certainly understand if folks want to unsubscribe from the list, but emphasize again that the production version of the blog did not cause this and was not compromised. The list of subscribers that was inadvertently associated with the development instance is no more.

Publishing ironies

Would Karl Marx have waived his copyright on principle?  I don’t know for sure, but I rather doubt it.  Marx was not entirely in sympathy with Proudhon’s famous assertion that “property is theft,” and in any case probably expected to make at least part of his living from his intellectual property.  Nevertheless, there is something rather odd about a left-wing press asserting its own copyright to prevent the digital distribution of the Collected Works of Marx and Engels.  Marx’s interests are not being protected, of course; his works have been in the public domain for many years.  But Lawrence & Wishart Publishing wants to protect its own income from this property by asserting a copyright in new material that is contained in the volumes, including notes, introductions and original translations, and it has demanded that the Marxists Internet Archive remove digital copies of the works.

It is interesting to consider who is being hurt by the distribution and by the take down demand.  The distribution, as I say, does no harm to Marx or his descendants, since the copyright has already expired.  The party harmed, of course, is the publisher, which can continue to collect revenue from public domain works, and is entitled to enforce exclusivity if, as in this case, there is new material that is currently protected by copyright.

So we have the irony of Marxist literature being protected by that most capitalist of business structures, a monopoly, and a left-wing press asserting that monopoly to limit dissemination of Marxist ideas.

Does the take down demand harm anyone?  Much of this literature is available in other forms on the Internet, owing to its public domain status.  Potential readers will presumably be harmed, to a degree, because English versions of some more obscure works by Marx and Engels will become unavailable if the translations in the Collected Works were the first of their kind.  But I can’t help thinking that the folks who are really harmed by this decision are the contemporary scholars who contributed to the volumes published by Lawrence and Wishart.  Perhaps they thought that by contributing to a collected works project they had the opportunity to offer a definitive interpretation of some particular essay or letter.  Perhaps they hoped to make an impact on their chosen field of study.  But those opportunities are greatly reduced now.  Potential readers will find the works they are looking for in other editions that remain available in the Archive, or they will not find them at all.  They will look to other scholars to help them understand those works, scholars whose writings are more accessible.

While I cannot dispute the right of Lawrence and Wishart to demand exclusivity, it is a clear reminder about how poorly the traditional system of publishing, based on state-enforced exclusivity, serves scholars in an age when there are so many opportunities in the digital environment to reach a much larger audience.  I suspect that the price of the Collected Works set is high, and the publisher is quite obscure (a colleague here just shrugged when I mentioned the name), so its distribution will be quite limited.  It is a sad illustration of how traditional publishing that relies on subscriptions for digital material is inextricably mired in the print model, trying desperately to reproduce the scarcity of print resources in defiance of the abundance possible in the digital environment.  The losers in that effort are the scholars whose ability to impact their field is deliberately reduced by this effort — beyond their control — to preserve exclusivity and scarcity.

“Beyond their control” leads directly to the other irony from the publishing industry that I want to share in this post.  A colleague recently sent me a PDF of the preliminary program for the conference being held in Boston next month of the Society for Scholarly Publishing.  It was the description of the very first seminar that caught both her eye and mine:

Seminar 1: Open Access Mandates and Open Access “Mandates”: How Much Control Should Scholars Have over Their Work?

Many universities now mandate that faculty authors deposit their work in Open Access university repositories.  Others are developing this expectation, but not yet mandating participation.  This seminar will review various mandatory and non-mandatory OA deposit policies, the implementation of different policies, and the responses of faculty members to them.  Panelists will discuss the degree to which academic institutions ought to determine the disposition of publications originating on their campus.

It is hard to believe that the SSP could print this session description with a straight face.  Surely they know that the law deliberately gives scholars a great deal of control over their work, in the form of copyright.  Scholars exercise that control in a variety of ways, including when they vote to adopt an open access policy, as many have done.  So where is the threat to scholars’ control over their own works?  Perhaps at the point where they are required to relinquish their copyright as a condition of publication.  If the SSP were really concerned about scholars having control over their own writings, the panel for this session would be discussing how to modify copyright transfer policies so that scholarly publishers would stop demanding that faculty authors give up all of their rights.

The SSP has carefully written the session description to make it sound like open access policies are imposed on faculty against their will.  But every policy I am aware of was adopted by the faculty themselves, usually after extensive discussions.  And the majority of policies have liberal waiver provisions, so that faculty who do not wish to grant a license for open access do not have to do so.  On the other hand, publishers almost never provide a similar way for authors to opt out of mandatory copyright transfer, other than paying a significant fee for an author-pays OA option, which offers authors a chance to buy what they already own.  Perhaps this concern about authorial control could be channeled into a discussion about the new models of scholarly publishing that are developing that do not require copyright transfer and that seek alternate ways to finance the improved access so many university faculties are indicating they want.

There is a lot to talk about here, especially in terms of authorial control.  Consulting the authors whose material is published in the Collected Works of Marx and Engels might have engendered discussion of a solution to the issue about the Marxists Archive other than simply demanding removal.  Maybe those authors should have resisted the demand to transfer copyright wholesale to Lawrence and Wishart in the first place. But publishers continue to think in terms of total control over the works they publish; that is the real threat to authors and that is the problem that the SSP ought to be addressing.

Walking the talk

All of the presentations at the SPARC Open Access meeting this week were excellent.  But there was one that was really special: a talk by an early-career researcher named Erin McKiernan, who brought everyone in the room to their feet to applaud her commitment to open access.  We are sometimes told that only established scholars who enjoy the security of tenure can “afford” to embrace more open ways to disseminate their work.  But Dr. McKiernan explained to us both the “why” and the “how” of a deep commitment to OA on the part of a younger scholar who is not willing to embrace traditional, toll-access publishing or to surrender her goals of advancing scholarship and having an academic career.

Erin McKiernan holds a Ph.D. from the University of Arizona and is now working as a scientist and teacher in Latin America.  Her unique experience informs her perspective on why young scholars should embrace open access.  Dr. McKiernan is a researcher in medical science at the National Institute of Public Health in Mexico and teaches (or has taught) at a couple of institutions in Latin America.  For her, the issue is that open access is fundamental to her ability to do her job; she told us that the research library available to her and her colleagues has subscriptions to only 139 journals, far fewer than most U.S. researchers expect to be able to consult.  Twenty-two of that number are available only in print format, because electronic access is too expensive.  This group includes key titles like Nature and Cell.  A number of other titles that U.S. researchers take for granted as core to their work — she mentioned Nature Medicine and PNAS — are entirely unavailable because of cost.  So in an age when digital communications ought to, at the very least, facilitate access to information needed to improve health and treat patients, the cost of these journals is, in Dr. McKiernan’s words, “impeding my colleagues’ ability to save lives.”  She made clear that some of these journals are so expensive that the choice is often between a couple of added subscriptions or the salary of a researcher.

This situation ought to be intolerable, and for Dr. McKiernan it is.  She outlined for us a personal pledge that ought to sound quite familiar.  First, she will not write, edit or review for a closed-access journal. Second, she will blog about her scientific research and post preprints of her articles so that her work is both transparent and accessible.  Finally, she told us that if a colleague chose to publish a paper on which she was a joint author in a closed-access journal, she would remove her name from that work.  This is a comprehensive and passionately-felt commitment to do science in the open and to make it accessible to everyone who could benefit from it — clinicians, patients and the general public as well as other scholars.

Listening to Dr. McKiernan, I was reminded of a former colleague who liked to say that he “would rather do my job than keep my job.”  But, realistically, Dr. McKiernan wants to have a career as a teacher and research scientist.  So she directly addressed the concerns we often hear that this kind of commitment to open access is a threat to promotion and tenure in the world of academia.  We know, of course, that some parts of this assertion are based on false impressions and bad information, such as the claim that open access journals are not peer-reviewed or that such peer-review is necessarily less rigorous than in traditional subscription journals.  This is patently false and really makes little sense — why should good peer-review be tied to a particular business model?  Dr. McKiernan pointed out that peer-review is a problem, but not just for open access journals.  We have all seen the stories about growing retraction rates and gibberish articles.  But these negative perceptions about OA persist, and Dr. McKiernan offered concrete suggestions for early-career researchers who want to work in the open and also get appropriate credit for their work.  Her list of ideas was as follows (with some annotations that I have added):

1. Make a list of open access publication options in your particular field.  Chances are you will be surprised by the range of possibilities.

2.  Discuss access issues with your collaborators up front, before the research is done and the articles written.

3. Write funds for article processing charges for Gold open access journals into all of your grant applications.

4. Document your altmetrics.

5. Blog about your science, and in language that is comprehensible to non-scientists.  Doing this can ultimately increase the impact of your work; it can sometimes even lead to press coverage, and to better press coverage.

6. Be active on social media.  This is the way academic reputations are built today, so ignoring the opportunities presented is unwise.

7. If for some reason you do publish a closed-access article, remember that you still have Green open access options available; you can self-archive a copy of your article in a disciplinary or institutional repository.  Dr. McKiernan mentioned that she uses FigShare for her publications.

The most exciting thing about Erin McKiernan’s presentation was that it demolished, for many of us, the perception of open access as a risky choice for younger academics.  After listening to her expression of such a heartfelt commitment — and particularly the pictures of the people for whom she does her work, which puts a more human face on the cost of placing subscription barriers on scholarship — I began to realize that, in reality, OA is the only choice.





It’s the content, not the version!

My last post about copyright assignment and different versions of a scholarly article set off a small controversy, some of which can be found in the comments to that post and some of which took place on other social media venues.  Yesterday Richard Poynder posted to the Lib-License list about this discussion, and I felt compelled to respond, since it seems clear this is not an isolated misunderstanding that will fade away.

Here is part of Richard’s post, which summarizes the discussion:

Last week, the Scholarly Communications Officer at Duke University in the US, Kevin Smith, published a blog post challenging a widely held assumption amongst OA advocates that when scholars transfer copyright in their papers they transfer only the final version of the article.

This is not true, Smith argued.

If correct, this would seem to have important implications for Green OA, not least because it would mean that publishers have greater control over self-archiving than OA advocates assume.

However Charles Oppenheim, a UK-based copyright specialist, believes that OA advocates are correct in thinking that when an author signs a copyright assignment only the rights in the final version of the paper are transferred, and so authors retain the rights to all earlier versions of their work, certainly under UK and EU law. As such, they are free to post earlier versions of their papers on the Web.

And here is the response that I just sent to the LibLicense list, in which I focus on copyright as protection over expressive content, rather than arbitrary distinctions between different versions of that content:

I had really hoped I could ignore this rather muddled controversy, mostly due to a lack of time to address it.  But a tweet from Nancy Sims, of the University of Minnesota, made me realize that my original post used slightly careless language that may contribute to the confusion.  So I feel I should set that straight, and respond to the whole business.

I wrote that different versions of an article were derivatives of one another.  That is probably a defensible position, but Nancy made the point much clearer — the different versions are still the same work, so subject to a single copyright.

Throughout this discussion, the proponents of the position that copyright is transferred only in a final version really do not make any legal arguments as such, just an assertion of what they wish were the situation (I wish it were too).  But here is a legal point — U.S. copyright law makes the difficulty with this position pretty clear in section 202, where it states the obvious principle that copyright is distinct from any particular material object that embodies the copyrighted work.  So it is simply not true to say that version A has a copyright and version B has a different copyright.  The copyright is in the expressive content, not in different versions; if all the versions embody substantially the same expression, they are all the one work, for copyright purposes, because the copyright protects that expressive content.  Hence Nancy’s perfectly correct remark that the different versions are the same work, from a copyright perspective.

Part of the point I wanted to make in my original post is that this notion of versions is, at least in part, an artificial construction that publishers use to assert control while also giving the appearance of generosity in their licensing back to authors of very limited rights to use earlier versions.  The versions are artificially based on steps in the publication process (pre-submission, submission, peer review, publication), not on anything intrinsic to the work itself that would justify a change in copyright status.  If we look at how articles are really composed (usually by writing a single file and then editing it repeatedly), it is easy to see how artificial, in the sense of unrelated to content, these distinctions are.  How much time must elapse before a revision is a different version?  If I do some revisions, then go have a cup of tea before returning to make other revisions, have I created two different “versions” entitled to separate copyright protection?  The question is absurd, of course, and shows how unworkable the idea of different copyrights in different versions of the same work would be.

It has been said that no publisher makes the claim I am here suggesting.  But if we look at actual copyright transfer agreements, it is easy to see that they do.  The default policies for Wiley, for example, tell authors that they can archive a pre-print and archive a post-print, subject to certain conditions, including rules about the types of repositories in which the archiving can take place and a limitation to non-commercial reuse.  If an author transfers rights only in the final version, how can Wiley place restrictions on the use of these earlier versions?  The better — indeed the only logical — interpretation is that the copyright that is transferred covers the work as a whole, which is the nature of copyright, and that Wiley then licenses back to authors certain rights to reuse different versions.  Those version rights are based on what Wiley wants to allow and to hold on to, not on any legal distinction between the versions.  Elsevier’s policies are similar — they allow the preprint to be used on any website, allow the post-print to be self-archived on a scholarly website ONLY if the institution does not have a mandate and with acknowledgement of the publisher, and do not allow any archiving of the final version.  Again, all of this is grounded in the claim that the copyright transferred to Elsevier is inclusive of the different versions, because they are the same work.

Let’s imagine what would happen if a dispute ever arose over a use of an earlier version of an article after the copyright had been transferred.  A court would be asked to determine whether the use of the earlier version infringed the copyright held by the assignee.  Courts have a standard for making this determination; it is “substantial similarity.”  So if the re-used version of the work was substantially similar to the work in which the copyright was assigned — that language is itself bound up in the misunderstanding I am trying to refute — a court would probably find infringement.  This has been the case in situations where the works were far more different than two versions of a scholarly article.  George Harrison, for example, was found to have infringed the copyright in the song “He’s So Fine” when he wrote “My Sweet Lord,” even though the court acknowledged that it was probably a case of unconscious borrowing (see Bright Tunes Music v. Harrisongs Music, 420 F. Supp. 177, S.D.N.Y. 1976).  And the author of a sequel to “The Catcher in the Rye” was held to have infringed copyright in Salinger’s novel even though the two books told very different stories, due to similarities in characters and incidents (Salinger v. Colting, 607 F. 3d 68, 2d Cir. 2010).  If these very different “versions” of the same work were held to be copyright infringement, how is it possible that two versions of the same scholarly article could have separate and distinct copyrights?

In many ways I wish it were true that each version had a distinct copyright, so that transfer of the rights in one version did not impact reuse of the earlier version.  That situation would make academic reuse much easier, and it would conform to a basic sense that most academics have that they still “own” something, even after they assign the copyright.  But that position is contrary to the very foundations of copyright law (and not just U.S. law), which vests rights in the content of expression, not in versions that represent artificial points in the process of composition or publication.  And much as this mistaken idea may be attractive, it has dangerous consequences; it gives authors a false sense that the consequences of signing a copyright transfer agreement are less draconian than they really are.  Instead of plying our faculty with these comforting illusions, we need to help them understand that copyright is a valuable asset that should not be given away without very careful thought, precisely because, once it is given away, all reuse of the expression in the article, regardless of version, is entirely governed by whatever rights, if any, are licensed back to the author in the transfer agreement.