Category Archives: Fair Use

Free speech, fair use, and affirmative defenses

On an e-mail list to which I do not subscribe, there was recently a long exchange about fair use and large-scale digitization.  Part of the exchange was forwarded to me by a friend seeking comment about a specific issue that was raised, but in the course of looking back at the thread I discovered this comment:

Fair use doesn’t “allow” large scale digitization and didn’t “allow” digitization in the case of HathiTrust. The fair use provision does not allow anything up front- it has to be won through litigation. The fair use provision was used as an affirmative defense in litigation concerning the HathiTrust et al., and after much time and money spent in litigation, the court ruled, and the appeals court ruled, that HathiTrusts’s activity could be considered fair.

This comment repeats a mistake that is very common in discussions of fair use — while noting correctly that fair use is an “affirmative defense,” it concludes from that fact that fair use must be something unusual, a privilege that we rely on rarely because it is risky and difficult to prove.  But, as I hope to show with the rest of this post, affirmative defenses are quite common; in fact, almost all positive rights have to be treated as affirmative defenses in litigation.  We rely on things that are “allowed” by affirmative defenses all the time.

Basically, to call something an affirmative defense is to make a technical point about how it functions in a court case.  We should not be frightened by the phrase or invest it with too much significance.  Some of our most cherished rights would have to be called affirmative defenses in the technical sense that is the only proper usage of that phrase.

Consider the case of Cohen v. California (1971), one of our most important cases about the meaning of the First Amendment.  Mr. Cohen entered the Sacramento courthouse wearing a piece of clothing on which he had written a profane anti-war message — “F**K the Draft” — and was arrested for disturbing the peace because that message was considered “offensive conduct.”  The Supreme Court ultimately held that it was protected speech, in spite of the profanity, and that Cohen’s arrest was therefore improper. But let’s imagine for a moment how the trial over this issue must have proceeded.  The state would provide evidence that Cohen did wear the jacket, had deliberately painted the words on it, and knew what was written on his jacket when he entered the courthouse.  Cohen’s defense would then be to admit all of those points, but to raise an additional fact — his words were political speech protected by the First Amendment to the U.S. Constitution.  Free speech would thus function as an affirmative defense to vindicate Mr. Cohen’s right.

I hope this example illustrates two things.  First, all an affirmative defense means is that the defendant must raise additional facts or legal principles in addition to what the plaintiff or prosecution has asserted.  This is not uncommon; anytime a defendant does more than simply deny the truth of everything the plaintiff says, they are raising an affirmative defense.  Second, all of our most cherished rights in America can function as affirmative defenses in court, but that does not mean they are unusual or unreliable.

In any court case, the plaintiff has to prove some facts in order to establish that a “cause of action” exists.  A defendant then has two avenues — he can simply deny the truth of some or all of what the plaintiff has said (we call that arguing that the plaintiff failed to meet her “burden of proof”) or he can produce additional facts that show that what he is accused of doing is actually permissible (which is the defendant’s “burden of proof”).  If we take the Georgia State copyright case as an example, we can see both strategies at work.  For over 20 of the challenged excerpts, GSU successfully argued that the publishers had not met their burden of proof by showing that they owned a valid copyright in the works in question.  Since the publishers could not produce valid transfers of copyright, there was no further need for a defense.  For 40+ other excerpts, however, GSU successfully argued some additional facts and showed that their use was fair use (although the Appeals Court has now told the trial court to reanalyze this).  Just like Mr. Cohen in the free speech case, GSU invoked a positive right that is precious to all Americans, but in the context of the lawsuit that right was presented as an affirmative defense.

I can’t say it often enough — when one is sued for doing something one believes is actually allowed by the law, that “right,” whether it is free speech or fair use, is always presented in the form of an affirmative defense.  All that means is that it is something which the defendant must raise to justify herself (something for which she bears the burden of proof), but these things are not rare, disreputable or frightening; they are the very rights that define our citizenship.

Fair use is one such right, and the copyright law very clearly calls it a right (in section 108(f)(4) of Title 17).  It is a key and indispensable component of our system of copyright, as the Supreme Court has reminded us many times (E.g., Campbell v. Acuff Rose Music, Inc., 510 U.S. 569 (1994) at 575).  It is, especially, a “safety valve” that protects free speech from encroachment by copyright holders, and it is useful to think of those two rights — free speech and fair use — together.

So it is simply wrong to say that fair use does not “allow” anything because it is an affirmative defense, just as it would be wrong to say that about free speech.  The First Amendment allows me to have campaign signs on my lawn during this election, even if my neighbors disagree strongly with me.  It allows me to carry a placard down a public sidewalk proclaiming that “The End is Near,” if I am so inclined.  It would even allow me to wear a swastika tattoo, as offensive as that would be to many.  In the same way, there are many activities that we can say with assurance are allowed by the right of fair use.  When we use a quotation from a previous work in a new article we are writing, we do not stop to do an individualized analysis because we know, and pretty much everyone agrees, that this is a settled instance of fair use.  Nor do we need to re-litigate the Sony v. Universal City Studios case every time we want to record a TV show to watch at a later time; the Supreme Court has confirmed for us that doing this is allowed by fair use.  And in the HathiTrust case, the Second Circuit told us that fair use supports large-scale digitization for the purpose of indexing and access for persons with disabilities.  It is possible that a rights holder could challenge such an activity again, just as some government entity could again try to outlaw profanity in political speech.  Possible, but unwise and very unlikely.

When we say something is an affirmative defense, all we are doing is indicating how it would be raised in litigation.  Many of our most cherished freedoms would be raised as affirmative defenses.  So we must resist the urge to allow ourselves to be frightened by that phrase or to accept arguments intended to make fair use seem odd, unusual, or risky.  Fair use is no more unusual or dangerous than free speech is.

Swimming in muddy waters

Since the ruling from the Eleventh Circuit Court of Appeals in the Georgia State copyright case came out two weeks ago, most commentators have come to the same conclusions.  It is a mostly negative ruling, in which publishers actually lost a lot of what they were fighting for.  Georgia State also lost, in the sense that the case is not over for them and they are no longer assured of being reimbursed for their attorney’s fees and court costs, as Judge Evans had originally ordered.  But apart from those parties to the case, has the library community lost by this decision, or gained?  Once again, what we have gained is mostly negative — we know that we do not have to strictly observe the 1976 Classroom Copying Guidelines, we know that the cases involving commercially-produced course packs do not dictate the fair use result for e-reserves, we know that 10% or one chapter is not a bright line rule.  But there is little benefit in knowing how NOT to make fair use decisions; it is easy to see why one commentator has pled for bright line rules.

One affirmative point we can take from the case is that we know we still can, and must, do item-by-item analyses to make fair use decisions.  But what exactly should the process for those decisions look like?

As Brandon Butler from American University has pointed out, the decision-making processes will be different when we are assessing uses for teaching that are transformative versus those, like e-reserves, that are not.  We still have a good deal of freedom when the use is transformative — when the original material becomes part of a new expression, a new meaning, or a new purpose.  And this is important for a great deal of scholarship and teaching.  We should not lose sight of this important application of fair use, or assume, incorrectly, that the 11th Circuit ruling creates new limits on transformative fair use.

But when we must make decisions about digital course readings, we need to apply the “old-fashioned” four factor test.  What does it look like after the Appeals Court ruling?  I am afraid it has gotten pretty muddy:

The first fair use factor — the purpose and character of the use — continues to favor fair use whenever that use is undertaken by a non-profit educational institution.  If a commercial intermediary is involved, as was the situation in the course pack cases, this is no longer true.  But where there is no profit being made and the user is the educational institution itself, the first factor supports a claim of fair use.  And that is where the clarity ends.

The second factor — the nature of the original — can go either way, depending on the specifics of the work involved.  Is it more factual or interpretive?  This is a judgment call, and one which librarians may be hard-pressed to make when processing a number of e-reserve requests in a discipline they are unfamiliar with.  The good news is that the Court said that this factor is relatively unimportant, so the safest course may be to consider this factor neutral; call it a draw — at least where the item is not clearly creative — and move on.

On the third factor we thought we had a rule, even if many of us didn’t like it — 10% or one chapter was the amount that Judge Evans said was “decidedly small” and therefore OK for fair use for digital course readings.  The bad news is that we no longer have that rule.  The good news is that we no longer have that rule.  The 11th Circuit panel wanted a more nuanced approach, that balances amount with the other factors and especially looks at how appropriate the amount used is in relation to the educational purpose.  When the other factors line up in favor of fair use, this approach could well allow more than 10%.  If the other factors tend to disfavor fair use, only a much shorter portion might be permissible.  It is just very hard, after this ruling, to have clear standards about the amount that is usable, and that makes things difficult for day-to-day decisions.

With the fourth factor — impact on the market for the original — the 11th Circuit made things even more unclear.  The panel actually affirmed the lower court in its analysis of this factor, emphasizing that it is permissible to take into account the availability of a license for the specific use as part of evaluating this factor.  So if a license for a digital excerpt is unavailable, does that mean this factor favors fair use, as Judge Evans said?  Maybe, but the 11th Circuit added two complications.  First, it said that the Judge should have included the importance of license income to the value of the work in her fourth factor reasoning, rather than treating it as an additional consideration for breaking “ties.”  Second, they said that the fourth factor should have more weight in non-transformational settings.  How are we to put these instructions into practice?  Libraries do not have access to publishers’ accounts, as the judge did, so we cannot assess the importance of licensing income (nor can we trust publishers to give us straight answers about that importance).  And what does more weight mean?  If there is no digital license available, does more weight on this factor mean more room for fair use, perhaps of a larger excerpt?  Again, maybe.  But it also seems to mean that where such a license is available, even 10% or one chapter might be too much for fair use.

How does one swim in water that is this muddy?  The answer, of course, is very carefully.  We must keep on making those decisions, and we do have space to do so.  The fair use checklist, by the way, received a relatively sympathetic description from the 11th Circuit, but not a definite embrace.  At this point, my best advice is to keep on doing what we have been doing, thinking carefully about each situation and making a responsible decision.  I would recommend a somewhat more conservative approach, perhaps, than I might have done three weeks ago, especially when a license for a digital excerpt is available.  But the bottom line is that the situation is not much different than we have always known it to be, there is just a little more mud in the water.

GSU appeal ruling — the more I read, the better it seems

Those of us who heard the oral arguments in the Eleventh Circuit Court of Appeals last November, in which the publishers’ appeal of the District Court ruling favoring fair use in their copyright infringement lawsuit against Georgia State was heard, mostly expected a discouraging result from the appellate panel. An initial or cursory reading of the opinion issued by the panel of the 11th Circuit on Friday might even confirm those fears.  After all, the fairly positive ruling from the District Court is reversed, the injunction and the order for the publishers to pay GSU’s costs and attorney’s fees are both vacated, and the whole case is remanded back to the District Court to reconsider.  But once one begins to read carefully, the panel opinion gets much better.  All of the big points that the publishers were pushing, which consequently are the really bad potential rulings for higher education, go against the publishers.  In many ways, they won a reversal but lost the possibility of achieving any of their most desired outcomes.  Higher ed didn’t exactly win on Friday, but the plaintiff publishers lost a lot.

There is a thorough and smart analysis of the ruling from Nancy Sims of the University of Minnesota found here.

For those of us struggling to make responsible fair use decisions on a day-to-day basis, this Appeals Court ruling doesn’t actually change much.  The message for us is that it could have been much worse, the case is far from over, and we must just keep on making the same kind of reasoned and reasonable fair use decisions we have been making for years.

What we got from the three-judge panel of the Eleventh Circuit on Friday is a mostly negative ruling that outlines where both Judge Evans of the District Court and the plaintiff publishers are wrong, as the panel thinks, about fair use.  To be exact, two judges — Tjoflat and Marcus — tell us those things.  The third judge, Vinson, concurs in the result — the reversal and remand of the case — but would have accepted virtually all of the publishers’ arguments and closed the door on fair use for even very small classroom readings.  His concurrence suggests that getting an opinion together was probably a difficult process involving a lot of compromise (it took eleven months), and it also tells us how bad the opinion could have been for universities.  Instead, what the panel majority issued is mostly bad for the publishers.

In my opinion, there were five major principles that publishers wanted to get from this lawsuit, and in the Appeals Court ruling they lost on all of them:

  1. Publishers wanted the Appeals Court to hold that Judge Evans should have ruled based on the big picture — the large number of electronic reserve items made available to students without permission — rather than doing an item-by-item analysis for each reading.  Instead, the Appeals Court affirmed that the item-by-item approach was the correct form the analysis should take.  This, of course, is the key that allows universities to make individualized fair use decisions, and it rejects the attempt to force all schools to purchase a blanket license from the Copyright Clearance Center (which was, in my opinion, the fundamental goal for which the case was filed in the first place).
  2. The plaintiffs wanted a ruling that non-profit educational use did not mean that the first fair use factor always favored fair use.  They wanted the Appeals Court to hold that where the copying is non-transformative, and both Judge Evans and the Appeals Court felt that the copying at issue was non-transformative, the first fair use factor does not favor the defendants, even when they are non-profit educational institutions.  But the Court of Appeals correctly applied Supreme Court precedent and held that the first fair use factor still favors fair use for such “verbatim” copying when it is done for an educational purpose without profit.
  3. The Appeals Court held that the so-called “course pack” cases, which rejected fair use for course packs made for a fee by commercial copy shops, were not controlling precedent in the situation before it, where GSU was doing the copying itself and made no profit from it.
  4. The publishers wanted a clear statement that the Classroom Copying Guidelines were a limit on fair use for multiple copies made for classroom use, defining a maximum amount for such copying of 1000 words.  They lost there too; the panel held that the Guidelines were intended as a minimum safe harbor and did not define a limit on fair use.  Therefore they do not control the decision for this type of copying.  Instead, the panel rejected the 10% or one chapter rule applied by Judge Evans as too rigid and instructed her to use a more flexible approach that takes into account the amount appropriate for the pedagogical purpose.
  5. Finally, the publishers were hoping that the Appeals Court would reject the idea that the availability of a license for a digital excerpt was relevant to the fourth fair use factor; they wanted a rule that says that any unlicensed use is an economic loss for them, even if they have decided not to make the desired license available.  They lost that too; the panel affirmed that the District Court was correct to consider the availability of a license for the specific use when evaluating market harm.

These losses, which constitute the heart of what the publishers were hoping to achieve when they brought the lawsuit, are probably final.  They are now binding precedent in the 11th Circuit, and persuasive throughout the country.  The publishers could presumably appeal to the Supreme Court, but it seems unlikely the current Court would take the case because there is no split amongst the Circuit Courts, only a growing consensus about fair use.

So if the publishers lost on everything that really mattered to them, why was the case reversed?

First, as I have said, this is a big loss for the publisher plaintiffs, but it is not a win for GSU.  With the reversal of the District Court ruling and the injunction and fee award vacated, their copyright policy is again up in the air.  And, of course, they have lost the immediate prospect of collecting about $3.5 million in costs.  But for now, they, like all libraries, should probably just carry on with their normal practices and wait to see what happens on remand.

When (if?) the case gets back to Judge Evans, who I very much doubt wanted it back, the fair use analysis will look somewhat different. The Appeals panel found specific errors in her analysis of the second and third fair use factors. On the second factor they have told her that she cannot presume that the works in question are all “informational.” She has been told to do a work-by-work evaluation, but also told that this factor is not very important.  On the third factor, the amount used, the panel said that her bright-line rule of no more than 10% or one chapter was too rigid (as would have been the much lower bright-line rule the publishers wanted).  Here too, the Appeals Court wants a more nuanced and fact-specific analysis, looking at both quantity and quality (the heart of the work).  Significantly, they have told Judge Evans to look at the pedagogical appropriateness of the excerpt when determining how the amount factors into a fair use analysis.  Since this corresponds with what many of us tell campus faculty — use only what you really need and no more —  it is nice that the panel approved.  Finally, Judge Evans has been told to give the fourth factor — market harm — more weight, rather than counting all the factors equally.  This would probably, but not certainly, result in fewer findings of fair use.  Instead of the split we got — 43 fair uses versus 5 infringements (plus 26 for which there was no prima facie showing of infringement)– there would probably be a different division.  Maybe 30 excerpts would be fair use and 18 infringing; who knows?  But I think we should consider whether or not getting to that point really benefits anyone.

Which brings me to considering what the various players should do now, in light of this ruling.

The publishers, as I say, have pretty much lost even as it looks like they were winning.  There is no good that can come out of a remand for them.  At best they will get that different division between fair use and infringement as a result, and will be able to use it to spread a little more fear amongst academic libraries about the uncertainty of fair use.  But that is not their real goal, I hope.  They were hoping to radically change the landscape, and they have failed spectacularly.  If there is any common sense left in their board rooms and executive suites, they need to consider settling with Georgia State and then engaging in real, good-faith negotiations with higher education and library groups.  Don’t open those negotiations with threats, as you did before.  We now know how toothless your threats are.  But it is still the case that libraries and faculties would like some standards they can follow that are realistic in light of what the courts have told us.  There is no windfall for publishers in such negotiations, but there might be some stability, not to mention the savings they will realize if they stop wasting money on foolish and unavailing litigation.

As for academic libraries, this long, drawn-out case, although not over yet, appears to have been much ado about nothing.  We still should be making careful, responsible and good-faith decisions about copyright and fair use, just as we have done for years.  We need to educate ourselves, look at the array of precedents we have from the federal courts, and continue to do our best.  There has been no revolution, and no dramatic alteration of the conditions under which we do our work.  The bottom line is that, after this ruling, libraries should just keep calm and carry on.

A reversal for Georgia State

The Eleventh Circuit Court of Appeals has issued its ruling in the publisher appeal of a district court decision that found most instances of electronic reserve copying at Georgia State to be fair use.  The appellate court ruling is 129 pages long, and I will have much more to say after I read it carefully.  But the hot news right now is that the Court of Appeals has reversed the District Court’s judgment and remanded the case back for proceedings consistent with the new opinion.  The injunction issued by the District Court and the order awarding costs and attorney’s fees to GSU have been vacated.

Looking at its analysis of the four fair use factors suggests that applying the Court of Appeals’ ruling will be challenging.  The panel has held that Judge Evans of the District Court was correct to find that the first factor favored fair use, even though, both courts say, the use is not transformative.  The non-profit educational character of the use seems to carry the day on that factor.  On the other hand, the Appeals Court finds error in the District Court’s sweeping finding that the second factor favored fair use.  The panel also disagrees with the 10% or one chapter standard used by Judge Evans to decide about the third fair use factor, the amount used; they object to any mechanical standard and want a more nuanced, work-by-work analysis.  The Court of Appeals also agrees with the District Court about the fourth factor — largely favoring plaintiffs when a digital license is available.  But the 11th Circuit wants the factors to be balanced with a different touch, not treated as all equal.

What is interesting here is that it looks like a considerable victory for the publishers, but there is still a lot of work to do.  Judge Evans will need to do her analysis over again, and the results will be different.  But given the way the Appeals Court agrees with some parts of her initial analysis and corrects her on others, it is hard to predict, on first glance, what the final result will be.

A copy of the opinion can be found here.  I am sure there will be lots more written about the ruling, including by me, in the days to come.

Are fair use and open access incompatible?

There has been a spirited discussion on a list to which I subscribe about the plight of this graduate student who is trying to publish an article that critiques a previously published work.  I’ll go into details below, but I want to start by noting that during that discussion, my colleague Laura Quilter from the University of Massachusetts, Amherst captured the nub of the problem with this phrase: “the incompatibility of fair use with the policies of open content publishers.”  Laura’s phrase is carefully worded; the problem we need to unpack here is about the policies of open content publishers, and the solution is to help them understand that fair use and open licensing are NOT incompatible.

Briefly, the situation is this.  An author has written a paper that critiques previous work, specifically about the existence, or not, of “striped nanoparticles.”  In order to assess and refute evidence cited in some earlier papers, the author wants to reproduce some figures from those earlier publications and compare them to imagery from his own research.  He has encountered two obstacles that we should consider.  First, his article was rejected by some traditional publications because it was not groundbreaking; it merely reinterpreted and critiqued previously published evidence.  Then, when it was accepted by PLoS One, he encountered a copyright difficulty.  PLoS requires permission for all material not created by the author(s) of papers they publish.  One of the publishers of those previous papers — Wiley — was willing to give permission for reuse but not for publication under the Creative Commons Attribution (CC BY) license that PLoS One uses.  Wiley apparently told the author that “We are happy to grant permission to reproduce our content in your article, but are unable to change its copyright status.”

It is easy to see the problem that PLoS faces here.  Once the article is published under a CC license, it seems that there is little control over downstream uses.  Even if the initial use of the Wiley content is fair use — and of course it probably is — how can we ensure that all the downstream uses are fair use, especially since the license permits more types of reuse than fair use does?  Isn’t this why fair use and open licensing are incompatible?

But this may be an overly simplistic view of the situation.  Indeed, I think this researcher is caught up in a net of simplified views of copyright and scholarly publication that creates an untenable and unnecessary dilemma.  If we start by looking at where each player in this controversy has gone wrong, we may get to a potential solution.

Let’s start with Wiley.  Are they in the wrong here in any way?  I think they are.  It is nice that they are willing to grant permission in a general way, but they are probably wrong, or disingenuous, to say that they are “unable” to change the copyright status of the material.  Under normal agreements, Wiley now owns the copyright in the previously published figures, so they are perfectly able to permit their incorporation into a CC licensed article.  They can “change the copyright status” (if that is really what is involved) if they want to; they simply do not want to.  The author believes this is a deliberate move to stifle his criticism, although it is equally possible that it is just normal publishing myopia about copyright.

There is also some blame here for the system of scholarly publishing.  The roadblock encountered with traditional publishers — that they do not want articles that are “derivative” from prior work — is common; most scientists have encountered it.  In order to generate high impact factors, journals want new, exciting and sexy discoveries, not ongoing discussions that pick apart and evaluate previously announced discoveries.  We have found striped nanoparticles!  Don’t dispute the discovery, just move on to the next big announcement.

This attitude, of course, is antithetical to how science works.  All knowledge, in fact, is incremental, building on what has gone before and subject to correction, addition and even rejection by later research.  The standard of review applied by the big and famous scientific journals, which is based on commercial rather than scholarly needs, actually cuts against the progress of science.  On the other hand, the review standard applied by PLoS One — which is focused on scientific validity rather than making a big splash, and under which the article in question was apparently accepted — better serves the scientific enterprise.

But this does not let PLoS off the hook in this particular situation.  It is their policies, which draw a too-sharp line between copyright protection and open content, that have created a problem that need not exist.

First, we should recognize that the use the author wants to make of previously published figures is almost certainly fair use.  He is drawing small excerpts from several published articles in order to compare and critique as part of his own scholarly argument.  This is what fair use exists to allow.  It is nice that Wiley and others will grant permission for the use, but their OK is not needed here.

Second, the claim that you cannot include material used as fair use in a CC-licensed article is bogus.  In fact, it happens all the time.  I simply do not believe that no one who publishes in PLoS journals ever quotes from the text of a prior publication; the ubiquitous academic quotation, of course, is the most common form of fair use, and I am sure PLoS publishes CC-licensed articles that rely on that form of fair use every day.  The irony of this situation is that it points out that PLoS is applying a standard to imagery that it clearly does not apply to text.  But that differential treatment is not called for by the law or by CC licenses; fair use is equally possible for figures, illustrations and text from prior work, and the CC licenses do not exclude reliance on such fair uses.

Next, we can look at the CC licenses themselves to see how downstream uses can be handled.  If we read the text of the Creative Commons license “deed” carefully, we find these lines:

Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright.

Obviously, the CC licenses themselves expect that not everything that is part of a licensed work will be equally subject to the license; they realize that authors will — indeed must — rely on fair use as one of those exceptions and limitations to copyright.  How should licensors mark such material?  The most usual way is a footnote, of course.  But a caption to the figure that indicates the source of the different pieces and even says that copyrights may be held by the respective publishers would work as well.

Finally, let’s acknowledge that there is nothing new or unusual in the procedure recommended above. Traditional publishers have done things this way for years.  When Wiley publishes an article or a textbook that asserts that they, Wiley, own the copyright, they are not asserting that they own copyright over the text of every quotation or the images used by permission as illustrations.  Such incorporated material remains in the hands of the original rights holder, even after it is included in the new work under fair use or a grant of permission.  The copyright in the new work applies to what is new, and downstream users are expected to understand this.  Likewise, the partial waiver of copyright accomplished by a CC license applies to what is new in the licensed work, not to material that is legally drawn from earlier works.

So I think there is a way forward here, which is for PLoS to agree to publish the article with all of the borrowings under fair use or by permission clearly marked, just as they would do if those borrowings were all in the form of textual quotations.  And I think we can learn two lessons from this situation:

  1. The standard of review applied by open content publishers is more supportive of the true values of science than that used by traditional publishers.  Over-reliance on impact factor hurts scholarship in many ways, but one of them is by pushing publishers to focus on the next big thing instead of the ongoing scientific conversation that is the core of scholarship.  The movement toward open access has given us a chance to reverse that unfortunate emphasis.
  2. Open content licenses should not be seen as all-or-nothing affairs, which must either apply to every word and image in a work or not be used at all.  To take this stance is to introduce rigidity that has never been a part of our copyright system or of traditional publishing.  It would be a shame if excessive enthusiasm for openness were allowed to actually undermine the value of research by making the scientific conversation, with all its reliance on what has gone before, more difficult.

How useful is the EU’s gift to libraries?

On Thursday the European Union’s Court of Justice issued an opinion that allows libraries to digitize books in their holdings and make those digital copies accessible, on site, to patrons.  In a way, this is a remarkable ruling that recognizes the unique place of libraries in the dissemination and democratization of knowledge.  Yet the decision does not really give libraries a tool that promises to be very useful.  It is worth taking a moment, I think, to reflect on what this EU ruling is, what it is not, and how it compares to the current state of things for U.S. libraries.

There are news stories about the EU ruling here, here and here.

What the EU Court of Justice said is that, based on the EU “Copyright Directive,” libraries have permission to make digital copies of works in their collections and make those copies available to patrons on dedicated terminals in the library.  The Court is interpreting language already found in the Directive, and adding two points: first, that library digitization is implied by the authorization for digital copies on dedicated terminals contained in the Directive, and, second, that this is permissible even if the publisher is offering a license agreement.  Finally, the Court makes clear that this ruling does not permit further distribution of copies of the digitized work, either by printing or by downloading to a patron’s storage device.

As far as recognizing what this decision is not, it is very important to realize that it is not the law in the United States.  It is easy sometimes, when the media gets ahold of a copyright-related story, to forget that different jurisdictions have different rules.  The welter of copyright information, guidelines, suggestions and downright misinformation can make the whole area so complex that simple principles can be forgotten.  So let’s remind ourselves that this interesting move comes from the European Union Court of Justice and is the law only for the EU member states.

The other thing this ruling is not is broad permission for mass digitization.  The authorization is restricted to copies that are made available to library patrons on dedicated terminals in the library.  It does not permit widespread distribution over the Internet, just “reading stations” in the library.  That restriction makes it unlikely, in my opinion, that many European libraries would invest in the costs of mass digitization just for such a relatively small benefit.

So how does this ruling in the EU compare to the rights and needs of libraries in the U.S.?

Let’s consider section 108(c) of the U.S. copyright law, which permits copying of published works for preservation purposes.  That provision seems to get us only a little way toward what the EU Court has allowed.  Under 108(c), a U.S. library could digitize a book if three conditions were met.  First, the digitization must be for the purpose of preserving a book from the collection that is damaged, deteriorating, or permanently missing.  Second, an unused replacement for the book must not be available at a fair price.  Third, the digital copy may not be made available to the public outside of the library’s premises.  This last condition is similar, obviously, to the EU’s dedicated terminal authorization; a patron can read the digital copy only while present in the library.

Two differences between the EU ruling and section 108(c) are especially interesting:

  1. The works for which this type of copying is allowed in the U.S. are much more limited.  The EU says that libraries can digitize any book in their collection, even if it is not damaged or deteriorating, and even if another copy, including an electronic one, could be purchased.  This seems like the major place where the EU Court has expanded the scope for library digitization.
  2. On the other hand, the use of a digital copy may be less restricted in the U.S.  Instead of a dedicated terminal, a U.S. library could, presumably, make the copy available on a restricted network, so that more than one patron could use it at a time, as long as all of them were only able to access the digital copy while on the library premises.

In the U.S., of course, libraries also can rely on fair use.  Does fair use get us closer to being able to do in the U.S. what is allowed to European libraries?  Maybe a little closer.  Fair use might get us past the restriction in 108(c) about only digitizing damaged books; we could conceivably digitize a book that did not meet the preservation standard if we had a permissible purpose.  And the restriction of that digitized book to in-library use only would help with the fourth fair use factor, impact on the market.  But still we would have issues about the purpose of the copying and the nature of the original work.  Would general reading be a purpose that supports fair use?  I am not sure.  And what books could we (or could we not) digitize?  The specific book at issue in the case before the EU Court was a history textbook.  But textbooks might be especially hard for a U.S. library to justify digitizing for even limited access under fair use.

If we wanted to claim fair use for digitizing a work for limited, on site access, my first priority would be to ask why — what is the purpose that supports digitization?  Is a digital version superior for some articulable reason to the print copy we own (remembering that if the problem is condition, we should look to 108)?  One obvious purpose would be for use with adaptive software by disabled patrons.  Also, I would look at the type of original; as I said, I think a textbook, such as was at issue in the EU case, would be harder to justify under U.S. fair use, although some purposes, such as access for the disabled, might do it.  Finally, I would look at the market effect.  Is a version that would meet the need available?  Although the EU Court said that European libraries did not need to ask this question, I think in the U.S. we still must.

Ultimately, the EU Court gave European libraries a limited but useful option here.  Unfortunately, in the U.S. we have only pieces of that option available to us, under different parts of the U.S. law.  It will be interesting to see whether, in this age of copyright harmonization, U.S. officials begin to reconsider this particular slice of library needs because of what the EU has ruled.



Copyright roundup 3 — Changes in UK law

In this final installment of the copyright roundup I have been doing this week, I want to note some remarkable developments in the copyright law of the United Kingdom, where a hugely significant revision of the statute received final approval this month and will be given royal assent, the last stage of becoming law, in June.

Readers may recall that the UK undertook a study of how to reform copyright law in ways that would encourage more innovation and economic competitiveness.  The resulting report, called the Hargreaves Report, made a number of recommendations, many of which were focused on creating limitations and exceptions to the exclusive rights in copyright so that the law would work more like it does in the U.S., including the flexibility provided by fair use.  The final results of this legislative process do not include an American-like fair use provision, but they do result in a significant expansion of the fair dealing provisions in U.K. law to better accomplish some of the same things fair use has allowed in the U.S.

Fair dealing is found in a couple of provisions of the British law and allows certain specified activities if those activities are done in a “fair” manner, with specified criteria for fairness.  Until now the categories have been narrow and few, but Parliament has just expanded them dramatically.  A description of this expansion from the Chartered Institute of Library and Information Professionals can be found on the CILIP site.  A number of activities that are probably permitted by fair use in the U.S. are now also encompassed by fair dealing in Britain, including private copying, copying by libraries in order to provide those copies to individual users, and some significant expansion of the ability to make copies for the purpose of education.

On this last point, I wonder if the two British university presses that are suing a U.S. university over educational copying have noticed that the tide is against them even at home.

There is an explanatory memo about these changes written by the U.K. Intellectual Property Office available here.  It is interesting to see how certain goals that have been accomplished by the courts in the U.S. and, importantly, in Canada are now intentionally being supported in this British legislation.  As I say, we are seeing a fairly strong international tide pushing towards expanded user rights in the digital environment, lest legacy industries use copyright to suppress economic development in their anxiety to prevent competition.

Several points about this legislative reform seem especially important to me.

First is the emphasis in several of the new provisions on supporting both research with and preservation of sound recordings and film.  This is one of several places where the U.K. may reasonably be said to have just leapfrogged over the United States, since the provisions about non-profit use and preservation of music and film remain a mess in our law.

Second, the British are now adopting an exception for text and data mining into their law.  This is huge, and reinforces the idea I have expressed before that libraries should be reluctant about agreeing to licensing terms around TDM; the rights are likely already held by users in many cases, so those provisions really would have the effect, despite being promoted as assisting research, of putting constraints (and sometimes added costs) on what scholars can already do.  This is probably true in the U.S., where fair use likely gets us further than vendor licenses would, and it has now been made explicit in the U.K.

Another major improvement in the U.K. over U.S. copyright is the fact, explained in the CILIP post, that

[M]any of these core “permitted acts” in copyright law given to us by parliament [will] not be able to be overridden by contracts that have been signed.  This is of vital importance, as without this provision, existing and new exceptions in law could subsequently simply be overridden by a contract. Also many contracts are based in the laws of other countries (often the US). This important provision means that libraries and their users no longer need to worry about what the contract allows or disallows but just apply UK copyright exceptions to the electronic publications they have purchased.

This type of approach is desperately needed in the U.S.  If we truly believe that the activities that are supported by core exceptions to the rights under copyright, like education, library services and fair use, are beneficial to society and part of the basic public purpose of copyright, they should remain in place regardless of provisions inserted into private law contracts.  Now that the British have made this acknowledgement, it is time for the U.S. to catch up.

Competitiveness is often an important part of the discussion over copyright law.  Rights holders argue that terms should be lengthened and enforcement improved in order to enhance competition with other nations.  The U.K. began its copyright reform process in order to improve its ability to compete for high-tech business.  And this new revision of the British law puts the U.S. back in a situation where we must continue to strengthen not the rights of legacy industries but the rights of users — which is where innovation will come from — because other parts of the world are moving past us in this area.  How do we do this, in the two key areas I have identified?  In the area of the right to mine text and data for non-profit research purposes, this is something our courts can do, through interpretation of the fair use provision.  We can hope that such an opinion might appear in the near future, although I am not aware of what case might prompt it.  But contract preemption is something that Congress will have to address.  If the U.S. Congress is serious about copyright reform, and really wants copyright to continue to be a tool of economic progress in the U.S., it should put the issue of making user rights exceptions impervious to contract provisions that attempt to limit or eliminate them at the top of the legislative agenda.

Copyright roundup 2 — Orphan Works

Recently the Copyright Office has held a series of roundtable discussions and comment periods on the subject of orphan works.  As seasoned readers will know, this has become a kind of movable feast, happening at regular but unpredictable intervals.  My suspicion is that the CO is under a lot of pressure from big rights holder groups to find some way to impose a collective licensing scheme for orphan works, and these periodic discussions and reports are an effort to stave off the importuning of the lobbyists.  Certainly Congress has shown very little interest in adopting an orphan works “solution,” and as more and more courts recognize that fair use can move us a long way towards productive uses of orphaned works without introducing the “tax for nobody” that would be imposed by an extended collective licensing scheme, that appetite is likely to decline even further.

Because the events seem to have so little payoff, I admit that I allowed the pressure of other work to cause me to largely ignore this iteration.  In the past I have helped Duke and other organizations prepare comments, but this time I left the heavy lifting to colleagues.  Fortunately there is a growing cadre of people able to advance the arguments in favor of fair use and the best ways to deal with the immense problem of orphan works, so my neglect was trivial.  But I still want to help my readers find some of the best commentary from this latest round of discussions.

From what I have heard, at least one of these roundtables generated a lot more heat than light, featuring some shouting and at least one direct threat of litigation from a rights holders’ representative.  But apart from the circus atmosphere, substantive issues were discussed, and a great summary of the more mature parts of the conversation can be found in this post from the ARL Policy Notes blog.

For the library community, the strongest support we get in these events comes from the superb work of the Library Copyright Alliance, which is supported by the ALA, the ARL and the ACRL.  The full set of comments prepared by the LCA and submitted on behalf of our profession is a wonderful introduction to the problem, why it matters so much to libraries, and the directions from which a solution might come.

Perhaps the most important result of this discussion, building nicely on the LCA comments, is this great set of comments about myths and misstatements regarding fair use.  These events sometimes seem like mere opportunities for lobbyists to tell tall tales to Congressional staffers and bureaucrats, and it is often necessary to try, after the fact, to set straight a very crooked record.  On the issue of fair use, Brandon Butler, Peter Jaszi and Mike Carroll, all from American University, do a great job of correcting the erroneous things that were said in these public events.  Their comments offer a clear vision of fair use as a coherent and reliable doctrine that has evolved logically over time, continuing to perform its core function even in periods of rapid technological change.  This is a great statement and should be required reading for every librarian and academic.

One-sentence summary of the comments: fair use is neither unpredictable nor incoherent, as some have argued, but is an evolving doctrine that is relied upon safely by millions of ordinary people and can provide a strong foundation for the careful consideration of even mass programs of digitization.

Finally, as I said above, one of the purposes of these regular events seems to be to try to stumble towards an extended collective licensing scheme that Congress might consider, even though these schemes impose an unnecessary tax on users without benefiting legitimate rights holders and have not worked well in the nations that have tried them. These comments about ECL schemes from the Electronic Frontier Foundation are also worth reading. They do an excellent job of briefly explaining the broad consensus that ECL is a poor solution to the orphan works problem.


On Copyright and negligence

Last week I received the April 2014 issue of Against the Grain, which, to be honest, is not a publication I read at all regularly.  But I do sometimes skim it for copyright articles, and today my eye was caught by an op-ed piece from Mark Herring of Winthrop University about the Google Books decision.

Although its title asks a simple and moderate question — “Is the Google Books Decision an Unqualified Good?” — the article itself is quite extreme in its point of view and for the most part does not engage with the actual decision.  Instead it is a hyperbolic diatribe about why we should all be afraid of Google; it ends with the assertion that “In a sense, we all work for Google now, free of charge.”  I have no clue what that means, but it is pretty clearly an exaggeration.  Nevertheless, there are a couple of points made in this op-ed that are prevalent enough to be worth discussing.

By the way, normally I would provide a link to any article I discuss here, particularly when I do so in a critical way.  I want readers to have a way to evaluate the whole debate, not just my side of it.  But in this instance, the op-ed does not appear to be on the ATG website.  So anyone who wants to see both sides, as it were, is encouraged to track down a copy of the April issue of ATG.

I want to start with Dean Herring’s second reservation about what he calls the “Google Book Theft.”  He complains that there is “no evidence, no empirical evidence, that shows any additional exposure of any authors’ works improves royalties” and calls Google Books “cruel” for “taking away from academics any chance to improve [their] anemic bottom lines.”  Of course, it is easy to see the shift in this paragraph when I put the two sentences together — from no “evidence of improvement” Herring moves immediately to “taking away any chance” of improvement, a leap not justified by logic.  But I am more interested in looking at the decision for what it actually is, a legal opinion at the end of the first stage of a court case.  In that context, should we have expected either Google or the judge to have presented evidence of an improvement in royalties?

The point I want to emphasize is that copyright infringement is a “tort” — a civil (non-criminal) wrong for which courts can provide a remedy.  In structure, a copyright infringement case is not very different from other kinds of tort litigation.  For one thing, there must be a finding of harm.  In copyright infringement cases the harm is often presumed — if a plaintiff shows that their copyright has been infringed, the court will usually presume, subject to rebuttal, that there has been harm.  But a judge is entitled to look at a particular set of circumstances, as Judge Chin did, and decide that he can find no harm.  Some harm is a necessary element of most torts and is explicit as well in the fair use argument (under the market harm factor).  So it is asking the wrong question to require evidence of an improvement in royalties; all the court needed to conclude in order to stay within the framework of legal analysis was that the likelihood was more on the side of such improvement than harm.

To put this another way, the burden of showing harm falls on the plaintiff.

This general framework of tort litigation is also important when we look at another argument Dean Herring makes, that after this case fair use could apply to anything.  He writes, “Determining what fair use is now is anyone’s guess.  Everything is, the way I read it.”  It is a fairly common strategy of those who favor stronger and stronger copyright protection to take the line that copyright, and fair use especially, is too difficult and must be avoided because of its uncertainty.  This hand-wringing about how the court has now abandoned all structure or logic in making a fair use finding is really just another version of that argument, in my opinion.  But fair use remains today what it was before Judge Chin’s ruling, an “equitable rule of reason” that requires courts to examine the specific circumstances of a challenged use and determine, based on those particular facts, if the use was fair.  It is not a bright-line rule, but that does not mean it is random, unpredictable or unusable.

Here is where I want to return to tort litigation, and suggest an analogy I heard a few weeks ago from Peter Jaszi, who teaches copyright law at American University.  He reminded his audience that we have lots of laws that depend on courts determining what is reasonable.  It is very common in contract law, for instance.  But Professor Jaszi focused on a different area of tort law for his analogy for fair use — the law of negligence.  For a driver to avoid liability for negligence, she must exercise “due care,” which is defined as the standard of care that a “reasonably prudent” person would exercise in the same situation.  Arguably, this standard for non-negligent driving is even more nebulous than fair use (where we are given factors to assess the facts).  Yet all of us continue to drive, and I dare say most of us think we know what is an appropriate level of care when we do so.  Most of us, anyway, are not paralyzed with fear because the basic rule about legal driving is so uncertain and subjective.  Nor should we be about fair use.  And, of course, neither “fair use” nor “due care” results in a free-for-all; they just give the courts the discretion to look at specific facts and try to render justice.

The best thing that Professor Jaszi said during his discussion of this parallel, and the thing I have come all this rhetorical distance to repeat, involves how we learn what it means to exercise due care as drivers.  The simple answer is that we talk to other people, and we watch how other people drive.  Over time we develop a flexible but fairly accurate sense of what we should and should not do in a particular situation, even if we have not seen the particular circumstances before.  We do not require rigid rules that anticipate every eventuality in order to dare to get in the car each day; we have internalized that nebulous standard of due care and we carry it with us on each trip.  Sometimes we make mistakes, but most of the time we do just fine.  And here is a model for how we should think about fair use. We learn what it is by talking with others about situations we encounter frequently.  We think through the factors in new circumstances as they arise.  We read how courts have applied those factors in defining ways.  And we become good fair users — fair fair users, if you will — just as we become good drivers.

Fair use is neither an empty notion nor a license to do anything, and one case, not even the Google Books case, cannot make it so.  It remains a flexible tool that is a necessary part of the copyright structure, and one which careful and sensible people can learn to use with care and discernment.


Nimmer on infringement 2.0

I was reminded once again of Mark Twain’s comment — “Only one thing is impossible for God: to find any sense in any copyright law on the planet” — as I listened to Professor David Nimmer deliver the annual Frey Lecture in Intellectual Property at the Duke Law School this week.  As the person now responsible for revising and updating the seminal treatise Nimmer on Copyright, which was begun by his father over fifty years ago, Professor Nimmer has the monumental task of trying to make U.S. copyright law and jurisprudence coherent by creating a framework that makes it all (or most of it) make sense.  Judging from his lecture, it is a task he embraces with grace, humor and aplomb.

The title of Nimmer’s lecture was “Infringement 2.0,” and his overall framework involved the changing role of copyright and infringement in the current environment, where copyright protects every scrap of original expression, whether the creator needs or wants that protection, and where copying and widespread distribution can be accomplished with the click of a mouse.  I want to try to outline several points from the lecture that seemed especially interesting to me (fully recognizing that I alone am responsible for any misrepresentations of Prof. Nimmer’s meaning).

Nimmer began with a definition of infringement that is, in my opinion, more qualified than the one we normally have in mind — the unauthorized wholesale copying of works of high authorship.  Not just unauthorized copying, but wholesale copying of works of high authorship.  This definition seems to suggest that courts should not spend time worrying about copyrights in family photos and other ephemera; Nimmer even raised the question of whether we should protect pornography, although he immediately recognized the First Amendment issues that such a stance would raise.

With this qualified definition of infringement as a starting place, Nimmer took us on a tour of some recent copyright rulings.  What I found really interesting was his suggestion that courts are using fair use in the digital environment to approximate the qualified definition of infringement that he suggested.  Two examples will have to suffice.  In the case involving the anti-plagiarism software Turnitin (A.V. v. iParadigms), the Fourth Circuit rejected an infringement claim based on the copying of entire student papers that are submitted to the service and stored to be used for comparison against later submissions.  The Court reached this result by finding that Turnitin’s use was a fair use, and Nimmer suggested that this use did not meet his qualified definition of an infringement because it did not copy works of “high authorship.”  More significantly, perhaps, Nimmer also approved of the District Court decision in last year’s Google Books case that Google’s scanning of millions of titles was a fair use.  In his framework, Google’s scanning did not amount to “wholesale” copying; even though entire works are scanned into the database, users see only “snippets,” and those very short excerpts serve important social purposes.

Whatever one may think of the individual cases, this was a fascinating approach.  The copyright law says that what is fair use is not, therefore, infringement, so it makes perfect sense, for one sufficiently learned and bold, to try to understand fair use jurisprudence by looking at the limits on infringement that are thus defined by implication.

Another topic Nimmer addressed at length was the doctrine of first sale, and he was highly critical of the Ninth Circuit decision in Vernor v. Autodesk, which found that Mr. Vernor committed copyright infringement when he resold legal copies of CAD DVDs in apparent violation of licensing terms.   The Ninth Circuit spent a lot of time examining those license terms, but Nimmer suggested that they were asking the wrong question.  The proper question here, he suggested, was not “was this a sale or a licensing transaction,” as the Court assumed, but rather “who was the legitimate owner of the material substrate that made up this copy?”  He pointed out that in both the foundational Supreme Court case about first sale, from 1908, and in last year’s decision in the Kirtsaeng case, the Court was dealing with legal copies where an attempt had been made, through a license, to restrict downstream resale of those copies.  In both cases the Supreme Court ignored those attempts at licensing and allowed the legitimate owner of the material copies to resell the works on whatever terms he could negotiate.  Based on those precedents, Nimmer suggested that the Ninth Circuit erred when it found that Vernor had infringed copyright with his resale, based on provisions in the purported license.

Another place where Nimmer suggested a radical way to rethink the copyright environment was on the international front.  He asserted that the foundational principles of international copyright agreements — the prohibition of formalities and so-called “national treatment” — simply do not make sense in the Internet age, where potential copyright infringements nearly always cross national borders, and copyright owners are often impossible to locate.  He suggested that this outdated approach be replaced by something the U.N. and WIPO could do very well — a searchable, worldwide registry for copyright owners that Nimmer called a “panopticon.”  His idea is that if a copyright owner has registered his or her work in the panopticon, they would be entitled to significant remedies for any act of infringement that is found.  If they do not register, however, an action for infringement could only result in an award based on actual losses, not the much more substantial “statutory damages” that are often available.

This idea is nothing if not ambitious, but its foundations are commonsensical.  If copyright protection is going to be completely automatic, and no notice on individual works is required, it is unfair to insist that users must have authorization for their uses if the rights holders have done nothing at all to make their claims known or to facilitate asking for permission.  Lots of other property rights regimes have notice or registration rules (think of buying a house or a car) and those rules are in place to protect the owner.  Why not a similar regime for international copyright, with an incentive, in terms of potential recovery available, for participation?

Finally, I want to end with Nimmer’s prediction about the prospect of a new copyright act in the United States.  It seemed that he does believe that Congress will seriously undertake such a thoroughgoing revision of the law, and he suggested a betting pool on when the new copyright act would pass.  For himself, he wanted to reserve a date in May of 2029.  So we have that to look forward to.