The good side of a bad lawsuit

For those of us who believe that education and technological innovation require more space in the fair use analysis than courts usually recognize, there was an interesting decision recently that might be heartening if it were not so heavily dependent on the fact that the plaintiff in the case was so unsympathetic.

In this case, the unsympathetic plaintiff is called “Righthaven” and their business is to receive transfers of copyright in news articles (mostly from the Las Vegas Review-Journal) and then bring suits against Internet sites that reprint all or part of the stories.  They ask for maximum damages at trial, but offer to settle cases for a few thousand dollars; thus they are supported primarily by people who pay up out of fear and to avoid the cost of putting on a defense.  On March 18 a district court judge dismissed one of those lawsuits, holding that the reposting of the article in question was a fair use.  The judge’s reasoning, as reported here, here and here, was especially interesting and shows why it can be good to get sued by a really obnoxious plaintiff.

First, the Judge applied an interesting twist on the transformative fair use analysis, finding that the defendant non-profit organization’s use of the news story at issue did not compete in the same market as the newspaper where the story was originally published.  The Center for Intercultural Organizing, he found, did not serve the same market as the newspaper, so the fourth fair use factor favored a finding of fair use.  This is an interesting conclusion, given that there was no “new work” created here but merely a re-posting of an entire article where it would more easily be found by those who care about the work the CIO does.  While this thinking about the fourth fair use factor usually happens in the context of a new work like a parody or critical commentary that is clearly transformative, here the judge simply tries to divide the audience for a news story from that of a community organizing website.  Would this kind of logic translate to the context of re-posting certain works for educational purposes?  It might, but we must remain aware that the nature of this particular plaintiff really colored this decision.

The other unusual bit of reasoning in this case makes the “disliked plaintiff” effect quite clear.  The judge talked a good deal about how the rights holder (Righthaven) was using the copyright, which is not usually part of the fair use analysis.  Usually, the use inquiry focuses on how the defendant is using the work, but here the judge looked at how Righthaven was exploiting the copyright solely as a means for bringing lawsuits.  Righthaven does not produce creative work nor support those who do; it simply sues, or threatens to sue, other entities.  This use “exclusively for lawsuits” was a mark in favor of fair use, the judge seems to be saying, because finding otherwise would have a chilling effect on other fair uses.  This is an extraordinary bit of reasoning — linked to, but conceptually separate from, a concern for a chilling effect on free speech — that represents a substantial departure from the usual fair use analysis.

I am not saying, by the way, that there is anything wrong with the judge’s approach here.  Part of the message we can take from this case (apart from the value of being sued by a really bad plaintiff) is that fair use is deliberately open-ended and driven by specific circumstances.  The fair use section of the copyright law explicitly makes the list of four factors non-exclusive, so judges are free to consider other things, including the good faith of both plaintiffs and defendants.  The judge did that here, and I have no quarrel with his method or his conclusion.  We need to learn from this case both that the fair use analysis is intended to uncover particular facts about a specific situation, and that it is not easy, because of that purpose, to translate one decision, for better or worse, into other circumstances.

Piling on

Since posting my comments on the Google Book Settlement earlier this week, I have followed other commentary as closely as time has allowed.  I have been interested to see that no one else whose comments I have seen seems to think that an appeal is likely.  Indeed, I draw that conclusion entirely from the absolute silence I find about that option, while there is much discussion of other possibilities.

I imagine the reason for this is the strong sense that the rejection was, as Prof. Pamela Samuelson puts it in this interview, the only conceivable ruling that the judge could have made and that it is quite water-tight from a legal perspective.   While it is not unheard of for parties to spend lots of money on lost causes, the majority of commentators obviously feel that Google, the Authors Guild and the Association of American Publishers will not throw good money after bad by filing an appeal.

I am perfectly willing to pile on to this bandwagon, abandon my speculation about an appeal, and think about what other options the rejection might open up.  One theme that seems to be emerging is that a renewed emphasis on solving the orphan works problem is now called for; certainly that is reflected in this article from the Chronicle of Higher Education.  I absolutely agree that the rejection of the settlement should be a call for librarians, especially, to re-engage with the orphan works issue, and want to consider a little bit what form that re-engagement might take.

The Google Books Settlement gave librarians, copyright activists and even Congress a chance to sit back and assume that the orphan works problem was being dealt with.  Sure, we thought, there are millions of works that are still protected by copyright but for which no rights holder can be found; access to these works is a problem, but Google is going to solve it.  Now we cannot look to Google for a solution, so it is worth revisiting what a sensible solution might look like.

I think we should consider the possibility that a legislative solution may not be either the most practical or the most desirable way to resolve the issue of access to orphan works.  The orphan works bill that came closest to passing a few years ago was hardly ideal, since it would have created requirements both burdensome and vague for gaining a measure of extra protection from copyright liability.  A good bill that really addresses the orphan works problem is probably both hard to conceive and unlikely to pass.  So what alternatives short of a legislative solution might we consider?

The obvious answer is fair use, since most proposals for orphan works solutions would essentially codify a fair use analysis.  Fair use, after all, is really an assessment of risk, since its goal is to reuse content in a way that wards off litigation.  The Congressional proposals around orphan works would have simply reduced the damages available in defined situations, thus also having as a primary purpose the reduction of the risk of litigation.  Careful thinking about projects like mass digitization of orphan works can accomplish the same goal by balancing analysis of the public domain, permissions where they are possible and needed, and a recognition that for truly orphan works, the fair use argument is much stronger since there is no market that can be harmed by the reuse.

When I say “truly” orphan works, I begin to hint at another element that might go into an informal solution of the orphan works problem, the creation of rights registries to help locate copyright holders.  This article about a copyright database, or registry, being built in the European Union — called the ARROW project — indicates that such an idea can garner support as a way to address the difficulty of orphans.

The Google Books Settlement, of course, envisioned the creation of a rights registry that would have helped a lot with the orphan works problem, but now we need to think about other, and perhaps less ambitious projects.

A registry would help because it would provide an easy (or easier) way to determine that a work is not an orphan.  A search in a comprehensive registry could help a putative user find the rights holder to whom a permission request should be directed and, if no rights holder has registered, create a presumption that due diligence has been performed.  As EU Commissioner Neelie Kroes puts it in the article,

one search in ARROW should be all you should need to determine the copyright status of a cultural good in Europe.

When I suggest a less ambitious registry than ARROW or the Google Registry that was never born, I am thinking that there are certain kinds of cultural goods — photography is an obvious example — where there are unique problems in marking the work in a way that permits easy identification of the rights holder.  A registry for photographs, especially as image-matching software becomes so impressively accurate, could help photographers protect their rights and give potential users a little more security when deciding to use a work believed to be orphaned.

I want to emphasize that I am not suggesting a re-introduction of formalities in the US, akin to copyright notice and registration with the Copyright Office, any more than the EU database would be a formality.  Instead, I am proposing a voluntary mechanism that would help rights holders protect their own interests, make permission requests easier, and increase the accuracy of determinations about real orphan works.
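
To make that workflow concrete, here is a minimal sketch, in Python, of how a voluntary registry could fit into an orphan works determination.  Everything in it (the RegistryRecord class and the check_registry and orphan_works_determination functions) is hypothetical; no such interface exists for ARROW or any other registry.  The point is only the decision logic: a hit tells a would-be user where to direct a permission request, and a documented miss supports a claim of due diligence and strengthens the fair use position described above.

    # Hypothetical sketch of a voluntary rights registry lookup; the registry
    # interface is imagined for illustration and is not any real system's API.
    from dataclasses import dataclass
    from datetime import date
    from typing import Dict, Optional

    @dataclass
    class RegistryRecord:
        work_title: str
        rights_holder: str
        contact: str

    def check_registry(title: str, registry: Dict[str, RegistryRecord]) -> Optional[RegistryRecord]:
        """Look the work up in the (hypothetical) voluntary registry."""
        return registry.get(title.lower())

    def orphan_works_determination(title: str, registry: Dict[str, RegistryRecord]) -> str:
        record = check_registry(title, registry)
        if record is not None:
            # A hit tells the would-be user where to send a permission request.
            return f"Not an orphan: ask {record.rights_holder} ({record.contact}) for permission."
        # A documented miss does not make the use lawful by itself, but it supports
        # a claim of due diligence and strengthens the fair use argument, since no
        # locatable rights holder (and so no functioning market) has been found.
        return f"No registrant found as of {date.today()}: document the search and weigh fair use."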

GBS and GSU: two cases, going forward

The last week has seen two important decisions in copyright cases with significant interest for higher education.  The first, of course, is the rejection of the amended settlement in the Google Books case; that decision has gotten lots of attention, so instead of rehashing it I want to suggest what I think the future holds in that dispute.  Much less attention was paid to an order in the Georgia State case over alleged copyright infringement in providing digital readings to students, about which more in a bit.

The rejection of the Google Books settlement is pretty comprehensive.  To my mind, the core of Judge Denny Chin’s reasoning is that the massive licensing mechanism that would be created to allow Google to proceed with marketing a digital books database is simply beyond the power of the courts.  He argues both that it is outside the scope of the issue originally presented in the case, and thus inappropriate for a settlement ruling, and that the mechanism itself is something that Congress, not the courts, has the prerogative to create.

Given the sweep of the rejection, and especially its finding that the “forward-looking business model” is outside of the authority of the federal courts, this seems like a difficult decision to appeal.  Nevertheless, I believe that it will be appealed, because I think the parties have very little choice.  The other key part of Judge Chin’s decision, to me, is his strong suggestion that the settlement be converted to an opt-in agreement rather than an opt-out one.  This would destroy its attraction to both sides, I believe, since it would exclude the ability to exploit orphan works.  Without that huge financial opportunity, I don’t think settlement is worth it to either party.

Aside from reforming the settlement agreement in this way so that it could be approved by Judge Chin, the parties have two other options — continue the original litigation or appeal the rejection of the settlement as it stands.  The first option seems unattractive to both parties at this point.  Both would risk losing, of course, but more to the point, neither would have much to gain, at least not in comparison to the huge profit opportunity they think they have found in settlement.  So I believe both sides will resist either returning to the original issue or reformulating the agreement in the way the Judge suggests and will instead appeal his decision, hoping to preserve that agreement more or less as it stands.

One interesting area of speculation is about what impact Judge Chin’s elevation to the Second Circuit Court of Appeals will have on the case as it proceeds.  If the parties went back to the district court for trial, a new judge would presumably be assigned.  If they appeal, as I expect, a panel of the Second Circuit that does not include Judge Chin will hear the case.  But will an appellate panel be less willing to overturn a decision made by their new colleague?  Personally, I don’t think that would be a significant factor, since I would expect this clear and carefully reasoned decision to be upheld on its own merits.

If more litigation is in the future for Google Books, that is even more certainly the case in the Georgia State dispute.  Last Thursday Judge Orinda Evans issued an order denying the motion to dismiss made by GSU on sovereign immunity grounds and setting a date — May 16 — for a trial.

The motion to dismiss made by Georgia State raised the issue that federal courts (which are courts of limited jurisdiction; they can hear only certain types of cases) usually are not allowed to hear cases against state governmental entities.  This case is based on a “loophole” in that rule that allows a plaintiff to sue a government official in his or her official capacity in order to stop an ongoing violation of federal law.  This loophole has generally been applied to prevent state violations of civil rights, but here the issue is copyright infringement.  The issue GSU raised was the degree of control that the named officials, who include all of the Georgia Board of Regents and the President of GSU, actually have over the actions that are alleged to be infringing.

One of the established principles in the type of case I have been describing is that there must be a “nexus of control” such that an injunction in the case (damages are not allowed) can actually lead to an effective remedy.  Courts hate to issue ineffective orders, so they will not allow me, for example, to sue the Governor of North Carolina if the DMV suspends my driver’s license, because the Governor has limited influence over day-to-day decisions at the DMV.  This issue of whether or not the defendants are close enough to the alleged illegality to actually control it is normally considered a purely legal matter that can be decided by a judge without trial.  But in this case, Judge Evans has essentially postponed her decision about sovereign immunity until she hears all the facts at trial; in legalese, she is treating the question as a “mixed” issue of law and fact.  She will thus decide at trial if that nexus of control exists.  If it does not, she will dismiss the case on the grounds that she was never authorized to hear it.  If there is sufficient control for her to decide the case, however, she will then rule on whether or not the alleged infringements were, in fact, fair use.  If she finds for fair use, she will not issue an injunction because there would be no reasonable suspicion of ongoing infringements that she would need to remedy.

It is clear that Judge Evans is ready for a trial after almost three years of maneuvering in this case; she has set a trial date less than 60 days from now.  There has been extensive discovery, so the parties ought to also be ready for trial.  All that remains open to question is whether both parties actually want a trial, or if the imminent prospect of one will push them to seek a settlement.  Since I have already indulged in so much speculation this morning, I will add my personal observation that neither side has shown much interest in a settlement of the case thus far, and I think both will opt to go to trial.  Both sides, I think, believe strongly in the principles they each think they are defending, and it is difficult to imagine what the middle ground would be upon which they could agree.  So I will set aside May 16, fully expecting to hear news from the long-awaited trial of Cambridge University Press v. Patton, et al.

Libraries, pricing and piracy

I recently told the North Carolina Serials Conference that the so-called “journals pricing crisis” had outlasted any meaningful definition of the word crisis and was no longer the driving force behind our discussions of scholarly communications, if, indeed, it ever was.  Nevertheless, it simply will not go away, as witnessed by another round of library uproar over publisher prices.

The fuss over Harper Collins’ new e-book pricing model just keeps growing, of course, and on Tuesday an article about it was on the front page of the dead-tree edition of the New York Times.  Of possibly greater significance are the reports that the librarian from Imperial College London has issued an ultimatum on behalf of Research Libraries UK to Elsevier over the price increase they are demanding for the journal “big deal” package.  She is demanding a price decrease in line with library budgetary realities and greater flexibility.

Each time these kerfuffles make the news I ask myself if this will be the one that bears fruit, that actually leads to large-scale cancellations and, just maybe, real change.  So often, frankly, libraries ultimately back down because our first concern is always our patrons, and we cannot overlook the need, or at least the desire, that our academic communities feel for these resources.  That tendency to give in has led many in the content industries to believe, apparently, that libraries really are bottomless wells of money, down which they can drop a bucket whenever they feel the pinch.

I continue to hope that we librarians will realize, at some point, that reform of the severely broken scholarly communications system will ultimately benefit our academic communities more than continuous access to big deal journal packages will.  This is especially so as an increasing number of researchers tell us that they do not use those packages much anymore.  Faster and less formal means of getting access to research findings are becoming the norm in many disciplines, from physics to economics.  In some cases pre-print repositories are now the major source of the rapid awareness of new research that is needed; in others it is the more informal exchange of ideas prior to formal publication.  As more of these paths develop, our dedication to big-deal journal packages will seem increasingly like a relic of a previous age.

This is not to say that the transition will be easy.  The road to reform will certainly take us through more painful encounters, both with publishers and with our own faculty.  Like so many other librarians, I am anxious for change but hope that other libraries and other librarians (like Deborah Shorley from Imperial) will take the lead in these confrontations.

But it also seems likely that other forces will get publisher attention and force changes in standard business models before we librarians do it.  Is Ms. Shorley’s ultimatum more of a threat to Elsevier, I wonder, than a downgraded outlook by a major securities research firm?  I doubt it, but Elsevier is facing both those things, according to this story about a report prepared at Sanford Bernstein.  In their report the research firm expresses the opinion that Elsevier is “in denial” about the need to reform its business practices and about the unsustainable future of the “big deal.”  They predict that Elsevier will “underperform” in the market.  Perhaps direct pressure on shareholder value will get the attention that outcries from libraries, followed by capitulation, have failed to capture.

Finally, I want to note that pricing strategy is getting some interesting attention outside of the library environment as well.  The Social Science Research Council has just released a massive report, three years in the making, from their “Media Piracy Project” about intellectual property piracy in the developing world; there are stories about the report here and here.  Perhaps surprisingly, at least to the content industries, they do not blame moral failings for the rise in piracy and they do not recommend increased enforcement (which makes it ironic that the White House has just released legislative recommendations calling for greater IP enforcement precisely to address piracy).  Instead, the report suggests that poor pricing strategies by the content industries are a major factor behind piracy in the developing world and that new business models will be more effective at addressing the problem.  Certainly it is obvious that ratcheting up IP enforcement has not worked in the past and is unlikely to suddenly provide a miraculous answer.  As for the contention that content industries have followed disastrous pricing policies that undermine their own best interests, librarians have been trying to tell them that for years.


Patent reform, publication and repositories

Patent reform has been percolating in Congress for quite a few years now, and I have to admit to being caught off-guard when I saw the announcement that a comprehensive reform package had passed in the U.S. Senate by an overwhelming majority.  This story about the bill (which has not been passed in the House) set off an animated discussion between David Hansen, the intern in Duke’s Scholarly Communications office this year, and me about the concern the article raises that the bill could create a rush to publication, and about the potential impact, if the law changes in this way, on disciplinary and institutional repositories.

This post is necessarily tentative, since neither David nor I are patent experts by any means.  Any reader who can correct or clarify our tentative conclusions is welcome to do so.  But based on our initial discussions, I think we are in fact likely to see more pressure to publish quickly and that that pressure could give some repositories a more prominent role in communicating scientific research.

Two facts about the patent system and the bill in Congress are relevant here.

First, under both the current system and the proposed new one, an invention must be “novel” to receive a patent.  To show that an invention is novel, patent applicants provide a list of “prior art” that shows what their invention is based on and establishes that it represents some new idea or “creative spark.”  If the prior art anticipates the new invention too completely, such that the new twist seems obvious, that will defeat the application for a patent.  An inventor herself, however, is allowed to publish a report of her new discovery, usually in a journal article, and for a period of 1 year that publication will not count as prior art such as to render the invention non-novel. Essentially, the inventor has a one year window to file a patent application after publication of the information about her own discovery.

Second, the new system embodied in the Senate bill would change the priority for who gets the patent in a particular invention when there are rival claimants.  The current system in the U.S. awards the patent to the first person to invent the object of the application, regardless of who filed the first application.  This seems fair, of course, but it results in significant problems of proof and makes the patent application system lengthy and expensive.  The new bill would adopt the system used in most of the rest of the world, where the patent goes to the first person to file an application.

When we understand these two facts, it is easy to see why this proposed change could lead researchers to want to publish their results even more quickly.  In the past, delay did not matter as much because if I was the first to invent, I would get the patent in preference to someone who filed before me.  Under the new system, an earlier filer could defeat my patent.  If I published my results in an article, however, that article can serve as prior art that would defeat the other claimant’s application.  It would not be prior art for the purpose of my application, however, as long as I filed within one year.  Thus there could be pressure to publish articles “defensively,” to undermine any applications that beat my own to the Patent Office mailbox.
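
A worked example may help with the timing.  The sketch below, in Python, is purely illustrative (it is not legal advice, and the function names and dates are my own inventions); it simply encodes the two rules as David and I understand them: my publication counts as prior art against a rival who files after it, but it does not count against my own application so long as I file within one year of publishing.

    # Illustrative sketch of the timing rules discussed above; not legal advice.
    from datetime import date, timedelta

    GRACE_PERIOD = timedelta(days=365)  # the inventor's one-year window after her own publication

    def publication_defeats_rival(publication: date, rival_filing: date) -> bool:
        """My publication is prior art against a rival whose application is filed after it."""
        return rival_filing > publication

    def publication_defeats_own_application(publication: date, own_filing: date) -> bool:
        """My own publication counts against me only if I wait more than a year to file."""
        return own_filing - publication > GRACE_PERIOD

    # Example: I publish in March, a rival files in June, and I file in September.
    pub, rival, mine = date(2011, 3, 1), date(2011, 6, 1), date(2011, 9, 1)
    print(publication_defeats_rival(pub, rival))           # True: the article undermines the rival's filing
    print(publication_defeats_own_application(pub, mine))  # False: I filed within the one-year window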

Also, because of the way academic rewards are structured, academic inventors often want to publish articles about their research even before the invention is finalized in a way that justifies a patent application.  That one year window for inventor-authors has served this perceived need to get a peer-reviewed, tenure-supporting article into print even before the application was filed.  Under a “first to file” system, however, the whole process may get telescoped.  Since the filing date would matter, researchers might want to publish more quickly and file more quickly in order, again, to defend against another claimant who might also be planning to file.

Assuming there is this added pressure to publish quickly when a patent application is in the offing, disciplinary and institutional repositories may have an important role to play.  Researchers already complain that the process of publishing in a journal takes too long, so that publications are really just a formal record of research that is often 6 months to a year out-of-date.  The added pressure of defending against a rival patent application would seem to make this delay even less acceptable.  Pre-print repositories may be the solution, where a record of research can be “published” such as to serve as prior art to defeat anyone else’s patent application while waiting for the formal process of journal publication to proceed.  Since even reports in a single printed dissertation have been held to be prior art for this purpose, I have no doubt that a pre-print in arXiv or some other disciplinary repository would also serve the purpose, as would institutionally-managed repositories.

If the patent law changes and researchers really do start to feel this added pressure, librarians may serve an important role in directing them to appropriate institutional or disciplinary repositories where their pre-prints can “hit the streets” as quickly as possible.  And repository managers will need to be sure that they can turn these deposits around in a way that helps our researchers protect their rights.

Did he really say that?

Librarians have raised a pretty loud outcry (for librarians) about the new e-book pricing policy announced last week by Harper Collins, under which libraries would be able to loan an e-book only a set number of times before having to pay again to continue lending.  This model seems unfair to libraries, especially because they would not be able to plan their budgets, since the actual cost of each e-book purchased this way would be unknown and variable.  But now publishing consultant Martin Taylor has written a column praising Harper Collins and telling librarians to suck it up and fork over the money.  His core argument is that publishers have “serious concerns” about the impact of library lending on their e-book markets and that “librarians have not managed to address [these concerns].”  This, to my mind, is a remarkable statement.

It is not the job of librarians to address the concerns publishers have for their bottom line; to say that we should implies a view that libraries are nothing more than a market, the existence of which is justified only insofar as they serve publishers’ interests.  But libraries serve the interests of an altogether different clientele.  Public libraries serve the readers of their geographic areas and are responsible to local boards or town councils.  Academic libraries serve students, faculty and, often, the local populace, while being responsible for their fiscal management to deans and provosts.  Publishers are entitled, if they want, to make a business decision about how they price e-books, but libraries are equally entitled to make a business decision about how to spend their money in ways that best serve their patrons and their institutions.  If buying e-books under this new model is not good for our patrons, publishers have no cause to complain or to berate us for being out-of-touch.

Taylor suggests that the price for each loan of an e-book under the Harper Collins model is reasonable.  But this claim confuses price with value.  No matter what the price of each loan is, if the book represents a drain on a library’s resources that cannot be known in advance, it is a bad value.  There is almost no scenario in which a library’s money would not be more responsibly spent elsewhere.
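
The budgeting problem is easy to see with some hypothetical numbers; the figures below are invented for illustration and are not Harper Collins’ actual terms.  A license that expires after a fixed number of loans turns a one-time purchase into a recurring cost that depends entirely on how popular the title turns out to be, which is exactly what a library cannot know in advance.

    # Hypothetical illustration of why a loan-capped license is hard to budget.
    # All numbers are invented; they are not any publisher's actual terms.
    def annual_cost_capped(price_per_license: float, loan_cap: int, loans_per_year: int) -> float:
        """Yearly cost when each license expires after a fixed number of loans."""
        licenses_needed = -(-loans_per_year // loan_cap)  # ceiling division
        return licenses_needed * price_per_license

    price, cap = 25.00, 25  # hypothetical price and loan cap
    for demand in (20, 100, 500):  # loans per year, unknowable in advance
        print(f"{demand:>3} loans/year -> ${annual_cost_capped(price, cap, demand):.2f}")
    # A one-time purchase would cost $25.00 regardless of demand; under the cap,
    # the same title can cost many multiples of that, and the amount is unpredictable.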

Some publishers have always disliked the deference to libraries that is built in to US policy and, under the “first sale” doctrine that is found in section 109 of the Copyright Act, US law.  First sale was first formally recognized in US law in 1908, after Bobbs-Merrill publishing tried to control the down-stream pricing of one of their books by placing a statement on the title page claiming that the book could never be sold for less than $1.  When Macy’s department stores offered the book at a discount, Bobbs-Merrill sued and lost in the U.S. Supreme Court.  The Court made clear what US lending libraries were already assuming, that once a first sale of a work had occurred, the exclusive right of distribution was “exhausted” and the purchaser could resell, or lend, the book without permission or control from the publisher.  It was discontent with this well-established public policy that led Pat Schroeder, when she was president of the Association of American Publishers, to call all librarians pirates.

Since public policy has always been on the side of library lending as a fundamental building block of democracy, publishers now find that the only way they can attack it, and try to develop an income stream they have never had before, is through DRM – technological controls that prevent lending e-books more than a set number of times.  Like Pat Schroeder’s rhetoric of piracy, this approach has been tried before, by the music industry.  Record companies finally figured out that consumers would prefer not to spend their money for products that have their own obsolescence built in (unless the consumer pays again and again), and they abandoned the use of DRM.  The publishing industry is entitled to try the same failed experiment if they like, but, again, they should not complain if consumers, in this case libraries, choose not to support the model.

Taylor recognizes that the Harper Collins model would cost libraries money they have never had to spend before – repeated fees to keep loaning content they have already purchased – and he helpfully provides suggestions about where that money should come from.  He mentions and rejects the possibility that the publishers might forgo this new income stream.  He would be happy to take tax money, but he realizes that this is unlikely.  So instead he suggests that library branches be closed and librarians be laid off in order to free up the extra money.  That’s right; the core of his argument is that we should close libraries so that publishers can make more money. Of course, the libraries that would get closed or under-staffed are always those in places where libraries are most needed, in disadvantaged neighborhoods or at less wealthy colleges and universities.

These libraries are, apparently, expendable if they cease to serve the narrow (and probably misconceived) interests of publishers at this particular moment in history.  This kind of support, I expect, will not do Harper Collins much good; I can only hope that this naked self-interest and disregard for public policy and the general welfare will make Taylor’s column what it should be, a rallying cry to libraries and those who support them in city halls, state legislatures and academic administrations to stand up against business practices that threaten their core missions.

The "traditional contours" of copyright

The Supreme Court on Monday granted certiorari, which is the technical language for agreeing to hear a case, in Golan v. Holder, a copyright case with potentially significant implications for the public domain in the U.S.  I wrote about this case back in 2009, when it was first decided by a federal District Court in Colorado.  The decision I approved of at that time was subsequently reversed by the 10th Circuit Court of Appeals, and now the Supreme Court has agreed to decide the issue.

This post from the Patently O blog reports on the grant of “cert” and does a pretty good job of explaining the issue.  Basically the problem is that a law passed to reconcile U.S. copyright law with the international treaties that we agreed to in 1988 and after had the effect of removing some works from the public domain.  This had virtually never happened before; until the Uruguay Round Agreements Act (URAA) of 1994, things that were in the public domain stayed there, and users could safely depend on their availability for use and reuse.  For a subset of materials, however, the URAA changed the rules pretty dramatically and, according to the petitioners, in a way that conflicts with the basic protection of free speech found in the US Constitution.  The briefs for the case, including the amicus brief filed by the Stanford Fair Use Center and the brief in opposition to cert written by the Solicitor General’s office back when Elena Kagan was SG (which explains why she took no part in the cert decision) can be found here.

The PatentlyO post compares this case to the earlier one in which the Supreme Court decided that the 20 year extension of copyright’s term in the US was constitutional.  I think the relationship between these two cases needs to be explicated a bit.  The case about the copyright term, Eldred v. Ashcroft, was decided, at least in part, on the grounds that the Copyright Term Extension Act did not alter “the traditional contours” of copyright.  When the courts face a challenge to a law based on Constitutional grounds, one of the major decisions they make is what “level of scrutiny” to apply to that law.  For example, a law that tried to regulate speech based on its content — forbidding expressions of support for the Tea Party, for example — would get the strictest scrutiny.  No law has ever survived this kind of analysis in the Supreme Court.  In Eldred, the Court decided that copyright law per se was not in conflict with free speech principles and so an extension of its term by a finite number of years would be evaluated on the basis of an ordinary level of scrutiny.  The Court said, however, that it would apply much more rigor if it were assessing a law that altered the “traditional contours” of copyright.

In the URAA, plaintiffs believe they have found such a law, since re-protecting works that had previously been in the public domain seems like a dramatic break with the past for US copyright law.  So this case relies on the Eldred decision precisely because the plaintiffs believe that it presents the situation the Court worried about, but did not find, in Eldred.  Where the CTEA was found constitutional in Eldred, the plaintiffs hope the Court will apply the same standard to find the URAA unconstitutional in Golan.

The URAA basically said that if a foreign work had risen into the public domain in the US only because of its failure to comply with the formalities that the US used to impose for obtaining copyright protection — notice and registration — it would be restored to protection as long as it was still protected in its country of origin.  As a principle this sounds fair (if you accept that formalities should have been abolished), but in practice it has had some serious consequences for those who had been using those putative public domain works.  Lawrence Golan, for example, is a symphony conductor who has suddenly found that he must get permission to perform the works of Igor Stravinsky when, in the past, he did not have that added expense.  The Court is asked to decide whether this is such a radical change to U.S. copyright law that it conflicts with the First Amendment.

One of the best resources I know of to understand the difficulties that these “restored” copyrights can create is this article by Peter Hirtle of Cornell University, which shows how difficult it can be to determine for sure whether a work really is in the public domain in the US because of the possibility of restoration.  Often, potential users are simply unable to find the full information they would need to decide for sure if they can use or reuse a specific work.  Presumably the Supreme Court will tell us, sometime next year, whether this uncertainty has changed the copyright game so radically that it now threatens our constitutional commitment to free speech.

What’s Arnold Schwarzenegger got to do with copyright?

I can’t ignore termination any longer!  This is a copyright subject that has significant implications for academic authors, so it needs to be discussed in this space.  But until this week I have not been sure what to say or how to say it.  Fortunately I can now point readers to some entertaining explanations of the “termination right” (which sounds like something out of a sci fi film noir).

Basically, the termination right is a mechanism built into copyright that gives an original copyright owner a chance to reclaim their rights after something less than half the duration of copyright.  It is intended to reward creators who trade the rights away relatively cheaply and later discover that they are more valuable than anyone expected.

The termination right is found in section 203 of the Copyright Act and applies to all copyrights except those in works made for hire.  It allows an author who transfers her rights or grants an exclusive license to reclaim those rights after 35 years.  For the vast majority of copyrights this will not be very important, since few works retain any value at all after that length of time (which is why the life plus 70 term of copyright is so foolish).  But there may be academic works written by faculty at our institutions that are still valuable, if not profitable; academic works retain research and historical value long past their period of economic profitability.  The termination right is a chance for academic authors to reclaim their rights and consider new ways of making their scholarship available to a broader audience, especially in a time when so many institutions offer an open access repository.

Now is the time to think about termination because works granted copyright under the 1976 act are just now starting to be eligible for termination.  Do you have a senior faculty member whose classic work of scholarship has been out-of-print for a while but would be a jewel in your repository?  This is the moment to discuss termination (of the copyright!) with that author.

The window for termination is defined in a rather complex fashion, but it is nicely (and humorously) explained in this column by copyright and higher education attorney Zick Rubin, “I’ll Be Back (in 35 years)”: The Author as (Copyright) Terminator.  Rubin focuses on still-viable textbooks, but termination may be a bonus for authors of out-of-print but still in-copyright monographs as well.
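
For readers who just want a rough sense of the dates, here is a small back-of-the-envelope sketch in Python of the basic arithmetic as I understand section 203: termination may be effected during a five-year window that opens 35 years after the grant, and notice must be served between two and ten years before the chosen effective date.  (Grants that cover the right of publication are measured a bit differently, so treat this as a starting point, not a substitute for the statute or for Rubin’s explanation.)

    # Back-of-the-envelope section 203 dates (simplified; grants covering the
    # right of publication are measured a bit differently).
    def termination_dates(grant_year: int) -> dict:
        window_opens = grant_year + 35      # five-year termination window opens...
        window_closes = window_opens + 5    # ...and closes
        return {
            "window_opens": window_opens,
            "window_closes": window_closes,
            "earliest_notice": window_opens - 10,  # notice must come 2 to 10 years
            "latest_notice": window_closes - 2,    # before the chosen effective date
        }

    # A publishing agreement signed in 1978, the first year the 1976 Act was in force:
    print(termination_dates(1978))
    # {'window_opens': 2013, 'window_closes': 2018, 'earliest_notice': 2003, 'latest_notice': 2016}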

For other formats, termination can be just as important.  Consider this column from Variety about the termination of transfer of music copyright, which is one place where the purpose of termination – to give back to the original creator the chance to capture profits – seems especially likely to succeed.

Readers of both documents will note the theme of 1970s nostalgia running throughout.  So just to vary the cultural references, I will also point to this news report of a court case over Betty Boop, the 1930s cartoon icon.  The effort by the family of Betty Boop’s creator Max Fleischer to recover the ongoing value of Betty was not brought under the termination provisions, of course, and it was not successful.  But it still illustrates the problem termination is intended to solve, and it makes reference to other cartoon figures – Spider-Man, the Fantastic Four and others – where termination is precisely the tool for copyright reclamation under discussion.

If  creators of disco music and cartoon characters can reap a benefit from the termination clause in copyright, there is no reason at all that we should not help our academic authors do the same.

US endorses public domain for TK — Man bites dog!

From Dave Hansen, J.D., the 2010-11 intern for Duke’s Scholarly Communications office:

A while back Kevin wrote a blog post highlighting the Ghanaian copyright law’s treatment of traditional knowledge and folklore. He pointed out two very basic ambiguities in Ghana’s domestic protections: (1) How exactly is “traditional knowledge” defined, and (2) who owns it?

These two questions are coming up again this week as a group of intellectual property delegates meets at the World Intellectual Property Organization (WIPO) headquarters in Geneva to discuss a draft text for the international protection of traditional knowledge. As the WIPO meeting agenda indicates, the discussion will focus on a heavily annotated draft text produced at the last meeting of WIPO’s traditional knowledge working group.

First, delegates must address the contentious question of what, exactly, constitutes “traditional knowledge.” Although the working text of the agreement has more bracketed terms than anything else, it generally focuses on protecting three general classes of knowledge:  (1) knowledge created or preserved in a “traditional context,” (2) knowledge customarily recognized as belonging to traditional groups, and (3) knowledge integral to the cultural identity of a particular community. These definitions, while just as vague as those in the Ghanaian copyright law, are the subject of intense comment and seem likely to change.

What is more interesting is the discussion of who should be granted traditional knowledge rights—a debate which largely centers on the type of protection afforded by the agreement. Traditional knowledge protections can come in two basic varieties: “defensive” and “positive.” Defensive traditional knowledge protections ensure that rights over pre-existing content cannot be asserted in ways that restrict its use by the original community. This protection is typically achieved by instituting a registry or database of existing TK, providing prior art which will defeat future claims of originality or novelty by those trying to assert copyright or patent rights over TK content. Positive protections, however, grant exclusive rights over traditional knowledge that are analogous to the rights granted by copyright or patent law—rights that can be asserted to exclude, license, and profit from particular works. While the draft agreement certainly provides for some increased defensive protections, the bulk of the rights-granting language focuses on positive rights.

The implications of this positive-versus-defensive rights debate for the scope of the global public domain are not lost on negotiators. While defensive protections essentially seek to document what should already be available for public use, positive protections seek to pull some works out of what, in the United States at least, would be considered the public domain. In the comments to the draft text it is clear that some delegates are resisting the push for strong positive rights. Norway and the United States, among others, are asking that the agreement find the “right balance between TK that was subject to protection and knowledge which was or had become part of the public domain.” The United States, echoing this concern, cited the WIPO Development Agenda’s call to “support a robust public domain in WIPO’s Member States” as reason to resist a broad positive rights framework.

On the other end of the spectrum, representatives from developing nations made the point that, even now, traditional knowledge—some of which would be thought of as in the public domain in the United States—is not freely available for anyone to use, and that those given access should have responsibilities and obligations extending indefinitely into the future. The representative of one indigenous tribe made the following comment:

Public domain was a western concept that was designed for commerce and was a bargain that was set for a grant of private property rights for a limited amount of time after which knowledge would go into the public domain. Such a concept did not necessarily exist in indigenous knowledge systems.

True enough, but the underlying “commerce” concerns of the western public domain, in the United States at least, go to the very heart of its philosophy on the appropriate encouragement of the “progress of science and the useful arts” and the scope of acceptable limitations on free speech.

As the draft text develops, it seems increasingly likely that this agreement will provide the first ever legal definition of the scope of the international public domain—something ACTA, TRIPS, Berne, and all other international IP agreements have thus far failed to do. While the move toward international protection of traditional knowledge has been a long time coming, this deliberate new focus on the scope of the public domain is, hopefully, a sign that IP and trade representatives from the United States and Europe have (finally) come to acknowledge the importance of a vibrant public domain.

Bringing this back to the world of scholarly communications, positive protections that award rights over certain traditional knowledge works are somewhat worrisome, because it is library collections that house some of the rare copies of expressions of traditional knowledge existing in the United States and other developed nations. Expanding international protections may severely curtail what academics can do with those works, and it will almost certainly limit their ability to collect some of these works in the future. The big picture impact of this traditional knowledge agreement remains to be seen, but the scope of the public domain is at play—for traditional knowledge specifically, but with inevitable implications for its scope in general—and that is a concern which extends far beyond libraries and the scholarly world.

Precedent and procedure in Georgia

As soon as I read this short note in Inside Higher Ed reporting that the 11th Circuit Court of Appeals had barred a pharmacy association from suing the University System of Georgia for copyright infringement because of sovereign immunity, I knew I needed to read the decision and blog about it.  It seemed, after all, to have potential relevance in the copyright infringement case against Georgia State University, which could, once a decision is reached, have a much broader impact on higher education.  Since this pharmacy case comes from the Appellate Circuit in which the GSU case has been brought, it is a binding precedent for Judge Orinda Evans in the latter case.

Nevertheless, once I read the decision I discovered two things that reduced its importance in my eyes.  First, the report in Inside Higher Ed is not entirely accurate; it makes the decision seem more sweeping and categorical than it really is.  Second, once we see what was actually decided by the 11th Circuit it is clear that it is not in any way decisive for the Georgia State case.

The decision is difficult to follow and concludes without a clear summary, so I hope that any misreadings I make here will be corrected by others.  But as I see it, the 11th Circuit ruling is a split decision for sovereign immunity.  On the one hand, the Court upheld the lower court’s decision to dismiss the claim for damages from the alleged infringement because it is barred by the rules that prevent federal courts from hearing most cases, especially those seeking money damages, against the states.  But on the other hand, the 11th Circuit reinstated the claim for an injunction against the Georgia system (an order requiring it to stop the allegedly infringing activity), which relies on the exception to sovereign immunity known as Ex Parte Young (after the Supreme Court case that established it).

So Inside Higher Ed got it only half right — the pharmacy association cannot bring a case seeking damages against the state entity, but its claim asking for an injunction will go forward.  The state is not, contra IHE, immune from that type of suit.

The Georgia State Case about e-reserves and course readings in an LMS is itself based on Ex Parte Young and seeks only an injunction, not damages.  So the pharmacy decision handed down yesterday does not mean that the GSU case must also be dismissed, since the Appeals Court actually upheld the injunctive part of the claim.

Georgia State has also asked the court in its e-reserves case to dismiss based on sovereign immunity, and what this pharmacy case really does is remind us of what GSU must do to win on that point.  The issue is essentially whether the court believes that there is a likelihood of ongoing violation of the copyright law or not.  In the pharmacy case, the Appeals Court was not persuaded that there was no chance of continuing infringement, so it refused to dismiss and will let that issue be decided at trial.  Likewise, GSU has basically said that its new copyright policy means that infringement is a thing of the past and that the new policy ensures that its ongoing actions will be within the scope of fair use.  The GSU court has already articulated a standard for what must be proved at trial — the plaintiff must show a substantial number of continuing infringements and the defendant then has the burden of demonstrating that those alleged infringements are really fair use — that looks very much like what the 11th Circuit wants in such cases.

So the impact of this recently decided case on the GSU trial will probably be minimal.  One or both of the GSU parties will certainly point it out to the GSU trial court — they are obligated to do so — as a supplemental authority.  But even though this appellate decision is a binding precedent in Georgia, it is not, in my opinion, determinative of the issues before the GSU court.
