Category Archives: Technologies

Why Can’t I Digitize My (Institution’s) Library?

By David Hansen, J.D., Scholarly Communications Intern

On Tuesday Judge Denny Chin set a deadline of mid-September for Google, the Authors Guild, and the AAP to work out a settlement for Google Books. The lawsuit, filed in 2005, seems to have been going on forever, and I wonder what, in the meantime, libraries can do to move forward. After looking at my own (personal) digital library, I wonder how the same principles regarding digitization might apply to institutional libraries.

Over the weekend I joined Google Music, a service that uploads my collection of music and stores it . . . somewhere. Somewhere in Google’s cloud. With it, I can access my entire collection of music from any computer. It’s great.

What is not great is my internet connection. I've had the service for about a week, and at this point only about half of my music collection is uploaded. Uploading large amounts of data understandably takes time, and since Google Music "store[s] a unique copy of Your Music on your behalf," each and every file has to be transferred. Uploading these copies is generally considered "space-shifting," something that Google, and the courts, have concluded is lawful "personal use."

Apparently there are other approaches to what Google Music does. Ars Technica has published this article outlining the legal positions of Google Music, Amazon Cloud Player, Apple's iCloud, and MP3Tunes. All of these services provide streamed online copies of users' music collections; Apple does so with licenses from the record labels.

Google Music and Amazon Cloud Player both seem to operate as a "digital locker," making unique copies of the user's own files. They presumably rely on the space- and time-shifting cases that make users' own copying lawful, and on the Cartoon Network v. Cablevision case (discussed at length in the Ars article), which held that Cablevision was not directly liable for "publicly performing" the works in question, even though it provided a DVR service that allowed users to record and replay their own unique copies of previously transmitted shows. The court in Cartoon Network placed some emphasis on the fact that each user had access only to his or her own personal and unique copies of the recorded shows.

MP3Tunes acts in a similar way, but with two differences. First, MP3Tunes will delete redundant copies when more than one user uploads identical files. This de-duplication process, while obviously more efficient than the Google and Amazon services, may conflict with the Cartoon Network case because each user accesses one centralized copy of a song, rather than multiple users accessing multiple "unique" copies of their own recordings. The second major difference is that MP3Tunes is currently being sued by EMI. Most of the suit focuses on the safe harbor provisions of the DMCA, and on whether MP3Tunes can be held directly liable, notwithstanding the Cartoon Network case cited above, for "publicly performing" the works in question. But another major issue is whether space-shifting to the cloud is a permissible fair use.
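De-duplication of this kind is, at bottom, a standard storage technique: keep one physical copy of each distinct file, identified by a cryptographic hash of its contents, and point every user's "file" at that shared copy. The sketch below is purely illustrative (MP3Tunes's actual implementation is not public); the class and method names are hypothetical.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: one blob per distinct file.

    Identical uploads from different users share a single stored copy,
    which is exactly the design choice at issue under Cartoon Network.
    """

    def __init__(self):
        self.blobs = {}        # sha256 hex digest -> file bytes
        self.user_files = {}   # (user, filename) -> digest

    def upload(self, user, filename, data):
        digest = hashlib.sha256(data).hexdigest()
        # Store the bytes only if no identical file has been seen before.
        self.blobs.setdefault(digest, data)
        self.user_files[(user, filename)] = digest
        return digest

    def download(self, user, filename):
        return self.blobs[self.user_files[(user, filename)]]

store = DedupStore()
song = b"fake mp3 bytes"
store.upload("alice", "song.mp3", song)
store.upload("bob", "song.mp3", song)  # identical file: no new blob stored
print(len(store.blobs), len(store.user_files))  # prints: 1 2
```

The legal rub discussed above is visible in the data structure itself: `blobs` holds one centralized copy for everyone, whereas a Cablevision-style "unique copies" design would store the same bytes once per user.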

For libraries that want to make digital copies of their print collections—i.e., space-shifting—there are some limited exceptions in the law that permit copying for preservation (section 108 of the Copyright Act). There is a need, however, to provide more complete digital access to the entire campus community beyond what section 108 contemplates. The University of Michigan (along with Florida, Illinois, and Wisconsin) has recently announced that it will make copies of orphan works, held jointly by the University of Michigan and HathiTrust, available to campus users, based on an assertion of fair use and its own risk analysis. The fair use argument relies on the idea that only works in each respective library's print collection will be made available online to its users through the HathiTrust: one print copy, one digital access. No one gains access to books they don't already own—just different, electronic access to those already in the print collection. The parallel to the "digital locker" theory that supports Google Music is strong, and the fair use argument for Michigan is bolstered even more by the fact that it isn't in it for the money (as Google is).

This fair use assertion makes an end-run around section 108, but looking at the fair use factors, it is still appealing. Even more so for Michigan because a large part of the scanned corpus of the HathiTrust comes from Michigan, so for many books it would also be able to make the argument that the digital copies are not just practically the same books that are in its collection, but that they are identical copies of UM books, meeting some of the concerns of the Cartoon Network court.  Other libraries have less to rely on in that respect, as fewer (or none) of their physical copies were scanned for inclusion in the database. But the fact that Michigan and these other libraries are only making orphan works available means that even if the fair use analysis is slightly off, there is still almost no chance anyone will be sued. The orphan works identification process that Michigan has used (detailed here) employs a more than reasonably diligent search for copyright owners, and leaves little chance that there are any rights holders available or willing to bring an infringement suit.

Risk notwithstanding, though, I wonder, what’s wrong with a library digitizing its entire collection (not just orphan works) under the space-shift theory?  If the library takes those books out of circulation (perhaps in high-density storage) and limits online access to one user at a time (essentially, recreating the limitations of a physical visit to the library), the fair use analysis is still very much in the library’s favor. Google, in its amicus brief in support of MP3Tunes, makes the point well:

"[j]ust as the Supreme Court has held that 'time-shifting'—recording television broadcasts for later viewing—is a lawful fair use, Sony Corp. of America v. Universal City Studios, 464 U.S. 417, 455 (1984), so too is 'space-shifting' lawfully acquired music onto digital music players or cloud-based equivalents, Recording Indus. Assoc. of Am. v. Diamond Multimedia Sys., 180 F.3d 1072, 1079 (9th Cir. 1999). A contrary holding would treat tens of millions of iPod owners who lawfully acquire their media as no better than those who misuse new technologies to pirate music and movies."

Should space-shifting books be any different? These cases, admittedly, deal with space- and time-shifting for personal uses, not uses by educational institutions. That distinction may be critical in the end. But shouldn't uses for "teaching . . . scholarship, or research"—which are specifically called out in the section of the Copyright Act that codifies fair use—carry at least as much weight as "personal use," which is not mentioned anywhere in the Act?

Finally, if a library can digitize its own collection and make it available to patrons, can that library pool its digital holdings with other libraries, so that there is no needless duplication of digital copies? Storing these works in digital format is not cheap, and while my meager 20GB music collection has taken half a week to upload to Google Music's "cloud," creating and duplicating millions of digital volumes is a monumental and inefficient task. Requiring each library to maintain its own separate set of copies, as amici in the MP3Tunes case have argued, would be incredibly burdensome to both digital libraries and users in general.

Careless language and poor analogies

One of Will Rogers' best-known aphorisms is "I only know what I read in the papers." In line with Rogers' irony, if all one knows about the Aaron Swartz case is what one reads in the blogosphere, one knows very little indeed, and much of that is wrong.

Swartz has been indicted on several federal charges after allegedly physically and technologically gaining unauthorized access to the MIT network and downloading a huge number of files from JSTOR.  On that everyone agrees.  After that the claims about and arguments based on this event diverge dramatically.

Predictably, many bloggers (an example is this one from the Copyright Alliance) call these actions by Swartz "theft" or "stealing." As always when talking about intellectual property, these words are misapplied. The formal definition of theft from Black's Law Dictionary is "the felonious taking and removing of another's personal property with the intent of depriving the true owner of it." It should be clear from this definition why we call unauthorized use of intellectual property "infringement" rather than theft. What Swartz is alleged to have done did not remove the intellectual property and showed no intent to deprive the original owner of it; he merely made, allegedly, unauthorized copies, which does not have the effect of depriving anyone else of intangible property. JSTOR was never without these files and has, in fact, recovered the unauthorized copies.

Whenever someone uses the language of theft in reference to intellectual property, they are trying to cover the weakness of their argument, in my opinion.  Let’s just say infringement and talk about both the legitimate reasons to protect IP and the public policy that permits some unauthorized copying.

By the way, Swartz has not been charged with copyright infringement either.  The charges of wire fraud, computer fraud and illegally obtaining information from a protected computer all relate to the hacking itself, not to the downloads.

Another place where serious misrepresentations abound is when we are told (as in this post on the Scholarly Kitchen) that Swartz has "done this before" because of a previous incident in which he downloaded large numbers of documents from PACER, a database used by the federal courts. That incident, however, involved neither illegal access nor copyright infringement. Although PACER usually charges a fee, Swartz used a computer at a university on which access was being provided for free as an experiment. And the materials he downloaded—documents from the federal courts—are not protected by any copyright, due to section 105 of the US copyright law. To be sure, Swartz was protesting the fees charged for access to works created at taxpayer expense for the public good, but his actions in that case have no analogy to the behavior charged in this indictment.

One place where there is significant disagreement is about Swartz's intentions. Many bloggers simply assume that he intended to release all of the downloaded files to the public, although Swartz claims he intended to do text-mining research with the articles. He has done such work before, so there is some plausibility to his claim, which may explain why infringement charges have not been brought. So turning this into a debate about the open access movement is wholly inappropriate. It is important to recognize that the victim of these alleged crimes was not JSTOR or any of the journals it aggregates. The victim was MIT.

However fervently one shares Swartz’s goals for greater access to legal and scholarly information and publications, the actions for which he has been charged do not serve those goals.  Quite frankly, Swartz’s actions were not radical enough, in the sense that they did not get to the root of the problem. It is clear that the system of scholarly dissemination is badly broken, and simply hacking it does not change that fact.  The real change, the real solution Swartz (apparently) seeks, will be found only when the academic authors, the original holders of copyright, stop transferring those copyrights to publishers without careful reflection and safeguards on their right to disseminate their own work widely.

Unintentional felons?

Whenever a new law is proposed in Congress, and especially when it deals with copyright, it behooves us to look both for the reasoning behind the bill and for its potential for unintended impact on non-targeted activities.

Such a bill is S 978, also known as the "10 Strikes" bill, which was introduced by Sens. Klobuchar, Cornyn and Coons and recently reported out of committee to the full Senate. The language of the bill amends both copyright law and the federal criminal statutes to turn ten or more public performances of a copyrighted work "by electronic means"—presumably unauthorized performances—into a felony punishable by up to five years in jail.

The purpose of this bill seems relatively obvious; it would further shift the expenses of copyright enforcement from the private companies that create content onto the taxpayer.  Copyright is generally a private tort, and the copyright owner has the obligation to bring lawsuits against infringers in order to enforce its rights.  By converting infringement into a federal crime, the costs of litigation would be borne by the government (the Justice Department) and, ultimately, by taxpayers.  This has been a continuing theme of the lobbying efforts undertaken by “Big Content” in the past few years.  During testimony in favor of this bill (and the PROTECT IP Act, a similar proposal to increase federal enforcement efforts) a DOJ official told the Judiciary Committee that there have already been 15 new attorneys and 51 FBI agents hired under the earlier PRO IP legislation.  The introduction of these bills is an example of the continuing success of industry lobbying.

Copyright law has had some criminal provisions for quite a while, but the threshold for this felony is really quite low—only 10 unauthorized public performances within 180 days. So the expense of industry efforts to rein in YouTube, as well as less above-board media-sharing sites, would shift dramatically to government lawyers instead of those employed by Disney or Comcast if this bill were adopted.

The intended consequences of this law are bad enough, at least for those who do not want to hand more tax money to the entertainment industries. But the unintended consequences could be worse. As the blog TechDirt points out, this bill could create liability for folks who embed YouTube videos into their webpages or blogs. Others have suggested that online karaoke could also become a criminal act. Since it is public performances, and not just reproduction, that is criminalized here, someone who embeds a video (or even links to it?) would need to know in advance whether the video was made available with authorization.

As the parenthetical question above indicates, the absence of a definition of what constitutes a public performance makes this law especially ill-conceived.  And it is not even made explicit that only unauthorized public performances would trigger liability, although presumably this enforcement bill cannot by itself criminalize public performances that are not even infringing.

For higher education, it is useful to distinguish which performances might raise a problem if this bill were enacted and which ones would not.  Performances in a live classroom are specifically authorized by the Copyright Act, so they would not have the potential for criminal liability.  Film clips that are transmitted through a closed learning management system are similarly authorized (although with several qualifications), so this common practice would not become criminal either.  Nevertheless, the fact that we have to ask the question indicates how dangerous such thoughtless legislation can be.

Where risk would arise is in the many supplemental educational communication tools that faculty use to enrich their teaching. Embedding a video in a class blog might become problematic, as could having students make and share videos in which background music, even if incidental, was included. And a cynic might see, behind this new effort to ratchet up penalties for infringement, an attempt to frighten other universities away from following the example of UCLA in streaming digital video for classroom teaching; under this bill, criminal charges might be possible if a fair use defense of that practice were rejected.

Another big question raised by this proposal is whether or not “accomplice” liability might attach to universities because of criminalized public performances initiated by students.  Courts have apparently never accepted a criminal parallel to contributory infringement, but the Department of Homeland Security asserted exactly that theory when it began seizing the Internet domains of web sites that allegedly linked to pirated content.

The "10 Strikes" bill makes it easy to see why it is important, yet extremely rare, for Members of Congress to think before they "strike."

Retractions and the risk of moral panic

Several people sent me a link to this story from the Chronicle of Higher Education reporting on a study that finds that biomedical researchers continue to cite and rely on published articles even after the papers have been retracted. My initial reaction was what I presume it was supposed to be: "Gee, that's terrible." The conclusion that the article attributes to the study's author is that, at worst, some researchers cite articles they have not read, and that, at best, researchers are getting to papers through informal routes that bypass the "official" websites where retractions are generally noted.

This article, however, prompted me to remember an earlier blog post and to explore a web site dedicated to publicizing retractions.  The result is that I want to qualify the potential for a “moral panic” based on this study in two ways.

The first is to remind us all that the Internet is not to blame for the problem of bad science living on in spite of retractions. It is certainly true that the digital environment has led to more copies of a work circulating, and those copies can be very persistent. But printed copies of erroneous studies were and remain much harder to change or stamp with a notice than digital ones are. In the "old days," a retraction would be printed several issues after the original article, where many researchers would never see it. Indeed, it is hard to imagine that a study like the one reported by the Chronicle could even be done in that environment; in most cases it was simply impossible to know (at least for the non-specialist) if an article was citing a prior work that had been discredited. Today more copies persist, but it is easier to disseminate news of a retraction.

The blog post I remembered about this topic was by Phil Davis on the Scholarly Kitchen blog. In spite of the post's unfortunate title, Davis does an excellent job of describing this problem without simply foisting the blame on the Internet and the increased availability it facilitates. He does suggest that the tendency to cite retracted articles is exacerbated by article repositories, and I would add that we must balance whatever potential harm there is in these repositories against the great benefits to scientific research that are offered by improved access. More important, however, is Davis' discussion of a potential solution to the problem, a service called CrossMark which could help address the "version" issue.

The other blog site that I explored for some insight into the retraction problem is “Retraction Watch,” which is mentioned in the Chronicle report.  What was most interesting about this site, I thought, was its sophisticated awareness of the variety of reasons for retraction and its recognition that not all retractions indicate that an article’s conclusions are unsound.

When we hear that an article has been retracted, we immediately suspect, I think, that there has been fraud, fabrication or falsification.  At the very least we suspect that the authors have discovered that their results cannot be verified or reproduced.  Often this is true, but there are other reasons for retraction as well.

One possible reason for retracting a paper is that it was sloppily presented, even if accurate.  That seems to have happened in regard to a paper by Stanford scientists that was retracted by the Journal of the American Chemical Society.  The authors agreed to the retraction, apparently, because of “inconsistencies” in the documentation and interpretation of the data, but have subsequently verified the fundamental finding that the paper reported.  And some retractions are even less grounded in fundamental scientific errors; retractions have occurred because of political pressure (such as with the conflicting studies about the effect of gun ownership on crime), or even because some people thought an article was in bad taste (Retraction Watch reports here on such a case).

What I like about Retraction Watch is that it looks seriously at the different reasons for retractions and, when they are not clearly explained, as in this retraction from the journal Cell, tries to dig deeper to discover what the flaw actually was, or was perceived to be.  This should be a model for our general reaction to retractions and the news that retracted articles continue to be cited.  We should ask the “why” question over and over while remembering that scholarly communications is a complex system with many layers; simple answers and moral condemnation in advance of specific facts are almost never helpful.

There’s more to life than copyright

It is a hard lesson for me to learn, but there are other issues related to scholarly communications besides copyright. Today's news has focused attention on free speech issues for academics. Now, we have talked about free speech as it is impacted by copyright, and some interesting examples of how copyright can be wielded to censor disfavored speech can be found here and here. But for a moment I want to focus on another aspect of free speech and scholarly communications on campuses.

Recently there has been a lot of interest in the scope of free speech rights for professors at public universities. In 2006 the Supreme Court handed down a decision called Garcetti v. Ceballos, which held that a public employee (a deputy district attorney) was not entitled to First Amendment protection for speech made pursuant to his official duties. In short, he could be disciplined for things he said that were related to his job. This caused a great deal of anxiety for academics at public universities, since it seemed to provide a loophole to avoid the academic freedom that is so cherished, but so fuzzily defined, on our campuses.

Garcetti was followed by a number of decisions that did apply its ruling to employees of public universities, which increased the concern and brought the American Association of University Professors into the discussion.  One oddity of those decisions is that the opinion in Garcetti is itself skeptical about whether the ruling should be applied to academics.

Today comes news of a decision in the Fourth Circuit Court of Appeals, reported here and here, that reverses this trend and asserts that Garcetti should not be applied to professors.  In a dispute where a tenured associate professor claims to have been denied promotion over blog posts and newspaper columns he wrote expressing conservative, Christian-oriented viewpoints, the 4th Circuit held that such speech was protected by the First Amendment and that Garcetti did not mean that an academic could be punished for unpopular speech.

It is important to realize that this case was not decided by the ruling this week; it was remanded to District Court.  The Appeals Court held that it was improper to dismiss the case because of Garcetti and the presumption that a public university professor’s speech was not protected, but the university may still be able to prove that promotion was denied for a different, acceptable reason.  Only if the professor shows that he was actually denied promotion because of what he wrote will there be a First Amendment problem.  But I want to consider a couple of interesting (I hope) questions raised by the Appeals Court’s decision.

First, if a public university professor's speech is protected, as we always thought before Garcetti, what about his or her right NOT to speak? The flap going on in Wisconsin over a public records request for the e-mails of a professor who has apparently taken a political stance unpopular with the current state government raises the issue of how far free speech should go to protect the decision not to speak, or not to have one's speech disseminated beyond those for whom it was intended.

There is a fairly long history of the Supreme Court recognizing and upholding the right not to speak based on the First Amendment. Some of these are "ventriloquism" cases, where courts have held that the state cannot put words in someone's mouth by requiring them to say specific words, like the Pledge of Allegiance or a motto on a license plate. But there is also jurisprudence upholding a right to decide how to distribute one's own speech. In fact, in Harper & Row v. Nation Enterprises the Supreme Court argued that copyright was congruent with free speech partly because it supports the right not to speak publicly until one decides to do so. Given the strong First Amendment rights for state-employed academics affirmed by the Fourth Circuit, finding a more complete negative right to determine if, when, and where protected personal communications are published is entirely plausible. We shall see if that reasoning has an impact in Wisconsin.

Another point raised by this decision on academic free speech is whom it applies to. Does it apply only to faculty, including, presumably, librarians with faculty status? Or could it apply to librarians and other staff as well? One of the effects of the technological revolutions we have seen lately is that many more of us, including yours truly, are able to communicate widely and advocate for policies and legal interpretations that may be controversial. It is interesting that the professor's scholarship at issue in the 4th Circuit was in non-traditional forms like blogs and newspaper columns. The shift in scholarly communications, and libraries' deeper involvement in scholarly communications issues, raises the question of academic freedom protections for non-faculty and the scope of free speech rights in this newly developing dialog.

Piling on

Since posting my comments on the Google Book Settlement earlier this week, I have followed other commentary as closely as time has allowed.  I have been interested to see that no one else whose comments I have seen seems to think that an appeal is likely.  Indeed, I draw that conclusion entirely from the absolute silence I find about that option, while there is much discussion of other possibilities.

I imagine the reason for this is the strong sense that the rejection was, as Prof. Pamela Samuelson puts it in this interview, the only conceivable ruling that the judge could have made, and that it is quite watertight from a legal perspective. While it is not unheard of for parties to spend lots of money on lost causes, the majority of commentators obviously feel that Google, the Authors Guild, and the Association of American Publishers will not throw good money after bad by filing an appeal.

I am perfectly willing to pile on to this bandwagon, abandon my speculation about an appeal, and think about what other options the rejection might open up.  One theme that seems to be emerging is that a renewed emphasis on solving the orphan works problem is now called for; certainly that is reflected in this article from the Chronicle of Higher Education.  I absolutely agree that the rejection of the settlement should be a call for librarians, especially, to re-engage with the orphan works issue, and want to consider a little bit what form that re-engagement might take.

The Google Books Settlement gave librarians, copyright activists and even Congress a chance to sit back and assume that orphan works was being dealt with.  Sure, we thought, there are millions of works that are still protected by copyright but for which no rights holder can be found; access to these works is a problem, but Google is going to solve it.  Now we cannot look to Google for a solution, so it is worth revisiting what a sensible solution might look like.

I think we should consider the possibility that a legislative solution may not be either the most practical or the most desirable way to resolve the issue of access to orphan works.  The orphan works bill that came closest to passing a few years ago was hardly ideal, since it would have created requirements both burdensome and vague for gaining a measure of extra protection from copyright liability.  A good bill that really addresses the orphan works problem is probably both hard to conceive and unlikely to pass.  So what alternatives short of a legislative solution might we consider?

The obvious answer is fair use, since most proposals for orphan works solutions would essentially codify a fair use analysis. Fair use, after all, is really an assessment of risk, since the goal is to reuse content in a way that wards off litigation. The Congressional proposals around orphan works would simply have reduced the damages available in defined situations, thus also having as a primary purpose the reduction of the risk of litigation. Careful thinking about projects like mass digitization of orphan works can accomplish the same goal by balancing analysis of the public domain, permissions where they are possible and needed, and a recognition that for truly orphan works the fair use argument is much stronger, since there is no market that can be harmed by the reuse.

When I say "truly" orphan works, I begin to hint at another element that might go into an informal solution of the orphan works problem: the creation of rights registries to help locate copyright holders. This article about a copyright database, or registry, being built in the European Union—called the ARROW project—indicates that such an idea can garner support as a way to address the difficulty of orphans.

The Google Books Settlement, of course, envisioned the creation of a rights registry that would have helped a lot with the orphan works problem, but now we need to think about other, and perhaps less ambitious projects.

A registry would help because it would provide an easy (or easier) way to determine that a work is not an orphan.  A search in a comprehensive registry could help a putative user find the rights holder to whom a permission request should be directed and, if no rights holder has registered, create a presumption that due diligence has been performed.  As EU Commissioner Neelie Kroes puts it in the article,

one search in ARROW should be all you should need to determine the copyright status of a cultural good in Europe.

When I suggest a less ambitious registry than ARROW or the Google Registry that was never born, I am thinking that there are certain kinds of cultural goods — photography is an obvious example — where there are unique problems in marking the work in a way that permits easy identification of the rights holder.  A registry for photographs, especially as image-matching software becomes so impressively accurate, could help photographers protect their rights and give potential users a little more security when deciding to use a work believed to be orphaned.
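The image matching mentioned above is commonly built on perceptual hashing: reduce each image to a short bit string that changes little under resizing or re-encoding, then compare hashes by Hamming distance. The following is only a toy illustration of the idea (a simple "average hash" over a grayscale grid); production registry systems would use far more robust algorithms.

```python
def average_hash(pixels):
    """Perceptual hash of a grayscale image, given as a flat list of
    pixel brightness values: one bit per pixel, set if the pixel is
    at least as bright as the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    return [1 if p >= mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits; 0 or near 0 suggests a match."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# A tiny 4x4 "photograph" and a re-encoded (slightly noisy) copy.
original = [10, 10, 200, 200, 10, 10, 200, 200,
            10, 10, 200, 200, 10, 10, 200, 200]
reencoded = [12, 9, 198, 201, 11, 10, 203, 199,
             10, 13, 200, 197, 9, 11, 202, 200]
unrelated = [200, 200, 10, 10] * 4  # an inverted, different image

print(hamming(average_hash(original), average_hash(reencoded)))  # prints 0: match
print(hamming(average_hash(original), average_hash(unrelated)))  # prints 16: no match
```

A registry could index photographs by such hashes, so that a would-be user could search with a found image and either locate the registered rights holder or document a diligent search that came up empty.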

I want to emphasize that I am not suggesting a re-introduction of formalities in the US, akin to copyright notice and registration with the Copyright Office, any more than the EU database would be a formality. Instead, I am proposing a voluntary mechanism that would help rights holders protect their own interests, make permission requests easier, and increase the accuracy of determinations about real orphan works.

Shakespeare and copyright

On Monday author, attorney, and Authors Guild president Scott Turow published an op-ed piece in the New York Times arguing that copyright protection is vital for creative production and that the Web is a serious threat to authors. Such pieces appear regularly in the Times; every three months or so a different author or artist trots out these arguments. It seems a little unfair to critique these editorials because they are usually manifestly uninformed; several critiques of Turow have already appeared, and I don't want to seem to be piling on.

Nevertheless, Turow offers a chance to drive home a very different point than the one he thought he was making, owing to his woefully unfortunate choice of an example for his piece.  The core of the argument is that Shakespeare and his contemporaries flourished because their work was rewarded financially, owing to the innovation of producing plays in an enclosed environment and sharing the income from theater admissions with the playwrights.  Turow then analogizes this physical barrier to theater admission with the “cultural paywall” of copyright in order to argue that the Internet threat to copyright must be addressed with stronger laws (his piece was timed to influence hearings held in the Senate on Wednesday).

Turow chooses Shakespeare simply to show that authors need to make money in order to produce creative work.  That point itself is quite doubtful and multiple counterexamples could be ranged against it.  But even more basically, the example of Shakespeare actually proves some very different points than the ones Turow thinks he is making.

First, Shakespeare lived before there were any copyright laws in England — the Statute of Anne was adopted almost 100 years after his death — so his productivity is evidence that there are ways to support authorship other than with copyright.  In truth, it was not so much his share of theater revenues that paid Shakespeare’s bills as it was patronage.  And patronage remains important to many artists even today, since revenues from copyright so seldom actually filter down to authors and artists.  The National Endowment for the Arts is one such patronage arrangement, as are academic appointments that allow playwrights and poets and musicians to continue to create while still putting food on the table.  These kinds of direct support are much more effective, in many cases, than relying on the monopoly income provided by copyright, since most of that money remains with intermediaries.  The example of Shakespeare proves that copyright is not an absolute necessity for supporting the arts.

The second reason Turow’s choice of a hero for his piece is unfortunate is that Shakespeare was, himself, a pirate (in Turow’s sense), basing most of his best known plays on materials that he borrowed from others and reworked.  If Boccaccio, or Spenser, or Holinshed had held a copyright in the modern sense in their works, Shakespeare’s productions could have been stopped by the courts (as unauthorized derivative works).  This is not an unfamiliar point; most schoolchildren are taught that Shakespeare borrowed his stories.  It is rather astonishing that Turow would choose Shakespeare to make his argument, therefore, and no surprise at all that TechDirt has reformulated Turow’s question to read “Would Shakespeare have survived today’s copyright laws?”

As much as Turow may want to argue that copyright is necessary to support authors and artists, what he really succeeds in proving, unintentionally, is that great art often depends on the ability of artists to borrow from and reshape earlier work, and copyright, in so far as it impedes that process, is part of the problem and not its solution.

Hot news, cold idea

At a meeting about public access to federally-funded research that I attended earlier in the year, a publisher strenuously asserted that it was not the role of the government to drive a business out of the market.  He was right of course, but so were a group of us who replied that neither was it the role of the government to prop up a business that otherwise could not survive.

I was reminded of this exchange when I looked at the “Discussion Draft” from the Federal Trade Commission on “support[ing] the reinvention of journalism.”  Unfortunately, the policy recommendations floated in this document have very little to do with reinventing journalism, but a lot to do with propping up the traditional business model of newspapers.  Most of the ideas put forward here — and they come not directly from the FTC but from those the FTC has discussed the issue with (a telling process of selection in itself) — are about how to keep the status quo in news publishing from collapsing under its own weight and under the pressure created by new opportunities for disseminating news offered by the Internet.  Rather than looking at how journalism must change, the FTC has offered a set of proposals for how to protect the current set of badly mismanaged news organizations from the Internet.

There are lots of critiques of these proposals, including ones found here and here.  My favorite comment, from Kent Anderson of the “Scholarly Kitchen” blog, notes that the FTC does not “acknowledge how newspapers and other traditional media exploit free information tools like Facebook and Twitter to lazily learn about news through their desktops.”  So, in the great tradition of “what’s mine is mine and what’s yours is also mine,” newspapers seek to prevent others from disseminating news on the Internet while wanting to benefit from that dissemination whenever it can save them money.

Google released an extensive, and deadly accurate, critique of the FTC proposals, which can be found here.

What concerns me most about the FTC proposals and the ideas coming out of the news industry is the suggestion that copyright law needs to be revised to provide news organizations with additional protection.  Sometimes they suggest that fair use should be amended to exclude the possibility of a fair use of news coverage.  Worse, they often suggest, including to the FTC, a statutory version of the so-called “Hot News” doctrine.

The “Hot News” doctrine provided some protection for organizations that first reported a news event from those who would re-use the reportage, sometimes even exploiting technology to “scoop” the original reporters.  What technology, specifically?  The telegraph.  You see, the hot news doctrine dates from a 1918 Supreme Court case and has had very little traction in the modern world.  In that case, International News Service v. Associated Press, the Supreme Court upheld an injunction restraining INS from “appropriating news taken from [AP] bulletins… for the purpose of selling it to defendant’s clients.”  In spite of a recent attempt by AP to revive the doctrine, I want to suggest that there are at least four good reasons that “hot news” should have no place in copyright law.

First, we should recognize that the original decision by the Supreme Court was not a copyright ruling, but involved unfair trade practices.  These state law protections apply only between business competitors and would not prohibit non-profit distribution of the news by “citizen journalists” and those who post news stories to their Facebook pages.  Incorporating hot news into copyright would have the potential to do just that, expanding the protection for news far beyond what the Supreme Court authorized almost a century ago.

Second, times have changed a lot since 1918.  In the INS v. AP decision, the Supreme Court spilled a lot of ink discussing the economics of news gathering in order to justify the limited protection they were upholding.  Those economics have changed so drastically, as Anderson’s comment illustrates, that the foundations of the hot news doctrine have really been undermined.

Third, further erosion of those foundations came from the Supreme Court in 1991, when it ruled, in Feist Publications v. Rural Telephone Service, that no copyright could be obtained merely through “sweat of the brow.”  If the hot news doctrine were imported from unfair competition law into copyright, we would be importing a sweat of the brow doctrine that is at odds with the structure and underlying principles of the Copyright Act.

Finally, it is simply contrary to fundamental principles of democracy for the law to constrain ordinary citizens from talking to one another about the news of the day.  News is a unique category of information because of its importance to a democratic society.  While the opportunities to exchange information and ideas about the news that exist today can be used for good or for ill, it is not the place of the government to constrain those opportunities, even in the name of propping up newspapers’ foundering business models.

Reading the fine print

Yesterday’s announcement that the Library of Congress was designating new classes of works exempt from the anti-circumvention rules of the DMCA has generated lots of Internet buzz, especially about the exemption for those who “jailbreak” their cellular phones.  The major exemption for higher education, allowing circumvention by faculty for a range of defined educational purposes, has also gotten some press, some of it excellent and some of dubious accuracy.  In the latter category, unfortunately, is this piece from Inside Higher Education, which I will discuss below.

But first let’s look at the actual language of the exemption.  What follows is based on the detailed description of the six exemptions given in today’s Federal Register.

First, the exemption is to permit circumvention of technological protection measures — the breaking of digital locks — for certain classes of works and for defined purposes.  These rules do not change the definition of fair use; they merely specify a small group of purposes within the broader category of fair use for which circumvention is permitted.

Next, this exemption applies to lawfully made and acquired DVDs that are protected by Content Scrambling System (CSS).  This application is both broader and narrower than the previous rule.  It does not require that the DVD be part of a university’s library collection, much less the collection of a film or media studies library.  The DVD can come from anywhere as long as it is not pirated or stolen.  But it applies only to DVDs that use CSS; it does not, for example, apply to Blu-Ray discs.  So a faculty member can make a compilation of clips from her own DVD library, for example, unless she collects that library in some format other than traditional DVD.

The exemption applies to three specific activities for which circumvention is necessary.

First, it applies to educational uses by college and university faculty and by college and university students of film and media studies.  Notice that the category of faculty is all inclusive, but the category of students is limited.  The Library of Congress determined that not all students needed this exemption; presumably they were also aware of industry fears that students would carry the permission too far if the exemption were general.  Also, the application to educational uses does not include K-12 teachers, who were also determined not to need the ability to obtain high-quality clips.  Presumably they are still expected to point a digital camera at a TV screen if they want a clip from a motion picture.

The other activities to which the exemption applies are documentary film-making and non-commercial videos.  Presumably some of the limitations to the persons allowed to circumvent for educational purposes may be mitigated by these two defined activities.  A university student who is not studying film and media studies, for example, might still want to use a film clip in a class video project and could be permitted because it is a non-commercial video.

So once we are clear about what can be used, by whom and for what purposes, it remains to ask what exactly we can now do.  The answer is that we can circumvent technological protection measures in order to incorporate short portions of motion pictures into new works for the purpose of criticism and comment. Several phrases here call for explication.  First, circumvention is allowed for copying short portions, not entire films.  Second, this exemption applies only to motion pictures, not to other content, like video games, that may be available on DVD.  Third, the clip must be used to create a new work.  I was glad to see that the explanation of this phrase in the Federal Register is explicit that “new work” does include a compilation of film clips for teaching, as well as other videos in which a short clip may be subjected to criticism and comment.  Finally, that purpose of criticism and comment is a required aspect of the defined activity that is permitted.

The last requirement for this exemption is a reasonable belief that circumvention is necessary to accomplish the permitted purpose.  The announcement is very clear that if another method of obtaining the clip without circumvention is available and will yield a satisfactory result it should be used.

This seems like a lot of requirements, but I think that overall we have a pretty useful exemption here, and one whose application will not really be too difficult.  Once we understand the four key phrases discussed above, it seems that we should be able to recognize permitted instances of circumvention when we see them.  Certainly this is easier to understand and apply than the exemption it replaced.  But when we look back at that item from Inside Higher Ed, it is easy to see how excessive enthusiasm can still lead to misunderstanding.

For one thing, the IHE piece does not acknowledge the limitation placed on students who can take advantage of this educational purpose exemption.  It may be, as I suggest above, that that limitation will be swallowed by the other permissions, but we should at least recognize the intent behind the rule.  More importantly, this exemption to the DMCA’s anti-circumvention rules really has nothing to do with the dispute between UCLA and AIME or with other projects to stream entire digital videos for teaching, in spite of what IHE suggests.  While such projects may or may not be justifiable, this exemption does nothing at all to change or define the boundaries of fair use; it merely carves out a portion of those uses, which the Register of Copyrights calls “classic fair use,” for which circumvention is now permitted.  There may be other uses that are fair, but this exemption neither determines that question nor authorizes circumvention for those purposes.

It is what it is, and no more, but what it is is good news for higher education.

The new, improved DMCA

Last week I wrote, but had not yet posted, a comment about the proposed copyright reform in Brazil and the more nuanced approach they took to anti-circumvention rules that protect technological systems intended to prevent unauthorized access.  In the course of that discussion I again criticized the Library of Congress’ long delay in announcing new classes of exceptions to the US anti-circumvention provisions.  I expressed the hope that, after waiting so long, they would at least get it right.

They did.

Before I had a chance to publish my post, the new exceptions were released, albeit eight months late.  Also, an important appellate court opinion about the DMCA anti-circumvention rules was handed down.  So now I have three points to make about the DMCA and anti-circumvention rather than just one, and taken together they constitute my first ever optimistic writing about this subject.

First, the new DMCA exceptions announced today by the Library of Congress include the broader exception for higher education that many of us asked for during the rule-making proceedings.  Indeed, the language is broader than I dared hope, apparently allowing circumvention of DVDs for a broad array of purposes in higher education.  Certainly all professors can now circumvent for the purpose of compiling clips for teaching, as well as for incorporating clips into larger scholarly works.  Documentary film-makers and creators of non-commercial videos also seem to be able to circumvent for purposes of criticism and comment using short portions of a protected film.  Indeed, this exception comes close to allowing circumvention (of one type of media) for most fair uses, although it does not quite get us to that point.

The new exceptions also include a provision to allow circumvention of e-book technological protections when necessary to enable read-aloud or screen-reader functions.  This exception addresses a problem that higher education has long felt when accommodating students with a visual disability.

Second, this case out of the Fifth Circuit, involving software used to control “uninterruptable power supply” (UPS) machines, made a very clear statement that the DMCA’s protection of DRM systems “prohibit[s] only forms of access that would violate or impinge on the protections that the Copyright Act otherwise affords copyright owners…. Without showing a link between ‘access’ and ‘protection’ of copyrighted work, the DMCA anti-circumvention provision does not apply.”  The Court quotes another circuit for the proposition that the DMCA creates no additional rights other than what the copyright law already grants; it merely provides for a different form of protecting those rights.  With this language we seem to move even further down the path toward saying that circumvention is not prohibited when the purpose for which access is sought would be a fair use.

Which gets me to my third point, about the proposed copyright reform in Brazil. As I said in my earlier post:

“Brazil offers an international example of how to handle anti-circumvention the right way from the start, instead of creating a draconian rule and then forcing law-abiding users to beg for limited exceptions.  Brazil has introduced a balanced approach to anti-circumvention as part of its copyright reform proposal (available here, in Portuguese; see especially section 107).  As Canadian copyright law professor Michael Geist explains on his blog, this proposed reform imposes penalties for circumvention of legitimate technological controls on access, just as US law does.  But it also specifies that circumvention of such controls is permitted for access to public domain materials and for purposes that fall under Brazil’s ‘fair dealing’ exceptions; an obvious limitation that US law ignores.  What is more, the Brazilian proposal would impose penalties equivalent to those for unauthorized circumvention on those who would hinder circumvention for these legitimate purposes.”

Now, of course, we are much closer to the same kind of sensible approach than we were just a few days ago.  It is interesting to note that I mentioned in that earlier, never-published post that the US Trade Representative would be upset at Brazil for not incorporating US-style DMCA rules.  But I have just seen this news about how the USTR is backing down about harsh anti-circumvention provisions even in ACTA, the Anti-Counterfeiting Trade Agreement I have talked about before.  I believe I may hear the turning of a tide.