Category Archives: Technologies

Due process for file-sharers?

It is fascinating to see the different reactions to the decision by District Court Judge Nancy Gertner to reduce the damages assessed against Joel Tenenbaum for sharing unauthorized digital files of music from $675,000 (or $22,500 per song) to $67,500 (or $2,250 per song).  There is a relatively impartial description of the issue here on the Law Librarian blog, but other reactions have been all over the map.  Some have decried the ruling as a “lose/lose situation” because Tenenbaum has said that he still cannot pay the judgment and the RIAA has announced that it will not accept the revised judgment either.  On the CNET blog the decision was called “A copyright ruling that no one can like.”  The Electronic Frontier Foundation, however, does like the decision, congratulating Judge Gertner for helping to “restore sanity” to copyright damages.
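
For readers who want the arithmetic behind those figures, here is a trivial sketch; the 30-song count is simply what the reported totals imply, not a number stated above.

```python
# Arithmetic implied by the reported Tenenbaum figures.
original_total = 675_000   # jury award
reduced_total = 67_500     # award after Judge Gertner's reduction

songs = original_total // 22_500        # $22,500 per song -> 30 songs
assert songs * 22_500 == original_total
assert songs * 2_250 == reduced_total   # $2,250 per song after reduction

print(f"{songs} songs; award reduced by a factor of "
      f"{original_total // reduced_total}")  # 30 songs; factor of 10
```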

These reactions beg for a close reading of the decision in order to judge for oneself if it is a disaster for copyright law or a victory over the Evil Empire, and I postponed my own comments until I had had the opportunity to make such a reading.  My reaction was more muted, not to say boring, than those I had read.  I have no desire to defend large-scale sharing of music or movie files without authorization, and I fear that more legitimate uses of the fair use defense will be undermined if it is raised too often in these cases.  But I am glad to see the issue of whether statutory damages in a copyright case are susceptible to due process limitations teed up for appeal as cogently as Judge Gertner has done.

In her ruling, Judge Gertner explicitly states that she did not take the same legal route as was taken to reduce the damages in the other high-profile file-sharing case, the equitable doctrine of remittitur.  Had she taken that course, there is little doubt that the RIAA would have requested a new trial and the same issues would have been rehashed again.  Instead, by ruling that the award was an unconstitutional violation of the Fifth Amendment right to due process, Gertner has forced the RIAA to appeal to a higher court, where the Constitutional issue will be squarely faced.

It is well established that punitive damages (those designed to punish and deter a defendant, and others, from repeating the actions for which the defendant was found liable) are subject to checks imposed by the promise of due process.  Basically, a defendant cannot be held liable for amounts of money that are wildly out of line with the harm they caused.  Four million dollars for failing to disclose that a “new” car had actually been repainted is the classic example (from the BMW case the Judge discusses at length).  The question has been whether this analysis can be applied even in situations, like copyright, where a range of statutory damages is specified in the law and serves to cabin the discretion of juries, who tend to get emotional when awarding punitive damages.

Judge Gertner has held that the due process concerns outlined in BMW and other cases do apply to copyright, even though there is a statutorily-mandated range for damages.  She distinguishes the procedural issue — does the defendant have reasonable notice of how much his behavior may cost him — from the substantive issue of fairness.  Even though the procedural due process issue is off the table in a statutory damages situation, the Judge holds that substantive due process concerns can still provide Constitutional grounds for reducing an award that is within the mandated range in specific circumstances.

One of the most interesting aspects of her decision is Judge Gertner’s exploration of the legislative history of the statutory damages provision.  In both comments and actions taken by Senators Orrin Hatch and Patrick Leahy, who sponsored legislation in 2000 to increase the range of statutory damages for copyright infringement, the Judge finds evidence that they never imagined private individuals engaged in non-commercial conduct would be subject to the heaviest weight of the damages.  At one point Judge Gertner writes:

Since the jury’s award in this case fell within the range set forth in section 504(c), there is identity between the damages authorized by Congress and the jury’s award.  Nevertheless, it is far from clear that Congress contemplated that a damages award as extraordinarily high as the one assessed in this case would ever be imposed on an ordinary individual engaged in file-sharing without financial gain.  Just because the jury’s award fell within the broad range of damages that Congress set for all copyright cases does not mean that members of Congress who approved the language of section 504(c) intended to sanction the eye-popping award imposed in this case.

There is even a remarkable passage in the ruling where the Judge recounts an incident that occurred during a hearing on the bill to raise the statutory damages ceiling in which Senator Leahy himself announced he was downloading a song as he spoke, and dismissed concerns that he could be subject to the very damages he was proposing.

Judge Gertner’s ruling is clear and well-reasoned.  I confess that I find her argument compelling, since the extraordinary scope of statutory damages in copyright has a chilling effect on non-commercial and even educational uses.  More clarity on when the highest level of damages should, or should not, be in play is badly needed.  I cannot say for sure that the Judge got this right, but she has provided a solid record for review and an excellent basis upon which the higher courts can consider this important issue.

When is the price right?

It is always interesting when the events of my life and the materials I am reading coincide to force my attention on a particular point.  Fate, I wonder?  Providence? Maybe just coincidence.

Anyway, yesterday a colleague sent a message to an e-mail list in which he recommended a specific journal article in the library science literature.  I was interested and set out to obtain a copy.  In this particular case, however, Duke’s impressive collection of electronic resources did not go quite far enough.  Probably because we do not have an LIS school, the specific journal in question was canceled in print several years ago, and there is no electronic database in our collection that offers full-text access.

In the course of searching, however, I did discover that I could purchase a PDF of the single article directly from the journal publisher for a cost of $30.00.  That price point, I must say, exceeded my felt need to read the article immediately, and I opted instead for inter-library loan.

This mundane little incident occurred while I had this article on “The iPad for Academics” (from the “Chronicle of Higher Education”) open on my desktop in anticipation of a closer reading.  I have become an enthusiastic iPad user over the past two months, and this article confirms my own sense that one of the best functions of the device for academics is the ability to store, organize and comfortably read PDF journal articles.

In his editorial, however, Alex Golub goes beyond simply explaining the benefits and drawbacks of the iPad for academic use; he also comments on the changes he anticipates in scholarly publishing.  He is funny and scathing about our current model for purchasing journal content — “[publishers] have pursued business models of the “purchase this enormous bundle of journals you don’t want or else our Death Star will destroy another planet of your Rebel Alliance” variety” — and he predicts that this soon may change.  Golub anticipates that the iPad and the model of the apps store will lead eventually to the “retailization” of academic content.  He speculates on the benefits if academic journals marketed their content directly to their ultimate consumers:

“What if you could log on to your ScienceDirect or JSTOR app and get a complete browsable list of your favorite journal articles, available for purchase for, say, 25 cents each?”

Golub asserts that “academics are ready for this development,” but I have to wonder if publishers are there yet.  Certainly there is a huge gap between the 25 cent price point that Golub suggests and the $30 one that I encountered, a difference of more than a hundredfold.  For the disaggregated purchase model that Golub is advocating to work, some middle ground would have to be found, but I imagine that a successful price point would need to be much closer to the low end of the spectrum than to the current norm.

The truth is, I suspect, that the publishers really do not expect, or even want, to sell many articles at $30 apiece.  That price is meant to discourage, not to sell; it is intended to shock academics into insisting that their libraries subscribe, not only to the individual title, but to the electronic bundle in which it is packaged.  The publishing industry is built on large payments for aggregated content and shows little inclination to transition to any form of disaggregation or micro-payments.  Indeed, if we could make this transition, the intermediary role of the publisher might begin to seem even less important than it does now.

In fact, I am not entirely convinced that the academy is really ready for some of the changes that disaggregation would usher in.  One consequence might be the disappearance of quite a few smaller journals.  Bundling keeps many journals with only niche markets in business.  Disaggregation would allow such journals to take advantage of “the long tail” that Internet marketing clearly supports, but it is not at all certain that all of those small, niche journals could exploit “long tail” marketing or survive on it.  Who would win and who would lose in that situation is an open question, but it is certain that there would be fairly dramatic changes in how academic content is registered, evaluated and disseminated.

I don’t want to sound like I am opposed to the idea Golub suggests for “apps based” sales of scholarly articles; it is an intriguing idea.  It might well be a better alternative for the majority of scholars than our current clunky and inefficient system.  But we should not underestimate the disruption to settled practices that significant change in the scholarly communications system will involve.  As librarians and others advocate for those changes, we need to remain aware of the potential for such disruptions, and sensitive to the varying reactions we are likely to hear from the scholars we serve.

Justice Stevens caught in the copyright crossfire

By Will Cross

About a month ago Kevin wrote about the retirement of Justice Stevens and quoted an excellent article called “Justice Stevens Invented the Internet.”  It argued that the development of the internet relied on Stevens’ opinion in the Sony Betamax case, and the standard it established that, so long as a device is “capable of substantial non-infringing uses,” the manufacturer of the device cannot be liable for infringing copies made by consumers with the device.

I could not agree more with this argument, and to Justice Stevens’ credit I would add his majority opinion in Reno v. ACLU, which welcomed the Internet under the aegis of the First Amendment and struck down a requirement that “adult” online expression must be sent exclusively to users over the age of 18, a requirement Stevens noted would be technologically impossible to comply with.  Given that technological barriers will not work, the only alternative is to simply limit expression.  Regulation, Stevens wrote, may not “reduc[e] the adult population . . . to . . . only what is fit for children.”

Taken together, these cases established the legal framework that supports the internet as an open and free medium where users are protected from liability for unforeseen bad or inappropriate uses of expression made by others.  Technology and expression must be taken on their own terms, even if third parties subvert them for bad ends.

Unfortunately, this principle of an open internet has been steadily eroded by blowback from copyright firefights, particularly one that arose even as Stevens was drafting Reno: file sharing.

After a decade of fruitless lawsuits and on the heels of another legal victory, this time against file sharing service Limewire, content owners are gearing up for yet another round of lawsuits this week.  The problem with this bellicose response to file sharing is that Justice Stevens’ open internet is increasingly caught in the crossfire.

This response to file sharing has taken a significant toll on the efficiency of the legal system and has bent the law badly out of shape.  As Eric Goldman’s blog, cited above, notes, “there is ‘normal’ copyright law and then ‘P2P file sharing’ copyright law, and it’s a mistake to think those two legal doctrines are closely related.”  Content owners have repeatedly pushed for extreme, or simply non-legal, readings of copyright and fair use, most famously in the Lenz case dealing with bogus takedown notices (and a dancing baby) and the recent Jammie Thomas case dealing with excessive statutory damages.  They are also attempting to rewrite the already draconian DMCA, an irony matched only by the sublime absurdity of content owners suing one another over pirated anti-piracy technology.

More troubling, these lawsuits have also begun to target not only users but service providers.  Content owners have been overburdening ISPs with automated discovery requests for years and have recently begun to attack ISPs directly.  They have also sought an injunction against the bandwidth provider for file sharing service The Pirate Bay, essentially arguing for fourth-party liability.

This erosion in Justice Stevens’ principle of an open internet reached a new low with a California court’s recent injunction against BitTorrent search engine IsoHunt requiring it to remove all links pointing to infringing files.  This, of course, flies in the face of Stevens’ principle about non-infringing uses and requires IsoHunt to have the same infeasible knowledge and control over users as was struck down in Reno.  If the Pirate Bay case is the equivalent of suing AT&T for an obscene caller’s ramblings, then this case is akin to requiring that Sprint disconnect anyone whose phone might be used for unlawful acts even before those acts have been identified as unlawful.  It cannot be done, and the only alternative is to shutter the technology completely or simply bend over backward to accommodate any and all measures litigious content owners may seek to employ.

This also ignores the substantial non-infringing uses of file sharing services similar to those that saved the VCR in Sony.  These uses include an increasing number of academic uses.  Kevin discussed the potential cost of attacking file sharing to higher education in a 2007 post on this topic, and since then file sharing continues to be used to transmit academic materials including textbooks and journals.  Many universities have begun to move this sharing into an authorized practice with a service called iTunes U that facilitates academic sharing.  Under pressure from content owners, however, and despite the developing market for academic file sharing, Oxford University has banned all file sharing, even that which is explicitly legal.  With ACTA’s heavy artillery on the horizon the firefight only seems to be escalating.

Again, illegal file sharing is a real problem, but the current move to eradicate anything that might be used unlawfully is in danger of reducing Justice Stevens’ open internet to “only what is fit for children.”  The war against file sharing is harming legitimate uses such as academic sharing and has an economic cost, and a cost to public safety.  It also has a cost to public knowledge, as poetically illustrated by Princeton’s demand that web sites remove the senior thesis of Justice Stevens’ replacement, Elena Kagan, in order to protect Princeton’s market to sell the public writings of the next Justice.  With technology and expression in the cross-hairs, even a total victory against illegal sharing, however unlikely, seems at best a Pyrrhic one for scholarly communications and society.

Facing the Future of Social Media

By Will Cross

As a scholarly communications librarian I am naturally excited when scholars embrace a promising new method of communication.  As such, I was delighted to see this new study published in the Chronicle of Higher Education.  Although academia is just scratching the surface of social media use, this study of almost 1,000 professors indicates that roughly 80% are already using social media and about one-third use social media to communicate directly with peers and students.

Of course this blog provides one vital (in every sense) example of such communication, but more interactive tools such as Facebook are also being used by libraries and scholars to promote academic discourse.  Even Twitter has recently been used to address scholarly issues, as with the recent coordinated protests against ACTA.  Scholars have also begun to study Twitter as a source of data for scholarly analysis similar to telephone surveys.  These nascent uses certainly do not present an imminent threat to replace traditional scholarly discussion and publication, but they do suggest the potential for new forms of communication among scholars that can act as a valuable adjunct.

As we enter this brave new world, however, we must be cautious; moving scholarly discourse into digital and commercialized spaces has costs that come along with the benefits.  The most visible example of this fact is the recent conflict over Facebook’s privacy settings.  As the Electronic Frontier Foundation’s Timeline describes, what began as a private tool for communication among friends and colleagues has essentially been transformed into a clearinghouse of personal data that is being mined and sold en masse to advertisers.  This has occurred in large part through changes in the “default” settings, well-illustrated by this graph, and it is compounded by the fact that personal information continues to be made available and mined after it is removed from a user’s page, and even when a user quits Facebook altogether.

Facebook is the most publicized offender, but more traditional “new media” present similar problems.  As ebook readers pop up on iPads and Android phones it has been revealed that ebook reading habits, personal annotations and highlights are being recorded and aggregated.  Even scholarly darling Second Life has been the subject of a recent class action lawsuit over ownership of content created within the “virtual world.”  This is similar to Facebook’s ill-fated 2009 claim to “perpetual worldwide ownership” of all content that was eventually rescinded when users revolted.

As scholarly communication, and perhaps eventually scholarly publishing, moves into these new arenas we must decide how to respond to these challenges to personal privacy and authorial ownership.  Some have argued for an open alternative to these commercial entities that must, at the end of the day, focus on their bottom line rather than social or scholarly good.  At the same time, businesses are looking to technology to control access and retain all information in social media.

Along with these technological solutions many groups are focusing on providing users with information.  The American Library Association has put out an excellent video called “Choose Privacy” that aims to educate users about these issues so that they may make informed decisions.  Business Week’s list of Ten Reasons to Delete Your Facebook Account goes a step further to argue for a specific action.

However we address these issues we must be cognizant of how social media change the norms of expression.  The Scholarly Kitchen has an excellent discussion of social media and privacy that highlights the way social media such as Facebook are transforming social norms about privacy.  Since these norms themselves influence privacy law and the Fourth Amendment’s complex and often-misunderstood “reasonable expectation” test, today’s social practices may drive tomorrow’s legal changes.

At the same time, the Scholarly Kitchen article cites a study describing the necessary tradeoff between sharing information and sacrificing some privacy.  The challenge for scholars and librarians, I would argue, is to find a balance that permits the appropriate sharing of information but retains the privacy and ownership values necessary for intellectual exploration, reflection and creation.  As is so often the case with new modes of expression, we must be careful to import the social, cultural and legal norms of scholarship that we need while leaving room for new opportunities to flourish.

Act 2 of the ACTA controversy

When I last wrote about the Anti-Counterfeiting Trade Agreement, or ACTA, it was primarily to complain about the secrecy in which the negotiations were taking place.  Earlier this month, however, the US Trade Representative (who had opposed release) finally caved in to pressure from home and abroad and agreed to the release of a draft of the proposed agreement.  Much of the released text is in square brackets, indicating that full agreement has not been reached, and there are several points where different options on a particular matter are outlined.  Nevertheless, enough is now clear about ACTA to be quite sure that the complaints raised before the release were fully justified.  Now the issue is not simply that we do not know what is in ACTA; it is that what is in ACTA is a series of very bad ideas.

One of the most reliable guides to ACTA continues to be Canadian law professor Michael Geist, who discusses some of the provisions and the problems with ACTA in this blog post and in a video which can be found here.  Geist points out very effectively that, in spite of assurances, ACTA is not just about enforcement of existing IP law but would mandate substantive changes in national IP laws.  Also, as he explains, it is not just about commercial infringement, regardless of what we have been told.  More about that in a minute.

One of the frequent claims about ACTA is that it would mandate a “three strikes” regime that would require ISPs to “terminate” subscribers after repeated accusations from the content industries that that user had committed infringement.  Such termination would occur without judicial process.  Defenders of ACTA have insisted that these claims are not true, and now we can see what they meant.  The released text does not require termination, but it does offer a safe harbor for ISPs, such as we have in the US, only if the ISP implements security measures.  The only example given of an acceptable security measure, of course, is a three strikes termination procedure.

To organize a summary of the issues raised by the draft ACTA text, I want to look at two groups of problems, one procedural and one substantive.

Procedurally, ACTA is a blatant attempt to remake IP law without having to involve either the World Intellectual Property Organization (WIPO) or the United States Congress.  It appears that WIPO does not please the IP industries because of its transparency and because of the attention it pays to the needs of developing nations, for whom high and impenetrable IP barriers are not conducive to growth.  These industries pull the strings of the U.S. Trade Representative, and an international agreement is born that is negotiated in secret and would set up an oversight structure independent of WIPO.  As two law professors point out in this editorial from the Washington Post, the agreement, with its substantive changes in national copyright law, would also seem to violate the US Constitution if it is approved here as an executive agreement without the involvement of Congress.

It is constitutional concerns that also frame my substantive objections to ACTA, since many of the things it would require signatories to enact in their national laws seem to conflict with the Fourth, Fifth and Sixth Amendments to the U.S. Constitution.  By agreeing to ACTA, the U.S. would derogate from due process and substantive civil rights in regard to this one area of law.  The best analysis of these problems can be found in a two-part post by Margot Kaminski, here and here on the Balkinization blog, but I will offer a brief catalog here.

In the first place, the three strikes termination provision discussed above would result in citizens being disconnected from the Internet on the basis of mere accusation.  This is a significant reduction in the usual standard of evidence for a claim of infringement.  And ACTA has provisions that would greatly increase the level of remedies available; termination would only be the beginning.  When there is a court proceeding, the damages could be based on any “reasonable” valuation suggested by the rights holder.

In addition, rights holders could seek injunctions without involving the other party; so-called ex parte injunctions would be available.  Finally, there would be several provisions allowing seizures of allegedly infringing property, including authorization for border agents to seize material at the request of rights holders.  This provision would make the U.S. Border Patrol into a sort of private police force working for the content industries, but at taxpayer expense.

The most troubling provision, I think, is where ACTA would require the U.S., and other signatories, to increase the criminal penalties for willful infringement.  The U.S. already has such penalties, but the ACTA standard would expand the definition of “willful” to explicitly include private, non-commercial copying if done on a large scale.  And ACTA says that criminal penalties “shall include” the possibility of prison.  Not satisfied with million dollar judgments against private citizens who share unauthorized movie and music files, the content industries now want to send them to prison.

Many of the enforcement provisions of ACTA would substantively alter U.S. law and would provide a heavy advantage to plaintiffs, one that is not available to those bringing other types of claims.  We are being asked to change our law in a way highly advantageous to one special interest based on an agreement negotiated in secret and without any of the legislative checks and balances that would normally be in play. If the office of the U.S. Trade Representative thought that releasing this draft text would put an end to controversy, they were badly mistaken.

Pre-publication update:  After this post was written, the Library Copyright Alliance released this analysis of ACTA by Jonathan Band.  It is well worth reading for those who would like a sustained analysis of the continuing problems with ACTA.

A lens on the digital challenge

On March 19th a fascinating symposium was held in Chapel Hill, NC in honor of Laura (Lolly) Gasaway.  Lolly, for many years Professor & Law Librarian at UNC Chapel Hill and now Associate Dean for Academic Affairs, is a prolific scholar and has had a tremendous impact on how libraries understand and work with copyright law.  She is also a gracious and generous friend; meeting her has been one of the best parts of coming to work in the Research Triangle, even if we are on opposite sides of the great basketball divide.  The symposium in her honor was a gallery of prominent and interesting speakers who testified to the full range of Lolly’s intellectual and practical influence.

I was particularly interested in the remarks made by Professor Llewellyn Gibbons of the University of Toledo College of Law, who talked about the Visual Artists Rights Act (VARA) and its application in the digital environment.

VARA was adopted in 1990 and adds a section to the Copyright Act (section 106A) that carves out a special right for visual artists.  An artist who creates a covered work under VARA gets a truncated version of the “moral rights” that are recognized in most other countries — a right of attribution and a right to ensure the integrity of the art work.  VARA applies only to a narrow category of works — paintings, drawings, prints, sculptures, or still photographic images produced for exhibition only, and existing in single copies or in limited editions of 200 or fewer copies, signed by the artist.  It is interesting to note that this is the only recognition of these moral rights in U.S. law, in spite of our commitment when we joined the Berne Convention to protect such rights for all rights holders.

Professor Gibbons raised the issue of how well or poorly VARA might apply to an artist who works in digital media.  The real problem, he pointed out, is the limitation to works that exist in 200 or fewer copies.  How do we talk about a limited number of copies in an environment that promiscuously makes copies every time material is displayed, downloaded or transmitted?  This question is remarkably similar to one that the Section 108 Study Group, co-chaired by Lolly, grappled with regarding the application in the digital realm of the limit on the number of preservation and interlibrary loan copies that a library can make.

Suppose an artist creates a digital work and displays it on her website.  That, we could argue, is a single copy.  But people will download that work and, without some control, soon there will be more than 200 copies.  And even that way of stating the problem assumes that the ephemeral copies created in a computer’s memory whenever a site is visited do not count (they are not copyright infringements because of section 117 of the Copyright Act, but that does not determine whether they would count toward the restriction in VARA).  Professor Gibbons discussed the possibility that a “download and delete” scheme, presumably based on coding that would prevent the 201st download and would prevent a downloaded copy from proliferating (similar to the DRM used by iTunes?), might preserve VARA rights for such an artist.
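
Purely as an illustration of how such a scheme might be coded (this is my own sketch, not Professor Gibbons’s proposal, and it ignores the harder problem of controlling copies already in the wild), the server-side half could be as simple as a counter that refuses the 201st copy:

```python
# A minimal sketch of the "download and delete" idea discussed above:
# a server-side counter that refuses to serve the 201st copy, keeping a
# digital work within VARA's 200-copy limit.  Hypothetical code; deleting
# or locking down copies already distributed would require client-side DRM.

VARA_LIMIT = 200

class LimitedEditionWork:
    def __init__(self, title: str, limit: int = VARA_LIMIT):
        self.title = title
        self.limit = limit
        self.copies_issued = 0

    def download(self) -> bytes:
        if self.copies_issued >= self.limit:
            raise PermissionError(
                f"'{self.title}': edition limit of {self.limit} copies reached"
            )
        self.copies_issued += 1
        return b"...work bytes..."  # placeholder for the actual file

work = LimitedEditionWork("Untitled (digital)")
for _ in range(200):
    work.download()        # copies 1-200 succeed
# work.download()          # copy 201 would raise PermissionError
```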

I am less than optimistic that the scheme Gibbons suggests could really work, but I look forward to reading his paper when the proceedings of the symposium are published in the Journal of Law & Technology.  In the meantime, it seems very obvious to me that the idea of digital art is completely outside of what Congress was imagining when it drafted VARA 20 years ago.  And that, perhaps, is the most important point.  This attempt to imagine how VARA could apply to digital art clearly demonstrates the inability of copyright law, even with relatively recent revisions, to keep up with changing technology.  It highlights the near impossibility of creating a law flexible enough to respond to new technologies.  The real digital challenge is to create a copyright law that is permeable enough to provide “escape hatches” through which new technological possibilities can slide so that creativity is not inhibited for the long periods of time it takes for law to catch up with human ingenuity.

Smoke got in my eyes

It has been widely reported that UCLA has decided to re-start its program of streaming digitized videos for course-related viewing.  They do so based on a set of principles adopted by the faculty, which can be read here.

Readers of this blog will recall that I have previously expressed ambivalence about whether and how this practice can be justified under our current copyright law.  I expressed that ambivalence in this posting, and many comments flowed in, most from experts for whose opinions I have great respect.  Several were more sanguine than I about the legality of streamed digital video, while some were certain that no justification could be found.

I still had not made up my mind when I read about UCLA’s decision to resume their activities.  But yesterday’s article in Inside Higher Ed, which includes statements from the lawyer for the trade group that originally threatened UCLA, has really helped me clarify the issues.  It seems to me that there are two glaring and obvious misstatements in AIME’s denunciations of UCLA, and that these misstatements actually point out why the practice is justifiable.

First, AIME’s lawyer insists that UCLA will stream videos to “an unlimited number of students.”  But a cursory reading of the principles adopted at UCLA shows that the program will limit access to each video only to students enrolled in a class for which that film is required content.  Surely that is a limit on the number of students who will see the film using UCLA’s streamed service; only those students who can authenticate into the course management site for the particular course will be able to view each film.  In other words, the audience is limited in exactly the same way that it would be if the film were shown in a classroom to the assembled students.
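
To put the point in concrete terms, the access limit amounts to something like the following check; a hypothetical sketch with invented course, user and film names, not UCLA’s actual system:

```python
# Hypothetical sketch of the access limit described above: a streaming
# request succeeds only for an authenticated user enrolled in a course
# for which the film is assigned.  All names are illustrative only.

enrollments = {"film101": {"astudent", "bstudent"}}   # course -> enrolled users
assigned_films = {"film101": {"rashomon"}}            # course -> required films

def may_stream(user: str, course: str, film: str) -> bool:
    """True only if the user is enrolled in the course and the film is assigned to it."""
    return (user in enrollments.get(course, set())
            and film in assigned_films.get(course, set()))

assert may_stream("astudent", "film101", "rashomon")      # enrolled: allowed
assert not may_stream("outsider", "film101", "rashomon")  # not enrolled: denied
```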

Second, the remarks from AIME stress the fact that UCLA will buy only a single copy of each film, as if that is different from prior practice.  But of course, most universities buy only one copy of most DVDs, which are then shown in class to a group of students or put on reserve so that students can come in and watch the film in a library or media lab one at a time.  During the time UCLA had suspended its program, this was the practice it followed.  What it did not do was buy large numbers of each film and hand them out to individual students, which the AIME statement seems to suggest is the only alternative.

This, of course, is absurd, and it is disingenuous.  In its negotiations with UCLA, I am very sure that AIME did not propose that there was some number of multiple DVDs which, if purchased, would render the practice of streaming that film for student viewing fair.  If there is such a number, I suggest that AIME should tell us what it is; I am sure many schools would prefer to buy that number of copies in order to provide the streaming services our faculty and students want while still not exposing themselves to liability.  But multiple sales are not the issue here.

What AIME is seeking, naturally, is repeated licensing fees.  They are happy to have schools buy only one copy and stream it if, for each film, a license fee greatly in excess of the cost of the DVD itself is paid, and paid each semester.  The film companies do not want to settle for slightly increased sales of DVDs in this matter, they want to turn our campus intranets into pay-per-use jukeboxes that will provide a new and virtually unlimited income stream.

In the past, universities bought single copies of films and showed them to groups gathered together in a classroom or to students on a one-to-one basis.  The film companies may have grumbled about the doctrine of first sale and the face-to-face teaching exception that permitted this, but there was little they could do.  Now those film companies hope to create new revenue by forcing us to pay to show the same single copy to the same students over a closed network.  In short, they want large fees for space-shifting.

The fact that AIME’s attorney uses this phrasing, with its two statements that misdirect the reader from the real issue, suggests that he realizes how strong the fair use argument, based on space and time shifting to accomplish a purpose that is specifically authorized by the copyright act, really is.  Rhetoric about single copies and unlimited students is a smokescreen, and when the smoke is cleared away it is easier than it has ever been for me to see the powerful fair use argument in clear focus.

Should the court consider the new Google patent?

On Tuesday Google was formally granted a patent on software to selectively control access to content, based on access rules and geographical locations.  There is a story on Ars Technica here that explains the patent and its potential application very nicely.  Basically, this is a technique for filtering what users can see based on where they are in the world.  Such filtering is not new; Yahoo! famously lost a court case in France and had to begin controlling access to its auction sites to prevent Nazi memorabilia from being sold in that country.  Different international laws about all kinds of topics can force Internet services to distinguish what can and cannot be seen in different parts of the world.
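
Conceptually, the technique amounts to something like the sketch below; the rules, names and addresses are invented for illustration, and the actual patent claims are of course far more elaborate:

```python
# Toy sketch of geolocation-based content filtering of the kind the patent
# describes: map a request's origin to a country, then apply per-country
# access rules.  All rules and addresses here are invented.

access_rules = {
    "nazi_memorabilia_auction": {"blocked_in": {"FR", "DE"}},
    "scanned_book_us_only":     {"allowed_in": {"US", "GB", "CA"}},
}

def country_of(ip_address: str) -> str:
    """Stand-in for a real IP-geolocation lookup."""
    return {"203.0.113.5": "FR", "198.51.100.7": "US"}.get(ip_address, "??")

def may_view(content_id: str, ip_address: str) -> bool:
    rule = access_rules.get(content_id, {})
    country = country_of(ip_address)
    if "blocked_in" in rule:
        return country not in rule["blocked_in"]
    if "allowed_in" in rule:
        return country in rule["allowed_in"]
    return True  # no rule: visible everywhere

assert not may_view("nazi_memorabilia_auction", "203.0.113.5")  # French user blocked
assert may_view("scanned_book_us_only", "198.51.100.7")         # US user allowed
```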

This story is interesting, however, for at least three reasons, the last one very relevant to the fairness hearing being held tomorrow in regard to the proposed settlement of the copyright lawsuit over the Google Books project.

The first thing that is interesting in this story is the fact that the patent application for this software was filed in September of 2004.  The five and a half year gap between initial filing and final approval is not necessarily unusual, but it gives me a chance to remind readers how long and costly the patent process is.  This is a huge difference between copyright and patents, and indicates why copyright is usually so much more important to higher education.  Every scholar, administrator, librarian and student owns copyrights, but relatively few can afford the time and money to obtain a patent, even if they have an invention that meets the much higher standards for patentability.

Which brings me to the second point: should software of this type even be eligible for patent protection?  Software patents were controversial for a long time because they were alleged to represent “only” abstract ideas – algorithms based on zeros and ones.  And until at least the mid-1990s, the Patent Office and the courts would not recognize patents for business methods.  All of that seemed to be resolved in favor of patenting business method software, but a case currently before the Supreme Court, called Bilski v. Kappos, has the potential to alter the rules on this issue.

But it is the impact of this patent on the Google Books settlement that really interests me.  Should the court considering the settlement tomorrow take notice of this patent?  If it did, what impact would it have?  Given the objections to the settlement from international copyright holders and the promises Google has made to exclude works from outside the US, UK and Canada, the need for some filtering system seems obvious.  So from one point of view, this patent is indicative of Google’s good faith efforts to do what it has promised to do.  Nevertheless, there are some less charitable interpretations that could be applied.

For one thing, this software could enable censorship of the type Google first practiced, then rejected, in China.  We should never forget that the origin of copyright was as a tool for censorship; anything that automates copyright enforcement runs the risk of facilitating repression.

Of most interest to the GB settlement, however, is the question of whether this patent ratchets up the worry about a Google monopoly over digital books.  Lots of comment has been made about the possibility that Google will have a monopoly over digital access to orphan works.  It is unlikely that any other party will be able to get the same kind of compulsory license for all orphan works that Google stands to gain in this class action settlement.  Now we must face the possibility that, even if a competitor could get such a license, in order to effectuate the necessary access restrictions it would have to seek a license from Google for this patented process.

The Ars Technica article points out that Google has promised not to use its patent portfolio for offensive purposes, that is, to limit competition, and that its record so far suggests that it is serious about that promise.  Nevertheless, courts need to look beyond promises to “do no evil” and think about long-term consequences.  As it considers whether the proposed settlement will give Google too much market power, it might be well to consider this patent on geographical filtering software as one more reason to keep a sharp eye on the project as it proceeds.

Can we stream digital video?

I had not even had a chance to open my daily e-mail from Inside Higher Ed yesterday before four colleagues had sent me a link to this story about an educational video trade association forcing UCLA to halt its practice of streaming digitized video on course Web sites.  Several suggested that I would surely want to blog about the story, and they were, of course, correct.

The story contains some chilling rhetoric from the representative of the Association for Information and Media Equipment – intentional, I am sure – about their plans to investigate and threaten other colleges and universities that are doing the same thing.  Many schools, of course, have explored these options because the pressure from faculty and students to provide greater digital access to our film collections is intense.  Some have concluded that the legal risk is too great and are resisting that pressure, at least for now.  Others have tried various justifications, clearly hoping to “fly under the radar.”  This story will certainly strike fear into many, and will give more ammunition to faculty members who complain that copyright law prevents them from teaching effectively in the media-saturated world of 21st Century America.

In response to the story, I want to suggest here what the major alternatives for legal streaming of digital video might be and the problems inherent in each alternative.  I know from conversations with colleagues that each of these strategies is being tried somewhere.

The first, and most obvious, possibility is to rely on the TEACH Act, which amended one of the Section 110 exceptions to the public performance right in copyright in order to allow “transmissions” of certain performances for distance education.  TEACH (or Section 110(2)) has a lot of specific requirements that must be met (see this TEACH Act toolkit from NC State University), although many of those requirements would appear to be satisfied when digital video is streamed through a closed-access course management system.  The real problem with relying on TEACH is the portion limits it imposes; it permits transmission of entire “non-dramatic musical and literary works” and “reasonable and limited portions” of other audio-visual works.  This second provision seems to apply to films and to disallow the transmission of entire films.  Some institutions would argue, I think, that an entire film is often the only “reasonable” portion to use for a particular teaching purpose, but that argument ignores the word “limited.”  The point about a reasonable portion is well-taken, in my opinion, but only proves that TEACH was never an adequate solution to this problem.

Other institutions could assert fair use as the justification for streaming digital video.  These schools would point out, I imagine, that courts have often held that the use of an entire work can be a fair use, based on the overall balancing of the fair use factors and the totality of circumstances.  The trade group clearly disagrees, although the comments about fair use in the article are not really on target.  It is correct that password-protection alone is not enough to guarantee fair use, but it does strengthen the university’s position in the complete analysis of the factors.  Simply to say that a password does not make something fair use is as incomplete as asserting that an educational purpose always means a use is fair; both assertions miss the need for a complete examination and balancing of the factors.

The problem I see for a fair use justification is that courts would be likely, in my opinion, to look at the portion limits in the TEACH Act and say that that legislation was Congress’ opportunity to provide guidance on educational transmissions, and it selected a limited standard.  A court that took that approach would be unlikely to let a school “shoehorn” the entire film in under fair use, simply in order to avoid the inconvenient limits imposed by TEACH.  But I have to add that there is no agreement on this point — even the intern in my office this year, who is also a lawyer, disagrees with me — and some universities clearly have decided to rely on fair use to stream entire videos.

Perhaps the most interesting argument, however, is the one that UCLA seems to be making, according to the article, based on the performance exception that precedes TEACH in Section 110 and permits performances in face-to-face teaching situations.  Section 110(1) is clearly meant to have some “give” in it, since it refers to “teaching situations” rather than classes and to “classroom[s] or similar place[s] devoted to instruction [emphasis mine].”  UCLA seems to want to stretch these terms to include the course Web site as part of the face-to-face instruction.  I know of other institutions, less bold than UCLA, perhaps, but still unwilling to accept unworkable limitations, that read 110(1) to permit streaming to designated sites like language labs, but not to course sites that can be accessed from anywhere.  These efforts to clarify the fuzzy boundaries of 110(1) are fascinating and seem to invite a court to step in and clarify; it is just that no one wants to be the defendant in that case if they can help it.  While I admit to lingering doubts, this last approach seems to me to be the most surprising, yet most promising, of the three.

There is still another obstacle, however, posed by the anti-circumvention rules added to copyright law by the Digital Millennium Copyright Act.  This provision prevents the circumvention of technological protection measures even, in some cases, when the purpose of the use would be permitted.  So ripping a DVD protected with CSS (Content Scramble System) may violate these rules even if it is otherwise legal.  The DMCA specifically stated that these rules should not inhibit fair use, but courts have been inconsistent about that provision in circumvention cases.

Also, the Library of Congress is charged with declaring categories of exceptions to anti-circumvention in a rulemaking process every three years.  New rules, which are desperately needed, were due in October 2009 but the Library punted, extending the old rules while giving itself unlimited time to try and craft new ones.  The situation is getting worse on university campuses and we have to ask when the Library of Congress is going to clarify the situation.

In the end, I agree with Tracy Mitrano from Cornell, quoted in the article, that this is one more place where copyright law is not up to the technological challenges posed in higher education today.  The need for revision “before [the law] does any more damage” is clear.  We can only hope that the educational media industry will eventually come to understand this (they are supposed to be educational, after all) and move away from threats and towards real dialogue.

Let the user beware

If the box that says “I Accept” (regarding a website’s terms of use) really is the most dangerous place on the web, as I wrote several weeks ago, it is getting even riskier out there.  For a long time, a relatively safe rule-of-thumb has been that EULAs (end user license agreements) that forced you to see their terms were enforceable, while those that merely offered you a chance to see them by clicking on a link you did not have to follow were not.  That has never been a hard-and-fast rule, and its utility has been seriously eroded by several court cases in recent years.
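
The distinction is easy to see in schematic form.  In the sketch below (purely illustrative, not any real site’s code), the “clickwrap” flow cannot proceed without affirmative assent, while the “browsewrap” flow merely displays a link to the terms:

```python
# Schematic contrast between "clickwrap" (the user must affirmatively
# accept the terms before proceeding) and "browsewrap" (a terms link is
# merely displayed, never enforced).  Purely illustrative.

def clickwrap_checkout(accepted_terms: bool) -> str:
    if not accepted_terms:
        raise ValueError("Cannot proceed: terms of use not accepted")
    return "order placed"

def browsewrap_checkout() -> str:
    print("Terms of use: https://example.com/terms")  # shown, never enforced
    return "order placed"

browsewrap_checkout()        # always succeeds, terms read or not
clickwrap_checkout(True)     # succeeds only after explicit assent
# clickwrap_checkout(False)  # would raise ValueError
```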

The most recent case involved a challenge to a “choice of forum” clause contained in the EULA for a site called ServiceMagic.  A lawsuit by Victoria Major was dismissed because it was filed in Missouri while the EULA says that all lawsuits must be filed in Colorado.  The Missouri Court of Appeals upheld the dismissal, even though Ms. Major never read the EULA nor was forced to see it and click “I Accept.”  The court held, as have several others in recent years, that the link was placed prominently enough for the terms to be enforced, even though there was no technological requirement to actually click through the license.  Some details about the case can be found here, on Ars Technica.

There was an uproar several years ago about a proposed change in the Uniform Commercial Code called UCITA (the Uniform Computer Information Transactions Act) that would have made it too easy, its opponents felt, for consumers to commit themselves to licensing terms about which they had no knowledge and no chance to negotiate.  UCITA was adopted only in Maryland and Virginia, and it has since been withdrawn by its sponsors.  But the goal of UCITA, to speed up Internet commerce by simplifying licensing even at the cost of consumer protection, is being accomplished by courts around the country anyway.  This latest case is one in a line of similar cases that make it even more imperative that users look for and read licensing agreements, even if the site itself does not force them to do so.

There are a couple of caveats to this trend.  First, it is not universal.  A similar case against online retailer Overstock, reported here, went the other way, apparently because the link to the terms was not sufficiently prominent.  And in the ServiceMagic case one judge on the appeals court wrote that she would only uphold reasonable and expected terms like choice of forum, not terms that were “unconscionable.”  So perhaps we can expect a more active review of licensing terms when the license is merely a “clickwrap” or “browsewrap.”  Nevertheless, the most important caveat raised by these cases may also be the oldest — “caveat emptor.”