
The textbook world is getting flat

Earlier this month I was able, thanks to the organizing efforts of a colleague, to participate in a phone call with Jeff Shelstad, one of the founders of Flat World Knowledge.  I wrote about Flat World some time ago, but I want to take the opportunity (before it fades in my mind) to describe their business in more detail, provide an update about their success and recommend their business model as a genuinely transformative opportunity for higher education.

Flat World Knowledge is essentially a publisher, founded in 2007 and currently focused on the market for “big” textbooks, the ones that cost students an arm and a leg and are issued in a new edition every other year to undercut second-hand resales.  Flat World, in contrast, publishes its textbooks online, entirely open and free.  The books are licensed under a Creative Commons license.  Like other publishers, Flat World organizes peer review for the books it publishes and provides copy editing and design services.  So two questions come to mind immediately: how do they make money, and what is in it for faculty who might write or adopt a Flat World textbook?  It is the answers to these questions that really make Flat World such an exciting venture.

First, although students can get free access to their online textbooks (through a course-specific URL; more about this in a minute), they also can buy the textbook in several different formats (print, audio and self-print PDF, according to the “How it works” page).  According to Mr. Shelstad, about 50% of students currently opt to purchase a book that has been adopted for their course (at $29.95 for a print-on-demand copy), and Flat World plans to increase that percentage as it adds new or improved formats.  Shelstad mentioned formats for hand-held devices, for example, and it seems exciting to me just to know that a textbook publisher is thinking this way.

For faculty who publish textbooks with Flat World, there is an opportunity to earn royalties on every dollar that is spent on their book, as well as the chance to continually update and correct the text.  These authors have a level of continuing control over their work that is unprecedented in the print world.

A unique level of control is also the principal advantage that faculty who adopt a Flat World textbook gain, since they are able to adapt a book to the specific needs of a course they are teaching.  Currently, adopting faculty can move sections of a book around with up / down / delete controls and annotate any portion.  Tools to insert materials and to edit at the word level are in development.  Once a faculty member has adapted and adopted a specific textbook, that version is saved and a course-specific URL is created so that students in the class will see exactly the book that has been created, in a collaborative way, for their use.

I was especially interested in how these two different control points — that of the author and that of the adopting instructor — might relate.  I was delighted to hear that the adapted version will be separate from the original, using this system of unique URLs, and that all changes in the adapted texts will be indicated.  This seems to me to be a very sensible way to preserve the integrity of the original authored work while still permitting adaptation for a particular need.
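
Since Flat World has not, as far as I know, published the details of its implementation, what follows is only a rough sketch (in Python, with invented names and an example.com URL) of the kind of data model the unique-URL system implies: the adapted edition is an independent copy, every instructor edit is logged so it can be flagged to readers, and the author’s original is never touched.

```python
import uuid
from dataclasses import dataclass

@dataclass
class Textbook:
    """The canonical, author-controlled text: an ordered list of sections."""
    title: str
    sections: list

class AdaptedEdition:
    """A course-specific copy; the author's original is never modified."""
    def __init__(self, book: Textbook):
        self.sections = list(book.sections)   # independent copy of the section order
        self.changes = []                     # every edit is logged so it can be flagged
        self.url = f"https://example.com/course/{uuid.uuid4().hex}"

    def move(self, i: int, j: int):
        section = self.sections.pop(i)
        self.sections.insert(j, section)
        self.changes.append(f"moved '{section}' to position {j}")

    def delete(self, i: int):
        self.changes.append(f"deleted '{self.sections.pop(i)}'")

    def annotate(self, i: int, note: str):
        self.changes.append(f"annotated '{self.sections[i]}': {note}")

book = Textbook("Principles of Economics", ["Supply", "Demand", "Elasticity"])
course_copy = AdaptedEdition(book)
course_copy.move(2, 0)             # instructor reorders for the syllabus
course_copy.annotate(0, "Read this chapter first; see problem set 1.")
print(course_copy.url)             # students in the class see exactly this version
print(book.sections)               # the original section order is unchanged
```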

Flat World is showing signs of being a genuinely transformative model for higher education.  They currently have 11 textbooks in their catalog, with 10 more to be added in the coming months.  Even with that relatively modest catalog, there are already over 500 course adoptions and more than 40,000 students using Flat World books.  The staff at Flat World is working on new ways to adapt the books, such as pulling in images, PowerPoint slides and the like.  It was heartening to hear that one reason the roll-out of these features is slow is that Flat World does not want to compromise the high standard it has set for the design of its books.

Overall this is an exciting model that helps us look forward to the genuinely new ways technology can facilitate classroom and online education.  Just after our phone conversation,  this new announcement came out about Flat World’s partnership with Bookshare that will make textbooks available to people with print disabilities, highlighting yet another possibility for this adaptive technology.

Dissing incentives

This New York Times article about “Legal Battles over E-Book Rights to Older Books” caught my eye both because of a usage I dislike in its title and because of its importance in the continuing discussion of how much incentive copyright really provides for writers and other creators.  The article focuses on a dispute between the family of William Styron and Random House, his publisher, over who has the right to profit from e-book sales of Styron’s work.

I have to say first that I dislike the reference to “e-book” rights because there is no distinct right to publish an e-book.  There are specific exclusive rights within copyright to publicly perform a work and to prepare a derivative work, both of which are important in allowing the creation and distribution of an e-book based on a published novel.  But “e-book rights” is a misnomer; at best it is a short-hand reference to a set of the enumerated rights in copyright that are involved in e-books.  In the contract dispute between Random House and Styron’s estate, the issue will be the scope of the assignment of these various exclusive rights, not the simple question of who got the “e-book” right since, as the family points out, e-books were unheard of when Styron published his novels and the profoundly moving “Darkness Visible.”

This brief item explains that the actual issue in this case is what “in book form” means in the publication contracts Styron agreed to.

The larger significance of this issue involves e-book versions of much of the great literature of the 20th century.  The length of copyright protection imposed on this cultural heritage is usually justified as providing an incentive for writers to write, artists to paint and filmmakers to “shoot.”  If, as Random House claims in the Styron case, however, the right to exploit new technologies as they develop is encompassed in the original publication contract, this incentive seems even more tenuous than usual.  Even if we assume that Styron was more likely to write because he knew his children and grandchildren could continue to profit from his books than he would have been if the copyright term were shorter, Random House’s claim that his original publication contract transferred the right to profit from new forms of distribution seems to reduce that putative incentive.  Presumably the family will have less control over the e-book created by Random House than over one for which they contract directly (as they want to do), and it seems quite likely that they will profit less, if at all, from a version sold by Random House.

If copyright is really an author’s right, as publishing intermediaries like to claim when they want Congress to enact stronger protections, should not the right to decide when and how to exploit new opportunities, not considered at the time of an original transfer, remain with the author or the author’s family?  In short, publication contracts in copyright should be read narrowly to preserve the incentive for authors and others to create, which is the alleged purpose of the law.

This recent article by Professor Rebecca Tushnet, “Economies of Desire: Fair Use and Marketplace Assumptions,” considers the incentive structure of copyright in some detail, based on the recognition that many creators create out of desire, or even compulsion, rather than a direct expectation of the money to be made for them or their heirs.  She argues persuasively that the economic incentives that the copyright monopoly creates “largely bypass[es] a persuasive account of creativity.”  Her conclusion that “Copyright law, and general cultural policy, could do more to direct material rewards to authors if we truly believe that monetary incentives will spur creativity” seems to directly address the Styron e-book dispute.  If we are serious about copyright incentives, she suggests, “we need to keep a close eye on which entities are benefiting materially from all these new works.”  This is precisely the case with e-books and the literature of the 20th century; disputes like this raise real questions about how genuine our commitment to copyright as an incentive for creativity really is.

The most dangerous place on the Web

The most dangerous place on the Internet may well be inside that little button that says “I Agree.”  The opportunity to bind oneself to a contract almost unconsciously abounds on the Internet, and the immediacy of the Web encourages click-through agreements that are almost never read and, when they are read, are nearly impossible to understand.

The Electronic Frontier Foundation has provided a nice primer on online agreements in this document called “The Clicks That Bind: Ways Users ‘Agree’ to Online Terms of Service.”  It is a long blog post or, in PDF, a three-page document that should be read by everyone who uses the Internet.  It helpfully distinguishes the major types of online agreements and the relative likelihood that the different forms result in binding contracts.  The document, by EFF’s Ed Bayley, makes two programmatic assertions, both of which seem unarguable.

First, users should have to take an affirmative step to agree to terms of service.  Put another way, terms of service that are there if you want to look at them but do not require even that thoughtless click should not be enforceable.

Second, Bayley asserts that terms of service from online service providers should be publicly available, not just presented as a pop-up as one enters the site for the first time.  This would allow public discussion, which is important if people are to get past the habit of clicking “I Accept” without reflection and come to some awareness of what they are agreeing to.  Even when TOS are publicly available, they are not very easy to understand.  Over a year ago, I printed out the TOS for Flickr just as an example and found that, at that time, they ran to over 12 pages of printed legalese.  A repeat of that experiment shows that they are shorter now: “only” 10 pages.  And to Flickr’s credit, they are available to anyone who wants to see in advance what they are getting into.

Bayley’s short essay is vital information, and the suggestions he makes seem like minimum steps that must be observed if courts are really going to hold individual users to the extensive and complex clauses found in these online terms of service.

Open access for hardware?

Jon Kuniholm may not have been an obvious choice for an Open Access Week speaker at Duke, but as the final participant in a panel on global access to health information yesterday, he made a profound impression.  The panel, called “Open Access, Local Action,” was very interesting to the 30 or so staff, students and parents who gathered to listen (it was also listed as an event for Parents’ Weekend), but I want to focus on Jon’s presentation in this post because what he had to say was mostly new to me.

Jon is a Ph.D. candidate in Biomedical Engineering at Duke and a U.S. Marine Captain (Ret.). He is also an amputee, having lost his right arm in Iraq four years ago, and is thus a researcher with a personal interest in prosthetics.  He talked to us about why the money the government spends on prosthetics R&D does not produce the kind of progress it ought to: the work is poorly coordinated, and the market is so small that there is little incentive to move from workbench to marketplace once the research money is spent.  Jon offered potential solutions for this lack of progress that addressed both his very specific research and the broader problem of intellectual property restrictions.

In the very specific area of his own work on arm prosthetics, Jon envisions a remarkable collaboration, made possible by open hardware.  He would like to make the hardware being developed to improve neural control of prosthetic arms open and offer it to researchers in the video game industry.  His hope is that work undertaken to create new video game controllers (an area with a much larger market and much more money to spend) will also speed the development of better artificial arms, which has been largely stalled for quite a few years.

This is a remarkable vision, I think, of a win-win collaboration that would be founded on open sharing of technological development.  Openness, as some have been pointing out for quite a while, can breathe new vitality into innovation, in spite of claims from some industries that free access can only stifle and discourage it.  More information about the video controller project can be found at http://openprosthetics.wikispot.org/Open_Myoelectric_Signal_Processor

Jon Kuniholm does not stop with this vision of collaboration, however.  He has a concrete and well-informed notion of the mechanisms needed to bring it about.  I spoke with him briefly before the event about the intellectual property issues involved with this idea.  He pointed out that hardware can be shared openly from its inception because patent protection, unlike copyright, is not automatic and is, in fact, quite costly to obtain.  Where copyright does cover a work (plans and specifications, for example), Jon advocates using the open source GPL, or General Public License.  The problem with open hardware, however, would come if another party saw profit in the hardware and filed its own patent application.  Since a patent restricts the use of an idea, this would halt all other development based on the same hardware unless license fees were paid.  Since patents under US law are granted to the first to invent (rather than the first to file a patent application), it would be possible, but very expensive, to fight such follow-on patents.  Jon’s suggestion here is that the open hardware movement create mechanisms to publish what is called “prior art” (the science that leads up to new developments) in ways that will be very obvious to patent examiners.  The hope is that the ready availability of prior art will prevent patents from being issued that could shut down the kind of collaborative work based on open hardware that Jon and many others both need and foster.

Technological neutrality as a rhetorical strategy

There has been some really good attention paid recently to the issue of how our linguistic choices really frame the debates about copyright law and, often, prejudge them.  In his new book, William Patry (who will be speaking at Duke Law School on October 22) devotes quite a bit of space to analyzing the language of moral panics and the metaphors employed by the copyright industries to skew an honest debate.  In a June 2009 article called “Why Lakoff Still Matters: Framing the Debate on Copyright Law and Digital Publishing,” Diane Gurman makes a similar plea for those who oppose the ever-expanding reach of copyright to create their own frames that would balance the rhetoric of theft and piracy.

Although it is often easy to spot the linguistic excess coming from the copyright industries, a recent letter to the Senate Judiciary Committee from the National Music Publishers Association took a more subtle, and even more dangerous, approach. There is a CNET news story about this letter here.  The theme of the letter, that copyright law should be technologically neutral, seems benign enough, but the work that the music publishing industry tries to get that rhetoric to do is very troubling.  The thrust of this “technological neutrality” appeal is a claim that music publishers should collect a fee for a public performance of a musical composition every time there is a digital download of a piece of music.

To call this grasp at a wholly new income stream “technological neutrality” shows amazing nerve; it is really the opposite of such neutrality.  Music publishers do not collect a public performance fee when a CD is sold because there is no way to prove or assume that a public performance (as opposed to a private one, over which rights holders have no control) will take place.  Why should a digital download be different?

Fred von Lohmann of the Electronic Frontier Foundation, who is quoted in the article, suggests that this is just a turf war between different rights societies over who will collect a fee and, hence, get a “cut.”  He is surely right about that, as he is when he points out that copyright law has never been technologically neutral.  Some exceptions (such as the section 108 library exceptions) apply only to certain technologies or treat different technologies differently.  There is a special rule, after all, for digital audio tape.  But pointing out the triviality of this use of “technological neutrality” may not be enough.  We should notice something really pernicious that is happening behind this smokescreen.

The language of technological neutrality has quite a bit of appeal for copyright policy makers.  The fantasy of a law that adapts automatically to new innovation appeals to a legislative sense of economy.  That attraction is being used, in this letter, to attempt to vastly expand the scope of the exclusive rights protected by copyright.  And this is not the first time.  We should remember that copyright owners do not get absolute control over their works, only control within the scope of the enumerated rights.

A single line in the CNET story really encapsulates the problem here: “composers, songwriters and publishers are asking for a guarantee that they will get paid for a public performance even if there isn’t a public performance.”  In this letter, the apparently benign call for technological neutrality is being used to disguise an attempt to enlarge beyond all reason the scope of the public performance right.  This is not the first effort to use that right to expand the reach of the copyright monopoly.  As I wrote about here, the debacle regarding the Kindle text-to-voice feature was based on an attempt to extend “public” performance deep into the private use of new technologies.  For another example, see this report on the recent unsuccessful attempt by music publishers to collect a fee for every ring-tone “performance” of copyrighted music.  So the desire to expand the reach of copyright control is well established; what changes is the disingenuous rhetoric behind which these efforts are hidden.

Moving beyond the photo album

Last week G. Sayeed Choudhury, Associate Dean for Library Digital Programs at Johns Hopkins University, came to Duke to talk with the staff of the Libraries about e-scholarship and the changing role of the university library as part of our strategic planning process.  His presentation and conversations were fascinating, and we were left with a great deal of thought-provoking material to consider.  I was particularly struck by one observation, which was actually Choudhury quoting from a 2004 article that appeared in D-Lib Magazine by Herbert Van de Sompel, Sandy Payette, John Erickson, Carl Lagoze and Simeon Warner.  In the article, “Rethinking Scholarly Communications,” the authors assert their belief that “the future scholarly communications system should closely resemble — and be intertwined with — the scholarly endeavor itself, rather than being its after-thought or annex.”  The article further makes the point, perhaps more obvious now than it was five years ago, that “the established scholarly communications system has not kept pace with these revolutionary changes in research practices.”

In developing this point, Choudhury talked about the traditional research article as a “snapshot” of research.  Those snapshots are increasingly far-removed from the actual research process and have less and less relevance to it.  Indeed, the traditional journal article seems more like a nostalgia item every day, reflecting the state of research on a particular topic as it was at some time in the past but beyond which science will have moved long before the formal article is published, thanks, in part, to the many informal ways of circulating research results long before the publication process is completed.

Choudhury called on libraries to move past a vision of themselves as merely a collection of these snapshots and become more active participants in the research process.  He recounted a conversation he had with one researcher who, in focusing on the real need he felt in his own work, told Sayeed that he did not care if the library ever licensed another e-journal again, but he did need their expertise to help preserve and curate his research data.  The challenge for libraries is to radically rethink how we spend our money and allocate the expertise of our staffs in ways that actually address felt needs on our campuses and do not leave us merely pasting more snapshots into a giant photo album that fewer people every day will look at.

Recently I have seen a lot of fuss over an article that appeared in the Times Higher Education supplement that posed the question “Do academic journals pose a threat to the advancement of science?”  The threat that the article focuses on is the concentration of power in a very few corporate hands that control the major scientific journals.  But read in the context of the radical changes that Choudhury, Van de Sompel and others are describing, it is clear that the threat being discussed is not a threat to the advancement of science but to the advancement of scientists.  Scholars and researchers have already found a way around the outmoded system of scholarly communications that is represented by the scientific journal.  The range of informal, digital options for disseminating research results will not merely ensure but improve the advancement of science.  All that is left for the traditional publication system to impede is the promotion and tenure process of the scientists doing that research.

This, of course, is the rub, especially for libraries.  Traditional scientific journals are increasingly irrelevant for the progress of science, but they remain the principal vehicle by which the productivity of scholars is measured.  One researcher told Choudhury very frankly that the only reason he still cared about publishing in journals was for the sake of his annual review.  Sooner or later, one hopes that universities will wake up to the tremendous inefficiency of this system, especially since the peer-reviewing on which such evaluations depend is already done in-house, by scholars paid by universities but volunteering their time to review articles for a publication process with diminishing scholarly relevance.  Nevertheless, the promotion and tenure system still relies, for the time being, on these journals, which presumably cannot survive if libraries begin canceling subscriptions at an even faster rate.  The economy may force such rapid cancellations, but even if it does not, pressure to move to a more active and relevant role in the research process will.  The question librarians must ask themselves is whether supporting an out-dated system of evaluating scholars is a sufficient justification for the millions of dollars they spend on journal subscriptions.  Even more urgently, universities need to ask if there isn’t a better, more efficient, way to evaluate the quality of the scholars and researchers they employ.

Books in the cloud

There have been lots of reports flying around recently about the decision by Amazon to delete copies of two works by George Orwell (ironic, that) from the Kindle devices of folks who thought they had bought those books once and for all.  There are reports and comments about this here, here and here.  Technologically naive as I am, my first response to the story was “wow, I didn’t know they could do that.”  My second response was to reconsider my growing inclination to buy a Kindle.

The really meaty issues, of course, are not about whether I buy a Kindle; they relate to copyright and privacy.  We are used to the idea that a copy of a book is mine to do with as I please once I have purchased a legally made copy of it.  The same is true of a CD or a DVD; I can rent, donate, destroy, lend or resell the single physical copy that embodies intellectual property because of the provision in our law called the doctrine of first sale.  We know that first sale does not necessarily apply in the same way to digital files, but the Kindle case really pushes the issue.  Everything about the purchase of an e-book from Amazon looks like a sale, and consumers can easily be forgiven for thinking that they own something at the end of the process.  It is probably time for a legislative look at how first sale applies in the digital world.  While there have been suggestions of a “forward and delete” model for digital first sale that would allow a consumer to transfer a digital file to someone else as long as copies were not multiplied, this situation raises a more fundamental question.  When can a transaction that looks like a sale be treated as a mere license, and when will consumer protection concerns step in to enforce the privileges that go with a purchase?
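
The “forward and delete” idea mentioned above is simple enough to sketch.  What follows is only an illustration of the logic, in Python with invented names, not a description of any deployed system: a transfer succeeds only if the sender’s copy is removed in the same step, so the total number of copies never grows and the transaction mimics handing over a print book.

```python
class DigitalCopy:
    """One licensed copy of an e-book; the file travels but is never duplicated."""
    def __init__(self, work: str):
        self.work = work

class Bookshelf:
    """A consumer's collection of digital copies."""
    def __init__(self, owner: str):
        self.owner = owner
        self.copies = []

    def transfer(self, copy: DigitalCopy, recipient: "Bookshelf"):
        # Forward-and-delete: the sender loses the copy in the same operation
        # that gives it to the recipient, so exactly one copy exists before
        # and after the transfer, just as with first sale of a physical book.
        self.copies.remove(copy)        # "delete" -- raises an error if the
                                        # sender never owned this copy
        recipient.copies.append(copy)   # "forward"

alice, bob = Bookshelf("alice"), Bookshelf("bob")
novel = DigitalCopy("1984")
alice.copies.append(novel)
alice.transfer(novel, bob)
assert novel in bob.copies and novel not in alice.copies
```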

It is important to note that, in the case of “1984” and “Animal Farm,” there was no question of preventing consumers from making unauthorized copies, which is the usual reason given for the assertion that first sale does not apply to the digital environment.  Here, it was Amazon that was selling the unauthorized copies, and consumers were deprived of ownership by remote action, even after they had purchased the books.  Why this could not be remedied by having Amazon pay the rights holder for the infringement, which would have been the solution if a publisher distributed print copies of a book without authorization from the copyright holder, is not clear to me.

One thing that several of the stories about this contretemps have in common is reference to Jonathan Zittrain’s must-read book “The Future of the Internet — And How to Stop It.”  It is always good for an author when he correctly predicts a technological trend, and Zittrain got this one dead-on.  His warning that the Internet is moving away from the programmable devices that fostered so much innovation toward tightly controlled, “tethered” appliances proves eerily prophetic when Amazon starts deleting books from consumers’ devices.  It makes reading Zittrain’s discussion of all of the implications of this development that much more important.

Zittrain had an excellent op-ed piece in the New York Times on July 20, called “Lost in the Cloud,” that discusses some of the privacy and censorship issues that are inherent in the development of these Internet appliances and makes brief reference to the Kindle issue.  I am happy to be able to report that Zittrain will be coming to Duke during the upcoming academic year as part of our Provost’s Lecture Series; I cannot imagine a more important discussion to have than one about the issues he is raising.

Can a “batty” ruling effect needed change?

It is thoroughly unbelievable news that US District Court Judge Deborah Batts has issued a permanent injunction against the US publication of a book that purports to update the story of Holden Caulfield, the protagonist of J.D. Salinger’s “The Catcher in the Rye.”  The new book, written by Swedish author Fredrik Colting and already published in Britain, is called “Sixty Years Later: Coming Through the Rye” and is told by a 76-year-old man called Mr. C.  There is little doubt that Mr. Colting is trying to ride the continuing popularity (which I personally have never understood) of “The Catcher in the Rye” by creating a sequel.  But there is a great deal of doubt about whether this is a copyright infringement.  The portions of the decision I have been able to read suggest that Judge Batts got all of the major copyright issues involved completely wrong.

First there was the fair use argument.  In a very similar case involving a retelling of the story of “Gone With the Wind” from the point of view of one of the slaves at Tara, the Eleventh Circuit Court of Appeals correctly recognized that the new work was a fair use of material copied from “Gone With the Wind.”  And in the recent decision finding that “The Harry Potter Lexicon” was not a fair use, Judge Robert Patterson, in the same judicial district as Judge Batts, went out of his way to make clear that an author of an original work cannot control all sequels, prequels and reference works.  Judge Patterson even writes, citing other precedents in the Circuit, that “a work is not derivative, however, simply because it is ‘based upon’ the preexisting work” (p. 39).  But that erroneous conclusion is exactly the foundation of Judge Batts’ decision.

Judge Batts seems to know only one fair use precedent — the “Oh Pretty Woman” case from the Supreme Court — and she applies it slavishly.  Since she does not think that the new book is an actual parody of the original, she holds that it is an infringing derivative work.  But it should be clear to anyone who is a federal district court judge that there are other kinds of fair use than parody; indeed, a quick read of section 107 itself would get one that far.

The real problem, however, is that this should not have been decided as a fair use issue.  In the two cases cited above, there was a substantial amount of material that was actually copied from an original into the new work.  In the case of “The Wind Done Gone,” specific dialogue was reproduced, with commentary and perspective from the “new” protagonist.  In the case of “Coming Through the Rye,” there seems to be no evidence of actual expression that is copied in the sequel.  Judge Batts focuses her objection on the conclusion that “Holden Caulfield is delineated by words” and that therefore Holden is copyrighted.  But this ignores the fundamental distinction between expression, which is protected by copyright, and ideas, which are not.  All ideas are delineated by words, but that does not give the ideas themselves, even the idea of a solipsistic teenager who inevitably grows up, copyright protection.  Even before she reads section 107, Judge Batts needs to read section 102(b) of the Copyright Act.

Indeed, her decision is so unaccountable that it leads this commentator at TechDirt to question whether there really is an idea/expression dichotomy in copyright law at all.  But that dichotomy carries a lot of weight in US law; it is frequently cited, including by the Supreme Court, as one of the basic concepts (along with fair use) that keeps copyright law from becoming an infringement of free speech.  Now that Judge Batts has read the distinction out of the law (or failed to read the law at all), the conflict with free speech becomes all too apparent, when a new book can be banned in the US because an old author doesn’t like it.

So what good can come from this ridiculous decision?  First, it should be, and very likely will be, overturned on appeal.  But more importantly, it should prompt Congress to look again at the exclusive right, granted in copyright law, to prepare derivative works.  That right has not always been part of copyright; there was a time when even abridgments and translations were held not to infringe on an original.  The pendulum has now swung the other way, and we grossly overprotect some original works from legitimate reuse because we think those new creations are derivative works.  As is frequently pointed out, Shakespeare could not have written his plays under today’s copyright regime in the US.  It is time for clearer definition of what is and, more importantly, what is not a derivative work that is entitled to protection.  If outrage over Judge Batts’ decision can prompt such clarity, some good might come from this very bad ruling.

What has changed

Courts in the U.S. have asserted for years that our copyright law is compatible with the First Amendment guarantee of free speech by citing two principles: fair use, and the rule that copyright protects only expression and leaves the underlying ideas free for all to appropriate, reuse and build upon.  Both of these safeguards are still in place, yet I have twice claimed in this space that we need to look again at the relationship between copyright and free expression.  So the question presents itself: do I just not get it, as at least one commenter seems to think, or has something changed to make reliance on fair use and idea/expression inadequate these days?

Although I am not convinced that the two principles usually cited were ever adequate, especially as the scope of copyright’s monopoly expanded, what has clearly changed in recent years is that Congress adopted the Digital Millennium Copyright Act in 1998.  The DMCA added two provisions to the Copyright Act that have had a negative impact on free expression.

First were the legal protections provided for technological protection measures, or DRM (digital rights management) systems.  It is ironic that content owners decided to move toward technological locks because they felt that legal protections were inadequate, and then found they needed legal protection for those locks when they proved insecure.  But the combination of digital locks and “anti-circumvention” rules has been devastating for free speech; even use of public domain works can now be locked up, and the law will prevent access.

Lest we forget the power of DRM, here is a note about the Motion Picture Association of America “reminding” a court that it is illegal to circumvent DRM systems even for a use of the material that would be perfectly legal.  So when digital locks are used, one of the safeguards our courts have relied on to preserve free speech — fair use — is apparently useless.  As the EFF attorney mentioned in a blog post linked above says, it is by no means certain that fair use is entirely trumped by DRM, but there is a case that held that, and the content owners certainly believe that fair use is now obsolete.

An extensive study done by Patricia Akester, a researcher with the Centre for Intellectual Property and Information Law at Cambridge University, lends weight to the argument that what she calls “privileged uses” (like fair dealing in the UK and fair use in the US) are adversely impacted by DRM systems.  There is a report of her study here, and the full text (over 200 pages!) is here.  Akester may have done the first empirical study of these adverse effects, and her conclusions are sufficiently gloomy to lead her to suggest a legislative solution.  She proposes that a “DRM Deposit System” be established, where content owners would be required to deposit either the key to their lock or an unencrypted copy of the work.  A user could then make an argument, or meet a set of requirements, for access when their proposed use was clearly within a privilege.  If the content owner declined to deposit with the system, circumvention for access for privileged uses would be allowed.  Some such system, similar to the “reverse notice and takedown” proposal discussed here over a year ago, is clearly needed if fair use is to continue to function as a safeguard of free speech.
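
Reduced to its logic, Akester’s proposal is a short decision procedure.  The sketch below is my own paraphrase of the “DRM Deposit System” in code, not anything taken from the study itself, and all of the names are invented:

```python
def resolve_access(work: str, use_is_privileged: bool, deposits: dict) -> str:
    """Decide how a user gets access to a DRM-protected work under a
    deposit scheme like the one Akester proposes."""
    if not use_is_privileged:
        # Ordinary uses are unaffected: the DRM stays in place.
        return "no special access; DRM and licensing apply as usual"
    deposit = deposits.get(work)          # a key or an unencrypted copy
    if deposit is not None:
        return f"access supplied through the deposited {deposit}"
    # The owner declined to deposit, so self-help becomes lawful.
    return "circumvention permitted for this privileged use"

deposits = {"Locked Novel": "decryption key"}
print(resolve_access("Locked Novel", True, deposits))      # deposit route
print(resolve_access("Undeposited Film", True, deposits))  # circumvention route
print(resolve_access("Locked Novel", False, deposits))     # no privilege, no access
```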

The other provision of the DMCA that imperils free expression is the notice and takedown procedure itself, which was created to protect Internet service providers (ISPs) from liability for infringing activity that happens over their networks.  In one sense, this “safe harbor” has been good for fair use, allowing the creation of user-generated content sites like Flickr and YouTube where lots of fair use experimentation can take place.  But the takedown procedure is being abused, with bogus notices being sent to prevent legitimate and even socially necessary criticism and parody.  ISPs are quick to take down sites that are named in these takedown notices, and the process for getting them restored subjects the original poster to an increased risk of liability.  It is very costly, after all, to defend free speech even against a bogus claim.  So abusive takedown notices have now become a favored way to suppress criticism and comment that is unpopular with a major company or content owner.  The long tradition of the “I Hate BigCo, Inc., and here is why” web site, which courts have often held to be fair use of copyrighted and trademarked content, is now much riskier than it was before.  In fact, the Electronic Frontier Foundation has even created these six steps to safeguard a gripe or parody site, recognizing that free speech is no longer sufficiently protected by traditional provisions within the copyright law alone.
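
For readers who have not followed one of these disputes, the section 512 procedure can be condensed into a few branches.  This is a simplified sketch of my own (the statute has more conditions and deadlines than shown here), but it shows why the deck is stacked against the poster: the material comes down on a mere allegation, and it stays down unless the poster takes on the risk of a counter-notice.

```python
from enum import Enum, auto

class Posting(Enum):
    TAKEN_DOWN = auto()
    RESTORED = auto()
    IN_LITIGATION = auto()

def notice_and_takedown(counter_notice: bool, suit_filed: bool) -> Posting:
    """Trace one posting through a simplified DMCA section 512 flow."""
    status = Posting.TAKEN_DOWN       # the ISP removes the material on receipt
                                      # of a notice, valid or bogus, to keep
                                      # its safe-harbor immunity
    if not counter_notice:
        return status                 # most posters stop here: the speech stays down
    if suit_filed:
        return Posting.IN_LITIGATION  # the rights holder sues within the window
    return Posting.RESTORED           # otherwise the ISP restores the material
                                      # (roughly 10-14 business days later)

print(notice_and_takedown(counter_notice=False, suit_filed=False))  # Posting.TAKEN_DOWN
print(notice_and_takedown(counter_notice=True, suit_filed=False))   # Posting.RESTORED
```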

Click-wrap and illusory promises

At the end of my last post I returned to a frequent theme, the unfairness of “clickwrap” licenses and the fear that they are over-enforced by courts, in spite of the inability of users to negotiate the terms or avoid enforcement of these one-sided deals.

So I was rather pleased to find an exception (actually a series of exceptions) to this over-enforcement in a recent case out of the federal district court in the Northern District of Texas.  While this line of cases does not accomplish what I have wished for, a ruling that copyright law and its exceptions should preempt non-negotiable contracts, it does show that, in some circumstances, courts will reject a clickwrap agreement when the seller takes too much advantage of its powerful position.

Cathryn Harris agreed to a clickwrap license when she signed up for a particular online program run by Blockbuster.  Such licenses, of course, condition service or access on agreeing to a set of terms that is wholly non-negotiable; all the user can do is click “I Accept” or forgo the service entirely.  I have complained about the enforcement of a similar license in the Turnitin case decided by a court in Virginia, where the students were compelled by their school to sign up with Turnitin.  In this case, when a dispute arose and Ms. Harris filed suit against Blockbuster, the company tried to enforce the clause in the clickwrap license that sends all disputes to arbitration (which is much less expensive).  Ms. Harris opposed the motion to compel arbitration, and the Texas court sided with her, ruling that the entire agreement, including the arbitration provision, was invalid.

The reason the court rejected the license was that it contained a provision saying that Blockbuster could change the terms of the agreement at any time, without notice.  Such provisions are not uncommon in clickwrap licenses, because the nature of the agreement makes it impossible for the seller to contact everyone who agrees to the terms of use.  But here the court said that such a clause makes the contract “illusory.”  Contracts, after all, are an exchange of promises, and a one-sided, “we can change the terms anytime” clause really means that the side that drafted the agreement has not made any promise at all that it is bound to stick to.  When an apparent promise really is just a statement of discretion (“I will pay you $20 to wash my car if I decide it was worth it”), courts call the contract illusory because there is no real exchange of promises.

As this analysis of the case shows, there have been several cases in which such clauses allowing one-sided changes have caused a clickwrap agreement to be found illusory.  It is interesting that they are all about arbitration.  I suspect this is because arbitration is something that must be based on mutual agreement, and courts are reluctant to limit a person’s access to the legal system based on a promise that was not undertaken voluntarily.

For the purposes of our concerns here, this case is a small indication that clickwrap licenses must be drafted carefully, and that the fact that users seldom read such agreements is not an excuse to overreach too far.  When the issue is important enough, a court will occasionally void a one-sided agreement rather than enforce terms that put one party at too great a disadvantage.  Perhaps we will soon see such a willingness to reexamine clickwrap agreements when the disadvantage caused is a loss of those user rights that Congress so clearly intended when it drafted the copyright law.