Category Archives: Technologies

A “twitter” about contracts

Although I had heard of Twitter for a while now, I did not really know what it was until prompted to learn more by two recent articles. One is this piece in the Chronicle of Higher Education about potential library uses for the “microblogging” or social messaging service. It recalls the discussions I heard recently about the different levels of involvement folks from my institution felt at an academic conference, where the audience for various talks was using Twitter during the programs to share comments, examples and the like. Rather than being distracting, as I suspected it would be, the reports were that this added a welcome dimension to the conference experience.

What caught my professional attention, however, was this report of an ongoing controversy between Twitter and some of its customers about the terms of service to which every user agrees when they sign up for the service. The specific argument concerns the degree to which Twitter was obligated to pursue complaints of harassment directed by one user against another. On that issue, Twitter seems to be caught between a rock and a hard place — if they do not take steps to stop harassment they seem to condone a clear violation of a condition of use that they imposed, but if they do take action they may put in jeopardy the “safe harbor” protection from liability based on user postings that they gain under section 230 of the Communications Decency Act.

The broader issue, in my opinion, is the role of these terms of use statements in governing the relationship between users and the providers of Internet services. For one thing, it seems that such contractual agreements can be changed at the will of the provider. As the article cited above tells it, rather than address the harassment issue, Twitter indicated that it would wash its hands of the matter and simply “update” its terms of service. More amazing yet is the statement that Twitter borrowed its TOS from Flickr, apparently without much attention to what those terms contained. A Twitter executive is quoted as saying that, as a start-up, Twitter just “threw something up early on and didn’t give it a lot of thought.”

Who knew that these Internet companies had such a cavalier attitude toward the non-negotiable contracts they impose on Internet users? Actually, the terms Twitter uses, and says it borrowed from Flickr, are much less lengthy and burdensome than those now used by Flickr itself; since its acquisition by Yahoo!, the terms of use that a new Flickr user agrees to (standard Yahoo! terms) print out to seven type-filled pages, where the Twitter TOS amounts to only two. These click-through terms are being enforced by courts as binding contracts, even when the Internet service provider doesn’t “give them a lot of thought.” In the case about the plagiarism detection site Turnitin, high-school student users were held to the terms of service they clicked through even though they made valiant efforts to modify those terms.

As more and more communication on campus happens over these kinds of proprietary sites and networks, and as commercial Internet tools become more common for student and faculty work, these contracts will increasingly control what we can do. Often they give the owner of the site or tools an exploitable interest in the work created or stored there. Yet very few people even realize that they are binding themselves to detailed and enforceable terms whenever they click “I agree.” It is therefore becoming ever more important that courts find ways to introduce some nuance into their enforcement of these click-through agreements, rather than simply enforcing them blindly as the Virginia court did in Turnitin. At least one proposal for such a nuanced approach, which considers when a contract, especially a non-negotiable online contract, should be preempted by federal copyright law and the policy that law is aimed at enacting, is found in this complex but compelling article on “Copyright Preemption of Contracts” by Christina Bohannan. We can but hope that courts will develop a more sophisticated approach to these contracts, whether they use Bohannan’s proposal or some other, as they become more aware that such contracts may undermine both the policy behind copyright law and the traditional rules of contract formation, and that they may do so, if left unchecked, on the basis of very little thought or reflection by the party imposing the terms.

Turnitin and hold your nose

I have been very neglectful of posting for the past two weeks, mostly due to the pressures of other work, but the attention paid to the recent court decision involving the online plagiarism detection service Turnitin has finally provoked me enough to write.

Turnitin is a web-based service that compares submitted papers to a vast database of essays available on the web and to its own proprietary database. It offers instructors a report on how likely it is that a given paper is plagiarized. Four high school students from Virginia who were required to submit their work to Turnitin or get a zero challenged the company in court. The district court’s opinion, dismissing all of the students’ claims, was issued March 11 and has provoked a lot of reaction. The Chronicle of Higher Education has a story about those reactions here, and William Patry discusses several aspects of the case in his blog post called “Turn-it-in and Kiss-it-goodbye.”
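
For readers curious about the mechanics, the comparison Turnitin performs is a form of document fingerprinting. Turnitin’s actual algorithm and database are proprietary, so the short Python sketch below is purely illustrative: it shows one common, generic technique (word “shingling”) for scoring overlap between a submission and an archive, with every name and number invented for the example.

    # Illustrative sketch only; not Turnitin's actual (proprietary) method.
    def shingles(text, k=5):
        """Return the set of k-word sequences ("shingles") in a text."""
        words = text.lower().split()
        return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

    def overlap_score(submission, archived):
        """Fraction of the submission's shingles that also appear in an archived paper."""
        sub = shingles(submission)
        if not sub:
            return 0.0
        return len(sub & shingles(archived)) / len(sub)

    archive = []  # stands in for the service's paper database

    def check_and_store(paper):
        """Score a paper against the archive, then retain it for future comparisons."""
        best = max((overlap_score(paper, old) for old in archive), default=0.0)
        archive.append(paper)  # the retention step the student plaintiffs objected to
        return best

Note that the last step, adding each submission to the archive so it can be matched against future papers, is exactly the copying at issue in the case.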

One aspect of the decision worth mentioning is its discussion of the claim that Turnitin infringes copyright because it adds a copy of every paper to its database as soon as the paper is submitted so it can be compared to later submissions. The plaintiffs tried to prevent this by indicating on the papers they submitted their lack of consent to have their work copied in this way, but the court found that the click-through contract they were obligated to agree to in order to submit in the first place took precedence. More on that in a moment. On the copyright issue, the court found that the company had a valid fair use defense regarding its storage and use of student work, even if the contract giving it permission had failed (which it did not).

I have been torn about the fair use analysis the court used in this case. I have a hard time justifying to myself the business model Turnitin uses, although my doubts are likely bound up with broader concerns about this kind of attempt to use technology to force people to behave with integrity. But, to my mind, Turnitin’s business model is as dependent on infringement as Grokster’s was. The district court disagreed, finding that Turnitin made a transformative use of the works it archived for later comparison. What strikes me most about this decision is the way “transformative use” has become a talisman, invoked whenever the court wants to find fair use. The copyright statute seems to indicate pretty clearly that even non-transformative uses can be fair use, but courts are now so enamored of the notion of transformation that they find it even in unlikely situations, because it has become the sine qua non of fair use. This is both good and bad for higher education. Some educational uses of copyrighted works seem to be purely iterative, not transformative, and fair use in those cases seems increasingly hard to argue. On the other hand, the more the concept of transformative use is expanded, the better it will be for education; some of those uses that don’t seem transformative to me may well seem so to our courts.

The other, more troubling aspect of the Turnitin decision was the court’s attitude toward the click-wrap license. The plaintiff students had no choice but to click through the license; they faced a zero if they didn’t, and there was no way to communicate with Turnitin until they had accepted the license. Nevertheless, they tried to register their objection to the term that allowed Turnitin to copy and save their work; they included a notice with their papers that said they did not consent. Tough luck, said the court; you agreed to the license and you have to live with it. This strict enforcement of a “take it or leave it” license, even when the party on whom it is imposed objects in a timely way, seems to make a mockery of the notion of a contract as a bargain, one that may be “unconscionable” if there is no meaningful chance to negotiate.

If we needed further confirmation that the court was aiming at a particular result and disregarding a reasoned discussion of the law, there was its astonishing dismissal of the plaintiffs’ argument that, as minors, they could void the contracts they entered into. The court recognized that this was the usual rule in contract law, but said that the plaintiffs could not avail themselves of it because they had accepted the “benefits” of the contract. What benefit had they accepted, I wondered. Standing to sue, the court replied; the right to bring the case to challenge the contract itself. By this logic, of course, no contract could ever be challenged on the basis of “infancy.” Such absurd and circular reasoning can only serve, as Bill Patry says, to increase the cynicism so many people feel toward our courts.

Copyright reform suggestions, part 1

I am a little ashamed to admit that, at the American Library Association meeting last month, I learned about a very problematic provision of the U.S. copyright law that I had never heard of before. Representatives of the Association for Recorded Sound Collections and the Music Library Association spoke to several groups during the meetings in Philadelphia about the effects of section 301(c) on our ability to preserve historical sound recordings. ARSC and MLA are looking for support for their efforts to have 301(c) repealed or amended.

When our “new” Copyright Act was adopted in 1976, one of the things it did was explicitly preempt state copyright protection. Before the 1976 Act, unpublished works were protected by a wide variety of different state laws (many with perpetual duration), and federal copyright protection usually took effect only when something was published. This created lots of confusing and difficult situations, so Congress brought almost all works, published and unpublished, under federal protection, including the limited federal term of protection.

For some odd reason, Congress crafted an exception for sound recordings made prior to February 15, 1972. Those recordings, instead of being subject to the normal copyright rules, continue to be protected by state law until 2067. State protection, which was usually created by judges rather than legislators, often allowed perpetual protection for unpublished works, but was not designed to deal with other kinds of material. Leaving these historical sound recordings subject to the patchwork of state laws has meant that, in fact if not by intent, these historical materials are subject to the most restrictive of those laws and are for all practical purposes unusable until 2067. For the earliest recordings, which date from the 1890s, this amounts to a copyright term of over 170 years. Since even preservationists are reluctant to make copies under this bizarre and uncertain regime, many recordings are locked up by copyright for longer than the usable life of the medium on which they are recorded; they will be irretrievably lost before they enter the public domain.

So here is an opportunity to reform our copyright act to mitigate one of its most pernicious effects – the unnecessary loss of our cultural heritage merely to time and decay – without harming anyone’s economic interests. In fact, compilations of some of these old recordings that are available for sale in other countries but technically infringing in the US could finally be sold here as well. The recording industry frequently lobbies Congress for full performance rights in sound recordings, and there was legislation to add such rights introduced into both houses late last year (the “Performance Rights Act”). Whether or not it is a good idea to subject radio stations to all the licensing fees such a law would require, this seems like a good time to demand a quid pro quo in the shape of repealing the foolish overprotection of historical sound recordings.

But it is just so easy!

The ease with which we can copy and use stuff found on the Internet, particularly photographs and other images, leads to some delicious ironies when some of the major corporate interests that rail against file-sharing are caught infringing other people’s copyrights. The Washington Post published an interesting story on Wednesday that looked at some of these cases where snapshots on the Web were misappropriated for commercial use. Often the unauthorized use is dismissed as accidental — it is amazing how many unsupervised interns appear to be doing significant work for these companies — but whether they are the result of inattention or conscious laziness, these lapses suggest that some of the major commercial content owners have little concern for copyrights other than their own. It makes all the rhetoric about theft and the moral claims of creators that these big media companies throw around seem rather disingenuous.

The best thing about this article, however, is the discussion of it, with the wonderful title “Good Artists Copy, Great Artists Steal,” on the Info/Law blog. I don’t think I have had the chance to point to Info/Law before, but it is an excellent place for information and analysis about the “convergence of intellectual property doctrine, communications regulation, First Amendment norms, and new technology.” This post, which also reports on a recent infringement action filed against Jerry Seinfeld and his wife, is a nice example of a careful yet entertaining dissection of the legal principles at stake in each of the two reported stories.

The point, of course, is that the Internet has fostered a culture of easy borrowing and creative remixing that is at odds with much of our current law. There is a great deal in that culture that is valuable, with its emphasis on user creativity and sharing, and its conflict with much of the prevailing rhetoric about intellectual property is becoming too obvious, and too ubiquitous, to ignore.

Still waiting

It seems we have been waiting for years for the e-book to “arrive.” The promise of having a whole library in a hand-held device has been made for a long time, but the technology has seldom lived up to expectations. The early readers were awkward to use and difficult to read. The latest generation of e-book readers seems to have improved a great deal, but problems still remain.

I participated in a trial of the Sony Reader last year, and was very pleased with the visual display and the ease of use. But I was disappointed by the range of books available, which is probably the fault of my quirky and eclectic reading habits, and with the awkward way the reader displayed PDF files. Now the Amazon Kindle is getting a lot of attention. Several people have noted the limited selection (and the Kindle does not allow reading of PDF files at all), but the debate about e-books has now begun to recognize another issue that reduces the value of e-books: digital rights management. UPDATE — Comment by Kim Knoch (click on comments above) explains that there is a way to read PDF files on the Kindle for a small fee.

DRM is used, of course, to protect the value of a proprietary e-book by preventing copying and display on other devices. But the e-book vendors seem to have missed the obvious fact that DRM reduces the value of the e-book for consumers. By definition, DRM limits the options for readers, and in our world of constant innovation and a plethora of devices competing for our dollars, options are value.

A blog from the Free Software Foundation dedicated to a campaign against DRM – Defective by Design – makes this point in a post called “Don’t let DRM get between you and a good book.” The Defective by Design campaign is primarily a consumer movement, focused on electronic freedom and privacy (the threat DRM may sometimes pose to privacy is another important issue). They make the point that, with DRM-limited e-books, every time an updated device is released it could require that consumers buy a new version of their favorite books. They also argue that DRM is bad for authors and publishers as well, supporting a form of “digital censorship.”

The same concern about DRM in e-books is also raised in a recent post on the if:book blog from the folks at the Institute for the Future of the Book. “The future of the sustainable book” is part of a much larger discussion, all of which is worth attention. Regarding all sorts of electronic texts, this telling remark clearly places DRM-protected proprietary e-books low on the scale of sustainability: “since I work in book publishing, job one is to figure out what it means to create a sustainable book. Lots of models come to mind. Good ones like Wikipedia (device-neutral and always in the latest, free, edition) and bad ones like the Kindle, (which tries to create a market for an ebook reader with designed obsolescence).”

Today an e-mail appeared in my inbox proclaiming that the era of DRM is over. The author was referring to a recent announcement by Sony BMG that it was finally considering following the lead of much of the rest of the music industry and selling music in an open MP3 format. This is good news, but it is not the end of DRM by any means. Many other issues regarding electronic protection measures remain, and we are still waiting for a truly usable, portable e-book and reader.

P2P and New Business Models

Peer-to-peer file sharing is usually not a scholarly communications issue in itself. Most such activity involves the infringing reproduction and distribution of music and video files, and it is more of a problem for colleges and universities than a benefit. Nevertheless, there are legitimate forms of file-sharing that happen at universities (and between them), and the big danger that recreational file swapping poses to schools is that draconian measures to control the illegal activity will also inhibit legal and productive collaboration.

Each time Congress proposes to address file-sharing at universities, this is one of the concerns that unites the higher education community against the proposals. Another concern is that the cost of implementing new mandates will be very high, even though university networks account for only a small portion of the overall problem. The recent proposal in Congress (see article here from the Chronicle of Higher Education) is a case in point. The proposal to require that universities develop a plan to address file-sharing is a little bit insulting – most schools already have a plan – and the instructions to offer alternatives to illegal music downloading and to explore technological solutions to the problem are unfunded mandates that could cost hundreds of millions of dollars. And filters that stop music sharing may also inhibit legitimate collaboration; the history of Internet filters suggests that they are often more effective at preventing legal activity than illegal activity.

The problem posed by illicit file-sharing will not be solved by increased enforcement measures; the genie is already out of the bottle in that regard — P2P swapping has grown beyond the bounds of any attempt to stop it using either law or technology. What are needed to curb the growth of P2P are business models that make legal acquisition of digital music and movies more attractive than the illegal alternatives. Georgia Harper from the University of Texas (see her blog here) has been a vocal advocate of business model development as a solution to some of our current copyright problems, and a conversation between Georgia and some speakers at a recent conference caused me to start wondering what such business models would look like.

One possibility came to my attention (rather belatedly, I suppose) while watching a football game on Saturday. Verizon Wireless was heavily advertising its V-Cast Song ID service, which allows a user who hears music they like to capture a sample of the audio, identify the song, and purchase a copy directly from their cell phone, delivered right to that phone (see news report here). This, it seems to me, is exactly the kind of value-added service that can move listeners back to legal music downloading services, and it represents a much more positive solution to the problem of file-sharing than any of the legal remedies yet proposed.
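
Services like Song ID typically rest on audio fingerprinting: the phone condenses a few noisy seconds of sound into a compact signature and matches it against a catalog of signatures. Verizon has not published its implementation, so the Python sketch below only illustrates the general landmark-hashing idea behind such systems (spectrogram peaks paired into time-offset-invariant hashes); all function names and parameters here are hypothetical.

    # Illustrative sketch of landmark-style audio fingerprinting;
    # not a description of V-Cast's actual (unpublished) system.
    import numpy as np
    from scipy.signal import spectrogram

    def fingerprint(samples, rate, fan_out=5):
        """Turn raw audio into a set of (freq1, freq2, time-delta) landmark hashes."""
        _freqs, _times, mag = spectrogram(samples, fs=rate, nperseg=1024)
        peaks = [(int(np.argmax(mag[:, t])), t)  # strongest frequency bin per time frame
                 for t in range(mag.shape[1])]
        hashes = set()
        for i, (f1, t1) in enumerate(peaks):
            for f2, t2 in peaks[i + 1:i + 1 + fan_out]:
                hashes.add((f1, f2, t2 - t1))  # invariant to where the clip starts
        return hashes

    def identify(sample_hashes, catalog):
        """Return the catalog entry sharing the most landmarks with the sample."""
        return max(catalog, key=lambda song: len(sample_hashes & catalog[song]))

Because the hashes record only relative timing, a clip captured mid-song in a noisy room can still match the right entry in the catalog, which is what makes an identify-and-purchase service on a phone practical.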

Second thoughts

On Google — the New Yorker has a learned and fascinating article on the Google Library project this month, by historian Anthony Grafton. The Google project has gotten inordinate praise in some quarters, as well as its share of criticism (see here, for my contribution to the latter). But Grafton’s article is neither wholly critical nor wholly laudatory; his is an attempt to place Google in the history of efforts at building a universal library and to realistically assess what can actually be accomplished. He points out that a truly comprehensive history of humanity, which some have claimed Google will provide, will still remain out of reach. For example, much “gray” literature and archival material will never see the light of scanning, nor will the cultural production of many of the world’s poorest countries.

This latter point is especially troubling. Poor countries are not just consumers of cultural production; they also produce it. The digitization of so much western/northern literature could have two negative effects on this production. One would be to push developing-world literature further to the margins in the developed world. The other is that, insofar as technology is available within those developing countries, the easy access to material through Google could marginalize a country’s own cultural production even within its borders.

Nevertheless, Grafton is properly amazed at the level of access that digitization has made possible. As he says, picking up his opening theme, “Even [Alfred] Kazin’s democratic imagination could not have envisaged the hordes of the Web’s actual and potential users, many of whom will read material that would have been all but inaccessible to them a generation ago.” Digitization offers great things, but a realistic valuation of those benefits recognizes that no single means of access should replace all the others; the Internet will continue to coexist with libraries, archives and whatever the future holds that we can not yet imagine; all will be part of any genuinely comprehensive look at human history.

On Second Life — On a less exalted plane, the New York Post reported last week on a lawsuit filed by and against Second Life entrepreneurs alleging copyright infringement of products designed and sold entirely within the virtual environment. See another comment on the lawsuit here. As the comment points out, many educators are looking closely at the educational potential of Second Life and other virtual worlds. This lawsuit raises some interesting questions that will need to be answered in order to exploit that potential. For example, do real world laws protecting the rights of creators even apply to Second Life? Is copying someone else’s design in Second Life stealing, as the plaintiffs allege, or is it merely part of a giant “video game” that should not have real world legal consequences? The answer to that question should be a prerequisite to placing educational content into Second Life; teachers typically want to protect the content they produce, or at least share it on their own terms. Whether Second Life will be subject to real world laws, intra-world regulation amongst its members, or merely arbitrary decisions enforced by Linden Labs, its owner, will have a profound impact on how much time, money and content educators are likely to invest in Second Life.

Interestingly, the same defendant who argues that Second Life is a giant video game in which real world laws should not apply also claims that his home in Second Life was subject to an illegal search and seizure by the plaintiffs when they entered to photograph the allegedly infringing items. Just goes to show how hard it is for us to escape our real world notions of property and privacy.

Talk back on schol comm issues

Two interesting scholars have recently undertaken to write major pieces of scholarship about scholarly communications issues in blog form. This means that all of us have the opportunity to comment on these works in progress, a rare opportunity to participate in cutting edge research and to make our voices heard before a work of scholarship is published. Not only are these two projects interesting because of their topics, they also represent important experiments in the kind of collaborative scholarship that the digital environment makes possible.

Georgia Harper, well-known in copyright circles for her years of work in the Counsel’s Office at the University of Texas and her educational outreach to the whole academic community, is now a Ph.D. student in Library and Information Science. She is working on a major paper on the impact of mass digitization projects on copyright law and policy. Her work should be fascinating, and we are invited to participate as she develops the paper and solicits feedback at this blog site using CommentPress software and in collaboration with the Institute for the Future of the Book.

The growing influence of the Institute for the Future of the Book in these new experiments in collaborative scholarship is evident from the fact that the other project, Siva Vaidhyanathan’s growing book on “The Googlization of Everything,” is also a project of if:book. Vaidhyanathan’s project promises to be the more synoptic and polemical of the two, as he tells us why we should worry that “one company is disrupting culture, commerce and community.” Combined with Georgia’s deep knowledge and experience in law and policy, these two projects offer a rich set of opportunities to imagine the future of publishing and scholarship.

Fixing the DMCA?

The Digital Millennium Copyright Act added two important sections to the copyright act: one that has proved somewhat useful in fostering fair use and the balance between owners’ and users’ rights, and one that, in stark contrast, threatens to drastically overturn that carefully crafted balance. The “safe harbor” provided for online service providers has assisted the growth of web 2.0 applications that offer an unprecedented opportunity for user creativity that pushes the boundaries of fair use. The strict protection of electronic protection measures (the anti-circumvention rules), on the other hand, has arguably given content producers the means to control each and every use of their content, forbidding any uses they wish to prevent, even if those uses would otherwise be privileged under the rest of the copyright law.

A new article by Professors Reichman, Dinwoodie and Samuelson, available here on the Social Science Research Network and forthcoming in the Berkeley Technology Law Journal, examines these two provisions carefully, in the context of their origins in the World Intellectual Property Organization Copyright Treaty and the US Congress, as well as the important interpretations of each in the courts. In the development of the safe harbor “notice and takedown” mechanism that has successfully protected OSPs, the professors find a fascinating suggestion for how to fix the clearly dysfunctional anti-circumvention rules.

It is difficult to summarize an article this complex, although the clear writing and argumentation in this piece make it far easier to comprehend than many other law journal articles. The authors examine how the concern of the US courts, starting with the famous Sony Betamax case before the Supreme Court in 1984, to protect so-called “dual-use” technologies (those capable of both infringing and non-infringing uses) so that copyright law is not allowed to stifle technological innovation laid the groundwork for the safe-harbor provision of the DMCA. Building an elaborate analogy between those cases and the situations in which the anti-circumvention rules come into play, the three professors suggest that, in the US (the article also deals with the European Community), courts could begin fashioning a similar solution to the over-protection of copyrighted works fostered by technological protection measures. In short, they propose a “reverse notice and takedown” procedure that would obligate content producers to “unlock” technological protection when necessary to foster uses privileged by the law as in the public interest. They discuss in detail how such a procedure might be established in both the US and the EC, and what the details of such a solution might look like.

Although long and complicated, with its treatment of both the US and the EC, this article richly rewards the time spent reading it. It provides a clear summary of where we are vis-à-vis the uneasy relationship between copyright and the digital environment, how we got to this point and how we might move forward in a responsible way. Scholarly work seems to get more attention from European courts and legislators than it does in the US, but this is one article that we must hope catches the attention of some well-placed American jurists who could consider implementing its creative solution to a problem that has rapidly become intolerable.

What faculty think

It is always dangerous to try to speculate about the opinions and attitudes of a large group, especially one as diverse as university faculty. But the University of California’s Office of Scholarly Communications always produces great research, and their recent report on “Faculty Attitudes and Behaviors Regarding Scholarly Communication” is no exception. The full report can be downloaded here, and a PDF of the Executive Summary and Summary of Findings is here. This is solid, empirical research that can help guide attempts to reform and renew the system of disseminating scholarly research.

One of the most interesting findings in this report is the disconnect it documents between attitudes and behaviors around open access and, especially, copyright. Faculty members report a high level of concern about these issues, but very little change in behavior as a result of that concern. Most respondents, regardless of their worries or desire for change, continue to pursue conventional scholarly behaviors around research publication. These behaviors are deeply ingrained in the fabric of scholarship, so this finding isn’t very surprising. But it does suggest that offering help to faculty around copyright management, as well as simple and convenient ways to deposit their work in open access repositories, is very important. When we are asking a group to change long-followed practices, we ought to make the case compelling and the changes as painless as possible.

One thing that may help with this change is the growth of informal means of scholarly communication. As blogs, wikis, and even e-mail become an increasingly ubiquitous part of the scholarly process, traditional channels of scholarship will seem less inevitable than they have before. The UC report notes that the traditional system of tenure and promotion, with its narrow view of what constitutes acceptable scholarship, is one major reason for strict allegiance to the traditional system; the proliferation of informal channels of communication, rather than “external” pressure, seems the most likely way to open up that view of scholarship. It is to be hoped that the value of a more open and informal way of evaluating and improving scholarship will mean that traditional channels, as valuable as they are, are no longer the only option for recognizing quality work.

Another interesting finding of the report is that “senior faculty may be the most fertile targets for innovation in scholarly communications.” For many this seems counter-intuitive, although the report on legal scholarship discussed in our last post indicated the same possibility. While younger faculty may be more comfortable with technology (although that is by no means certain), it is senior faculty, the UC report suggests, who can afford to experiment, since tenure makes experimentation much lower risk. Is it possible that another explanation of this finding is that senior faculty, with their years of experience in traditional scholarly publishing, have reached a level of frustration that makes them embrace new alternatives more quickly?