
Power, error and a “cruel historian”

There was a short but fascinating article posted on the Association of College and Research Libraries’ blog earlier in the month called “Information is Power — Even When it is Wrong.”  Starting with a truly frightening story about how easily misinformation is spread on the web, librarian Amy Fry discusses some important lessons that we not only can, but must, learn about information in the digital age.

Misinformation is not new, of course, and in an election year we are reminded that it is probably most often distributed intentionally.  But good, old-fashioned error can also account for much mistaken information, and Fry’s article is a reminder of the tremendous and irreversible power that such errors gain; they quite literally take on a life of their own and become, in some sense, as influential as truth.

Fry’s lessons are deceptively simple, but they remind us that simple rules are often the best guide to practice.  Her first rule — “Metadata is important” — codifies what librarians have known all along: information is only as good as its source, date and application.  Two other rules remind us that the Web is a different, and frightening, place in many ways.  That aggregators can mislead and that Google possesses enormous power to shape thoughts and beliefs on a massive scale are lessons too important for us to forget.  And finally, there is the powerful truth that there is no substitute for critical thinking.  If we all ever really learn that lesson, the world will be a much better place.

Fry’s article reminds me of a book I have been reading lately, “The Future of Reputation” by Daniel Solove.  His analysis of how easy it is to be subject to a viral attack is another example of the new conditions we have to adjust to as scholarship, and so much else in our lives, begins to move at the speed of digital.  Solove posits a fundamental tension, in the digital world of instant mass communication, blogs and social networking sites, between freedom and privacy.  We now have the means to express ourselves more freely and fully than ever before, and to make a potentially permanent record of the things we say about ourselves and others.  The danger, of course, is that “the Internet is a cruel historian” that allows others to easily discover all the things we have written about ourselves or that others have written about us, whether they are true or not.  Privacy and reputation are in jeopardy from this new expressive freedom.

Solove’s book is sobering from many perspectives, including as a reminder of the world in which scholarship is carried out today.  With so much preliminary work on scholarly ideas being done by e-mail, in Google Docs, or on blogs, we need to remember that our tentative ideas and first drafts, our wild proposals and our silly comments, may not ever really be completely gone.  Errors and misstatements may live forever, and they may spread around the world in seconds; it will now require a special effort to ensure that there is a final “version of record” of any piece of scholarship, something that has not been much of a concern in the past.  There are tremendous opportunities for collaboration and more open commentary and correction in digital scholarship, but it is also an environment that requires a new level of awareness and attention.

E-textbooks: the state of play

As the new school year begins there has been lots of reporting about E-textbooks, and the welter of stories offers an opportunity to assess the overall state of play.

This story from Inside Higher Ed outlines some of the “next steps” for E-texts, as well as the “remaining obstacles,” which are substantial. The article focuses most of its attention on two initiatives – a highly speculative report that Amazon wants to introduce E-texts for its Kindle e-book reader, and a description of the progress being made by CourseSmart in partnering with higher education. It is worth looking at these two projects, along with some other business models for e-texts, in light of some recently articulated needs and concerns.

A recent study done by a coalition of student groups expresses some doubts about digital textbooks that are worth considering as we look at different possible business models. The report raises three potential problems with digital versions: their alleged failure to reduce costs, limitations on how much of an e-text a student is allowed to print, and the short duration of access provided by some licensing arrangements. These latter two concerns, obviously, support the contention that print textbooks are still serving student needs better than e-texts, especially if the digital versions are not significantly less expensive. To these concerns we might add one more – students like to be able to highlight and annotate textbooks, and digital versions that do not support this activity will be disfavored.

So how do the different business models fare in addressing these concerns?

One model is simply the distribution of electronic versions of traditional textbooks by traditional publishers. This seems like the least promising of the models, since it likely solves none of the issues raised by the student groups. It is interesting that the representative of traditional publishers quoted in the Inside Higher Ed story made no reference at all to cost concerns but stressed the potential for e-texts to shut down the market for used textbooks. Unsurprisingly, the focus here is on preventing competition and protecting income, not serving the needs of the student-consumers.

CourseSmart offers a business model that differs very little from what the traditional publishers might undertake themselves. There is some dispute about the issue of cost, however, with CourseSmart arguing not only that its digital versions of traditional textbooks are significantly cheaper, but that they remain so even when the income that students might usually expect from reselling their print texts is taken into account. Still, that lower payment purchases only temporary access for students and a restricted ability to print. Nevertheless, CourseSmart has been successful in arranging partnerships with San Diego State University and the state university system in Ohio, so it will be worth watching to see how those experiments develop, particularly in regard to student usage and satisfaction.

Amazon’s Kindle is yet another possibility for distributing e-texts. We know very little about how such texts would be priced or what features they would have, but we do know that the desire of students to be able to print would not be fulfilled. This is an important issue for students, apparently, since the student report on e-texts found that 60% of students surveyed would be willing to pay for a low-cost print copy of a textbook even if a free digital version were available to them.

This latter fact is precisely what Flat World Publishing is counting on with their plan to make free digital textbooks available and also sell print-on-demand copies to those who want a paper version. As I described this model a few weeks ago, Flat World is hoping to show that, over the long term, print on demand can prove a sustainable business model. Since this accords better with the expressed needs of student users than any of the above models, they might just be right.

The last model for distributing digital textbooks, one often overlooked in the debates (although endorsed by the student report mentioned above) but given some attention in this article from the LA Times, is open access. Frustrated faculty members are increasingly considering creating digital textbooks that they will distribute for free. Supporting such work, with grants of up to $50,000, is another part of the initiative undertaken by the university system in Ohio. Ohio has long been a leader in supporting libraries in higher education, and this support for open-access textbooks offers a new avenue for leadership. The real “costs” we should be considering when we discuss e-texts include reasonable support for the work of creating such resources, as well as credit for the scholarly product of that work when tenure reviews come around. So much of the expense of textbooks comes from the profit claimed by the “middlemen” who distribute them that real efforts to reduce the cost of education must focus on ways to encourage in-house creation of digital texts (which is little different from how textbooks have always been written) and to distribute them directly to students, as the Internet now makes possible.

The other side of the balance

We are often told that copyright law is supposed to be a balance, offering, on the one hand, the financial incentive to creators that goes with monopoly rights and, on the other hand, sufficient exceptions to those monopoly rights to allow new creators to build on previous work. Without the latter half of this balance, creativity would effectively grind to a halt, and the incentive side would be useless. But most of the time, Congress and the courts seem to be serving the needs of those who want to profit from works already created at the expense of those who are trying to innovate and create new works. So it is especially pleasant to report on a couple of recent court decisions that can be seen as efforts to redress that imbalance and give some support to essential users’ rights.

First, there was the ruling in Jacobsen v. Katzer that essentially upheld the enforceability of an open source software license. Open source licenses are contracts (and that was part of the issue) that conditionally waive the holder’s exclusive rights, telling a downstream user that they are free to use the software in ways that would otherwise require permission, as long as they abide by certain conditions. In the Jacobsen case, such a license was challenged on several grounds — that it did not form an enforceable contract, that the terms of the license were not real conditions but merely “covenants” without legal teeth, and that the license was an attempt to enforce so-called “moral rights” which are largely not recognized in the US. The Federal Circuit Court of Appeals rejected these challenges and sent the case back to the District Court to be decided as a contract and copyright infringement case.

What this essentially means is that an open source license — and this likely includes the Creative Commons licenses often used in higher education as well as the more technical software license directly at issue — forms a contract between copyright holder and user that allows the user to use the work according to the terms of the license and lets the rights holder sue for infringement if those terms are breached. This is how these licenses are supposed to work, and it is nice to see a circuit court affirm their proper functionality. This ruling will make it easier for academic authors and other creators to share scholarly work without relinquishing control entirely.

One interesting part of this argument was the assertion about moral rights. It is quite true that the US protects moral rights, including the right of attribution, only for a small group of visual artists.  But that fact does not show why an attribution license is invalid; it shows why such a license better serves the needs of many creators, especially in academia, than copyright law alone does.  With such a license an author can leverage ownership of copyright to enforce the right of attribution when the law alone would not do so.  And attribution, of course, is usually the most important reward an academic author gets from her work. That is why this recent decision upholding these types of licenses is so important well beyond the sphere of software development.

The other important development was a DMCA case that decided that, before sending a “takedown notice” alleging that some particular web posting infringes copyright, the rights holder must consider whether fair use would authorize the particular posting.  This decision tracks the wording of the DMCA very closely, noting that the law permits takedown notices when the posting is not authorized by the rights holder or by law. Fair use, as the court correctly held, is a form of authorization by law (my previous post here observed that this has not been the case in prior DMCA practice). Therefore, a rights holder should not send a takedown notice in a case where a good faith consideration of fair use makes clear that the posting in question is not infringing.

The primary value of this second decision will be to limit the ability of rights holders to use the DMCA system to frighten people and to “chill” legitimate fair uses of commercial works.  The particular case involved one of those transformative uses that are so highly favored in the fair use analysis — a 29-second homemade video of a baby dancing to the sounds of a Prince song.  It should be obvious that such a video, even when available on YouTube, is not a commercial substitute for purchasing the song itself on CD or as an MP3.  So the takedown notice sent to YouTube over this parent-posted video seemed abusive, designed more to intimidate than to protect legitimate commercial interests.  Thus the court allowed the parents’ case against the rights-holder for misrepresentation under the DMCA to go forward, ruling that consideration of fair use is a prerequisite to the proper use of the DMCA takedown notice.  This, too, is a victory for users’ rights and, even more important, for free speech in the digital world.

“Fixing” Fair Use?

Whenever I hear suggestions that fair use should be “fixed,” I am reminded that there are two very different usages of that term. When you get your car fixed, it is returned to the state where it performs as it was meant to do. When you get your dog “fixed,” however, that is not the result. So I approach all suggestions for fixing fair use from the perspective that we do not want to render that important exception to copyright sterile and, thereby, unusable. We may want to fix fair use like you fix a car, but we must be careful not to fix it like you fix a dog.

From this admittedly cynical perspective, I was pleased by what I read in Mark Glaser’s “e-mail roundtable” on the question “Should copyright law change in the digital age?”

Glaser asks two lawyers — Peter Jaszi and Anthony Falzone — and two experts in new media — JD Lasica and Owen Gallagher — how fair use might be changed to better accommodate new uses like remixes that are made possible by digital technology. Interestingly, none of the four suggest actually tinkering with the language of section 107 itself, and both lawyers point out that the vagueness of fair use, while it can be maddening, is actually a strength. Only a flexible and dynamic (to use Jaszi’s words) doctrine can truly be technologically neutral and create the space necessary to experiment with new media and new uses that were unimaginable to the drafters of the law. What makes fair use frustrating and uncertain also makes it adaptable and supportive of creativity. “Fixing” fair use by removing its vague reliance on factors that can be applied in any situation would indeed be like fixing the dog.

Instead, these four experts discuss what might be added to our law to make certain uses that have become prevalent in the digital age less risky. By creating “safe harbors,” for example, that essentially immunize certain acts, at least when done for non-commercial purposes, the fear of using fair use, and the cost of adjudicating it, can be reduced. Lasica goes further and suggests some additional positive rights that could be incorporated into the copyright law, such as the right to make personal back-up copies, to time-shift and to change formats. Both of these suggestions would leave the fundamental structure of fair use, vague and flexible as it is, intact; they would simply take some common digital uses outside of its purview. Fair use would still allow for new technologies and creative uses not yet conceived, but the cost of reliance on fair use would be reduced by specific exceptions for activities that are now well-known and clearly of benefit to consumers. These proposals exemplify the right way to “fix” fair use.

More on e-textbooks

A few weeks ago I did a post suggesting that universities should look at digital textbooks, both licensed and open access, as a way to help students reduce the cost of higher education. The reauthorization of the Higher Education Act, with several provisions related to monitoring of costs, reminds us that lots of eyes are watching that topic.

A couple of recent news items suggest the richness of the opportunities, both for education and for innovative business models, that online textbooks can offer.

First there is this interview with Eric Frank, a founder of Flat World Knowledge, about that company’s venture into creating textbooks that will be freely available online and also can be purchased through a print-on-demand service and even as an MP3. Frank explains very clearly the imbalances of the current system for publishing textbooks, where high prices drive a thriving used book market that undermines sales and drives prices even higher, and where new editions are created not because of changes in the field of study but in order to renew revenues lost to used book sales or piracy. More importantly, Frank describes in considerable detail the alternative business model that Flat World is pursuing, which combines a more consistent revenue stream with free availability for those who want only online access and considerable flexibility for both the original author and other instructors to change and adapt the books for specific pedagogical needs. Flat World has at least 15 schools on board to experiment with its new model for textbook delivery; it is a beta test that should be carefully watched — whether or not it succeeds, it will provide valuable lessons about how we might harness the educational potential of online publishing and break the stranglehold of outdated business models.

On a more whimsical note there is this brief article from the ABA Journal about a law professor who wants to create an animated “case book” for tort law. Professor James Cooper from California Western is proposing that animated videos of some of the most important cases in tort law be available on YouTube for students to study. This is obviously not just an impractical whim; Prof. Cooper has produced numerous short videos on legal topics (available here), including a public service announcement on DVD piracy called IP PSA (in Spanish).

My first thought about this was that the famous case in which NY Court of Appeals Judge (later Supreme Court Justice) Benjamin Cardozo decided that tort liability did not exist when the harm caused was unforeseeable would make a great video. That case, Palsgraf v. Long Island Railroad Co., has great dramatic elements — a moving train, an exploding package of fireworks and a huge set of scales yards away falling on an innocent bystander. It is a set of facts that a first-year student is unlikely to forget, if they read the case in the first place. Unfortunately, the pressures of law school and the arcane nature of some of the opinions lead many students not to bother. The animated gallery of cases that Prof. Cooper suggests cannot replace traditional law school methods, but it could provide a helpful supplement. And since federal judicial opinions are almost always available on the open web, it is at least possible that a combination of this YouTube gallery with some sophisticated linking and added commentary could replace a casebook with an alternative both more economical and more likely to get students’ attention.

The “Law Librarian Blog” asks if this idea is innovative or insulting. From my point of view (as a relatively recent law school graduate), it is both innovative and representative of the kind of experimentation that needs to be taking place. Animation may not be the future direction of law school instruction, but all such experiments will help us arrive at a clearer vision of what that future can be, and they help us break the grip of traditional notions that are no longer working.

Irrational publishing and recursive publics

A courtesy “heads up” from Ellen Duranceau, a scholarly communications colleague at MIT, alerted me to this podcast about scholarly communications with Dan Ariely, the author of the fascinating and best-selling book “Predictably Irrational.” This 20-minute interview is well worth the time for both librarians and scholarly authors who are concerned about the current state of scholarly publishing and interested in its future. I am looking forward to listening to the other interviews that MIT makes available.

Ariely was a Professor of Behavioral Economics at MIT, which is why Ellen is interviewing him, and he recently moved to a similar position here at Duke, which is why she alerted me to the podcast. Ellen deserves great credit for the insight – “I wish I had thought of that” – that Ariely would be a really interesting person to ask about the state of scholarly publishing. Not only because he has recently made the successful transition from obscure academic author to public intellectual, which he discusses in the interview, but because the theories and experiments that have made his work so well-known themselves suggest important insights into the scholarly communications system.

Much of Ariely’s work focuses on the odd things that happen when economic and social norms collide and intermingle, which is exactly what happens in the system of scholarly publishing. Faculty authors are largely driven by social norms and reward structures that are quite different from, and increasingly at odds with, the economic incentives that drive publishers. The result is a strange and dysfunctional system.

During the interview, Ariely refers to his “back of the envelope” calculation that it costs a university over $50,000 to support the production of a single scholarly article, which indicates how badly askew the economics of publishing are, when universities not only subsidize production to that extent but also repurchase that subsidized content after publication. It is precisely because the academy is governed by an entirely different set of social norms that we have allowed the economic situation to get so far out of hand. But Ariely’s endorsement of a more open and accessible system of scholarly communications is not itself, finally, based on these economic conditions. Rather, he has discovered, through his own experiences with the public attention he has received, the great benefit both to the individual scholars and to society, of open and interactive scholarship. The ultimate take-away from this interview for me was that scholarship itself can be improved by reaching out to larger publics and incorporating those publics into the work of research and writing.

As a sort of “proof of concept” of Ariely’s claim, I was interested in the experiment in a new kind of “hybrid” publishing going on with a recent book by Rice University professor Chris Kelty. “Two Bits: the Cultural Significance of Free Software” is published by Duke University Press (you can buy a copy here), but is also available online on this author-maintained website, twobits.net. One can read the book online, comment on its various chapters, and “modulate” with it – use it in small chunks to create new scholarship. Kelty uses the concepts of re-mix and recursive publics to experiment with what we really mean when we say that scholarship builds on the works of others. This experiment with modulations will be the most interesting part of Kelty’s new model of scholarship to follow, but in light of what is discussed in the Ariely interview, I think there are two more basic questions to ask about this kind of hybrid model for scholarly publishing. First, will online availability depress sales of the print book, or will people who come to it first online be motivated to buy a hard copy (as I was)? Second, will the experiment in public comment and reuse really result in improvements to the text and to scholarly output that builds creatively upon it? This latter question is a way of asking if the results that Dan Ariely reports in his interview can really be replicated for scholars who do not attract the same level of celebrity.

Copyright reform — what would “green” copyright look like?

My wife frequently accuses me of finding copyright and other intellectual property issues everywhere, often where no “normal” person would perceive such a question. So I was both surprised and vindicated to see discussions of “green” copyright in a couple of places recently; surprised because even with all my obsessing about copyright, I had never considered how one might make a more eco-friendly copyright law.

The most comprehensive discussion I have read so far about green issues for copyright reform comes from Michael Geist, the Canadian copyright scholar who is leading a powerful grass-roots opposition to the new proposed copyright law in Canada — Bill C-61, introduced in Parliament several months ago. In a column for the Toronto Star, and again on his fascinating blog site, Geist lists several problems with the proposed law that could hamper efforts to improve the environment (or at least slow the harm we are doing to it). Since a major complaint about the Canadian proposal is that it looks too much like US copyright law, it is fair to assume that these “Canadian” issues are US issues as well:

  1. Copyright law can impact our ability to recycle computers and other electronic devices in order to reduce the amount of “techno-waste” that is generated each year. Protections for software in general, and especially prohibitions that prevent circumvention of digital protection measures, can prevent new users from gaining access to recycled devices. It is no secret that Apple wants to sell each of us a new iPhone every year or so, but there is a potential environmental impact to legal enforcement of that business policy. Geist refers to a US case where the potential for this kind of ecological harm was very real — Lexmark v. Static Control Components, in which Lexmark tried to use the DMCA anti-circumvention rules to prevent a competitor from making chips that would allow the re-filling of laser printer toner cartridges. The courts found that such an application of US copyright law would be anti-competitive, but it is worth noting that a contrary decision might also have been anti-environmental.
  2. Protections that restrict copying of software and storage of copyrighted materials on shared networks can inhibit the efficiencies gained through “cloud computing.” If memory-intensive research — crunching huge data sets for example — can be done by a network of computers rather than at a single site, unused capacity can be exploited to reduce the need for multiple institutions to obtain massive computing capacity that may be used infrequently. Copyright law can have a lot to say about whether such shared projects will be feasible.
  3. A similar issue is raised regarding the possibility of consumer storage of memory-intensive materials in networked systems. In the US there already exist network-based video recording services that decrease the proliferation of digital devices that increase energy usage and eventually end up in landfills. US courts have not been consistent in their approach to these services, in part because our copyright law does not directly address the status of copies made solely for personal use. The new Canadian proposal would take up that issue and would authorize only a single copy of consumer-purchased songs or videos. With such a law, not only would consumer choices be severely restricted, the need for many individually owned storage devices would burgeon — good for the consumer electronics industry but bad for the environment.

In addition to these copyright issues that could have significant ecological impact, there are also “green” patent concerns. A recent study has shown the tremendous growth in patents issued for inventions, software and business methods that are aimed at environmental processes and problems. Because there is already so much controversy (and litigation) around software and business patents in general, it is a legitimate worry that the growing number of ecological patents could actually impede the progress of innovation in environmental sciences rather than promote that progress. Patent law, like copyright, is intended to promote innovation through a carefully controlled grant of monopoly, but recent research has shown the significant danger that patents, and the cost of prosecuting and defending them, may be becoming an obstacle to innovation rather than an incentive; a nice, but dated, explanation of the potential problems can be found here, and this book review of 2008’s “Patent Failure” gives a more up-to-date review of the economic evidence that innovation is being stifled. Research into how to resolve our environmental dilemmas is too important to allow it to be slowed by the inefficiencies of our patent system, and this adds another argument for the need for comprehensive reform of US intellectual property laws.

Shaking the money tree

In a talk given at Cornell University last week, Steve Worona of EDUCAUSE said about business models for distributing intellectual property that “every few years the entertainment industry has to be dragged kicking and screaming to the money tree and have it shaken for them.” His point that the first reaction of entertainment company executives is to “tamp down” new technologies in order to protect outdated business models is certainly borne out by recent history. Back in the 1980s, of course, the industry fought hard against the growing use of home video recorders, both in the Supreme Court and in Congress, even as a new business model that would eventually make billions for the studios was being developed in spite of their opposition. No less an advocate for the old ways than Jack Valenti eventually realized that the movie industry lost that battle because they were perceived as anti-consumer. Nevertheless, the recording industry continues to make the same mistake, even going so far as to sue the very consumers on whom it relies.

Are there alternatives? Worona’s talk is very persuasive in its discussion of why old models (based on counting copies) do not work for new technologies (which replicate bits) and how it is possible to develop new models that really can “compete with free.” I have written about such models before, and also noted in a post last week this article by Tim Lee about an alternative path for copyright law that could support such new ways of profiting from intellectual property without crippling technological innovation. Some of those alternatives deserve further discussion (and a lively discussion continues on the Cato Unbound site).

First, it is worth noting the survey, reported by Ars Technica, that suggests that young people are willing to pay for music if it is offered on terms that seem reasonable to them. Although I can imagine the skepticism this will generate within the content industries, it at least suggests that innovations, rather than lawsuits, are worth a try; both may be risky, but the rewards will be greater from the former.

Lee’s article briefly catalogs a variety of business models, in several different content industries, that rely on new ways to make a profit. One that caught my eye was the Web service called Imeem, which combines a legal music downloading service with social networking opportunities. Revenue is generated through advertising, and the music is licensed using revenue-sharing agreements with the four major record labels. Users can create and share playlists and download music from those shared lists for free. As Lee says, “It is only a slight exaggeration to say the Imeem deal amounted to a de facto legalization of online file sharing, provided that the labels get a cut of any associated revenues.” Is this the future of the music business? I don’t know for sure, but I do know that I, as a music lover who has never obtained a music file from any online source other than iTunes, will now be looking to Imeem first; legal, ad-supported free music certainly works for me.

In his talk at Cornell, Worona suggested that, when a business learns that it will have to compete with free – with someone offering the same or a substitutable product at no cost – the appropriate response is not to call the FBI, as the recording industry has done, but to call its own marketing department. That is what Imeem has done, and they are giving the money tree yet another shake; let’s hope the music industry is paying attention this time.

Bad strategy and poor reporting

It is hardly surprising that the recent effort by the Associated Press to stop bloggers from quoting news articles, even when they link to the source on AP’s own web site, has generated lots of comment in the blogging world. AP recently sent takedown notices, using the procedures outlined in the Digital Millennium Copyright Act, to try to have blog posts that quoted as little as 35 words of an AP story removed from the Internet. There has been enough coverage that it seems unnecessary to rehearse all the commentary; there is a story at Ars Technica here, and one from the Electronic Frontier Foundation here. Basically, most of the coverage makes the same two fairly obvious points: this is a terrible strategy from a public relations point of view (as even AP now admits), and it represents an interpretation of fair use that would entirely eviscerate that vital exception if accepted by the courts.

What does deserve extended comment, however, is one of the news stories that repeats a couple of common misconceptions that need to be dispelled. This report on the E-Commerce Times site offers the opportunity to clarify and correct two important errors about the DMCA and fair use.

First, the E-Commerce story quotes a source who refers repeatedly, and defiantly, to “this ruling.” This is probably just careless language, but it also reinforces the mistaken notion that receipt of a DMCA takedown notice means that infringement definitely has taken place. In fact, a rights holder sends a takedown notice, using very specific provisions that the DMCA added to chapter 5 of the copyright act (17 U.S.C. 512), because they merely believe that their copyright is being infringed. There is no required quantum of evidence beyond a “good faith belief that use of the material… is not authorized,” nor must a rights holder consider possible defenses to the claimed infringement. These provisions were never intended to substitute for a judicial determination on the question of infringement; they are intended, instead, to help the ISP avoid liability for any possible infringement by users of the service. The ISP does have to remove the material or block the user upon receipt of a takedown notice, but it also must notify the user of the action and restore the material if the user sends a counter-notice stating their own good faith belief that the removal was wrongful. Thus the notice and takedown process helps establish whether there really is a conflict and gives the ISP a protected role when there is, but it leaves the resolution of the issue of infringement up to a court. The mere fact that the AP sent these initial notices is in no way any sort of “ruling” or definitive decision.

The second error in the E-Commerce story is its reference to “the fair use provisions of the Digital Millennium Copyright Act,” which, we are told, the AP hopes to clarify. There is, of course, no fair use provision in the DMCA; fair use is much older than that piece of relatively recent legislation. Indeed, fair use is a doctrine initially created by judges in the 19th century (in the US) to mitigate the harmful effects of the copyright monopoly. The DMCA, which was enacted only in 1998, does not add anything to the fair use analysis, nor does it, in theory, narrow its scope; where fair use is mentioned in the DMCA, it is only to emphasize that Congress did not intend the provisions of the DMCA, which attempt to deal with some of the new issues arising in a digital environment, to alter the applicability of fair use.

This last point is important, because it reminds us that we are not dealing with any new provision about what uses are acceptable in the digital realm. Instead, the same old provision about fair use (17 U.S.C. 107), which emphasizes the privileged status of news reporting and has traditionally been held to protect short quotations, would be applied in deciding whether or not these passages from AP news stories were used by bloggers in a manner authorized by law. The assertions by AP that these uses are not fair use seem difficult to credit, but the point is that a court would have to decide the issue (if the AP decided to push that far; it is a much more costly and serious step than merely sending a takedown notice), and the standard used to make that decision would be the familiar four factors of fair use, just as they were outlined by Justice Story in 1841.

Everything old is new again?

Some intellectual property issues are hardy perennials; they bloom anew with great regularity. One such issue is the doctrine of first sale, which in other countries and other contexts is sometimes called the doctrine of exhaustion. However it is named, it refers to the nearly universal practice of holding that the “first sale” of a particular embodiment of intellectual property – a copy of a book or a CD – “exhausts” the exclusive right of the copyright holder to control further distribution of that embodiment. It is the right of first sale that allows used book stores, video rentals and lending libraries to flourish.

First sale has never been popular with the content industry; both licensing arrangements and DRM can be seen as modern attempts to exercise control over the downstream use and distribution of IP beyond what is allowed by copyright law. Back at the beginning of the 20th century, in fact, the Supreme Court had to deal with a case involving what I like to call the first “end user licensing agreement.” In Bobbs-Merrill Co. v. Straus (1908), the Court found that an attempt by a publisher to mandate the retail price at which stores could sell the book “The Castaway” by Hallie Rives failed because of the doctrine we now call first sale. The publisher of this obscure novel inserted a “requirement” underneath the copyright notice that the retail price of the book must not be less than one dollar, and sued the store owned by Isidor Straus – Macy’s – when it sold copies for less.

In the past few weeks, two cases have been decided, one in the copyright arena and one dealing with patents, that again remind us of the continuing importance of first sale/exhaustion in a balanced system of IP protection.

In Universal Music v. Augusto the facts sounded strangely similar to Bobbs-Merrill; a music company tried to distribute free promotional CDs of its music and prevent the resale of those CDs by simply placing a notice on the face of each disc. In granting summary judgment for the eBay vendor who resold some of these CDs, Judge Otero of the Central District of California noted that this kind of restraint on subsequent transfer had been rejected over 100 years ago. Also rejected in this decision was the attempt to create a license transaction merely by a one-sided statement that a license was what was occurring. The court rightly found that the CDs were transferred to the recipients (by gift, in this case) and were therefore subject to the exhaustion of the distribution right.

The patent case, Quanta Computer v. LG Electronics, also involved an attempt to control subsequent uses of a product embodying a patented process after the initial sale of that product. LG sued to collect a licensing fee from Quanta because Quanta used chips containing a process patented by LG, even though those chips were manufactured by an intermediary company (Intel) that had itself licensed the process from LG. In essence, LG wanted a cut on every downstream product that contained the already-authorized chips, but the Supreme Court said no: “The authorized sale of an article that substantially embodies a patent exhausts the patent holder’s rights and prevents the patent holder from invoking patent law to control postsale use of the article.”

As sturdy as these recurring issues are, however, we should not conclude that copyright law is ticking along without difficulty, adequately resolving conflicts in the 21st century with its arsenal of 20th century doctrines. The current issue of “Cato Unbound,” on “the future of copyright,” does a superb job of alerting us, if we didn’t already see it, that copyright law is struggling to keep up in the digital age. The lead essay, by Rasmus Fleischer, begins with the fascinating point that in the 21st century we have moved to using our copyright law to regulate tools rather than content. In a digital age, he points out, many of the distinctions our law relies upon, like the difference between copying and distribution, no longer make any sense. As Fleischer says, “the distinction is ultimately artificial, since the same data transfer takes place in each.” This point undermines the comforting thought, expressed above, that first sale, for example, is still doing its job in copyright law, since the move to a digital environment makes application of an exception to the distribution right, but not to the right of reproduction, highly problematic.

Fleischer’s article goes on to paint a fairly gloomy picture about a “copyright utopia” being advocated by the content industries, especially big entertainment companies, that could seriously undermine both technological innovation and civil liberties. He ends with the “urgent question regard[ing] what price we will have to pay for upholding the phantasm of universal copyright.”

In a reply essay, “Two Paths for Copyright Law,” Timothy B. Lee suggests that things may not be as bleak as Fleischer suggests. He reminds us that it is only a very recent development that anyone has even considered questioning the legality of private, non-commercial copying for home use, and he opines that the effort to now assert control over such copying has already proved a failure. The alternative — the second path for copyright — is, as has been suggested before in this space, the development of new business models, which will largely be funded by advertising, to meet the non-commercial demand for content. The role of copyright law, in this scenario, is to protect content creators from unfair and unauthorized commercial exploitation of their works by competitors. It is commercial competition that copyright is intended to regulate, he suggests, not use by consumers. And he catalogs a wide variety of business models already being adopted by the major content industries, even as they pursue lawsuits against customers and seek strict laws from Congress, that seem to recognize the inevitable move towards a market solution, rather than a legal one, to the challenges posed by new technologies.

Some copyright doctrines have remained unchanged for over a hundred years, yet we have to adapt to rapid innovation even as we preserve what works in our law. The essays by Fleischer and Lee paint two different pictures of the future of copyright; the attraction of Lee’s vision, for me, is that it looks at what copyright has traditionally been designed to accomplish, the control of commercial competition, and offers hope that if we stay focused on that role for the law, the market will adjust to the technological innovations for users that currently so frighten the content industries.