Category Archives: Technologies

The World Blind Union, Amazon and the Author’s Guild — more from the eIFL conference

One of the most passionate and compelling speakers at the eIFL 2nd IP conference in Turkey last month was Chris Friend, who is the strategic priority leader for the World Blind Union’s Right to Read initiative and also works with Sightsavers International, training blind leaders in Africa.  A couple of private conversations with Chris and his wife Judy gave me a much-needed education on the copyright issues facing vision-impaired people and the wide array of technological solutions that could be available if the IP problems were solved.  Also, our hotel room was next to that of Chris and Judy, so my wife and I were often lulled to sleep by the rhythmic sound of his text reader.  At the conference, Chris presented the World Blind Union’s proposed treaty before the World Intellectual Property Organization “for blind, visually impaired and other reading disabled persons.”

The treaty, which is linked in a variety of formats from this page by Knowledge Ecology International, makes for interesting reading.  It represents a carefully constructed effort to craft an exception to international copyright law that would make it easier for visually impaired people to find books in accessible formats.  Of course, WIPO has not been very interested, until recently, in harmonizing exceptions and limitations to copyright law, only protection.  But there are signs that that is changing, and the WBU proposed treaty would be a great place for WIPO to start.

The treaty includes five provisions that I want to highlight.

First, it would permit users to reproduce works into accessible formats without authorization and to distribute those formats on a non-profit basis exclusively to visually impaired persons (article 4a).  Second, it would permit distribution on a for-profit basis if the work is not reasonably available in an accessible format (article 4c).  Third, it provides a useful definition of what “reasonably available” means, pegged to the price of the non-accessible version of a work and distinguishing between what is reasonable in the developed world and what is reasonable in the developing world (article 4d).  Next, the proposed treaty includes a provision to permit circumvention of technological protection measures when those measures would prevent the creation of accessible formats (article 6).  Finally, the treaty would explicitly state that contractual provisions that are contrary to the treaty would be voided (article 7).  These last two provisions are extremely important as any discussion of harmonizing limitations and exceptions gets started, and we should be grateful to the WBU for stating them so clearly and in such a compelling context.

All this took on added urgency for me this week as another group that represents visually impaired people, the National Federation of the Blind, held a protest outside the headquarters of the Author’s Guild.  The protest, about which there are photos and a story here, was over the pressure brought to bear on Amazon to disable the text-to-speech features on its Kindle 2 e-book device.  As I have written earlier, the legal claim made by the Author’s Guild that Kindle was infringing their copyrights was insupportable, but nevertheless, Amazon chose to cave in rather than risk a court battle, even one it could clearly win.  I find myself wondering why, if the Kindle feature is a copyright infringement, the Author’s Guild is not also opposing the text-reading software that Chris Friend was using in Istanbul; could it be something as obvious as avoiding really bad PR?  Anyway, the National Federation of the Blind is now taking the Author’s Guild to task for opposing a technology that, whatever other uses it might have, would be a great boon to the visually impaired.  Kindle 2 is not an ideal technology for blind people — one must still see well enough to turn pages in order to use it — but the text-to-speech function, combined with Amazon’s wide array of available e-books, would surely help a great many people experience literature that would otherwise be unavailable to them.  Copyright law, properly understood, should not and does not stand in the way of this benefit.  Nevertheless, the flap over Kindle 2 helps make the point that exceptions for the blind and visually impaired must be built into copyright law at the highest level in order to prevent self-serving misinterpretations from further burdening those who want to exercise their “right to read.”

Kindle 2, public performances and copyright

I had rather hoped to stay away from the controversy being generated by the new Kindle 2 Book Reader from Amazon and its “text to speech” feature that will allow the reader to offer a computer-generated audio reading of e-books, but there are copyright issues here too good to ignore.  It is hard to make sense of the claims being made in this kerfuffle, but it may be worth the effort in order to clarify what copyright does and does not protect.

In a widely-ridiculed public pronouncement, a spokesman for the Author’s Guild denounced the audio feature of Kindle as an infringement of copyright, even though the e-books sold by Amazon are, of course, licensed from the publisher.  He is quoted as saying that “they,” meaning consumers, “don’t have the right to read a book out loud… That’s an audio right, which is derivative under copyright law.”  This led many to trot out a parade of horrible consequences, suggesting that parents might be sued by the Author’s Guild for reading “Goodnight Moon” to their children.  So the president of the Author’s Guild took to the New York Times op-ed page to explain that that was not their intention.  Unfortunately, his piece does not explain what the claim really is.  He merely says that the Guild collects separate royalties for audio books and for e-books and that Kindle would “swindle” authors out of that double fee.  From a copyright perspective, it is interesting to try to sort out what infringement, if any, is involved in this “swindle.”

One way to look at this, of course, is as a simple contract dispute, and contract provisions are probably the way to settle this.  Authors and publishers can simply charge Amazon more for the e-book license to compensate for the potential decline in audio book sales when those e-books are “read” by Kindle.  Other e-book platforms would pay a lower price if they do not provide a text-to-speech function, and both sides could monitor to see if audio book sales really do decline.  For e-books already licensed to Amazon, the Author’s Guild could try to claim that this feature of Kindle 2 breaches the license terms, and try to demand additional money.  The public spat is likely an attempt to force such renegotiation.

But it is more interesting to ask if any copyrights are being infringed.  When a parent reads to a child, this is a private performance of a work that does not infringe any of the rights under copyright.  It is very important to remember that the performance right in copyright is only an exclusive right to authorize or deny PUBLIC performances, defined as a performance “at a place open to the public or at any place where a substantial number of persons outside of a normal circle of a family and its social acquaintances is gathered.”  Based on this definition, it is unlikely that Kindle would ever infringe the public performance right under ordinary use.

This distinction of performances is undoubtedly why the Author’s Guild spokesman spoke of a derivative right.  But he was simply wrong to refer to an “audio right” which is “derivative under copyright law.”  There is no separate “audio right”; there is only the public performance right discussed above and another exclusive right over the preparation of “derivative works.”  So a lot turns on whether an audio reading of a text can be called a derivative work.

It is generally thought that a derivative work must itself be an original work of creative authorship.  So a translation of an English text into Hindi involves new creative expression, as does the creation of a film from a novel; these are classic examples of derivative works, and each involves the reuse of protected expression in combination with new creative authorship.  So a translator or a filmmaker must get a license from the original author to create these works, in which there are subsequently (at least) two copyright interests.  But an audio reading adds no creative expression, so it is hard to see how it is a derivative work.  In this fascinating article, Julian Sanchez analyzes this argument very nicely, and suggests an exception — an abridgment has been held to be a derivative work, even though it does not add original expression to the original.  I think there are historical reasons for this, but I will let Sanchez explain the ins and outs of this debate to those who are interested.

What I want to add to this discussion is an additional argument for why an audio reading should not be considered a derivative work.  There is a long-standing rule of statutory interpretation that instructs courts to read laws in ways that do not render parts of the language used by legislatures irrelevant; we do not want interpretations that make whole portions of a law redundant or unnecessary, since we assume legislatures did not intend those readings.  If an audio reading is interpreted as a derivative work, we would have just such a reading, because that interpretation would make the public performance right “mere surplusage.”  Why would Congress include a specific right over performance, and limit that exclusive control to public performances, if ALL audio readings were derivative works and therefore subject to the author’s control based on a different exclusive right?  Audio readings are not derivative works because, unlike abridgments, they are subject to a different right, and we must assume that Congress intended that right (public performance) to circumscribe the control an author should have over readings of his or her work.

POSTSCRIPT — In the interval between writing this post and publishing it, the news has come out that Amazon has agreed to make changes to the Kindle.  I am afraid this merely reflects the chilling effect of a lawsuit threat; it does not change the legal analysis.  It does suggest, though, that the Author’s Guild won by making a very weak claim, but making it loudly.

UGC and you

A really superb article has just been released in the American Bar Association Journal called “Copyright in the Age of YouTube.”  Starting from the story of the dancing baby, whose 30-second home movie was the subject of a DMCA takedown notice for allegedly infringing copyright (a story I have written about before), this article does a really nice job of outlining the current issues about how copyright law addresses, or fails to address, the explosive growth of user generated content on the Internet.

Steven Seidenberg, who is described as a lawyer and freelance journalist, has done an enviable job of describing complex issues in a readable and even enjoyable way.  I recommend the article for anyone interested in how the law struggles to keep up with rapid and dramatic changes in technology.  The analogy between trying to police user generated content and playing “whac-a-mole” is just right.  But apart from its general quality and usefulness, there are two points made in this article that I want to highlight.

First, it is fascinating to see the suggestion made by Seidenberg that the Digital Millennium Copyright Act, which was passed in 1998 and took effect in 2000, may already be out-of-date.  I have often remarked about the difficulty of applying a copyright law written for the photocopier to the Internet age; Seidenberg goes a step further by noting that even the DMCA was written for a simpler set of Internet technologies — bulletin boards and Usenet groups — and may not be able to account for the huge popularity of user generated content that is now born on the Internet.  Seidenberg’s neat summary and comparison of two court cases dealing with UGC — the as-yet-undecided case of Viacom v. YouTube and a less well-publicized decision involving Veoh Networks — explains the interplay of laws and the uncertainty of results in this environment.

The second point that seems significant to me concerns a complaint sometimes heard about the DMCA and the ruling in the dancing baby case: that they place too great a burden on the content companies by requiring them to evaluate the possibility of fair use before they send a takedown notice.  Although I understand the argument that a business needs to streamline those processes that merely protect profits and do not generate them, it is hard to be too sympathetic when, in the higher education environment, we are called on to make many decisions about fair use every day, usually without recourse to a stable of lawyers.  The law of fair use is intentionally, and I think correctly, open-ended and flexible; it does not lend itself to streamlined or automated processes precisely so that it can remain useful as new innovations and technologies come along.  The burden of having to make individual fair use decisions is shared by users and content creators alike; it may not be efficient, but in the end the social benefit of doing things this way clearly outweighs the costs.

The last part of Seidenberg’s article addresses this general point as it discusses attempts to convince Congress to shift more of the burden for copyright enforcement off of the big content companies.  If they have not yet been successful in getting courts to force Internet providers like YouTube to police copyright for them, legislative efforts like the PRO-IP Act to get the government more involved in protecting these private rights have borne more fruit.  Whether taxpayer-funded enforcement of private copyrights will ultimately squelch the growth of user generated content is an open question, and it is one that may play a huge role in defining the future for creativity and innovation.

Debating Internet regulation

Last week Federal Communications Commissioner Robert McDowell spoke with a small group of Duke administrators about a wide range of topics.  In response to one question (which was, I have reason to know, deliberately provocative), Commissioner McDowell, who is a Duke alum, gave a pretty ringing endorsement of the unregulated Internet.  He referred approvingly to the history of the Internet as an open environment that has, throughout its history, been free of government regulation.

McDowell chose to ignore, in these comments, the pre-history of the Internet as ARPANET, a creation of the Defense Department’s Advanced Research Projects Agency.  But really his position is the one from which I want a government regulator to start; a stance of healthy skepticism toward regulation is the best way to ensure that careful thought precedes regulatory enactment.  While suspicion of regulation is almost always a justified foundation, however, it is not necessarily the final word on the matter.

The context of the question Commissioner McDowell was answering was ‘Net neutrality, and in that context it is particularly easy for the FCC to oppose regulation, since that is the position favored by the major telecoms.  But it is far too simple to say that as long as the government keeps its hands off, the Internet will stay unfettered and equally accessible to all.  Commissioner McDowell clearly knows this, and his assertion was that competition is the best way to prevent private entities from closing off the Internet pipes to certain types of traffic.  But he also noted that the economic downturn has delayed the implementation of additional pipes, and it is still true today that the backbone of the Internet is in the hands of only a few major corporations.

The fear here is that these companies may find it desirable to implement differential pricing — charging more for certain kinds of traffic — and that regulation might be necessary to preserve the openness that has, so far, been a hallmark of the Internet.  ISPs might, for example, decide that voice-over-internet phone services compete with another part of the business of their parent telecoms and introduce higher prices for VOIP to choke off such services.  UPDATE — As this report indicates, this is a very real concern that the FCC continues to monitor.

A similar decision to charge more for high-bandwidth uses could be implemented in a misguided attempt to prevent video piracy.  Illegal video downloads, of course, use a lot of bandwidth, but so do perfectly legal file transfers.  The danger with these kinds of “solutions,” whether they are differential pricing, filtering, or agreements between ISPs and content companies, is that they are likely to exclude too much content and too many users.  When this happens, the free speech goals which copyright is meant to serve are undermined, often in the name of copyright protection.

The recent announcement of a new anti-piracy strategy from the RIAA, and the continued behavior of YouTube toward repeat notices of copyright infringement, illustrate this danger.  The RIAA has agreements with some ISPs (it is not saying who) to cut off Internet access for those accused of repeated illegal downloading.  But we know that the RIAA has not been very careful about its accusations in the past, so there is a real concern that users will lose access based on inaccurate information and poorly substantiated charges.  And even before the RIAA’s new strategy is put in place, we know this kind of abuse is happening.  Here is a report from the Ars Technica blog about a case where what is quite likely to be fair use — the posting of film clips on YouTube to augment an online critical essay — has led to the author having his YouTube account shut down because of DMCA notices that claim infringement but do not have to prove it or to take into consideration any of the myriad ways the uses on YouTube might be justified.  By disconnecting users after “three strikes” based on mere accusations, YouTube is already implementing the practice the RIAA is negotiating with ISPs.  And we can see that that process is ripe for abuse.

The moral here is that regulation of the Internet is a complex topic.  Reliance on the market alone will not always guarantee that the ‘Net will remain open and accessible on an equal basis for all.  As more and more basic and vital information and services become Web-based, such access must be preserved.  The trick will be to figure out the right moment and the right way to preserve access, but the time will come when those decisions must be faced, since we have already seen that reliance on market forces and good will alone will not suffice.

What is DRM really good for?

As the Library of Congress considers new exceptions to the anti-circumvention rules that legally protect the DRM systems that are used by many companies to lock up digital content of all kinds, it is helpful to consider if those protections really accomplish what they were intended to.

Digital Rights Management systems, or electronic protection measures, are technological locks that “physically” prevent uses that are infringing, as well as many uses that would not be infringing if they were possible.  The justification for using DRM is that it is necessary to prevent the widespread infringement that the digital environment enables, and thus to protect the revenues of content creators.  Those revenues, it is argued, provide the incentive that keeps Americans creating more movies, music, books, etc.  This purpose seemed so important in 1998 that the Digital Millennium Copyright Act included rather draconian legal protection for DRM systems, making it illegal to circumvent them even when the underlying purpose of the use would itself be legal.  But the juxtaposition of two recent blog posts raises an interesting question about whether DRM really does what is claimed, and whether what is claimed is really its purpose in any case.

First is this report from Canadian copyright professor Michael Geist noting that for the third straight year sales of digital music (a prime type of content “protected” with DRM) have grown faster in Canada than they have in the United States.  This growth comes in spite (?) of the fact that Canada does not have the same legal protections for DRM systems that the US does.  Apparently the incentives for creativity are just as strong, or stronger, in Canada, where circumvention is not punishable, as they are in the US, where we are told that those who circumvent and those who market the technology to circumvent must be stopped lest creativity grind to a halt.  The reality, as Geist points out, is that “copyright is simply not the issue,” and government intervention to drastically strengthen the copyright monopoly has not provided the promised benefit.

So why is DRM really so important to big content companies?  On the Electronic Frontier Foundation’s blog, Richard Esguerra gives us a more convincing answer when he notes that Apple is finally dropping DRM from the music files it sells through its iTunes store.  The timing, he suggests, shows that the big content companies really use DRM to eliminate competition and enforce a captive market; as soon as that purpose becomes moot, the companies drop the DRM.  It is no surprise that DRM is a marketing problem, especially for music, where it often prevents users from moving files from one device to another.  As long as the expected benefits in reduced competition outweigh the loss of sales, DRM is defended as a vital part of the copyright system.  But it is abandoned without a qualm once it no longer serves that anti-competitive purpose and threatens instead to hamper profits.

If DRM systems really are being used primarily to suppress competition and prevent innovation, they are working directly in opposition to the fundamental purpose of copyright law that they were sold to us as supporting.  Read together, these two reports suggest that tinkering with exceptions, as the Library of Congress is charged to do every three years, is not enough; instead, the value of the whole idea of providing legal protection to DRM should be reexamined.

Just ’cause you’re paranoid…

When I wrote a post about a week and a half ago called “Can Copyright kill the Internet?,” I worried that my title might be considered a little bit extreme.  After all, the Internet is a big, sprawling “network of networks;” surely the puny efforts of legal enforcement cannot really do that much harm.  In some senses this is true, since it is difficult to apply national laws to the persistently international Internet.  On the other hand, as I pointed out in the earlier post, a business wanting to engage in commerce on the Internet has to take account of national laws around the world, and is frequently circumscribed by the most stringent law to be found regarding any particular activity.

But what really convinced me that my earlier post was not exaggerating the threat was this news item from Ars Technica called “‘Net filters “required” for all Australians, no opt-out.”  Incredibly, to my mind at least, Australia is moving ahead with a plan to force Internet Service Providers to impose filters on ALL Internet access in the country to filter out “illegal” content.  The government would maintain two “blacklists” of material that must be blocked.  Australians who requested “unfiltered” access would not have material on the “additional material” blacklist blocked, but there would be no way to get access to Internet sites that the government deemed illegal and so put on its principal list of blocked content.

There are many problems with this plan, but I want to focus on two.  First, filters never work.  It is usually possible to get access to “bad” content in spite of the filter, and filters almost always filter out useful content as well as the bad stuff.  In the case of this plan, the task of updating the blacklist will be monumental, as banned material can switch locations and URLs faster than the content police can keep track.  And even when content is blocked, the blocking itself will serve as a challenge to many sophisticated computer users to find a way around the filter and gain access to the site.  Digital locks are usually broken in a matter of days, and the unfortunate result of filters has always been that law-abiding users find their choices of legitimate content constricted, while those who want to violate the rules find ways to do so.
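The cat-and-mouse dynamic described above can be seen even in a toy model.  The sketch below is purely illustrative (the URLs and the exact-match approach are hypothetical, not drawn from the Australian plan or any real filter): a naive blacklist blocks only the exact addresses it lists, so trivial variants of the same content slip through, which is why such lists demand constant, and ultimately futile, updating.

```python
# Toy illustration (not any real ISP filter): an exact-match URL blacklist,
# showing why this style of blocking is so easy to defeat.

BLACKLIST = {"http://banned.example/page"}  # hypothetical blocked URL

def is_blocked(url: str) -> bool:
    """Return True only if the URL appears verbatim on the blacklist."""
    return url in BLACKLIST

# The listed URL is blocked...
assert is_blocked("http://banned.example/page")

# ...but trivial variants of the very same resource are not, so the
# list must be updated faster than content can move or be mirrored.
assert not is_blocked("http://banned.example/page?x=1")  # query string added
assert not is_blocked("https://banned.example/page")     # scheme changed
assert not is_blocked("http://mirror.example/page")      # content mirrored
```

Real filters are more sophisticated than this, of course, but the underlying maintenance problem — every evasion requires a new list entry — is the same one the Ars Technica report describes.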

The other problem, of course, is deciding what constitutes “illegal” material.  Few would dispute the need to reduce the amount of child pornography available on the ‘Net, but there are lots of other categories of sites where there is a legitimate debate.  What is defamatory in some countries, for example, is protected as political speech in the United States.  Will Australian officials be able to keep criticism of government policies (like this) off of Australian computers by declaring it “illegal” because it is potentially libelous?  What about material that potentially infringes copyright?  Will all such material be blocked?  And how will that determination be made?  Many sites — YouTube is the most obvious example — contain material that is authorized by the rights holder as well as videos that are clearly infringing.  Is YouTube a legal or an illegal site?

Ars Technica has followed up its original post with this one noting that the government in Australia is trying to suppress criticism of its plan.  This strengthens the fear that the filtering plan might be used to silence opposition, even though there ought to be a clear distinction made between what is illegal and what is merely dissent.  The article also notes that the point made above — that filters seem seldom to work very effectively — is being borne out in this instance.

So here is a concrete example of terribly bad policy that really does threaten the existence of the Internet as the revolutionary tool for democratic communication that it ought to be.

What does PRO-IP really do?

President Bush signed the “Prioritizing Resources and Organization for Intellectual Property Act of 2008” — PRO-IP — on October 13, making it Public Law 110-403.  Since then a lot of news reports and blog posts have denounced the law, and I have noticed that a number of them claim negative aspects of the bill based on previous proposed versions.  One article last week linked to a report about the bill that was a year old and announced an aspect (about which I also wrote way back then) that actually was removed from the bill as it was finally passed and signed.  So I spent my weekend reading the actual text of the final, adopted version to see what was and was not still there.  The link above, from Washington Watch, includes both the text of the bill as signed and some analysis of it; here is a news report that also reflects the content of the bill correctly.

First, what is not in PRO-IP?  The two most objectionable features, from my perspective, were both removed before final passage.  First, earlier versions included provisions that would have dramatically increased the statutory damages available in copyright infringement cases.  The obvious purpose of this provision was to make more money for the RIAA when it sues file-sharers, since the structure of the change would have increased the potential penalty for infringing a music CD by 10 or 12 times.  That provision was not included in the final version.  Also dropped was a provision that would have allowed the Justice Department to pursue civil (as opposed to criminal) copyright lawsuits, a provision that, as one commentator put it, would have made federal employees essentially pro bono lawyers for the content industries.  Because the Justice Department itself objected to the provision, it was omitted as well.

So what is left?  Plenty of taxpayer money being spent to help out a few large content industries is the short answer.  The Congressional Budget Office estimates that PRO-IP will cost over $420 million over four years.

PRO-IP has five sections.  The first, dealing with civil enforcement, lowers the procedural barriers for bringing infringement lawsuits, and it allows for seizure and impounding of allegedly infringing products while the lawsuit is pending.  It also raises the statutory damages available for counterfeiting of trademarks.  The second section “enhances” criminal enforcement measures in a parallel way.  Primarily, it allows for the seizure and ultimate forfeiture of infringing goods and any equipment used to infringe.  The potential effect here is that computer equipment used for widespread and willful infringement could be seized in exactly the same way that cars and boats used for drug crimes are now taken by law enforcement.

With sections III and IV, PRO-IP really starts spending your money; over 55 million dollars a year is explicitly appropriated to increase federal and local enforcement efforts.  At the top, a new executive branch official is created — the Intellectual Property Enforcement Coordinator, or IP Czar, as the position has been called — whose job is not to seek balance in our copyright law, as is arguably the role of the Register of Copyrights, for example, but directly to expand the role of the federal government in protecting these private rights.  The section also creates a new enforcement advisory committee, replacing an earlier group with one whose membership is significantly expanded.  This group is specifically charged with gathering information about the alleged cost of IP infringement that is used by the industry in its lobbying efforts.  Now taxpayers will pay for that research.  Indeed, this federal official is essentially a Cabinet-level lobbyist for Big Content.

PRO-IP also requires the assignment of over a dozen FBI agents to full-time IP enforcement; it is not clear if these are new agents or ones who will be reassigned from lower-priority duties.  Twenty-five million dollars are also allocated for grants to local law enforcement to pursue those dangerous file-sharers, and 20 million to hire more investigators for the Department of Justice.  The bill closes with a “sense of Congress” section that heaps great praise on the content industries and repeats much of the propaganda that those industries distribute to support their claim that federal intervention to protect their outdated business models is necessary.  It also informs the Attorney General of the United States that IP enforcement should be “among his highest priorities.”

As is probably clear, I think PRO-IP is still bad legislation.  The provisions that most threatened to have a further chilling effect on higher education have been removed, but the bill still, in my opinion, is a huge gift of money to the major content industries.  The result will be that taxpayers will shoulder even more of the burden of fighting their desperate battle to prop up a business model that both consumers and the technologies they use have passed by.  Instead of looking for new ways to enhance and market their products, these industries continue to resort to legal enforcement that is bound to fail (see this report from the Electronic Frontier Foundation on the fruitless campaign of the past five years), and they have now convinced Congress to invest much more taxpayer money in that effort.

Can Copyright kill the Internet?

The question seems extreme, and it is certainly rhetorical.  But the potential for copyright challenges to significantly limit the range of activities and services available on the Internet is very real, and severe limits on the full potential for digital communications could be imposed.

One of the great strengths of the Internet — its completely international character — is also one of its greatest weaknesses.  Since laws change across national boundaries, but the Internet goes merrily along, online services can potentially be made subject to the most restrictive provisions found anywhere in the world.

In the US, for example, there is solid case law holding that thumbnail versions of images used in image search engines are fair use; Kelly v. Arriba Soft and Perfect 10 v. Google are clear examples of this principle.  But fair use is largely unique to US law; it does not exist in most other countries.  So when Google’s image searching was challenged in a German court on copyright infringement grounds, the company did not have fair use to rely on for its defense, and it lost the case earlier this week.  The German court held that this valuable tool infringes copyright if the thumbnail images are used without authorization, even if the use is to provide an index that helps users actually find the original.  There are reports about the decision here and here.

How will Google react to this decision?  First, it will almost certainly appeal.  Ultimately, it might have to employ some kind of technological measure to prevent users in Germany from seeing thumbnails in image search results, an outcome that would harm businesses in Germany more than it would harm Google.  It is very unlikely that Google would have to shut down its image search feature, but multiple decisions of this kind might force a reexamination of how Google provides services worldwide.  A similar case, involving the sale of Nazi memorabilia in France, led Yahoo to make exactly that sort of system-wide change.

The general lesson here is that the current copyright regime throughout the world is in a fundamental conflict with the openness and creativity fostered by the Internet.  Most companies today want to do business on the Internet, but few are willing to embrace the fundamentally open nature of the medium.  The resulting conflict really does threaten to constrict the role the Internet can play in our lives.

The conflict is the subject of an interesting article from The Wall Street Journal by Professor Larry Lessig of Stanford, a short teaser for his forthcoming book “Remix.”  Lessig suggests that the copyright “war” over peer-to-peer filesharing risks significant “collateral damage.”  That damage would come in the chilling effect that frivolous lawsuits and poorly-researched DMCA “takedown notices” could have on new forms of creativity and art — the products of the remix culture which, Lessig argues, offers a return to an era when amateur artists could thrive.  This culture offers “extraordinary” potential for economic growth, according to Lessig, if it is not choked off by aggressive enforcement directed at a very different activity.  To prevent that, he offers five changes that could make our copyright law less of a threat to the innovation and creativity the Internet encourages.

Will copyright kill the Internet?  No.  But copyright will need to be revised to account for the new opportunities that the Internet creates, lest we find ourselves unable to exploit those opportunities.

PS — This story about the McCain/Palin campaign fighting back against DMCA takedown notices being used to force YouTube to remove campaign videos that contain short clips from news programming is another example (if we needed one) of the potential for abuse of the copyright system to chill important speech on the Internet.  It is good to see the McCain camp fight back, but I wonder if it is really YouTube’s job to evaluate the merits of the takedown claims.  A court recently told content owners that they must consider fair use BEFORE sending a takedown notice; I wonder if the better course isn’t to pursue some kind of sanctions against those who send clearly unwarranted notices.

Chipping away

Digital rights management, or DRM, is a delicate subject in higher education.  Also called technological protection measures, these systems to control access and prevent copying are sometimes used by academic units to protect our own resources or to fulfill obligations we have undertaken to obtain content for our communities.  Sometimes such use of DRM in higher ed. is actually mandated by law, especially in distance education settings.

But DRM systems also inhibit lots of legitimate academic uses, and they are protected by law much more strictly than copyrights themselves are.  A section added to the copyright law by the Digital Millennium Copyright Act makes it illegal to circumvent technological protection measures or to “manufacture, import, offer to the public, provide or otherwise traffic in” any technology that is primarily designed to circumvent such measures.  The reason I say this is stronger protection than copyrights get, and the reason these measures can be such a problem for teaching and research, is that our courts have held that one cannot circumvent DRM even for uses that would be permissible under the copyright act, such as fair uses, or performances permitted in a face-to-face teaching setting.

It is frequently the case, for example, that professors want to show a class a set of film clips that have been compiled to avoid wasting time, or wish to convert a portion of a DVD to a digital file to be streamed through a course management system, as is permitted by the TEACH Act amendment.  These uses are almost certainly legal, but the anti-circumvention rules make it likely that the act of preparing the files for such uses is not.

To avoid the harshest results of the anti-circumvention rules, Congress instructed the Library of Congress to make a set of exceptions every three years using the so-called “rule-making” procedures for federal agencies.  There have been three rounds of such rule-making so far, in 2000, 2003 and 2006.  Only in the last round was there any significant exception for higher education, and it was very narrow, allowing only “film and media studies professors” to circumvent DRM in order to create compilations of film clips for use in a live classroom.

Now the Library of Congress has announced the next round of rule-making, which will culminate in new exceptions in 2009.  Higher ed. has another chance to chip away at the concrete-like strictures that hamper teaching, research and innovation.  We need to be sure that the exception for film clips is continued, and try hard to see it expanded; many professors who teach subjects other than film, for example, could benefit from such an exception without posing any significant risk to rights holders.  Ideally, an exception that allows circumvention in higher education institutions whenever the underlying use was authorized could be crafted.

There is a nice article from Ars Technica describing the rule-making process and its frustrations here.

One of the things we have learned in the previous processes is the importance of compelling stories.  The narrow exception discussed above was crafted largely in response to the limitations on his teaching described by one film professor who testified during the rule-making; it seems designed to solve his particular dilemma.  As another round of exceptions is considered over the coming year, it will be important for the higher ed. community to offer the Library of Congress convincing narratives of the other ways in which DRM inhibits our work and to lobby hard for broader exceptions that will address the full range of problems created by the anti-circumvention rules.

Copyright creep?

When I first became aware of the lawsuit filed by publishing giant Thomson Reuters against George Mason University to stop the release of the open source citation management program Zotero (hat tip to my colleague Paolo Mangiafico for directing me to this story), I wasn’t sure how it was relevant to issues of copyright and scholarly communications.  After all, this is essentially a licensing dispute; Thomson alleges that, in order to develop the newest version of Zotero, software developers at GMU “reverse-engineered” the proprietary .ens file format used by the Thomson product EndNote, in violation of a licensing agreement.  EndNote, of course, is a very popular tool in academia, and it is alleged that GMU is marketing its new version of Zotero with the specific boast that it now allows users to convert EndNote files into its own open source and freely-sharable file format.

I cannot comment on the merits of the breach of contract claim, and I have no argument with the right of Thomson Reuters to use a licensing agreement to protect its intellectual property.  Nevertheless, the idea of protecting these files, which simply organize data about books, journal articles and web sites into a form that can then be mapped into different citation styles, raises interesting questions about the scope of copyright law and where new and troubling developments might take it.

At least since the Supreme Court decided Feist v. Rural Telephone in 1991, we have known that facts and data are not themselves protected by copyright, and that collections of facts must meet a minimum standard of originality (greater than that found in the phone books that were at issue) in order to be protectable.  I do not know if the file format EndNote has created to store citation data is such an original arrangement of data and, apparently, neither do they.  Rather than rely on copyright law, they wrote a license agreement to try to prevent what they allege took place at GMU.  But two questions still bother me.

First, should universities agree to licenses that prevent reverse engineering?  In today’s high-tech environment, reverse engineering is a fundamental way in which innovation proceeds.  Our copyright law, in fact, recognizes the importance of such activities, providing specific exceptions to certain prohibitions in the law for cases of reverse engineering that have potential social benefits, such as encryption research or making materials available to handicapped persons.  So one could legitimately ask if a court should consider the benefits of the research being done when deciding whether and how strictly to enforce a contractual provision against reverse engineering.  In general, open source software is a gift that many universities like George Mason give to the academic community as a whole, and the value of that gift is increased if it is possible for scholars who have been using a costly commercial product to move their research resources from the latter into the former.  That increased value (an “externality” in economic jargon) could be weighed against Thomson’s loss (which they allege is around $10 million per year) in reaching a reasonable decision about contract enforcement.

Second, will we see a movement to cover databases under some kind of database protection law, potentially separate from copyright, if corporate database vendors are unsatisfied with even the low bar necessary for copyright protection and with the need to use licensing provisions where that protection is unavailable?  It is this kind of extension of intellectual property protection to subject matter that has traditionally not been protected that I mean by the phrase “copyright creep.” Such sui generis protection (not rooted in copyright principles) has been adopted in the European Union, and it is common these days to hear complaints about it from scholars in EU countries.  At a minimum, such protection would raise costs for obtaining access to commercial databases and, as is shown by the Zotero lawsuit, could be used to stifle innovation and cooperation.  The last attempts to introduce legislation for database protection in the US were several years ago — there is a nice summary of those efforts and the issues they raised here — but it is a topic that keeps coming back and about which higher education needs to be vigilant.  In many ways our interests would cut both ways in any database protection debate, so it is a case where careful thought and balance would be needed.