Category Archives: Digital Rights Management

Public access and protectionism

By now many folks have commented on the announcement from Nature Publishing Group early this week about public access to all of its content and most have sussed out the fairly obvious fact that this is not open access, in spite of the rah-rah headline in the Chronicle of Higher Education, nor even public access as it is defined by many national or funder mandates.  Just to review quickly the major points about why this announcement actually gives the scholarly community so much less than is implied by either of those terms, consider these limitations:

  1. A putative reader can only get to an article if they are sent a link by a subscriber, or the link is present in a news article written by one of the 100 news organizations that NPG has chosen to “honor.”
  2. Articles can only be read inside NPG’s proprietary reader.
  3. No printing or downloading is possible, so a non-subscriber hoping to use one of these articles to further her own research better have a darn good memory!
  4. No machine processing will be possible; no text or data mining.

In short, all of the inconveniences of print journals are preserved; what NPG is facilitating here is essentially a replica of loaning a colleague your copy of the printed magazine.  Or, at best, the old-fashioned system whereby authors were given paper “off-prints” to send to colleagues.  Although, honestly, off-prints had more utility for furthering research than this “now you see it, now you don’t” system has.

If this is not open or public access, what is it?  I like the term “beggar access,” which Ross Mounce applied to NPG’s scheme in a recent blog post, since it makes clear that any potential reader must ask for and receive the link from a subscriber.  Some suggest that this is a small step forward, but I am not convinced.  There is nothing public or open about this “ask a subscriber” model; all it really does is prevent scholars from downloading PDFs from their subscription access to NPG journals and emailing them to colleagues who lack a subscription.  In short, it looks like another stage in the ongoing comedy of fear and incomprehension about the way digital scholarship works on the part of a major publisher.  But Mounce’s post suggests that the move is more than that; he points out ways in which it may be designed to prop up digital businesses that Nature and its parent Macmillan have invested in — specifically ReadCube and Altmetric.com.  The byzantine scheme announced by Nature will drive readers to ReadCube and will generate data for Altmetric.com, helping ReadCube compete with, for example, Elsevier and their proprietary reading and sharing tool, Mendeley.

That is, this looks like another move in the efforts by the large commercial publishers to buy up and co-opt the potential of open access. On their lips, open access does not mean greater potential for research and the advancement of science; it means a new market to exploit.  If administrators, researchers and librarians allow that to happen, they will have only themselves to blame.

My colleague Haley Walton, who recently attended OpenCon 2014, told me about a presentation made by Audrey Watters that included the idea of “openwashing,” which Watters defines like this:

Openwashing: n., having an appearance of open-source and open-licensing for marketing purposes, while continuing proprietary practices.

This is exactly what is happening in this announcement from NPG; old business models and awkward exploitation of new markets are being dressed up and presented as a commitment to access to scholarship, but the ruse is pretty transparent.  It may quack like a duck, or be quacked about, but this plan is really a turkey.

If NPG were really committed to better access for scientific research, there is a simple step they could take — put an end to the six-month embargo they impose on author self-archiving.  Much of their competition allows immediate self-archiving of an author’s final manuscript version of an article, but Nature does not.  Instead, they require a six-month embargo on such distribution.  So this new move does little to ameliorate the situation; the public still cannot see Nature-published research until it is old news.

Speaking of news, at Duke we have a relationship between the office of Scholarly Communications and that of News & Communications whereby we are notified of upcoming articles about research done at Duke.  In many cases, we are able to work with authors to get a version of the article in question into our repository and provide an open link that can be included in the article when it is released, or added shortly after release.  Our researchers find that putting such links in news stories leads to much better coverage of their discoveries and increased impact on their disciplines.  We always do this in accordance with the specific journal policies — we do not want to place our authors in a difficult position — which means that we cannot include Nature-published articles in this program.  To be frank, articles published in Nature remain highly valued by promotion and tenure committees, but relatively obscure in terms of their ability to advance science.  NPG seems to understand this problem, which is why they have selected a small number of news outlets to be allowed to use these tightly-restricted, read-only links.  They want to avoid increasing irrelevance, but they cannot quite bring themselves to take the necessary risk.  The best way they could advance science would be to eliminate the six-month embargo.

It is interesting to consider what might happen if Nature embraced a more comprehensive opportunity to learn what researchers think about open access by tying their “get a link from a subscriber” offer with an announcement that they were lifting the six-month embargo on self-archiving.  That would demonstrate a real commitment to better access for science, and it would set up a nice experiment.  Is the “version of record” really as important to researchers as some claim?  Important enough to tolerate the straitjacket created by NPG’s proprietary links?  Or will researchers and authors prefer self-archiving, even though an earlier version of the article must be used?  This is not an obvious choice, and NPG might actually win its point, if it were willing to try; they might discover that their scheme is more attractive to authors than self-archiving.  NPG would have little to lose if they did this, and they would gain much more credit for facilitating real openness.  But the only way to know what the real preference among academic authors is would be for Nature Publishing to drop its embargo requirement and let authors choose.  When they make that announcement, I will believe that their commitment to finding new ways to promote research and learning is real.

Silly Season

It is traditional in political reporting to refer to the run-up to primary elections as the “silly season” because of all the amazing things candidates will say while trying to appeal to different constituencies and bear up under the glare of media coverage.  Recently this time of year has also seen developments in the copyright world that justify some bewildered head shaking.

On the legislative front, the PROTECT IP Act has been pending in the Senate for a while now.  It is problematic even in its Senate form, since it would allow private actions to attack web domains based only on accusations of IP piracy, without the usual due process that is necessary to sue an alleged infringer.  But the act got worse, and stranger, when it was introduced into the House of Representatives.  A provision was added that would roll back the “safe harbor” provision for ISPs from the Digital Millennium Copyright Act and impose an affirmative obligation on web hosting services to police content uploaded by users.  This is in keeping, I am afraid, with the overall effort to force others — the ISPs and the government — to foot the bill for enforcing copyrights owned by the legacy content industries.  Discussions of this bill are all over the Internet; a representative one can be found here.

The argument that we should change the DMCA is becoming very common.  The content industries do not like the bargain they made a decade ago, and seem increasingly to want to shut down the most productive aspects of the internet in order to preserve their traditional business models.  An excellent argument for why we should not let this happen can be found in this discussion of copyright, the Internet and the First Amendment from Thomas Jefferson scholar David Post.

The real silliness, however, comes in the decision to rename the bill in the House, from PROTECT IP to ePARASITES.  I sometimes believe there is a congressional office for acronyms, staffed by some very silly people.  When I first heard this new acronym, I thought it was a parody.  Although I now know that the “parasites” referred to are websites that facilitate unauthorized sharing, I initially concluded that it was a joke referring precisely to those industries supporting PROTECT IP, who want the taxpaying public to bear all the costs of their failures to innovate.

Another round of silliness was created this week by the filing of a Second Amended Complaint in the lawsuit between the trade group AIME and UCLA over digital streamed video.  The judge dismissed the First Amended Complaint about a month ago but gave the plaintiffs (AIME and AVP Video) permission to refile.  This they have now done, but going through the (long) complaint, I fail to see how they have really addressed many of the judge’s reasons behind the dismissal.

A major reason behind the dismissal was lack of standing for AIME and sovereign immunity protections for the defendants.  I noted at the time that the lawsuit would really need different plaintiffs and different defendants to go forward.  Clearly AIME did not agree, since the new complaint names exactly the same defendants, simply with “and individual” added each time the previous document said they were sued in their official capacities.  This new document does not remove the claims against them in their official capacities, even though the judge already dismissed those claims, and it does not add any facts that I could see that would justify a suit against the individuals.  So the refiling really just seems to double down on the failings of the first complaint.

Also, AIME tried to rescue its “associational standing” by pointing to “injury in fact” to the association itself.  Such injury, incredibly, seems to be primarily the damage done to AIME by its relentless pursuit of this lawsuit, which it brought in the first place.  Staff time has been consumed, they say, and the reputation of the association harmed.  New members are reluctant to join.  Why any of this confers standing on AIME against UCLA is beyond me; members may not be joining because they do not want association dues spent tilting at windmills.  Also, the judge already rejected the argument that “diversion of resources” for the lawsuit was enough to establish the required showing of injury.  It is not clear to me that simply adding more detail can rescue this argument.

The new complaint again asserts that sovereign immunity was waived by UCLA when it signed license agreements with a jurisdictional clause, and by its own policy of obeying federal copyright law.  Both of these arguments were already rejected by the judge, so reasserting them seems more like a criticism of the previous decision than a new argument.

On the substantive arguments, I also can see very little that has been added that should change the outcome here.  The license between AVP and UCLA is reasserted, with the same language that caused the judge to read it in a way that undermined the first set of copyright claims.  One addition is the claim that the UCLA system is “open” (which the license does not allow) because it has a guest feature that can be turned on, but there is no assertion that it ever has in fact been turned on.  Another addition is a pair of state-law claims for tortious interference with a contract and prospective interference with a business advantage.  Like the previous state-law claims, these seem entirely founded on the copyright infringement claim, so I see no reason they would not be preempted by the resolution of the copyright issue, as the previous claims were.

In both these instances, I think we see the emotion of righteous indignation overcoming reason.  The very thing, it seems, that makes this the silly season.

Did he really say that?

Librarians have raised a pretty loud outcry (for librarians) about the new e-book pricing policy announced last week by Harper Collins, under which libraries would be able to loan an e-book only a set number of times before having to pay again to continue lending.  This model seems unfair to libraries, especially because they would not be able to plan their budgets, since the actual cost of each e-book purchased this way would be unknown and variable.  But now publishing consultant Martin Taylor has written a column praising Harper Collins and telling librarians to suck it up and fork over the money.  His core argument is that publishers have “serious concerns” about the impact of library lending on their e-book markets and that “librarians have not managed to address [these concerns].”  This, to my mind, is a remarkable statement.

It is not the job of librarians to address the concerns publishers have for their bottom line; to say that we should implies a view that libraries are nothing more than a market, the existence of which is justified only insofar as they serve publishers’ interests.  But libraries serve the interests of an altogether different clientele.  Public libraries serve the readers of their geographic areas and are responsible to local boards or town councils.  Academic libraries serve students, faculty and, often, the local populace, while being responsible for their fiscal management to deans and provosts.  Publishers are entitled, if they want, to make a business decision about how they price e-books, but libraries are equally entitled to make a business decision about how to spend their money in ways that best serve their patrons and their institutions.  If buying e-books under this new model is not good for our patrons, publishers have no cause to complain or berate us for being out-of-touch.

Taylor suggests that the price for each loan of an e-book under the Harper Collins model is reasonable.  But this claim confuses price with value.  No matter what the price of each loan is, if the book represents a drain on a library’s resources that cannot be known in advance, it is a bad value.  There is almost no scenario in which a library’s money would not be more responsibly spent elsewhere.
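The budgeting problem is easy to see with a little arithmetic.  The 26-loan cap below matches the limit Harper Collins announced; the price and circulation figures are purely hypothetical, chosen only to illustrate how a metered license turns a one-time purchase into an open-ended cost:

```python
# Compare a one-time e-book purchase with a metered license that must be
# repurchased after a fixed number of loans. The 26-loan cap matches the
# Harper Collins announcement; prices and circulation are hypothetical.
import math

def metered_cost(price: float, loans: int, cap: int = 26) -> float:
    """Total cost of serving `loans` checkouts when each purchase
    permits at most `cap` loans before the license must be bought again."""
    purchases = max(1, math.ceil(loans / cap))
    return purchases * price

# A popular title circulating ten times a year for five years:
loans = 10 * 5
print(metered_cost(25.00, loans))  # metered model: two purchases for 50 loans
print(25.00)                       # perpetual ownership: one purchase, ever
```

The point is not any particular dollar figure — it is that the metered total depends on future circulation, which no library can know when it budgets, while the perpetual-ownership cost is fixed at the moment of purchase.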

Some publishers have always disliked the deference to libraries that is built into US policy and, through the “first sale” doctrine found in section 109 of the Copyright Act, into US law.  First sale was formally recognized in US law in 1908, when Bobbs-Merrill publishing tried to control the down-stream pricing of one of its books by placing a statement on the title page claiming that the book could never be sold for less than $1.  When Macy’s department stores offered the book at a discount, Bobbs-Merrill sued and lost in the U.S. Supreme Court.  The Court made clear what US lending libraries were already assuming: that once a first sale of a work had occurred, the exclusive right of distribution was “exhausted” and the purchaser could resell, or lend, the book without permission or control from the publisher.  It was discontent with this well-established public policy that led Pat Schroeder, when she was president of the Association of American Publishers, to call all librarians pirates.

Since public policy has always been on the side of library lending as a fundamental building block of democracy, publishers now find that the only way they can attack it, and try to develop an income stream they have never had before, is through DRM – technological controls that prevent lending e-books more than a set number of times.  Like Pat Schroeder’s rhetoric of piracy, this approach has been tried before, by the music industry.  Record companies finally figured out that consumers would prefer not to spend their money for products that have their own obsolescence built in (unless the consumer pays again and again), and they abandoned the use of DRM.  The publishing industry is entitled to try the same failed experiment if they like, but, again, they should not complain if consumers, in this case libraries, choose not to support the model.

Taylor recognizes that the Harper Collins model would cost libraries money they have never had to spend before – repeated fees to keep loaning content they have already purchased – and he helpfully provides suggestions about where that money should come from.  He mentions and rejects the possibility that the publishers might forgo this new income stream.  He would be happy to take tax money, but he realizes that this is unlikely.  So instead he suggests that library branches be closed and librarians be laid off in order to free up the extra money.  That’s right; the core of his argument is that we should close libraries so that publishers can make more money.  Of course, the libraries that would get closed or under-staffed are always those in places where libraries are most needed, in disadvantaged neighborhoods or at less wealthy colleges and universities.

These libraries are, apparently, expendable if they cease to serve the narrow (and probably misconceived) interests of publishers at this particular moment in history.  This kind of support, I expect, will not do Harper Collins much good; I can only hope that this naked self-interest and disregard for public policy and the general welfare will make Taylor’s column what it should be, a rallying cry to libraries and those who support them in city halls, state legislatures and academic administrations to stand up against business practices that threaten their core missions.

Reading the fine print

Yesterday’s announcement that the Library of Congress was designating new classes of works exempt from the anti-circumvention rules of the DMCA has generated lots of Internet buzz, especially about the exemption for those who “jailbreak” their cellular phones.  The major exemption for higher education, allowing circumvention by faculty for a range of defined educational purposes, has also gotten some press, some of it excellent and some of dubious accuracy.  In the latter category, unfortunately, is this piece from Inside Higher Education, which I will discuss below.

But first let’s look at the actual language of the exemption.  What follows is based on the detailed description of the six exemptions given in today’s Federal Register.

First, the exemption is to permit circumvention of technological protection measures — the breaking of digital locks — for certain classes of works and for defined purposes.  These rules do not change the definition of fair use; they merely specify a small group of purposes within the broader category of fair use for which circumvention is permitted.

Next, this exemption applies to lawfully made and acquired DVDs that are protected by Content Scrambling System (CSS).  This application is both broader and narrower than the previous rule.  It does not require that the DVD be part of a university’s library collection, much less the collection of a film or media studies library.  The DVD can come from anywhere as long as it is not pirated or stolen.  But it applies only to DVDs that use CSS; it does not, for example, apply to Blu-Ray discs.  So a faculty member can make a compilation of clips from her own DVD library, for example, unless she collects that library in some format other than traditional DVD.

The exemption applies to three specific activities for which circumvention is necessary.

First, it applies to educational uses by college and university faculty and by college and university students of film and media studies.  Notice that the category of faculty is all inclusive, but the category of students is limited.  The Library of Congress determined that not all students needed this exemption; presumably they were also aware of industry fears that students would carry the permission too far if the exemption were general.  Also, the application to educational uses does not include K-12 teachers, who were also determined not to need the ability to obtain high-quality clips.  Presumably they are still expected to point a digital camera at a TV screen if they want a clip from a motion picture.

The other activities to which the exemption applies are documentary film-making and non-commercial videos.  Presumably some of the limitations to the persons allowed to circumvent for educational purposes may be mitigated by these two defined activities.  A university student who is not studying film and media studies, for example, might still want to use a film clip in a class video project and could be permitted because it is a non-commercial video.

So once we are clear about what can be used, by whom and for what purposes, it remains to ask what exactly we can now do.  The answer is that we can circumvent technological protection measures in order to incorporate short portions of motion pictures into new works for the purpose of criticism and comment. Several phrases here call for explication.  First, circumvention is allowed for copying short portions, not entire films.  Second, this exemption applies only to motion pictures, not to other content, like video games, that may be available on DVD.  Third, the clip must be used to create a new work.  I was glad to see that the explanation of this phrase in the Federal Register is explicit that “new work” does include a compilation of film clips for teaching, as well as other videos in which a short clip may be subjected to criticism and comment.  Finally, that purpose of criticism and comment is a required aspect of the defined activity that is permitted.

The last requirement for this exemption is a reasonable belief that circumvention is necessary to accomplish the permitted purpose.  The announcement is very clear that if another method of obtaining the clip without circumvention is available and will yield a satisfactory result it should be used.
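The requirements just described amount to a checklist, and it may help to see them gathered in one place.  Here is a simplified sketch of that checklist as a function; the predicate names are my own paraphrase of the rule, not statutory language, and this is an illustration, not legal advice:

```python
# A simplified model of the conditions under which the new exemption
# permits circumvention of CSS on a DVD. The predicate names are my own
# paraphrase of the rule as described above, not statutory language.

PERMITTED_ACTIVITIES = {
    "educational use by college/university faculty",
    "educational use by film & media studies students",
    "documentary film-making",
    "non-commercial video",
}

def circumvention_permitted(
    lawful_css_dvd: bool,        # lawfully made and acquired DVD protected by CSS
    activity: str,               # must be one of PERMITTED_ACTIVITIES
    short_portion: bool,         # a clip, not the entire film
    motion_picture: bool,        # not other DVD content, e.g. a video game
    new_work_criticism: bool,    # incorporated into a new work for criticism/comment
    necessity: bool,             # reasonable belief no adequate alternative exists
) -> bool:
    """Every condition must hold; failing any one takes the use
    outside the exemption (though not necessarily outside fair use)."""
    return (
        lawful_css_dvd
        and activity in PERMITTED_ACTIVITIES
        and short_portion
        and motion_picture
        and new_work_criticism
        and necessity
    )
```

The conjunction of all six conditions is the whole rule: a use that fails any one of them is simply not covered by this exemption, whatever its status under fair use more broadly.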

This seems like a lot of requirements, but I think that overall we have a pretty useful exemption here and one the application of which will not really be too difficult.  Once we understand the four italicized phrases above, it seems that we should be able to recognize permitted instances of circumvention when we see them.  Certainly this is easier to understand and apply than the exemption it replaced.  But when we look back at that item from Inside Higher Ed, it is easy to see how excessive enthusiasm can still lead to misunderstanding.

For one thing, the IHE piece does not acknowledge the limitation on which students can take advantage of this educational-purpose exemption.  It may be, as I suggest above, that that limitation will be swallowed by the other permissions, but we should at least recognize the intent behind the rule.  More importantly, this exemption to the DMCA’s anti-circumvention rules really has nothing to do with the dispute between UCLA and AIME or with other projects to stream entire digital videos for teaching, in spite of what IHE suggests.  While such projects may or may not be justifiable, this exemption does nothing at all to change or define the boundaries of fair use; it merely carves out a portion of those uses, which the Register of Copyrights calls “classic fair use,” for which circumvention is now permitted.  There may be other uses that are fair, but this exemption neither determines that question nor authorizes circumvention for those purposes.

It is what it is, and no more, but what it is is good news for higher education.

The new, improved DMCA

Last week I wrote, but had not yet posted, a comment about the proposed copyright reform in Brazil and the more nuanced approach they took to anti-circumvention rules that protect technological systems intended to prevent unauthorized access.  In the course of that discussion I again criticized the Library of Congress’ long delay in announcing new classes of exceptions to the US anti-circumvention provisions.  I expressed the hope that, after waiting so long, they would at least get it right.

They did.

Before I had a chance to publish my post, the new exceptions were released, albeit eight months late.  Also, an important appellate court opinion about the DMCA anti-circumvention rules was handed down.  So now I have three points to make about the DMCA and anti-circumvention rather than just one, and taken together they constitute my first ever optimistic writing about this subject.

First, the new DMCA exceptions announced today by the Library of Congress include the broader exception for higher education that many of us asked for during the rule-making proceedings.  Indeed, the language is broader than I dared hope, apparently allowing circumvention of DVDs for a broad array of purposes in higher education.  Certainly all professors can now circumvent for the purpose of compiling clips for teaching, as well as for incorporating clips into larger scholarly works.  Those making documentary films and non-commercial videos also seem able to circumvent for purposes of criticism and comment using short portions of a protected film.  Indeed, this exception comes close to allowing circumvention (of one type of media) for most fair uses, although it does not quite get us to that point.

The new exceptions also include a provision to allow circumvention of e-book technological protections when necessary to enable read-aloud or screen-reader functions.  This exception addresses a problem that higher education has long faced when accommodating students with visual disabilities.

Second, this case out of the Fifth Circuit, involving software used to control “uninterruptible power supply” (UPS) machines, made a very clear statement that the DMCA’s protection of DRM systems “prohibit[s] only forms of access that would violate or impinge on the protections that the Copyright Act otherwise affords copyright owners…. Without showing a link between “access” and “protection” of copyrighted work, the DMCA anti-circumvention provision does not apply.”  The Court quotes another circuit for the proposition that the DMCA creates no additional rights other than what the copyright law already grants; it merely provides a different form of protecting those rights.  With this language we seem to move even further down the path toward saying that circumvention is not prohibited when the purpose for which access is sought would be a fair use.

Which gets me to my third point, about the proposed copyright reform in Brazil. As I said in my earlier post:

“Brazil offers an international example of how to handle anti-circumvention the right way from the start, instead of creating a draconian rule and then forcing law-abiding users to beg for limited exceptions.  Brazil has introduced a balanced approach to anti-circumvention as part of its copyright reform proposal (available here, in Portuguese; see especially section 107).  As Canadian copyright law professor Michael Geist explains on his blog, this proposed reform imposes penalties for circumvention of legitimate technological controls on access, just as US law does.  But it also specifies that circumvention of such controls is permitted for access to public domain materials and for purposes that fall under Brazil’s ‘fair dealing’ exceptions; an obvious limitation that US law ignores.  What is more, the Brazilian proposal would impose penalties equivalent to those for unauthorized circumvention on those who would hinder circumvention for these legitimate purposes.”

Now, of course, we are much closer to the same kind of sensible approach than we were just a few days ago.  It is interesting to note that I mentioned, in that earlier, never-published post, that the US Trade Representative would be upset at Brazil for not incorporating US-style DMCA rules.  But I have just seen this news about how the USTR is backing down from harsh anti-circumvention provisions even in ACTA, the Anti-Counterfeiting Trade Agreement I have talked about before.  I believe I may hear the turning of a tide.

Fairness breeds complexity?

The title of this post is an axiom I learned in law school, drilled into us by a professor of tax law but made into an interrogative here.  Because the copyright law is often compared to the tax code these days, I have usually just accepted the complexity of the former, as with the latter, as a function of its attempt to be fair.  Because different situations and needs have to be addressed differently in order to be fair, laws that seek fairness inevitably (?) grow complex. But a recent blog post by Canadian copyright law professor Michael Geist, nicely articulating four principles for a copyright law that is built to last, has made me ask myself if simplicity is a plausible goal for a comprehensive copyright law.

Geist’s four principles are hard to argue with.  A copyright law that can last in today’s environment must, he says, be balanced, technologically neutral, simple & clear, and flexible.  That last point, flexibility, is the real key, since designing a law that can be adapted to new uses and new technologies, many of which are literally unforeseeable, requires that the focus be on first principles rather than outcomes.  This is different than the tax code, and it may provide the path to combining fairness with simplicity.

The principle of flexibility explains why fair use is an effective provision of US copyright law.  As frustrated as some of us get trying to navigate the deep and dangerous waters of fair use, it has allowed US law to adapt to new situations and technologies without great stresses.  In fact, Geist’s brief comment on fair dealing in Canadian law suggests (implicitly) that it should be more like US fair use; he argues that the catalog of fair dealing exceptions should be made “illustrative rather than exhaustive,” so courts would be free to build on it as technologies change.

In recent posts I have spoken of adapting fair use so that it gives more leeway to academic works than to other, more commercial intellectual properties.  Even though Geist is explicit in his post that “Flexibility takes a general purpose law and ensures that it works for stakeholders across the spectrum, whether documentary filmmakers, musicians, teachers, researchers, businesses, or consumers,” I do not think there is any contradiction here with asking that academic works be treated differently in the fair use analysis than a recently released movie, for example, might be.  Fair use would be applied in the same way to each, but because fair use appeals to the motivating principles of copyright law, it asks us to examine the circumstances of each type of material and each kind of use and measure them against those principles.  This is precisely how flexibility is accomplished, and I argue that the result of this uniform application of principles will be different outcomes for different types of works.

Geist’s approach to digital locks — DRM systems — is quite similar, asking us to look at first principles that underpin copyright law when deciding how to treat any particular technology.  Specifically, he suggests that forbidding or permitting the circumvention of such digital locks must be tied to the intended use for which the lock is “picked” if copyright balance is to be respected.  An added advantage of this approach is that it is much simpler — another core principle — than the current approach in the US, where categorical rules are enacted and then a series of complex exceptions are articulated every three years.  We will see shortly how that process will play out for the next three years, since the exceptions will be announced in a couple of months, but it is inevitable that the result will be unfair to some stakeholders and probably disappointing to all.  Far better that we heed Geist’s call for an approach based on first principles.  Perhaps Canada, as it considers a comprehensive overhaul of copyright law, can lead the way.

What has changed

Courts in the U.S. have asserted for years that our copyright law is compatible with the First Amendment guarantee of free speech by citing two principles — fair use and the rule that copyright protects only expression and leaves the underlying ideas free for all to appropriate, reuse and build upon.  Both of these safeguards are still in place, yet I have twice claimed in this space that we need to look again at the relationship between copyright and free expression.  So the question presents itself: do I just not get it, as at least one commenter seems to think, or has something changed to make reliance on fair use and idea/expression inadequate these days?

Although I am not convinced that the two principles usually cited were ever adequate, especially as the scope of copyright’s monopoly expanded, what has clearly changed, in recent years, is that Congress adopted the Digital Millennium Copyright Act in 1998.  The DMCA added two provisions to the copyright act that have had a negative impact on free expression.

First were the legal protections provided for technological protection measures, or DRM (digital rights management) systems.  It is ironic that content owners decided to move toward technological locks because they felt that legal protections were inadequate, and then found they needed legal protection for those locks when they proved insecure.  But the combination of digital locks and “anti-circumvention” rules has been devastating for free speech; even use of public domain works can now be locked up, and the law will prevent access.

Lest we forget the power of DRM, here is a note about the Motion Picture Association of America “reminding” a court that it is illegal to circumvent DRM systems even for a use of the material that would be perfectly legal.  So when digital locks are used, one of the safeguards our courts have relied on to preserve free speech — fair use — is apparently useless.  As the EFF attorney mentioned in a blog post linked above says, it is by no means certain that fair use is entirely trumped by DRM, but there is a case that held that, and the content owners certainly believe that fair use is now obsolete.

An extensive study done by Patricia Akester, a researcher with the Centre for Intellectual Property and Information Law at Cambridge University, lends weight to the argument that what she calls “privileged uses” (like fair dealing in the UK and fair use in the US) are adversely impacted by DRM systems.  There is a report of her study here, and the full text (over 200 pages!) is here.  Akester may have done the first empirical study of these adverse effects, and her conclusions are sufficiently gloomy to lead her to suggest a legislative solution.  She proposes that a “DRM Deposit System” be established, where content owners are required to deposit either the key to their lock or an unencrypted copy of the work.  A user could then make an argument, or meet a set of requirements, for access when their proposed use was clearly within a privilege.  If the content owner declined to deposit with the system, circumvention for access for privileged uses would be allowed.  Some such system, similar to the “reverse notice and takedown” proposal discussed here over a year ago, is clearly needed if fair use is to continue to function as a safeguard of free speech.

The other provision of the DMCA that imperils free expression is the notice and takedown procedure itself, which was created to protect Internet service providers (ISPs) from liability for infringing activity that happened over their networks.  In one sense, this “safe harbor” has been good for fair use, allowing the creation of user-generated content sites like Flickr and YouTube where lots of fair use experimentation can take place.  But the takedown procedure is being abused, with bogus notices being sent to prevent legitimate and even socially necessary criticism and parody.  ISPs are quick to take down sites that are named in these takedown notices, and the process for getting them restored subjects the original poster to an increased risk of liability.  It is very costly, after all, to defend free speech even against a bogus claim.  So abusive takedown notices have now become a favored way to suppress criticism and comment that is unpopular with a major company or content owner.  The long tradition of “I Hate BigCo, Inc., and here is why” web sites, which courts have often held to be fair use of copyrighted and trademarked content, is now much riskier than it was before.  In fact, the Electronic Frontier Foundation has even created these six steps to safeguard a gripe or parody site, recognizing that free speech is no longer sufficiently protected by traditional provisions within the copyright law alone.

What is DRM really good for?

As the Library of Congress considers new exceptions to the anti-circumvention rules that legally protect the DRM systems that are used by many companies to lock up digital content of all kinds, it is helpful to consider if those protections really accomplish what they were intended to.

Digital Rights Management systems, or electronic protection measures, are technological locks that “physically” prevent uses that are infringing, as well as many uses that would not be infringing if they were possible.  The justification for using DRM is that it is necessary to prevent the widespread infringement that the digital environment enables, and thus to protect the revenues of content creators.  Those revenues, it is argued, provide the incentive that keeps Americans creating more movies, music, books, etc.  This purpose seemed so important in 1998 that the Digital Millennium Copyright Act included rather draconian legal protection for DRM systems, making it illegal to circumvent them even when the underlying purpose of the use would itself be legal.  But the juxtaposition of two recent blog posts raises an interesting question about whether DRM really does what is claimed, and whether what is claimed is really its purpose in any case.

First is this report from Canadian copyright professor Michael Geist noting that for the third straight year sales of digital music (a prime type of content “protected” with DRM) have grown faster in Canada than they have in the United States.  This growth comes in spite (?) of the fact that Canada does not have the same legal protections for DRM systems that the US does.  Apparently the incentives for creativity are just as strong, or stronger, in Canada, where circumvention is not punishable, as they are in the US, where we are told that those who circumvent and those who market the technology to circumvent must be stopped lest creativity grind to a halt.  The reality, as Geist points out, is that “copyright is simply not the issue,” and government intervention to drastically strengthen the copyright monopoly has not provided the promised benefit.

So why is DRM really so important to big content companies?  On the Electronic Frontier Foundation’s blog, Richard Esguerra gives us a more convincing answer when he notes that Apple is finally dropping DRM from the music files it sells through its iTunes store.  The timing, he suggests, shows that the big content companies really use DRM to eliminate competition and enforce a captive market; as soon as that purpose becomes moot, the companies drop the DRM.  It is no surprise that DRM is a marketing problem, especially for music, where it often prevents users from moving files from one device to another.  As long as the expected benefits of reduced competition outweigh the loss of sales, DRM is defended as a vital part of the copyright system.  But it is abandoned without a qualm once it no longer serves that anti-competitive purpose and threatens instead to hamper profits.

If DRM systems really are being used primarily to suppress competition and prevent innovation, they are working directly in opposition to the fundamental purpose of copyright law that they were sold to us to support.  Read together, these two reports suggest that tinkering with exceptions, as the Library of Congress is charged to do every three years, is not enough; instead, the value of the whole idea of providing legal protection to DRM should be reexamined.

Chipping away

Digital rights management, or DRM, is a delicate subject in higher education.  Also called technological protection measures, these systems to control access and prevent copying are sometimes used by academic units to protect our own resources or to fulfill obligations we have undertaken to obtain content for our communities.  Sometimes such use of DRM in higher ed. is actually mandated by law, especially in distance education settings.

But DRM systems also inhibit lots of legitimate academic uses, and they are protected by law much more strictly than copyrights are by themselves.  A section added to the copyright law by the Digital Millennium Copyright Act makes it illegal to circumvent technological protection measures or to “manufacture, import, offer to the public, provide or otherwise traffic in” any technology that is primarily designed to circumvent such measures.  The reason I say this is stronger protection than copyrights get, and the reason these measures can be such a problem for teaching and research, is that our courts have held that one cannot circumvent DRM even for uses that would be permissible under the copyright act, such as fair uses, or performances permitted in a face-to-face teaching setting.

It is frequently the case, for example, that professors want to show a class a set of film clips that have been compiled together to avoid wasting time, or wish to convert a portion of a DVD to a digital file to be streamed through a course management system, as is permitted by the TEACH Act amendment.  These uses are almost certainly legal, but the anti-circumvention rules make it likely that the act of getting the files ready for such uses is not.

To avoid the harshest results of the anti-circumvention rules, Congress instructed the Library of Congress to make a set of exceptions every three years using the so-called “rule making” procedures for federal agencies.  There have been three rounds of such rule-making so far, in 2000, 2003 and 2006.  Only in the last round was there any significant exception for higher education and it was very narrow, allowing only “film and media studies professors” to circumvent DRM in order to create compilations of film clips for use in a live classroom.

Now the Library of Congress has announced the next round of rule-making, which will culminate in new exceptions in 2009.  Higher ed. has another chance to chip away at the concrete-like strictures that hamper teaching, research and innovation.  We need to be sure that the exception for film clips is continued, and try hard to see it expanded; many other professors, for example, who teach subjects other than film could still benefit from such an exception without posing any significant risk to rights holders.  Ideally, an exception could be crafted that allows circumvention in higher education institutions whenever the underlying use is authorized.

There is a nice article describing the rule making process and its frustrations here, from Ars Technica.

One of the things we have learned in the previous processes is the importance of compelling stories.  The narrow exception discussed above was crafted largely in response to the limitations on his teaching described by one film professor who testified during the rule-making.  The exception seems crafted to solve his particular dilemma. As another round of exceptions is crafted over the coming year, it will be important for the higher ed. community to offer the Library of Congress convincing narratives of the other ways in which DRM inhibits our work and to lobby hard for broader exceptions that will address the full range of problems created by the anti-circumvention rules.

E-textbooks: the state of play

As the new school year begins there has been lots of reporting about E-textbooks, and the welter of stories offers an opportunity to assess the overall state of play.

This story from Inside Higher Ed outlines some of the “next steps” for E-texts, as well as the “remaining obstacles,” which are substantial. The article focuses most of its attention on two initiatives – a highly speculative report that Amazon wants to introduce E-texts for its Kindle e-book reader, and a description of the progress being made by CourseSmart in partnering with higher education. It is worth looking at these two projects, along with some other business models for e-texts, in light of some recently articulated needs and concerns.

A recent study done by a coalition of student groups expresses some doubts about digital textbooks that are worth considering as we look at different possible business models. The report raises three potential problems with digital versions: their alleged failure to reduce costs, limitations on how much of an e-text a student is allowed to print, and the short duration of access provided by some licensing arrangements. These latter two concerns, obviously, support the contention that print textbooks are still serving student needs better than e-texts, especially if the digital versions are not significantly less expensive. To these concerns we might add one more – students like to be able to highlight and annotate textbooks, and digital versions that do not support this activity will be disfavored.

So how do the different business models fare in addressing these concerns?

One model is simply the distribution of electronic versions of traditional textbooks by traditional publishers. This seems like the least promising of the models, since it likely solves none of the issues raised by the student groups. It is interesting that the representative of traditional publishers quoted in the Inside Higher Ed story made no reference at all to cost concerns but stressed the potential for e-texts to shut down the market for used textbooks. Unsurprisingly, the focus here is on preventing competition and protecting income, not serving the needs of the student-consumers.

CourseSmart offers a business model that is very little different from what the traditional publishers might undertake themselves. There is some dispute about the issue of cost, however, with CourseSmart arguing not only that its digital versions of traditional textbooks are significantly cheaper, but that they remain so even when the income that students might usually expect from reselling their print texts is taken into account. It remains the case that the lower payment only purchases temporary access for the students and a restricted ability to print. Nevertheless, CourseSmart has been successful in arranging partnerships with San Diego State University and the state university system in Ohio, so it will be worth watching to see how those experiments develop, particularly in regard to student usage and satisfaction.

Amazon’s Kindle is yet another possibility for distributing e-texts. We know very little about how such texts would be priced or what features they would have, but we do know that the desire of students to be able to print would not be fulfilled. This is an important issue for students, apparently, since the student report on e-texts found that 60% of students surveyed would be willing to pay for a low-cost print copy of a textbook even if a free digital version was available to them.

This latter fact is precisely what Flat World Publishing is counting on with their plan to make free digital textbooks available and also sell print-on-demand copies to those who want a paper version. As I described this model a few weeks ago, Flat World is hoping to show that over the long-term, print on demand can prove a sustainable business model. Since this accords better with the expressed needs of student users than any of the above models, they might just be right.

The last model for distributing digital textbooks, one often overlooked in the debates (although endorsed by the student report mentioned above) but given some attention in this article from the LA Times, is open access. Frustrated faculty members are increasingly considering creating digital textbooks that they will distribute for free. Supporting such work, with grants of up to $50,000, is another part of the initiative undertaken by the university system in Ohio. Ohio has long been a leader in supporting libraries in higher education, and this support for open-access textbooks offers a new avenue for leadership. The real “costs” we should be considering when we discuss e-texts include reasonable support for the work of creating such resources, as well as credit for the scholarly product of that work when tenure reviews come around. So much of the expense of textbooks comes from the profit claimed by the “middlemen” who distribute them that real efforts to reduce the cost of education must focus on ways to encourage in-house creation of digital texts (which is little different from how textbooks have always been written) and to distribute them directly to students, as the Internet now makes possible.