Using copyright for its intended purpose

At its roots, copyright in the Anglo-American legal system is a statutory grant of rights intended to be an engine for innovation.  Copyright and patent legislation is the only type of law whose authorization in the Constitution is specifically tied to a purpose — “to promote the progress of science and useful arts.”  If copyright legislation does not serve this purpose it is, arguably, unconstitutional.

This is part of the real irony of SOPA, the bill currently being considered by the House of Representatives that would fundamentally alter how the Internet works in the U.S. in order to protect the traditional entertainment industries.  Such a bill, which would kill innovation in the name of protectionism, may be unconstitutional. That it is a bad idea is especially clear when we look at how other countries are considering adjusting their copyright laws precisely to better support innovation and economic growth.

In Brazil, a third draft of proposed copyright legislation has recently been released.  As Pedro Paranagua, a Brazilian copyright expert, tells us, there is both good and bad in the bill, but as I read his list of incorporated provisions, I am jealous of the attention being given to the real purpose of copyright, which is economic development through innovation.  Exhaustion of rights, what we call first sale in the U.S., would be defined in a way to prevent the recent debacle in which Omega abused copyright, in my opinion, to suppress legitimate price competition for its watches.  Collecting societies would be overseen by government watchdogs, and contract principles about serving the public interest and avoiding undue burdens would be explicitly incorporated into the copyright law.  Compulsory licenses would be available for uses of orphan works, and creators would have the explicit ability to dedicate their work to the public domain.  Finally, there is a proposed set of exceptions that covers a lot of the socially beneficial uses that are still unreasonably controversial in the U.S.

Even one of the things that Pedro is nervous about, ISP liability under a notice and take down scheme, seems like a good idea that the U.S. must fight to maintain.  The notice and take-down system under the DMCA has allowed a lot of innovative businesses to thrive (YouTube being the most prominent), and that system is under severe threat if the provisions of SOPA get enacted.  So while Paranagua worries about a DMCA-style regime in Brazil, I am desperately hoping that we can keep that regime in place in the U.S.

Brazil has also been at the forefront of the World Intellectual Property Organization's discussion of limitations and exceptions.  The resulting WIPO agenda, looking primarily at exceptions for libraries and for access for persons with disabilities, reflects many of the ideas mentioned above, including cross-border uses (the subject of first sale and the Costco dispute), a solution to the problem of orphan works, and the relationship between copyright law and private contracts.

This last issue brings me to the most detailed document I have been looking at recently, the “Consultation on Copyright” released by the British government.  The UK has undertaken a thorough review of its copyright law in the past couple of years, explicitly to address the places where copyright interferes with innovation rather than fostering it.  The consultation is seeking hard data about the impact of the changes that were proposed by the commission it set up, called the Hargreaves Commission.  Many of the provisions are similar to the ones I have already mentioned.  But here is the language that really caught my eye:

The Government agrees that, where a copyright exception has been established in UK law in order to serve certain public purposes, restrictions should not be re-imposed by other means, such as contractual terms, in such ways as to undermine the benefits of the exception.
Although contract terms that purport to limit existing exceptions are widespread, it is far from clear whether such terms are enforceable under current contract law. Making it clear that every exception can be used to its fullest extent without being restricted by contract will introduce legal and practical certainty for those who rely on them.

I have argued in the past that contracts should not be allowed to preempt copyright’s limitations and exceptions, at least in cases where the contract at issue is not subject to “arms length” negotiation.  Here the Conservative government seems to be embracing that position (not because I suggested it, of course, but because the Hargreaves commission did) and even carrying it further.  Recognizing that copyright exists to serve a public purpose, and that that purpose should not be undermined by one-sided private agreements, such as “click-through” contracts on websites, would be an important step toward providing the consistency and certainty that all law-making aims for.

The point of this very quick and cursory survey of international proposals for copyright reform is simple.  Throughout the world, even in those countries that, unlike the U.S., embrace a natural-rights account of copyright, reform is focused on supporting innovation and on not allowing a system that worked in the past to become an obstacle for the future.  Yet in the U.S. all of our copyright proposals, and even statements from our Register of Copyrights, seem focused on protecting the old ways and staving off as long as possible the innovation that provides our best economic hope.  If we cannot learn from our competitors and our trading partners, we will certainly be left behind.


What fair use is for

When I considered the Authors’ Guild lawsuit against the Hathi Trust and some of its partner institutions a couple of months ago, only the complaint had been filed, so it was natural to focus on the motivation of the plaintiffs.  And those motives are not hard to discern; after tasting the possibility of monetizing orphan works through the Google Books settlement and then having it snatched away, the AG is looking for an alternate way to use litigation to carve a profit-making opportunity from the labors of others.

Now, however, answers to the suit have been filed, both by the named defendants and by a potential defendant.

The defendants’ answer, filed on Dec. 2, is a very lawyerly document, and for that reason it might disappoint some readers.  There are no lofty assertions about public benefits and the purpose of fair use (I will make those assertions on their behalf).  Instead, the defendants’ answer does what it is required by the rules of federal civil procedure to do; it goes point-by-point through the complaint and, largely, admits the factual assertions while denying the conclusions of law.  It also states quite baldly the defenses on which these parties intend to rely.  We have a system called “notice pleading” in the U.S., and the defendants are only required to give notice to the court and the plaintiffs of the arguments they intend to make.

Nevertheless, we can see the broad outline of a response here.  The plaintiffs’ complaint focused nearly exclusively on section 108 of the Copyright Act, the so-called “library exceptions” which deal with preservation and copying made for patrons (the foundation of inter-library loan).  The plaintiffs want the court to conclude, it seems, that this one section of the law entirely encompasses all that a library is entitled to do with copyrighted material.  As they go through the points alleged in the complaint, the defendants repeatedly assert that “Section 108 of the Copyright Act is one of many limitations on copyright holders’ rights” and “that plaintiffs’ description of section 108 is incomplete and therefore mischaracterizes the statute.”  What is left out, of course, is that section 108 states explicitly that fair use — section 107 — is still available and that nothing in 108 “affects the right of fair use” (section 108(f)(4)).

Fair use, which exists for the purpose of allowing exactly the activities that Hathi is designed for — research, teaching and scholarship — will naturally be the heart of this case, however badly the plaintiffs wish this were not so.  But the defendants raise several other defenses as well, including sovereign immunity (since all but one of the defendants are public institutions), the plaintiffs’ lack of standing (since they cannot show that they or their members own most of the works at issue), the statute of limitations, and the fact that many of the works that the AG wants to impound are actually in the public domain.  And, of course, these defendants assert fair use.  In fact, they assert a whole slew of the exceptions that Congress built into the copyright law.  One of the places we will learn the most from this case will be where the defense weaves together sections 107, 108, 109, 110 and 121 into what I imagine will be a thicket of justification that emphasizes how comprehensively Congress intended to permit the socially beneficial uses that Hathi will facilitate.

The reference to section 121, which allows reproduction of copyrighted materials “for blind or other people with disabilities,” took on added importance on Dec. 9, when the National Federation of the Blind filed a motion asking the court to let it intervene as a defendant in the case.  Such intervention is allowed in federal procedure “as a matter of right” when a party can show that it has a substantial and legally protectable interest in the matter at hand, that it could be harmed by a decision, and that its interests will not be adequately represented by the parties already involved.

Unlike the formal defendants’ answer, this document does make a full-blown argument, and it is a compelling one.  The National Federation of the Blind tells several stories of university students and teachers whose ability to do their work is hampered by the laborious, and sometimes impossible, process of obtaining copies of works that can be read by a computer text-to-voice reader.  It provides a vivid picture of how the Hathi Trust project “would allow blind students and faculty to participate fully in university life,” and of how this has been a major purpose of the Trust since its inception.

The spare language of the one document, and the fully developed rhetoric of the other, combine to produce a convincing picture of what fair use and the other exceptions to the copyright monopoly were intended not merely to allow but to facilitate.  Research, teaching, scholarship and access for persons with visual impairments.  If copyright is allowed to impede the advancement of these purposes through mass digitization, and in the name of tying these files up until a private organization figures out a way to make money from them — to reap where they did not sow, as it were — then copyright law will have proved a failure indeed.  But I do not believe that a court will allow that to happen.

SOPA and the Constitution

I have written before (here and here) about the bills now before Congress that go by the name of PROTECT IP in the Senate and SOPA (Stop Online Piracy Act) in the House of Representatives.  There are many reasons why these are bad bills, and an alternative approach has recently been proposed.  Since discussions about the flaws in these bills have entered the mainstream media, I have not felt a strong need to continue to write about them.  But I do think it is worthwhile to point readers to a blog post by Professor Marvin Ammori on the legal blog Balkinization about why he and Laurence Tribe of Harvard have both written to Congress, independently, about the unconstitutionality of the bills.  This is a different, and in many ways more fundamental, objection than those that I have seen elsewhere.  Let’s hope Congress is listening.

How to COPE

Recently I have had the opportunity to review the first 14 months of Duke’s COPE fund, and it has been an interesting exercise.

COPE, of course, is the Compact for Open Access Publishing Equity, a plan by which academic libraries create funds to help faculty authors pay the article processing charges (APCs) that some open access journals charge as a way to replace subscription fees. Duke established a modest COPE fund in October 2010, and it was re-funded for fiscal year 2011-12.

Now, at the end of November 2011, I find that in 14 months we have reimbursed open access publishing fees for 20 articles written by 18 faculty members. We have six more COPE applications waiting for completion. Assuming that all of them are completed and reimbursed, we will have exhausted the available funding for COPE for the first time.

The authors we have reimbursed are predominantly, but not exclusively, scholars in the biomedical sciences. That group is evenly divided between medical school and University-side scientists, but faculty in diverse fields like environmental studies and evolutionary anthropology have also benefited from the fund. The largest group of articles we helped fund was published with journals from the Public Library of Science, followed by Hindawi, Frontiers in Research, and BioMed Central. All of them are, of course, peer-reviewed publications.

With a relatively new initiative like COPE, it is not entirely clear how success should be measured. Is it more successful to spend the entire COPE fund on a campus, or to have it go unused? The answer, I suppose, would depend on the reasons behind the use or non-use.

At Duke we have been very clear that the purpose of COPE is to provide incentives for new models of scholarly publishing and to support open access. As the fund administrator, I am convinced that there is a good deal more open access publishing amongst our faculty than I previously expected. These 25+ articles are very much only the tip of the iceberg. And I cannot say for sure that the COPE fund caused an increase in that type of publishing; I am inclined to think faculty are turning to open access because of its numerous benefits, and that COPE funding is not the strongest factor for most of them. I have been told by authors, however, that COPE is an important example of the University “putting its money where its mouth is,” and I am pleased by that perception.

As open access publishing evolves, different funding models are being tried. It is important to recognize that author-side fees may not be the “winner” over time; many OA journals even now do not rely on such fees, although the best-known ones do. Nevertheless, we are clearly seeing an important transitional movement in scholarly publishing, and a COPE fund is one way for an institution to both encourage that transition and to tangibly affirm a commitment to open access for scholarship.

I expect that at Duke we will reexamine the policies and procedures we have in place for COPE, as we consider how it is to be re-funded, to see if they actually serve the incentive purposes behind the fund. In that task we will gain significant guidance from this recent blog post by Stuart Shieber of Harvard University’s Office for Scholarly Communication, about how funding agencies should assume the task of paying open access article fees. Stuart’s point about funders is important, and I hope that the policies he recommends are widely adopted. But his post is also a cogent and compelling re-assertion of those incentive purposes that COPE is intended to serve and of how different policy decisions relate to the overall goals. As such, it provides a helpful guide for anyone considering a COPE fund or considering how to make such a fund more effective.

The unexpected reader

I have just returned from the Berlin 9 Conference on Open Access, which was held in Washington, D.C. at the lovely conference center facilities of the Howard Hughes Medical Institute.   It was a fascinating meeting, and quite different in tone from the one I attended last year in Beijing.

In its opening paragraph, this Chronicle of Higher Education report on the conference captures the fundamental difference.  This year the conference was much more clearly focused on the impact of open access on research; rather than talking about how open access will be accomplished, the discussion assumed that open access is inevitable and instead emphasized the differences that the evolution to open will make.

For the sciences especially, it was clear that openness is rapidly becoming the default, because awareness of its benefits is spreading so widely.  This year the partners in the discussion included many working scientists and, significantly, many academic administrators and research funders, who are well placed and increasingly motivated to make the transition to open access.  The recent decision announced by the National Autonomous University of Mexico to make a decisive transition to open access is testimony to the impact a commitment by administrators can have.

Some of the most compelling discussion in Washington about the impact of openness centered on the idea of unexpected readers.  For years researchers have assumed that, especially for highly technical work,  all of the people who needed access to their work and could profit from it had access through the subscription databases.  This assumption has probably always been incorrect, but now the promise of open online access has really blown it up completely.  The possibility of unexpected readers, including computers that can make connections and uncover patterns in large collections of works, is now one of the great advantages of OA and one of the primary sources of the expectation for greater innovation.

One very touching story is worth retelling here to make this point.  Philip Bourne, a professor at UC San Diego and Editor in Chief of the journal PLoS Computational Biology, told of a rather remarkable manuscript that was sent directly to him in his editorial role.  He thought it was quite a special work of scholarship, on computer modelling of pandemics, and asked some of his colleagues with expertise in that field for their opinions.  Uniformly it was felt that the article was ground-breaking.  Finally, Bourne met directly with the author and, unusually, urged her to submit it to the journal Science.   You see, the author was a fifteen-year-old high school student who had done her research as a visitor in university libraries and, for a while, using a “test” login obtained directly from a vendor.

The point here is not the obstacles to access that this young author encountered and overcame.  The point is that she was not at all the person the authors of previous articles on the topic thought they were writing for.  Yet she made a remarkable advance in the field because she was able to read those works in spite of conventional expectations.

By the way, Science selected her article for in-depth review, which is itself a big accomplishment for even experienced researchers, but ultimately decided not to publish her paper, which will now likely appear in PLoS Computational Biology, as she originally hoped.

In his presentation to the Berlin Conference, law professor Michael Carroll listed five types of readers who should have access to research output, and who do have access when open access becomes the default. On his list of such “unanticipated readers” were serendipitous readers, who find an article that is important to them without knowing they were looking for it, under-resourced readers  (like the high-school author described above), interdisciplinary readers, international readers and machine readers (computers that can derive information from a large corpus of research works).  By the way, the category of serendipitous readers includes all those who might find an article using a Google search and read that work if it is openly available but will encounter a pay-wall if it is not.

Open access serves all of these unexpected readers of scholarly works.  As Carroll summed up his point,  every time we create an open environment, we get unexpected developments and innovations.  We have come far enough down this road now that the burden of proof is no longer on open access advocates, it is on those who would claim that the traditional models of publishing and distribution are still workable.

Is the Copyright Office a neutral party?

Recently a friend was asking me about my job title.  I was hired primarily to address copyright issues, but my title is “Director of Scholarly Communications.”  I am, in fact,  involved in lots of other issues encompassed by that broader title, but my friend made the valid point that universities are beginning to divide the copyright function out from digital publishing services and the like.  Her question got me thinking, and I concluded that I like my title precisely because it emphasizes the context of my work.  My job is not simply to know the copyright law, but to help apply it, and even work to change it, in ways that best serve the needs of scholars and teachers on my campus.  I am not hired, I don’t believe, to be neutral; I have a defined constituency, and my title is a constant reminder of that fact.

This conversation came to mind as I read two documents released by the Copyright Office in the past couple of weeks.  The constituency of the Copyright Office, presumably, is the public as a whole, and its role ought to be more neutral than mine, seeking the balance between protection and use that has always been at the core of copyright law.  In these two documents, however, a different tendency shows through: the assumption that the proper role of the Copyright Office is advocacy for anything that increases copyright’s strength and reach.  The needs and voices of owners in the legacy content industries seem to get extra weight, while the needs of users, who are more diffuse and do not have as many dedicated lobbyists, get less attention.

In its brief report on “Priorities and Special Projects of the United States Copyright Office” the Office details the studies, legislation and administrative projects that it intends to work on for the next two years.  In its legislative priorities, the first three listed are “Rogue Websites,” “Illegal Streaming,” and “Public Performance Right in Sound Recordings.”  Each of these priorities is an endorsement of particular legislative action by Congress — the first clearly endorses the bill alternately called PROTECT IP or SOPA or ePARASITES.  Indeed, each of these priorities seems to be dictated by the current lobbying activities of the entertainment industry, and each is a very much non-neutral endorsement of greater scope for and stronger enforcement of copyright protections.  There is little sign that the voices arguing that copyright already reaches too far and is over-enforced are being heard.

Other priorities do seem more neutral.  The Copyright Office wants to “continue to provide analysis and support to Congress” on orphan works, and it repeats its intention of making legislative recommendations based on the report of the Section 108 Study Group, which addressed the specific exceptions to copyright for libraries.

This last priority created a rather humorous situation for me last week when I was contacted by a member of the library press seeking information about this “new” report on section 108.  In fact, the report was issued in 2008.  Nothing has come of it in three and a half years, and, even if all of its recommendations were suddenly adopted, it would do little to improve the situation for libraries because the Study Group was unable to find agreement on the most pressing issues.  The Copyright Office does not mention the more comprehensive proposal on library exceptions made by IFLA to the World Intellectual Property Organization.

My real interest focused on the Copyright Office’s desire to do something about orphan works.  In addition to the legislative priority listed, the report promised a study document on mass digitization, which would naturally have to address orphan works, and that document was issued a few days ago.  Here we get a glimpse of how the Copyright Office plans to address the difficulties posed by orphan works.

The report makes the CO’s preferred approach very clear — “As a practical matter, the issue of orphan works cannot reasonably be divorced from the issue of licensing.”  This is an interesting statement, since the last proposal to resolve the issue that the CO nearly got Congress to adopt a few years ago did not rely on licensing, but addressed the issue by reducing the potential damages for using an orphan work after a reasonably diligent attempt to find a rights holder.  There clearly are other approaches, but the appetite  of industry lobbyists has, since the Google Books case, been whetted by the possibility of profiting from other people’s (orphaned) works, and the CO has been swept up in the licensing fever.

The report gives a detailed and very helpful discussion of the various types of licenses that could be used, but it never addresses the question that seems most pressing if orphan works are to be licensed — who is going to get paid?  If works are genuinely orphaned, there is no rights holder to pay, so orphan works licensing proposals must decide who is going to sell the licenses and where the money is to go.  Other countries have adopted licensing schemes, and often the licensing is done by the government; in the U.S., however, I think we have to assume that private collective rights organizations (who are given prominent mention in the report) would collect the money and, perhaps after some period of time, keep it.

This report is about more than orphan works, of course; it addresses the broad issue of legal concerns in mass digitization.  I was interested to see how fair use was treated, both in that broader context and in relation to orphan works.  I was disappointed in both regards.

In the general discussion of mass digitization, fair use is pretty much summarily dismissed by the Copyright Office.  It begins with the assertion that “the large scale scanning and dissemination of entire books is difficult to square with fair use,” which seems to beg the question.  The rest of the section reads like a legal brief against Google’s position on fair use in the book scanning case.  Nowhere does the report consider what difference it might make for a fair use claim if the project were wholly non-commercial, nor does it consider that fair use might be part of an overall project strategy that included other approaches, such as selected permission-seeking.  The report treats fair use as an all-or-nothing approach and dismisses it as unworkable without the nuanced consideration it demands.

More troubling is the fact that, having dismissed fair use in the broader context of mass digitization, the Copyright Office never discusses it in the more narrow field of orphan works.  With orphans, of course, fair use seems like a stronger argument, since there is no market that can be harmed by the use, especially if it is itself non-commercial.  But it seems clear that the CO is committed to creating such a market by pushing a licensing scheme, and is not willing to discuss any option that might undermine its predetermined conclusion.

Silly Season

It is traditional in political reporting to refer to the run-up to primary elections as the “silly season” because of all the amazing things candidates will say while trying to appeal to different constituencies and bear up under the glare of media coverage.  Recently this time of year has also seen some developments in the copyright world that justify some bewildered head shaking.

On the legislative front, the PROTECT IP act has been introduced in the Senate for a while now.  It is problematic even in its Senate form, since it would allow private actions to attack web domains based only on accusations of IP piracy, without the usual due process that is necessary to sue an alleged infringer.  But the act got worse, and stranger, when it was introduced into the House of Representatives.  A provision was added that would roll back the “safe harbor” provision for ISPs from the Digital Millennium Copyright Act and impose an affirmative obligation for web hosting services to police content uploaded by users.  This is in keeping, I am afraid, with the overall effort to force others — the ISPs and the government — to foot the bill for enforcing copyrights owned by the legacy content industries.  Discussions of this bill are all over the Internet; a representative one can be found here.

The argument that we should change the DMCA is becoming very common.  The content industries do not like the bargain they made a decade ago, and seem increasingly to want to shut down the most productive aspects of the internet in order to preserve their traditional business models.  An excellent argument for why we should not let this happen can be found in this discussion of copyright, the Internet and the First Amendment from Thomas Jefferson scholar David Post.

The real silliness, however, comes in the decision to rename the bill in the House, from PROTECT IP to ePARASITES.  I sometimes believe there is a congressional office for acronyms, staffed by some very silly people. When I first heard this new acronym, I thought it was a parody.  Although I now know that the “parasites” referred to are websites that facilitate unauthorized sharing, I initially concluded that it was a joke referring precisely to those industries supporting PROTECT IP who want the taxpaying public to bear all the costs for their failures to innovate.

Another round of silliness was created this week by the filing of a Second Amended Complaint in the lawsuit between the trade group AIME and UCLA over digitally streamed video.  The judge dismissed the First Amended Complaint about a month ago but gave the plaintiffs (AIME and AVP Video) permission to refile.  This they have now done, but going through the (long) complaint, I fail to see how they have really addressed many of the judge’s reasons behind the dismissal.

A major reason behind the dismissal was lack of standing for AIME and sovereign immunity protections for the defendants.  I noted at the time that the lawsuit would really need different plaintiffs and different defendants to go forward.  Clearly AIME did not agree, since the new complaint names exactly the same defendants, simply with “and individually” added each time the previous document said they were sued in their official capacities.  This new document does not remove the claims against them in their official capacities, even though the judge already dismissed those claims, and it does not add any facts that I could see that would justify a suit against the individuals.  So the refiling really just seems to double down on the failings of the first complaint.

Also, AIME tried to rescue its “associational standing” by pointing to “injury in fact” to the association itself.  Such injury, incredibly, seems to be primarily the damage done to AIME by its relentless pursuit of this lawsuit, which it brought in the first place.  Staff time has been consumed, they say, and the reputation of the association harmed.  New members are reluctant to join.  Why any of this confers standing on AIME against UCLA is beyond me; members may not be joining because they do not want association dues spent tilting at windmills.  Also, the judge already rejected the argument that “diversion of resources” for the lawsuit was enough to establish the required showing of injury.  It is not clear to me that simply adding more detail can rescue this argument.

The new complaint again asserts that sovereign immunity was waived by UCLA when it signed license agreements with a jurisdictional clause, and by its own policy of obeying federal copyright law.  Both of these arguments were already rejected by the judge, so reasserting them seems more like a criticism of the previous decision than a new argument.

On the substantive arguments, I also can see very little that has been added that should change the outcome here.  The license between AVP and UCLA is reasserted, with the same language that caused the judge to read it in a way that undermined the first set of copyright claims.  One addition is the claim that the UCLA system is “open” (which the license does not allow) because it has a guest feature that can be turned on, but there is no assertion that it ever has been turned on in fact.  Another addition is a set of state law claims for tortious interference with a contract and interference with a prospective business advantage.  Like the previous state law claims, these seem entirely founded on the copyright infringement claim, so I see no reason they would not be preempted by the resolution of the copyright issue, as the previous claims were.

In both these instances, I think we see the emotion of righteous indignation overcoming reason.  The very thing, it seems, that makes this the silly season.

The ironies of risk avoidance

Start with a basic empirical premise: most librarians (and often the lawyers who advise them) are extremely averse to risk when it comes to copyright matters.  Years of experience convince me that this is true.  Many activities in a library create risk — letting the public in the door, hiring employees, signing licenses and other contracts — and we usually have policies and procedures in place to reduce and manage those risks.  But with copyright we tend to take an all-or-nothing approach; either we must be 100% confident that there is no risk of infringement or we will not undertake the project.  Copyright seems so big and scary that we decline to manage risk and try — we are never actually successful — to avoid it altogether.

The first irony about this approach is that the attempt to avoid risk actually creates risk.  Every time we make a choice we both select one option and reject others.  If I stay at work until 5 pm today, I will miss the chance to frolic in a gorgeous fall afternoon.  Economists refer to the “opportunity costs” that are part of every decision.  Perhaps the risk of these types of losses is most famously summed up by John Greenleaf Whittier, who wrote, “Of all sad words of tongue or pen, the saddest are these: ‘It might have been!’”

When we consider any decision, we need to balance the risk that something bad will happen if we act with the risk that something good will be lost if we choose not to act.  For projects in libraries, which usually involve digitizing material in order to improve access, the loss of a real benefit is also a risk, and should be weighed against the risk of infringement.  If we make a serious attempt to balance the risk, recognizing that there is indeed risk in both choices, we will be compelled to make, I believe, a more careful and nuanced assessment of the copyright situation involved in any such undertaking.

Which brings me to another irony, a small but recent example of exactly the risk avoidance (as opposed to risk management) that I am complaining about.  Back in the late spring, I wrote a short paper about risk management for large-scale digital projects.  In it I advocated an approach that looks at multiple aspects of copyright law when evaluating a potential project and balances the risk involved, after careful analysis, with the pedagogical reward to be gained by going forward (or lost, if a negative decision is reached).

Several colleagues who read the paper suggested that I should publish it, and so earlier this week I submitted it to D-Lib Magazine.  I was pleased to get a very quick acceptance, but less pleased to be told that the article would be published as an “opinion piece” and carry a disclaimer distancing the editors of D-Lib from my “opinions.”  Why, I asked, was my article different from other academic writings, which almost always make a case for some position or action over against alternatives?  The reason, I was told, was the subject matter.  The editors did not know enough about copyright, they said, to be assured that my position was a sound one.  They went on to say that their lawyers discouraged publishing anything about copyright, since readers might “take this as legal advice to their detriment” and create liability for the magazine.

This struck me as odd.  Surely they do not believe that they, or any other journal editorial board, actually warrant the accuracy and soundness of every article they publish?  I seriously doubt that the D-Lib editors personally guarantee that technical articles they publish, which sometimes recommend software or hardware solutions, will never lead to unanticipated bugs or security breaches.  And I have never once heard of an actual case in which a journal, whether peer-reviewed or not, was held liable for bad information in an article they published (as opposed to where the article is itself infringing or defamatory).

The logic here is flawed because what is really behind this approach is fear, and the irony is that this fear prevents D-Lib from addressing a topic that is of great importance to the world of digital librarianship.  Fear could be dispelled by robust discussion and plentiful information about copyright issues, but instead a leading journal elects to remain silent, and thereby reinforces the perception that this is an area librarians cannot address and must instead flee from.

In the age of digital libraries, nearly all of our decisions implicate copyright in some way.  As a profession, we cannot afford to hide our heads in the sand; we need to seek ways forward, and the search for workable options will require information, debate and discussion.  Copyright cannot remain a forbidden topic.

The decision not to publish my paper (I elected not to have it appear marked off as an opinion piece) probably is not any real loss to D-Lib or to the profession as a whole.  The paper will appear later this year in another journal, and I will post it here as soon as I am able.  Others must decide if it is any good or not.  But regardless of the quality of my argument in that paper, the topic, and the sensitive analysis of risk that it demands, is not something we can avoid.

By the way, this blog post is an opinion piece.  It, like everything else I write here, represents my own opinion and not the official position of Duke University.  But you knew that without my saying so, didn’t you?

Streaming video case dismissed

Yesterday a judge in Los Angeles dismissed the copyright infringement lawsuit brought by AIME, the Association for Information Media and Equipment, against UCLA.  The lawsuit had alleged that UCLA was infringing copyright by ripping DVDs to create a digital stream, which was then made available through a closed course management system to students in a particular class.  There are several technical issues that dominate the decision, but there is a little bit of good news, hardly definitive, for the fair use claim that was being made by UCLA.

The two major reasons for the decision were sovereign immunity — the doctrine that state entities can seldom be sued in federal court — and lack of standing.  AIME tried to argue that UCLA had waived its sovereign immunity when it signed a contract with AIME, but the judge rejected that argument as too broad.  So a major part of the decision applies only to state entities; it does not translate to private universities.

As for standing, AIME had a little bit of the “Righthaven” problem; they simply did not own the copyrights that were allegedly infringed, so they were not the proper plaintiffs to bring the case.  AIME wanted to claim what is called “associational” standing as a group that represents individual copyright holders, but the judge rejected that idea; she held that “individual copyright owners’ participation is necessary” in order to assert copyright infringement.  It has never been entirely clear why the lawsuit was brought the way it was, and it is a relief, from the point of view of legal consistency, that this attempt to assert associational standing has failed.  With Righthaven and a few other groups trying to create a business model based on copyright trolling, the failure of this claim for standing represents another welcome barrier to that activity.

Not, I hasten to add, that AIME is in any sense a copyright troll.  The lawsuit was, in my opinion, inept, but it was clearly motivated by zeal and a sense of righteous indignation rather than baser motives.  Calmer judgment simply got overwhelmed.

On the copyright issue, which is where I was most anxious to see the reasoning, everything pretty much turned on language in the AIME license that granted public performance rights to the licensees.  Given that language, the case would seem to have been doomed from the start.  But as a result, UCLA did not have to make the case that the streaming, as a potentially public performance, was justified by one of the specific educational exceptions in section 110 of the Copyright Act.  That argument may yet be plausible, but it was not decided in this case.

What solace the higher education market can take from this case is in a few lines in which the judge seems to accept without discussion two assertions — that streaming is not a “distribution” such as to infringe the exclusive right to authorize distribution, and that copying incidental to a licensed right (the right of public performance) was fair use.  These points were not, as I say, discussed or unpacked, just accepted as part of a general dismissal of the copyright infringement claim for “failure to state a claim upon which relief can be granted.”  Thus this ruling does not offer the higher ed community a slam-dunk fair use victory, it merely sharpens a couple of the arrows in the quiver of that argument.

It is interesting to note that the copyright claims, along with most of the others, were dismissed “without prejudice.”  This means that AIME could refile them, and the judge gave AIME two weeks to do so if it wants.  The problem, however, is that all claims against the Regents and against the individual defendants in their official capacity were dismissed with prejudice.  So AIME could file the same claims again, but not against these defendants and not until it solved the standing issue.  A claim against the individuals as individuals would still be possible, but it is doubtful it would have the effect AIME wants; instead, it would look like the act of a desperate bully who does not know when to retire from the field.

Whatever happens next in this case, if anything does, what the dismissal without prejudice should tell the rest of us is that the issue of most significance to higher education — whether or not streamed video for a course-related audience is fair use — has not been brought to a final judgment.

Really, what has Princeton done?

When it was announced that the faculty at Princeton University had unanimously adopted an open access policy for scholarly articles they authored, it was great news for the open access community, but it was also the cause of some overheated rhetoric.  Since the operative language of the Princeton policy differs very little from the language adopted at Duke back in March 2010, this is a good opportunity to reflect on what has, and has not, been done.

In all such policies the university is given a license in the works that is prior to any copyright transfer to a publisher.  Technically, therefore, the rights that are transferred are subject to that license; hence the language of “banning” the wholesale transfer of copyright, which has received a lot of attention.  I wanted to point out, however, that this rhetoric about a “ban” did not come from Princeton itself, but from a single blogger, to whose post all the stories that use that language point.  That blogger has now changed the post, including a quote from a Princeton official saying that the faculty is not being “banned” from anything.  Even the URL has changed; the corrected version of the post is here.

The differences amongst universities regarding these policies come in implementation.  Some universities may elect to act in a way that is contrary to the terms of the publication agreements the authors enter into (by posting articles or versions of articles where the publication agreement purports not to permit the specific posting).  Doing so would seem to be legally permissible under the claim of a prior license, but it could also put the faculty members in a difficult position unless they are very careful about what they sign (as they should be but seldom are).  An alternative is for the university to exercise the license in a more nuanced way, taking into account the various publisher policies as much as possible.  That, of course, makes open access repositories much more labor-intensive and difficult, especially as publishers change their policies to try to thwart these expressions of authorial rights.  How Princeton will actually implement its policy is still an open question, since they do not yet have a repository of their own.

Earlier today I received an inquiry about the Princeton policy from a colleague at another university.  To what degree, he asked, is this similar to the university simply claiming that scholarly articles are work made for hire?  My answer, of course, was that these policies are the very opposite of an institutional claim of work for hire.  If that were done, in fact, no such license would be necessary.  But these policies are founded on faculty ownership and express the desire of a faculty, as copyright owners, to manage their rights in a more socially and personally beneficial way.  It is important to note that the open access policies now in place at a couple of dozen U.S. institutions have all been adopted by the faculties themselves; they decided to grant a non-exclusive license to the university, which, again, they could not do except as copyright owners.

Probably the most important fact about these policies, indeed, is that they represent an assertion of authorial control.  We so often hear publishers and others in the content industry talk about protecting copyright, by which they usually mean the rights they hold by assignment from a creator, that it is salutary to remind academics that they own copyright in their scholarship from the moment their original expression is fixed in tangible form.  Transferring those rights to a publisher is one option they have, and it has become a tradition.  But it is only one option, and the tradition is beginning to be questioned, as this recent article from Times Higher Education and this one from Inside Higher Ed forcefully demonstrate.

Open access policies are not, at their root, either “land grabs” by institutions or acts of defiance aimed at publishers.  They are simply a recognition of the fact that authors are the initial owners of copyright, and they express a desire by those owners to manage their rights intentionally and in a way that most clearly benefits the goals of scholarship.
