Maybe not so revolutionary after all

When I wrote a few weeks ago suggesting broader latitude for fair use in the case of academic and scholarly works, I contrasted that position to the more “revolutionary” one proposed in the title of Steven Shavell’s recent article “Should Copyright of Academic Works be Abolished?”  Shavell, who is professor of law and economics at Harvard, premises his argument on the same phenomenon that I stressed in my blog post — the lack of incentive provided by copyright for academic authors.  He builds an elaborate economic model to demonstrate that authors would be as happy or happier to continue to create their works and society as a whole would be better off if academic copyright were eliminated, as long as, he suggests, publication costs were subsidized by universities or grantors.  He writes, “if publication fees would be largely defrayed by universities and grantors, as I suggest would be to their advantage, then the elimination of copyright of academic works would be likely to be socially desirable.”  Read in its entirety, however, this position is not as revolutionary as it might seem, and probably is less desirable from the perspective of academic authors than the suggestion I have made about broad fair use.

For one thing, a broad interpretation of fair use would help address one of the problems that Shavell is trying to solve with his proposal — the labor and permission costs associated with providing material for students in colleges and universities.  But more important, Shavell’s proposal that academic copyright be abandoned addresses neither all the legitimate concerns of academic authors nor all of the problems with the publication system as it now exists.

When Shavell speaks of universities defraying the costs of publication, it is important to remember that efforts at open access on campuses are one way in which universities are already doing this.  Shavell is well aware of this, and discusses open access movements at some length.  He ultimately concludes that such movements will be too slow because of what he calls the individual versus social incentive problem.  Each individual lacks sufficient incentive to make the change, even though the result would benefit society as a whole.  The result is that Shavell decides that a change in the law is needed, removing academic works (which he is at pains to define) from the scope of copyright protection.

My biggest concern with this proposal is that it neglects one benefit which academic authors do gain from copyright, the ability to control the dissemination of their work and, especially, the preparation of derivative works.  Of course, that control is of little use as things stand today, because copyright is so freely given away by academics who must then hope that the commercial publishers to whom they cede their rights exercise those rights for the best interests of the authors.  That is happening less frequently, unfortunately.  One of the reasons the Creative Commons license is such a benefit to academics is that it allows authors to both authorize broad reuse of their work and to assert control over that reuse, especially in regard to attribution, which American copyright law does not protect.  In order to use a CC license, however, one must be a copyright holder; copyright is the “teeth” that enforce the license.  So any analysis of the incentive structure for academic writing must factor in the potential loss of control when considering abolishing copyright in academic works.  This is one reason I have suggested broadening fair use for academic work rather than eliminating copyright altogether.

To me, what this suggests is that the problem with academic publishing is not copyright per se, but the transfer of copyright to corporate entities whose goals and values are usually quite different than those of academic authors.  Because he does not really consider open access a solution to the problem he outlines, Shavell assumes that the publishing structure would remain very much intact under the no-copyright regime he suggests, simply with a different mechanism for paying the bill.  But at least one open access option — a prior license granted to the institution by faculty in their scholarly writings before they submit those works for publication — could restructure publishing in the right direction without losing those benefits that academics really do get from owning copyright.

Shavell does briefly mention such prior licenses, such as those adopted by Harvard and MIT, but does not treat them extensively and does not recognize that some of the difficulties he finds with open access movements would be mitigated by the prior license mechanism.  He cites two major problems that would prevent open access from quickly solving the problem he finds with scholarly publishing — the fact that authors will not insist on open access if they have to pay for it and the alleged fact that open access journals lack prestige.  Neither of these problems exists for the prior license scheme, which, when combined with a broad latitude for fair use of academic writing, offers, at the very least, a significant intermediate step toward resolving the dilemma of scholarly publishing.

It may be that copyright should be eliminated for academic works, but it would hardly be easy to accomplish.  Nevertheless, Shavell’s analysis of the state of academic publishing, and its future, is complex and interesting.  But while we wait for Congress to move in the direction he suggests (if it ever does) the adoption of institutional licenses for open access to faculty writings and a broad latitude for fair use of those writings, both of which could be implemented immediately, are intermediate steps that would return a great deal of control to the authors for whom that is the major incentive.

Fairness breeds complexity?

The title of this post is an axiom I learned in law school, drilled into us by a professor of tax law but made into an interrogative here.  Because the copyright law is often compared to the tax code these days, I have usually just accepted the complexity of the former, as with the latter, as a function of its attempt to be fair.  Because different situations and needs have to be addressed differently in order to be fair, laws that seek fairness inevitably (?) grow complex. But a recent blog post by Canadian copyright law professor Michael Geist, nicely articulating four principles for a copyright law that is built to last, has made me ask myself if simplicity is a plausible goal for a comprehensive copyright law.

Geist’s four principles are hard to argue with.  A copyright law that can last in today’s environment must, he says, be balanced, technologically neutral, simple & clear, and flexible.  That last point, flexibility, is the real key, since designing a law that can be adapted to new uses and new technologies, many of which are literally unforeseeable, requires that the focus be on first principles rather than outcomes.  This is different from the tax code, and it may provide the path to combining fairness with simplicity.

The principle of flexibility explains why fair use is an effective provision of US copyright law.  As frustrated as some of us get trying to navigate the deep and dangerous waters of fair use, it has allowed US law to adapt to new situations and technologies without great stresses.  In fact, Geist’s brief comment on fair dealing in Canadian law suggests (implicitly) that it should be more like US fair use; he argues that the catalog of fair dealing exceptions should be made “illustrative rather than exhaustive,” so courts would be free to build on it as technologies change.

In recent posts I have spoken of adapting fair use so that it gives more leeway to academic works than to other, more commercial intellectual properties.  Even though Geist is explicit in his post that “Flexibility takes a general purpose law and ensures that it works for stakeholders across the spectrum, whether documentary filmmakers, musicians, teachers, researchers, businesses, or consumers,” I do not think there is any contradiction here with asking that academic works be treated differently in the fair use analysis than a recently released movie, for example, might be.  Fair use would be applied in the same way to each, but because fair use appeals to the motivating principles of copyright law, it asks us to examine the circumstances of each type of material and each kind of use and measure them against those principles.  This is precisely how flexibility is accomplished, and I argue that the result of this uniform application of principles will be different outcomes for different types of works.

Geist’s approach to digital locks — DRM systems — is quite similar, asking us to look at first principles that underpin copyright law when deciding how to treat any particular technology.  Specifically, he suggests that forbidding or permitting the circumvention of such digital locks must be tied to the intended use for which the lock is “picked” if copyright balance is to be respected.  An added advantage of this approach is that it is much simpler — another core principle — than the current approach in the US, where categorical rules are enacted and then a series of complex exceptions are articulated every three years.  We will see shortly how that process will play out for the next three years, since the exceptions will be announced in a couple of months, but it is inevitable that the result will be unfair to some stakeholders and probably disappointing to all.  Far better that we heed Geist’s call for an approach based on first principles.  Perhaps Canada, as it considers a comprehensive overhaul of copyright law, can lead the way.

Moving beyond the photo album

Last week G. Sayeed Choudhury, Associate Dean for Library Digital Programs at Johns Hopkins University, came to Duke to talk with the staff of the Libraries about e-scholarship and the changing role of the university library as part of our strategic planning process.  His presentation and conversations were fascinating, and we were left with a great deal of thought-provoking material to consider.  I was particularly struck by one observation, which was actually Choudhury quoting from a 2004 article that appeared in D-Lib Magazine by Herbert Van de Sompel, Sandy Payette, John Erickson, Carl Lagoze and Simeon Warner.  In the article, “Rethinking Scholarly Communications,” the authors assert their belief that “the future scholarly communications system should closely resemble — and be intertwined with — the scholarly endeavor itself, rather than being its after-thought or annex.”  The article further makes the point, perhaps more obvious now than it was five years ago, that “the established scholarly communications system has not kept pace with these revolutionary changes in research practices.”

In developing this point, Choudhury talked about the traditional research article as a “snapshot” of research.  Those snapshots are increasingly far-removed from the actual research process and have less and less relevance to it.  Indeed, the traditional journal article seems more like a nostalgia item every day, reflecting the state of research on a particular topic as it was at some time in the past but beyond which science will have moved long before the formal article is published, thanks, in part, to the many informal ways of circulating research results long before the publication process is completed.

Choudhury called on libraries to move past a vision of themselves as merely a collection of these snapshots and become more active participants in the research process.  He recounted a conversation he had with one researcher who, in focusing on the real need he felt in his own work, told Sayeed that he did not care if the library ever licensed another e-journal again, but he did need their expertise to help preserve and curate his research data.  The challenge for libraries is to radically rethink how we spend our money and allocate the expertise of our staffs in ways that actually address felt needs on our campuses and do not leave us merely pasting more snapshots into a giant photo album that fewer people every day will look at.

Recently I have seen a lot of fuss over an article that appeared in the Times Higher Education supplement that posed the question “Do academic journals pose a threat to the advancement of science?”  The threat that the article focuses on is the concentration of power in a very few corporate hands that control the major scientific journals.  But read in the context of the radical changes that Choudhury, Van de Sompel and others are describing, it is clear that the threat being discussed is not a threat to the advancement of science but to the advancement of scientists.  Scholars and researchers have already found a way around the outmoded system of scholarly communications that is represented by the scientific journal.  The range of informal, digital options for disseminating research results will not merely ensure but improve the advancement of science.  All that is left for the traditional publication system to impede is the promotion and tenure process of the scientists doing that research.

This, of course, is the rub, especially for libraries.  Traditional scientific journals are increasingly irrelevant for the progress of science, but they remain the principal vehicle by which the productivity of scholars is measured.  One researcher told Choudhury very frankly that the only reason he still cared about publishing in journals was for the sake of his annual review.  Sooner or later, one hopes that universities will wake up to the tremendous inefficiency of this system, especially since the peer-reviewing on which such evaluations depend is already done in-house, by scholars paid by universities but volunteering their time to review articles for a publication process with diminishing scholarly relevance.  Nevertheless, the promotion and tenure system still relies, for the time being, on these journals, which presumably cannot survive if libraries begin canceling subscriptions at an even faster rate.  The economy may force such rapid cancellations, but even if it does not, pressure to move to a more active and relevant role in the research process will.  The question librarians must ask themselves is whether supporting an out-dated system of evaluating scholars is a sufficient justification for the millions of dollars they spend on journal subscriptions.  Even more urgently, universities need to ask if there isn’t a better, more efficient, way to evaluate the quality of the scholars and researchers they employ.

A model copyright law

Back in April, when I was writing about the experiences I had at the eIFL-IP conference in Istanbul, I referred several times to the “Draft Law on Copyright, Including Model Exceptions and Limitations for Libraries and Consumers.”  A copy of the Draft Law was distributed to the IP Conference participants “hot off the presses.”  When I mentioned it back in April, I promised to provide a link as soon as it became available on the eIFL website.  I am now delighted to be able to direct folks to the full text of the Draft Law, available as a PDF and soon to be accompanied by an HTML version for easy browsing.

The goal of the Draft Law is to provide librarians and their legal advisers with practical ideas to help them  understand and influence the policy making process when national copyright laws are being revised.  It is directed toward developing countries, from which the majority of eIFL’s membership is drawn.  But there is much for all of us, in the US and the EU as in the developing world, to learn from this document.  Its clear set of definitions and the explanatory notes that accompany each exception and limitation make it ideal for gaining a synoptic view of the state of international copyright law.  Most important is the consistent focus on the public interest and the socially beneficial purpose that copyright law is intended to serve.

It has become a regular complaint about international copyright law that great strides have been made in harmonizing the levels of protection for intellectual property around the globe, but little effort has been made to harmonize limitations and exceptions.  Indeed, industry lobbyists and even the U.S. Trade Representative often pressure developing countries to adopt draconian levels of IP protection while encouraging them to ignore or drastically limit the role of limitations and exceptions.  The result is often that copyright law becomes an obstacle to intellectual and creative development in many countries.  The World Intellectual Property Organization has seemed to awaken to this problem over the past two years, and has recently included copyright limitations and exceptions as part of its discussion, especially in the context of its so-called “development agenda.”  The eIFL Draft Law is an important contribution to this vital discussion, especially because it offers model limitations and exceptions that are designed to facilitate access to knowledge and the public interest.  It is a document that deserves study in both the developing and the developed world as we consider how IP law can serve its purpose of encouraging learning and creativity rather than stifling them.

Choosing between reform and revolution

A recent article by Steven Shavell called “Should Copyright of Academic Works be Abolished?” caught my notice, as I am sure it did for many others, because of the radical question posed in its title, but it ultimately focused my attention on a different article altogether. I hope to have more to say about Professor Shavell’s work in a later post, but here I want to record my initial reaction, which was that copyright in academic works need not be abolished but should be heavily reformed. And the best reform I can think of (short of legislative revision) is the re-evaluation of fair use, based on more attention to the second fair use factor, that is suggested in Robert Kasunic’s article “Is That All There Is? Reflections on the Nature of the Second Fair Use Factor.”

The second fair use factor – the nature of the copyrighted work – is usually treated very mechanically by courts, and sometimes is ignored altogether. When it is discussed, it is in a few sentences addressed to only two issues – whether the work is published or not and whether it is creative or factual. Kasunic, who is Principal Legal Advisor to the Copyright Office, suggests that this treatment seriously undervalues the importance of this part of the fair use analysis. He argues convincingly that the second factor, when examined carefully, offers a wealth of information that could improve consideration of all of the fair use factors. Indeed, one of his major points is that the fair use factors are a guide for fact-gathering, not a mechanical “tally sheet” or scorecard.

If courts pursued the probing questions about the nature of an original work that Kasunic suggests when considering a claim of fair use, the result for academic work would be, I think, truly revolutionary, because those courts would learn how much more leeway should be accorded to academic work than would be appropriate for other types of work. Kasunic argues that part of the scrutiny that should be applied to the original work would ask what the particular incentive structure for that type of work is. When the purpose of copyright law is understood properly, as a mechanism to give incentives for creation, the expectations of the authors and creators are really the only guide for what uses should be compensated and what uses need not be. Thus it is important to ask what the normal incentives for creators of that particular type of work are and what markets supply those incentives. Unexpected markets, or markets that benefit only secondary owners of copyrights rather than authors, are not relevant in deciding if a particular use is fair or not.

When academic work is considered, it is clear that the scope of fair use would be very broad under this more sensitive and sensible analysis. Academics are usually not paid for their most frequent works of authorship, journal articles, and compensation for book authors is meager. Thus the protection of various markets is not necessary for this type of work in order to effectuate the purpose of copyright; incentives for authors clearly come from somewhere else. Also, it is usually a secondary copyright holder who is trying to protect those markets, which further reduces their value as an incentive for creation. Finally, secondary markets, such as permission fees for electronic reserves and course packs, are usually wholly unexpected, and therefore have no incentive value, from the point of view of academic authors. In fact, I once had a faculty author ask me if a check from the Copyright Clearance Center was some kind of scam, so unexpected was the tiny windfall he was being offered.

As Kasunic points out, different types of authorship receive different rights under our copyright law; it is logical, therefore, to also think about fair use differently depending on the specific facts that surround the creation of a particular category of work. Academic works would, in such a fact-specific analysis, be subject to much more fair use than a commercial novel, film or song. Indeed, Kasunic selects as the example with which he closes his article the case of academic authors and fair use claims for course packs and electronic reserves. Although he does not spell out a conclusion, it is clear from his discussion that the facts uncovered by the searching analysis he recommends would greatly favor a liberal application of fair use for that type of work.

Since an actual case such as Kasunic describes is currently being litigated – the lawsuit against Georgia State University alleging copyright infringement in the distribution of electronic course readings – it is hard to resist reading his article with that case in mind. Kasunic presents, to my mind, a compelling argument that the court should look very carefully at why the works in question were created in the first place and focus a fair use finding on the incentives for creation and not extraneous claims for windfall profits made by secondary copyright holders. This would be a sensible application of a factor that has largely been treated as unimportant; it would take seriously the intent of Congress and its instructions to courts when it codified section 107. And it would dramatically increase the likelihood that many of the uses in question at Georgia State (at least those uses that involve academic writings) would be found to be fair use.

Libraries versus Salinger?

On Monday three major library associations, along with several other groups dedicated to supporting free expression and new creative work, filed a “friend of the court” brief in the appeal of the decision made in June to issue an injunction prohibiting the US publication of “Sixty Years Later: Coming Through the Rye,” a continuation of the story of Holden Caulfield that was begun in J.D. Salinger’s “Catcher in the Rye.”  I wrote several times about the case last month, and had a small role in rounding up the “amici” who participated in the brief, but I read the final product for the first time last night.  A couple of points struck me in the section of the brief addressing fair use that I would like to highlight.  A discussion of the case, and the arguments presented by the library organizations, from Tony Falzone, the Counsel of Record on the brief, can be found here.

First, I was struck by the excellent arguments made about how vital fair use is to supporting new creation, especially in the realm of creative literature.  As theologians (and Julie Andrews) have known for years, nothing comes from nothing, and the edifice of creative writing is always built on an extensive foundation.  From Shakespeare to Leonard Bernstein, Charles Lamb to Stanley Fish, new authors and literary critics use the grist provided by earlier writers to feed their imaginative mills.  In this context, the brief quotes a really amazing question from the judge who issued the injunction being challenged.  During the hearing she asked, in response to the argument that “Sixty Years Later” offered readers a new way of looking at the now quite old story of “Catcher,” “do people need [the new] version in order to view the story differently?  How about just reading it twice, or maybe five years later…”  Of course, this is not how literature or literary criticism works.  New works are never sui generis (not even Catcher in the Rye), and Judge Batts’ logic would deprive each new author of those giants upon whose shoulders, Isaac Newton famously reminded us, we must all stand if we wish to see clearly.  Salinger may not think of himself as such a giant (and I admit I do not either), but he still cannot be afforded the level of control over future works that he seeks and that the court erroneously granted to him.

The depth of the problem is illustrated by the other aspect of the brief that caught my attention.  I had noted before that Judge Batts argues that some authors might actually have an additional incentive to write if they knew that they would be protected from sequels and criticism; if they were assured, in effect, that they would have the last word regarding the characters, events and ideas about which they wrote.  What I had not seen, but the brief points out, is that the Judge is here importing the concept of “moral rights” into US law.  Many countries do recognize the moral rights of attribution and “integrity” — the right to protect a work from alteration.  The United States does not recognize these rights, with one very limited exception, and restricts the copyright incentive to economic rewards.  The District Court ignores this policy decision, presumably made to support the free expression of ideas that is necessary for a democratic society, in favor of serving the desire of an author from a previous decade to exercise extraordinary control over the future of the ideas and characters he published.  As the brief points out, there is no logical endpoint to the reasoning invoked here; if an author were incentivized by protection from negative reviews or parodies, shouldn’t we forbid those as well?  This is not how copyright works, because its fundamental purpose is to encourage new creativity, while the Judge’s reasoning would create a sterile world in which creative dialogue would be impossible.

One of the news reports about the filing of this brief carries the title College Libraries v. J.D. Salinger.  It struck me as I read the brief how unfair that title is.  Librarians traditionally have great respect for authors, and libraries serve authorship by being places where the great ideas and expressions of the past are readily accessible to current writers and scholars.  Unfortunately, it is Salinger’s efforts to use copyright to ban a new book that are incompatible with both the mission of libraries and the purpose of copyright law.  Both libraries and copyright law support fundamental democratic values — free expression and the “marketplace of ideas” that asks each new intellectual creation to prove its worth by submitting to examination, criticism and even parody.  Occasionally copyright is wielded as a weapon, as in this case, to try to insulate some author from that rough-and-tumble exchange of ideas.  When libraries oppose those efforts, they are calling both copyright law and authors in a democratic society to stay true to themselves.

Orphan works, fair use and best practices

All of the above are recurrent themes in copyright and scholarly communications these days, but a recent publication from the Society of American Archivists has put a little different spin, I think, on an ongoing conversation.

The SAA released a revised version of their Statement of Best Practices on Orphan Works on June 17.  In the statement about the purpose of the report, the SAA makes specific reference to the two bills that were considered by Congress in 2008 as attempts to solve the orphan works problem (I blogged about those bills here and here).  The revised statement of best practices is an explicit attempt to define a term that was used in those bills — a “reasonably diligent search” for a copyright holder.  It would be only after such a search that a remission of the damages for a user of an orphan work would be available under these bills, and the SAA is trying to suggest standards and practices that define what is reasonable and diligent in the real world of archival materials.

It is important to realize that there are two different approaches to using orphan works.  The bills proposed in Congress take a remedies-based approach, offering a substantial reduction of the possible penalties for users of orphan works if they first undertake a reasonably diligent search and, subsequently, a rights holder surfaces and demands compensation.  The SAA statement of best practices is directly related to this approach and undertakes to define the steps necessary if such a search requirement is enacted.  But the statement of best practices also recognizes another option, reliance on fair use.  The statement says “Fair use may be a better rationale for creating a copy or publishing a copy of a document,” but it does not make an explicit connection between fair use and the best practices outlined in the remainder of the statement.

Fair use is an exception to copyright’s monopoly that already exists and is currently available to potential users of orphan works.  The value of the “reasonably diligent search” in the fair use context is that it would have, I believe, a profound effect on the fourth fair use factor, the impact on potential markets for the work.  If a search such as is suggested in the SAA statement is carried out and no rights holder can be located, that would go a long way toward showing that no market is being harmed by the use (especially if the use itself is educational and non-profit).  In this situation, it is hard to imagine a court actually rejecting a fair use defense, and even if such a defense did fail, archivists and other employees of non-profit institutions could still fall back on the partial remission of damages that is provided in section 504(c)(2) of the Copyright Act.  As the SAA notes, a reasonable belief in fair use, even when a court disagrees in the end, “is sufficient to protect the archivist from statutory damages.”  Such protection is not as complete as would be provided by an orphan works bill, but it is nonetheless substantial.  In the end, it really might make more sense for educational users to rely on fair use when contemplating a use of an orphan work, after employing some or all of the strategies in the SAA statement of best practices to try to find a rights holder.  Waiting for Congressional action may be both impractical and unnecessary.

Whether orphan works legislation proves useful or not will depend in large part on the details of any final bill.  There were strong hints last time that in order to gain approval, a bill would become so burdensome and expensive that the library and archives community would be better off without new legislation, simply relying on fair use.  No doubt that debate will be revived if any orphan works bills are re-introduced.  But the SAA has made an important contribution from either perspective that one takes.  In regard to potential legislation, they have offered a standard that legislators should consider as they draft a bill, as well as one that those concerned about the burden created by legislation can look at to measure the depth of the problem.  In regard to those who would rely on fair use, the statement of best practices provides a set of guidelines that can help give users confidence that they are truly making a good faith fair use effort.

Books in the cloud

There have been lots of reports flying around recently about the decision by Amazon to delete copies of two works by George Orwell (ironic, that) from the Kindle devices of folks who thought they had bought those books once and for all.  There are reports and comments about this here, here and here.  Technologically naive as I am, my first response to the story was “wow, I didn’t know they could do that.”  My second response was to reconsider my growing inclination to buy a Kindle.

The really meaty issues, of course, are not whether I buy a Kindle or not, but relate to copyright and privacy.  We are used to the idea that a copy of a book is mine to do with as I please once I have purchased a legally-made copy of it.  The same is true of a CD or a DVD; I can rent, donate, destroy, lend or resell the single physical copy that embodies intellectual property because of the provision in our law called the doctrine of first sale.  We know that first sale does not necessarily apply in the same way to digital files, but the Kindle case really pushes the issue.  Everything about the purchase of an e-book from Amazon looks like a sale, and consumers can easily be forgiven for thinking that they own something at the end of the process.  It is probably time for a legislative look at how first sale applies in the digital world.  While there have been suggestions of a “forward and delete” model for digital first sale that would allow a consumer to transfer a digital file to someone else as long as copies were not multiplied, this situation raises a more fundamental question.  When can a transaction that looks like a sale be treated as a mere license, and when will consumer protection concerns step in to enforce the privileges that go with a purchase?

It is important to note that, in the case of “1984” and “Animal Farm,” there was no question of preventing consumers from making unauthorized copies, which is the usual reason given for the assertion that first sale does not apply to the digital environment.  Here, it was Amazon that was selling the unauthorized copies, and consumers were deprived of ownership by remote action, even after they had purchased the books.  Why this could not be remedied by having Amazon pay the rights holder for the infringement, which would have been the solution if a publisher distributed print copies of a book without authorization from the copyright holder, is not clear to me.

One thing that several of the stories about this contretemps have in common is reference to Jonathan Zittrain’s must-read book “The Future of the Internet… and how to stop it.”  It is always good for an author when he correctly predicts a technological trend, and Zittrain got this one dead-on.  His warning that the Internet is moving away from the programmable devices that fostered so much innovation toward tightly-controlled, “tethered” appliances proves eerily prophetic when Amazon starts deleting books from consumers’ devices.  It makes reading Zittrain’s discussion of all of the implications of this development that much more important.

Zittrain had an excellent op-ed piece in the New York Times on July 20, called “Lost in the Cloud,” that discusses some of the privacy and censorship issues that are inherent in the development of these Internet appliances and makes brief reference to the Kindle issue.  I am happy to be able to report that Zittrain will be coming to Duke during the upcoming academic year as part of our Provost’s Lecture Series; I cannot imagine a more important discussion to have than one about the issues he is raising.

The elements of an open access quiz

When I was a first-year law student, my professor for Torts used to threaten to call us up at 3 am and demand that, before we were fully awake, we be able to recite the elements of a negligence claim – duty, breach, causation and harm (thanks, Prof. Darling).  I was reminded of this demand by a small part of a recent news story, which suggested three “elements” of open access that I would like to see every member of university promotion and tenure committees remember, even if quizzed in their sleep.

The news story, from the Chronicle of Higher Education, reports on an unusual tenure process at the College of New Jersey.  The Dean and faculty panel recommended against granting tenure to Professor Nagesh Rao, but the Provost and Board of Trustees disregarded that recommendation and granted tenure, after considerable internal and external protest.  My interest in the story is focused on one small comment, where Professor Rao describes the reasons he thinks the faculty panel recommended against tenure.  In addition to mentioning that his subfield may be subject to some bias, he says that one of the principal places where he is published, an open access online journal called “Postcolonial Text,” may have been “arbitrarily devalued” due to its business model.

For the record, “Postcolonial Text” is a peer-reviewed journal published on the Open Journal Systems (OJS) platform.  I recently published (shameless plug alert) an article on open access for theological studies in an OJS journal, and can testify that the peer-review process — which is determined by the editors, not by the publication medium — was as rigorous as any traditional publication I have experienced.  We have reached the point, I think, where the notion that online or open access publication is somehow not as scholarly as print, toll-access publication is no longer a reflection on open access itself, but an indication that some academics have simply failed to pay attention to radical changes in the environment for scholarship.  If what Professor Rao says is true, it shows an embarrassing ignorance on the part of the panel that evaluated him.

So what are the “elements” of open access I want everyone who is responsible for evaluating scholarship to be able to recite, even when awakened in the dead of the night?  They are as follows:

1. Online open access journals are as likely to be peer-reviewed as are traditional print publications.  The medium cannot be used as a surrogate for investigation into the editorial practices and personnel of a given forum.
2. Open access based on an author fee is not a form of vanity publishing, and these arrangements, which usually involve traditional journals with an open access option added on, are peer-reviewed in precisely the same way as other publications in the same journal.  They should be weighted in an evaluation process in exactly the same way.
3. Many, perhaps most, works which an author self-archives in an institutional or other repository are also published in peer-reviewed forums.  P&T committees should not dismiss works just because they can be found in an open access repository, and authors should be responsible for ensuring that sufficient metadata accompanies the article to tell anyone who finds it about its peer-review and publication history.

That pesky checklist

The recent flurry of activity in the copyright infringement lawsuit brought by publishers against Georgia State University has focused attention – mine, at least – on the “Fair Use Checklist” that has been adopted in quite a number of college and university copyright policies.  As part of the mini-controversy over the naming of Dr. Kenneth Crews from Columbia University as an expert witness for the trial, the plaintiffs have objected that Dr. Crews, as a co-author of the checklist that is part of GSU’s new policy (see a previous post on this topic here), cannot be an impartial witness.  In one sense this seems an odd objection, since experts are hired by each side in a lawsuit precisely because they favor the position taken by the party that hires them, but it also offers a chance to reflect on the use and misuse of the fair use checklist and to begin to explain publishers’ ambivalent attitude toward it.

There are two obvious problems with the checklist, it seems to me.  First, it can encourage a falsely mechanical view of fair use, where a “score” of seven pro versus six con, for example, means something is definitely fair use, while a one-digit reversal means it is not.  That, of course, is not how fair use really works, and no score card can actually predict the results of a judicial evaluation of the fair use factors.  Second, the checklist would be pretty easy to manipulate so that it tends toward the result someone is seeking.  There has been some discussion, for example, about whether or not there needs to be an equal number of check boxes on each side (favoring fair use v. disfavoring fair use) in order for the checklist itself to be fair.  Although this seems plausible, it is important to remember that courts have not necessarily articulated an equal number of circumstances to be considered on each side of the argument, and the checklist seeks to guide its user through the considerations that are actually in play, not some artificial list created without regard to case law for the sake of balance.

Against these two problems, both of which can be quite real, there are also a couple of sound reasons for using the checklist.  First, the very mechanical nature that makes it an imperfect tool also makes it one that staff and faculty can use quickly and without an entire course in copyright law.  These are the major groups that need to make fair use decisions day in and day out; the checklist is a way to at least be sure that they think about all of the factors that are relevant.  There are many people on college campuses who seem to believe that any educational use is a fair use, and the checklist helps counter that simplistic belief and reminds all of its users of the full range of necessary considerations.  Second, the checklist provides documentary evidence that a full fair use analysis was undertaken.  Since part of the “remedies” section of the Copyright Act gives college and university employees partial protection from damages for infringement when they make a good faith fair use decision, even if they turn out to be wrong, evidence of detailed analysis helps protect the institution from potential liability.

These two arguments in support of using a checklist may help explain the ambivalence that publishers have shown toward its use.  The Association of American Publishers has announced support for several university policies that include the checklist, including Cornell’s and Syracuse’s, but it has lately seemed more hostile toward it.  It is easy to see why, really.  On the one hand, it is in publishers’ interest to have university employees get beyond a simplistic view of fair use, which is usually too generous, and look more closely at the full range of considerations that need to be taken into account (this explains, I think, the use of a version of the checklist by the Copyright Clearance Center as well).  On the other hand, that deeper consideration will, itself, make universities less attractive targets for litigation, which seems to be the chosen weapon in the battle to narrow educational fair use.

I have to admit that I too feel a good deal of ambivalence toward the checklist, albeit for somewhat different reasons.  I would like every staff and faculty member who must make fair use decisions to have a complete and nuanced view of the doctrine they are applying.  But I recognize how impossible that is.  Until our campuses are populated entirely by IP lawyers (may that day never come!), I will continue to believe that the fair use checklist is a highly imperfect, but even more highly necessary, tool for navigating the treacherous waters of contemporary fair use.