Of groundhogs and sequoias

Lately my life has had a certain resemblance to that of Bill Murray in the movie “Groundhog Day.”  Like Murray, I seem to be repeating the same pattern in my daily work life over and over.

The basic pattern is this.  I am asked, often with a colleague or two, to meet with a faculty member or group of faculty members.  Sometimes this is at my home institution, and sometimes it takes place on a campus I am visiting.  Wherever they happen, the conversations follow predictable lines.  Yes, we agree, the current system for publishing scholarly articles, dominated by a small handful of commercial giants, is inequitable for authors and does not serve the best interests of scholarship.  Yes, open access offers many benefits for authors, institutions and society.  From there we usually begin to detail the various ways that open access can be accomplished, including the challenges and advantages associated with each model.  We always have the sustainability conversation, in which I try to convey the sense that we are involved in lots of experiments right now but the one thing that seems pretty clear is that the traditional model of scholarly publishing is itself not sustainable (which most folks realize).

Often the faculty authors and editors with whom we talk have specific horror stories to tell, specific ideas about how to get scholarly publishing on a better track, and specific worries about how the transition will be made.

In spite of the repetition, I enjoy these conversations. I learn a lot from hearing about the particular experiences of authors and editors, and about their notions of what a better system would look like.

There is another, more important reason that I do not resent having to have these discussions over and over again.  I constantly remind myself that the ideas about publishing and open access are beginning to filter down into our faculties and they are beginning to turn their attention to how to change the system.  This is a remarkable development, and it is a reminder that the 11,447 scholars who have signed the Cost of Knowledge pledge to boycott Elsevier (as of this writing) are really just the tip of an expanding iceberg.  Many others have not signed that pledge, which is often mistakenly assumed to be just for mathematicians, but have become more aware of the problem and, more importantly, ready to seek alternatives, because of that public campaign.

I think we have reached a point where we are no longer having to sell the idea of open access.  There is widespread acceptance that that is the way that all or most scholarship will be distributed in the near future.  The discussions we are having now focus on specific advantages of OA, like altmetrics, the mechanics of the transition, and the ways in which costs can be managed.

One specific question that arises in every conversation is how the promotion and tenure process will have to change as open access becomes the rule rather than the exception.  Part of the answer is to point out that several forms of open access are entirely compatible with the traditional evaluation techniques in P&T processes.  But as digital scholarship becomes the norm for many researchers, there is a growing awareness that P&T is going to have to change to take account of new forms of scholarship.  It is not open access per se that will drive this change in P&T, but rather these new approaches to scholarship for which openness is an added benefit.

In this context I was delighted to see the recently released “Guidelines for Evaluating Work in the Digital Humanities and Digital Media” from the Modern Language Association (there is a story about the guidelines here).  To be perfectly honest, there is little in the Guidelines themselves that is groundbreaking; they are commonsense suggestions about how scholarship should be evaluated, with some really good, specific attention to uniquely online aspects.

What is important here is not so much what the Guidelines say as who is saying it.  It is very important that the MLA, one of the oldest and largest scholarly societies in the U.S., is taking notice of the changes that are happening in scholarly communications.  As with the faculty open access conversations, this is evidence that change is penetrating the academy broadly and deeply.  The revolution in scholarly communications will not, in the end, be accomplished by librarians; it will be accomplished by scholars, authors and their scholarly societies.  That those groups are beginning to notice the need for change and to engage in the debates about how to accomplish it is a significant step forward.

I say “revolution” with tongue in cheek here.  Perhaps some of us once expected a rapid conversion, a flipped switch that would change the scholarly publishing world to open access, but that is not going to happen.  Our world will be changed through many conversations, lots of experiments (some of which will not succeed), and the growing efforts of scholars, universities and societies to bring about change.  I recently talked with a colleague who expressed some doubt whether a career in academic librarianship really made a difference, and I assured her that, in my opinion, we need to see ourselves as sequoia farmers.  We make small contributions and sometimes see very little growth.  But over time (and, in this case, place) the progress is substantial and the results can be gigantic.  And just occasionally — I think we are in one of those moments — we get to witness a growth spurt.

A safe harbor, not an anchor

Whenever Jonathan Band writes a “Friend of the Court” brief on behalf of the Library Copyright Alliance, it is sure to be worth careful reading.  One not only learns a lot about the particular case from Jonathan’s filings, but also a good deal about the legal and social place of libraries in the U.S.

Jonathan’s most recent brief amici curiae was filed ten days ago in the case brought by the Authors Guild against the Hathi Trust and five of its university partners. In some ways it is an unlikely case in which to seek any enlightenment, since the posture and the legal theories advanced by the plaintiffs are odd, to say the least.  While it is hard to see this complaint going very far, the consequences if it did, and especially if the recent motion for partial judgment on the pleadings filed by the Authors Guild garnered any credence from a court, would be catastrophic for libraries.  Fortunately, Jonathan’s brief in response to that motion is smart and, I think, devastating.  And, as usual, it tells libraries some important things about themselves.

The motion, which I discussed here several weeks ago, argues that the only exception that libraries can rely on in the Copyright Act is section 108, the specific exception that authorizes some preservation and interlibrary loan activities.  It explicitly claims that fair use is unavailable to libraries, whose rights, it asserts, are entirely circumscribed by section 108.

In short, the AG would transform a safe harbor included in the copyright law to promote certain library services into an anchor that would restrain libraries from performing many of their day-to-day activities.  Or, as Jonathan puts it, “They [the Authors Guild] seek to transform an exception intended to benefit libraries into a regulation that restricts libraries.”

Jon goes on to list many of the library activities that the public depends upon that would be of doubtful legality if the Authors Guild’s argument was taken seriously, ranging from ordinary, daily lending of materials to digital exhibits.  One of his most effective arguments is based on the many portions of the Library of Congress’ American Memory project that explicitly rely on fair use.  As the brief says, under the plaintiffs’ theory, the Library of Congress, in which the Copyright Office itself resides, would be “a serial copyright infringer.”

The absurd results of this radical and insupportable theory advanced by the Authors Guild are balanced by arguments both that the plain language of the law (and its legislative history) support the obvious proposition that fair use is available for libraries, and that, in fact, section 108 would permit the orphan works project that Hathi proposed and the Authors Guild seeks to prevent.

So the brief effectively counters the bizarre theory advanced by the AG.  But it also implicitly tells us two things about where libraries stand today that are worth noting.

First, it reminds us that libraries are always adapting to the changing needs of their patrons, many of which today are driven by rapid advancements in technology.  The ways people encounter culture shift with alarming regularity and libraries must stay abreast of these shifts.  Fair use, which has existed in U.S. law for over 170 years, has always been a key part of libraries’ ability to respond to patron needs, and Congress recognized the continuing need for libraries to be able to rely on fair use when it drafted the 1976 Copyright Act.  Section 108 is important for libraries, and it still has a role to play in library services.  But fair use is, perhaps, more important in an era of rapid change.  As Jon writes (quoting a 1990 case):

While the specific exceptions provide courts with no discretion, fair use is “’an equitable rule of reason’ which permits courts to avoid rigid application of the copyright statute when, on occasion, it would stifle the very creativity which the law is designed to foster.”

The second, more troubling, reminder from this case and Jon’s filing is that the Authors Guild has shown itself willing to launch an extremely broad and devastating attack on libraries in order to protect some strange fantasy about how they can make more money.  Libraries have always been, and should remain, the best ally of authors who seek to find readers. It is foolish and short-sighted of the Authors Guild to turn on libraries, and to advance a theory that would cripple them, without apparently realizing how much harm those actions could do to authors.

In a previous debate about this case, a commentator wrote “I don’t care about readers, I want buyers.”  It seems this attitude is assumed by the leaders of the Authors Guild, but it is disturbing for three reasons.

First, this attitude neglects the fact that library readers often become buyers.  Marketing their books is something authors expect from publishers, but if libraries are taken out of that equation, it will grow more difficult.  Especially in the world of online purchasing, the ability to discover and browse a book at a library is one of the best routes to placing an order for that book.

Second, not all authors (or even most, I daresay) share this attitude.  The majority of authors, including all scholarly authors, write, I hope, to be read.  Many are simply not motivated by the lure of profit, since profit is unavailable.  When profit is possible, it can be very important, but it does not supplant, in most authors, I believe, the fundamental desire to communicate, to be read.

Finally, when the desire to make a profit does overcome the desire to express oneself and to be read, the result is inevitably an uninteresting book.  Those who do not think first about readers do not deserve any.

If the diverse members of the Authors Guild do not all share this assumption that readers and buyers are distinguishable, and only the latter are desirable, then they are seriously misrepresented by this filing, and should be grateful, along with the library community, to Jon Band for his work on behalf of readers everywhere.

Keeping it simple, or how to solve the Berne problem, part 2

My first post about the Berkeley orphan works conference focused on what we had done to create the massive orphan works problem we now face, and what mistakes we should avoid in the future as we try to solve it.  Now I get to be a little more positive and discuss some of the suggestions I heard (all of the PowerPoints are now available) for solving the problem that seem quite workable.  The overarching theme, I think, is keep it simple; rely on small legislative changes or solutions that can be implemented at the trial court level, rather than on big ideas.

Perhaps the foundational presentation focusing on a simple approach was from Jennifer Urban, one of the Directors of the Samuelson Law, Technology & Public Policy Clinic, who simply laid out the argument that use of orphan works most often will be fair use.  Her principal innovation in the fair use analysis was that it should begin, in this case at least, with looking at the second fair use factor, the nature of the original work being used.  The second fair use factor is not often asked to do much work, in my opinion, and attentive readers of this blog will know that I have suggested before that more emphasis be put there in regard to academic works.  Professor Urban’s argument about orphan works focused on the second factor for a similar reason — by starting there we could more clearly focus on the incentives for creation of a particular type of work and understand that there is no incentive to be gained for the creator or publisher of a true orphan work by charging a toll for use.  Indeed, Urban moved from the second to the fourth factor, an easy transition in this argument, and pointed out that an orphan work represents a complete “market failure” in which the economic impact factor clearly favors fair use.  So the simplest solution to the orphan works (or Berne) problem is just to recognize that the tools to facilitate beneficial uses of orphans already exist in U.S. law.

A proposal that meshed nicely with this approach was made by Professor Ariel Katz, from the University of Toronto, who suggested that courts could merely “tweak” the remedies for infringement to support uses of orphan works by taking into account, at the remedy stage, whether or not a reasonable search for a rights holder was done by the user prior to use.  If a court found that such a search was done, and no rights holder found, then damages could be waived or reduced to a reasonable fee for the use.  This suggestion can be seen as complementary to Professor Urban’s, since a fair use argument, if successful, could avoid a finding of infringement and, even if the judge did not accept fair use, a second step, adjusting the remedies, could still avoid the inefficiency of penalizing a beneficial use of an orphan work.  It would also provide an incentive for rights holders to take steps to be findable, which would protect their potential remedies and increase the likelihood of an efficient transaction over the proposed use. Taken together, these two proposals require no legislation at all and could significantly improve the efficiency of the system by which culture replicates itself and develops.

Perhaps the most enlightening part of Professor Katz’s talk, however, was his analysis of the thinking that stands in the way of an elegant solution to the problem of orphan works.  He spoke about how the “permission first” mentality has become a kind of “dogma” which blinds many to the possibility of simple and sensible solutions.  By focusing on the idea that every use must have permission, even if that permission comes from a licensing organization and does not benefit the work’s creator, we treat reuse of culture as a kind of “sin” and set up a licensing model that parallels the medieval system of indulgences. “The coin in the coffer rings and an orphan work from idle purgatory springs!” This approach is inappropriate and deeply inefficient when we speak of cultural creativity, which is inevitably cumulative and can be seriously undermined by a “permission first” attitude.

A different analogy was drawn by Professor Lydia Loren of Lewis & Clark Law School, who preferred the term “hostage work” to the language of orphans.  She focused on the parallel with real property law and the doctrines on abandonment and waste.  As she said, there is a public interest that the law has long recognized in preventing what is called “permissive waste,” whereby a property owner allows the property to fall into disuse and become unproductive.  Such property, whether real or intellectual, is then held hostage to an exclusive right of ownership that is not being responsibly exercised.  In real property, we have doctrines like adverse possession and abandonment that will simply take that ownership right away when the waste is harmful to society.

Regarding “hostage” works of intellectual property, Professor Loren suggested that the incentive for creation had clearly already worked, since the work had come into being, but that the incentive to disseminate that work — to share it for the cultural benefit of all — had failed.  In light of that failure, “waste” should be prevented in a way that benefits the public.  Her fascinating suggestion was that the user of such a work should be protected from liability for infringement (if a rights holder arises), but only if the user has made a copy of the work available in openly accessible form.  Thus the public interest is served, by the accessible copy, yet the user can still make whatever use she wants, even a commercial one.  A rights holder that arises later might be able to stop that use, but the public has still benefited and the waste caused by a period of abandonment has been prevented.

These proposals gave me some reason to believe that we could make progress on the orphan works problem without needing large legislative changes, which almost never make copyright law better, and without actually shirking our commitments under the Berne Convention and TRIPS Agreement.  However unfortunate some of the effects of implementing those treaties in the U.S. have been, solutions to the worst damage done are still at hand.

How to solve the Berne Problem, part 1

The conference on Orphan Works & Mass Digitization, hosted by the Law School at the University of California, Berkeley last week, was exciting — at least to the 230 copyright geeks like me who attended — and filled with well-researched papers.  The three White Papers that were prepared by the Samuelson Law, Technology & Public Policy Clinic (written by former Duke Scholarly Communications Intern David Hansen) are well worth reading.  In this first post I want to look at a basic terminological issue and then focus on two general observations from the event.  In a subsequent post I will describe three specific suggestions made by conference speakers for solving the orphan works problem.

It is the phrase “orphan works problem” itself that sparked terminological debate.  Several speakers were uncomfortable with that expression, and an alternative – “hostage works” – seemed to gain some traction among participants.  But the suggestion that really got to the root of the issue was that we should refer to the proliferation of works still in copyright protection but for which no rights holder can be located as the “Berne problem.”  This is appropriate because the problem was so severely exacerbated by U.S. adherence to the Berne Convention in 1988 and the legislative changes that that decision required.  Four steps contributed significantly to the problematic situation we are currently in:

  1. The move to automatic protection, which often makes people into rights holders against their will and without their knowledge,
  2. Copyright term extension, which inevitably makes heirs or successors-in-interest into rights holders, again often unawares,
  3. The end of the renewal requirement, so that rights holders no longer have a chance to indicate their continued interest in a work; thus no “abandoned” works move any longer into the public domain,
  4. The end of the registration requirement, which now makes locating rights holders so much more difficult.

The combined effect of these changes to U.S. copyright law, all accomplished between 1978 and 1989, has been to create a huge class of orphan works.  So it is not surprising that many of the suggestions for how to deal with the problem pushed toward reversing or mitigating some of these changes.  Registries, for example, were a common theme; under these various proposals to create registries to assist in finding copyright holders for different types of works, we would simply be recreating (hopefully more efficiently) the registration database of the Copyright Office, which once could claim mandated comprehensiveness but unfortunately can do so no longer.

The first observation from the conference is that nearly all of the speakers (except the industry representatives and Register of Copyrights Maria Pallante) seemed to think that legislation to solve orphan works is probably impossible and likely a bad idea.  The political climate in Washington makes attention to copyright issues unpalatable, and the proposal we saw several years ago was unattractive to many of the speakers.  Instead of newly created legislative schemes, potentially with burdensome and impractical requirements, many of the speakers looked for small changes that could be accomplished either in common law – by action of the courts, that is – or by simple legislative amendments to portions of the law as it currently stands.  In our next post we will examine some of these more modest suggestions.

Closely related to this distrust of the legislative process as a path for solving orphan works was a clear distaste, again expressed by multiple speakers, for solutions that would create a regime of extended collective licensing (ECL).  Such ECL programs would, of course, require a complex legislative enactment, and examples where such programs are in place were widely considered failures on a practical level; a professor from Canada, which has such a plan, was especially clear that this is not a workable option.  On the level of policy, an ECL scheme, where potential users of orphan works apply to some government-authorized board for permission and pay a fee, was denounced as economically inefficient.  The purpose of legislative licensing schemes is to facilitate the transactions so that users can find owners and owners benefit from the uses.  With orphan works, of course, there is no owner to be found so no transaction like this is actually facilitated.  Instead, the fee that would be paid to some collective organization would amount simply to a tax on use, with no economic benefit or incentive for creators at all.  One speaker referred to this sort of approach as similar to the medieval practice of selling indulgences, based on a dogmatic conviction that all unauthorized uses are a form of “sin.”  Any program based on such a foundation, rather than on solid economics, incentives for creation and cultural development, would be bad policy from the copyright point of view.

First sale goes to the Supreme Court, again

With the Orphan Works conference taking place last week, there is an awful lot to blog about.  I will address the conference in the next couple of postings (unless there is a GSU decision), but for now I want to look at another round in the John Wiley v. Kirtsaeng case.

Lest we have forgotten, Kirtsaeng was the latest in a series of cases asking to what degree the doctrine of first sale, which says that the purchaser of a lawful copy of a work may further distribute that copy as she pleases, applies to copies of works that were manufactured and sold abroad.  In 2010 the Supreme Court looked at this issue in Costco v. Omega.  In that case, Justice Kagan recused herself because she had worked on the case as Solicitor General for the Obama administration, urging a ruling in support of Omega watches over Costco, which was importing watches purchased cheaply overseas and underselling the MSRP in the US.  The Ninth Circuit had ruled against Costco, holding that the US doctrine of first sale did not apply when an item was made and purchased overseas.  The Supreme Court, without now-Justice Kagan, split 4-4, a vote which left the Ninth Circuit’s ruling in place but did not make it binding precedent for the rest of the country.

Then, in 2011, the Second Circuit upheld a lower court in ruling that Mr. Kirtsaeng was an infringer for reselling copyrighted textbooks that his family members bought in Thailand and sent to him in the US, where he could get a higher price for them than had been paid.  The Second Circuit handed down a sweeping ruling, which I criticized here, in which the two-judge majority went further than Costco and maintained that first sale would not apply even if the work that had been manufactured abroad was sold in the US with the authorization of the rights holder.  As I said in my earlier post, this created a situation where the copyright holder could knowingly and deliberately take advantage of all the protections of US law without being subject to one of its most important limitations.

Now the Supreme Court has agreed to review the case, and many people hope that it will correct the overly broad ruling made by the Second Circuit.

One of the things that often leads the Supreme Court to agree to hear a case is a split amongst the Circuit Courts of Appeal on a particular point of law.  Here such a split is very clear.  The Second Circuit holds that a foreign-made work can never be resold in the US by any purchaser without the consent of the rights holder.  The Ninth Circuit, in the Costco case, says that such a work may be resold in the US, but only after an authorized “first sale” in the US.  And the Third Circuit believes that a US resale is alright anytime the original sale was authorized by the rights holder, even if that sale occurred outside the US (so that both the resales in Costco and Kirtsaeng would be legit).  Given Justice Kagan’s position in the Costco case, I would guess, if I had to guess, that the Court would opt for the Ninth Circuit rule, which mitigates the absurd results from Kirtsaeng but still narrows first sale considerably over what the Third Circuit would allow.

I hope that as the Justices consider this case they will recall that, by adhering to the Berne Convention and the WTO’s TRIPS agreement, nearly all countries now extend “national treatment” to the citizens of every other signatory nation.  This means, I believe, that we should read the requirement of “lawfully made” quite broadly.  As long as a work is not pirated — that is, it is made and/or initially sold with authorization of the rights holder — we should recognize that it is entitled to full protection under US law and therefore ought to be subject to all of the limitations of that law.  If rights holders want to practice price discrimination in different countries, they should rely on the cost of exporting to enforce those differentials and accept a certain percentage of “gray market” goods.  But that is not what I expect to happen.

If my expectations rather than my hopes are fulfilled, it will be difficult for libraries to be secure in lending any of the works they purchase abroad, especially film.  And vendors who sell to libraries might have to bear the extra expense of selling through a US outlet, if libraries become fearful of buying abroad.  It is an issue that the library community, which depends for its most fundamental activities on first sale, needs to continue to watch closely.


Seeking a boundary

Is it just me, or do there seem to be a lot of lawsuits filed by publishers in the higher education space recently?  It is increasingly obvious that the disruption caused by the digital environment has led publishers to embrace litigation as a strategy for protecting their business models, and that that strategy cannot be good for the overall well-being of higher ed.

The latest such lawsuit, filed on March 16 in the Southern District of New York, comes from three major textbook publishers – Pearson Education, Cengage Learning, and Macmillan Higher Education – and targets a new Internet-based business called Boundless Learning that purports, according to the complaint, to create textbooks, using open educational resources, that parallel and can replace the high-priced textbooks that each of these publishers offers.  The gist of the complaint is that Boundless copies the selection, arrangement and organization of their textbooks much too closely, and that this copying constitutes copyright infringement.

One should never form conclusions about a lawsuit based only on the complaint, which is inevitably just one side of the story and is always pitched to make the defendant’s behavior look as dastardly as possible.  Nevertheless, it seems safe to conclude that this lawsuit is not as farfetched as some of the ones we have seen recently.  The plaintiffs are really just asking the court to help draw a line that has never been very clear in copyright jurisprudence, the line between idea and expression.  Their description of the situation makes it sound like the placement of that line is obvious, but it probably is not.  What it certainly is, however, is a vital line for higher education to examine and understand.

I occasionally run across an interesting assumption about copyright, which I think is common both in academia and in the wider world.  It is the belief that if you want to reuse an image or a figure without permission, you can simply redraw it and avoid all copyright entanglements.  Like all simple rules in copyright, this one is not true; the standard for infringement is “substantial similarity,” and a redrawn figure or image that is substantially similar to the original could still be found to be infringing.  Far better, I tell those who raise this possibility with me, to use the original figure or picture in a way that supports the assertion of fair use, or to get permission.

From the complaint, it sounds like Boundless Learning is doing something similar to this “redrawing” in hopes of avoiding copyright problems.  The plaintiffs assert that Boundless creates “shadow versions” of their copyrighted textbooks, imitating the arrangement of topics and sub-topics, the depth of coverage for each area, and even the choice of illustrative examples.  I fail to see how a running and a fishing bear illustrate the First and Second Laws of Thermodynamics, but the complaint claims that the Boundless biology text imitates Pearson’s standard text on the subject down to such similar, but not identical, pictures.

So the question is directly posed about where the line between idea and expression is to be fixed.  Is the example of a bear merely an idea, which cannot be protected by copyright, or does it represent a creative and expressive decision by the Pearson authors which is infringed even by a different picture?  To put it another way, how much actual expression must be copied, if any, to infringe on another’s protected expression?  We have seen the “derivative works” right expand a great deal over the years, so that today even characters in a work of fiction are often protected even when the alleged infringing character copies no actual expression from the original (as in the recent Catcher in the Rye sequel case).  Is there a boundary on that right?

Since scholarship is an inevitably cumulative process, in which each new work builds, more or less explicitly, on what has been done before, the boundaries of the derivative works right and the line between protected expression and public domain ideas are very important to understand.  If this case brings us a little more clarity, that would be good.  But the judge will need to be very careful not to develop rules that would inhibit the basic processes of teaching and research.

Finally, I want to point out an oddity I noticed in the complaint that was filed to initiate this case and which will bear watching (no pun intended).  In addition to naming Boundless as a defendant, the complaint lists 10 “Doe” defendants: people who are alleged to be complicit in the claimed infringement but whose names are not known and can only be discovered through court orders.  Naming such John and Jane Does is a common technique in file sharing lawsuits and in some of the more notorious copyright “trolling” cases.  But in this complaint it is quite odd, because we have no real clue what role these people are alleged to have played or why they are being sued.  Are they students who have ordered “shadow” textbooks from Boundless, employees of Boundless, or faculty members who have recommended Boundless to impoverished students? As the case goes forward, we may learn something about how to draw the line between idea and expression, and we may also discover just who the publishers are willing to target in their campaigns to use litigation to protect their markets, if and when the identities of these Doe defendants are revealed.

Dueling metrics?

One of the interesting consequences of the rapid growth of open access to scholarship — a consequence that I, at least, did not see coming — has been some degree of competition, from the perspective of authors, between open access platforms.  In this short article from AALL Spectrum, James Donovan and Carol Watson address a question they have encountered, “Will an institutional repository hurt my SSRN ranking?”  At Duke we have been asked a similar question in regard to RePEc, the repository for economics.  Considering these questions gives us interesting insight into the maturing movement toward open access scholarship.

One way to deal with this concern, which we have undertaken in regard to RePEc, is to work with the disciplinary repository to feed article statistics from the institutional repository into the rankings produced by the disciplinary one.  That method provides a more comprehensive and accurate ranking of the articles.  And such rankings are, of course, a more useful measure of impact than impact factors, which apply to journals but not to individual articles, can ever be.
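
To make the mechanics concrete, here is a minimal sketch of that kind of statistics merge, in Python.  Everything in it is hypothetical and invented for illustration: the function name, the article identifiers, and the counts.  It does not show RePEc’s actual ingest format or API; it only illustrates the underlying idea, that per-article download counts from the two repositories are summed before ranking.

    # A toy sketch, not RePEc's real interface: merge per-article download
    # counts from an institutional repository (ir_counts) and a disciplinary
    # repository (dr_counts), then rank articles by the combined total.
    from collections import Counter

    def merged_ranking(ir_counts, dr_counts):
        """Sum download counts per article ID across both sources and
        return the articles ordered from most to least downloaded."""
        totals = Counter(ir_counts)
        totals.update(dr_counts)
        return totals.most_common()

    # Hypothetical numbers, for illustration only.
    ir = {"article-1": 120, "article-2": 45}
    dr = {"article-1": 300, "article-3": 80}
    print(merged_ranking(ir, dr))
    # [('article-1', 420), ('article-3', 80), ('article-2', 45)]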

I do not know if it is possible to connect institutional statistics to SSRN or not, but Donovan and Watson describe a different approach to addressing this question.  They begin by pointing out an assumption behind the question, that article readership is a zero-sum proposition, that there is a defined number of readers for any given scholarly article, so that new means of access will simply divide up that readership, not generate new “eyeballs on the article.”  This is the same assumption made by publishers who insist that self-archiving, or even national funder policies, imperil their revenue, and by those who argue that libraries will never spend subscription dollars on works that will be made available freely.  Donovan and Watson begin the process of showing that this assumption is false.

In their article the authors report two different research methods they employed to study the question of whether one repository siphons readers away from another repository, or whether, instead, readership grows overall when an article is available from multiple OA sources.  Both methods lead them to the same conclusion: multiple outlets produce additional readers, so the sensible course for an author who wants her work to have maximum impact is, as they say, to “use both!”  Far from harming the ranking in one database, availability of an article in a second repository appears to increase substantially the overall number of downloads.

I like this article for two particular reasons.  The first is that it attempts to find solid data on which to base the discussion.  Instead of mere assertions of “obvious” truths in the open access debate, many of which are based on that zero-sum assumption, Donovan and Watson attempt to move the discussion to real evidence that actually places that assumption in some doubt.  As we continue to explore business models and look for dissemination options that more fully serve the needs of scholarly authors, basing our discussions on real data would be a refreshing trend.

The second reason I like this article is that it appears to offer empirical evidence, beyond the many anecdotes that we have collected over the years, of the role of “unexpected readers” in increasing the reach of scholarly research.  The zero-sum assumption gives rise to the presumption that the current system works in an acceptable way merely because the people I expect to see my work can see it.  But open access offers the possibility of discovering a myriad of readers who are not expected, either by publisher or author.  If we take seriously the idea that academic research is undertaken, in the end, for the good of society, these are precisely the readers we would want to see find our scholarship.  And to rule them out on the basis of an unproven assumption would be to sell ourselves short as scholars.

Momentum

I am leaving later today to fly to Bahrain, where I will be part of an international panel discussing open access at the annual meeting of the Special Libraries Association, Arab Gulf Chapter.  Libraries in the region, as I understand it, have not yet taken significant steps toward open access to scholarship, but they are eager to learn.  I think the spread of interest in the whys and hows of open access all over the world indicates how great the momentum behind this movement is.  Elsevier itself recently called open access the wave of the future, even as it continues to try to stem that tide.

Public access, of course, is a subset of open access, referring specifically to access provided for the public to the results of research that is supported by significant government investments.  The arguments for public access are so obvious that it may be the easiest form of open access to defend and to spread.  Taxpayers deserve access to the research products they have paid for; even the sponsors of the ill-fated Research Works Act acknowledged this as they stepped away from the foolish proposal.  And public access increases the accountability of governments for how they spend the money entrusted to them by their citizens.  Many other countries are way ahead of the US in providing this accountability.

All of this makes the recent statement by the Association of American Publishers opposing open access mandates from government funders seem all the more ill-advised.  In some ways the language of the statement seems more carefully crafted and restrained, but close examination still proves that the arguments put forward are fundamentally misleading.

My favorite howler is this: “FRPAA [the current legislation that would expand public access mandates in the US] is little more than an attempt at intellectual eminent domain, but without fair compensation to authors and publishers,” said Tom Allen, President and CEO, AAP.  Really?  It is hard to believe that the CEO said this, since it seems like a statement calculated to show how weak the publishers’ position really is.

FRPAA, of course, is nothing like eminent domain, for the simple reason that the government has invested in the creation of the intellectual property at issue in the first place.  Indeed, what the publishers want is a continuation of the “land grab” from which they have long benefited; they want property that is really a public good — created with public funds on many levels — turned over to them and reserved for private gain.  And do they really want to raise the issue of fair compensation for authors?  Scholarly authors are often paid with public funds and have their research supported with public funds.  Yet publishers take that work without any compensation to the authors.  Only when they pay for the products they subsequently sell can publishers ask about fairness.

Later in the statement, the AAP provides a list of the ways in which they invest in the products of scholarship — “validation, digital enhancement, production, interoperability and distribution.”  It is true that these are services that publishers provide, more or less well; interoperability, for example, is better served by open access than by traditional publication.  But none of these services creates a proprietary interest in the works in question, and they are all services that authors should be free to evaluate.  If authors (who are the sole owners of copyright until they decide otherwise) believe that these services are not worth the cost of surrendering their rights, or that they can obtain them better through other forms of publishing, they should be free to do so.  The overwhelming support for public access by the research community suggests that they do believe that.

Finally, the AAP statement complains about the six month embargo that would be the maximum allowed under FRPAA.  I have heard several versions of this complaint, and suggestions that for some disciplines the embargo window should be much longer, even as much as five years.  To those concerns, I would respond that it is important not to confuse the period of time during which a work is useful in a particular discipline with the period of time during which it is profitable.  In biomedical sciences, research dates quickly.  But in other fields, and especially in the humanities, the usefulness of an article can be quite long-lived.  But these embargo windows are not intended to define the term of usability; they are merely there to protect publishers’ ability to profit from the article.  And the window of profitability is certainly much shorter in these fields than the window of usability; subscription sales are exhausted within a very short time after publication, even if scholars continue to consult a particular article for many years.
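
To illustrate that distinction with a toy calculation (the exponential-decay curve here is my own assumption, offered purely for illustration, not publisher data): if subscription revenue for an article decays exponentially after publication, the share of lifetime revenue a six month embargo protects depends entirely on how fast the decay runs.

    # A toy model of the profitability window, in Python.  The exponential
    # decay of revenue is an assumption made for illustration; only real
    # sales data could show the true shape of the curve.
    import math

    def revenue_share_within(months, half_life_months):
        """Fraction of total lifetime revenue earned by `months`, assuming
        revenue decays exponentially with the given half-life."""
        decay_rate = math.log(2) / half_life_months
        return 1 - math.exp(-decay_rate * months)

    # With a two-month half-life, the first six months capture 87.5% of
    # lifetime revenue; even with a twelve-month half-life, about 29%.
    for half_life_months in (2, 12):
        print(half_life_months, round(revenue_share_within(6, half_life_months), 3))

On any such curve, the faster revenue actually falls, the less a six month embargo can cost a publisher; which curve applies is exactly the empirical question publishers should answer.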

I am reasonably certain that the six month embargo included in FRPAA as introduced will be vigorously negotiated, and perhaps variable windows will be introduced.  That’s fine, but I hope the negotiations will be based on real data and not unfounded and incredible assertions.  Publishers need to show us the curve that illustrates their profits.  Do they really continue to make significant revenue after six months?  After one year?  Everything I have seen suggests that six months is very reasonable, when viewed as a window for profitability.  If I am wrong, publishers need to show me that data.  Since I and my colleagues are the ones who create their products, they owe us that much, at least.

A masterpiece of misdirection

On February 28, the Authors Guild filed a memorandum in support of its “motion for partial judgment on the pleadings” in its lawsuit against the Hathi Trust and five of its partner libraries, asking the judge to rule that the activities the AG has complained about – the mass digitization of books and the proposed orphan works project — are not protected by the specific library exceptions found in section 108 of the copyright law and that Hathi Trust cannot even assert, much less successfully rely upon, fair use, which is section 107 of that law.  There is a news report on the motion from Publishers Weekly here.

The memorandum strikes me as a masterpiece of misdirection, trying to make plausible arguments that do not quite fit the actual case in front of the judge.  The problem is that if the judge accepts these arguments, it could be devastating for libraries.  At its heart, the motion argues that libraries do not have any fair use rights, since their entire set of privileges under the copyright act is encompassed by section 108.  I think there are lots of reasons to reject this logic, which runs counter to the express language that Congress used in section 108 itself, which says (in subsection (f)(4)) that “Nothing in this section… in any way affects the right of fair use.”

One way to see the flaw in the AG’s argument is to look at the odd results that arise if it is accepted.  For one thing, libraries would thereby become disadvantaged actors under the copyright act.  Other institutions and persons would still have the broad and flexible opportunities under fair use, but libraries would not.  Indeed, in the other lawsuit about mass digitization in which the Authors Guild is a plaintiff, against Google itself, Google will be able to argue fair use to justify its mass digitization, if the case gets that far.  But the plaintiffs argue that libraries cannot assert the same defense in regard to the same activity, simply because they are libraries, and thus disadvantaged by the existence of an exception that was supposed to benefit them.

By the way, this logic runs counter to the legislative history of the codification of fair use.  Congress explicitly stated that it did not intend to change or harden the judicial application of fair use; they wanted judges to continue to be free to apply the factors to specific circumstances to make equitable decisions.  But if placing section 107 in the same law as all the specific exceptions limits its application to situations not covered by those exceptions, that judicial freedom is undermined and the clear intent of Congress frustrated.

There are many other specific exceptions in the copyright law, and viewing them as limits on fair use again shows the absurdity of the argument.  There is an exception that allows photographers to take pictures of publicly visible architectural works, even when those works are protected by copyright.  If that exception is taken as the entire expression of the rights of photographers, then they could not argue fair use when taking photographs of other publicly visible copyrighted works, like a piece of public sculpture. That result is absurd, of course, and was implicitly rejected by cases allowing such photography.  There is also an exception that allows public performance of music in the context of religious worship, but its existence does not mean that someone who sings a song in public outside of a worship service would not be able to argue fair use.

It is interesting to note that a consultation on copyright in Ireland, which issued this consultation paper earlier in the year, suggests the value of a fair use provision for the Irish copyright law and makes the point that each of the specific exceptions can actually be seen as examples of fair use (see page 120).  If each specific exception is read as an instantiation of the fair use analysis for a particular situation, the logic of ensuring that fair use always remains an option (as 108(f)(4) tries to do) is particularly clear. (Hat tip to David Hansen for bringing this paper to my attention)

One more potential absurdity – If libraries have no fair use rights, would it automatically be infringement for a library to capture and print a single still image from a film for a student to include in a paper?  Section 108 excludes film from all but its preservation sections, so making a copy for a patron from a film would not be permitted under the 108 subsections on copying for users.  Yet this activity would seem like an obvious fair use if anyone else did it.  Why, we should ask, would libraries (and their users) be penalized simply for being libraries?

One of the difficulties here is that fair use is sometimes equated to “fair dealing” or “fair practice” in international copyright law.  Those terms tend to be blanket concepts that incorporate but do not expand upon the specific exceptions within each national law.  Fair use is different.  It is a separate and free-standing exception within the US scheme. We need to remember that the US law was not written within the context of international copyright harmonization and does not conform, in any number of ways, to the usual pattern of copyright in other countries. That fair use is a separate exception and not simply a blanket term or gap filler is proved, I think, by the specific reference inside section 108 to fair use as an alternative option for libraries.  It is also a fact recognized and pointed out by the Irish paper on copyright linked above, which suggests that fair use can be seen as “a doctrine that defines the ambit of copyrightability and thus not an infringement at all.”  As a boundary definition on the exclusive rights and the basic analysis underlying all of the specific copyright exceptions, fair use would, again, always be an appropriate defense that courts should never rule out as a potential argument.

The most creative part of the memorandum supporting the plaintiff’s motion is its attempt to convince the judge to ignore section 108(f)(4), the fair use “savings clause” quoted above.  That language seems pretty categorical, and the phrase “right of fair use” certainly suggests that fair use is a positive right that does not simply fall away when more specific exceptions to copyright are enumerated.  The plaintiffs ask the judge to ignore this phrase with two arguments.

First, they refer to the general principle that “the specific governs the general.”  Because fair use is, allegedly, general, and section 108 is specific, it is asserted that 108 preempts the application of fair use in libraries.  But as we have just seen, fair use is a positive right that Congress acknowledged by inserting section 107 into the law but did not intend to limit.  So it is a different sort of thing; it is not merely a general principle that can be set aside by specific rules, but a distinct exception — or even a boundary definition — intended to do its own work within the framework of the law.

Second, the plaintiffs rely on a different case, Corley v. Universal Studios, in which a judge dismissed a similar fair use “savings clause” in the Digital Millennium Copyright Act.  The problem is that that dismissal, which has not been followed by other courts even when interpreting the DMCA, is based on an entirely different reason than the one being asserted by the Authors Guild.  The judge in Corley held that the activity in question, circumventing technological measures intended to prevent copying, was simply not the sort of thing that fair use was intended to apply to.  He did not reject the “savings clause,” he merely found that the activity before him was outside its scope.  In the Hathi case, of course, the activities in question are directly within the ambit of fair use, and the judge should respect the clear intention that Congress expressed as powerfully as it could in 108(f)(4).  Whatever the ultimate decision about fair use may be, the defendants must be allowed to argue it if the structure of US copyright law is not to be grossly undermined.

An extraordinary week

It has been an extraordinary week for open access advocates, and it is only Wednesday!  For those keeping score, here is a recap of events, along with some commentary.

On Monday, Elsevier issued a press release withdrawing its support for the Research Works Act.  The RWA, of course, was a bill proposed in the US Congress that would have rolled back the National Institutes of Health public access mandate and prohibited any other research funding agencies from adopting similar policies that would give taxpayers unfettered access to the research for which they have paid.

Within hours of Elsevier’s press release, the sponsors of the RWA in the House of Representatives announced that they would not pursue passage of the bill.  It seems it was Elsevier’s legislation from the start, so the publishing giant got to call the shots for Congress.  The announcement from Representatives Issa and Maloney contained the first extraordinary statement of the day, when they said that “The American people deserve to have access to the research for which they have paid.”  This, of course, is what they had tried to prevent, and we must read the statement with a suspicious eye.  But on its face, it seems to acknowledge the fundamental justice behind public access policies.

When the sponsors of the RWA folded their tents so promptly, I think we were left wondering if its introduction was simply a strategic move to stake out legislative ground, or a trial balloon by Elsevier to gauge support for open access.  If strategy it was, it seems to have failed spectacularly.

Elsevier followed up its withdrawal of support for the RWA with an open letter to the mathematics community.  These scholars, remember, are at the core of the boycott directed at Elsevier that has been gaining momentum for over a month and is still growing.  That letter also contained some extraordinary statements; in it the publisher seems to promise to lower some of its prices (although they base this promise on an arbitrary pricing standard that they have created) and to acknowledge that the bundling of journals into high-priced and inflexible packages (which they call “large discounted agreements”) is a problem.  I wonder if they mean this, or if it is simply more strategy?

The letter to the mathematicians contains an appeal for collaboration between Elsevier and the scholarly community.  In that vein, I respectfully offer three paths that mathematicians might pursue regarding Elsevier in the coming months:

  1. Talk with them, by all means, but don’t believe everything you hear.  Two principles are important to keep in mind.  First, their primary commitment is to returning a profit to their shareholders, not to the progress of your work or your discipline.  Second, they have no product to sell if you do not give them your intellectual property for free, so you have a lot of power here.  In a New York Times article published yesterday about the open access debate, scholars who support open access are called dishonest for continuing to submit their works to traditional journals; the boycott you have started reverses that alleged dishonesty and gives you considerable influence.  Don’t waste it.
  2. Keep exploring alternative publication models.  Even if Elsevier lowers its prices and introduces more flexibility into their bundling, it is hard to see the toll-access model as the path to the future.  For mathematics, where grants are smaller and many scholarly societies depend on subscription revenues, a “flipped” pricing model such as is being explored in physics with the SCOAP3 experiment, might make the most sense.  But in any case, it is important to keep experimenting with new ways to disseminate scholarship, especially more openly.
  3. Whenever you or a colleague/student does publish with Elsevier, look carefully at the publication agreement that is offered and cross out any language that ties your right to self-archive your work to the non-existence of an open access mandate from your institution or funder (you can find a sample agreement with this language here).  This is an outrageous interference with academic freedom, and authors should not tolerate it.  Simply pick up your pen and cross out any language that says you may only post a final manuscript of your work if you and your colleagues have not adopted a policy saying that you must do so.  In this regard, it is worth noting this article by Kristine Fowler from the AMS website analyzing the relative success that mathematicians have had negotiating the terms of their publication agreements with the largest publishers in their discipline.

Meanwhile, all of us – mathematicians, linguists, librarians, anthropologists or whatever — should transfer the energy we put into opposing the Research Works Act toward support for the Federal Research Public Access Act, which was introduced in both Houses of Congress a couple of weeks ago.  The case for FRPAA is made, far better than I could make it, in this essay on “Values and Scholarship” that was published by all 11 provosts of the universities that make up the CIC (Committee on Institutional Cooperation) in last Thursday’s edition of Inside Higher Ed.  Their extraordinary, unified vision for scholarship in the digital age should provide the touchstone by which this discussion moves forward.
