How to COPE

Recently I have had the opportunity to review the first 14 months of Duke’s COPE fund, and it has been an interesting exercise.

COPE, of course, is the Compact for Open Access Publishing Equity, a plan by which academic libraries create funds to help faculty authors pay the article processing charges (APCs) that some open access journals levy in place of subscription fees. Duke established a modest COPE fund in October 2010, and it was re-funded for fiscal year 2011-12.

Now, at the end of November 2011, I find that in 14 months we have reimbursed open access publishing fees for 20 articles written by 18 faculty members. We have six more COPE applications waiting for completion. Assuming that all of them are completed and reimbursed, we will have exhausted the available funding for COPE for the first time.

The authors we have reimbursed are predominantly, but not exclusively, scholars in the biomedical sciences. That group is evenly divided between medical school and University-side scientists, but faculty in diverse fields like environmental studies and evolutionary anthropology have also benefited from the fund. The largest group of articles we helped fund was published in journals from the Public Library of Science, followed by Hindawi, Frontiers, and BioMed Central. All of them are, of course, peer-reviewed publications.

With a relatively new initiative like COPE, it is not entirely clear how success should be measured. Is it more successful to spend the entire COPE fund on a campus, or to have it go unused? The answer, I suppose, would depend on the reasons behind the use or non-use.

At Duke we have been very clear that the purpose of COPE is to provide incentives for new models of scholarly publishing and to support open access. As the fund administrator, I am convinced that there is a good deal more open access publishing among our faculty than I previously expected; these 25+ articles are very much only the tip of the iceberg. And I cannot say for sure that the COPE fund caused an increase in that type of publishing; I am inclined to think faculty are turning to open access because of its numerous benefits, and that COPE funding is not the strongest factor for most of them. I have been told by authors, however, that COPE is an important example of the University “putting its money where its mouth is,” and I am pleased by that perception.

As open access publishing evolves, different funding models are being tried. It is important to recognize that author-side fees may not be the “winner” over time; many OA journals even now do not rely on such fees, although the best-known ones do. Nevertheless, we are clearly seeing an important transitional movement in scholarly publishing, and a COPE fund is one way for an institution both to encourage that transition and to affirm, tangibly, its commitment to open access for scholarship.

I expect that at Duke we will reexamine the policies and procedures we have in place for COPE, as we consider how it is to be re-funded, to see if they actually serve the incentive purposes behind the fund. In that task we will gain significant guidance from this recent blog post by Stuart Shieber of Harvard University’s Office for Scholarly Communication about how funding agencies should assume the task of paying open access article fees. Stuart’s point about funders is important, and I hope that the policies he recommends are widely adopted. But his post is also a cogent and compelling re-assertion of the incentive purposes that COPE is intended to serve and of how different policy decisions relate to the overall goals. As such, it provides a helpful guide for anyone considering a COPE fund or considering how to make such a fund more effective.

The unexpected reader

I have just returned from the Berlin 9 Conference on Open Access, which was held in Washington, D.C., at the lovely conference center facilities of the Howard Hughes Medical Institute. It was a fascinating meeting, and quite different in tone from the one I attended last year in Beijing.

In its opening paragraph, this Chronicle of Higher Education report on the conference captures the fundamental difference.  This year the conference was much more clearly focused on the impact of open access on research; rather than talking about how open access will be accomplished, the discussion assumed that open access is inevitable and instead emphasized the differences that the evolution to open will make.

For the sciences especially, it was clear that openness is rapidly becoming the default, because awareness of its benefits is spreading so widely. This year the partners in the discussion included many working scientists and, significantly, many academic administrators and research funders, who are well placed, and increasingly motivated, to make the transition to open access. The recent decision announced by the National Autonomous University of Mexico to make a decisive transition to open access is testimony to the impact a commitment by administrators can have.

Some of the most compelling discussion in Washington about the impact of openness centered on the idea of unexpected readers. For years researchers have assumed that, especially for highly technical work, all of the people who needed access to their work and could profit from it had it through the subscription databases. This assumption has probably always been incorrect, but the promise of open online access has now demolished it completely. The possibility of unexpected readers, including computers that can make connections and uncover patterns in large collections of works, is now one of the great advantages of OA and one of the primary sources of the expectation for greater innovation.

One very touching story is worth retelling here to make this point. Philip Bourne, a professor at UC San Diego and Editor-in-Chief of the journal PLoS Computational Biology, told of a rather remarkable manuscript that was sent directly to him in his editorial role. He thought it was quite a special work of scholarship, on computer modeling of pandemics, and asked some of his colleagues with expertise in that field for their opinions. They uniformly felt that the article was ground-breaking. Finally, Bourne met directly with the author and, unusually, urged her to submit it to the journal Science. You see, the author was a fifteen-year-old high school student who had done her research as a visitor in university libraries and, for a while, using a “test” login obtained directly from a vendor.

The point here is not the obstacles to access that this young author encountered and overcame.  The point is that she was not at all the person the authors of previous articles on the topic thought they were writing for.  Yet she made a remarkable advance in the field because she was able to read those works in spite of conventional expectations.

By the way, Science selected her article for in-depth review, which is itself a significant accomplishment even for experienced researchers, but ultimately decided not to publish her paper; it will now likely appear in PLoS Computational Biology, as she originally hoped.

In his presentation to the Berlin Conference, law professor Michael Carroll listed five types of readers who should have access to research output, and who do have access when open access becomes the default. His list of such “unanticipated readers” included serendipitous readers, who find an article that is important to them without knowing they were looking for it; under-resourced readers (like the high-school author described above); interdisciplinary readers; international readers; and machine readers (computers that can derive information from a large corpus of research works). By the way, the category of serendipitous readers includes everyone who might find an article through a Google search and would read that work if it is openly available, but will encounter a paywall if it is not.

Open access serves all of these unexpected readers of scholarly works. As Carroll summed up his point, every time we create an open environment, we get unexpected developments and innovations. We have come far enough down this road now that the burden of proof is no longer on open access advocates; it is on those who would claim that the traditional models of publishing and distribution are still workable.

Is the Copyright Office a neutral party?

Recently a friend was asking me about my job title. I was hired primarily to address copyright issues, but my title is “Director of Scholarly Communications.” I am, in fact, involved in lots of other issues encompassed by that broader title, but my friend made the valid point that universities are beginning to divide the copyright function out from digital publishing services and the like. Her question got me thinking, and I concluded that I like my title precisely because it emphasizes the context of my work. My job is not simply to know the copyright law, but to help apply it, and even work to change it, in ways that best serve the needs of scholars and teachers on my campus. I am not hired, I don’t believe, to be neutral; I have a defined constituency, and my title is a constant reminder of that fact.

This conversation came to mind as I read two documents released by the Copyright Office in the past couple of weeks. The constituency of the Copyright Office, presumably, is the public as a whole, and its role ought to be more neutral than mine, seeking the balance between protection and use that has always been at the core of copyright law. In these two documents, however, the Office’s tendency to regard advocacy for anything that increases copyright’s strength and reach as its proper role shows through. The needs and voices of owners in the legacy content industries seem to get extra weight, while the needs of users, who are more diffuse and have fewer dedicated lobbyists, get less attention.

In its brief report on “Priorities and Special Projects of the United States Copyright Office,” the Office details the studies, legislation, and administrative projects it intends to work on for the next two years. Its first three legislative priorities are “Rogue Websites,” “Illegal Streaming,” and “Public Performance Right in Sound Recordings.” Each of these priorities is an endorsement of particular legislative action by Congress; the first clearly endorses the bill variously called the PROTECT IP Act, SOPA, or the E-PARASITE Act. Indeed, each of these priorities seems to be dictated by the current lobbying activities of the entertainment industry, and each is very much a non-neutral endorsement of greater scope for, and stronger enforcement of, copyright protections. There is little sign that the voices arguing that copyright already reaches too far and is over-enforced are being heard.

Other priorities do seem more neutral.  The Copyright Office wants to “continue to provide analysis and support to Congress” on orphan works, and it repeats its intention of making legislative recommendations based on the report of the Section 108 Study Group, which addressed the specific exceptions to copyright for libraries.

This last priority created a rather humorous situation for me last week when I was contacted by a member of the library press seeking information about this “new” report on section 108.  In fact, the report was issued in 2008.  Nothing has come of it in three and a half years, and, even if all of its recommendations were suddenly adopted, it would do little to improve the situation for libraries because the Study Group was unable to find agreement on the most pressing issues.  The Copyright Office does not mention the more comprehensive proposal on library exceptions made by IFLA to the World Intellectual Property Organization.

My real interest lay in the Copyright Office’s desire to do something about orphan works. In addition to the legislative priority listed, the report promised a study document on mass digitization, which would naturally have to address orphan works, and that document was issued a few days ago. Here we get a glimpse of how the Copyright Office plans to address the difficulties posed by orphan works.

The report makes the CO’s preferred approach very clear: “As a practical matter, the issue of orphan works cannot reasonably be divorced from the issue of licensing.” This is an interesting statement, since the last proposal to resolve the issue that the CO nearly got Congress to adopt a few years ago did not rely on licensing; instead, it addressed the issue by reducing the potential damages for using an orphan work after a reasonably diligent attempt to find a rights holder. There clearly are other approaches, but since the Google Books case the appetite of industry lobbyists has been whetted by the possibility of profiting from other people’s (orphaned) works, and the CO has been swept up in the licensing fever.

The report gives a detailed and very helpful discussion of the various types of licenses that could be used, but it never addresses the question that seems most pressing if orphan works are to be licensed — who is going to get paid?  If works are genuinely orphaned, there is no rights holder to pay, so orphan works licensing proposals must decide who is going to sell the licenses and where the money is to go.  Other countries have adopted licensing schemes, and often the licensing is done by the government; in the U.S., however, I think we have to assume that private collective rights organizations (who are given prominent mention in the report) would collect the money and, perhaps after some period of time, keep it.

This report is about more than orphan works, of course; it addresses the broad issue of legal concerns in mass digitization.  I was interested to see how fair use was treated, both in that broader context and in relation to orphan works.  I was disappointed in both regards.

In the general discussion of mass digitization, fair use is summarily dismissed by the Copyright Office. The discussion begins with the assertion that “the large scale scanning and dissemination of entire books is difficult to square with fair use,” which seems to beg the question. The rest of the section reads like a legal brief against Google’s position on fair use in the book scanning case. Nowhere does the report consider what difference it might make for a fair use claim if a project were wholly non-commercial, nor does it consider that fair use might be part of an overall project strategy that included other approaches, such as selective permission-seeking. The report treats fair use as an all-or-nothing proposition and dismisses it as unworkable, without the nuanced consideration it demands.

More troubling is the fact that, having dismissed fair use in the broader context of mass digitization, the Copyright Office never discusses it in the narrower field of orphan works. With orphans, of course, fair use seems like a stronger argument, since there is no market that can be harmed by the use, especially if the use is itself non-commercial. But it seems clear that the CO is committed to creating such a market by pushing a licensing scheme, and it is not willing to discuss any option that might undermine its predetermined conclusion.