Protecting IP?

The American Association of University Professors recently issued a draft report, seeking comment, on the topic “Defending the Freedom to Innovate: Faculty Intellectual Property (IP) Rights After Stanford v. Roche.”  The report is very interesting; a strongly-worded warning that universities might be trying to assert more ownership over the IP rights in works created by faculty as the potential monetary value of that IP continues to rise.  I want to make one comment about the report itself, and then use one of its significant themes to make some further observations.

By way of background, Stanford v. Roche was a patent dispute decided by the Supreme Court in 2011.  At issue was a diagnostic test for HIV that was developed by a Stanford faculty member who worked both in a federally funded lab at Stanford and for a private biotech company.  Part of the problem was conflicting language in the two employment agreements — when he joined the Stanford faculty, Professor Holodniy agreed that he would assign the rights and title in his inventions to Stanford, but when he was employed by Cetus, his agreement “hereby” assigned those rights in anything he invented to Cetus.  The Supreme Court held that the immediate assignment in the Cetus contract overrode the promised assignment in the Stanford contract.  Along the way, the Court rejected Stanford’s proposed interpretation of the Bayh-Dole Act, which claimed that that federal law required that universities receiving federal funds own the patents on inventions that came out of those labs.  Instead, the Court affirmed that eligibility for patent rights (they are not automatic) vests initially in the inventor and is then subject to assignments made by employment contracts.  Universities are allowed to own and exploit patents on inventions that arise from work done by their faculty under federal grants, according to Bayh-Dole, but they do not automatically own those rights under the legislation.  Since the Roche assignment was immediately effective, it trumped the promise to assign made in the Stanford contract.

The AAUP report focuses on the declaration that faculty own those inchoate patent rights, at least in the absence of a direct and immediate assignment to the university.  But it is important to recognize that Stanford v. Roche involved competing assignments of those rights, both of which were made as conditions of employment.

That is why I am troubled by the easy analogy that the AAUP report makes between patent rights and copyright.  It suggests in several places that ownership of copyrighted materials could be treated as employer-owned, just as Stanford was allegedly suggesting patent rights should be.  But the report doesn’t really offer much substance behind this threat, citing only a conflict of interest policy at the University of Pennsylvania, which has no bearing on copyright ownership, and an academic article written by some university attorneys.  Yet copyrights are really quite different.  Unlike patents, they arise automatically as soon as original expression is fixed in a tangible medium.  Patent rights, on the other hand, require a long and costly application process that usually demands the specialized services of a patent lawyer.  It is odd to me that in the section of the report defining the different types of IP rights, this difference is not mentioned.

This seems significant to me because it provides a possible rationale for a university to make a claim over patents developed on campus, and that reason does not apply equally to copyrights.  When an invention developed on campus is patented, often the university invests significantly in obtaining those rights; unlike copyrights, patent rights do not simply arise directly as fruits of the research.  While copyrights really are just spontaneous developments from the direct tasks faculty are hired to do, patent rights are not, even if the inventions themselves are.  Patent rights cost money — often something over $20,000 — well beyond the investments made in the research itself.  So even if we accept the AAUP’s argument that the investment a university makes in supporting the research that leads to an invention should not automatically give that institution a right to share in any profits, the investment in actually obtaining the rights over that invention also needs to be considered.  And the fact that no similar costs are associated with copyrights provides a sound reason, in my opinion, for the normal differentiation, which is that institutions make no or limited claims over copyrights (as “work made for hire”) but assert a greater interest in patent rights, if they can be secured.

What really struck me about the report, however, and it is an emphasis I fully agree with, is its argument that both universities and faculty researchers share an obligation to put scholarship in the service of society:

Patents are regularly used in industry to exclude others from using inventions.  But faculty members should often be focused instead on creating conditions that give the public access to inventions… Commercial development of university knowledge to stimulate economic growth is unquestionably good.  But some administrative practices associated with patenting and licensing operations may negatively affect economic growth as well as scholarship.

This is exactly right.  In both the patent and the copyright arenas, concern for social welfare and the maximum impact of scholarship on economic and cultural development should have pride of place in IP practices.  But in the copyright arena, we need to acknowledge that it is not usually institutional policies that undermine public access and economic development; it is the ingrained practice of giving copyrights away for free to commercial interests so that they can be exploited for private gain.  It is unfortunate that the AAUP does not take the next step in its logic and remind its members that making provision for open access is a vital part of the commitment that the Association encourages.

“[F]aculty members should often be focused instead on creating conditions that give the public access.”  To do this, faculty authors must move beyond the practice, tied as it is to centuries-old technology, of surrendering copyright without remuneration AND without any guarantee that the fruits of their research will actually reach those who could benefit most from them.

During this Open Access Week, I hope the AAUP will look at this obvious extension of the appeal it is making in its draft report.  University exploitation of patents may well be a threat to academic freedom and to public benefit, but so are the commercial companies that exploit the copyrighted products of academic labor for huge profits and lock up access to scholarship in order to defend those profits.  Universities are harmed by this system, scholars are harmed by it, and society is harmed by it.  The threats against which faculty IP rights need to be defended come from several directions, and the AAUP needs to recognize that.

Many people probably saw this story about a scientist who inquired about payment after she was asked to blog on a prominent scientific web site and was called a “whore” for declining to provide her writings for free.  There are troubling gender and racial dynamics behind this outlandish reply, of course, but it also strikes me as very telling about the attitude toward scholarship and the value of copyrighted work.  The expectation that scholars will give away their work for free is so ingrained that any suggested departure is treated quite rudely, to say the least.  The result is that faculty scholarship makes money for lots of people, but not for the authors (at least, not directly) and certainly not for their universities, who have to pay millions to buy back the work they supported in the first place.

If we stop and think about it, this is an offensive situation.  The remark of that blog editor has the salutary effect of illustrating just how offensive the routine expectations of publishers really are.  Scholars need to defend their IP rights, as the AAUP report calls on them to do, and that defense should start with a refusal to transfer their copyrights without much stronger assurances that their work will be available to provide the social, economic and scholarly impact for which it was written in the first place.  Perhaps the AAUP could begin to organize that kind of response, in order to make good on its commitment to public access and economic growth.

The big picture about peer-review

In many mystery novels, there is a moment when someone makes an attempt to frighten or even kill the detective.  Invariably, the detective reacts by deciding that the threat is actually a good thing, because it means that he or she is getting close to the truth and making someone nervous.  In a sense, the article in Science by John Bohannon reporting on a “sting” operation carried out against a small subset of open access journals may be such a moment for the OA movement.  Clearly the publishers of Science are getting nervous, when they present such an obviously flawed report that was clearly designed to find what it did and to exclude the possibility of any other result.  But beyond that, we need to recognize that this failed attempt on the life of open access does point us toward a larger truth.

A good place to start is with the title of Bohannon’s article.  It is not, after all, “why open access is bad,” but rather “Who’s afraid of peer-review?”  Putting aside the irony that Bohannon’s own article was, apparently, never subjected to peer-review (because it is presented as “news” rather than research), this is a real question that we need to consider.  What does it mean for a journal to be peer-reviewed and how much confidence should it give us in articles we find in any specific title?

In the opening paragraphs of his article, Bohannon focuses on the Journal of Natural Pharmaceuticals as his “bad boy” example that accepted the bogus paper he concocted.  He quotes a mea culpa from the managing editor that includes a promise to shut down the journal by the end of the year.  But I want to think about the Journal of Natural Pharmaceuticals and about Science itself for a little bit.

I was a bit surprised, perhaps naively, to discover that the Journal of Natural Pharmaceuticals is indexed in two major discovery databases used by many libraries around the world, Academic OneFile from Gale/Cengage and Academic Search Complete from EBSCO.  These vendors, of course, have a strong economic incentive to include as much as possible, regardless of quality, because they market their products based on the number of titles indexed and percentage of full-text available.  Open access journals are great for these vendors because they can get lots of added full-text at no cost.  But they do help us sort the wheat from the chaff by letting us limit our searches to the “good stuff,” don’t they?  Maybe we should not be too sanguine about that.

I picked an article published last year in the Journal of Natural Pharmaceuticals and searched on one of its key terms, after limiting my search in both databases to only scholarly (peer reviewed) publications.  The article I selected from this apparently “predatory” journal was returned in both searches, since the journal identifies itself as peer-reviewed.  This should not surprise us, because the indexing relies on how the journal represents itself, not on any independent evaluation of specific articles.  Indeed, I am quite confident that once this latest issue of Science is indexed in these databases, a search on “peer review” limited to scholarly articles will return Bohannon’s paper, even though it was listed as “news,” not subject to peer-review, and reports on a study that employed highly questionable methods.

Librarians teach students to use that ability to limit searches to scholarly results in order to help them select the best stuff for their own research.  But in reality it probably doesn’t do that.  All it tells us is whether or not the journal itself claims that it employs a peer-review process; it cannot tell us which articles actually were subjected to that process or how rigorous it really is.  From the perspective of a student searching Academic OneFile, articles from Science and articles from the Journal of Natural Pharmaceuticals stand on equal footing.
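
To make that limitation concrete, here is a minimal sketch, in Python, of how a journal-level “scholarly/peer-reviewed” limiter behaves.  The field names are entirely hypothetical and real discovery platforms are far more complicated, but the essential point holds: the filter consults a flag supplied with the journal’s metadata, so a news item from Science and a research article from the Journal of Natural Pharmaceuticals both survive it.

    # Illustrative sketch only: field names are hypothetical, not drawn from
    # any actual vendor schema. The point is that the limiter tests a
    # journal-level flag, not a per-article evaluation.
    records = [
        {"title": "Who's Afraid of Peer Review?", "journal": "Science",
         "item_type": "news", "journal_claims_peer_review": True},
        {"title": "An article on herbal extracts",
         "journal": "Journal of Natural Pharmaceuticals",
         "item_type": "research-article", "journal_claims_peer_review": True},
    ]

    def limit_to_scholarly(results):
        # Mirrors the "peer reviewed" checkbox: keep anything whose journal
        # describes itself as peer-reviewed, regardless of item type.
        return [r for r in results if r["journal_claims_peer_review"]]

    for r in limit_to_scholarly(records):
        print(r["journal"], "-", r["title"])  # both records pass the filter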

Of course, it is perfectly possible that there are good, solid research articles in the Journal of Natural Pharmaceuticals.  These indexes list dozens of articles published over the last four years, written by academic researchers from universities in Africa, India, Australia and Japan.  Presumably these are real people, reporting real research, who decided that this journal was an appropriate place to publish their work.  And we simply do not know what level of peer-review these articles received.  So the question remains — should we tell our students that they can rely on these articles?  If not, how do we distinguish good peer-review from that which is sloppy or non-existent when the indexes we subscribe to do not?

The problem here is not with our indexes, nor is it with open access journals.  The problem is what we think peer-review can accomplish.  In a sense, saying a journal is peer-reviewed is rather like citing an impact factor.  At best, neither one actually tells us anything much about the quality of any specific articles, and at worst, both are more about marketing than scholarship.

The peer-review process is important, especially to our faculty authors.  It can be very helpful, when it is done well, because its goal is to assist them in producing an improved article or book.  But its value is greatly diminished from the other side — the consumption rather than the production side of publishing — when the label “peer-reviewed” is used by readers or by promotion and tenure committees as a surrogate for actually evaluating the quality of a specific article. Essentially, peer review is a black box, from the perspective of the reader/user.  I don’t know if the flaws in the “bogus” article that Bohannon submitted were as obvious as he contends, but had he allowed its acceptance by the Journal of Natural Pharmaceuticals to stand, that article would look just as peer-reviewed to users as anything published in Science.  The process, even within a single journal, is simply too variable, and too subject to lapses on any given day when a particular editor or reviewer is not “on their game,” to be used in making generalized evaluations.

So what are we to do once we recognize the limits of the label “peer-reviewed?”  In general, we need to be much more attentive to the conditions under which scholarship is produced, evaluated and disseminated.  We cannot rely on some of those surrogates that we used for quality in the past, including impact factor and the mere label that a journal is peer-reviewed.  Those come from a time when they were the best we could do, the best that the available technology could give us.  Perhaps it is ironic, but it is open access itself that offers a better alternative.  Open peer-review, where an article is published along with the comments that were made about it during the evaluation process, could improve the current situation.  The evaluations on which a publisher relies, or does not rely, are important data that can help users and evaluators consider the quality of individual works.  Indeed, open peer review, where the reviewers are known and their assessments available, could streamline the promotion and tenure process by reducing the need to send out as many portfolios to external reviewers, since the evaluations that led to publication in the first place would be accessible.

There are many obstacles to achieving this state of affairs.  But we have Bohannon’s article to thank for helping us consider the full range of limitations that peer-reviewed journals are subject to, and for pointing us toward open access, not as the cause of the problem, but potentially as its solution.

Can we “fix” open access?

The latter part of this past week was dominated, for me, by discussions of the article published in Science about a “sting” operation directed against a small subset of open access journals that purports to show that peer-review is sometimes not carried out very well, or not at all.  Different versions of a “fake” article, which the author tells us could easily be determined to be poor science, were sent to a lot of different OA journals, and the bogus paper was accepted by a large number of them.  This has set off lots of smug satisfaction amongst those who fear open access — I have to suspect that the editors of Science fall into that category — and quite a bit of hand-wringing amongst those, like myself, who support open access and see it as a way forward out of the impasse that is the current scholarly communications system.  In short, everyone is playing their assigned parts.

I do not have much to say in regard to the Science article itself that has not been said already, and better, in blog posts by Michael Eisen, Peter Suber and Heather Joseph.  But by way of summary, let me quote here a message I posted on an e-mail list the other day.

My intention is not so much to minimize the problem of peer-review and open access fees as it is to arrive at a fair and accurate assessment of it.  As a step toward that goal, I do not think the Science article is very helpful, owing to two major problems.

First, it lacked any control, which is fundamental for any objective study.  Because the phony articles were only sent to open access journals, we simply do not know if they would have been “caught” more often in the peer-review process of subscription journals.  There have been several such experiments with traditional journals that have exposed similar problems.  With this study, however, we have nothing to compare the results to.  In my opinion, there is a significant problem with the peer-review system as a whole; it is overloaded, it tends toward bias, and, because it is pure cost for publishers, no one has much incentive to make it better.  By looking only at a sliver of the publishing system, this Science “sting” limited its ability to expose the roots of the problem.

The other flaw in the current study is that it selected journals from two sources, one of which was Jeffrey Beall’s list of “predatory” journals.  By starting with journals that had already been identified as problematic, it pre-judged its results.  It was weighted, in short, to find problems, not to evaluate the landscape fairly.  Also, it only looked at OA journals that charge open access article processing fees.  Since the majority of OA journals do not charge such fees, it does not even evaluate the full OA landscape.  Again, it focused its attention in a way most likely to reveal problems.  But the environment for OA scholarship is much more diverse than this study suggests.

The internet has clearly lowered the economic barriers for entering publishing.  In the long run, that is a great thing.  But we are navigating a transition right now.  “Back in the day” there were still predatory publishing practices, such as huge price increases without warning and repackaging older material to try to sell it twice to the same customer.  Librarians have become adept at identifying and avoiding these practices, to a degree, at least.  In the new environment, we need to assist our faculty in doing the same work to evaluate potential publication venues, and also recognize that they sometimes have their own reasons for selecting a journal, whether toll-access or open, that defy our criteria.  I have twice declined to underwrite OA fees for our faculty because the journals seemed suspect, and both times the authors thanked me for my concern and explained why they wanted to publish there anyhow.  The same dynamic holds in both the traditional and the OA environments.  So assertions that a particular journal is “bad” or should never be used need to be qualified with some humility.

At least one participant on the list to which I sent this message was not satisfied, however, and has pressed for an answer to the question of what we, as librarians, should do about the problem that the Science article raises, whether it is confined to open access or a much broader issue.

By way of an answer, I want to recall a quote (which paraphrases earlier versions) from a 2007 paper for CLIR by Andrew Dillon of the University of Texas — “The best way to predict the future is to help design it.”  Likewise, the best way to avoid a future that looks unpleasant or difficult is to take a role in designing a better one.

That the future belongs to open access is no longer really a question, but questions do remain.  Will open access be in the hands of trustworthy scholarly organizations?  Will authors be able to have confidence in the quality of the review and publication processes that they encounter?  Will open access publishing be dominated by commercial interests that will undermine its potential to improve the economics of scholarly communications?  If libraries are concerned about these questions, the solution is to become more involved in open access publishing themselves.  If the economic barriers to entering publishing have been lowered by new technologies, libraries have a great opportunity to ensure the continuing, and improving, quality of scholarly publishing by taking on new roles in that enterprise.

Many libraries are becoming publishers.  They are publishing theses and dissertations in institutional repositories.  They are digitizing unique collections and making them available online.  They are assisting scholars to archive their published works for greater access.  And they are beginning to use open systems to help new journals develop and to lower costs and increase access for established journals.  All these activities improve the scholarly environment of the Internet, and the last one, especially, is an important way to address concerns about the future of open access publishing.  The recently formed Library Publishing Coalition, which has over 50 members, is a testament to the growing interest that libraries have in embracing this challenge.  Library-based open access journals and library-managed peer-review processes are a major step toward addressing the problem of predatory publishing.

In a recent issue brief for Ithaka S+R, Rick Anderson from the University of Utah calls on libraries to shift some of their attention from collecting what he calls “commodity” works, which many libraries buy, toward making available the unique materials held in specific library collections (often our “special” collections).  This is not really a new suggestion, at least for those who focus on issues of scholarly communication, but Anderson’s terminology makes his piece especially thought-provoking, although it also leads him astray, in my opinion. While Anderson’s call to focus more on the “non-commodity” materials, often unique, that our libraries hold is well-taken and can help improve the online scholarly environment, I do not believe that this is enough for library publishing to focus on.  Anderson’s claim that focusing on non-commodity documents will allow us to “opt out of the scholarly communication wars” misses a couple of points.  First, it is not just publishers and libraries who are combatants in these “wars;” the scholars who produce those “commodity” documents are frustrated and ill-served by the current system.  Second, there is very little reason left for those products — the articles and books written by university faculty members — to be regarded as commodities at all.  The need for extensive investment of capital in publishing operations, which is what made these works commodities in the first place, was a function of print technology and is largely past.

So libraries should definitely focus on local resources, but those resources include content created on campuses that has previously been considered a commodity.  The goal of library publishing activities should be to move some of that content — the needs and wishes of the faculty authors should guide us — off of the commodity market entirely and into the “gift economy” along with those other non-commodity documents that Anderson encourages libraries to publish.

If libraries refocus their missions for a digital age, they will begin to become publishers not just of unique materials found in “special” collections, but also of the scholarly output of their constituents.  This is a service that will grow in importance over the coming years, and one that is enabled by technologies that are being developed and improved every day.  Library publishing, with all of the attendant services that really are at the core of our mission, is the ultimate answer to how libraries should address the problem described only partially by the Science “sting” article.

The varieties of the public domain

It is well known that early publishing houses in America built themselves up, in large part, through the publication of unauthorized editions of popular British authors.  This was a time when foreign works, including English-language books published in Britain, did not enjoy copyright protection in the U.S.  Indeed, books published abroad in English did not get copyright in this country until 1891, a full century after the first U.S. copyright law.  And even after that time, the strict formalities imposed on foreign works, including the infamous “manufacturing clause,” kept many works out of copyright.  American publishers used this legal situation to make money off of the popularity of British authors without having to pay any royalties to those authors.  The firm Harper published unauthorized editions of Walter Scott, for instance, while Grosset & Dunlap (now part of Penguin) built its business in part by publishing Rudyard Kipling without his permission.  British authors and British publishers called this activity “piracy,” but in the U.S. there was a different name for it.  It was the public domain.

In his new book Without Copyrights: Piracy, Publishing and the Public Domain (Oxford University Press, 2013), law professor (and one-time professor of English Literature) Robert Spoo details the legal and the literary situation that modernist British authors faced because of the narrowness of American copyright or, alternately, the expansiveness of the American public domain.  Just to take one example, Virginia Woolf’s early novels were published in the U.S. with substantial changes from their U.K. editions, because it was believed that revised American editions could get U.S. copyright even if the original edition had failed to meet the manufacturing requirements.  Woolf instructed her friend Lytton Strachey, for example, to make lots of revisions because her American publisher suggested “the more alterations the better — because of copyright” (Spoo, p. 95).  Joyce and Pound were both published in fragmentary format in magazines because of the (unproven) theory that such publication could stake out a copyright claim while avoiding the difficulties and expense of U.S. printing and binding.  It is fascinating, in my opinion, to see how the actual experience of literature was shaped for American readers by the strictures of the copyright law.

Whether because copyright for foreign works was simply unavailable (as it was prior to 1891) or because of the rule that English-language works by foreign nationals had to be typeset, printed and bound in the U.S. in order to enjoy copyright here (not fully repealed until 1986), lots of well-known works were in the American public domain in those days.  And in spite of the frequent resort to the word piracy, this was a perfectly legal situation, created intentionally to protect American publishers and printers.  As nineteenth-century copyright scholar Eaton Drone wrote:

[I]t is not piracy to take without authority either a part or the whole of what another has written, if neither a statute nor the common law is thereby violated… Hence, there may be an unauthorized appropriation of literary property which is neither piracy nor plagiarism, as the republication in the United States of the work of a foreign author.  This is not piracy, because no law is violated; and, without misrepresentation as to authorship, it is not plagiarism. (Quoted by Spoo, p. 23)

Then, as now, accusations of piracy were thrown about rather irresponsibly, and Drone sought to clarify the situation.

Over time, publishers developed a system called “courtesy of the trade” which took the place of copyright protection for foreign authors.  That system had two prongs — the offer of some form of payment to the foreign author of a reprinted work and a “gentleman’s agreement” amongst U.S. publishers that others would not “jump the claim” of a publisher who had announced the intention to reprint a specific author. Although this was referred to as courtesy, it was really sharp business tactics, and it was not particularly fair to the authors.  They were not in a strong negotiating position as to the fees they were paid; they pretty much had to “take it or leave it,” especially since the system made it very difficult to shop their work to multiple American publishers.  And, of course, the system was used to create informal monopolies, which excluded competition and drove up prices.  In some ways the system of trade courtesy reminds me of the current situation in academic publishing.  Although publishers pay lip-service to the rights of scholarly authors, their work is appropriated without payment through a coercive system in which, until recently, they have had little option.  Such publication is not piracy, as Drone tells us, but it certainly is a form of free-riding, coupled with an effective monopoly that keeps prices on the sales side artificially high.

The public domain, of course, is no longer the wide open commons described by Spoo, where most works published abroad were free for anyone to reprint or otherwise use within 90 days of publication unless the authors met onerous requirements.  Today our public domain is almost as constrained as it was free-wheeling for much of our history.  Today, virtually no published works are entering the U.S. public domain; our cultural heritage is basically locked up.  And figuring out what is and is not in the American public domain is just as difficult today as it was for Ezra Pound or Charles Dickens.  As Spoo writes about contemporary international copyright law,

Far from unifying the global public domain, however, recent laws enacted in the United States and Europe only guarantee its continuing disharmony and fragmentation.  Worldwide availability of modernist works is threatened by a tragedy of the uncoordinated global commons, a congestion of divergent durational terms and other rules that impede the free use of works created nearly a century ago. (p. 8)

In the context of this confusion, it is all the more laudable that some groups are making continuing contributions to the public domain.  I began reading Spoo’s book shortly after returning from a meeting about the Copyright Review Management System (CRMS), an ongoing project of the HathiTrust.  CRMS is methodically researching books that fall into the “doubtful” categories of U.S. copyright — periods of years during which a published work might still be protected or might be in the public domain.  Since the beginning of the U.S. project in 2008, nearly 150,000 titles have been identified as being in the public domain.  These are works that can be made available to the public without any harm to rights holders.
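
To give a rough sense of what those “doubtful” categories involve, here is a deliberately over-simplified sketch in Python.  It is not the project’s actual workflow — real CRMS reviews weigh many more factors, such as copyright notice, foreign editions and author death dates — and the hypothetical function below reflects the year-based rules for U.S. publications only as they stood when this post was written.

    # Deliberately over-simplified illustration of the year-based categories a
    # CRMS-style review starts from; NOT the project's actual decision logic.
    def us_copyright_status(pub_year, renewal_found=None):
        """Rough status of a book published in the U.S., as of 2013."""
        if pub_year < 1923:
            return "public domain"          # term has expired
        if pub_year <= 1963:
            # The "doubtful" zone: these works fell into the public domain
            # unless the copyright was renewed in its 28th year.
            if renewal_found is None:
                return "undetermined"       # no renewal data located yet
            return "in copyright" if renewal_found else "public domain"
        return "in copyright"               # 1964 and later: renewal was automatic

    print(us_copyright_status(1950))                       # undetermined
    print(us_copyright_status(1950, renewal_found=False))  # public domain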

There is nothing underhanded about this project, as there arguably was about unauthorized reprinting by American publishers of unprotected foreign works.  Instead, this research provides a pure benefit.  Most of that benefit is in the ability to open up new works to the public that were previously closed simply due to lack of data.  Another part of the benefit, however, is in the fact that information is being gathered that is beneficial to rights holders and to future users.  As it determines that many books are in the public domain, the CRMS project has also determined that a significant number of the books it has researched are still in copyright, which is important information to know.  Even the fairly large category of “undetermined” is beneficial.  Although these books cannot be opened to the public, there is now better data about these titles, and the gaps in our knowledge about them have been identified.  Knowing what we don’t know, to paraphrase Don Rumsfeld, is itself a step forward.  HathiTrust should be proud of the work that it has done and continues to do, opening books to the public domain and gathering data that will clarify the contours of the public domain into the future.

The public domain is a changeable space, as Robert Spoo shows eloquently in Without Copyrights.  Changes in law, changes in the practices of authorship and publishing, and even the cost of paper can influence what is, or is not, a public domain resource.  As with the weather, many people complain about the vagaries of the public domain, but do nothing about them.  Spoo and the HathiTrust are each, in very different ways, doing something to strengthen our notions about those resources that are the vital common property of us all.

An odd announcement

I did not initially pay much attention when publisher John Wiley announced early in September that they would impose download limits on users of their database “effective immediately.”  My first thought was “if they are going to disable the database, I wonder how much the price will decrease.”  Then I smiled to myself, because this was a pure flight of fantasy.  Like other publishers before it, Wiley, out of fear and confusion about the Internet, will reduce the functionality of its database in order to stop “piracy,” but the changes will likely do nothing to actually address the piracy problem and will simply make the product less useful to legitimate customers.  But it is foolish to imagine that, by way of apology for this act, Wiley will reduce the price of the database.  As contracts for the database come up for renewal, in fact, I will bet that the usual 6-9% price increase will be demanded, and maybe even more.

As the discussion of this plan unfolded, I got more interested, mostly because Wiley was doing such a bad job of explaining it to customers.  More about that in a moment.  But first it is worth asking the serious question of whether or not the plan — a hard limit on downloads of 100 articles within a “rolling” 24-hour period — will actually impact researchers.  I suspect that it will, at least at institutions like mine with a significant number of doctoral students.  Students who do intensive research, including those writing doctoral dissertations as well as students or post-docs working in research labs, sometimes choose to “binge” on research, dedicating a day or more to gathering all of the relevant literature on a topic.  Sometimes this material will be downloaded so that it can be reviewed for relevance to the project at hand, and a significant amount of it will be discarded after that review.  Nothing in this practice is a threat to Wiley’s “weaken[ed]” business, nor is it outside of the bounds of the expected use of research databases.  But Wiley has decided, unilaterally, to make such intensive research more difficult.  In my opinion, this is a significant loss of functionality in their product — it becomes less useful for our legitimate users — which is why I wondered about a decrease in the price.
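
For what it is worth, a cap like this is easy to picture in code.  The sketch below, in Python, is purely illustrative — Wiley has said nothing about how the limit is actually implemented — but it shows how a rolling 24-hour window of the kind described would work, and why a dissertation writer “binging” on the literature would simply be refused at article 101 until earlier downloads age out of the window.

    # Illustrative sketch of a rolling 24-hour download cap; not Wiley's code.
    from collections import deque
    from datetime import datetime, timedelta

    WINDOW = timedelta(hours=24)
    LIMIT = 100   # articles per registered user per rolling day

    class DownloadLimiter:
        def __init__(self):
            self._history = {}   # user id -> deque of download timestamps

        def allow(self, user_id, now=None):
            now = now or datetime.utcnow()
            events = self._history.setdefault(user_id, deque())
            # Discard downloads that have fallen out of the rolling window.
            while events and now - events[0] > WINDOW:
                events.popleft()
            if len(events) >= LIMIT:
                return False     # article 101 within 24 hours is refused
            events.append(now)
            return True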

The text of the announcement was strangely written, in my opinion.  For one thing, I immediately distrust something that begins “As you are aware,” since it usually means that someone is about to assert categorically something that is highly dubious, and they do not wish to have to defend that assertion.  So it is here, where we are told that we are aware of the growing threat to Wiley’s intellectual property by people seeking free access.  I am very much aware that Duke pays a lot for the access that our researchers have to the Wiley databases, so this growing threat is purely notional to me.  As is so common for the legacy content industries, their “solutions” to piracy are often directed at the wrong target.  So it is with this one.  As a commenter on the LibLicense list pointed out, Wiley should be addressing automated downloads done by bots, not the varied and quite human research techniques of its registered users.

Another oddity was the second paragraph of the original announcement, which seems to apologize for taking this action “on our own,” without support from the “industry groups” in which Wiley is, they say, a “key player.”  As a customer, I am not sure why I should care about whether the resource I have purchased is broken in concert with other vendors or just by one “key player.”  But the fact that Wiley thought it needed to add this apology may indicate that it is aware that it is following a practice that has been largely shown throughout the content industry to be ineffective against piracy and alienating to genuine customers.  Perhaps, to look on the bright side, it means that other academic article vendors will not follow Wiley’s lead on this.

Things got even stranger when Wiley issued a “clarification” that finally addressed, after a 10-day delay, a question posed almost as soon as the first announcement was made, which was about exactly who would be affected by the limitation.  That delay, in fact, made me wonder if Wiley had not actually fully decided on the nature of the limitation at the time of the first announcement, and waited until a decision was made, belatedly, to answer the question.  In any case, the answer was that the limitation would only be imposed on “registered users.”  That clarification said users who accessed the database through IP recognition or via a proxy would not be affected, and that these non-registered users made up over 95% of the database usage.  So as Wiley asserts that this change will make little difference, they also raise the question of why they are doing it at all.  It seems counter-intuitive that registered users would pose the biggest threat of piracy, and no evidence of that is offered.  And I wonder (I really do not know) why some users register while most, apparently, do not.  If Wiley offers registration as an option, they must think it is beneficial.  But by the structure of this new limitation, they create a strong incentive for users not to register.  But then Wiley adds a threat — they will continue to look for other, presumably more potent, ways to prevent “systematic downloads.”  So our most intensive researchers are not out of the woods yet; Wiley may take further action to make the database even less usable.

All of this made me doubt that this change had really been carefully thought out.  And it also reminded me that the best weapon against unilateral decisions that harm scholarship and research is to stop giving away the IP created by our faculty members to vendors who deal with it in costly and irresponsible ways.  One of the most disturbing things about the original announcement is Wiley’s reference to “publishers’ IP.”  Wiley, of course, created almost none of the content they sell; they own that IP only because it has been transferred to them.  If we could put an end to that uneven and unnecessary giveaway, this constant game of paying more for less would have to stop. So I decided to write a message back to Wiley, modeled on their announcement and expressive of the sentiment behind the growing number of open access policies at colleges and universities.  Here is how it will begin:

As you are aware, the products of scholarship, created on our campuses and at our expense, are threatened by a growing number of deliberate attempts to restrict access only to those who pay exorbitant rates.  These actors weaken our ability to support the scholarly enterprise by insisting copyright be transferred to them so that they can lock up this valuable resource for their own profit, without returning any of that profit to the creators.  This takes place every day, in all parts of the world.

University-based scholarship is a key player in the push for economic growth and human progress.  While we strive to remain friendly to all channels for disseminating research, we have to take appropriate actions on our own to ensure that our IP assets have the greatest possible impact.  Therefore, effective immediately, we will limit the rights that we are willing to transfer to you in the scholarly products produced on our campuses.

Public art and fair use

A couple of weeks ago I was asked a question that set me thinking and required a bit of research.  That is hardly post-worthy, but when a case came down that addressed the same issue I had been thinking about, it suddenly seemed worth discussing.  It provides an opportunity to dissect an issue, remind ourselves of things we already know, and also explore the continuing evolution of fair use decisions from our courts.

The question involved the use, in a planned publication, of a photograph of a piece of public art.  In Durham we have several murals, painted on walls in public spaces, of Pauli Murray, one of our prominent citizens.  Rev. Murray was a civil rights activist, a women’s rights activist, and the first African-American woman ordained as a priest in the U.S. Episcopal Church.  But could an author use a photograph of one of those murals in an upcoming article?

Of course, the first thing that comes to mind is that there are likely two sets of rights to be aware of here — the artist’s and the photographer’s.  In this case, however, the photo was taken by the author of the publication that was in preparation, so the issue was focused on the scope of the artist’s rights.

Does it matter that this work of art can be seen by people walking on a public sidewalk or through a public park?  I knew, for example, that that fact did matter when the subject of the photography was a building.  In 1990 Congress added section 120 to our copyright law which permits “pictorial representations” of an “architectural work that has been constructed” as long as the building in question is “ordinarily visible from a public place.”  These pictures can be made, distributed and publicly displayed without the need to obtain permission.  But a quick look at section 120 reminded me that it is quite specific to architectural works; it does not tell us anything about the situation for a work of visual art that is painted on to the side of such a building or otherwise placed in a public spot.

Next up in my mental queue was an older case (with the great name of Letter Edged in Black Press v. Public Building Commission of Chicago) that involved the immense, untitled Picasso sculpture that is installed on the Civic Center Plaza in Chicago. But interesting as that case is, it is not applicable to the situation I was looking into.  The case was decided in 1970, so it was based on the provisions of our older, 1909 copyright law.  In the specific circumstances, a federal court held that the Picasso sculpture was in the public domain because all of the publicity around it had amounted to “publication” without copyright notice.  Under the 1909 law this put the work in the public domain.  But that is no longer the case under the newer Copyright Act, which replaced the 1909 law on January 1, 1978. So the “Chicago Picasso” case did not answer my question either.

Ultimately, the answer comes down to this — copyright is automatic when the work of art is “fixed”, including by being painted on a wall, so the artist holds a copyright in this mural.  Simply painting it in a public place does not make it public domain or give people any license to make use of it (other than by viewing it, of course).  There is no specific exception that would apply to use of a photograph of this type of art.  When we have exhausted those options — public domain, a license, or a specific exception — we are left with two more possible grounds for using a work, fair use or permission from the rights holder.

Which gets me to the recent case, which actually involved very similar circumstances — reuse of an image of an artwork painted on a publicly visible wall.  In this case, the artist was Derek Seltzer, and a copy of his poster called “Scream Icon” appeared on a Los Angeles wall that was filmed for a video used as a backdrop for live concerts by the band Green Day.  In an opinion last month, which can be found here, the Ninth Circuit Court of Appeals affirmed a lower court ruling that this use of the poster was fair use.  The Ninth Circuit panel found that Green Day’s use of the image, set in a context created by other posters and graffiti on the wall and used to convey, the Court said, a very different (and much clearer) message than the original poster, was transformative:

[the] video backdrop using ‘Scream Icon’ conveys new information, new aesthetics, new insights and understandings that are plainly distinct from those of the original piece.

The Court went on to hold that the creative nature of the work weighed slightly against fair use, and that the use of the entire poster was neutral, since the poster could not be separated from its context on the wall.  Finally, they ruled that there was no impact on the potential market for or value of the poster caused by Green Day’s fleeting use of it in a video backdrop.  They also held that that use was only “fleetingly commercial” because the video (and the poster) was not used in any way to market Green Day’s concert tour.

It is interesting that the Ninth Circuit panel said that this was a close case (and reversed a grant of attorney’s fees to Green Day because of that).  In many ways it seems like a pretty clear case of transformative fair use, and one for which there is precedent.  In the Second Circuit a very similar case involving use of copyrighted visual material during a live performance was decided about six months ago, also in favor of transformative fair use.  The case was called SOFA Entertainment v. Dodger Productions and involved a seven-second video clip from the Ed Sullivan Show shown on a screen at the end of the first act of the Broadway show Jersey Boys.  The Ninth Circuit cited SOFA Entertainment, but seemed to feel that the case for transformative fair use regarding the poster was closer.  On the one hand, of course, the use is much more ephemeral, but perhaps the Seltzer court felt that the “biographical anchor” which the SOFA court found in the use of Ed Sullivan in a show about the Four Seasons was lacking for Green Day.  And maybe they just wanted to reverse the grant of attorney’s fees to the band, feeling that it was unfair to Mr. Seltzer.

All of this reminds us that the analysis of transformative fair use, while very useful for both creative artists and scholars, is evolving territory.  In fact, the Ninth Circuit panel noted this themselves when considering the issue of attorney’s fees, but also saw in this case a convergence between themselves and the Second Circuit on the other coast. Fair use is always extremely dependent on the specific facts and circumstances related to the particular use of particular material.  Reading these case decisions continues to give us additional data points to guide our analysis, but we never arrive at a finished picture.

So what about the photograph of that Pauli Murray mural in a scholarly article?  If we look at the fair use argument as it is developed in these cases and many others — especially the language quoted above about new meanings and new insights — the use of that photo is probably fair use.  But there was also some realpolitik in my answer to the inquiry.  The artist who painted those murals is known, still working, and easy to contact.  Scholars working on the life and impact of Pauli Murray may want to make later uses of these works dedicated to her.  So why not ask? Especially because this is clearly a non-profit, scholarly use that will, like the murals, honor the memory of a remarkable woman, the artist has every reason to grant permission.  Asking permission, we know, does not prevent a later reliance on fair use, but in some cases it seems like an easy and respectful way to proceed even when fair use would likely also support the use.

Copyright policy here and abroad

Earlier this month, Jonathan Band, who, among his other accomplishments, is the principal attorney for the U.S. Library Copyright Alliance, posted a report of a talk he gave in Seoul, South Korea, at a conference on “The Creative Economy and Intellectual Property.”  In response to an invitation to talk about how U.S. copyright policy helped to foster a creative economy, Band made an interesting distinction, one that caught my attention and made me nod my head in surprised agreement.

Band’s basic distinction is this: U.S. domestic policy does help to foster a creative economy because it seeks to balance copyright protections, which do support creative pursuits, with exceptions that limit the scope of claims to copyright infringement.  These exceptions are every bit as important in encouraging innovation as the protections themselves are.  U.S. policy about copyright in other countries, however, does not similarly support a creative economy.

We can identify two reasons why the U.S approach to copyright in other countries does not support creativity and innovation, based on a distinction Band makes between process and substance.

In terms of process, the U.S. foreign policy about copyright is entirely in the hands of the Executive branch of government, which is very susceptible to lobbying from the traditional content industries.  The important role that the entertainment industries play in any Presidential election is just one reason for this understandable, if unfortunate, influence on the Executive branch.  And because that branch is solely responsible for our foreign relations, we are often in the position, as Band illustrates nicely, of advocating for much stricter copyright provisions abroad than we have, or are allowed to have, at home.

Part of the reason our domestic law is more balanced is because of the role of the courts, who are much less easily influenced by lobbying and who have a great role in maintaining the copyright balance, as we have seen in the important string of fair use decisions that have been coming out of courts all over the country in recent years. But U.S. courts have no role in shaping the kinds of policies we advocate for in other nations.

On the side of substance, our copyright policy toward other countries is determined and expressed by trade representatives, whose goal, naturally, is to improve the market for U.S. products around the world.  Thus their copyright focus is on (primarily) entertainment products that already exist, and which, they believe, must be strictly protected from all kinds of unauthorized use, even if those uses would be allowed in the U.S.  So at the same time that U.S. courts are developing a broad view of fair use that supports digital innovation and new industries, our trade reps are vigorously campaigning to prevent any other nation from getting the (correct) idea that fair use is a good idea if you want to support a creative economy.

To continue this distinction a little farther, I want to look at two other items that came to my attention this week.

On the domestic front, there is this info-graphic about fair use from the Association of Research Libraries, which is a great resource for starting a conversation with academic librarians and faculty members about the space that our domestic courts are opening for innovation, scholarship and creativity with their expanding approach to fair use.  Conveying to our communities that fair use is good news from the copyright front, and that considered, responsible decisions about how to use materials in teaching and scholarship are also quite likely to be good decisions about fair use, is an important role on campus.

On the international side, consider this press release from the European Commission suggesting that open access has reached a “tipping point” in Europe.  The European Commission, of course, has been a leader in promoting open access to research and scholarship.  And it is helpful to see open access as a way to simply move past the pressure that the EC and national governments receive from the U.S. to strengthen copyright protections and weaken user rights.  Open access is a way for copyright holders — remember that in spite of the rhetoric, it is authors on our campuses who are the original copyright holders in virtually all works of scholarship — to exercise their rights in ways that are most beneficial to them and to avoid many of the restrictions imposed by secondary copyright holders on access and reuse.  It allows scholars to simply ignore the attempts by industry and the U.S. trade reps to ratchet copyright protections ever higher and to use their own copyrights in a way that is true to copyright’s core purposes of supporting creativity and innovation.  Indeed, by making our works of scholarship openly accessible, we provide much-needed access to scholars and others, especially in the developing world, access that will be denied if those users have to rely on national policies that are shaped by pressure from the U.S.

In different ways, both the growing consensus around fair use and the open access movement are responses to the issues that Jon Band raised in his talk.  Both are supports for a creative economy.  But it is open access, where authors hold on to their copyrights in order to use their works for the best interests of themselves, their discipline and scholarship in general, that has the most potential to foster growth and innovation both here and abroad.

Feelin’ stronger every day

I don’t mean this to sound vindictive or smug, but the publisher John Wiley keeps filing, and losing, lawsuits intended to enforce ever-stronger copyright claims, and those outcomes can only be encouraging to those of us who seek a more balanced law that both protects copyright holders and supports reuse rights.

Wiley was the plaintiff in the case recently decided by the U.S. Supreme Court that held that the doctrine of first sale applied, in the U.S., to any lawfully made work, regardless of the place of manufacture.  Wiley, of course, wanted the Court to limit first sale to works manufactured in the U.S. so it could choke off second-hand sales, at least for textbooks, but the Court instead clarified the law in exactly the opposite direction. And then, in another set of cases, Wiley, joined by the American Institute of Physics, filed three different lawsuits, in different jurisdictions, alleging that law firms that filed copies of scholarly articles as disclosures of “prior art” with patent applications were infringing copyright if they did not pay for licenses for each article.  The two losses they have recorded thus far in those cases are more evidence of the robust notion of fair use our courts are developing for the digital age.

In one of those lost cases, in the Northern District of Texas, the judge dismissed the case on summary judgment back in May, holding that fair use protected the challenged copying and distribution.  But that decision was announced from the bench, and as far as I know we do not have a written opinion yet that we can parse to see how it might apply in other situations.

Late last month, however, the other court in which such a lawsuit was filed did issue an opinion.  Actually, a magistrate judge assigned to consider the case by the federal district court in Minnesota issued a “report and recommendation” that strongly supports fair use for the situation in question.  It also offers an analysis that could easily provide an analogy to activities in higher education, including the e-reserves system that is being challenged in the Georgia State University lawsuit.

It is worth spending some time looking at the report and recommendation of the Minnesota magistrate.  His basic recommendation is that the suit be dismissed on summary judgment because the challenged use of copyrighted articles is fair use.  Apparently because of the North Texas case, Wiley had pared back its claims, allowing that the actual filing of articles with a patent application, and a single copy retained by the law firm in its case file, was fair use.  But if Wiley thought that this common sense concession would allow it to force firms that did more, such as maintaining a database of articles for its attorneys or sending copies of articles to clients, to pay licensing fees, it was badly mistaken.  The Magistrate Judge’s recommendation is a sweeping assertion of fair use, and there are four aspects of his analysis that I want to highlight.

First, and perhaps most importantly, Judge Keyes asserts that the use made by Schwegman, Lundberg & Woessner, the law firm that was sued, is transformative.  He finds transformation in the use of the articles in question for a different “intrinsic purpose” than that for which they were published:

This conclusion does not change merely because the “copying” Schwegman engaged in did not alter the content of the Articles.  That lack of alteration may make the label “transformative use” a messy fit… But reproduction of an original without any change can still qualify as fair use when the use’s purpose and character differs from the object of the original, such as photocopying for use in a classroom.

In reaching the conclusion that Schwegman’s purpose in using these articles, which was to comply with government requirements, to compare the invention to “prior art,” and to represent its clients’ interests, “did not supersede the intrinsic purpose of the original,” the Judge also stated a case for why the copying at issue in the Georgia State e-reserves case can similarly be viewed as transformative.  Indeed, he made that point explicit, since teaching is a different “intrinsic purpose” than the one for which scholarly articles are written and published, which the Judge said was to inform the scientific community of new research and to allow for the testing of methods and conclusions.

Next, the Magistrate Judge directly refuted the claim by the two publishers that the Texaco case, which refused to allow fair use for copying within the research arm of a commercial company, should also be applied to reject fair use for this law firm (which is, of course, a commercial entity).  Here again the Judge made a careful distinction between the purpose that was rejected in Texaco and the purpose for which Schwegman was making its copies:

Here, there is no evidence that would allow a reasonable jury to conclude that Schwegman is similarly [to Texaco] maintaining mini-research libraries so that it can avoid paying for separate licenses for each of its lawyers, thereby superseding the original purpose of the Articles… the evidentiary character of Schwegman’s copying differentiates the firm’s use of the Articles from the Articles’ original purpose.

At this point, as well as at others, this recommendation undermines a central claim of the publishers in the Georgia State case.  The mere fact that a licensing market exists, and that the use of excerpts from published works therefore saves the users some money, does not undermine fair use, Judge Keyes tells us, when the purpose of the use is intrinsically different from the original purpose.  If the Eleventh Circuit Court of Appeals applies this type of reasoning in the GSU appeal, we could see an even broader fair use ruling in Georgia State’s favor than we got from the District Court.  There is no guarantee, of course, that the Eleventh Circuit will take this approach, but the analysis in the Schwegman case is one more support for that possibility.

Several times in his report, Judge Keyes points out that the loss of licensing fees would have no impact at all on the incentives that scholarly authors have to write the articles that are at issue in the case.  This incentive-based approach is the right one to take, in my opinion, since it looks at exactly the question copyright law should be focused on — what is needed to ensure the optimal level of ongoing creation and innovation.  Since scholars do not get paid for their scholarly articles, and any small amounts that may trickle down to them from licensing fees are irrelevant to the decision to report on their work, the Judge says that the fact that “the Publishers may have lost licensing revenue from Schwegman’s copying is not determinative and does not create a fact issue for trial.” I have italicized the last part of this sentence to emphasize that Judge Keyes does not think this is a hard case or a compelling argument, since the standard he is applying, in recommending summary judgment, is whether any reasonable jury could find otherwise.  He believes they could not, given the facts that surround scholarly communications today.

Finally, there is a fascinating part of this recommendation that points out, I think, what good citizens libraries really are in the copyright realm.  The law firm, you see, could not account for where it obtained the original copies of some of these articles, so the publishers argued that this lack of an authorized or licensed original should defeat fair use, alleging “bad faith” and citing the Harper & Row case.  But without evidence of actual piracy, the Judge rejects this claim and holds that no reasonable jury could find bad faith by Schwegman that would prevent a holding of fair use.  I find this important because most academic libraries either own purchased copies of the books they excerpt for e-reserves or make every attempt to obtain them when a request is made for a book the library does not already own (librarians frequently ask me how hard they should search for a copy to purchase).  I believe this practice is a good one, for both copyright and pedagogical reasons, but the Schwegman case is a reminder that by following it we may be going beyond the absolute requirements of a fair use argument.  The fact is that, in spite of some overheated rhetoric from the publishing industry (a former president of the AAP once called all librarians pirates), libraries try hard to be good citizens and to respect the appropriate boundaries of copyright.  What is causing problems these days is the publishers’ deep fear of the digital environment and their efforts, in response to that fear, to push the boundaries of copyright further and further, even to the point that its justifying purpose of supporting authorship and innovation is undermined.

The Schwegman report is just that, a report and recommendation.  It remains to be seen whether the District Court judge responsible for the case will adopt it.  But the fact that it is out there, and is so supportive of a fair use argument that would allow and endorse library practices that the publishing industry has challenged, is another data point for our consideration.  It serves as a reminder that the key to finding favor in the fair use analysis is to be doing something the court believes is important and beneficial.  When we make limited copies to teach our students and support our researchers, we are on the side of the angels, and we continue to get these examples that must, over time, accumulate into a body of support for library practices.  It is publishers like Wiley, who are in the position of asking courts to stretch copyright law solely to support new income streams, that are, and will continue to be, always on the defensive in spite of their offensive strategies.

Are we done with copyright?

There has been lots of talk about copyright reform in Washington over the past few months, as evidenced by the announcement from the Chair of the House Judiciary Committee that the panel would undertake a comprehensive review of the copyright law.  The first hearing for that review was held back in May.  As Mike Masnick from Techdirt noted, the Register of Copyrights is supportive of the effort but “still focused on bad ideas.”  More recently, the Department of Commerce Task Force on Internet policy issued a “Green Paper” last month that helps us see what is right and what is wrong with the current attention in D.C. on copyright reform.

The Task Force recommended three broad categories of reform: updating the balance of rights and exceptions, better enforcement of rights on the Internet, and improving the Internet as a marketplace for IP through licensing.  The last two are straight out of the legacy entertainment industries’ wish list, of course, and they would do nothing at all to better realize the fundamental purpose of copyright, which is to promote creativity and innovation.  As for the first, everything depends, of course, on where one thinks the balance has gone wrong.  The Task Force includes as a priority the reform of the library exception in section 108, which is a favorite goal of the Copyright Office right now, but it is not at all likely that anything the Office cooks up would be better than leaving the current 108 alone.  The Green Paper also seeks “input” about digital first sale and remixes; note that input is a much weaker commitment than the one the Task Force is willing to make to such things as online enforcement, reform of 108, or — another industry favorite — the extension of the public performance right for sound recordings.

Most of this is just patching the current law around the edges, and addressing only those problems that the industry lobbyists would like to see fixed.  But it provides a context for asking a much more searching question about the utility of copyright in a digital age.  Does copyright actually serve its purpose, or any socially desirable purpose, in the current economic and technological environment?  What would it look like if we reformed the law in a way that paid close attention to that environment, rather than just listening to lobbyists for industries that are failing to adapt and want the law to protect them?

Let’s start with this report from The Atlantic about research into the effect of copyright protection on the availability of books.  This should be the “sweet spot” for copyright; if it does anything right, it should encourage the creation and distribution of books.  But the research reported in the article suggests just the opposite, that copyright protection, especially protection that lasts as long as it now does, actually depresses availability, so that more books are available from the decades just outside of copyright than from the most recent years.  Here is the conclusion of the researcher, a professor at the University of Illinois:

Copyright correlates significantly with the disappearance of works rather than with their availability. Shortly after works are created and proprietized, they tend to disappear from public view only to reappear in significantly increased numbers when they fall into the public domain and lose their owners.

This is not really that surprising if we think about what happens in modern book publishing.  A title is published and sold for 3-5 years, at most.  Then other titles come along and push the “older” ones “off the shelf.”  The initial publisher no longer is interested in marketing the vast majority of these books that are 10 or 20 years old, and, because of copyright, no one else can do so.  So a vast black hole swallows up most of the works of our culture that are less than 95 years old.  Only as they reach the century mark do these works begin to reappear as reprints, because other publishers, and even members of the general public, can now reproduce and distribute them.

By the way, one of the ironies of this article, which helps to define and quantify what is widely known as the “orphan works problem,” is that many of the books that are unavailable may actually be in the public domain.  Because for much of the 20th century copyright had to be renewed after 28 years, and most rights were not renewed (due to the same lack of interest by their publishers that sent those books to the remainder table), many of these books could be reprinted; but the Byzantine tangle of rules and Copyright Office records makes it too risky for anyone to undertake the task.

As I say, this situation is not a surprise.  Back in 1970, then-Harvard Law professor and now Associate Justice of the Supreme Court Stephen Breyer wrote about “The Uneasy Case for Copyright” (the link is to the article in JSTOR), in which he argued that copyright was not fulfilling its purpose very well and probably should not be dramatically expanded in the then-nearly-completed copyright revision process.  Unfortunately, Breyer’s article had little impact on that process, which resulted in the 1976 Copyright Act, but the past forty years have proven that he was largely correct, and the Atlantic article discussed above is just one of many pieces of evidence.  But “the real force of Breyer’s article is in arguing that copyright must be justified in a particular economic context and that technological changes may modify economic conditions.”

This quote about Breyer’s “Uneasy Case” comes from a 2011 article by another law professor, Niva Elkin-Koren of the University of Haifa, called “The Changing Nature of Books and the Uneasy Case for Copyright.”  In her article, Professor Elkin-Koren carefully examines the technological changes and lays out the subsequent economic analysis, just as Breyer did back in 1970, for the eBook age.  If in Breyer’s day the case for copyright was “uneasy,” in the digital age it is downright painful.  Were I to summarize Elkin-Koren’s conclusions in a single sentence, it would be this — what we need most in the digital environment is competition, and copyright in its current form suppresses competition.  The digital environment is fundamentally different from print, largely because of the lower barriers to entry into the publication market.  In this context, competition can be vital and beneficial, but the copyright monopoly threatens to help established players gain an even tighter stranglehold on the marketplace than was possible in the print era (and that grip was already too tight, as we have seen).  Placing Elkin-Koren’s work next to Breyer’s, it is plain to see that the economic and cultural harms of our current copyright regime are more visible and more severe than ever.

If copyright reform were undertaken in the context of this awareness, what would it look like?  Suppose we were willing to do more than simply rearrange the deck chairs, where might we end up?  Elkin-Koren suggests a possibility:

[A] legal right to control copies may no longer be useful in a digital environment.  One reason for the weakening strength of copyright is that the legal right against unauthorized copying is no longer effective in digital markets… In order to secure the rights of authors to a share in the commercial revenues extracted from their books it may suffice to simply define a legal right to receive compensation: a fixed share from any commercial use.

It is breathtaking to consider the consequences of this kind of lightweight approach to copyright, as opposed to the steady process of adding more and more restrictions and penalties that has characterized nearly all previous copyright legislation.  Harkening back to the first U.S. copyright law, we might find that a “right to vend” is all we really need in the Internet era.  Certainly this would bring competition into the digital marketplace for copyrighted goods, which would scare the entertainment industry to death!  It is a fantasy, I know, but I would love to see copyright reform discussions in Washington start from that simple baseline, and then have an open discussion (one not dominated by those lobbyists best known to legislators and bureaucrats) about the consequences, problems and additions, if any, that would be needed to make that simple proposal work.

More on the AHA, ETDs and Libraries

I wanted to be done with the American Historical Association and its muddle-headed statement urging that theses and dissertations be embargoed from open access for up to six years in order to protect publishing opportunities.  I had hoped that the statement would receive the scorn it deserves and we could all move on to discussing more serious matters.  And it has received a good deal of that well-deserved incredulity and disparagement, but there is still a bit of a debate going on — evidenced by this story in the New York Times — so I want to make a couple of additional points.

First, there is an article in Inside Higher Ed about the debate that does a pretty good job of summarizing the discussion, although it still treats the AHA’s statement with more seriousness than it deserves, in my opinion.  But one really telling tidbit from that article is the comment by the director of the Association of American University Presses that its members, whose wishes are supposedly being catered to by the AHA, were surprised by the statement.  Apparently he called over a dozen press directors after the statement was issued and found that none of them shared the concern that the AHA is so afraid of.  So one wonders, as I did in my last post, where the evidence is for the claim the AHA is making that ETDs imperil publication.

The AHA attempts to address this very question in an FAQ that was released shortly after the statement.  There, AHA Vice-President Jacqueline Jones directly poses the issue of evidence, and answers it like this:

This statement is based on direct communications between some AHA members and the acquisitions editors of several presses. In those communications, certain editors expressed the strong conviction that junior scholars placed their chances for a publishing contract at risk if their dissertations had been posted online for any length of time.

“Several presses” and “certain editors.”  This reliance on vague rumors seems to contradict what the director of the university presses’ association says he found by calling his colleagues.  How are we to decide who is right on the basis of such unsupported statements?  Does this reflect the standard of evidence that is acceptable to historians?  Even worse is the fact that, in the same answer, Ms. Jones disputes the much more quantified study recently published in College and Research Libraries, which also contradicts the AHA, by asserting that only a small number of presses said revised dissertations were “always welcome” while a much larger number said that such submissions were evaluated on a case-by-case basis.  Ms. Jones suggests that the article’s authors take too much comfort in this portion of the responses they received to their survey.  But this misunderstands what is being said; all publishers evaluate all submissions on a case-by-case basis.  That is good news; it means your work will be considered (and, therefore, that the ETD was not a problem).  What, after all, is the alternative?  Even “always welcome” does not mean that all submitted dissertations are guaranteed publication.  Does the AHA hope to return to a dimly remembered time when all dissertations, at least those from elite universities, were published without question and without revision?  If so, its rose-tinted nostalgia has lapsed into delusion, and the result is bad advice for graduate students.  If, based on this commitment to a past that never existed, a student decides to avoid an online presence for her work for five or six years, her career will be destroyed in an age where, if you cannot be found online, you might as well not exist.

Throughout this debate, lots of folks are making assertions about libraries that display a lack of awareness of how those institutions work.  Over and over again we hear that this fear of an online presence exists because libraries will not buy monographs based on a revised dissertation if the unrevised version is available online.  And no matter how often librarians remind folks that this is not true, it keeps resurfacing.  Let me try again.  In 25 years as an academic librarian, I have never met a librarian who looks for an online version of a dissertation before buying the published, and presumably heavily revised, monograph based on that dissertation.  That is just not part of the process; most acquisitions librarians do not even know whether there is an online version of the dissertation when they decide about purchasing the monograph; I certainly did not when I made these sorts of decisions.  Libraries look for well-reviewed items that fit the curricular needs of their campus.  They may ask whether the book is over-priced and/or too narrowly focused, and those questions may rule out many revised dissertations these days.  But they simply do not, based on my experience and discussions with many of my colleagues (more anecdotal evidence!), look to see whether they can get an unrevised version for free.  Perhaps librarians trust publishers to have guided the revision process well, thereby creating a better book, while the AHA does not seem to value that process.

Occasionally in this discussion we have seen publishers assert the same fiction about library acquisitions, sometimes dressed in more sophisticated form.  They say that it is true that individual librarians do not make decisions based on OA ETDs, but that vendors like Yankee Book Peddler allow approval plan profiles to be designed so that revised dissertations are never considered.  This is true, but it does not prove what it is asserted to prove.  Many academic libraries, especially at smaller institutions that do not have a mandate to build a research collection, will exclude books based on revised dissertations from their approval plans because such books are likely to be very expensive and very narrowly focused.  Many libraries simply cannot put their limited funds toward highly specialized monographs that will not broadly support a teaching-focused mission.  To try to use this situation to frighten people about open access is disingenuous and distracts us from the real economic tensions that are undermining the scholarly communications system.

Finally, we should remember that dissertations have been available in pre-publication formats for a very long time.  The AHA statement talks about bound volumes and inter-library loan, but that reflects either extreme nostalgia or willful ignorance.  UMI/ProQuest has offered dissertations for sale since the 1970s, and it sold those works in online form for years before ETDs began to pick up momentum.  And ETDs are not so new; early adopters began making electronic dissertations available a decade ago.  Duke’s own ETD program began in 2006, and we worked from the example of several predecessors.  So why did the AHA wait until 2013 to issue its warning?  Perhaps it took its own bad advice and nurtured its opinion until it suffered the same fate the AHA is now urging on graduate students — irrelevance.
