How efficient is our licensing system?

Two letters landed on my desk a few weeks ago, both from the Copyright Clearance Center.  I have written before about concerns over what we are actually paying for when we pay permission fees to CCC, and my experience with these two letters deepened that concern.

The first letter asked us to give permission for another university to use in class a newsletter from the 1970s that is on our digital collections web site as part of the holdings of the Sallie Bingham Center for Women’s History.  We had to deny the request because we do not hold the rights in this particular newsletter; we have digitized and displayed it based on a belief that doing so is fair use.  In any case, the original author cannot be located, since the work was pseudonymous.  The result is that CCC, having actually failed to contact the rights holder, will now send the other university a denial-of-permission message.  This disturbed me, and I decided to get in touch with the professor directly, suggesting that she link to our digital collection so her students would still be able to read the work.  Unfortunately, the permission mechanism at CCC is an either/or switch, which cannot accommodate all of the nuances of copyright in higher education.

The second letter started a much more complicated chase.  It informed us that someone at Duke was entitled to a royalty check, but that the original check had gone uncashed for six months.  The letter offered to reissue a check (for a $25 fee that was nearly half the total amount) or gave us the option to refuse the payment altogether.  Since the letter was sent to an impossible address – the name of one University entity but the box number for a different one – it was no surprise that the first check had been lost.  When this second notice found its way to my desk, I decided to investigate.

What I found was very troubling.  The folks at the CCC told me that the royalties had been collected from a South African university, and they cited two titles.  Unfortunately, neither title seems to be something for which anyone at Duke holds the rights in the first place.  One appeared to be a Harvard Business case study; the other was an article from a journal whose ISSN, as given to me, does not exist.  None of the authors named by CCC appear to have any connection to Duke, and I cannot locate the specific article at all.  When I asked for more details, two different representatives at CCC promised to call me back.  That was three weeks ago, and I have heard nothing further.

My concern here is not to collect the $56 being offered to us.  Instead, I am wondering just what that South African university actually paid for.  Obviously a substantial fee was collected and permission granted.  Yet the CCC seems to have based that fee on a mistaken perception of who the rights holder is and a mistaken belief that it was authorized by that rights holder to sell the permission.

We are often told that the application of fair use on campus can be quite narrow because there is an efficient mechanism for licensing reuse and rewarding authors.  This experience reinforces my perception that that mechanism is not as efficient as is often claimed, and that a great deal of the money we spend on permissions never does get to the authors who are supposed to be rewarded.  The fact is that locating rights holders is very difficult, and the Copyright Clearance Center is as much at the mercy of those difficulties as are the rest of us.  One of the reasons for reinstating a renewal provision for US authors in our copyright law would be to make locating the real holder of specific rights much easier.  Until we have such a system in place, we must be wary of relying too heavily on any licensing organization to actually know who each rights holder is and how to get them the fees that are supposed to motivate them.

Should the court consider the new Google patent?

On Tuesday Google was formally granted a patent on software to selectively control access to content, based on access rules and geographical locations.  There is a story on Ars Technica here that explains the patent and its potential application very nicely.  Basically, this is a technique for filtering what users can see based on where they are in the world.  Such filtering is not new; Yahoo! famously lost a court case in France and had to begin controlling access to its auction sites to prevent Nazi memorabilia from being sold in that country.  Different international laws about all kinds of topics can force Internet services to distinguish what can and cannot be seen in different parts of the world.
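To make the general technique concrete, here is a minimal sketch of rule-based geographic filtering of the kind described above.  This is only an illustration of the concept, not Google’s patented implementation; the rule table, item identifiers, and country codes are hypothetical examples.

```python
# A minimal sketch of geographic content filtering: each content item may
# carry an access rule, and a user's country determines what is visible.
# Item names and rules below are invented for illustration.

ACCESS_RULES = {
    # Blocked in specific countries (e.g., the French ruling against Yahoo!)
    "auction/nazi-memorabilia": {"deny": {"FR", "DE"}},
    # Viewable only from specific countries
    "books/us-only-title": {"allow": {"US"}},
}

def can_view(item_id: str, country: str) -> bool:
    """Return True if a user located in `country` may view `item_id`."""
    rule = ACCESS_RULES.get(item_id)
    if rule is None:
        return True                          # no rule: unrestricted
    if "allow" in rule:
        return country in rule["allow"]      # whitelist semantics
    return country not in rule.get("deny", set())  # blacklist semantics
```

In practice a system like this would sit in front of every content request, with the user’s country inferred from an IP-geolocation lookup; the interesting (and patent-relevant) part is the combination of per-item rules with per-location evaluation, not any one lookup mechanism.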

This story is interesting, however, for at least three reasons, the last one very relevant to the fairness hearing being held tomorrow in regard to the proposed settlement of the copyright lawsuit over the Google Books project.

The first thing that is interesting in this story is the fact that the patent application for this software was filed in September of 2004.  The five-and-a-half-year gap between initial filing and final approval is not necessarily unusual, but it gives me a chance to remind readers how long and costly the patent process is.  This is a huge difference between copyright and patents, and indicates why copyright is usually so much more important to higher education.  Every scholar, administrator, librarian and student owns copyrights, but relatively few can afford the time and money to obtain a patent, even if they have an invention that meets the much higher standards for patentability.

Which brings me to the second point: should software of this type even be eligible for patent protection?  Software patents were controversial for a long time because they were alleged to represent “only” abstract ideas – algorithms based on zeros and ones.  And until at least the mid-1990s, the Patent Office and the courts would not recognize patents for business methods.  All of that seemed to be resolved in favor of patenting business method software, but a case currently before the Supreme Court, called Bilski v. Kappos, has the potential to alter the rules on this issue.

But it is the impact of this patent on the Google Books settlement that really interests me.  Should the court considering the settlement tomorrow take notice of this patent?  If it did, what impact would it have?  Given the objections to the settlement from international copyright holders and the promises Google has made to exclude works from outside the US, UK and Canada, the need for some filtering system seems obvious.  So from one point of view, this patent is indicative of Google’s good faith efforts to do what it has promised to do.  Nevertheless, there are some less charitable interpretations that could be applied.

For one thing, this software could enable censorship of the type Google first practiced, then rejected, in China.  We should never forget that the origin of copyright was as a tool for censorship; anything that automates copyright enforcement runs the risk of facilitating repression.

Of most interest to the GB settlement, however, is the question of whether this patent ratchets up the worry about a Google monopoly over digital books.  Lots of comment has been made about the possibility that Google will have a monopoly over digital access to orphan works.  It is unlikely that any other party will be able to get the same kind of compulsory license for all orphan works that Google stands to gain in this class action settlement.  Now we must face the possibility that, even if a competitor could get such a license, in order to effectuate the necessary access restrictions it would have to seek a license from Google for this patented process.

The Ars Technica article points out that Google has promised not to use its patent portfolio for offensive purposes, that is, to limit competition, and that its record so far suggests that it is serious about that promise.  Nevertheless, courts need to look beyond promises to “do no evil” and think about long-term consequences.  As the court considers whether the proposed settlement will give Google too much market power, it might do well to consider this patent on geographical filtering software as one more reason to keep a sharp eye on the project as it proceeds.

OSTP comments and the issue of compensation

I wanted to post this earlier, but intervening events got the better of me.  As most readers will know, the White House Office of Science and Technology Policy recently collected a wide range of very useful and specific comments in response to a request for information about public access policies for federally-funded research.  I wanted to point readers to two sets of comments, those that I wrote on behalf of the Duke University Office of Scholarly Communications, which are available here, and the superb comments from Harvard Provost Steven Hyman, which are linked from Harvard’s Office of Scholarly Communications blog, the Occasional Pamphlet.

One issue that arises in some of the OSTP comments is “compensation” for publishers when the final published version of articles based on federally-funded research is made publicly accessible.  I was recently part of a conversation on public access in which several academic publishers from scholarly societies raised this term.  I bit my tongue at the time to keep from yelling because I thought it was an idiosyncratic notion with no legs.  But when I looked at the full set of OSTP comments, I noticed that compensation is brought up by numerous publishers.  See, for example, the comments from the Association of American Publishers, from Elsevier, and from STM: the Association of Scientific, Technical and Medical Publishers.  This last set of comments is very explicit in suggesting that publishers deserve financial compensation (not just “compensation” in the form of embargoes) for the value they add to scholarly articles through managing peer-review and copy editing (see page 10).

I continue to be amazed that scholarly publishers are willing to make this demand in this language, but all I can say is that it is a conversation I am anxious to have.  I hope we can discuss the failures of compensation that occur throughout the scholarly publication system.  Publishers, of course, are usually the only ones who actually profit financially from scholarly journal articles.   Taxpayers often underwrite the work upon which those articles are based, and universities also supply resources and salaries to the authors.  Where is compensation for those entities, which rightfully should be paid out of subscription income?  Instead, of course, it is the universities that pay to get back the products of their own research, through their library budgets.  Peer-review is, of course, managed by publishers, but the actual intellectual work is again done by university faculty members, who donate their time and labor to improve scholarship.  If we are going to talk about compensation, let’s discuss how they should be compensated from the profits publishers make from their work.  And finally, how should scholars be compensated for transferring their copyrights to publishers?  There are, after all, substantial lost opportunity costs whenever an author surrenders control over their work.  These transfers have almost always been gratuitous in the past, but if we are going to talk about compensation, perhaps that can change.

In short, I hope we can have a conversation about compensation, because such a conversation can only reveal how exploitative and unsustainable the current model is.  If we discuss the full range of compensation issues, and not only narrow questions about copy editing, perhaps we can make progress towards a fair system of scholarly communications.

LOL at the Federal Register

It sounds very strange, but I really did find myself laughing as I read a long notice from the Copyright Office in volume 75, number 15 (Jan. 25, 2010) of the Federal Register.  Admittedly, some background is necessary to acquit me of the suspicion of insanity.

The Copyright Office notice details an interim rule that makes an interesting change in Office policy.  Essentially, the Office is now giving itself the right to ask for mandatory deposit of a certain class of online-only publications.  Deposit is mandatory for all published works, although failure to comply with this rule does not impact the availability of copyright protection itself.  But until now the Office has exempted online-only publications and, indeed, has not taken a position on whether or not such works are actually “published.”  In this notice they do acknowledge, based on several court rulings, that online material is published, and they extend the mandatory deposit requirement to a circumscribed class of online-only works — essentially formal periodicals that are published online without a print equivalent.  The requirement of deposit is not automatic even for these works; it will be triggered only if the Copyright Office makes a demand.  And the notice is careful to exclude things like blog posts, although this blogger wonders how successful that effort really is.

For me the real humor in this dry Federal Register notice concerns the issue of Digital Rights Management, or DRM — the technological protections used by many rights holders to control access to their works.  Such protection measures are fiercely protected by the copyright law itself; the Digital Millennium Copyright Act added provisions that make it illegal to circumvent these electronic locks even for otherwise legal purposes.  Some courts have even held, in spite of plain language in the statute, that fair use of a work can be prevented if the user would have to break through a digital lock.

So the irony that made me laugh out loud was the discovery that the Copyright Office will require that DRM be removed from the copy of an online publication that is deposited subject to this new rule.  Why?  Because “copies of works submitted to the Copyright Office under this interim rule must be accessible to the Office, the Library, and the Library’s users.”  To ensure this accessibility, the interim rule makes it a criterion of the “best edition” that must be deposited that “technological measures that control access to or use of the work should be removed” (see page 3867 of the Register, column 3, for these quotes).

As if to soften the blow to publishers from this rule, the Copyright Office goes on to detail the special security measures that will be taken regarding use of these deposited copies.  Users will have to be in a Library of Congress facility, and only two users at a time will be given access, over a secure network.  Yet the notice also says that “Authorized users may print from electronic works to the extent allowed by the fair use provisions of the copyright law . . . as is the case with traditional publications.”

Here is the problem.  DRM systems prevent users from exercising their rights under the Copyright law; we often cannot print from DRM-protected works even “to the extent allowed by law.”  If we disable technological protections to do so, we may be subject to draconian penalties.  No other library in the world has the power to exempt its users from these burdens.  The Library of Congress seems to recognize the unfair restriction placed on users by DRM and is using its unique position to mitigate that problem.  But what about the rest of us?

Of course, the Library of Congress does have the authority to relieve the rest of us of some of the burden created by the anti-circumvention rules.  The Library is given rule-making authority by the DMCA to declare exceptions to these rules.  Unfortunately, when the new exceptions were due in late 2009, the Library punted, issuing an indefinite extension of the old, inadequate exceptions and promising new rules in “no more than a few weeks.”  The new exceptions have still not been announced, three months later.  With the Library’s clear recognition in last week’s rule that DRM-protected works are not acceptable to meet the needs of library users, we can only hope that these long-delayed exceptions, when announced, will recognize the widespread harm done by DRM and will declare broad exceptions — like the one the Library has given itself — to help the rest of us.

“Renewing copyright” and a reflection on versions

In a post about two months ago I promised that I would offer a link to the article I wrote on reforming copyright law from the perspective of academic libraries.  That article was published this month in portal: Libraries and the Academy, and is now also available in DukeSpace, the open access repository at the Duke Libraries.

The full citation for the article is:  Kevin L. Smith, “Copyright Renewal for Libraries: Seven Steps Toward a User-friendly Law,” portal: Libraries and the Academy, Volume 10, Number 1, January 2010, pp. 5-27.

The published version is available on Project Muse at http://muse.jhu.edu/journals/portal_libraries_and_the_academy/summary/v010/10.1.smith.html

If you cannot access Project Muse, this is the link to the DukeSpace version, which is my final manuscript but lacks the formatting and copy editing done by the good folks who publish portal:

http://hdl.handle.net/10161/1702

As I said in the original post linked to above, I hope my suggestions will be read in combination with those made by Professor Jessica Litman in her wonderful article on “Real Copyright Reform.”

I had intended to end this post with the information above, but a recent discovery has caused me to change that plan.  Late last week I discovered that a small error, an extra clause made up of words from elsewhere in the sentence, was inserted into the HTML version of the article.  It does not appear in my manuscript, nor in the PDF of the published article, only in the HTML version.  I contacted the editorial folks at portal and expect that the error will be fixed shortly, perhaps even before I publish this post (Note on 2/2 — the error has been corrected).  But it does raise some questions about some of the assertions made on behalf of traditional publication.

First, we are often told that copy editing adds value to an article and that publishers deserve compensation for adding that value whenever the public is given access to the final published version of an article.  On the compensation issue I shall write more later.  But here I want to note that the editorial process can insert errors as well as eliminate them.  I found the editorial assistance from portal to be superb, but, in spite of their best efforts, the multiple stages of the publication process are not all within their control.  The result was that an error that I was not responsible for, albeit a minor one, found its way into my work.

Second, this small incident raises questions about the assertion that publishers provide the scholarly community with the “version of record” that assures consistent quality.  In fact, there are two different versions of my article available at this moment (on 2/1) on the Project Muse site for this journal — the HTML is different from the PDF in at least this one respect.  So which is the version of record?  To make that determination, I am the final arbiter, and I hope that the error I caught in the HTML will be corrected based on my request.

This suggests that there is at least an argument that the “version of record” should be the one that is closest to the author’s hand.  Who else has a greater incentive to ensure accuracy, after all?  A serious error may impact the publisher’s reputation to some degree, but it can be devastating to that of the author.  And I would certainly hope that a significant error, such as an incorrect calculation or formula, would never be “corrected” by a copy editor without first consulting the author; it is easy to imagine cases where what looks like an error — a deviation from the expected — is in fact the heart of the argument.  Thus significant corrections should always be made with input from the author, and the author would then be free to correct any versions she has made available to the public.  So I would like to see discussions of “version of record” include the likelihood that the version nearest to the author may, at least sometimes, be the most accurate version available.