All posts by Kevin Smith, J.D.

Technological neutrality as a rhetorical strategy

There has been some really good attention paid recently to the question of how our linguistic choices frame the debates about copyright law and, often, prejudge them.  In his new book, William Patry (who will be speaking at Duke Law School on October 22) devotes quite a bit of space to analyzing the language of moral panics and the metaphors employed by the copyright industries to skew an honest debate.  In a June 2009 article called “Why Lakoff Still Matters: Framing the Debate on Copyright Law and Digital Publishing,” Diane Gurman makes a similar plea for those who oppose the ever-expanding reach of copyright to create their own frames to balance the rhetoric of theft and piracy.

Although it is often easy to spot the linguistic excess coming from the copyright industries, a recent letter to the Senate Judiciary Committee from the National Music Publishers Association took a more subtle, and even more dangerous, approach. There is a CNET news story about this letter here.  The theme of the letter, that copyright law should be technologically neutral, seems benign enough, but the work that the music publishing industry tries to get that rhetoric to do is very troubling.  The thrust of this “technological neutrality” appeal is a claim that music publishers should collect a fee for a public performance of a musical composition every time there is a digital download of a piece of music.

To call this grasp at a wholly new income stream “technological neutrality” shows amazing nerve; it is really the opposite of such neutrality.  Music publishers do not collect a public performance fee when a CD is sold because there is no way to prove or assume that a public performance (as opposed to a private one, over which rights holders have no control) will take place.  Why should a digital download be different?

Fred von Lohmann of the Electronic Frontier Foundation, who is quoted in the article, suggests that this is just a turf war between different rights societies over who will collect a fee and, hence, get a “cut.”  He is surely right about that, as he is when he points out that copyright law has never been technologically neutral.  Some exceptions (such as the section 108 library exceptions) apply only to certain technologies or treat different technologies differently.  There is a special rule, after all, for digital audio tape.  But pointing out the triviality of this use of “technological neutrality” may not be enough.  We should notice something really pernicious that is happening behind this smokescreen.

The language of technological neutrality has quite a bit of appeal for copyright policy makers.  The fantasy of a law that adapts automatically to new innovation appeals to a legislative sense of economy.  That attraction is being used, in this letter, to attempt a vast expansion of the scope of the exclusive rights protected by copyright.  And this is not the first time.  We should remember that copyright owners do not get absolute control over their works, only control within the scope of the enumerated rights.

A single line in the CNET story really encapsulates the problem here — “composers, songwriters and publishers are asking for a guarantee that they will get paid for a public performance even if there isn’t a public performance.”  In this letter, the apparently benign call for technological neutrality is being used to disguise an attempt to enlarge beyond all reason the scope of the public performance right.  This is not the first effort to use that right to expand the reach of the copyright monopoly.  As I wrote about here, the debacle regarding the Kindle text-to-voice feature was based on an attempt to expand “public” performance deeply into the private use of new technologies.  For another example, see this report on the unsuccessful attempt recently by music publishers to collect a fee for every ring-tone “performance” of copyrighted music.  So the desire to expand the reach of copyright control is well-established; what changes is the disingenuous rhetoric behind which these efforts are hidden.

I can’t define it, but I know it when I see it

No, the title, a paraphrase of a famous remark by Justice Potter Stewart, does not refer, in this instance, to pornography, but to non-commercial uses of copyrighted works.

One of the persistent criticisms — or perhaps reservations is the correct word — about the Creative Commons licensing scheme has been that one of the major terms used in CC licensing — non-commercial use — is too vague and subject to varying interpretations.  The core purpose of the Creative Commons, of course, is to allow copyright holders to license their works in a way that assures subsequent users that they can make use of the works within defined parameters.  Two of those parameters are attribution, which is protected by CC licenses even though not adequately ensured under U.S. copyright law alone, and, often, a restriction to non-commercial uses.  But if there is no agreement on what it means to call a use non-commercial, then there is a real problem with the licensing scheme; it would fail to provide the assurance, and the reduction in the transaction costs involved in seeking permission, that is its basic purpose.

Now the Creative Commons has released a voluminous report it commissioned to study this potential problem.  Although “Defining Noncommercial” is a massive document that I have not read in its entirety, it is clear from the executive summary and a perusal of the survey data that the situation is not really as serious as some feared.  The report suggests that although a comprehensive definition of noncommercial remains elusive, there is not a major problem with its use in CC licensing.  Basically, most people seem to agree that “they know it when they see it.”

Two specific findings in the report struck me as particularly supportive of the continued use of “non-commercial” as a licensing term.  First, the marketing firm that did the research found that there was broad agreement on what non-commercial meant.  Most creators and users agreed that a use that made money for the user or involved advertising was commercial, while uses that did neither were not.  This broad agreement helps explain why the millions of items licensed under CC licenses have generated so little litigation in the eight years since the organization’s founding.

Even more interesting was the finding that showed that when creators and users disagreed about whether or not a use was commercial, it was the users who were more likely to err on the side of seeing a use as commercial, and thus not covered by an “nc” license.  The reason this is such an encouraging finding is that it suggests that users will ask permission in doubtful cases, even when the creators (who hold the rights) do not think permission is needed in the particular situation.  Thus CC licenses can reduce the transaction costs involved in seeking permission, but they will not eliminate all need for permission and users are likely to ask when they are in doubt.  CC licensing, of course, facilitates asking permission as well, since works so licensed will have an identifiable rights holder.

This finding is consistent with our experience at the Duke University Libraries, where we placed most of our web pages under a CC license over two years ago.  We still do receive some requests for permission, even for pages that carry the CC license.  I try to inquire about why people are asking when the page carries a prior permission that almost always covers the proposed use.  Invariably I am told (mostly by other librarians) that they consider asking both the cautious and the courteous thing to do.  So while we believe that the license empowers many users and reduces transaction costs, we also see that users who are in doubt feel free to contact us for clarification.  This confirms that the non-commercial term is not the problem that some have feared.

Creative Commons has always striven to make its licenses effective and useful, and this study is one more tool for understanding how those licenses are and can be employed.  The CC itself suggests three lessons that we can take away from this study:

the findings suggest some reasons for the ongoing success of Creative Commons NC licenses, rules of thumb for licensors releasing works under NC licenses and licensees using works released under NC licenses, and serve as a reminder to would-be users of the NC licenses to consider carefully the potential societal costs of a decision to restrict commercial use.

Good advice, available for those who want to be sure that, in regard to non-commercial use, we “know it when we see it.”

Falling down before the finish

This article from the Guardian UK about how “Google Books deal forces us to deal with copyright” had me nodding in agreement, right up until its last few paragraphs.  Like author Nick Harkaway, I am cautiously relieved by the intervention from the Department of Justice that has forced a postponement of the hearing on the settlement in the Google Books copyright infringement case.  Harkaway expresses my feelings very succinctly when he writes that “it wasn’t the idea I objected to, but the method.”  As I sometimes put the same sentiment, bad law in the service of a worthwhile end can still create unfortunate consequences.  So I am hopeful that the extra time and renewed negotiations will lead to a more thoughtful implementation of the books project, perhaps less sweeping but also less monopolistic.

Harkaway also has my agreement when he expands his discussion to the problem of orphan works, and suggests that the Google Books deal gives added incentive to find a broader, more generalizable solution for the millions of works still protected by copyright yet for which no rights holder can be found.  Harkaway embraces a familiar solution to this problem when he endorses a return to a renewal system.  Under this plan, rights holders would have to renew their copyright claim periodically in order to prevent the work from dropping into the public domain.  Thus orphan works would become free for use once a renewal period passed without action by the rights holder.  There are other ways to approach the orphan works problem, but it clearly needs to be addressed, and the renewal suggestion would be one very effective approach.

Unfortunately, I stopped agreeing with Harkaway right at the end of his article, when he suggested that data-mining and other new uses for copyrighted works should be sources of new income for rights holders.  This is an old mistake, based on thinking that whenever new technologies enable new uses, a new right is created.  But copyright does not work that way, and there has never been a “use right.”  Copyright holders do not get the right to control every use of their work, and thinking about how such a right might work should tell us why — it raises a huge problem of censorship; imagine, for example, a book author or film producer who could use copyright to prevent negative reviews.  Instead, rights holders get the exclusive right to control copying, distribution, public performance and public display, as well as the creation of derivative works.  This is a lot of control, but these rights do not impinge on using a lawfully obtained copy, at least for private purposes like research.  Every time a new technology comes along, however, some rights holders are seduced into thinking that they should gain from it, even if it does not implicate any of these exclusive rights.

If digital copies of the world’s books are legally created, through a Google settlement or in some other way, use of those copies for data-mining and other research uses will be, and should remain, free for all users.  It may sound plausible when Harkaway complains that Google will be improving its search algorithm using his work and making money from that improvement.  But where does a use right stop?  Should the heirs of John Updike be reimbursed if digital copies of his work are used to create an Updike concordance?  Should an academic who wants to study a certain grammatical construction across a huge range of published literature, a use contemplated by the Google settlement, have to pay the copyright owner of every book in the corpus for that opportunity?  It quickly becomes clear why a separate use right within the copyright bundle would be a very bad idea.  I can follow Harkaway through most of his article, but when he gets to those last three paragraphs, it is clear he has gone astray.

Manufacturing controversy

Some copyright cases just don’t grab one’s attention, and I have to admit that I saw reports of the decision in Omega v. Costco several times before the potential impact on academic libraries began to sink in.  The case involves chapter 6 of the Copyright Act, referred to as the manufacturing clauses.  Since the principal requirement of the chapter, that works be manufactured in the US in order to be eligible for copyright protection, expired in 1986, I pretty much ignored the case the first few times I read about it.  Now I think that was a mistake.

The case is fairly complicated, and there is a nice summary of it here, on the IP Law blog.  The basic ruling, however, from the Ninth Circuit Court of Appeals, was that the doctrine of first sale, the rule that says that one who purchases a lawfully made copy of a copyrighted work may lend, resell, or otherwise dispose of that particular copy, does not apply to works that are manufactured and sold outside the US.  Basically, the court held, on reasonably good authority, that such works do not qualify as “lawfully made under this title (i.e. the Copyright Act),” which is a condition on the application of first sale.

Once I paid attention, it became very clear why this is a cause for concern in libraries.  Academic libraries especially buy lots of foreign materials, often from overseas distributors.  If first sale does not apply to those materials, can libraries lend them at all?  A negative answer could devastate our services in support of all kinds of language programs and area studies.  This possibility is raised in passing in this amicus brief urging the Supreme Court to review the case, filed by the Electronic Frontier Foundation.  Interestingly, however, the major library associations have not taken a position on the petition asking the Supremes to hear the case.  I was given two different reasons for this decision not to act, one which seems sound to me and one which leaves me with some concern.

One reason for not encouraging the Supreme Court to “take cert” (that is, agree to review the lower court’s opinion) is that there is real danger that the Supreme Court would affirm the decision.  That would make a problematic case from the West coast into binding law throughout the country.  Better, perhaps, that this remain an anomalous precedent only impacting libraries in the nine western states that comprise the Ninth Circuit.  Several authorities (Patry on Copyright and a concurring opinion in an earlier Supreme Court case) seem to support the position taken by the appeals court, and asking for cert might be asking for trouble.

More reassuring, but more problematic, is the other reason given for not taking action on this case — the exception for libraries that is built into the manufacturing clauses.  Section 602(a)(3) excludes certain copies purchased by libraries for lending or archival purposes from the general statement in 602 that importation of copies of copyrighted works purchased overseas into the US is an infringement of the distribution right.  That seems to let libraries off the hook.  But it is not entirely clear that this exception, specific as it is to section 602, actually solves the first sale problem created by the Ninth Circuit.  Even if it does, however, I am left with two concerns.

First, the section 602(a)(3) exception explicitly excludes audiovisual works from its scope.  For those works, only a single copy for archival purposes is allowed, and no mention of lending is made.  This suggests that even if print collections of foreign materials purchased overseas are OK, collections of film are not.  That would be a crippling lacuna for academic libraries.

The other problem is that, if the Ninth Circuit ruling stands, it might encourage textbook publishers to move their manufacturing and distributing operations overseas in order to be able to shut down secondary markets and thereby increase their profits.  The exception for libraries would not apply to resale of used textbooks, on which so many students depend to reduce their educational costs.  Closing off those used book markets would not directly harm academic libraries, but it would certainly hurt higher education.  Also, it hardly seems sensible to add to the incentives that are luring American manufacturing overseas.

I am thus left on the horns of a dilemma.  I want to see this decision overturned, but I agree that the review that would be necessary to reverse it would also carry a significant risk of an affirmation, which would be far worse.  It is an uncomfortable place to be, and one in which a good outcome is difficult to imagine.

What problem can open access solve?

A recent conversation on an e-mail list for theological librarians (the branch of academic librarianship in which I began my own professional career) has led me to reflect on exactly what problem it is that open access is designed to solve.

The exchange involved a journal called “Studies in Religion,” which is subscribed to primarily by seminaries and other small religious colleges and universities.  The journal has just announced that it will move from being published by Wilfrid Laurier University Press to Sage Publications, and the cost of an institutional subscription will rise from $64 per year to $300 — the new price is about 470% of the old one.  For freestanding seminaries a “price break” will hold the new price to a mere 350% of the old.

The humanities have been largely insulated from the journal pricing increases that are the origin of the so-called crisis in scholarly communications, but they are fast catching up, unfortunately.  In this case, the motive for moving to a new publisher is probably to have “Studies in Religion” included in a large package of online journals.  The ironic result, of course, is that many schools with no interest in this title will be forced to subscribe to it while those institutions where it is most needed will likely have to cancel.

I have frequently argued that the solution to the continuing copyright battles in higher education is for scholars to stop transferring copyright to publishers and preserve their right to make their work available in open access.  Widespread open access can indeed reduce the need for scholars to ask permission to use their own works and the risk of copyright litigation against colleges and universities.  But it will not, by itself, solve the problem of journal prices.

We need to distinguish between the problem of skyrocketing journal costs and the access problem, of which costs are only part of the cause.

There was a time when publication in a prestigious journal, or even a second-tier one, brought with it an assurance that all the people to whom a scholar’s work would be important would have a chance to see it.  Times have changed dramatically, and that sense of assurance based on publication in a toll-access journal is simply no longer possible.  Cost is certainly part of the problem; an increasing number of a scholar’s colleagues will be working at institutions that have had to cancel access to the journal or database in which her work has been published.  But it is also the case that fewer and fewer researchers begin their work by browsing journals, or even journal databases.  Internet searches are the first recourse for many seeking information about a new topic or trying to stay current on a familiar one.  Articles in toll-access journals may not be found by such searches, or when they are found, the links will not work if the toll has not been paid.  Thus new technologies, and the research strategies they generate, are as much a cause of the access problem as prices are.  And it is the greater “findability” that open access offers that makes it primarily an opportunity for greater access and impact rather than a solution to the pricing crisis.

“Not really a settlement at all”

The hearings last week before the House Judiciary Committee about the proposed settlement in the copyright infringement lawsuit over the Google Books project once again showed the disparate opinions that the proposed settlement has generated.  There is a NY Times report on the hearings here.

One of the most interesting features of the hearing was the statement by Marybeth Peters, the US Register of Copyrights.  This was the first time the Copyright Office had really weighed in on the settlement, and I think many were surprised by the strong opposition Ms. Peters expressed.  I had to nod in agreement when I read her statement that the Copyright Office had come to realize that “the settlement was not really a settlement at all” but was, in fact, a mechanism to create a new and exclusive business model for Google.  A class-action settlement, as Peters points out, usually resolves claims over past acts and provides some remedy going forward.  An example would be a suit brought by consumers over a flaw in a car design; the usual remedy would be a financial penalty and a commitment to repair the flaw.  In the Google case, however, the alleged infringement will be allowed to continue, with the blessing and financial participation of some percentage (but by no means all) of the rights holders whose rights have allegedly been infringed.

Perhaps the widely divergent interpretations of the settlement agreement are due to the fact that it does not so much settle past wrongs as project a new business model into the future.  This invites people to evaluate the predicted consequences and to base their judgments on those predictions, rather than on a clear view of how past actions will be remedied.  A recent blog post entitled “The Google Books Settlement — What Did You Choose?” confirms this sense of an either/or choice to be made — either love it or hate it.  Balancing Register Peters’ negative opinion, in this worldview, is this editorial from The Economist endorsing the settlement.

If you read the two contrasting opinions, it seems as if they are talking about completely different projects.  Is Google creating a universal library where the whole world can access the wisdom of the ages, or is it a massive power and money grab by an overly ambitious company willing to corrupt the US legal system to gain its ends?  The interesting thing about this stark choice, however, is that both opinions may well be true.

It is important to remember that there are limits on the judge’s power in assessing this settlement.  His role is to determine the fairness between the parties before him, not to decide if the settlement is good for society as a whole.  And, of course, there will not be any party to the lawsuit who will oppose the settlement or appeal its approval, since a major effect of the deal is to align the economic interests of plaintiff and defendant.  Only, I suspect, a negative report from the Department of Justice (on the anti-trust issue, which is possible) or a threat of Congressional intervention (which is apparently unlikely) might interfere with approval of the settlement, and then the question arises “what next?”

In her statement to the Judiciary Committee, Peters did go on to acknowledge some positive aspects of the settlement, specifically the creation of the Book Rights Registry, access for people who are blind or print disabled, and the ability of libraries to offer “immediate, unfettered and risk-free” access to millions of copyrighted works.  Those aspects, she said, “should be encouraged under separate circumstances.”  But that, of course, is the $64,000 question.  Under what circumstances, short of a compulsory license, would these advantages be possible?  If a class action suit is not the way to create such a license (and I agree that it is not), how else could it be done?  I find myself wondering if Register Peters was really asking the Congress to consider addressing the orphan works problem in a new way — through a compulsory licensing mechanism rather than a remission of damages.  If we really want the benefits of the Google Books settlement without the monopoly it would create, it would probably take such a legislative revolution to get it done.

UPDATE — shortly after this post was written, it was announced that the Justice Department had filed a brief with the court recommending that the agreement NOT be approved as it stands.  See a story on the filing here.

The joy of statistics

The World Economic Forum recently published its 2009 Global Competitiveness Report, and I was struck by one particular statistic, as well as the conclusion drawn from it by the US Chamber of Commerce.  One of the many statistical tables in the WEF report ranks the perceived strength of national protection for intellectual property.  The United States ranks 19th on this chart, out of 133 countries rated.  As this blog post from IP Watch reports, that ranking prompted the US Chamber of Commerce to call for stronger protection measures in the US.

As someone who believes that IP protection in the US is certainly strong enough and is often over-enforced, I was struck by several flaws in the Chamber of Commerce’s reasoning or, at least, several differences between its conclusions and what I think the report could actually mean.

First, the Chamber of Commerce thinks that 19 out of 133 is a low ranking, a judgment that seems questionable at best.

Second, it is important to note that this chart reports “executive perceptions” about the strength of IP protection in a country.  While that may make sense for a report about international competitiveness, it is too subjective a measure to cause Congress to hasten to strengthen our copyright laws.

Finally, I wonder how the strength of IP protection actually correlates with economic growth.  There is a pretty good correlation between perceived strength of protection and competitiveness in the WEF report, but of course, those books are cooked, since the results of the survey are part of the data on which the conclusions are based.  I decided to try an experiment, which even as a non-economist I recognize to be crude, albeit interesting.  I started with this list of countries ranked by economic growth (the growth rate of the Gross Domestic Product) using data from the CIA World Fact Book.  The US is 67th in GDP growth rate, so I made a list of ten countries from the G-20 group of nations with higher growth rates than that of the US and compared that list to the rankings of perceived strength of IP protection.  All ten of these countries, it turns out, are perceived to have weaker IP protection than the US.  To choose an obvious example, China has the fastest economic growth rate of any of the G-20 economies, but is ranked far below the US — at 61st — on the list of strong IP protectors.
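For anyone curious about the mechanics of this admittedly crude comparison, here is a minimal sketch of the tally as a short Python script.  All of the rank values are placeholders except the figures quoted above (the US at 19th in perceived IP protection and 67th in GDP growth, and China at 61st in IP protection); "Placeholderland" is a hypothetical entry, and the remaining G-20 rows would have to be filled in from the WEF report and the CIA World Fact Book.

```python
# A crude tally: of the G-20 economies growing faster than the US,
# how many are perceived to have weaker IP protection?
# Rank values are placeholders except those quoted in the post.

ip_rank = {                 # WEF perceived strength of IP protection (1 = strongest of 133)
    "United States": 19,    # quoted in the post
    "China": 61,            # quoted in the post
    "Placeholderland": 70,  # hypothetical entry; fill in real G-20 data from the WEF report
}

growth_rank = {             # CIA World Fact Book GDP growth-rate ranking (1 = fastest)
    "United States": 67,    # quoted in the post
    "China": 5,             # placeholder; the post says only "fastest of the G-20"
    "Placeholderland": 30,  # hypothetical entry
}

us_ip = ip_rank["United States"]
us_growth = growth_rank["United States"]

# Countries growing faster than the US (a lower rank number means faster growth)...
faster_growers = [c for c in growth_rank
                  if c != "United States" and growth_rank[c] < us_growth]
# ...that are also perceived to protect IP more weakly (a higher rank number).
weaker_ip = [c for c in faster_growers if ip_rank[c] > us_ip]

print(f"{len(weaker_ip)} of {len(faster_growers)} faster-growing economies "
      f"are perceived to have weaker IP protection than the US")
```

The sketch makes the point that the comparison is nothing more than a join of two rankings, not a sophisticated econometric analysis, which is exactly why its result should be taken as suggestive rather than conclusive.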

It is easy to lie with statistics, of course, but this simple comparison suggests that weaker IP protections might actually correlate with economic growth, or, in any case, that there is a middle point at which IP protection is correctly calibrated to encourage economic growth, and that the US has passed that point.  This search for the correct level of protection, I think, is something the World Intellectual Property Organization is struggling with as it considers its “development agenda” recommendations.  At the very least, nations need to preserve flexibility in their IP laws and recognize that what is best for Big Content is not necessarily good for a nation.

Maybe not so revolutionary after all

When I wrote a few weeks ago suggesting broader latitude for fair use in the case of academic and scholarly works, I contrasted that position to the more “revolutionary” one proposed in the title of Steven Shavell’s recent article “Should Copyright of Academic Works be Abolished?”  Shavell, who is professor of law and economics at Harvard, premises his argument on the same phenomenon that I stressed in my blog post — the lack of incentive provided by copyright for academic authors.  He builds an elaborate economic model to demonstrate that authors would be as happy or happier to continue to create their works and society as a whole would be better off if academic copyright were eliminated, as long as, he suggests, publication costs were subsidized by universities or grantors.  He writes, “if publication fees would be largely defrayed by universities and grantors, as I suggest would be to their advantage, then the elimination of copyright of academic works would be likely to be socially desirable.”  Read in its entirety, however, this position is not as revolutionary as it might seem, and probably is less desirable from the perspective of academic authors than the suggestion I have made about broad fair use.

For one thing, a broad interpretation of fair use would help address one of the problems that Shavell is trying to solve with his proposal — the labor and permission costs associated with providing material for students in colleges and universities.  But more important, Shavell’s proposal that academic copyright be abandoned addresses neither all the legitimate concerns of academic authors nor all of the problems with the publication system as it now exists.

When Shavell speaks of universities defraying the costs of publication, it is important to remember that efforts at open access on campuses are one way in which universities are already doing this.  Shavell is well aware of this, and discusses open access movements at some length.  He ultimately concludes that such movements will be too slow because of what he calls the individual versus social incentive problem.  Each individual lacks sufficient incentive to make the change, even though the result would benefit society as a whole.  The result is that Shavell decides that a change in the law is needed, removing academic works (which he is at pains to define) from the scope of copyright protection.

My biggest concern with this proposal is that it neglects one benefit which academic authors do gain from copyright: the ability to control the dissemination of their work and, especially, the preparation of derivative works.  Of course, that control is of little use as things stand today, because copyright is so freely given away by academics, who must then hope that the commercial publishers to whom they cede their rights exercise those rights in the best interests of the authors.  That is happening less frequently, unfortunately.  One of the reasons the Creative Commons license is such a benefit to academics is that it allows authors both to authorize broad reuse of their work and to assert control over that reuse, especially in regard to attribution, which American copyright law does not protect.  In order to use a CC license, however, one must be a copyright holder; copyright is the “teeth” that enforce the license.  So any analysis of the incentive structure for academic writing must factor in the potential loss of control when considering abolishing copyright in academic works.  This is one reason I have suggested broadening fair use for academic work rather than eliminating copyright altogether.

To me, what this suggests is that the problem with academic publishing is not copyright per se, but the transfer of copyright to corporate entities whose goals and values are usually quite different from those of academic authors.  Because he does not really consider open access a solution to the problem he outlines, Shavell assumes that the publishing structure would remain very much intact under the no-copyright regime he suggests, simply with a different mechanism for paying the bill.  But at least one open access option — a prior license granted to the institution by faculty in their scholarly writings before they submit those works for publication — could restructure publishing in the right direction without losing those benefits that academics really do get from owning copyright.

Shavell does briefly mention such prior licenses, such as those adopted by Harvard and MIT, but does not treat them extensively and does not recognize that some of the difficulties he finds with open access movements would be mitigated by the prior license mechanism.  He cites two major problems that would prevent open access from quickly solving the problem he finds with scholarly publishing — the fact that authors will not insist on open access if they have to pay for it and the alleged fact that open access journals lack prestige.  Neither of these problems exists for the prior license scheme, which, when combined with a broad latitude for fair use of academic writing, offers, at the very least, a significant intermediate step toward resolving the dilemma of scholarly publishing.

It may be that copyright should be eliminated for academic works, but it would hardly be easy to accomplish.  Nevertheless, Shavell’s analysis of the state of academic publishing, and its future, is complex and interesting.  But while we wait for Congress to move in the direction he suggests (if it ever does), the adoption of institutional licenses for open access to faculty writings and a broad latitude for fair use of those writings, both of which could be implemented immediately, are intermediate steps that would return a great deal of control to the authors for whom that is the major incentive.

Fairness breeds complexity?

The title of this post is an axiom I learned in law school, drilled into us by a professor of tax law but made into an interrogative here.  Because the copyright law is often compared to the tax code these days, I have usually just accepted the complexity of the former, as with the latter, as a function of its attempt to be fair.  Because different situations and needs have to be addressed differently in order to be fair, laws that seek fairness inevitably (?) grow complex. But a recent blog post by Canadian copyright law professor Michael Geist, nicely articulating four principles for a copyright law that is built to last, has made me ask myself if simplicity is a plausible goal for a comprehensive copyright law.

Geist’s four principles are hard to argue with.  A copyright law that can last in today’s environment must, he says, be balanced, technologically neutral, simple & clear, and flexible.  That last point, flexibility, is the real key, since designing a law that can be adapted to new uses and new technologies, many of which are literally unforeseeable, requires that the focus be on first principles rather than outcomes.  This is different from the tax code, and it may provide the path to combining fairness with simplicity.

The principle of flexibility explains why fair use is an effective provision of US copyright law.  As frustrated as some of us get trying to navigate the deep and dangerous waters of fair use, it has allowed US law to adapt to new situations and technologies without great stresses.  In fact, Geist’s brief comment on fair dealing in Canadian law suggests (implicitly) that it should be more like US fair use; he argues that the catalog of fair dealing exceptions should be made “illustrative rather than exhaustive,” so courts would be free to build on it as technologies change.

In recent posts I have spoken of adapting fair use so that it gives more leeway to academic works than to other, more commercial intellectual properties.  Even though Geist is explicit in his post that “Flexibility takes a general purpose law and ensures that it works for stakeholders across the spectrum, whether documentary filmmakers, musicians, teachers, researchers, businesses, or consumers,” I do not think there is any contradiction here with asking that academic works be treated differently in the fair use analysis than a recently released movie, for example, might be.  Fair use would be applied in the same way to each, but because fair use appeals to the motivating principles of copyright law, it asks us to examine the circumstances of each type of material and each kind of use and measure them against those principles.  This is precisely how flexibility is accomplished, and I argue that the result of this uniform application of principles will be different outcomes for different types of works.

Geist’s approach to digital locks — DRM systems — is quite similar, asking us to look at first principles that underpin copyright law when deciding how to treat any particular technology.  Specifically, he suggests that forbidding or permitting the circumvention of such digital locks must be tied to the intended use for which the lock is “picked” if copyright balance is to be respected.  An added advantage of this approach is that it is much simpler — another core principle — than the current approach in the US, where categorical rules are enacted and then a series of complex exceptions are articulated every three years.  We will see shortly how that process will play out for the next three years, since the exceptions will be announced in a couple of months, but it is inevitable that the result will be unfair to some stakeholders and probably disappointing to all.  Far better that we heed Geist’s call for an approach based on first principles.  Perhaps Canada, as it considers a comprehensive overhaul of copyright law, can lead the way.

Moving beyond the photo album

Last week G. Sayeed Choudhury, Associate Dean for Library Digital Programs at Johns Hopkins University, came to Duke to talk with the staff of the Libraries about e-scholarship and the changing role of the university library as part of our strategic planning process.  His presentation and conversations were fascinating, and we were left with a great deal of thought-provoking material to consider.  I was particularly struck by one observation, which was actually Choudhury quoting from a 2004 article that appeared in D-Lib Magazine by Herbert Van de Sompel, Sandy Payette, John Erickson, Carl Lagoze and Simeon Warner.  In the article, “Rethinking Scholarly Communications,” the authors assert their belief that “the future scholarly communications system should closely resemble — and be intertwined with — the scholarly endeavor itself, rather than being its after-thought or annex.”  The article further makes the point, perhaps more obvious now than it was five years ago, that “the established scholarly communications system has not kept pace with these revolutionary changes in research practices.”

In developing this point, Choudhury talked about the traditional research article as a “snapshot” of research.  Those snapshots are increasingly far-removed from the actual research process and have less and less relevance to it.  Indeed, the traditional journal article seems more like a nostalgia item every day, reflecting the state of research on a particular topic as it was at some time in the past but beyond which science will have moved long before the formal article is published, thanks, in part, to the many informal ways of circulating research results long before the publication process is completed.

Choudhury called on libraries to move past a vision of themselves as merely a collection of these snapshots and become more active participants in the research process.  He recounted a conversation he had with one researcher who, in focusing on the real need he felt in his own work, told Sayeed that he did not care if the library ever licensed another e-journal again, but he did need their expertise to help preserve and curate his research data.  The challenge for libraries is to radically rethink how we spend our money and allocate the expertise of our staffs in ways that actually address felt needs on our campuses and do not leave us merely pasting more snapshots into a giant photo album that fewer people every day will look at.

Recently I have seen a lot of fuss over an article that appeared in the Times Higher Education supplement that posed the question “Do academic journals pose a threat to the advancement of science?”  The threat that the article focuses on is the concentration of power in a very few corporate hands that control the major scientific journals.  But read in the context of the radical changes that Choudhury, Van de Sompel and others are describing, it is clear that the threat being discussed is not a threat to the advancement of science but to the advancement of scientists.  Scholars and researchers have already found a way around the outmoded system of scholarly communications that is represented by the scientific journal.  The range of informal, digital options for disseminating research results will not merely ensure but improve the advancement of science.  All that is left for the traditional publication system to impede is the promotion and tenure process of the scientists doing that research.

This, of course, is the rub, especially for libraries.  Traditional scientific journals are increasingly irrelevant for the progress of science, but they remain the principal vehicle by which the productivity of scholars is measured.  One researcher told Choudhury very frankly that the only reason he still cared about publishing in journals was for the sake of his annual review.  Sooner or later, one hopes that universities will wake up to the tremendous inefficiency of this system, especially since the peer-reviewing on which such evaluations depend is already done in-house, by scholars paid by universities but volunteering their time to review articles for a publication process with diminishing scholarly relevance.  Nevertheless, the promotion and tenure system still relies, for the time being, on these journals, which presumably cannot survive if libraries begin canceling subscriptions at an even faster rate.  The economy may force such rapid cancellations, but even if it does not, pressure to move to a more active and relevant role in the research process will.  The question librarians must ask themselves is whether supporting an out-dated system of evaluating scholars is a sufficient justification for the millions of dollars they spend on journal subscriptions.  Even more urgently, universities need to ask if there isn’t a better, more efficient, way to evaluate the quality of the scholars and researchers they employ.