The other side of the balance

We are often told that copyright law is supposed to be a balance, offering, on the one hand, the financial incentive to creators that goes with monopoly rights and, on the other hand, sufficient exceptions to those monopoly rights to allow new creators to build on previous work. Without the latter half of this balance, creativity would effectively grind to a halt, and the incentive side would be useless. But most of the time, Congress and the courts seem to be serving the needs of those who want to profit from works already created at the expense of those who are trying to innovate and create new works. So it is especially pleasant to report on a couple of recent court decisions that can be seen as efforts to redress that imbalance and give some support to essential users’ rights.

First, there was the ruling in Jacobsen v. Katzer that essentially upheld the enforceability of an open source software license. Open source licenses are contracts (and that was part of the issue) that grant permission under copyright, telling downstream users that they are free to use the software in ways that would otherwise require permission, as long as they abide by certain conditions. In the Jacobsen case, such a license was challenged on several grounds — that it did not form an enforceable contract, that the terms of the license were not real conditions but merely “covenants” without legal teeth, and that the license was an attempt to enforce so-called “moral rights,” which are largely not recognized in the US. The Federal Circuit Court of Appeals rejected these challenges and sent the case back to the District Court to be decided as a contract and copyright infringement case.

What this essentially means is that an open source license — and this likely includes the Creative Commons licenses often used in higher education as well as the more technical software license directly at issue — forms a contract between copyright holder and user that allows the user to use the work according to the terms of the license and lets the rights holder sue for infringement if those terms are breached. This is how these licenses are supposed to work, and it is nice to see a circuit court affirm that they function properly. This ruling will make it easier for academic authors and other creators to share scholarly work without relinquishing total control.

One interesting part of this argument was the assertion about moral rights. It is quite true that the US protects moral rights, including the right of attribution, only for a small group of visual artists. But that fact does not show why an attribution license is invalid; it shows why such a license better serves the needs of many creators, especially in academia, than copyright law alone does. With an open access license, an author can leverage her ownership of copyright to enforce the right of attribution when the law alone would not do so. And attribution, of course, is usually the most important reward an academic author gets from her work. That is why this recent decision upholding these types of licenses is so important well beyond the sphere of software development.

The other important development was a DMCA case holding that, before sending a “takedown notice” alleging that some particular web posting infringes copyright, the rights holder must consider whether fair use would authorize the particular posting. This decision tracks the wording of the DMCA very closely, noting that the law permits takedown notices only when the posting is not authorized by the rights holder or by law. Fair use, as the court correctly held, is a form of authorization by law (see my previous post here, which noted that this has not been the case in prior DMCA practice). Therefore, a rights holder should not send a takedown notice when a good faith consideration of fair use makes clear that the posting in question is not infringing.

The primary value of this second decision will be to limit the ability of rights holders to use the DMCA system to frighten people and to “chill” legitimate fair uses of commercial works. The particular case involved one of those transformative uses that are so highly favored in the fair use analysis — a 29-second homemade video of a baby dancing to the sounds of a Prince song. It should be obvious that such a video, even when available on YouTube, is not a commercial substitute for purchasing the song itself on CD or as an MP3. So the takedown notice sent to YouTube over this parent-posted video seemed abusive, designed more to intimidate than to protect legitimate commercial interests. Thus the court allowed the parents’ case against the rights holder for misrepresentation under the DMCA to go forward, ruling that consideration of fair use is a prerequisite to the proper use of the DMCA takedown notice. This, too, is a victory for users’ rights and, even more important, for free speech in the digital world.

“Fixing” Fair Use?

Whenever I hear suggestions that fair use should be “fixed,” I am reminded that there are two very different usages of that term. When you get your car fixed, it is returned to the state where it performs as it was meant to do. When you get your dog “fixed,” however, that is not the result. So I approach all suggestions for fixing fair use from the perspective that we do not want to render that important exception to copyright sterile and, thereby, unusable. We may want to fix fair use like you fix a car, but we must be careful not to fix it like you fix a dog.

From this admittedly cynical perspective, I was pleased by what I read in Mark Glaser’s “e-mail roundtable” on the question “Should copyright law change in the digital age?”

Glaser asks two lawyers — Peter Jaszi and Anthony Falzone — and two experts in new media — JD Lasica and Owen Gallagher — how fair use might be changed to better accommodate new uses like remixes that are made possible by digital technology. Interestingly, none of the four suggest actually tinkering with the language of section 107 itself, and both lawyers point out that the vagueness of fair use, while it can be maddening, is actually a strength. Only a flexible and dynamic (to use Jaszi’s words) doctrine can truly be technologically neutral and create the space necessary to experiment with new media and new uses that were unimaginable to the drafters of the law. What makes fair use frustrating and uncertain also makes it adaptable and supportive of creativity. “Fixing” fair use by removing its vague reliance on factors that can be applied in any situation would indeed be like fixing the dog.

Instead, these four experts discuss what might be added to our law to make certain uses that have become prevalent in the digital age less risky. By creating “safe harbors,” for example, that essentially immunize certain acts, at least when done for non-commercial purposes, the fear of using fair use, and the cost of adjudicating it, can be reduced. Lasica goes further and suggests some additional positive rights that could be incorporated into the copyright law, such as the right to make personal back-up copies, to time-shift and to change formats. Both of these suggestions would leave the fundamental structure of fair use, vague and flexible as it is, intact; they would simply take some common digital uses outside of its purview. Fair use would still allow for new technologies and creative uses not yet conceived, but the cost of reliance on fair use would be reduced by specific exceptions for activities that are now well-known and clearly of benefit to consumers. These proposals exemplify the right way to “fix” fair use.

A template for authors’ rights, and a modest proposal

The Association of Research Libraries has just released an article written by Ben Grillot, a librarian and law student working as an intern for ARL, that is advertised as a summary of the policies of twelve publishers toward deposit of NIH-funded research articles into PubMed Central. In fact, Grillot’s article has a value well beyond the modest comparisons announced by its title.

I won’t attempt to summarize Grillot’s analysis or conclusions here; he writes so clearly and concisely that any summary would seem awkward and wordy in comparison. Suffice it to say that Grillot does a superb job of limning the ambiguities that need to be resolved as publishers come to terms with the new NIH public access mandate, as well as the competitive advantage that will be gained by those who resolve those unclear points quickly and fairly. The easier a publisher makes deposit in PubMed Central, the more it will stand out from the crowd. But beyond its comparative analysis, Grillot’s article provides a kind of template that authors should consider whenever they are confronted with the choice of a publisher for their research and with a publication agreement. His lucid explanation of the various provisions in the selected agreements, which themselves are usually far from lucid, offers a model for what questions a scholarly author should ask of the agreements she sees and how she should think about the way those questions are, or are not, answered.

Two quick points struck me as I read Grillot’s article, beyond those conclusions that he reaches. First, I think many authors would be very surprised at just how limited their rights to make their own work available to others are once they sign publication agreements. We are often told that “most” publishers now support open access. But most also impose an embargo on such access, and during that embargo an author is often not able to place her own work on her personal website (about half the journals do not allow this, at least for the author’s final version), and is very unlikely to be able to post the work to a disciplinary website or institutional repository (7 or 8 of the 12 journals examined by Grillot do not allow this). The very limited set of open access rights retained by authors under these standard publication agreements argues forcefully for the approach taken recently by the Harvard Arts and Sciences faculty to grant Harvard a license for use in an institutional repository prior to any transfer of copyright to a publisher.

The second thing that caught my attention is the brief notation, in a footnote to table 2, that Oxford University Press charges authors more for participation in its “author pays” open access program if the author is affiliated with an institution that does not subscribe to Oxford’s journals. Authors’ rights are thus directly and explicitly tied to an institution’s expenditure of money with that publisher. No doubt this linkage between authors’ rights and institutional subscription makes business sense to Oxford, and far from criticizing it, I suggest that institutions emulate it. Whenever we negotiate a new contract for a journal database, whether a new acquisition or a renewal, we should insist that the agreement spell out the rights retained by authors at our institutions who publish with that publisher. For some of us it has seemed inopportune to tie the rights of individual scholarly authors to our enterprise-wide subscriptions, but it is starting to seem more and more logical. The decision by Oxford to link its grant of authors’ rights to the institutional purchase of its products convinces me that it is now time for our library acquisitions departments to start insisting that that linkage become a two-way street.

Updates on NIH Public Access

It seems like a good time to collect some of the interesting news items coming out lately about the NIH Public Access Policy, which has now been mandatory for just over four months. Most of these items come from Peter Suber’s Open Access News blog; to him we direct a sweeping tip of the hat.

First is the important clarification that NIH issued about how author submission occurs. In greatly simplified language, the NIH outlined four methods by which submission can happen — publication in a journal that has an agreement to put all of its contents in PMC, arrangements with the publisher for deposit of a specific article, self-deposit of the article, or completion of the deposit process when the publisher has sent the final peer-reviewed manuscript to PMC. For more details, see the NIH policy home page.

Next came this report in Library Journal that submissions to PubMed Central have more than doubled in the six months since the mandatory policy was passed by Congress.

Then last week Oxford University Press announced that it would begin depositing articles that are funded by NIH for authors. In effect, this means that Oxford authors will be selecting the fourth of the methods NIH has identified, which is much easier for them than the self-deposit on which they had to rely until now.

Finally there is this note from Library Journal Academic Newswire, which both reports on the OUP decision and notes that NIH is confirming that most journals that handle deposit for their authors are selecting a twelve-month embargo on the articles, the longest embargo currently permitted by law.

Taken together, I think these reports indicate two things. First, the Public Access Policy is working, by which I mean that public access to bio-medical research is increasing dramatically without creating any real danger to the publishing industry. The announcement by OUP that it would cooperate in depositing articles indicates that publishers are coming to terms with the requirement and accepting it. Even the news that most publishers elect the 12-month embargo is a sign of growing accommodation; that overly long embargo provides even the most skittish publishers enough security to adapt to the growing open access movement. Shorter embargoes are undoubtedly sufficient to protect publisher revenues, but the move to those shorter delays will have to take place gradually, as more and more publishers realize that, whatever the threats to their traditional business models are, NIH Public Access is not one of them.

Second, I hope that we are seeing an awakening realization on the part of scholarly authors that they have genuine choices as they consider how to disseminate their work. The soaring PMC submission rate, and the decisions by major publishers not to resist it, suggest that making submission easier for authors is rapidly becoming a competitive advantage. As authors realize that they have control over their work for as long as they retain copyright ownership, publishers might have to take on a service role they have never really played before, competing for the best scholarship by helping authors meet the requirements of the funders who underwrite the research.

More on e-textbooks

A few weeks ago I did a post suggesting that universities should look at digital textbooks, both licensed and open access, as a way to help students reduce the cost of higher education. The reauthorization of the Higher Education Act, with several provisions related to monitoring of costs, reminds us that lots of eyes are watching that topic.

A couple of recent news items suggest the richness of the opportunities, both for education and for innovative business models, that online textbooks can offer.

First there is this interview with Eric Frank, a founder of Flat World Knowledge, about that company’s venture into creating textbooks that will be freely available online and also can be purchased through a print-on-demand service and even as an MP3. Frank explains very clearly the imbalances of the current system for publishing textbooks, where high prices drive a thriving used book market that undermines sales and drives prices even higher, and where new editions are created not because of changes in the field of study but in order to renew revenues lost to used book sales or piracy. More importantly, Frank describes in considerable detail the alternative business model that Flat World is pursuing, which combines a more consistent revenue stream with free availability for those who want only online access and considerable flexibility for both the original author and other instructors to change and adapt the books for specific pedagogical needs. Flat World has at least 15 schools on board to experiment with its new model for textbook delivery; it is a beta test that should be carefully watched — whether or not it succeeds, it will provide valuable lessons about how we might harness the educational potential of online publishing and break the stranglehold of outdated business models.

On a more whimsical note there is this brief article from the ABA Journal about a law professor who wants to create an animated “case book” for tort law. Professor James Cooper from California Western is proposing that animated videos of some of the most important cases in tort law be made available on YouTube for students to study. This is obviously not just an impractical whim; Prof. Cooper has produced numerous short videos on legal topics (available here), including a public service announcement on DVD piracy called IP PSA (in Spanish).

My first thought about this was that the famous case in which NY Court of Appeals Judge (later Supreme Court Justice) Benjamin Cardozo decided that tort liability did not exist when the harm caused was unforeseeable would make a great video. That case, Palsgraf v. Long Island Railroad Co., has great dramatic elements — a moving train, an exploding package of fireworks and a huge set of scales yards away falling on an innocent bystander. It is a set of facts that a first-year student is unlikely to forget, if they read the case in the first place. Unfortunately, the pressures of law school and the arcane nature of some of the opinions lead many students not to bother. The animated gallery of cases that Prof. Cooper suggests cannot replace traditional law school methods, but it could provide a helpful supplement. And since federal judicial opinions are almost always available on the open web, it is at least possible that a combination of this YouTube gallery with some sophisticated linking and added commentary could replace a casebook with an alternative both more economical and more likely to get students’ attention.

The “Law Librarian Blog” asks if this idea is innovative or insulting. From my point of view (as a relatively recent law school graduate), it is both innovative and representative of the kind of experimentation that needs to be taking place. Animation may not be the future direction of law school instruction, but all such experiments will help us arrive at a clearer vision of what that future can be, and they help us break the grip of traditional notions that are no longer working.

And advice from up north

When I first heard that the Canadian Association of University Teachers had approved an intellectual property advisory for faculty authors encouraging them to retain copyright in their published academic articles (hat tip to Heather Morrison), I was delighted and planned to post an enthusiastic plug for the short document in this space. I am still excited by the decision of CAUT, but another recent event has provided a sense of context that I think shows how urgent the advice given by this Canadian counterpart of the American Association of University Professors is.

William Patry is a well-known copyright practitioner and scholar; it is hard to imagine a more distinguished resume for someone wanting to comment on copyright law today. His copyright blog, often cited here, has been a valuable source of interesting information and thoughtful reflection for me. So I owe Bill a lot of gratitude for the work he has done over the past four years, and I am deeply saddened by his decision to give up his blog.

Patry gives two reasons for his decision to stop sharing his learning and insight in this format. First, he is finding that it is increasingly difficult to get others to understand that his blog is an expression of personal opinions and not those of his current employer. Second, he says that the state of copyright law has simply made it too depressing to be constantly the bearer of bad news. As he eloquently expresses the current state of things,

Copyright law has abandoned its reason for being: to encourage learning and the creation of new works. Instead, its principal functions now are to preserve existing failed business models, to suppress new business models and technologies, and to obtain, if possible, enormous windfall profits.

This analysis seems discouragingly correct to me, but it also reminds me that, in the small corner of the copyright world that is scholarship, there is something we can do to alleviate this problem. And the Canadian Association of University Teachers has clearly told us what that something is — retain copyright.

In its intellectual property advisory CAUT expresses concisely both the problem:

Without copyright ownership, academic staff can lose control of their own work and may no longer be entitled to email it to students and colleagues, post it on a personal or course web page, place it in an institutional repository, publish it in an open access journal or include it in a subsequent compilation.

and the solution:

Journals require only your permission to publish an article, not a wholesale transfer of the full copyright interest. To promote scholarly communication, autonomy, integrity and academic freedom, and education and research activities more generally, it is important for academic staff to retain copyright in their journal articles.

CAUT offers us a way out of the increasingly suffocating dilemma regarding copyright in which academia finds itself. We must hope that US educational groups and institutions of higher education will follow suit, and that individual faculty will continue to assert their rights as the original copyright holders in their scholarly writings.

In the meanwhile, a heartfelt thank you to Bill Patry for sharing his wisdom with us.

Insights from across the pond

One aspect of the international treaties on copyright to which the US is a party has been getting quite a bit of attention recently. The “three-step test” is a provision in the Berne Convention and in the TRIPS (Trade-Related Aspects of Intellectual Property Rights) Agreement that broadly defines the role of limitations and exceptions in copyright law. It is possible to read the three-step test as providing only a very small window for limitations on and exceptions to the exclusive rights granted by copyright, and “Big Content” has been very active in promoting that interpretation. Recently a legal opinion letter was submitted to the National Institutes of Health arguing that the NIH Public Access Policy, for example, violated the three-step test.

Applying the three-step test to something like the NIH policy is absurd, but the argument is made for its value as a scare tactic. Politicians and bureaucrats are very sensitive these days to international aspects of intellectual property, so the three-step test is a very handy club with which to pound one’s own economic interests into legislative heads. So it is very refreshing to read the new Declaration on “A Balanced Interpretation of the ‘Three-Step Test’ in Copyright Law” by a group of European IP scholars from the Max Planck Institute for Intellectual Property, Competition and Tax Law. The authors of the declaration argue convincingly that the test should be understood as a comprehensive framework for interpreting limitations and exceptions, rather than as a set of three steep hurdles over which any proposed exception must leap. They emphasize that the interests of third parties, as well as domestic decisions about the best way to restrict IP monopolies, are not incompatible with the international three-step test.

Beyond its main point, however, I was struck by a simple distinction made within the declaration that, to me, has implications well beyond the debate over limitations and exceptions. The authors remind us that the implications of any proposed limitation or exception for both “original rightsholders” and “subsequent rightsholders” should be considered. This simple recognition that the interests of “authors,” who are the original holders of copyright, are often not identical to or necessarily compatible with the interests of those to whom those rights are traditionally transferred is profoundly true in the area of scholarly publishing. As I have stressed many times, scholarly authors are usually rewarded almost exclusively by reputation and by reward structures internal to their institutions. Thus their interest is usually in the widest possible distribution of their work. The “subsequent rightsholders” of scholarly work, however, are interested in profits, and their interests may prevent the wide distribution that would best serve scholarship.

To my simple mind, this distinction carries enormous power. Throughout history, all the way back to the 18th-century “battle of the booksellers,” publishers and other distributors have appealed to the image of the poor, starving writer to demand stronger copyright protection. But the interests of the two groups are seldom the same and often conflict. Another recent document on international copyright, the “Green Paper on Copyright in the Knowledge Economy,” issued by the European Commission a couple of weeks ago, reinforces this point. In considering the very strong protections contained in a recent EU Directive on the Harmonization of Copyright, the EC report notes that there have been persistent questions raised about whether these broad exclusive rights actually translate into an advantage for the authors of the works, who are, of course, supposed to be the principal beneficiaries of copyright protection. Performers, composers, film directors and journalists all complain, according to the report, that they are not making any extra revenue (no increased incentive) because of those new, stronger protections; all of the benefit is directed to the big distribution conglomerates that take copyright from creators and exploit it for their own benefit. This is the dilemma of scholarly authors on a larger scale, and we should watch the debates taking place in Europe for insights into why it is such a bad idea for scholarly authors to transfer copyright to publishers who do not have the best interests of either scholars or scholarship at heart.

Irrational publishing and recursive publics

A courtesy “heads up” from Ellen Duranceau, a scholarly communications colleague at MIT, alerted me to this podcast about scholarly communications with Dan Ariely, the author of the fascinating and best-selling book “Predictably Irrational.” This 20-minute interview is well worth the time for both librarians and scholarly authors who are concerned about the current state of scholarly publishing and interested in its future. I am looking forward to listening to the other interviews that MIT makes available.

Ariely was a Professor of Behavioral Economics at MIT, which is why Ellen interviewed him, and he recently moved to a similar position here at Duke, which is why she alerted me to the podcast. Ellen deserves great credit for the insight – “I wish I had thought of that” – that Ariely would be a really interesting person to ask about the state of scholarly publishing. Not only because he has recently made the successful transition from obscure academic author to public intellectual, which he discusses in the interview, but because the theories and experiments that have made his work so well-known themselves suggest important insights into the scholarly communications system.

Much of Ariely’s work focuses on the odd things that happen when economic and social norms collide and intermingle, which is exactly what happens in the system of scholarly publishing. Faculty authors are largely driven by social norms and reward structures that are quite different from, and increasingly at odds with, the economic incentives that drive publishers. The result is a strange and dysfunctional system.

During the interview, Ariely refers to his “back of the envelope” calculation that it costs a university over $50,000 to support the production of a single scholarly article, which indicates how badly askew the economics of publishing are, when universities not only subsidize production to that extent but also repurchase that subsidized content after publication. It is precisely because the academy is governed by an entirely different set of social norms that we have allowed the economic situation to get so far out of hand. But Ariely’s endorsement of a more open and accessible system of scholarly communications is not itself, finally, based on these economic conditions. Rather, he has discovered, through his own experiences with the public attention he has received, the great benefit, both to individual scholars and to society, of open and interactive scholarship. The ultimate take-away from this interview for me was that scholarship itself can be improved by reaching out to larger publics and incorporating those publics into the work of research and writing.

As a sort of “proof of concept” of Ariely’s claim, I was interested in the experiment in a new kind of “hybrid” publishing going on with a recent book by Rice University professor Chris Kelty. “Two Bits: The Cultural Significance of Free Software” is published by Duke University Press (you can buy a copy here), but it is also available online on this author-maintained website, twobits.net. One can read the book online, comment on its various chapters, and “modulate” it – use it in small chunks to create new scholarship. Kelty uses the concepts of remix and recursive publics to experiment with what we really mean when we say that scholarship builds on the works of others. This experiment with modulations will be the most interesting part of Kelty’s new model of scholarship to follow, but in light of what is discussed in the Ariely interview, I think there are two more basic questions to ask about this kind of hybrid model for scholarly publishing. First, will online availability depress sales of the print book, or will people who come to it first online be motivated to buy a hard copy (as I was)? Second, will the experiment in public comment and reuse really result in improvements to the text and to scholarly output that builds creatively upon it? This latter question is a way of asking whether the results that Dan Ariely reports in his interview can really be replicated for scholars who do not attract the same level of celebrity.

Copyright reform — what would “green” copyright look like?

My wife frequently accuses me of finding copyright and other intellectual property issues everywhere, often where no “normal” person would perceive such a question. So I was both surprised and vindicated to see discussions of “green” copyright in a couple of places recently; surprised because even with all my obsessing about copyright, I had never considered how one might make a more eco-friendly copyright law.

The most comprehensive discussion I have read so far about green issues for copyright reform comes from Michael Geist, the Canadian copyright scholar who is leading a powerful grass-roots opposition to the proposed new copyright law in Canada — Bill C-61, introduced in Parliament several months ago. In a column for the Toronto Star, and again on his fascinating blog site, Geist lists several problems with the proposed law that could hamper efforts to improve the environment (or at least slow the harm we are doing to it). Since a major complaint about the Canadian proposal is that it looks too much like US copyright law, it is fair to assume that these “Canadian” issues are US issues as well:

  1. Copyright law can affect our ability to recycle computers and other electronic devices in order to reduce the amount of “techno-waste” that is generated each year. Protections for software in general, and especially prohibitions against circumventing digital protection measures, can prevent new users from gaining access to recycled devices. It is no secret that Apple wants to sell each of us a new iPhone every year or so, but there is a potential environmental impact to legal enforcement of that business policy. Geist refers to a US case where the potential for this kind of ecological harm was very real — Lexmark v. Static Control Components, in which Lexmark tried to use the DMCA anti-circumvention rules to prevent a competitor from making chips that would allow the refilling of laser printer ink cartridges. The courts found that such an application of US copyright law would be anti-competitive, but it is worth noting that a contrary decision might also have been anti-environmental.
  2. Protections that restrict copying of software and storage of copyrighted materials on shared networks can inhibit the efficiencies gained through “cloud computing.” If memory-intensive research — crunching huge data sets for example — can be done by a network of computers rather than at a single site, unused capacity can be exploited to reduce the need for multiple institutions to obtain massive computing capacity that may be used infrequently. Copyright law can have a lot to say about whether such shared projects will be feasible.
  3. A similar issue is raised regarding the possibility of consumer storage of memory-intensive materials in networked systems. In the US there already exist network-based video recording services that reduce the proliferation of digital devices, which increase energy usage and eventually end up in landfills. US courts have not been consistent in their approach to these services, in part because our copyright law does not directly address the status of copies made solely for personal use. The new Canadian proposal would take up that issue and would authorize only a single copy of consumer-purchased songs or videos. With such a law, not only would consumer choices be severely restricted, but the need for many individually owned storage devices would burgeon — good for the consumer electronics industry but bad for the environment.

In addition to these copyright issues that could have significant ecological impact, there are also “green” patent concerns. A recent study has shown tremendous growth in patents issued for inventions, software and business methods that are aimed at environmental processes and problems. Because there is already so much controversy (and litigation) around software and business method patents in general, it is a legitimate worry that the growing number of ecological patents could actually impede the progress of innovation in environmental sciences rather than promote it. Patent law, like copyright, is intended to promote innovation through a carefully controlled grant of monopoly, but recent research has shown the significant danger that patents, and the cost of prosecuting and defending them, may be becoming an obstacle to innovation rather than an incentive; a nice, but dated, explanation of the potential problems can be found here, and this book review of 2008’s “Patent Failure” gives a more up-to-date review of the economic evidence that innovation is being stifled. Research into how to resolve our environmental dilemmas is too important to allow it to be slowed by the inefficiencies of our patent system; that is yet another argument for comprehensive reform of US intellectual property laws.

Where should we spend our money?

The attention paid in the last few weeks to the cost of textbooks and the promise, as well as the risk, of moving to e-texts has prompted me to consider the above question.

Some of the recent reportage has focused on e-textbooks as a way to reduce the costs students must pay for course materials; this article in USA Today is an example of this kind of story. There have also been several comments from open access advocates supporting the move toward open online textbooks; see this post by Georgia Harper and this one from Peter Suber.

There has also been some commentary recently on the abuse of new models of textbook distribution. The Boston Globe ran this article on “Textbooks, free and illegal, online” just a few days ago. It is unfortunate, but hardly surprising, that it is only in this article about “pirated” textbooks that the Association of American Publishers is quoted; publishers could do so much more if they were actively involved in a positive solution that could reduce textbook costs and improve access. But it is the faculty who write the textbooks who are quoted as seeking a legal solution, while the publishers merely resort to heavy-handed enforcement measures for a law that is rapidly becoming unenforceable in a technological environment for which it was never designed. The fuss usually works in individual cases — the Chronicle of Higher Education reports today that the specific site discussed is off-line — but it is ineffective at stemming the digital tide.

But faculty do not come out unscathed in this discussion either, as is clear from this post about the practice of professors commissioning “custom” textbooks and receiving “royalties,” which William McGeveran of the University of Minnesota Law School calls “kickbacks,” from the required purchases by their students.

The lesson here seems to be that the digital environment is inevitably going to change the market for textbooks as it has for most other kinds of intellectual property, for good or for ill. Georgia seems to feel that the publishers will eventually figure the market out and move to new profit models while supporting open access. But I think there is also an opportunity here for institutions to be more proactive and seek ways to invest in open access textbooks on a campus-wide level.

Why should schools consider doing this? First, with all the pressure that institutions of higher education are under to reduce the costs of attendance, open access textbooks offer an avenue for proactive investment that will simultaneously reduce student costs and encourage faculty scholarship. Second, this is a place where universities actually can help combat copyright infringement. Universities have been made the scapegoats in the file-sharing wars, but there is really not a lot they can do to ameliorate that problem, especially since the vast majority of music and movie file-sharing does not occur over college and university networks. But by supporting open access to e-textbooks, we really can reduce the problem of infringement in that realm.

How can universities invest their funds in ways that will encourage open access textbooks and reduce costs (and therefore the incentive to infringe copyright) for students? I can think of three ways offhand.

First, institutions could invest in infrastructure that would encourage new models for electronic course content. This means a great deal more than simply providing the storage space necessary for an institutional repository. Universities also need to support their faculty authors in efforts to retain copyright so that they can deposit their works in an IR and create new and unanticipated derivative works from those publications. The opportunity to combine materials located in an institutional repository in new ways would create a different spin on the custom textbook; it would offer a heretofore unimagined flexibility based on legal rights retained by the authors of the component parts and licensed to institutions or, using a Creative Commons license, to a broader group of users.

Second, universities and consortia could bring their purchasing power to bear to negotiate multi-user licenses for existing e-textbooks or new ones created in the commercial market. The current models all rely on each student individually paying a licensing fee (putatively lower than the purchase cost of a hard copy) to obtain access, for a limited time, to an e-text. Multi-user site licenses could further reduce the price per user and give the university flexibility about whether to assess each student for that lower cost or simply cite the funding to legislators as an investment in reducing student costs.

Finally, universities could make funds available to faculty to encourage the development of open access texts. There has been a great deal of talk recently about funding to support open access via “hybrid” publishing — traditional publications onto which an open access alternative is grafted if the author, or her institution, is willing to pay an added fee. It seems to me that a much wiser investment, and one with a greater return for the dollars spent, could be made by turning those funds to support faculty who want to create online open access textbooks that can be used by students on their own campuses and by others who teach similar courses. Adaptation by others, in that case, would provide an effective “peer review” to measure the quality of the faculty author’s contribution. In this way, student costs could be reduced, faculty scholarship supported, and the real potential of the digital environment for collaborative learning more fully exploited.
