Calling for better policies

Unfortunately, I had to leave the SPARC Digital Repositories conference to catch a plane before the closing keynote address by David Shulenberger, VP for Academic Affairs at NASULGC, but there is a nice write-up of his remarks here in LibraryJournal.com.  Shulenberger outlined seven steps to help academia weather difficult economic times and still “get to the next level” in scholarly communications.  Given the context, it is no surprise that the emphasis was on creating digital repositories.  I would note that his first step is for each university to be sure that a repository is available for its faculty, which is not quite the same thing as saying that every institution must have its own digital archive.  The possibilities for collaboration and sharing are precisely what have burgeoned in the digital environment, and it is important to remember that we can share infrastructure as well as the scholarship that infrastructure is designed to support.

The point I want to emphasize, however, was Shulenberger’s third step, in which he called on faculties and administrations to discuss current intellectual property policies and practices.  This is innocuous enough until Shulenberger delivers the punch line — “emulate Harvard.”  The point, of course, is that faculty should not surrender their copyright without thought and negotiation when publishing their scholarly output.  The time when that was a sensible or practical way of doing business is simply past, now that the digital environment offers so many new ways to disseminate research and scholarship.  The earlier part of Shulenberger’s remarks included a “calling out” of university presses and some scholarly societies for their support of the lawsuit against Georgia State and of legislative attempts to reverse the NIH Public Access policy.  The key to resisting these efforts to hamstring broader access to scholarship is precisely in this point — if faculty do not give away their copyrights, but rather grant only licenses for publication, they retain control and can use their work, and let others use it, without fear of being sued for infringing the copyright in works they or their colleagues produced in the first place.

The other appeal for better policies around copyright and intellectual property is this article in the ARL Research Bulletin by the President of NASULGC, Peter McPherson.  McPherson makes the point that colleges and universities are at the heart of copyright’s purpose to “promote the progress of science and the useful arts.”  To do that, he argues, we need to resist the trend toward ever greater protection that has tilted the balance of the copyright bargain away from its core purpose.  He recommends that higher education work in concert to develop a comprehensive set of policies and to create a structure to advance those policies with lawmakers.  He argues quite correctly that our voice has been fragmented and unfocused, while those who believe, in opposition to the constitutional purpose of copyright, that they are entitled to squeeze every penny from each copyrighted work speak in unison.

McPherson makes an excellent point, with which I want to close, when he notes that no single strategy will address all of the issues and all of the needs that arise around intellectual property within the academy.  It is precisely this need for creativity and flexibility that so urgently requires that we cooperate to develop an appropriate set of strategies, rather than each pursuing our own favorite issue or solution.  McPherson writes:

a “solution” to fully address some of the contemporary challenges we face in the intellectual property arena may be a combination of further legislation, public licenses, market-based allocation, or market-modification allocations.

I think this is exactly right, but would point out that the fundamental point made above, that IP rights cannot simply be ceded away in exchange for traditional publication, is a prerequisite to any and all of these strategies.  Higher education should welcome the leadership of NASULGC, exemplified in these two articles, on this issue.

Keeping up with the world

At the SPARC Digital Repositories meeting earlier this week, I was particularly struck by the remarks about the policy environment for open access scholarship in Europe made by David Prosser of SPARC Europe. Without any apparent intent to be boastful, Prosser began his address by telling us that the policy argument in favor of open access has been won, and he proceeded to back up that assertion pretty effectively.

First, Prosser cited three separate studies of research policy in Europe that all concluded that open access was a necessary component of the ambitious European Community imperative to develop a highly competitive knowledge-driven economy. These reports all seemed to recognize that public access to scientific research is a prerequisite for increasing the pace of scientific and technical innovation.

Next, Prosser reminded us of the major funder mandates for open access. The private Wellcome Trust led the way, but now six of the seven research councils in the UK have followed suit by requiring open access to funded research within six months of publication (not the one-year embargo permitted by the US NIH mandate, which is currently subject to an attempt in Congress at reversal, even though that requirement is much more publisher-friendly). Most recently, the Irish Research Council for Science, Engineering and Technology has adopted a similar mandate for funded research. All these funders of research recognize that open access is not just a nice thing to do for the public who puts up the money, but is a fundamental step toward remaining competitive in today’s digital environment.

The same recognition surely underlies the decisions by nine European universities that have adopted self-archiving policies that ask or require faculty to deposit their published research in an institutional repository. This, too, is an important step toward a new level of scientific competitiveness for the European Community, and a failure to follow suit will be a threat to the US ability to maintain its pride of place in research and scholarship.

After Prosser’s talk, Syun Tutiya from Chiba University spoke about the open access policy environment in Japan. Although their successes are more modest than those detailed by Prosser, Professor Tutiya ended his remarks with a telling challenge to the US and our ability to compete in the global environment. Speaking about the need for international collaboration, Tutiya said that Japan was ready to collaborate, and Europe was ready, but you (the Americans who made up the majority at the meeting) are not ready. Until we take the importance of increasing access to fundamental scientific research more seriously and stop treating it as a political power struggle, we will not be ready to collaborate or, I am afraid, to compete with the rest of the world.

Creative Commons and credit

The November 2008 issue of College and Research Libraries News contains a lucid explanation of, and a convincing argument for using, the Creative Commons licensing system.  The article, “The beauty of ‘Some Rights Reserved’” by Molly Kleinman, is a concise and cogent explanation of the CC system of licensing materials to permit sharing of creative and scholarly works, as well as the reuse of protected material that is so necessary to “promote the progress of science and the useful arts.”  Kleinman describes efforts at the University of Michigan Library to teach faculty about the benefits to authors, teachers and scholars of using Creative Commons licenses, and her ability to explain the licenses so clearly must be a great boon to that effort.

I want to give the link to the “Get Creative” video that Kleinman references as an important part of their teaching of the CC licenses, since the link in the online article’s footnotes did not work for me — http://mirrors.creativecommons.org/getcreative/.  This too is worth a look for anyone who wants to understand how Creative Commons licenses work and wants to be entertained in the process.

But I also want to add a suggestion about one more point that might help convince faculty that a Creative Commons license on their works will serve them well.  In her section on the “benefits of Creative Commons in academic settings,” Kleinman emphasizes the large number of works available under CC licenses and the ease of reuse that those licenses make possible.  I want to add that CC licenses actually serve the fundamental values of academia better than does our copyright law in its current state.

Almost alone amongst the copyright laws of the world, our US law does not enforce any right of attribution.  Most countries recognize some “moral rights” that are often treated differently than the economic rights which are the sole subject of US law.  Attribution — the right of the creator to have his or her name associated with the work — is the most basic of these moral rights.  But that right is simply not protected in the United States except for a small group of visual artists who are entitled to attribution under a provision added to the copyright law in 1990.

Does this absence of an attribution right make any difference?  It certainly can.  A story in the higher education press about six months ago told of a professor who found that his short book, published several years before and since out of print, had been incorporated whole into a larger work from the same publisher that carried the name of a different author.  Because the professor had transferred his copyright to the publisher, and the US has no moral right of attribution, he had no recourse for continuing to get credit for his own scholarship.  For an academic author this is a dreadful fate, since scholarly publication is done more for reputation and standing in the discipline than it is for money (Samuel Johnson’s famous remark notwithstanding).

In a 2004 article on “The Right to Claim Authorship,” Professor Jane Ginsburg of Columbia describes the importance of an attribution right and discusses how other countries have structured that right, for good or ill.  On the need to protect attribution she quotes an unnamed federal judge to this effect:

To deprive a person of credit to which he was justly entitled is to do him a great wrong. Not only does he lose the general benefit of being associated with a successful production; he loses the chance of using that work to sell his abilities.

At the end of the article, Prof. Ginsburg proposes what the contours of a US attribution right might look like.  Her proposal makes a great deal of sense to me, but, and this is my point here, authors who use the Creative Commons licenses do not need a Congressionally recognized right of attribution because a CC license effectively leverages copyright ownership to ensure that the author gets proper credit.  In essence, a CC license, with its attribution condition on reuse, is a private law arrangement to effectuate what our public law has failed to do.
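To make that attribution condition concrete, here is a minimal sketch, in Python, of the kind of credit line a CC license asks a reuser to carry.  The helper function, the placeholder URL, and the default license values are purely illustrative assumptions on my part; nothing below comes from Creative Commons’ own tools.

```python
# Hypothetical helper, for illustration only: build the credit line that the
# attribution condition of a Creative Commons license asks reusers to include.
def cc_attribution(title, author, source_url,
                   license_name="Creative Commons Attribution 3.0",
                   license_url="http://creativecommons.org/licenses/by/3.0/"):
    """Return a conventional credit line for reusing a CC-licensed work."""
    return (f'"{title}" by {author} ({source_url}) is licensed under '
            f'{license_name} ({license_url}).')

# Example with placeholder values; the article and author names are real,
# but the URL is a stand-in, not the actual location of the article.
print(cc_attribution("The beauty of 'Some Rights Reserved'", "Molly Kleinman",
                     "http://example.org/article"))
```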

Because reputation is the foundation of the academic reward system, and giving proper credit to authors and creators is the most basic tenet of academic ethics, the protection of attribution is a fundamental value of scholarship.  And since the Creative Commons license protects attribution, and our copyright law by itself does not, the value of the former for those who live within the academic system and embrace its values is vastly increased.

Looking for the devil in the details

The more I read the Google Books settlement agreement, and the commentary it has spawned, the more I become convinced of two things.  First, this beast of a document will keep many lawyers in business and give many librarians headaches.  Second, it is the things we do not know that will be most troublesome.  The following is an unsystematic list of issues that I have been thinking about regarding the agreement, with no particular order and few definite conclusions.

Advertising — Perhaps it should be obvious, but Google Books is about to take on a very different look, as it becomes populated with advertising.  Until now, Google has not sold advertising for these pages, probably to avoid undermining its fair use argument.  At this point, the only commercial links one gets when doing a search in Google Books are those to sources from which one can buy the books.  The settlement agreement explicitly authorizes advertisements on the Preview Use pages and anticipates ads on the results pages as well.  The agreement provides for the standard 70/30 split for advertising revenues (the Registry that represents publishers and authors gets the larger percentage), so it is now in the interests of the rightsholders to permit and encourage advertising.  This is not shocking, but it does further detract from the “social benefit” justification that Google has used for years and that has made it so appealing to librarians.  Book searches on depression or Alzheimer’s being used to sell the latest fad pharmaceuticals to treat those conditions might cause libraries to rethink the place of even free access to the Google product in their overall mission.

Orphan works — Does this agreement really spell the end of legislative attempts to reduce the risk of digitizing books that are still in copyright protection but for which no rightsholder can be found?  Larry Lessig certainly implied that it does in his initial post reacting to the deal.  Consider that there will be much less incentive to adopt such a proposal if many of the works involved are available for viewing via institutional subscriptions to Google Books or even for individual purchase.  By making allowance for unclaimed funds coming into the Registry that the settlement agreement will create, Google and the publishers clearly expect to make money off of orphan works.  As I suggested earlier, pay-per-use may well replace legislative attempts to refine the balance between rights protection and socially valuable uses, and libraries that want to make obscure works available to a broader public will be the losers.

It is worth noting that the agreement itself makes some allowance for the adoption of orphan works legislation, providing that both Google and the Fully Participating Libraries can take advantage of such legislation if it ever becomes law.  What we do not know is whether or not the Book Rights Registry would become available to users who wanted to use orphan works as part of their diligent search for rightsholders; it would be a tremendous resource but, at least initially, it is structured as a closed and private database.  See Georgia Harper’s interesting post on this issue here.  We also don’t know if the agreement will have such a pervasive effect that Congress will not bother to take up orphan works in the first place; they certainly have not been on fire to do so up to now.

Defining the public domain — I have complained before that Google has used a very narrow definition of the public domain, especially in regard to government publications.  On this score, the agreement seems to move things in a positive direction, at least in regard to the contents of the Google Books product itself.  Google has argued that it had to be careful about using government works because of the possibility that they would contain “inserts” (to use the term now adopted in the settlement agreement) for which there could be a continuing copyright interest.  This agreement would seem to remedy that concern by allowing for uses of such works unless the owner of the rights in the insert objects.  Even then, Google can appeal the objection using the dispute resolution procedure specified.  The restrictions on other public domain works that are still commercially available seem sensible to me.  If a PD work contains an insert to which a copyright interest still adheres (an introduction, for example), then all earlier editions of the PD work that contain that insert are treated as commercially available (and therefore “non-display”).  Editions without such inserts will remain in the public portion of the Google database.  On the other hand, out-of-print editions of a work that is still in copyright and is commercially available in another edition will all be treated as commercially available.

Future publications — One of the trickiest aspects of understanding this document is the definition of “books” that it uses.  Careful reading indicates that that term encompasses only works that are in copyright protection and registered with the Copyright Office as of the settlement date.  That means that this agreement deals only with works already published; it does not seem to tell us anything about how or if Google will deal with books (in the non-technical sense) published in the future.  The obvious conclusion is that publishers will be able to opt in to all or some of the “display uses” (snippets, preview, sales of institutional subscriptions or individual titles).  I wonder if such new publications will be subject to non-display uses (text mining, for example) when and if Google scans those works, or if those too will be opt-in only.  I also wonder what will happen when works published after the settlement go out of print.  Will publishers have to opt them out of display uses at that point, or will the original opt-in still control?  Finally, how often will the database to which institutions can subscribe be updated, and how will the effect of new content on the price for that product be determined?

Commentary that is worth reading about the settlement agreement includes:

Karen Coyle’s “pinball” comments here.

Open Content Alliance’s objections here.

This Washington Post article on Google’s New Monopoly (requires free membership).

PC World’s article on how business considerations have trumped ideals in this negotiation.

Deep impact?

That a settlement between publishers, authors and Google over the latter’s Book Search project was in the works was not exactly a well-kept secret over the past few weeks.  Nevertheless, the announcement of the complex agreement has set many people buzzing, even before its provisions were fully digested.  There is a collection of comments to be found here, on Open Access News, and Siva Vaidhyanathan gives his initial view here.  As I read over the agreement, I am not sure its impact will be as deep, nor as overwhelmingly positive, as many of the commentators have suggested.  There is a nicely nuanced reaction to the agreement here, from Jack Balkin of Yale Law School.

First, it is important to realize that this is a proposed agreement to settle a pending lawsuit. It must be approved by the court and may change in its details during that process. The plaintiff classes in this class action suit are very large, so the process of notification will be complex, and it is likely that class members will object and want to discuss changes in the agreement. This is not the final word.

I also want to note up front that this settlement would not resolve the fair use argument that is at the heart of the lawsuit; the parties have been very clear that they still have a significant disagreement over whether Google’s activities to date infringe copyright or are authorized as fair use. A decision on that issue would have provided libraries with more guidance as we proceed (or not!) with digitization projects, but both sides in the case, I suspect, wanted to avoid getting to that point. The likely result, unfortunately, is that the next time someone considers pushing the envelope on fair use, there will be even more pressure to just pay the costs of licensing up front and not go down the fair use path at all.

Under this agreement, it seems likely that the availability of in-copyright but out-of-print books would improve in Google Book Search. Google would be able to show both the “snippet view” for such works that is already available and a “preview” view that would display up to 20% of a work, although no more than 5 adjacent pages and not the last pages of a work of fiction. For out-of-print works this would be the default availability, with the rightsholders able to opt out. For in-print books, the rightsholders would have to opt in. So while it seems likely that, overall, there will be increased access in the Google Book Search product, some in-print works will also likely disappear, even from the snippet view, as rightsholders elect not to opt in.
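To see how those layered preview limits interact, here is a minimal sketch in Python that restates them as a simple check. It is only an illustration of the rules as I have summarized them above, not the settlement’s actual mechanism, and the cutoff used for a novel’s “last pages” is my own assumption.

```python
# Illustrative restatement of the preview limits described above; the
# "protected_final_pages" cutoff is an assumption, not a settlement term.
def preview_allowed(requested_pages, total_pages, is_fiction,
                    protected_final_pages=5):
    """Return True if a set of 1-based page numbers stays within the limits."""
    pages = sorted(set(requested_pages))

    # Overall cap: no more than 20% of the book may be shown.
    if len(pages) > 0.20 * total_pages:
        return False

    # For fiction, the closing pages are off limits.
    if is_fiction and any(p > total_pages - protected_final_pages for p in pages):
        return False

    # No block of more than 5 adjacent pages.
    run = 1
    for prev, curr in zip(pages, pages[1:]):
        run = run + 1 if curr == prev + 1 else 1
        if run > 5:
            return False
    return True

# A 300-page novel: five adjacent pages pass, seven adjacent pages do not.
print(preview_allowed(range(10, 15), total_pages=300, is_fiction=True))  # True
print(preview_allowed(range(10, 17), total_pages=300, is_fiction=True))  # False
```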

The participating libraries are in an interesting “in-between” position here. They have no voice in the settlement agreement, and it appears that, for some of them, the options for using the digital scans of books that they receive from Google will be reduced. That depends on how their original agreements were worded, and that wording seems to have varied among the partner libraries. Under this proposed settlement, the libraries that provide books for scanning can receive digital files for any title they hold in their collections, even if they did not provide the copy of that title that was actually scanned. But there are strict limits on how those files can be used. They cannot be made available for reading even on campus, much less linked into a catalog. They cannot be used for interlibrary loan, e-reserves or in a course management system. They are essentially preservation copies, although there is a provision to allow research based on “text-mining.”

All libraries, of course, will be able to purchase institutional subscriptions which will give them access to the full text of many in-copyright works, namely those that publishers decide not to opt out of this use (for out-of-print books) and those that publishers opt in (for in-print works). We do not know much about the pricing structure yet, but, given the rather small amount of money changing hands at settlement, I think that the publishers are counting on making significant profit here. It will be especially interesting to see if some of the partner libraries choose to subscribe to this more robust version of the database to get the level of access that is denied to them with the scanned files of their own works.

Consumers will also be able to purchase digital copies of individual titles; the pricing structure could allow prices anywhere from $2 to $30 per title, but that structure will undoubtedly undergo further revision.

Finally, there are provisions for free access to this “fuller-text” version of the Google product, via dedicated terminals. One such terminal would be offered to every public library, although it is not clear if public libraries that still lack broadband access would benefit much from this offer. A free terminal would also be available to “colleges and universities,” with one such terminal for each 10,000 FTE (one per 4,000 for community colleges). I am sure that the exact definition of what is a college or university for this purpose will be a matter of some debate.  It is also interesting that no allowance is made for free access at the K-12 level.

For all three of these approaches to “access uses,” there are pretty strict limits imposed on cutting and pasting, and on printing.

Overall, I believe this agreement would increase access to a lot of books that are currently hard to find or even to know about. But there are significant strings attached to that access; for most people, it will probably come with a hefty price tag, which was not part of Google’s original, Utopian vision for its project.  The strict limits on access, both to the libraries’ own digital copies of books and to the public “access use” versions, seem to be what led Harvard to decide to continue to withhold in-copyright works from the project and remain at its limited level of participation.  Most troubling to me, however, is that this agreement would seem to move us one more big step in the direction of pay-per-use, where every library resource would be licensed and metered.

Just ’cause you’re paranoid…

When I wrote a post about a week and a half ago called “Can Copyright kill the Internet?,” I worried that my title might be considered a little bit extreme.  After all, the Internet is a big, sprawling “network of networks;” surely the puny efforts of legal enforcement cannot really do that much harm.  In some senses this is true, since it is difficult to apply national laws to the persistently international Internet.  On the other hand, as I pointed out in the earlier post, a business wanting to engage in commerce on the Internet has to take account of national laws around the world, and is frequently circumscribed by the most stringent law to be found regarding any particular activity.

But what really convinced me that my earlier post was not exaggerating the threat was this news item from Ars Technica called “‘Net filters “required” for all Australians, no opt-out.”  Incredibly, to my mind at least, Australia is moving ahead with a plan to force Internet Service Providers to impose filters on ALL Internet access in the country to filter out “illegal” content.  The government would maintain two “blacklists” of material that must be blocked.  Australians who requested “unfiltered” access would not have material on the “additional material” blacklist blocked, but there would be no way to get access to Internet sites that the government deemed illegal and so put on its principal list of blocked content.

There are many problems with this plan, but I want to focus on two.  First, filters never work.  It is usually possible to get access to “bad” content in spite of the filter, and filters almost always filter out useful content as well as the bad stuff.  In the case of this plan, the task of updating the blacklist will be monumental, as banned material can switch locations and URLs faster than the content police can keep track.  And even when content is blocked, the blocking itself will serve as a challenge to many sophisticated computer users to find a way around the filter and gain access to the site.  Digital locks are usually broken in a matter of days, and the unfortunate result of filters has always been that law-abiding users find their choices of legitimate content constricted, while those who want to violate the rules find ways to do so.
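As a toy illustration of why such blacklists age so quickly, consider the naive filter sketched in Python below. The URLs are invented for the example; the point is only that an exact-match list knows nothing about content that has moved or been trivially altered.

```python
# Purely hypothetical URLs; a naive exact-match blacklist, the weakest form
# of filtering, showing why the list must be updated constantly.
BLACKLIST = {"badsite.example/page1", "badsite.example/page2"}

def is_blocked(url):
    """Block only addresses the censor has already seen."""
    return url in BLACKLIST

print(is_blocked("badsite.example/page1"))      # True: a known address
print(is_blocked("badsite.example/page1?x=1"))  # False: trivially altered URL
print(is_blocked("mirror.example/page1"))       # False: same content, new host
```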

The other problem, of course, is deciding what constitutes “illegal” material.  Few would dispute the need to reduce the amount of child pornography available on the ‘Net, but there are lots of other categories of sites where there is a legitimate debate.  What is defamatory in some countries, for example, is protected as political speech in the United States.  Will Australian officials be able to keep criticism of government policies (like this) off of Australian computers by declaring it “illegal” because potentially libelous?  What about material that potentially infringes copyright?  Will all such material be blocked?  And how will that determination be made?  Many sites — YouTube is the most obvious example — contain material that is authorized by the rights holder as well as videos that are clearly infringing.  Is YouTube a legal or an illegal site?

Ars Technica has followed up its original post with this one noting that the government in Australia is trying to suppress criticism of its plan.  This strengthens the fear that the filtering plan might be used to silence opposition, even though there ought to be a clear distinction made between what is illegal and what is merely dissent.  The article also notes that the point made above — that filters seem seldom to work very effectively — is being borne out in this instance.

So here is a concrete example of terribly bad policy that really does threaten the existence of the Internet as the revolutionary tool for democratic communication that it ought to be.

What does PRO-IP really do?

President Bush signed the “Prioritizing Resources and Organization for Intellectual Property Act of 2008” — PRO-IP — on October 13, making it Public Law 110-403.  Since then a lot of news reports and blog posts have denounced the law, and I have noticed that a number of them claim negative aspects of the bill based on previous proposed versions.  One article last week linked to a report about the bill that was a year old and announced an aspect (about which I also wrote way back then) that actually was removed from the bill as it was finally passed and signed.  So I spent my weekend reading the actual text of the final, adopted version to see what was and was not still there.  The link above, from Washington Watch, includes both the text of the bill as signed and some analysis of it; here is a news report that also reflects the content of the bill correctly.

First, what is not in PRO-IP?  The two most objectionable features, from my perspective, were both removed before final passage.  To begin with, earlier versions included provisions that would have dramatically increased the statutory damages available in copyright infringement cases.  The obvious purpose of this provision was to make more money for the RIAA when it sues file-sharers, since the structure of the change would have increased the potential penalty for infringing a music CD by 10 or 12 times.  That provision was not included in the final version.  Also dropped was a provision that would have allowed the Justice Department to pursue civil (as opposed to criminal) copyright lawsuits, a provision that, as one commentator put it, would have made federal employees essentially pro bono lawyers for the content industries.  Because the Justice Department itself objected to the provision, it was omitted as well.

So what is left?  Plenty of taxpayer money being spent to help out a few large content industries is the short answer.  The Congressional Budget Office estimates that PRO-IP will cost more than $420 million over four years.

PRO-IP has five sections.  The first, dealing with civil enforcement, lowers the procedural barriers for bringing infringement lawsuits, and it allows for seizure and impounding of allegedly infringing products while the lawsuit is pending.  It also raises the statutory damages available for counterfeiting of trademarks.  The second section “enhances” criminal enforcement measures in a parallel way.  Primarily, it allows for the seizure and ultimate forfeiture of infringing goods and any equipment used to infringe.  The potential effect here is that computer equipment used for widespread and willful infringement could be seized in exactly the same way that cars and boats used for drug crimes are now taken by law enforcement.

With sections III and IV, PRO-IP really starts spending your money; over $55 million a year is explicitly appropriated to increase federal and local enforcement efforts.  At the top, a new executive branch official is created — the Intellectual Property Enforcement Coordinator, or IP Czar, as the position has been called — whose job is not to seek balance in our copyright law, as is arguably the role of the Register of Copyrights, for example, but directly to expand the role of the federal government in protecting these private rights.  The section also creates a new enforcement advisory committee, replacing an earlier group with one whose membership is significantly expanded.  This group is specifically charged with gathering information about the alleged cost of IP infringement that is used by the industry in its lobbying efforts.  Now taxpayers will pay for that research.  Indeed, this federal official is essentially a Cabinet-level lobbyist for Big Content.

PRO-IP also requires the addition of over a dozen FBI agents to full-time IP enforcement; it is not clear if these are new agents or ones who will be reassigned from lower-priority duties.  Twenty-five million dollars are also allocated for grants to local law enforcement to pursue those dangerous file-sharers, and $20 million to hire more investigators for the Department of Justice.  The bill closes with a “sense of Congress” section that heaps great praise on the content industries and repeats much of the propaganda that those industries distribute to support their claim that federal intervention to protect their outdated business models is necessary.  It also informs the Attorney General of the United States that IP enforcement should be “among his highest priorities.”

As is probably clear, I think PRO-IP is still bad legislation.  The provisions that most threatened to have a further chilling effect on higher education have been removed, but the bill still, in my opinion, is a huge gift of money to the major content industries.  The result will be that taxpayers will shoulder even more of the burden of fighting the industries’ desperate battle to prop up a business model that both consumers and the technologies they use have passed by.  Instead of looking for new ways to enhance and market their products, these industries continue to resort to legal enforcement that is bound to fail (see this report from the Electronic Frontier Foundation on the fruitless campaign of the past five years), and they have now convinced Congress to invest much more taxpayer money in that effort.

OA @ Duke — why it matters very much!

As part of our Open Access Day celebration at Duke, we held a keynote and panel event on Tuesday, Oct. 14th featuring Duke faculty and a student talking about why open access is important to them and important to Duke.  About 50 staff and faculty members attended, and what follows is a brief summary of the very exciting talks we heard.

Prof. James Boyle of the Duke Law School and the board of Creative Commons began the afternoon with an entertaining and inspiring talk on why Open Access matters.  He pointed out that the Web, which was designed to share scientific information, now works very well for sharing pornography or bargain offers for shoe shoppers, but really is not very effective at sharing science.  The message of his talk was “It’s the links, stupid” — the ability to build links into scientific work is key to speeding up the progress of science and innovation to the pace promised by this powerful technology.  Linking permits all kinds of new discovery, whether through text mining or “waterhole searching” (following the tracks of others).  But linking depends on information being freed from the access barriers that currently wall off most scholarship on the web.

Boyle offered a vision for open access based on three stages.  At “Open Access 1.0,” scientific research and information will be exposed to many more human eyeballs.  At the stage of Open Access 2.0, computers will have access to a depth of scientific information that will permit text mining for new and serendipitous discovery.  Finally, with Open Access 3.0, computers and humans will work together to create a map of knowledge within a given field and among fields where relationships were previously not discoverable.

Law School Assistant Dean for Library Services Melanie Dunshee followed Boyle with some interesting information about Duke Law’s ten-year-old experiment with open access to legal scholarship.  Her talk gave a nice illustration of the path to open access, which consists in aligning faculty interests with the mission of the university to produce and disseminate knowledge.  The services provided by the Law School Library, and the many new ways that faculty scholarship is exposed and promoted, made the point about how to accomplish that alignment very concretely.

Next up was Dr. Ricardo Pietrobon from the Medical School, where he chairs the group that is doing “Research on Research.”  His presentation really built on Boyle’s call by suggesting that we need to move beyond text mining and data mining (once we get there) to consider what he called “scientific archeology.”  Only at that point, when open access encourages not just access but replicability, accountability and transparency, will the promise of the Internet for scientific learning be fulfilled.

The climax of the afternoon, and what made the need for open access very real to our audience, came in the remarks by Josh Sommer, a Duke student who was diagnosed with a rare form of brain tumor during his freshman year.  Now three years out from surgery, Josh has refused to accept the “average” seven-year life span of chordoma patients that he was given.  Instead, Josh has co-founded the Chordoma Foundation and has himself become actively involved in research to understand and treat this disease.  The account of how the privileged access he has as a Duke student has helped significantly in his research is only part of the story.  He also tells of previously unknown connections between other forms of cancer research and the effort to treat chordoma that have been discovered using open access medical literature.  Finally, Josh talked about his young friend Justin, who died from chordoma earlier this year; a young man who did not have the advantages that have given Josh the ability to fight his grim prognosis (see the link above for more on Justin’s short life).  As Josh puts it, there is no reason that the knowledge that could have saved Justin’s life should be walled off behind access barriers.  Josh Sommer personified for our event the very message he wanted to deliver to those engaged in the effort to achieve more comprehensive open access to knowledge — perseverance.

Can Copyright kill the Internet?

The question seems extreme, and it is certainly rhetorical.  But the potential for copyright challenges to significantly limit the range of activities and services available on the Internet is very real, and severe limits could be imposed on the full potential of digital communications.

One of the great strengths of the Internet — its completely international character — is also one of its greatest weaknesses.  Since laws change across national boundaries, but the Internet goes merrily along, online services can potentially be made subject to the most restrictive provisions found anywhere in the world.

In the US, for example, there is solid case law holding that thumbnail versions of images used in image search engines are fair use.  The cases of Kelly v. Arriba Soft and Perfect 10 v. Google are solid examples of this principle.  But fair use is a feature nearly unique to US law; it does not exist in most other countries.  So when Google’s image searching was challenged in a German court on copyright infringement grounds, Google did not have fair use to rely on for its defense, and it lost the case earlier this week.  The German court held that this valuable tool infringes copyright if the thumbnail images are used without authorization, even if the use is to provide an index that helps users actually find the original.  There are reports about the decision here and here.

How will Google react to this decision?  First, they will almost certainly appeal.  It is possible, ultimately, that they would have to employ some kind of technological measures that would prevent users in Germany from seeing the image search results with thumbnails, a result that would ultimately harm business in Germany more than Google.  It is very unlikely that Google would have to shut down its image search feature, but multiple decisions might force a reexamination of how Google provides services worldwide.  A similar case, involving the sale of Nazi memorabilia in France, led Yahoo to exactly that sort of system-wide change.

The general lesson here is that the current copyright regime throughout the world is in a fundamental conflict with the openness and creativity fostered by the Internet.  Most companies today want to do business on the Internet, but few are willing to embrace the fundamentally open nature of the medium.  The resulting conflict really does threaten to constrict the role the Internet can play in our lives.

The conflict is the subject of an interesting article from The Wall Street Journal by Professor Larry Lessig of Stanford, a short teaser for his forthcoming book “Remix.”  Lessig suggests that the copyright “war” over peer-to-peer file sharing risks significant “collateral damage.”  That damage would come in the chilling effect that frivolous lawsuits and poorly researched DMCA “takedown notices” could have on new forms of creativity and art — the products of the remix culture which, Lessig argues, offers a return to an era when amateur artists could thrive.  This culture offers “extraordinary” potential for economic growth, according to Lessig, if it is not choked off by aggressive enforcement directed at a very different activity.  To prevent that, he offers five changes that could make our copyright law less of a threat to the innovation and creativity the Internet encourages.

Will copyright kill the Internet?  No.  But copyright will need to be revised to account for the new opportunities that the Internet creates, lest we find ourselves unable to exploit those opportunities.

PS — This story about the McCain/Palin campaign fighting back against DMCA takedown notices that are being used to force YouTube to remove campaign videos containing short clips from news programming is another example (if we needed one) of the potential for abuse of the copyright system to chill important speech on the Internet.  Good to see the McCain camp fight back, but I wonder if it is really YouTube’s job to evaluate the merits of the takedown claims.  A court recently told content owners that they must consider fair use BEFORE sending a takedown notice; I wonder if the better course isn’t to pursue some kind of sanctions against those who send clearly unwarranted notices.

Chipping away

Digital rights management, or DRM, is a delicate subject in higher education.  Also called technological protection measures, these systems to control access and prevent copying are sometimes used by academic units to protect our own resources or to fulfill obligations we have undertaken to obtain content for our communities.  Sometimes such use of DRM in higher ed. is actually mandated by law, especially in distance education settings.

But DRM systems also inhibit lots of legitimate academic uses, and they are protected by law much more strictly than copyrights are by themselves.  A section added to the copyright law by the Digital Millennium Copyright Act makes it illegal to circumvent technological protection measures or to “manufacture, import, offer to the public, provide or otherwise traffic in” any technology that is primarily designed to circumvent such measures.  The reason I say this is stronger protection than copyrights get, and the reason these measures can be such a problem for teaching and research, is that our courts have held that one cannot circumvent DRM even for uses that would be permissible under the copyright act, such as fair uses, or performances permitted in a face-to-face teaching setting.

It is frequently the case, for example, that professors want to show a class a set of film clips that have been compiled to avoid wasting time, or wish to convert a portion of a DVD to a digital file to be streamed through a course management system, as is permitted by the TEACH Act amendment.  These uses are almost certainly legal, but the anti-circumvention rules make it likely that the act of getting the files ready for such uses is not.

To avoid the harshest results of the anti-circumvention rules, Congress instructed the Library of Congress to make a set of exceptions every three years using the so-called “rulemaking” procedures for federal agencies.  There have been three rounds of such rulemaking so far, in 2000, 2003 and 2006.  Only in the last round was there any significant exception for higher education, and it was very narrow, allowing only “film and media studies professors” to circumvent DRM in order to create compilations of film clips for use in a live classroom.

Now the Library of Congress has announced the next round of rulemaking, which will culminate in new exceptions in 2009.  Higher ed. has another chance to chip away at the concrete-like strictures that hamper teaching, research and innovation.  We need to be sure that the exception for film clips is continued, and try hard to see it expanded; many other professors, for example, who teach subjects other than film could still benefit from such an exception without posing any significant risk to rights holders.  Ideally, an exception that allows circumvention in higher education institutions whenever the underlying use is authorized could be crafted.

There is a nice article describing the rulemaking process and its frustrations here, from Ars Technica.

One of the things we have learned in the previous processes is the importance of compelling stories.  The narrow exception discussed above was crafted largely in response to the limitations on his teaching described by one film professor who testified during the rulemaking.  The exception seems crafted to solve his particular dilemma.  As another round of exceptions is developed over the coming year, it will be important for the higher ed. community to offer the Library of Congress convincing narratives of the other ways in which DRM inhibits our work and to lobby hard for broader exceptions that will address the full range of problems created by the anti-circumvention rules.
