Events of the last week have delayed me from writing about a conference held at the Duke Law School on April 12, but I do not want to forget to share what was a very exciting and stimulating experience. Scholars from the US and the European Community gathered to discuss “Copyright Limitations and Exceptions: from access to research to transformative use.” If I had any criticism of the conference, it was that too little time was actually dedicated to discussing the legal details of limitations and exceptions to copyright law under the Berne Convention (especially article 9(2)) and the TRIPs agreement. But that flaw, which would bother only a small number of fellow copyright geeks, was more than made up for by the presentations about the exciting new possibilities that copyright limitations and exceptions, if handled properly, could foster for scholarship.
The quote in my title came from Prof. James Boyle of the Duke Law School, explaining how the very links that create value on the Web are still illegal for much of scientific literature, even when the texts are available in digital form. To use an image suggested by another Duke Law professor, Jerome Reichman, the “web” of science today resembles the Rhine river in medieval times: it is so clogged with demands for tolls that progress is impeded. Just as merchants had to stop over and over again to pay each castle owner in order to be allowed to continue sailing the river, today researchers must stop at innumerable “toll gates” to gather the research they need. This is why, as Boyle said, “a picture of an article” is not enough; what scientific research needs is a “semantic web” of linkages that allows research to be structured and shared. Boyle explains this concept, and the legal and economic obstacles to it, in this column from the Financial Times, “The irony of a web without science.”
This concept of a true “web of science” was developed more fully by John Wilbanks of the Science Commons. He demonstrated very compellingly how wasteful the research process has become under the current siloing of research on the web, by showing how one would approach the task of finding research about a particular protein in various databases, including Google and PubMed. Then he showed what a true semantic web approach could produce: a much more targeted and efficient search, even when conducted (as it currently must be) over a relatively small body of content. His conclusion was that keyword searching is less and less useful for research in the life sciences and that the use of “ancient tools” like Google for such research is largely dictated by the access restrictions created by an outmoded system of law (copyright) and an outmoded economic model for publishing. Finding ways to loosen the stranglehold of copyright law over the research web should be a primary goal of all discussions of copyright limitations and exceptions, while the search for new ways to disseminate scholarly research should occupy the attention of every scholar who hopes to take advantage of the tools offered by the 21st century.
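For readers curious about what the difference actually looks like in practice, here is a minimal sketch of my own (not from Wilbanks’s demonstration) contrasting a plain keyword match with a structured, linked-data query. It uses the rdflib Python library and an entirely made-up vocabulary and dataset (`ex:discussesProtein`, `ex:title`, three hypothetical article records); the point is only that an explicit “discusses this protein” link lets the query skip articles that merely mention the word in passing.

```python
# Illustrative sketch only: a tiny, hypothetical linked-data graph of article
# metadata, queried two ways. Requires the rdflib package.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/science/")  # made-up vocabulary for this example

g = Graph()
# Three hypothetical articles; only two actually discuss the protein p53,
# the third merely mentions the keyword in its title.
for key, title, discusses_p53 in [
    ("a1", "Regulation of p53 in tumor suppression", True),
    ("a2", "A p53-binding domain in MDM2", True),
    ("a3", "Methods unrelated to p53 (keyword mention only)", False),
]:
    article = EX[key]
    g.add((article, RDF.type, EX.Article))
    g.add((article, EX.title, Literal(title)))
    if discusses_p53:
        g.add((article, EX.discussesProtein, EX.p53))

# Keyword search: any title containing the string matches,
# including the false positive.
keyword_hits = [t for _, _, t in g.triples((None, EX.title, None)) if "p53" in str(t)]

# Semantic query: follows the explicit "discussesProtein" link,
# so only genuinely relevant articles are returned.
semantic_hits = g.query("""
    PREFIX ex: <http://example.org/science/>
    SELECT ?title WHERE {
        ?a ex:discussesProtein ex:p53 ;
           ex:title ?title .
    }
""")

print("keyword hits:", len(keyword_hits))             # 3
print("semantic hits:", len(list(semantic_hits)))     # 2
```

At the scale of a toy example the difference is trivial, but scaled up to millions of articles it is exactly the gap Wilbanks described; and the structured links the second query depends on are precisely what current access restrictions make so hard to build.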
I’d like to hear more about the limits of keyword searching and how copyright revision might enable the semantic web to improve. Unfortunately, imho, librarians are moving the other way with cool tools like Endeca that don’t always live up to their billing. Thanks for the information.
“access restrictions created by an outmoded system of law (copyright) and an outmoded economic model for publishing.” I concur strongly with this idea and would add that these systems and economic models attempt to replicate a print-based system of indexing and retrieval. This replication, while protecting the rights of publishers, is to the detriment of present-day researchers (who access information in hybrid formats).
I too would be interested in seeing the recommended copyright revisions. It is my feeling that the resources are available for those who really wish to find them.