
Google Books, Fair Use, and the Public Good

Note — thanks to several readers who pointed out that I had carelessly misspelled Judge Leval’s name in my original posting.  That error has now been corrected.

On Friday the Second Circuit Court of Appeals issued its ruling in the appeal of the Authors Guild lawsuit against Google over the Google book search project.  The decision was a complete vindication of the District Court’s  dismissal of the case, affirming fair use and rejecting all of the counterarguments offered by the Authors Guild.

As it happens, I was traveling when the decision came down, confirming a troubling tendency of the federal courts to issue important copyright opinions when I am out-of-pocket.  (My wife says that it is not about me, but what sense does that make?)  In any case, that slight delay allows me to benefit richly from the analyses posted by some very smart colleagues.  Here are several great places to read about the decision:

From Brandon Butler of American University.

From Corynne McSherry of the Electronic Frontier Foundation.

From Krista Cox of the Association of Research Libraries

From Carrie Russell at the American Library Association

I want to add, or really just pull out from these previous posts, three points that I think are especially important.

First, Judge Pierre Leval, who wrote the opinion, does a nice job of drawing a line from the idea of transformative uses to the public purpose of copyright law.  This is hardly surprising, since it was Judge Leval who wrote the 1990 article that coined the term transformative use and had such an influence on the Supreme Court in its 1994 decision in Campbell v. Acuff-Rose Music.  In this ruling, Judge Leval reminds us quite forcefully that the primary beneficiary intended by copyright law is the public, through “access to knowledge” (p. 13) and “expand[ed] public learning” (p. 15).  Economic benefits for authors are instrumental, not the ultimate goal of the copyright monopoly.  Judge Leval then explains how this analysis of transformation serves those goals, clarifying why fair use is an essential part of copyright’s fundamental purpose.  He tells us that transformation is an answer to the question of how a borrowing from a copyrighted work can be justified.  The court, on behalf of a rights holder, asks a user “why did you do this?”  When the answer to that question is “because I wanted to make a new contribution to knowledge,” that is a transformative purpose.  And, by definition, it is a purpose that benefits the public, which justifies whatever minor loss a rights holder might suffer from the use.  The second step in Judge Leval’s analysis, asking whether the new use is a market substitute for the original, ensures that the loss is not so great as to outweigh the benefit.  Thus we have a coherent analysis that recognizes the public purpose of copyright and still respects its chosen method for accomplishing that purpose.

Another important thing we can learn from Judge Leval’s opinion is the difference between a transformative use and a derivative work.  The Authors Guild (really some individual authors set up as plaintiffs, because the AG has been found to lack standing to sue in this case) argues that allowing the Google Books search function usurps a right held by those authors to license the indexing of their works.  This is ridiculous on its face, of course — imagine the effect such a right would have on libraries — but the judge does a nice job of explaining why it is so wrong.  The decision rests heavily on the idea/expression dichotomy that is fundamental in copyright, and stresses that what is presented in the Google Books “snippet view” is information about books (facts) rather than expressive content from those books.  A derivative work, Judge Leval suggests, is one that represents protected aspects — the expressive content — of the original in an altered form (such as a translation or a movie script).  A transformative use, on the other hand, uses information about the works, as in an index, or uses their content for a different expressive purpose, as in parody or scholarly comment.  This is a difficult distinction to make, as all of us who work in copyright know all too well, and it remains to be seen whether the approach outlined above will hold up or prove useful in the full range of situations.  But it is a pointer toward a coherent way to understand a difficult part of the copyright balance.

As an aside, while reading the opinion in this case I was struck by how well the four fair use factors were handled, in a way that showed that the test used by Judge Leval respected all of the factors while essentially applying two basic questions: is the use transformative, and does the new work create a market substitute for the original?  In fact, I can suggest three specific passages that are especially exciting, I think, for the application of fair use and the issue of transformation — footnote 21 and accompanying text, which helpfully clarifies the relationship of the second fair use factor to the analysis of transformation; the full paragraph on page 33, which considers the use and misuse of the third factor; and the careful distinction of Google snippets from a case involving telephone ringtones that is found on pages 40-41.  These are discussions that I think will have a significant impact on our ongoing consideration of fair use.

Finally, we should note that the Authors Guild has already indicated its intention to ask the Supreme Court to review this decision.  This is a very bad idea, indicating that the AG simply does not know when to cut its losses and stop wasting the money provided by its members.  The real point, however, is that the Supreme Court is not likely to take the case anyway.  This is not a situation where a fundamental Constitutional issue is involved, as it was in the Campbell case (fair use as a protection for free expression), nor one where a fundamental point about our obligations in the international arena was at issue, as it was in the Kirtsaeng case about the application of first sale to works of foreign manufacture.  In short, this is just a case about a greedy plaintiff who wants to be given an even bigger slice of the copyright pie, which the courts have determined repeatedly it does not deserve.  This is not the sort of issue that attracts the very limited attention of the Supreme Court.  In fact, reading the Court of Appeals’ ruling leaves one with a sense that many of the AG’s arguments were rather silly, and there is no reason to believe they would be less silly when presented to the Supreme Court in a petition for certiorari.

There are some who have argued that there is a split among the Circuit Courts of Appeal over transformative use, which is also a situation that can lead to Supreme Court review.  But that split has always been predicated on the idea that other courts, especially the Ninth Circuit, have carried the idea of transformation too far and departed from the ambit of the original doctrine.  The fact that it is Judge Leval, the author of that approach to fair use, who wrote this opinion, effectively undermines that claim.  In short, this decision closes a circle that outlines a capacious and flexible approach to fair use.  For getting us to this point, I suppose we should thank the Authors Guild for the unintentional support they have provided for a balanced copyright law in the digital age.

MOOCs and student learning

Now that the MOOC on Copyright for Educators and Librarians has finished its first run, it seems like a good time to post some reflections on what I learned from the experience.

The first thing I learned is that offering a MOOC takes a lot of work, and it is easier when that work is shared.  In my case, I was working with two wonderful colleagues — Anne Gilliland from the University of North Carolina, Chapel Hill and Lisa Macklin from Emory — who made the effort of putting the course together much more pleasant.  Both are lawyers and librarians with lots of experience teaching the issues we were dealing with, and we are all friends as well, which made the whole process a lot easier.  We also benefited from the terrific support we got from consultants working for Duke’s Center for Instructional Technology, which may be the single most MOOC-savvy group at any university.

That we had a great team was not really a surprise.  I was a bit more surprised, however, and quite pleasantly, by the quality of the student discussion in our MOOC.  I had heard from other instructors about how effective the online discussion forums could be, but was just a bit skeptical.  Then I was able to watch as MOOC participants would pose difficult questions or struggle with the application of copyright law to a particular situation, and repeatedly the other course participants would work through the problem in the forums and arrive at surprisingly good answers.  Peer-to-peer teaching is a reality in MOOCs, and is certainly among the best features of these courses.

One thing we know about MOOCs is that they often have participants with considerable background in the topic; often they have enrolled for a refresher or to see how someone else teaches the topic.  These people are a great asset in the MOOC.  Even if they are not among the participants most likely to complete a course according to whatever formula for completion is in place, they are tremendously important to the success of the course because of the contribution they make to peer learning in the discussion forums.

Acknowledging the contribution of “expert students” also offers a reminder to MOOC instructors to take a more humble approach to the standards we set for completion of our courses.  The open and online nature of these courses means that students enroll with a wide variety of goals in mind.  As I just said, some are experts looking to see how others teach the topic.  Completion of quizzes and such may be unimportant to such participants, even though they are getting valuable career or personal development from the course.

Along these lines, I agree wholeheartedly with this essay by Jeff Pomerantz about apologies for failing to complete a course.  Like Jeff, my colleagues and I got multiple e-mails in which participants explained their “failure” to complete the course.  Like Jeff, we often smiled to ourselves and chalked those messages up to a misunderstanding of what MOOCs are.  And like Jeff, we learned that there are so many reasons for taking a course, so many different goals that participants bring to their involvement, that it is more likely we instructors who need to get a better understanding of MOOCs.

Many of the participants in our specific course were librarians and educators; they were our target audiences, so that makes sense.  These are groups that take assignments and course completion very seriously, which was reflected in our very high completion rate (over 15%).  But it also means that these were folks who wanted to explain to us when they were not going to complete the course according to official standards.  Maybe they did not realize that we were unable to track participation at an individual level due to the technology and the volume of students.  Nevertheless, we needed to treat their desire to explain with respect, and to recognize that many of those who did not earn a certificate of completion probably got what they wanted from the course, and also very likely made important contributions to what other participants learned.

Last week I attended a meeting of Duke’s MOOC instructors, which focused on discussions about how we can use data available about the MOOCs to learn more about the teaching and learning process.  It was a fascinating meeting on several levels, but one thing I got from it was two stories about the kinds of goals that MOOC participants might have.

  • One faculty member who had taught a MOOC explained, incidentally, his own motivation for taking a different online course.  His career as a student had been so focused on his own specialty that he had never gotten a chance to take a basic course in a different field that had always interested him.  “There was so much to learn,” he said, “and so little time.”  A MOOC gave him a chance to fill that long-felt gap, and I will bet that he was a valuable student to have in the course: very highly motivated, like so many MOOC participants, whether or not he finished the assignments that led to completion.
  • One of the administrators of Duke’s online initiative described overhearing two students discussing the fact that each was taking a MOOC, and interrupting the conversation to ask why each had enrolled.  One of the women was a Ph.D. student who explained that there were certain areas of study or skills that she needed to complete her dissertation that were most efficiently gained by taking parts of a MOOC or two.  She registered in order to listen to selected videos that had relevance for her specific research.  She is a perfect example of someone who will not count toward a completion statistic but who is gaining something very valuable through her participation.

The other thing I learned from this meeting about potential research enabled by MOOCs is the myriad ways that these online courses can help improve teaching and learning on our own campus.  Duke has said all along that improving the experience of our own students was an important goal of our involvement with MOOCs.  When I heard this, I usually thought about flipped classrooms.  But that is a very small part of what MOOCs can do for us, I discovered.  I was privileged to listen to a comprehensive discussion about how the data we gather from MOOCs can be used to improve the student experience in our regular classrooms.  Very specific questions were posed about the role of cohorts, the impact of positive and negative feedback, how we can harness the creative ideas students raise during courses, and how to better assess the degree to which individual students have met the unique goals they brought to the course.  All of this has obvious application well beyond the specific MOOC context.

The most important thing I learned from the experience of teaching a MOOC actually has little to do with online courses as such.  It is a renewed respect for the complexity and diversity of the learning process itself, and a sense of awe at being allowed to play a small role in it.



Signing My Rights Away (a guest post by Jennifer Ahern-Dodson)

NOTE — Authorship can be a tricky thing, impacted by contractual agreements and even by shifting media.  In this guest post by Jennifer Ahern-Dodson of Duke’s Thompson Writing Program we get an additional perspective on the issues, one that is unusual but might just become more common over time.  It illustrates nicely, I think, the link between authorship credit, publication agreements and a concern for managing one’s online identity.  A big “thank you” to Jennifer for sharing her story:

Signing My Rights Away

Jennifer Ahern-Dodson

I stared at my name on the computer screen, listed in an index as a co-author for a chapter in a book that I don’t remember writing. How could I be published in a book and not know about it? I had Googled my name on the web (what public digital humanist Jesse Stommel calls the Googlesume), as part of my research developing a personal website through the Domain of One’s Own project, which emphasizes student and faculty control of their own web domains and identities. Who am I online? I started this project to find out.

I was taken aback by some of what I found because it felt so personal—my father’s obituary, a donation I had made to a non-profit, former home addresses. All of that is public information, so I shouldn’t have been surprised, but then about four screens in I found my name listed in the table of contents for a book I’d never heard of. Because the listed co-author and I had collaborated on projects before, including national presentations and a journal publication, I wondered if I had just forgotten something we’d written together.

I emailed her immediately and included a screenshot of the index page. Subject line: “Did we write this?”

She wrote back a few minutes later.

WHAT??!!!  We have a book chapter that we didn’t even know about???!!!!!  How is this possible?  Ahahahahahahahaha!!!!!

It’s a line for our CV! But, wait, what is this publication? Do we even want to list it? Would we list it as a new publication? Is it even our work? How did this happen?

This indeed was a mystery. At the time this was all unfolding, I was participating in a multidisciplinary faculty writing retreat. Once I shared the story with fellow writers, they enthusiastically joined in the brainstorming and generated a wide range of theories including plagiarism, erroneous attribution, a reprint, and an Internet scam (see Figure below). I mapped the possibilities for this curious little chapter called “Service Learning Increases Science Literacy,” listed on page 143 of the book National Service: Opposing Viewpoints (2011)[1].


[Figure: map of possible explanations brainstormed at the writing retreat]

I needed to do more research and so requested the book through Interlibrary Loan and purchased it online as well.

And then there was the story of the editor. Who was she? Did she really exist? Was she a robot editor—just a name added to the front of a book jacket? I started wondering, now that so much of our work is digitized, are robots reading—and culling through—our work more than people? A quick search on Google revealed she was the editor for over 300 books, mostly for young adults. Follow-up searches on LinkedIn and Google+ revealed profiles that seemed authentic.

The book arrives.

About a week later, the book arrived through Interlibrary Loan. While still standing at the library service desk, I quickly flipped to page 143.

[Figure: page 143 of the book, showing the reprinted chapter]

What I discovered was a reprint (with a new title) of an article my co-author and I had published in the Journal of College Science Teaching.[2] It was republished with permission through the journal, conveyed through the Copyright Clearance Center. The table of contents included a range of authors and works, including an excerpt from a speech by George W. Bush.

It all looked legitimate. But how could I be published and not know about it?

In an email conversation with Kevin Smith, my university’s scholarly communication director and copyright specialist, I learned that typically in publication agreements, authors transfer copyright to the organization that publishes the journal. From then on, the organization has nearly total control. It can do what it wants with the article (like republish it or modify it), and for most other uses I might want to make (like including it on my website), I’d have to ask their permission.


I also learned that republication is not uncommon. Although this book is marketed as “new,” it is really just repackaged material from other sources that libraries likely already have. In this case, our article for a college teaching journal was repackaged for an audience of high school teachers as part of an opposing viewpoints series, essentially marketing the same content to a different audience.

In a slightly different repackaging model, MIT Press has started re-publishing scholarly articles from its journals in a thematically curated eBook series called Batches.

These two models made visible for me the ways that copyright, institutional claims, and the Internet fuel change at a pace so rapid it seems almost impossible for authors to keep up.

Where to go from here

Although the ending to this mystery is not as thrilling as I thought it would be (someone plagiarized our work! Someone recorded and transcribed a talk! The book is a scam!), what I uncovered was this whole phenomenon of book republishing. Our chapter was legitimately repackaged in a mass-marketed book with copyright secured, which allowed our work to be shared with a broader audience (which I see as a good thing). Yet the process distanced me from my work in a way I was not expecting. In my naïve, yet I suspect widely held, view of academic authorship, I assumed the contract I had signed was simply a formality, more of a commitment by the journal to publish the article and an agreement by my co-author and me to do so. I only skimmed the contract, distracted perhaps by the satisfaction of getting published and the opportunity to circulate my ideas more broadly.

As I submerged myself into the murky depths of republishing, I started to think about my own responsibility as both a writer and a teacher of undergraduate writers, to educate myself on authors’ rights. Could I negotiate publishing agreements to retain copyright? Or, at the very least, could I secure flexibility to re-use my work? As it turns out, yes. The Scholarly Publishing and Academic Resources Coalition has created an Author Addendum to help authors manage their copyright and negotiate with publishers rather than relinquishing intellectual property.

Although it is not uncommon for publishers to ask authors to sign over their legal rights to their work, at least one publisher—Nature Publishing, which includes the journals Scientific American and Nature—goes even farther. It requires authors to waive not only their legal rights but also their “moral rights.” Under this agreement, work could conceivably be republished without attribution to the original author. There was a story about this a couple of months ago.

In my case, I clearly did not do due diligence as an author when I read and signed the agreement for the science literacy article, and neither the journal nor the book editor or publisher was under any legal obligation to notify me that my work was republished or retitled. I wonder, however, what would happen if we applied the concept of academic hospitality to our publishing relationships. Could a simple email notification when/if our work gets republished be a kind of professional courtesy we can expect? Or, should we as authors get more comfortable with less control over our work and choose to share our ideas more liberally in public domains in addition to academic journals, which have limited readership and at times draconian author agreements? Do institutions have any role to play in educating their faculty and graduate students about signing agreements?

In my quest to create a domain of my own, to “reclaim the web” and be an agent in crafting my own author identity online, I discovered that, in fact, I had given up control of some of my own work. Now, I’m aware of the need to balance going public with my work—both online and in print—with a thoughtful and informed understanding of my rights and responsibilities as an academic author.

[1] Gerdes, Louise, ed. National Service: Opposing Viewpoints. Greenhaven Press, 2011.

[2] Reynolds, J. and Ahern-Dodson, J. “Promoting science literacy through Research Service-Learning, an emerging pedagogy with significant benefits for students, faculty, universities, and communities.” Journal of College Science Teaching 39.6 (2010).


Planning for musical obsolescence

Gustavo Dudamel is one of the most celebrated conductors of his generation.  As Music Director of both the Los Angeles Philharmonic and the Simon Bolivar Orchestra of Venezuela, he has built a solid and enthusiastic following amongst lovers of symphonic music.  He is also, according to his website bio, deeply committed to “access to music for all.”  So it is particularly poignant that a recording by Dudamel should serve as the prime example of a new access problem for music.

When Dudamel and the Los Angeles Philharmonic release a new recording of a live performance of Hector Berlioz’s Symphonie Fantastique, it should be a significant event, another milestone in the interpretation of that great work.  But in this particular case we are entitled to wonder if the recording will really have any impact, or if it will drop into obscurity, almost unnoticed.

Why would such a question arise?  Because the Dudamel/LA Philharmonic recording was released only as a digital file and under licensing terms that make it impossible for libraries to purchase, preserve and make the work available.  When one goes to the LA Philharmonic site about this recording of Symphonie Fantastique and tries to purchase it, one is directed to the iTunes site, and the licensing terms that accompany the “purchase” — it is really just a license — restrict the user to personal uses.  Most librarians believe that this rules out traditional library functions, including lending for personal listening and use in a classroom.  Presumably, it would also prevent a library from reformatting the work for preservation purposes in order to help the recording outlive the inevitable obsolescence of the MP3 or MP4 format.  Remember that the section 108 authorization for preservation copying by libraries has restrictions on digital preservation and also explicitly allows contractual provisions to override that part of the law.

At a recent consultation to discuss this problem, it was interesting to note that several of the lawyers in the room encouraged the librarians to just download the music anyway and ignore the licensing terms, simply treating this piece of music like any other library acquisition.  Their argument was that iTunes and the LA Philharmonic really do not mean to prevent library acquisitions; they are just using a boilerplate license without full awareness of the impact of its terms.  But the librarians were unwilling.  Librarians as a group are very law-abiding and respectful of the rights of others.  And as a practical matter, libraries cannot build a collection by ignoring licensing terms; it would be even more confusing and uncertain than it is to try to comply with the myriad licensing terms we encounter every day!

In the particular case of the Dudamel recording of Berlioz, we know rather more about the situation than is normal, because a couple of intrepid librarians tried valiantly to pursue the issue.  Judy Tsou and John Vallier of the University of Washington tracked the rights back from the LA Philharmonic, through Deutsche Grammophon to Universal Music Group, and engaged UMG in a negotiation for library-friendly licensing.  The response was, as librarians have come to expect, both inconsistent and discouraging.  First, Tsou and Vallier were told that an educational license for the download was impossible, but that UMG could license a CD.  Later, UMG dropped the idea of allowing the library to burn a CD from the MP3 and said an educational license for download was possible, but only for up to 25% of the “album.”  For this 25% there would be a $250 processing fee as well as an unspecified additional charge that would make the total cost “a lot more” than the $250.  Even worse, the license would be limited to 2 years, making preservation impossible.  The e-mail exchange asserts that UMG is “not able” to license more than 25% of the album for educational use, which suggests that part of the problem is that the rights ownership and licensing through to UMG is tangled.  But in any case, this is an impossible proposal.  The cost is absurd for one quarter of an album, and what sense does it make for a library to acquire only part of a performance like this for such a limited time?  Such a proposal fundamentally misunderstands what libraries do and how important they are to our cultural memory.

Reading over the documents and messages in this exchange, it is not at all clear what role Maestro Dudamel and the LA Philharmonic have in this mess.  It is possible that they simply do not know how the recording is being licensed or that it is unavailable for libraries to acquire and preserve.  Or they may think that by releasing the recording in digital format only they are being up-to-date and actually encouraging access to music for everyone.  In either case, they have a responsibility to know more about the situation, because the state of affairs they have allowed impedes access, in direct contradiction to Maestro Dudamel’s express commitment, and it ensures that this recording will not be part of the ongoing canon of interpretation of Berlioz.

As far as access is concerned, the form of its release means that people who cannot afford an MP3 player will not be able to hear this recording.  Many of those people depend on libraries, and that option will be closed to them because libraries cannot acquire the album.  Also, access will become impossible at that inevitable point in time when this format for digital music becomes obsolete.  Maybe UMG and the Philharmonic will pay attention and release the recording on a different format before that happens, but maybe they won’t.  The most reliable source of preservation is libraries, and they will not be there to help with this one.  So access for listeners 20 or 30 years from now is very much in question.

This question of the future should have great consequence for Maestro Dudamel and the orchestra.  Without libraries that can collect their recording, how will it be used in classrooms in order to teach future generations of musicians?  Those who study Berlioz and examine the performance history of the Symphonie Fantastique simply may not know about this performance by Dudamel and the LA Philharmonic.  That performance, regardless of how brilliant it is, may get, at best, a footnote in the history of Berlioz — “In 2013 the Symphonie Fantastique was recorded by the LA Philharmonic under the baton of Gustavo Dudamel; unfortunately, that recording is now lost.”  These licensing terms matter, and without due attention to the consequences that seemingly harmless boilerplate like “personal use only” can produce, a great work of art may be doomed to obscurity.

A MOOC on copyright

It has taken a while to get here, but I am happy to be able to announce that two of my colleagues and I will be offering a four-week MOOC on copyright designed to help teachers and librarians deal with the daily challenges they encounter in managing what they create and using what they need.

The MOOC will be offered on the Coursera platform and will run for the first time starting July 21.  It is available as of today for folks to sign up.

It has been a great pleasure working with Anne Gilliland from the University of North Carolina Chapel Hill and Lisa Macklin from Emory University to create this course.  I hope and believe that the course is much stronger because the three of us worked together than it could possibly have been if any one of us did it alone.

This course will be four weeks in duration and focuses on U.S. copyright law.  While we are well aware of all the MOOC participants from other countries — and welcome folks from all over to join us — we also wanted to keep the course short and as focused as possible.  We hope perhaps to do other courses over time, and a more in-depth attention to international issues and to how copyright works on the global Internet might be a good future topic.  In the meanwhile, this course deals with the U.S. law and the specific situations and issues that arise for librarians and educators at all levels.

We especially hope to attract K-12 teachers, who encounter many of the same issues that arise in higher education, and who often have even fewer resources to appeal to for assistance.  That is one reason for the summertime launch.

Another point about the focus in this course — our goal is to provide participants with a practical framework for analyzing copyright issues that they encounter in their professional work. We use a lot of real life examples — some of them quite complex and amusing — to help participants get used to the systematic analysis of copyright problems.

For many in the academic library community, the winding up of the courses offered by the Center for Intellectual Property at the University of Maryland University College has left a real gap.  This course is intentionally a first step toward addressing that gap.  It is, of course, free, and a statement of accomplishment is available for all participants who complete the course.  We hope this can assist our colleagues in education with some professional development and maybe, depending on local rules, even help satisfy continuing education requirements.

We very much hope that this course will be a service to the library and education community, and that it provides a relatively fun and painless way to go deeper into copyright than the average presentation or short workshop allows.

Copyright roundup 3 — Changes in UK law

In this final installment of the copyright roundup I have been doing this week, I want to note some remarkable developments in the copyright law of the United Kingdom, where a hugely significant revision of the statute received final approval this month and will be given royal assent, the last stage of becoming law, in June.

Readers may recall that the UK undertook a study of how to reform copyright law in ways that would encourage more innovation and economic competitiveness.  The resulting report, called the Hargreaves Report, made a number of recommendations, many of which were focused on creating limitations and exceptions to the exclusive rights in copyright so that the law would work more like it does in the U.S., including the flexibility provided by fair use.  The final results of this legislative process do not include an American-like fair use provision, but they do result in a significant expansion of the fair dealing provisions in U.K. law to better accomplish some of the same things fair use has allowed in the U.S.

Fair dealing is found in a couple of provisions of the British law and allows certain specified activities if those activities are done in a “fair” manner, with specified criteria for fairness.  Until now the categories have been narrow and few, but Parliament has just expanded them dramatically.  A description of this expansion from the Chartered Institute of Library and Information Professionals can be found on the CILIP site.  A number of activities that are probably permitted by fair use in the U.S. are now also encompassed by fair dealing in Britain, including private copying, copying by libraries in order to provide those copies to individual users, and some significant expansion of the ability to make copies for the purpose of education.

On this last point, I wonder if the two British university presses that are suing a U.S. university over educational copying have noticed that the tide is against them even at home.

There is an explanatory memo about these changes written by the U.K. Intellectual Property Office available here.  It is interesting to see how certain goals that have been accomplished by the courts in the U.S. and, importantly, in Canada are now intentionally being supported in this British legislation.  As I say, we are seeing a fairly strong international tide pushing towards expanded user rights in the digital environment, lest legacy industries use copyright to suppress economic development in their anxiety to prevent competition.

Several points about this legislative reform seem especially important to me.

First is the emphasis in several of the new provisions on supporting both research with and preservation of sound recordings and film.  This is one of several places where the U.K. may reasonably be said to have just leapfrogged over the United States, since the provisions about non-profit use and preservation of music and film remain a mess in our law.

Second, the British are now adopting an exception for text and data mining into their law.  This is huge, and reinforces the idea I have expressed before that libraries should be reluctant about agreeing to licensing terms around TDM; the rights are likely already held by users in many cases, so those provisions really would have the effect, despite being promoted as assisting research, of putting constraints (and sometimes added costs) on what scholars can already do.  This is probably true in the U.S., where fair use likely gets us further than vendor licenses would, and it has now been made explicit in the U.K.

Another major improvement in the U.K. over U.S. copyright is the fact, explained in the CILIP post, that

[M]any of these core “permitted acts” in copyright law given to us by parliament will not be able to be overridden by contracts that have been signed.  This is of vital importance, as without this provision, existing and new exceptions in law could subsequently simply be overridden by a contract. Also many contracts are based in the laws of other countries (often the US). This important provision means that libraries and their users no longer need to worry about what the contract allows or disallows but just apply UK copyright exceptions to the electronic publications they have purchased.

This type of approach is desperately needed in the U.S.  If we truly believe that the activities that are supported by core exceptions to the rights under copyright, like education, library services and fair use, are beneficial to society and part of the basic public purpose of copyright, they should remain in place regardless of provisions inserted into private law contracts.  Now that the British have made this acknowledgement, it is time for the U.S. to catch up.

Competitiveness is often an important part of the discussion over copyright law.  Rights holders argue that terms should be lengthened and enforcement improved in order to enhance competition with other nations.  The U.K. began its copyright reform process in order to improve its ability to compete for high-tech business.  And this new revision of the British law puts the U.S. back in a situation where we must continue to strengthen not the rights of legacy industries but the rights of users — which is where innovation will come from — because other parts of the world are moving past us in this area.  How do we do this, in the two key areas I have identified?  In the area of the right to mine text and data for non-profit research purposes, this is something our courts can do, through interpretation of the fair use provision.  We can hope that such an opinion might appear in the near future, although I am not aware of what case might prompt it.  But contract preemption is something that Congress will have to address.  If the U.S. Congress is serious about copyright reform, and really wants copyright to continue to be a tool of economic progress in the U.S., it should put the issue of making user rights exceptions impervious to contract provisions that attempt to limit or eliminate them at the top of the legislative agenda.

Walking the talk

All of the presentations at the SPARC Open Access meeting this week were excellent.  But one was really special: an early-career researcher named Erin McKiernan brought everyone in the room to their feet to applaud her commitment to open access.  We are sometimes told that only established scholars who enjoy the security of tenure can “afford” to embrace more open ways to disseminate their work.  But Dr. McKiernan explained to us both the “why” and the “how” of a deep commitment to OA on the part of a younger scholar who is not willing to embrace traditional, toll-access publishing or to surrender her goals of advancing scholarship and having an academic career.

Erin McKiernan earned her Ph.D. from the University of Arizona and is now working as a scientist and teacher in Latin America.  Her unique experience informs her perspective on why young scholars should embrace open access.  Dr. McKiernan is a researcher in medical science at the National Institute of Public Health in Mexico and teaches (or has taught) at a couple of institutions in Latin America.  For her, the issue is that open access is fundamental to her ability to do her job; she told us that the research library available to her and her colleagues has subscriptions to only 139 journals, far fewer than most U.S. researchers expect to be able to consult.  Twenty-two of that number are only available in print format, because electronic access is too expensive.  This group includes key titles like Nature and Cell.  A number of other titles that U.S. researchers take for granted as core to their work — she mentioned Nature Medicine and PNAS — are entirely unavailable because of cost.  So in an age when digital communications ought to, at the very least, facilitate access to information needed to improve health and treat patients, the cost of these journals is, in Dr. McKiernan’s words, “impeding my colleagues’ ability to save lives.”  She made clear that some of these journals are so expensive that the choice is often between a couple of added subscriptions or the salary of a researcher.

This situation ought to be intolerable, and for Dr. McKiernan it is.  She outlined for us a personal pledge that ought to sound quite familiar.  First, she will not write, edit or review for a closed-access journal. Second, she will blog about her scientific research and post preprints of her articles so that her work is both transparent and accessible.  Finally, she told us that if a colleague chose to publish a paper on which she was a joint author in a closed-access journal, she would remove her name from that work.  This is a comprehensive and passionately-felt commitment to do science in the open and to make it accessible to everyone who could benefit from it — clinicians, patients and the general public as well as other scholars.

Listening to Dr. McKiernan, I was reminded of a former colleague who liked to say that he “would rather do my job than keep my job.”  But, realistically, Dr. McKiernan wants to have a career as a teacher and research scientist.  So she directly addressed the concerns we often hear that this kind of commitment to open access is a threat to promotion and tenure in the world of academia.  We know, of course, that some parts of this assertion are based on false impressions and bad information, such as the claim that open access journals are not peer-reviewed or that such peer-review is necessarily less rigorous than in traditional subscription journals.  This is patently false and really makes little sense — why should good peer-review be tied to a particular business model?  Dr. McKiernan pointed out that peer-review is a problem, but not just for open access journals.  We have all seen the stories about growing retraction rates and gibberish articles.  But these negative perceptions about OA persist, and Dr. McKiernan offered concrete suggestions for early-career researchers who want to work in the open and also get appropriate credit for their work.  Her list of ideas was as follows (with some annotations that I have added):

1. Make a list of open access publication options in your particular field.  Chances are you will be surprised by the range of possibilities.

2.  Discuss access issues with your collaborators up front, before the research is done and the articles written.

3. Write funds for article processing charges for Gold open access journals into all of your grant applications.

4. Document your altmetrics.

5. Blog about your science, and in language that is comprehensible to non-scientists.  Doing this can ultimately increase the impact of your work and can sometimes even lead to press coverage, and to better press coverage.

6. Be active on social media.  This is the way academic reputations are built today, so ignoring the opportunities presented is unwise.

7. If for some reason you do publish a closed-access article, remember that you still have Green open access options available; you can self-archive a copy of your article in a disciplinary or institutional repository.  Dr. McKiernan mentioned that she uses FigShare for her publications.

The most exciting thing about Erin McKiernan’s presentation was that it demolished, for many of us, the perception of open access as a risky choice for younger academics.  After listening to her expression of such a heartfelt commitment — and particularly the pictures of the people for whom she does her work, which put a more human face on the cost of placing subscription barriers on scholarship — I began to realize that, in reality, OA is the only choice.





Please propose to us

Later this year, the first in a new series of Scholarly Communication Institutes will be held here in the Research Triangle and we are looking for proposals from diverse and creative teams of people who are interested in projects that have the potential to reshape scholarly communications.

Last year the Andrew W. Mellon Foundation funded a three-year project to continue the long-running Scholarly Communications Institute, which has previously been held at the University of Virginia.  Starting in November, the new SCI will be hosted by Duke in close collaboration with UNC Chapel Hill, NC State University, North Carolina Central University and the Triangle Research Libraries Network.  This new iteration of the SCI will benefit, we believe, from the extraordinary depth and diversity of resources related to innovation in scholarly communications here in the Triangle, and it will also take on a new format, in which participants will have a major role in setting the agenda each year.

Starting this year — starting right now! — the SCI invites applications from working groups of 3 – 8 people that are organized around a project or problem that concerns scholarly communications.  These working groups can and should be diverse, consisting of scholars, librarians, publishers, technologists and folks from outside academia (journalists? museums? non-profits?).  We hope that proposals will be very creative about connections, and include people who would like to work together even if they have not previously been able to do so.

The SCI Advisory panel will select 3 to 5 of these working group proposals and cover the costs for those teams to travel to the Triangle and spend four days together in Chapel Hill in a setting that is part retreat, part seminar, part development sprint and part un-conference.  We want these groups to work together and to interact.  The groups will, we hope, jump-start their own projects and “cross-pollinate” ideas that will advance and challenge each other’s projects and discussions.

The theme for the 2014 SCI is Scholarship and the Crowd.  It will be held November 9-13 at the Rizzo Center in Chapel Hill, NC.  Proposals are due by March 24.

The goal of the SCI is not to schedule breakthroughs but to create conditions that favor them.  The Working Groups selected will set the agenda and define the deliverables.  The Institute will offer the space, the environment and the network of peers to foster creative thinking, with the hope of both advancing the specific projects and also developing ideas and perspectives that can give those projects a broader potential to influence the landscape of publishing, digital humanities and other topics related to scholarly communications.

If you or someone you know might be interested in developing a proposal for this first Triangle-based SCI, you will find the call for proposals and an FAQ online.


Reflections on the Future of the Research Library

Since September, the Duke University Libraries have been engaging in a set of conversations we are calling a seminar on the future of the research library.  Our University Librarian initiated this discussion with the deliberate intent that, in spite of the large size of our staff, we engage in the core activity of a seminar – a gathering of individuals who come together for intensive study and the active exchange of ideas.  Such a process has intrinsic value, of course, in the continuing professional development and mutual understanding it fosters in the Libraries’ staff.  It also is timed to help us be best prepared to welcome a new Provost in 2014, since Duke’s Provost over the past fifteen years – the only Provost many of us have known at Duke – will be retiring from that role.

Last week our seminar hosted a talk by Professor Ian Baucom, a Professor in Duke’s English department and Director of the John Hope Franklin Humanities Institute.  His talk, and the discussion that followed it, really helped me focus my thoughts about the future role of academic libraries, academic librarians and scholarly communications.  So I want to use this space, in hopes that readers will indulge this end-of-the-year philosophizing, to share some of those thoughts.  These reflections grow out of Ian’s discussion of several constellations of issues that are important to universities today and how those “hot” issues might impact the place of libraries in a research university.

Given his role as the Director of an intentionally interdisciplinary center, it is not surprising that interdisciplinarity was the first constellation of issues Ian discussed.  He pointed out the evolution of the idea of interdisciplinarity over the past few decades, from conversations between disciplines, especially between the Humanities and the Social Sciences, to a more deeply transformative methodological commitment, which has been partly driven by advances in technology and the opportunities they have created.  In this environment, Ian talked about the special tools and skills that librarians could bring to teams pursuing interdisciplinary research.  Those tools could be technological; they could reflect expertise in the management of data; or they could involve helping to describe the product of a research project, make it findable and usable, and preserve it.  The changing role for librarians that this invitation suggests is toward serving as consultants and collaborators in the production of research results.

Ian challenged the Libraries to think about whether our fundamental commitment is to information or to knowledge.  This immediately struck me, as I think it was intended to, as a false dichotomy.  Libraries are not mere storage facilities for information, nor are they, by themselves, producers of knowledge.  Rather, they serve as the bridge that helps students and researchers use information to produce knowledge.  That role, if we will embrace it, implies a much more active and engaged role in the process of knowledge than has traditionally been accorded to (or embraced by) librarians.

Some of the most exciting ideas for me that Ian discussed were around the notion of civic engagement, which is, of course, another important topic on our campuses these days, especially when the Governors of several states (including our own) have challenged the value of higher education.  Ian pointed out that the library is often one of the most public-facing parts of a university, and suggested three ways in which this outward-looking aspect of the research library could help the university enhance its civic role.  The first — he called it the centrifugal aspect of this role — was to help the university find a public language for the expert knowledge that it produces.  As an example of this, I thought of the recent effort here at the Duke Libraries to get copies of articles that will be the subject of press releases or other news stories into our repository so that the public announcements can link to an accessible version of each article.  This is one way we help “translate” that expert knowledge for a wider public.

The second role for libraries in assisting the civic engagement of their parent universities that Ian cited, the centripetal aspect, was to pull the issues that are important to the communities around a university into the campus.  This we can do in a variety of ways: everything from exhibits in our spaces to seminars and events that we sponsor.  The role here is what Ian called “instigator,” being the focal point on campus where civic issues become part of the academic discourse, having an impact on and being impacted by that expert knowledge that is our fundamental goal and creation.

Finally, the third aspect of civic engagement for academic libraries returns us to the idea of collaboration.  In many instances, it is the library that is the point of first contact, or the most logical partner, for collaboration with civic organizations, NGOs, local advocacy groups and public institutions.

Three roles for librarians that move well beyond traditional thinking emerged for me from Ian Baucom’s talk — the librarian as consultant has long been on my mind, and the librarian as collaborator is a natural outgrowth of that.  But librarians as translators and as instigators were new to me, and helped to flesh out a vision of what the research library might aspire to in the age of global, digital scholarly communications.  In my second post on this event I will turn to issues of globalization and, especially, publishing.

An odd announcement

I did not initially pay much attention when publisher John Wiley announced early in September that they would impose download limits on users of their database “effective immediately.”  My first thought was “if they are going to disable the database, I wonder how much the price will decrease.”  Then I smiled to myself, because this was a pure flight of fantasy.  Like other publishers before it, Wiley, out of fear and confusion about the Internet, will reduce the functionality of its database in order to stop “piracy,” but the changes will likely do nothing to actually address the piracy problem and will simply make the product less useful to legitimate customers.  But it is foolish to imagine that, by way of apology for this act, Wiley will reduce the price of the database.  As contracts for the database come up for renewal, in fact, I will bet that the usual 6-9% price increase will be demanded, and maybe even more.

As the discussion of this plan unfolded, I got more interested, mostly because Wiley was doing such a bad job of explaining it to customers.  More about that in a moment.  But first it is worth asking the serious question of whether or not the plan — a hard limit on downloads of 100 articles within a “rolling” 24-hour period — will actually impact researchers.  I suspect that it will, at least at institutions like mine with a significant number of doctoral students.  Students who do intensive research, including those writing doctoral dissertations as well as students or post-docs working in research labs, sometimes choose to “binge” on research, dedicating a day or more to gathering all of the relevant literature on a topic.  Sometimes this material will be downloaded so that it can be reviewed for relevance to the project at hand, and a significant amount of it will be discarded after that review.  Nothing in this practice is a threat to Wiley’s “weaken[ed]” business, nor is it outside of the bounds of the expected use of research databases.  But Wiley has decided, unilaterally, to make such intensive research more difficult.  In my opinion, this is a significant loss of functionality in their product — it becomes less useful for our legitimate users — which is why I wondered about a decrease in the price.
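For readers unfamiliar with the mechanics, a “rolling” limit is stricter than a daily quota: it counts every download made in the preceding 24 hours rather than resetting at midnight, so a researcher who binges one afternoon stays capped well into the next day.  The sketch below is a minimal, purely illustrative example of that idea; it is not Wiley’s actual system, and the class and parameter names are hypothetical.

```python
from collections import deque
import time


class RollingDownloadLimit:
    """Illustrative sketch of a rolling 24-hour download cap (hypothetical, not Wiley's code)."""

    def __init__(self, max_downloads=100, window_seconds=24 * 60 * 60):
        self.max_downloads = max_downloads
        self.window_seconds = window_seconds
        self.recent = deque()  # timestamps of recent downloads, oldest first

    def allow_download(self, now=None):
        """Return True if a download is permitted right now, and record it."""
        now = time.time() if now is None else now
        # Discard downloads that have aged out of the trailing 24-hour window
        while self.recent and now - self.recent[0] >= self.window_seconds:
            self.recent.popleft()
        if len(self.recent) >= self.max_downloads:
            return False  # the rolling cap has been hit; wait for older downloads to expire
        self.recent.append(now)
        return True


# Example: a researcher "binging" on 120 articles in one sitting
limiter = RollingDownloadLimit()
allowed = sum(limiter.allow_download() for _ in range(120))
print(allowed)  # prints 100; the last 20 requests are blocked until earlier ones age out
```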

The text of the announcement was strangely written, in my opinion.  For one thing, I immediately distrust something that begins “As you are aware,” since it usually means that someone is about to assert categorically something that is highly dubious, and they do not wish to have to defend that assertion.  So it is here, where we are told that we are aware of the growing threat to Wiley’s intellectual property by people seeking free access.  I am very much aware that Duke pays a lot for the access that our researchers have to the Wiley databases, so this growing threat is purely notional to me.  As is so common for the legacy content industries, their “solutions” to piracy are often directed at the wrong target.  So it is with this one.  As a commenter on the LibLicense list pointed out, Wiley should be addressing automated downloads done by bots, not the varied and quite human research techniques of its registered users.

Another oddity was the second paragraph of the original announcement, which seems to apologize for taking this action “on our own,” without support from the “industry groups” in which Wiley is, they say, a “key player.”  As a customer, I am not sure why I should care about whether the resource I have purchased is broken in concert with other vendors or just by one “key player.”  But the fact that Wiley thought it needed to add this apology may indicate that it is aware that it is following a practice that has been largely shown throughout the content industry to be ineffective against piracy and alienating to genuine customers.  Perhaps, to look on the bright side, it means that other academic article vendors will not follow Wiley’s lead on this.

Things got even stranger when Wiley issued a “clarification” that finally addressed, after a 10-day delay, a question posed almost as soon as the first announcement was made, which was about exactly who would be affected by the limitation.  That delay, in fact, made me wonder if Wiley had not actually fully decided on the nature of the limitation at the time of the first announcement, and waited until a decision was made, belatedly, to answer the question.  In any case, the answer was that the limitation would only be imposed on “registered users.”  That clarification said users who accessed the database through IP recognition or via a proxy would not be affected, and that these non-registered users made up over 95% of the database usage.  So even as Wiley asserts that this change will make little difference, they also raise the question of why do it at all.  It seems counter-intuitive that registered users would pose the biggest threat of piracy, and no evidence of that is offered.  And I wonder (I really do not know) why some users register while most, apparently, do not.  If Wiley offers registration as an option, they must think it is beneficial.  But by the structure of this new limitation, they create a strong incentive for users not to register.  Then Wiley adds a threat — they will continue to look for other, presumably more potent, ways to prevent “systematic downloads.”  So our most intensive researchers are not out of the woods yet; Wiley may take further action to make the database even less usable.

All of this made me doubt that this change had really been carefully thought out.  And it also reminded me that the best weapon against unilateral decisions that harm scholarship and research is to stop giving away the IP created by our faculty members to vendors who deal with it in costly and irresponsible ways.  One of the most disturbing things about the original announcement is Wiley’s reference to “publishers’ IP.”  Wiley, of course, created almost none of the content they sell; they own that IP only because it has been transferred to them.  If we could put an end to that uneven and unnecessary giveaway, this constant game of paying more for less would have to stop. So I decided to write a message back to Wiley, modeled on their announcement and expressive of the sentiment behind the growing number of open access policies at colleges and universities.  Here is how it will begin:

As you are aware, the products of scholarship, created on our campuses and at our expense, are threatened by a growing number of deliberate attempts to restrict access only to those who pay exorbitant rates.  These actors weaken our ability to support the scholarly enterprise by insisting copyright be transferred to them so that they can lock up this valuable resource for their own profit, without returning any of that profit to the creators.  This takes place every day, in all parts of the world.

University-based scholarship is a key player in the push for economic growth and human progress.  While we strive to remain friendly to all channels for disseminating research, we have to take appropriate actions on our own to insure that our IP assets have the greatest possible impact.  Therefore, effective immediately, we will limit the rights that we are willing to transfer to you in the scholarly products produced on our campuses.