How useful is the EU’s gift to libraries?

On Thursday the European Union’s Court of Justice issued an opinion that allows libraries to digitize books in their holdings and make those digital copies accessible, on site, to patrons.  In a way, this is a remarkable ruling that recognizes the unique place of libraries in the dissemination and democratization of knowledge.  Yet the decision does not really give libraries a tool that promises to be very useful.  It is worth taking a moment, I think, to reflect on what this EU ruling is, what it is not, and how it compares to the current state of things for U.S. libraries.

There are news stories about the EU ruling here, here and here.

What the EU Court of Justice said is that, based on the EU “Copyright Directive,” libraries have permission to make digital copies of works in their collections and make those copies available to patrons on dedicated terminals in the library.  The Court is interpreting language already found in the Directive, and adding two points: first, that library digitization is implied by the authorization for digital copies on dedicated terminals contained in the Directive, and, second, that this is permissible even if the publisher is offering a license agreement.  Finally, the Court makes clear that this ruling does not permit further distribution of copies of the digitized work, either by printing or by downloading to a patron’s storage device.

As far as recognizing what this decision is not, it is very important to realize that it is not the law in the United States.  It is easy sometimes, when the media gets ahold of a copyright-related story, to forget that different jurisdictions have different rules.  The welter of copyright information, guidelines, suggestions and downright misinformation can make the whole area so complex that simple principles can be forgotten.  So let’s remind ourselves that this interesting move comes from the European Union Court of Justice and is the law only for the EU member states.

The other thing this ruling is not is broad permission for mass digitization.  The authorization is restricted to copies that are made available to library patrons on dedicated terminals in the library.  It does not permit widespread distribution over the Internet, just “reading stations” in the library.  That restriction makes it unlikely, in my opinion, that many European libraries would invest in the costs of mass digitization just for such a relatively small benefit.

So how does this ruling in the EU compare to the rights and needs of libraries in the U.S.?

Let’s consider section 108(c) of the U.S. copyright law, which permits copying of published works for preservation purposes.  That provision seems to get us only a little way toward what the EU Court has allowed.  Under 108(c), a U.S. library could digitize a book if three conditions were met.  First, the digitization must be for the purpose of preserving a book from the collection that is damaged, deteriorating, or permanently missing.  Second, an unused replacement for the book must not be available at a fair price.  Third, the digital copy may not be made available to the public outside of the library’s premises.  This last condition is similar, obviously, to the EU’s dedicated terminal authorization; a patron can read the digital copy only while present in the library.
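Purely as an illustration of how that three-part test stacks up, here is a minimal sketch in Python (my own hypothetical framing, not anything drawn from the statute or the court) of a library walking through the 108(c) conditions:

```python
# Hypothetical sketch of the three section 108(c) conditions described
# above. Illustration only; real copyright decisions need legal review.

def may_make_108c_copy(item_damaged_or_missing: bool,
                       unused_replacement_at_fair_price: bool,
                       copy_usable_only_on_premises: bool) -> bool:
    """Return True only if all three conditions are satisfied."""
    # 1. The copy must preserve a damaged, deteriorating,
    #    or permanently missing item from the collection.
    if not item_damaged_or_missing:
        return False
    # 2. An unused replacement must NOT be obtainable at a fair price.
    if unused_replacement_at_fair_price:
        return False
    # 3. The digital copy may not be made available to the public
    #    outside the library's premises.
    if not copy_usable_only_on_premises:
        return False
    return True

# A deteriorating book, no replacement on the market, and access
# restricted to in-library use: digitization is permitted.
print(may_make_108c_copy(True, False, True))  # True
```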

Two differences between the EU ruling and section 108(c) are especially interesting:

  1. The works for which this type of copying is allowed in the U.S. are much more limited.  The EU says that libraries can digitize any book in their collection, even if it is not damaged or deteriorating, and even if another copy, even an electronic one, could be purchased.  This seems like the major place where the EU Court has expanded the scope for library digitization.
  2. On the other hand, the use of a digital copy may be less restricted in the U.S.  Instead of a dedicated terminal, a U.S. library could, presumably, make the copy available on a restricted network, so that more than one patron could use it at a time, as long as all of them were only able to access the digital copy while on the library premises.

In the U.S., of course, libraries also can rely on fair use.  Does fair use get us closer to being able to do in the U.S. what is allowed to European libraries?  Maybe a little closer.  Fair use might get us past the restriction in 108(c) about only digitizing damaged books; we could conceivably digitize a book that did not meet the preservation standard if we had a permissible purpose.  And the restriction of that digitized book to in-library use only would help with the fourth fair use factor, impact on the market.  But still we would have issues about the purpose of the copying and the nature of the original work.  Would general reading be a purpose that supports fair use?  I am not sure.  And what books could we (or could we not) digitize?  The specific book at issue in the case before the EU Court was a history textbook.  But textbooks might be especially hard for a U.S. library to justify digitizing for even limited access under fair use.

If we wanted to claim fair use for digitizing a work for limited, on site access, my first priority would be to ask why — what is the purpose that supports digitization?  Is a digital version superior for some articulable reason to the print copy we own (remembering that if the problem is condition, we should look to 108)?  One obvious purpose would be for use with adaptive software by disabled patrons.  Also, I would look at the type of original; as I said, I think a textbook, such as was at issue in the EU case, would be harder to justify under U.S. fair use, although some purposes, such as access for the disabled, might do it.  Finally, I would look at the market effect.  Is a version that would meet the need available?  Although the EU Court said that European libraries did not need to ask this question, I think in the U.S. we still must.
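Fair use is a holistic balancing of four statutory factors, not an algorithm, but the order of questions I just described can be sketched as a simple triage (a hypothetical illustration only; the categories and outcomes are mine, not anything from the case law):

```python
# Hypothetical triage for a proposed digitization under fair use,
# mirroring the order of questions in the text above. Illustration
# only; a real analysis weighs all four factors together.

def digitization_triage(purpose, condition_problem,
                        work_type, suitable_version_on_market):
    """Return a rough assessment string for a proposed digitization."""
    if condition_problem:
        return "Condition problem: look to section 108, not fair use."
    if not purpose:
        return "Stop: no articulable purpose supporting digitization."
    if work_type == "textbook" and purpose != "disability access":
        return "Weak case: textbooks are hard to justify."
    if suitable_version_on_market:
        return "Weak case: an available version hurts the market factor."
    return "Plausible case: proceed to a full four-factor analysis."

print(digitization_triage("disability access", False, "textbook", False))
# Plausible case: proceed to a full four-factor analysis.
```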

Ultimately, the EU Court gave European libraries a limited but useful option here.  Unfortunately, in the U.S. we have only pieces of that option available to us, under different parts of the U.S. law.  It will be interesting to see whether, in this age of copyright harmonization, U.S. officials begin to reconsider this particular slice of library needs because of what the EU has ruled.

MOOCs and student learning

Now that the MOOC on Copyright for Educators and Librarians has finished its first run, it seems like a good time to post some reflections on what I learned from the experience.

The first thing I learned is that offering a MOOC takes a lot of work, and it is easier when that work is shared.  In my case, I was working with two wonderful colleagues — Anne Gilliland from the University of North Carolina, Chapel Hill and Lisa Macklin from Emory — who made the effort of putting the course together much more pleasant.  Both are lawyers and librarians with lots of experience teaching the issues we were dealing with, and we are all friends as well, which made the whole process a lot easier.  We also benefited from the terrific support we got from consultants working for Duke’s Center for Instructional Technology, which may be the single most MOOC-savvy group at any university.

That we had a great team was not really a surprise.  I was a bit more surprised, however, and quite pleasantly, by the quality of the student discussion in our MOOC.  I had heard from other instructors about how effective the online discussion forums could be, but was just a bit skeptical.  Then I was able to watch as MOOC participants would pose difficult questions or struggle with the application of copyright law to a particular situation, and repeatedly the other course participants would work through the problem in the forums and arrive at surprisingly good answers. Peer-to-peer teaching is a reality in MOOCs, and is certainly among the best features of these courses.

One thing we know about MOOCs is that they often have participants with considerable background in the topic; often they have enrolled for a refresher or to see how someone else teaches the topic.  These people are a great asset in the MOOC.  Even if they are not among the participants most likely to complete a course, by whatever formula for completion is in place, they are tremendously important to the success of the course because of the contribution they make to peer-learning in the discussion forums.

Acknowledging the contribution of “expert students” also offers a reminder to MOOC instructors to take a more humble approach to the standards we set for completion of our courses.  The open and online nature of these courses means that students enroll with a wide variety of goals in mind.  As I just said, some are experts looking to see how others teach the topic.  Completion of quizzes and such may be unimportant to such participants, even though they are getting valuable career or personal development from the course.

Along these lines, I agree wholeheartedly with this essay by Jeff Pomerantz about apologies for failing to complete a course.  Like Jeff, my colleagues and I got multiple e-mails in which participants explained their “failure” to complete the course.  Like Jeff, we often smiled to ourselves and chalked those messages up to a misunderstanding of what MOOCs are.  And like Jeff, we learned that there are so many reasons for taking a course, so many different goals that participants bring to their involvement, that it is more likely we instructors who need to get a better understanding of MOOCs.

Many of the participants in our specific course were librarians and educators; they were our target audiences, so that makes sense.  These are groups that take assignments and course completion very seriously, which was reflected in our very high completion rate (over 15%).  But it also means that these were folks who wanted to explain to us when they were not going to complete the course according to official standards.  Maybe they did not realize that we were unable to track participation at an individual level due to the technology and the volume of students.  Nevertheless, we needed to treat their desire to explain with respect, and to recognize that many of those who did not earn a certificate of completion probably got what they wanted from the course, and also very likely made important contributions to what other participants learned.

Last week I attended a meeting of Duke’s MOOC instructors, which focused on discussions about how we can use data available about the MOOCs to learn more about the teaching and learning process.  It was a fascinating meeting on several levels, but one thing I got from it was two stories about the kinds of goals that MOOC participants might have.

  • One faculty member who had taught a MOOC explained, incidentally, his own motivation for taking a different online course.  His own career as a student had been so focused on his own specialty that he had never gotten a chance to take a basic course in a different field that had always interested him.  “There was so much to learn,” he said, “and so little time.” A MOOC gave him a chance to fill that long-felt gap, and I will bet that he was a valuable student to have in the course: very highly motivated, like so many MOOC participants, whether or not he finished the assignments that led to completion.
  • One of the administrators of Duke’s online initiative told about overhearing two students discussing the fact that each was taking a MOOC, and interrupting the conversation to ask why each had enrolled.  One of the women was a Ph.D. student who explained that there were certain areas of study or skills that she needed to complete her dissertation that were most efficiently gained by taking parts of a MOOC or two.  She registered in order to listen to selected videos that have relevance for her specific research.  She is a perfect example of someone who will not count toward a completion statistic but who is gaining something very valuable through her participation.

The other thing I learned from this meeting about potential research enabled by MOOCs is the myriad ways that these online courses can help improve teaching and learning on our own campus.  Duke has said all along that improving the experience of our own students was an important goal of our involvement with MOOCs.  When I heard this, I usually thought about flipped classrooms.  But that is a very small part of what MOOCs can do for us, I discovered.  I was privileged to listen to a comprehensive discussion about how the data we gather from MOOCs can be used to improve the student experience in our regular classrooms.  Very specific questions were posed about the role of cohorts, the impact of positive and negative feedback, how we can harness the creative ideas students raise during courses, and how to better assess the degree to which individual students have met the unique goals they brought to the course.  All of this has obvious application well beyond the specific MOOC context.

The most important thing I learned from the experience of teaching a MOOC actually has little to do with online courses as such.  It is a renewed respect for the complexity and diversity of the learning process itself, and a sense of awe at being allowed to play a small role in it.

Who owns that journal? — an update

Earlier this year I wrote about a lawsuit involving Duke University Press and its dispute with the Social Science History Association over who would control the journal Social Science History. A decision from the trial court in North Carolina has now been issued in the case, so it seems like a good time to update the story.

In my earlier post, I summarized the facts of the dispute this way:

The SSHA has informed DUP that it wants to end its long-standing association and look for a different publisher for its flagship journal, Social Science History.  The Press, however, asserts that language in their original contract means that the SSHA can stop participating in the journal, but cannot remove it from the control of DUP.
I also said that there was probably more to the case than met the eye, and the facts recounted in the District Court decision, which is a summary judgment based on the documents filed by both parties, seem to confirm that.

The District Court’s account of the facts of the dispute really helps explain the unusual language that is at the heart of the case — “Discontinue participation in publishing the journal.”  This is very odd language, and it turns out to come about because of an extraordinary agreement.  Apparently in 1996 the Social Science History Association felt it was unable to support the journal anymore, and Duke University Press agreed to take over the financial responsibility.  DUP collected dues for the SSHA, took all the risks of publishing the journal, and returned 50% of the dues income to the SSHA each year.  The unusual language about withdrawal of participation comes from this 1996 agreement, which the SSHA decided it wanted to terminate in 2012, after soliciting new bids for publishing services from other academic publishers.

The other facts I found especially interesting involved the move by the SSHA to pressure Duke Press regarding its interpretation of the agreement, by contacting third parties like HighWire Press, which distributes Duke Press journals online, and demanding that they withhold payment of monies apparently due to DUP because of the conflict.  When the lawsuit was filed, the SSHA alleged that Duke Press had violated the agreement and infringed SSHA’s copyright by publishing an electronic version of the journal.

Apparently the full complaint included seven distinct claims made by the SSHA against Duke Press, and three counterclaims by the Press.  So disentangling the decision is difficult.  But it comes down to two key findings.

First, the court holds that the unusual language about withdrawing from participation must be interpreted in light of the agreement as a whole and the intent of the parties that it expresses.  While acknowledging that the interpretation offered by the Press is possible if the wording is read in isolation, overall the court determines that the ownership of the journal always remained with the SSHA.  So on that central issue — who owns the journal and has the right to determine its future — it is the Association rather than the Press, and the court finds that the 1996 agreement was terminated as of Jan. 1, 2014.

The Association is therefore free to find another publisher, and I think my speculation that the journal will move to a large commercial press that will return higher profits to the Association will be justified.  I am very confident that we will see a significant hike in the cost of Social Science History.

The other important holding in this decision is that Duke Press was not in breach of contract or committing copyright infringement when it published the journal electronically. The court holds that authorization for such publication, while not explicitly included in the 1996 agreement, “is implied by the plain terms of the agreement and is evidenced by the actions of the parties.”  Specifically, the SSHA knew about electronic publication for years, but only objected to it when its efforts to terminate the agreement led to conflict.  It was the SSHA, the court finds, that breached the agreement by causing third parties to withhold funds due to DUP, and the court orders the SSHA to relinquish all claim to those monies.

This case seems both unusual and unfortunate to me.  It is unusual because the specific circumstances that led to the conflict are unlikely to be repeated; it does not set any precedent that we all must now be cognizant of.  It is unfortunate, first, because it pitted two venerable and respected academic organizations against each other.  And even more disturbing is that it is an illustration of the baleful effects that can result from the large amounts of money that commercial publishing extracts from the academy, and the temptation to get in on the profits.

About that simian selfie

By now, most people know about the macaque monkeys that took pictures of themselves in the Indonesian jungle, and the controversy over who, if anyone, owns a copyright in the resulting pictures.  The events actually took place several years ago, but the popular news media has recently picked up the story because of threats by the photographer whose cameras were used, David Slater, to sue Wikipedia if the photographs, which Slater says are his intellectual property, are not taken down.  Wikipedia refused his takedown request, and the story has gone viral.  I know a copyright story has penetrated into popular culture when I am asked about it at home, over the dinner table!

There has been so much commentary, including an impassioned defense of the rights of monkey artists, that I am hesitant to wade into the fray.  But I think there are some lessons we can learn from both the story and people’s reaction to it; there is an opportunity here to remind ourselves of some basic principles about copyright law.

So let me offer two comments about the controversy itself, and two comments about the public reaction to it.

First — cards on the table — I am inclined to believe that these photos do not have any copyright protection because they lack any element of human agency.  The fundamental principle of copyright law, stated very early in the U.S. law, is that copyright protects “works of authorship.”  And although I never really thought the idea would be challenged, I think authorship implies human agency.  There must be a spark of creativity, a decision to create a work, for something to be a work of authorship, and these photos do not have that.  The story has shifted a bit over time, but it does not appear that Mr. Slater gave his cameras to the monkeys deliberately.  Even if he did, that could hardly be called a creative decision.  He had no control over what they did with the cameras; they were as likely to smash them on the ground as to take pictures.

In the well-known Feist decision about the white pages of phone books, the Supreme Court discussed the idea of a work of authorship, and told us that mere hard work — sweat of the brow, as they said — does not by itself earn copyright protection.  A work of authorship must be original in the sense of not copied, but it also must have a spark of human creativity.  It need not be much, but some human decision is required.

By the way, the Feist decision is a roadmap to why Mr. Slater’s arguments about the money and effort that went into his photo-taking trip to Indonesia do not succeed in convincing me that he should have a copyright in these specific photos.  His other argument, that the monkeys stood in the shoes of a hired assistant, such that Mr. Slater should own the copyright in these photos as the “employer,” also fails, if it were ever seriously proposed. For one to be an independent contractor, human agency is again required.  In U.S. law, there must be an explicit agreement before any copyrighted work created by a contractor could be considered work for hire.  And monkeys do not have the legal capacity to enter into a contract.  This is a legal point; no matter how smart other creatures are or the degree to which humans should recognize animal rights, the law, including copyright and contract, deals with relationships between human beings.

Human relations are the foundation of my other legal point about this situation.  We need to remember that copyright has a specific incentive purpose; it is designed to reward creators so that they will continue to create.  The law grants a government-enforced monopoly so that authors and other creators can make enough money to support their creative efforts.  And this justification completely fails when the creator is not a participant in human society.  Granting a copyright in a monkey-taken photograph would be a mockery of the incentive purpose of the law.

Mr. Slater seems to argue the incentive purpose of copyright when he asserts that it takes a lot of hard work and investment to be a photographer, and only a few of the many photos he takes will make any money.  All true, I am sure, but not relevant.  This is not a photo that he took, and it does not meet the standard of a work of authorship eligible for the peculiar legal structure we humans call copyright.  Many reactions to this story have focused on the idea that it is somehow unfair to deny Mr. Slater profit from the photo.  But copyright law is not about fairness, nor does it exist to reward industry or protect an investment.  Laws are passed for many purposes, and fairness is often well down on the list of reasons.  It is silly to expect to be able to use copyright to leverage some result considered “fair” in every situation that involves an (apparently) creative work.

My final observation about the reactions this story has provoked is actually eloquently made by Mike Masnick of TechDirt when he writes about “How that monkey selfie reveals the dangerous belief that every bit of culture must be owned.”  It is curious to see how uncomfortable many people are with the idea that no one at all owns these photos; that they are therefore the property of everyone.  Perhaps it is the unfortunate influence of the big content industries that makes some people uncomfortable with the idea of the public domain.  One of the early articles in this new round of media coverage asked its readers to vote on who owned the copyright in the photo.  Most voters split between Mr. Slater and the monkey, with less than 17% willing to assert that no one owned it and that it was in the public domain.  And apparently Wikipedia has decided to let the public vote on continuing to make the photo available.

Maybe the real lesson we should take from this silliness is that the public domain is real, and it is important.  Much of human culture would not have been possible without free access to our shared literary and artistic heritage.  The U.S. Constitution builds a public domain into the very words by which it gives Congress the power to pass copyright laws, by requiring that those rights be “for limited times.”  We cannot have copyright without a public domain; to attempt that would be cultural suicide.  So we need to learn not just to accept the public domain, but to celebrate it.  Perhaps that is why the monkey is smiling.

Signing My Rights Away (a guest post by Jennifer Ahern-Dodson)

NOTE — Authorship can be a tricky thing, impacted by contractual agreements and even by shifting media.  In this guest post by Jennifer Ahern-Dodson of Duke’s Thompson Writing Program we get an additional perspective on the issues, one that is unusual but might just become more common over time.  It illustrates nicely, I think, the link between authorship credit, publication agreements and a concern for managing one’s online identity.  A big “thank you” to Jennifer for sharing her story:

Signing My Rights Away

Jennifer Ahern-Dodson

I stared at my name on the computer screen, listed in an index as a co-author for a chapter in a book that I don’t remember writing. How could I be published in a book and not know about it? I had Googled my name on the web (what public digital humanist Jesse Stommel calls the Googlesume), as part of my research developing a personal website through the Domain of One’s Own project, which emphasizes student and faculty control of their own web domains and identities. Who am I online? I started this project to find out.

I was taken aback by some of what I found because it felt so personal—my father’s obituary, a donation I had made to a non-profit, former home addresses. All of that is public information, so I shouldn’t have been surprised, but then about four screens in I found my name listed in the table of contents for a book I’d never heard of. Because the listed co-author and I had collaborated on projects before, including national presentations and a journal publication, I wondered if I had just forgotten something we’d written together.

I emailed her immediately and included a screenshot of the index page. Subject line: “Did we write this?”

She wrote back a few minutes later.

WHAT??!!!  We have a book chapter that we didn’t even know about???!!!!!  How is this possible?  Ahahahahahahahaha!!!!!

It’s a line for our CV! But, wait, what is this publication? Do we even want to list it? Would we list it as a new publication? Is it even our work? How did this happen?

This indeed was a mystery. At the time this was all unfolding, I was participating in a multidisciplinary faculty writing retreat. Once I shared the story with fellow writers, they enthusiastically joined in the brainstorming and generated a wide range of theories including plagiarism, erroneous attribution, a reprint, and an Internet scam (see Figure below). I mapped the possibilities for this curious little chapter called “Service Learning Increases Science Literacy,” listed on page 143 of the book National Service: Opposing Viewpoints (2011)[1].

[Figure: a map of the possible explanations for the mystery chapter — plagiarism, erroneous attribution, a reprint, an Internet scam]

I needed to do more research and so requested the book through Interlibrary Loan and purchased it online as well.

And then there was the story of the editor. Who was she? Did she really exist? Was she a robot editor—just a name added to the front of a book jacket? I started wondering, now that so much of our work is digitized, are robots reading—and culling through—our work more than people? A quick search on Google revealed she was the editor for over 300 books, mostly for young adults. Follow up searches on LinkedIn and Google+ revealed profiles that seemed authentic.

The book arrives.

About a week later, the book arrived through Interlibrary Loan. While still standing at the library service desk, I quickly flipped to page 143.

[Image: page 143 of National Service: Opposing Viewpoints, showing the reprinted chapter]

What I discovered is a reprint (with a new title) of an article my co-author and I had published in the Journal of College Science Teaching.[2] It was republished with permission through the journal, conveyed through Copyright Clearance Center. The table of contents included a range of authors and works, including an excerpt from a speech by George W. Bush.

It all looked legitimate. But how could I be published and not know about it?

In an email conversation with Kevin Smith, my university’s scholarly communication director and copyright specialist, I learned that typically in publication agreements, authors transfer copyright to the organization that publishes the journal. From then on, the organization has nearly total control. It can do what it wants with the article (like republish it or modify it), and for most other uses I might want to make (like including it on my website), I’d have to ask their permission.


I also learned that republication is not uncommon.  Although this book is marketed as “new,” it is in fact really just repackaged material from other sources that libraries likely already have.  In this case, our article for a college teaching journal was repackaged for an audience of high school teachers as part of an opposing viewpoints series, essentially marketing the same content to a different audience.

In a slightly different repackaging model, MIT Press has started re-publishing scholarly articles from its journals in a thematically curated eBook series called Batches.

These two models made visible for me the ways that copyright, institutional claims, and the Internet fuel change at a pace so rapid it seems almost impossible for authors to keep up.

Where to go from here

Although the ending to this mystery is not as thrilling as I thought it would be (someone plagiarized our work! Someone recorded and transcribed a talk! The book is a scam!), what I uncovered was this whole phenomenon of book republishing. Our chapter was legitimately repackaged in a mass marketed book with copyright secured, which allowed our work to be shared with a broader audience (which I see as a good thing). Yet, the process distanced me from my work in a way I was not expecting. In my naïve, yet I suspect widely held view of academic authorship, I assumed the contract I had signed was simply a formality, more of a commitment by the journal to publish the article and an agreement by my co-author and me to do so. I only skimmed the contract, distracted perhaps by the satisfaction of getting published and the opportunity to circulate my ideas more broadly.

As I submerged myself into the murky depths of republishing, I started to think about my own responsibility as both a writer and a teacher of undergraduate writers, to educate myself on authors’ rights. Could I negotiate publishing agreements to retain copyright? Or, at the very least, could I secure flexibility to re-use my work? As it turns out, yes. The Scholarly Publishing and Academic Resources Coalition has created an Author Addendum to help authors manage their copyright and negotiate with publishers rather than relinquishing intellectual property.

Although it is not uncommon for publishers to ask authors to sign over their legal rights to their work, at least one publisher—Nature Publishing, which includes the journals Scientific American and Nature—goes even farther. It requires authors not only to waive their legal rights but also their “moral rights.” Under this agreement, work could conceivably be republished without attribution to the original author. A story about this appeared a couple of months ago; see http://chronicle.com/article/Nature-Publishing-Group/145637/.

In my case, I clearly did not do due diligence as an author when I read and signed the agreement for the science literacy article, and neither the journal nor the book editor or publisher was under any legal obligation to notify me that my work was republished or retitled. I wonder, however, what would happen if we applied the concept of academic hospitality to our publishing relationships. Could a simple email notification when/if our work gets republished be a kind of professional courtesy we can expect? Or, should we as authors get more comfortable with less control over our work and choose to share our ideas more liberally in public domains in addition to academic journals, which have limited readership and at times draconian author agreements? Do institutions have any role to play in educating their faculty and graduate students about signing agreements?

In my quest to create a domain of my own, to “reclaim the web” and be an agent in crafting my own author identity online, I discovered that, in fact, I had given up control of some of my own work. Now, I’m aware of the need to balance going public with my work—both online and in print—with a thoughtful and informed understanding of my rights and responsibilities as an academic author.

[1] Gerdes, Louise, ed. National Service: Opposing Viewpoints. Greenhaven Press, 2011.

[2] Reynolds, J. and Ahern-Dodson, J. “Promoting science literacy through Research Service-Learning, an emerging pedagogy with significant benefits for students, faculty, universities, and communities.” Journal of College Science Teaching 39.6 (2010).

Planning for musical obsolescence

Gustavo Dudamel is one of the most celebrated conductors of his generation.  As Music Director of both the Los Angeles Philharmonic and the Simon Bolivar Orchestra of Venezuela, he has built a solid and enthusiastic following amongst lovers of symphonic music.  He is also, according to his website bio, deeply committed to “access to music for all.”  So it is particularly poignant that a recording by Dudamel should serve as the prime example of a new access problem for music.

When Dudamel and the Los Angeles Philharmonic release a new recording of a live performance of Hector Berlioz’s Symphonie Fantastique, it should be a significant event, another milestone in the interpretation of that great work.  But in this particular case we are entitled to wonder if the recording will really have any impact, or if it will drop into obscurity, almost unnoticed.

Why would such a question arise?  Because the Dudamel/LA Philharmonic recording was released only as a digital file and under licensing terms that make it impossible for libraries to purchase, preserve and make the work available.  When one goes to the LA Philharmonic site about this recording of Symphonie Fantastique and tries to purchase it, one is directed to the iTunes site, and the licensing terms that accompany the “purchase” — it is really just a license — restrict the user to personal uses.  Most librarians believe that this rules out traditional library functions, including lending for personal listening and use in a classroom.  Presumably, it would also prevent a library from reformatting the work for preservation purposes in order to help the recording outlive the inevitable obsolescence of the MP3 or MP4 format.  Remember that the section 108 authorization for preservation copying by libraries has restrictions on digital preservation and also explicitly allows contractual provisions to override that part of the law.

At a recent consultation to discuss this problem, it was interesting to note that several of the lawyers in the room encouraged the librarians to just download the music anyway and ignore the licensing terms, simply treating this piece of music like any other library acquisition.  Their argument was that iTunes and the LA Philharmonic really do not mean to prevent library acquisitions; they are just using a boilerplate license without full awareness of the impact of its terms.  But the librarians were unwilling.  Librarians as a group are very law-abiding and respectful of the rights of others.  And as a practical matter, libraries cannot build a collection by ignoring licensing terms; it would be even more confusing and uncertain than it is to try to comply with the myriad licensing terms we encounter every day!

In the particular case of the Dudamel recording of Berlioz, we know rather more about the situation than is normal, because a couple of intrepid librarians tried valiantly to pursue the issue.  Judy Tsou and John Vallier of the University of Washington tracked the rights back from the LA Philharmonic, through Deutsche Grammophon to Universal Music Group, and engaged UMG in a negotiation for library-friendly licensing.  The response was, as librarians have come to expect, both inconsistent and discouraging.  First, Tsou and Vallier were told that an educational license for the download was impossible, but that UMG could license a CD.  Later, UMG dropped the idea of allowing the library to burn a CD from the MP3 and said an educational license for download was possible, but only for up to 25% of the “album.”  For this 25% there would be a $250 processing fee as well as an unspecified additional charge that would make the total cost “a lot more” than the $250.  Even worse, the license would be limited to 2 years, making preservation impossible. The e-mail exchange asserts that UMG is “not able” to license more than 25% of the album for educational use, which suggests that part of the problem is that the rights ownership and licensing through to UMG are tangled.  But in any case, this is an impossible proposal.  The cost is absurd for one quarter of an album, and what sense does it make for a library to acquire only part of a performance like this for such a limited time? Such a proposal fundamentally misunderstands what libraries do and how important they are to our cultural memory.

Reading over the documents and messages in this exchange, it is not at all clear what role Maestro Dudamel and the LA Philharmonic have in this mess.  It is possible that they simply do not know how the recording is being licensed or that it is unavailable for libraries to acquire and preserve.  Or they may think that by releasing the recording in digital format only they are being up-to-date and actually encouraging access to music for everyone.  In either case, they have a responsibility to know more about the situation, because the state of affairs they have allowed impedes access, in direct contradiction to Maestro Dudamel’s express commitment, and it ensures that this recording will not be part of the ongoing canon of interpretation of Berlioz.

As far as access is concerned, the form of its release means that people who cannot afford an MP3 player will not be able to hear this recording.  Many of those people depend on libraries, and that option will be closed to them because libraries cannot acquire the album.  Also, access will become impossible at that inevitable point in time when this format for digital music becomes obsolete.  Maybe UMG and the Philharmonic will pay attention and release the recording in a different format before that happens, but maybe they won’t.  The most reliable source of preservation is libraries, and they will not be there to help with this one.  So access for listeners 20 or 30 years from now is very much in question.

This question of the future should have great consequence for Maestro Dudamel and the orchestra.  Without libraries that can collect their recording, how will it be used in classrooms in order to teach future generations of musicians?  Those who study Berlioz and examine the performance history of the Symphonie Fantastique simply may not know about this performance by Dudamel and the LA Philharmonic.  That performance, regardless of how brilliant it is, may get, at best, a footnote in the history of Berlioz — “In 2013 the Symphonie Fantastique was recorded by the LA Philharmonic under the baton of Gustavo Dudamel; unfortunately, that recording is now lost.”  These licensing terms matter, and without due attention to the consequences that seemingly harmless boilerplate like “personal use only” can produce, a great work of art may be doomed to obscurity.

Attention, intention and value

How should we understand the value of academic publications?  That was the question addressed at the ALA Annual Conference last month during the SPARC/ACRL Forum.  The forum is the highlight of each ALA conference for me because it always features a timely topic and really smart speakers; this year was no exception.

One useful part of this conversation was a distinction drawn between different types of value that can be assigned to academic publications.  There is, for example, the value of risk capital, where a publication is valued because someone has been willing to invest a significant amount of money, or time, in its production.  Seeing the value of academic publications in this light really depends on clinging to the scarcity model that was a technological necessity during the age of print, but which is increasingly irrelevant.  Nevertheless, some of the irrational opposition we see these days towards open access publications seems to be based on a myopic approach that can only recognize this risk value; because online publication can be done more inexpensively, at both the production and the consumption end, and therefore does not involve the risk of a large capital investment, it cannot be as good.  Because the economic barrier to entry has been lowered, there is a kind of “they’ll let anyone in here” elitism in this reaction.

Another kind of value that was discussed is the cultural value that is supposedly infused into publications by peer-review.  In essence, peer-review is used as a way to create a different, artificial type of scarcity — amongst all the material available in the digital age, peer-review separates and distinguishes some as having a higher cultural value.

Of course, there is another way to approach this kind of winnowing of valuable material from the blooming, buzzing confusion; one could look at how specific scholarship has been received by readers.  That is, one could look at the value created by attention.  We are especially familiar with attention value in the age of digital consumerism because we pay attention to Amazon sales figures, we seek recommendations through “purchased together” notes, and we look at consumer reviews before booking a hotel, or a cruise, or a restaurant.  Some will argue that these parallels show that we cannot trust attention value; it is only good for inconsequential decisions, the argument goes. But figuring out how to use attention as a means to make sound evaluations of scholarship — better evaluations than we are currently relying on — is the focus of the movement we call “alt-metrics.”

Before we discuss attention value in more detail, however, we need to acknowledge another unfortunate reminder that the cultural value created by peer-review may be even more suspect and unreliable. Last week we saw a troubling incident that provokes fundamental doubts about peer-review and how we value scholarly publications when Sage Publishing announced the retraction of sixty articles due to a “peer-review ring.”  Apparently a named author used fake e-mail identities, and maybe some cronies, in order to review his own articles and to cite them, thus creating an artificial and false sense of the value of these articles.  Sage has not made public the details, so it is hard to know exactly what happened, but as this article points out, the academic world needs to know — deserves to know — how this happened.  The fundamental problem that this incident raises is the suggestion that an author was able to select his own peer-reviewers and to direct the peer-review requests to e-mails he himself had created, so that the reviewers were all straw men.  Although all the articles were from one journal, the real problem here is that the system for peer-review apparently simply is not what we have been told it is, and does not, in fact, justify the value we are encouraged to place on it.

Sage journals are not inexpensive.  In fact, the recent study of “big deal” journal pricing by Theodore Bergstrom and colleagues (subscription required) notes that Sage journal prices, when calculated per citation (an effort to get at value instead of just looking at price), are ten times higher than those for journals produced by non-profits, and substantially higher even than Elsevier prices.  A colleague recently referred to Sage journals in my hearing as “insanely expensive.” So it is a legitimate question to ask if we are getting value for all that money.  One way high journal prices are often justified, now that printing and shipping costs are mostly off the table, is based on the expertise required at publishing houses to manage the peer-review system.  But this scandal at the Journal of Vibration and Control raises the real possibility that Sage actually uses a kind of DIY system for peer-review that is easily gamed and involves little intervention from the publisher.  How else could this have happened?  So we are clearly justified in thinking that the value peer-review creates for consumers and readers is suspect, and that attention value is quite likely to be a better measure.
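The price-per-citation comparison in the Bergstrom study is straightforward arithmetic; here is a toy example (all numbers invented purely to show the calculation, not actual journal prices or citation counts):

```python
# Toy cost-per-citation comparison in the spirit of the Bergstrom
# study. Every figure below is invented for illustration.

journals = {
    "Nonprofit journal":  {"price": 400.00,  "citations": 2000},
    "Commercial journal": {"price": 3000.00, "citations": 1500},
}

for name, data in journals.items():
    per_citation = data["price"] / data["citations"]
    print(f"{name}: ${per_citation:.2f} per citation")

# Nonprofit journal: $0.20 per citation
# Commercial journal: $2.00 per citation
```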

Attention can be measured in many ways.  The traditional impact factor is one attempt to analyze attention, although it only looks at the journal level, measures only a very narrow type of attention, and tells us nothing about specific articles.  Other kinds of metrics, those we call “alt-metrics” but ought to simply call metrics, are able to give us a more granular, and hence more accurate, way to evaluate the value of academic articles.  Of course, the traditional publication system inhibits the use of these metrics, keeping many statistics proprietary and preventing cross-platform measurements.  Given the Sage scandal, it is easy to see why such publishers might be afraid of article-level measures of attention.  The simple fact is that the ability to evaluate the quality of academic publications in a trustworthy and meaningful way depends on open access, and it relies on various forms of metrics — views, downloads, citations, etc. — that assess attention.
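To make “article-level” measurement a bit more concrete, here is a small sketch of what aggregating open signals for a single article might look like (the signal names and weights are my own invention, not any real altmetrics formula):

```python
# Hypothetical article-level attention score. The signal names and
# weights are invented for illustration; real altmetrics services
# use their own, often proprietary, formulas.

WEIGHTS = {"views": 0.1, "downloads": 0.5, "citations": 3.0}

def attention_score(metrics):
    """Weighted sum of whatever open signals are available."""
    return sum(WEIGHTS.get(name, 0.0) * count
               for name, count in metrics.items())

article = {"views": 5400, "downloads": 820, "citations": 12}
print(attention_score(article))  # 540.0 + 410.0 + 36.0 = 986.0
```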

But the most important message, in my opinion, that came out of the SPARC/ACRL forum is that in an open access environment we can do better than just measuring attention.  Attention measures are far better than what we have had in the past and what we are still offered by toll publishers. But in an open environment we can strive to measure intention as well as attention.  That is, we can look at why an article is getting attention and how it is being used.  We can potentially distinguish productive uses and substantive evaluations from negative or empty comments.  The goal, in an open access environment, is open and continuous review that comes from both colleagues and peers.  This was an exciting prospect when it was raised by Kristen Ratan of PLoS during the forum, where she suggested that we should develop metrics similar to the author-to-author comments possible on PubMed Commons that can map how users think about the scholarly works they encounter.  But, after the Sage Publishing debacle last week, it is easier to see that efforts to move towards an environment where such open and continuous review is possible are not just desirable, they are vital and very urgent.

A win, oddly

Because I am on vacation this week and have very intermittent Internet access, I am hardly the first to announce that the Second Circuit Court of Appeals affirmed the lower court decision (mostly) in the Authors Guild v. HathiTrust lawsuit. I am a bit paranoid about major decisions coming down on days when I am out of touch, but that is another matter. The important point is that the decision is another important win for libraries and fair use, brought to us by the foolishly litigious Authors Guild. It is the first of three major appeals in fair use cases that academic libraries should be watching carefully, and it may help cause a domino effect in those other two (the Georgia State and Google Books cases).

This potential for impact on decisions currently being written by other judges is increased by the fact that the Second Circuit, in discussing transformation as a major element in fair use, deliberately cited precedents from its own previous cases, as well as cases from the Ninth Circuit and two other Circuit Courts of Appeals. The judges seem to be deliberately rejecting the idea that the circuits are split about transformative fair use.

This decision is very good news for libraries, and the ARL Public Policy Notes description of the decision is well worth reading. But for all its positives, it has to be admitted that there are some oddities in this decision.

Basically, the Court did three different things in this decision:

  1. It affirmed the lower court ruling that the Authors Guild did not have standing – the right to bring the lawsuit – on behalf of its members. Another reminder of the oft-repeated rule that only a rights holder may sue to defend those rights, and associations that claim to represent rights holders but do not own any rights are not proper plaintiffs. A simple lesson the Authors Guild declines to learn.
  2. The court also affirmed that mass digitization for the purpose of creating a searchable index of full-text materials, as well as to provide access to those materials for persons with disabilities, is fair use. There is a lot of language in this opinion that reinforces the ARL Code of Best Practices in Fair Use for Academic and Research Libraries.
  3. Finally, the judges remanded the case back to the lower court in regard to its opinion about fair use for preservation. This is one of the oddities in the decision, so let’s address that one first.

The oddity about this remand is that it does not actually question the conclusion that digitization for preservation can be fair use. Instead, the Court sent this portion of the case back to the lower court to decide if there was any plaintiff remaining in the case, once it was determined that the AG lacked standing, who was at any real risk of having a preservation copy of their book released by HathiTrust while there were still copies commercially available. In short, the Court of Appeals suggested that any ruling about fair use might have been premature because there was no plaintiff in a legally recognizable position to raise the challenge. It is still entirely possible that, if such a plaintiff is found in the remaining group of named authors, fair use could nevertheless be affirmed. And, because of the rest of the ruling, it would be hard to see what difference even a ruling against fair use for preservation would make to the actual practice of the HathiTrust. So this was really a technicality, and quite strange.

By the way, in regard to the key argument raised by the Authors Guild that the library-specific exception in section 108 precludes libraries from relying on fair use, the court paid almost no attention. It dismissed this silly argument in a footnote (footnote 4 on page 13). This was a losing argument from the start, and the reliance placed on it by the AG shows just how out of touch they are in their approach to copyright.

I think three points are important about the fair use decision favoring HathiTrust in this case (the factor-by-factor analysis is handled well in the ARL post).

First, the Second Circuit accepted the same broad approach to the issue of transformation as has become common in other decisions. It is not just actual changes to the original work that can support a finding of transformation, but a “different purpose… new expression, meaning or message.” And, as I said, the Court appealed to a broad consensus across the country in defining transformation this way.

Second, the Second Circuit held that the lower court was wrong to find that digitization for the purpose of facilitating access for persons with visual or print disabilities was transformative, but found that it was fair use nevertheless. This is important, because in the Georgia State appeal the plaintiffs are arguing that because Judge Evans found that copying for electronic reserves was not transformative, she was in error to still find fair use. But in the HathiTrust case the Second Circuit recognizes what is there for all who read Supreme Court opinions to see, that when a use is transformative it is very likely to be fair use, but when it is not transformative, it can still be fair use if a careful analysis of the factors indicates that conclusion. That is what the Second Circuit finds in regard to HathiTrust and its copies for the disabled, and it is what Judge Evans found in GSU. Both were correct decisions in keeping with the clear precedent from the Supreme Court.

Finally, there is the oddity of the Second Circuit panel’s treatment of the fourth fair use factor when it is analyzing the indexing function of HathiTrust. First, the appellate panel calls the fourth factor the most important consideration, and cites the case of Harper & Row v. The Nation for that proposition. But the Supreme Court really renounced that position 20 years ago in the “Oh Pretty Woman” case, so this is the first part of the oddity. The Second Circuit then goes on to define the idea of market harm very narrowly, saying that the only harm to a market that is recognized for the purpose of the fourth fair use factor is when “the secondary use serves as a substitute for the original work.” This seems to be how the court aligns itself with the ruling in “Pretty Woman,” but it is a strange way to get there. The effect of this proposition is to rule out consideration of almost all licensing markets when looking at the fourth factor. This is a conclusion that must be causing serious heartburn in the publishing community. While the Authors Guild continues to make fair use easier and more inclusive with their absurd litigation campaign, they cannot be winning themselves many friends amongst rights holders.

The bottom line is that this decision is very good for libraries and others who depend on fair use. It adds another precedent and some additional bits of analysis to our claims of fair use. But we should recognize that it grows out of what was a very dumb lawsuit to begin with. As is so often the case, we should be emboldened by this ruling, but not too much. The best protection the library community has against aggressive litigation is still, as it always has been, careful and responsible reflection. In that context, fair use is an increasingly safe option for us.

Apology

A significant number of subscribers got spammed by this list today. Routine maintenance of the development server at Duke triggered a mistaken torrent of hundreds of old posts. The biggest problem was that there was a partial subscriber list as part of the development instance of the blog. That list has been removed, so this particular problem should never happen again. There was no hacking involved, and subscriber e-mails did not get harvested or released to anyone.

I am very sorry this happened. I certainly understand if folks want to unsubscribe from the list, but emphasize again that the production version of the blog did not cause this and was not compromised. The list of subscribers that was inadvertently associated with the development instance is no more.

A MOOC on copyright

It has taken a while to get here, but I am happy to be able to announce that two of my colleagues and I will be offering a four-week MOOC on copyright, designed to help teachers and librarians deal with the daily challenges they encounter in regard to managing what they create and using what they need.

The MOOC will be offered on the Coursera platform and will run for the first time starting July 21.  It is available as of today for folks to sign up at https://www.coursera.org/course/cfel.

It has been a great pleasure working with Anne Gilliland from the University of North Carolina Chapel Hill and Lisa Macklin from Emory University to create this course.  I hope and believe that the course is much stronger because the three of us worked together than it could possibly have been if any one of us did it alone.

This course runs for four weeks and focuses on U.S. copyright law.  While we are well aware of all the MOOC participants from other countries — and welcome folks from all over to join us — we also wanted to keep the course short and as focused as possible.  We hope perhaps to do other courses over time, and a more in-depth treatment of international issues and of how copyright works on the global Internet might be a good future topic.  In the meanwhile, this course deals with the U.S. law and the specific situations and issues that arise for librarians and educators at all levels.

We especially hope to attract K-12 teachers, who encounter many of the same issues that arise in higher education, and who often have even fewer resources to appeal to for assistance.  That is one reason for the summertime launch.

Another point about the focus in this course — our goal is to provide participants with a practical framework for analyzing copyright issues that they encounter in their professional work. We use a lot of real life examples — some of them quite complex and amusing — to help participants get used to the systematic analysis of copyright problems.

For many in the academic library community, the winding up of the courses offered by the Center for Intellectual Property at the University of Maryland University College has left a real gap.  This course is intentionally a first step toward addressing that gap.  It is, of course, free, and a statement of accomplishment is available for all participants who complete the course.  We hope this can assist our colleagues in education with some professional development and maybe, depending on local rules, even with continuing education requirements.

We very much hope that this course will be a service to the library and education community, and that it provides a relatively fun and painless way to go deeper into copyright than the average presentation or short workshop allows.
