Category Archives: Technologies

Steal this book?

Last week I was researching a copyright and fair use issue for a faculty member, and needed to see a copy of a book held by Duke’s Rubenstein Rare Book and Manuscript Library.  As I explained the issue and what material I wanted to use to the Rubenstein staff, a researcher sitting nearby listened intently. As soon as we finished, she told me that she was the President of the Authors Guild and that they were suing Google over fair use.  She began to explain to me why Google was wrong, but that the author for whom I was doing the research should be allowed to rely on fair use.  When I introduced myself as a lawyer and copyright specialist for the Libraries, the conversation came to a polite but stilted conclusion.

This week, however, I got a chance to see more fully what that researcher, whose name is Roxana Robinson and who was giving a lecture that afternoon in the Library, has to say about Google, in a column she wrote for the Wall Street Journal called “How Google Stole the Work of Millions of Authors” (behind a paywall).  Ms. Robinson, a novelist and biographer, unfortunately proves what I suspected at the time of our encounter, that her perspective on fair use is based on a preconceived idea about who are good users entitled to rely on fair use (authors) and who are bad, unworthy users (Google), rather than on an understanding of the careful legal analysis of specific uses that actually underlies these decisions.

The WSJ column employs some interesting rhetoric, starting with its title, which is clearly intended to provoke a visceral response.  Many people have noted that the language of theft and stealing is inappropriate when the issue is copyright infringement.  This point is made in great detail in William Patry’s book “Moral Panics and the Copyright Wars.”  As is true for most crimes, the definition of theft includes an intention, a mental state or “mens rea” that is a required element of that crime.  For theft this intention is “to deprive the true owner of [the personal property]” (definition from Black’s Law Dictionary, Seventh edition).  Because of the nature of intellectual property, copyright infringement never meets this definition; that is why the law has a different word — infringement — for the unauthorized taking of someone else’s IP.

So the headline of Ms. Robinson’s column is legally incorrect and intended, I think, to stir up her base rather than to make an argument that could sway the Supreme Court (for more on this point, see the rebuttal published in Fortune “Why the Authors Guild is Still Wrong about Google’s Book Scanning“).

The column also makes a couple of sardonic remarks about quotes that can be found using Google Books.  Here the argument breaks down pretty badly, because both of the quotes Ms. Robinson chooses, one from Shakespeare and one from Emerson, are in the public domain.  Her effort to be ironic seriously backfires here, because her own column is actually proving the utility of the Google Books database in a way that emphasizes its lawful use of PD texts.  Rhetoric has truly overcome logic.

It is worthwhile, nevertheless, to think for a minute about the logical structure of the argument that what Google has done is infringement.  Ms. Robinson makes the point that there are many books that were scanned by Google, that Google is a profitable company, and that no authorization for the scanning was asked for or given by the authors of the works that were scanned.  All of this is true, of course, but it does not amount to an argument that Google has infringed any copyrights.  What is missing, at least as I see it, is any showing that the authors have been harmed.  The rhetoric of the column clearly tells us that the Authors Guild, and at least some individual authors who are involved in the lawsuit, are angry.  But it does not explain a fundamental element of any tort action — harm.

The two courts that have considered this case both found that there was no harm done here — no negative impact on the market for or value of the works in question, to use the language that is part of a fair use analysis.  Users cannot obtain any significant portions of books that are limited to snippet views; the AG’s own experts were unable to retrieve as much as 16% of any work using word searches and snippet results, and even that amount of text was randomized in a way that made reading a coherent piece of the work impossible.  There is simply no evidence that any sales are lost due to this finding aid, and it is quite possible that sales will be gained.

There is, of course, the question of a licensing market.  But that is almost a silly question.  A market for licensing scans to create an index has never existed, and it is impossible to imagine that any of the authors had such an idea in mind when they wrote their works.  As Judge Leval said in his decision for the Second Circuit Court of Appeals, this is not really even a use of the work; it is a use of information about the work, for which a secondary licensing market simply is not appropriate.  Creating such a market would be revolutionary, and it would do much more harm to the overall environment for books and reading than anything Google could think up.  What the Authors Guild seems to be saying here is that Google should pay authors for something they never thought they would or should get paid for, simply because Google has a lot of money.  Perhaps when we recognize how weak that argument actually is, it becomes understandable that Ms. Robinson relied on overheated rhetoric rather than legal or logical arguments.  But if the purpose of her essay is to convince people that the Supreme Court needs to take the case to right a serious wrong, it falls far short, and is unlikely to convince the nine citizens whose opinion on that issue matters the most.

Rebels in the Campus Bookstore

A guest post by Will Cross, Director of Copyright and Digital Scholarship at North Carolina State University

As the semester winds down most normal people are sweating through final projects, scheduling visits with family and friends, or looking forward to a well-deserved holiday break by the fire (or at least the warming glow of the new Star Wars movie).  I can’t stop thinking about textbooks.

Several recent events have kept this topic on my mind.  First, Kevin and I are preparing to teach a class in the spring and we’re currently putting the finishing touches on our assigned readings.  Sitting at the breakfast table working through the syllabus, I was struck by a seemingly-unrelated comment from my wife, Kimberly, who is finishing her first semester in a doctoral program.  Making her own plans for the spring, she noted “I need to decide if I’m going to renew my statistics textbook.”

Readers who have been out of school for a few years might be surprised that many students like Kimberly rent, rather than purchase, their more expensive textbooks.  If textbook rental companies like Chegg and College Book Renter are not familiar names, you may also be surprised by how quickly textbook prices have spiraled out of control in the past decade.  Increasing at nearly triple the rate of inflation, textbook costs have outpaced rises in health care and housing prices, leaving students with an expected bill of more than $1,200 a year.

Faced with these unsustainable costs, students like Kimberly find themselves in an arms race, seeking alternative channels to acquire textbooks while publishers work to plug leaks in their captive marketplace.  Indeed, one of the largest copyright cases decided by the Supreme Court in recent years resulted from publishers’ attempt to create a “super-property” right in order to quash the sale of less expensive international textbooks.  The following year a casebook company attempted something similar using license provisions to strip property rights from students who “purchased” (ironically) their property law textbook.

While prices have gone up, student spending has not always followed suit, with many students renting, borrowing, or pirating textbooks.  Many more simply choose their courses and majors based on the costs of textbooks or delay their purchases to determine the extent to which a title is used in class, setting them back days or weeks in assigned readings.  Of greatest concern, a recent PIRG survey revealed that more than 65% of students simply muddle through with no textbook, even though the majority recognized that this presented a “significant concern” for their ability to successfully complete the course.  As a result, more than 10% of students fail a course each year because they simply cannot afford the book.

Textbook costs have priced many students out of equal participation in higher education, and colleges and universities should regard this as a social justice issue that threatens students’ academic progress.  Students have written powerfully about these issues on social media, using hashtags like #textbookbroke to document the burdens created by high prices.  For example, tweets from Kansas’ #KUopentextbook project have documented the harm done by students’ lost opportunities to travel to conferences, take unpaid internships, and compete on equal footing in the classroom.  As one student put it, “my wage shouldn’t determine my GPA.”

Closed, commercial textbooks also do significant harm to instructional design and academic freedom, forcing instructors to use one-size-fits-all books rather than diverse, tailored course materials.  This issue received national attention in November when an instructor was formally reprimanded for refusing to assign a $180 algebra book written by the chair and vice chair of his department.  As SPARC’s Nicole Allen notes, the well-intentioned practice of assigning a single book for multiple sections was designed to support a strong local used-book market but in practice it often entrenches a system of static commercial works.  It can also homogenize educational materials, limiting them to publisher-approved narratives that inhibit an instructor’s ability to bring her own voice and experience into the classroom.  Indeed, many publishers include value-added materials like test banks and pre-made assignments designed to create textbooks that are fully “teacher-proof.”

Students are often caught in the crossfire of a broken textbook market where books are sold by a small group of for-profit publishers who control 80% of the market, and purchasing decisions are made by faculty instructors but students are asked to pick up the bill.  This situation – where for-profit publishers leverage faculty incentives to exploit a captive academic market – should sound familiar to anyone working to bring open access to scholarly publishing.  The scale, however, is quite different: the textbook market exceeds the scholarly journal market by roughly $4 billion each year.

As they have with open access, academic stakeholders have begun to rebel, designing open materials that are not just cheaper than closed works but are positively better.  These open educational resources (OERs) may be peer-reviewed, Creative Commons-licensed textbooks like those found in Rice University’s OpenStax program or the University of Minnesota-led Open Textbook Network.  They also encompass modular learning objects like those found in the MERLOT repository, or even full courses like those offered through MIT’s OpenCourseWare.  Community colleges and system-wide efforts like Affordable Learning Georgia have been particularly effective in this space, with programs like Tidewater’s “Z-Degree” that completely remove student textbook costs from the equation.

In the past several years, academic libraries have joined the fray, raising awareness, offering grants, and collaborating with faculty authors to create a diverse body of open educational resources.  In the NCSU Libraries, we have followed the outstanding examples of institutions like Temple and UMass-Amherst by offering grants for faculty members to replace closed, commercial works with open, pedagogically-transformative OERs.  These projects create massive efficiencies for libraries – spending a few thousand dollars to save students millions – and a growing body of empirical data indicates that student learning and retention are improved by open materials.

It’s no surprise that an open textbook would be more effective than one that a third of students can’t afford to buy.  The greatest potential for OERs, however, comes from the way they empower instructors and engage with library expertise.  The “teacher proof” books offered today frequently reduce instructors to hired hands, reciting homogenized narratives approved by for-profit publishers.  In contrast, as one recent study concluded, an OER “puts ownership of curriculum directly back into the hands of teachers, both encouraging them to reflect on how the materials might be redesigned and improved and empowering them to make these improvements directly.”  Combined with support from libraries for instructional design, copyright and licensing, and digital competencies, OERs have the potential to transform pedagogy at the deepest levels.

For today’s students, textbook prices mean more than just a few extra days of subsisting on ramen noodles.  Too often, students have to choose between adding another thousand dollars to an already historic debt load or trying to get by without essential resources, and closed, commercial textbooks often leave faculty instructors with no choice at all.  These, to borrow a phrase, aren’t the books we’re looking for.

Google Books, Fair Use, and the Public Good

Note — thanks to several readers who pointed out that I had carelessly misspelled Judge Leval’s name in my original posting.  That error has now been corrected.

On Friday the Second Circuit Court of Appeals issued its ruling in the appeal of the Authors Guild lawsuit against Google over the Google book search project.  The decision was a complete vindication of the District Court’s dismissal of the case, affirming fair use and rejecting all of the counterarguments offered by the Authors Guild.

As it happens, I was traveling when the decision came down, confirming a troubling tendency of the federal courts to issue important copyright opinions when I am out-of-pocket.  (My wife says that it is not about me, but what sense does that make?)  In any case, that slight delay allows me to benefit richly from the analyses posted by some very smart colleagues.  Here are several great places to read about the decision:

From Brandon Butler of American University.

From Corynne McSherry of the Electronic Frontier Foundation

From Krista Cox of the Association of Research Libraries

From Carrie Russell at the American Library Association

I want to add, or really just pull out from these previous posts, three points that I think are especially important.

First, Judge Pierre Leval, who wrote the opinion, does a nice job of drawing a line from the idea of transformative uses to the public purpose of copyright law.  This is hardly surprising, since it was Judge Leval who wrote the 1990 article that coined the term transformative use and had such an influence on the Supreme Court in its 1994 decision in Campbell v. Acuff-Rose Music.  In this ruling, Judge Leval reminds us quite forcefully that the primary beneficiary intended by copyright law is the public, through “access to knowledge” (p. 13) and “expand[ed] public learning” (p. 15).  Economic benefits for authors are instrumental, not the ultimate goal of the copyright monopoly.  Then Judge Leval explains how this analysis of transformation serves those goals, clarifying why fair use is an essential part of copyright’s fundamental purpose.  He tells us that transformation is an answer to the question of how a borrowing from a copyrighted work can be justified.  The court, on behalf of a rights holder, asks a user “why did you do this?”  When the answer to that question is “because I wanted to make a new contribution to knowledge,” that is a transformative purpose.  And, by definition, it is a purpose that benefits the public, which justifies whatever minor loss a rights holder might suffer from the use.  The second step in Judge Leval’s analysis, asking if the new use is a market substitute for the original, ensures that that loss is not so great as to outweigh the benefit.  Thus we have a coherent analysis that recognizes the public purpose of copyright and still respects its chosen method for accomplishing that purpose.

Another important thing we can learn from Judge Leval’s opinion is about the difference between a transformative use and a derivative work.  The Authors Guild (really some individual authors set up as plaintiffs, because the AG has been found to lack standing to sue in this case) argues that allowing the Google Books search function usurps a right held by those authors to license indexing of their works.  This is ridiculous on its face, of course — imagine the effect such a right would have on libraries — but the judge does a nice job of explaining why it is so wrong.  The decision rests heavily on the idea/expression dichotomy that is fundamental in copyright, and stresses that what is presented in the Google Books “snippet view” is information about books (facts) rather than expressive content from those books.  A derivative work, Judge Leval suggests, is one that represents protected aspects — the expressive content — of the original in an altered form (such as a translation or a movie script).  A transformative use, on the other hand, uses information about the works, as in an index, or uses their content for a different expressive purpose, as in parody or scholarly comment.  This is a difficult distinction to make, as all of us who work in copyright know all too well, and it remains to be seen if the approach outlined above will hold up or prove useful in the full range of situations.  But it is a pointer toward a coherent way to understand a difficult part of the copyright balance.

As an aside, while reading the opinion in this case I was struck by how well the four fair use factors were handled, in a way that showed that the test used by Judge Leval respected all of the factors while essentially applying two basic questions: is the use transformative, and does the new work create a market substitute for the original?  In fact, I can suggest three specific passages that are especially exciting, I think, for the application of fair use and the issue of transformation — footnote 21 and accompanying text, which helpfully clarifies the relationship of the second fair use factor to the analysis of transformation; the full paragraph on page 33, which considers the use and misuse of the third factor; and the careful distinction of Google snippets from a case involving telephone ringtones that is found on pages 40-41.  These are discussions that I think will have a significant impact on our ongoing consideration of fair use.

Finally, we should note that the Authors Guild has already indicated its intention to ask the Supreme Court to review this decision.  This is a very bad idea, indicating that the AG simply does not know when to cut its losses and stop wasting the money provided by its members.  The real point, however, is that the Supreme Court is not likely to take the case anyway.  This is not a situation where a fundamental constitutional issue is involved, as it was in the Campbell case (fair use as a protection for free expression), nor one where a fundamental point about our obligations in the international arena is at issue, as it was in the Kirtsaeng case about the application of first sale to works of foreign manufacture.  In short, this is just a case about a greedy plaintiff who wants to be given an even bigger slice of the copyright pie, which the courts have determined repeatedly it does not deserve.  This is not the sort of issue that attracts the very limited attention of the Supreme Court.  In fact, reading the Court of Appeals’ ruling leaves one with a sense that many of the AG’s arguments were rather silly, and there is no reason to believe they would be less silly when presented to the Supreme Court in a petition for certiorari.

There are some who have argued that there is a split among the Circuit Courts of Appeal over transformative use, which is also a situation that can lead to Supreme Court review.  But that split has always been predicated on the idea that other courts, especially the Ninth Circuit, have carried the idea of transformation too far and departed from the ambit of the original doctrine.  The fact that it is Judge Leval, the author of that approach to fair use, who wrote this opinion, effectively undermines that claim.  In short, this decision closes a circle that outlines a capacious and flexible approach to fair use.  For getting us to this point, I suppose we should thank the Authors Guild for the unintentional support they have provided for a balanced copyright law in the digital age.

MOOCs and student learning

Now that the MOOC on Copyright for Educators and Librarians has finished its first run, it seems like a good time to post some reflections on what I learned from the experience.

The first thing I learned is that offering a MOOC takes a lot of work, and it is easier when that work is shared.  In my case, I was working with two wonderful colleagues — Anne Gilliland from the University of North Carolina, Chapel Hill and Lisa Macklin from Emory — who made the effort of putting the course together much more pleasant.  Both are lawyers and librarians with lots of experience teaching the issues we were dealing with, and we are all friends as well, which made the whole process a lot easier.  We also benefited from the terrific support we got from consultants working for Duke’s Center for Instructional Technology, which may be the single most MOOC-savvy group at any university.

That we had a great team was not really a surprise.  I was a bit more surprised however, and quite pleasantly, by the quality of the student discussion in our MOOC.  I had heard from other instructors about how effective the online discussion forums could be, but was just a bit skeptical.  Then I was able to watch as MOOC participants would pose difficult questions or struggle with the application of copyright law to a particular situation, and repeatedly the other course participants would work through the problem in the forums and arrive at surprisingly good answers. Peer-to-peer teaching is a reality in MOOCs, and is certainly among the best features of these courses.

One thing we know about MOOCs is that they often have participants with considerable background in the topic; often they have enrolled for a refresher or to see how someone else teaches the topic.  These people are a great asset in the MOOC.  Even if they are not among the participants most likely to complete a course according to whatever formula for completion is in place, they are tremendously important to the success of the course because of the contribution they make to peer learning in the discussion forums.

Acknowledging the contribution of “expert students” also offers a reminder to MOOC instructors to take a more humble approach to the standards we set for completion of our courses.  The open and online nature of these courses means that students enroll with a wide variety of goals in mind.  As I just said, some are experts looking to see how others teach the topic.  Completion of quizzes and such may be unimportant to such participants, even though they are getting valuable career or personal development from the course.

Along these lines, I agree wholeheartedly with this essay by Jeff Pomerantz about apologies for failing to complete a course.  Like Jeff, my colleagues and I got multiple e-mails in which participants explained their “failure” to complete the course.  Like Jeff, we often smiled to ourselves and chalked those messages up to a misunderstanding of what MOOCs are.  And like Jeff, we learned that there are so many reasons for taking a course, so many different goals that participants bring to their involvement, that it is more likely we instructors who need to get a better understanding of MOOCs.

Many of the participants in our specific course were librarians and educators; they were our target audiences, so that makes sense.  These are groups that take assignments and course completion very seriously, which was reflected in our very high completion rate (over 15%).  But it also means that these were folks who wanted to explain to us when they were not going to complete the course according to official standards.  Maybe they did not realize that we were unable to track participation at an individual level due to the technology and the volume of students.  Nevertheless, we needed to treat their desire to explain with respect, and to recognize that many of those who did not earn a certificate of completion probably got what they wanted from the course, and also very likely made important contributions to what other participants learned.

Last week I attended a meeting of Duke’s MOOC instructors, which focused on discussions about how we can use data available about the MOOCs to learn more about the teaching and learning process.  It was a fascinating meeting on several levels, but one thing I got from it was two stories about the kinds of goals that MOOC participants might have.

  • One faculty member who had taught a MOOC incidentally explained his own motivation for taking a different online course.  His own career as a student had been so focused on his own specialty that he had never gotten a chance to take a basic course in a different field that had always interested him.  “There was so much to learn,” he said, “and so little time.”  A MOOC gave him a chance to fill that long-felt gap, and I will bet that he was a valuable student to have in the course: highly motivated, like so many MOOC participants, whether or not he finished the assignments that led to completion.
  • One of the administrators of Duke’s online initiative told about overhearing two students discussing the fact that each was taking a MOOC, and interrupting the conversation to ask why each had enrolled.  One of the women was a Ph.D. student who explained that there were certain areas of study or skills that she needed to complete her dissertation that were most efficiently gained by taking parts of a MOOC or two.  She registered in order to listen to selected videos that have relevance for her specific research.  She is a perfect example of someone who will not count toward a completion statistic but who is gaining something very valuable through her participation.

The other thing I learned from this meeting about potential research enabled by MOOCs is the myriad ways that these online courses can help improve teaching and learning on our own campus.  Duke has said all along that improving the experience of our own students was an important goal of our involvement with MOOCs.  When I heard this, I usually thought about flipped classrooms.  But that is a very small part of what MOOCs can do for us, I discovered.  I was privileged to listen to a comprehensive discussion about how the data we gather from MOOCs can be used to improve the student experience in our regular classrooms.  Very specific questions were posed about the role of cohorts, the impact of positive and negative feedback, how we can harness the creative ideas students raise during courses, and how to better assess the degree to which individual students have met the unique goals they brought to the course.  All of this has obvious application well beyond the specific MOOC context.

The most important thing I learned from the experience of teaching a MOOC actually has little to do with online courses as such.  It is a renewed respect for the complexity and diversity of the learning process itself, and a sense of awe at being allowed to play a small role in it.


Signing My Rights Away (a guest post by Jennifer Ahern-Dodson)

NOTE — Authorship can be a tricky thing, impacted by contractual agreements and even by shifting media.  In this guest post by Jennifer Ahern-Dodson of Duke’s Thompson Writing Program we get an additional perspective on the issues, one that is unusual but might just become more common over time.  It illustrates nicely, I think, the link between authorship credit, publication agreements, and a concern for managing one’s online identity.  A big “thank you” to Jennifer for sharing her story:

Signing My Rights Away

Jennifer Ahern-Dodson

I stared at my name on the computer screen, listed in an index as a co-author for a chapter in a book that I don’t remember writing. How could I be published in a book and not know about it? I had Googled my name on the web (what public digital humanist Jesse Stommel calls the Googlesume), as part of my research developing a personal website through the Domain of One’s Own project, which emphasizes student and faculty control of their own web domains and identities. Who am I online? I started this project to find out.

I was taken aback by some of what I found because it felt so personal—my father’s obituary, a donation I had made to a non-profit, former home addresses. All of that is public information, so I shouldn’t have been surprised, but then about four screens in I found my name listed in the table of contents for a book I’d never heard of. Because the listed co-author and I had collaborated on projects before, including national presentations and a journal publication, I wondered if I had just forgotten something we’d written together.

I emailed her immediately and included a screenshot of the index page. Subject line: “Did we write this?”

She wrote back a few minutes later.

WHAT??!!!  We have a book chapter that we didn’t even know about???!!!!!  How is this possible?  Ahahahahahahahaha!!!!!

It’s a line for our CV! But, wait, what is this publication? Do we even want to list it? Would we list it as a new publication? Is it even our work? How did this happen?

This indeed was a mystery. At the time this was all unfolding, I was participating in a multidisciplinary faculty writing retreat. Once I shared the story with fellow writers, they enthusiastically joined in the brainstorming and generated a wide range of theories including plagiarism, erroneous attribution, a reprint, and an Internet scam (see Figure below). I mapped the possibilities for this curious little chapter called “Service Learning Increases Science Literacy,” listed on page 143 of the book National Service: Opposing Viewpoints (2011)[1].

 

[Image: AD picture 1]

I needed to do more research and so requested the book through Interlibrary Loan and purchased it online as well.

And then there was the story of the editor. Who was she? Did she really exist? Was she a robot editor—just a name added to the front of a book jacket? I started wondering, now that so much of our work is digitized, are robots reading—and culling through—our work more than people? A quick search on Google revealed she was the editor for over 300 books, mostly for young adults. Follow up searches on LinkedIn and Google+ revealed profiles that seemed authentic.

The book arrives.

About a week later, the book arrived through Interlibrary Loan.  While still standing at the library service desk, I quickly flipped to page 143.

[Image: AD picture 2]

What I discovered is a reprint (with a new title) of an article my co-author and I had published in the Journal of College Science Teaching.[2] It was republished with permission through the journal, conveyed through the Copyright Clearance Center.  The table of contents included a range of authors and works, including an excerpt from a speech by George W. Bush.

It all looked legitimate. But how could I be published and not know about it?

In an email conversation with Kevin Smith, my university’s scholarly communication director and copyright specialist, I learned that typically in publication agreements, authors transfer copyright to the organization that publishes the journal. From then on, the organization has nearly total control. It can do what it wants with the article (like republish it or modify it), and for most other uses I might want to make (like including it on my website), I’d have to ask their permission.

AD picture 3

I also learned that republication is not uncommon. Although this book is marketed as “new,” it is really just repackaged material from other sources that libraries likely already have. In this case, our article for a college teaching journal was repackaged for an audience of high school teachers as part of an opposing viewpoints series, essentially marketing the same content to a different audience.

In a slightly different repackaging model, MIT Press has started republishing scholarly articles from its journals in a thematically curated eBook series called Batches.

These two models made visible for me the ways that copyright, institutional claims, and the Internet fuel change at a pace so rapid it seems almost impossible for authors to keep up.

Where to go from here

Although the ending to this mystery is not as thrilling as I thought it would be (Someone plagiarized our work! Someone recorded and transcribed a talk! The book is a scam!), what I uncovered was this whole phenomenon of book republishing. Our chapter was legitimately repackaged in a mass-marketed book with copyright secured, which allowed our work to be shared with a broader audience (which I see as a good thing). Yet the process distanced me from my work in a way I was not expecting. In my naïve, yet I suspect widely held, view of academic authorship, I assumed the contract I had signed was simply a formality, more of a commitment by the journal to publish the article and an agreement by my co-author and me to do so. I only skimmed the contract, distracted perhaps by the satisfaction of getting published and the opportunity to circulate my ideas more broadly.

As I submerged myself into the murky depths of republishing, I started to think about my own responsibility, as both a writer and a teacher of undergraduate writers, to educate myself on authors’ rights. Could I negotiate publishing agreements to retain copyright? Or, at the very least, could I secure flexibility to re-use my work? As it turns out, yes. The Scholarly Publishing and Academic Resources Coalition (SPARC) has created an Author Addendum to help authors manage their copyright and negotiate with publishers rather than relinquishing their intellectual property.

Although it is not uncommon for publishers to ask authors to sign over their legal rights to their work, at least one publisher—Nature Publishing, which includes the journals Scientific American and Nature—goes even further. It requires authors to waive not only their legal rights but also their “moral rights.” Under such an agreement, work could conceivably be republished without attribution to the original author. There was a story about this a couple of months ago; see http://chronicle.com/article/Nature-Publishing-Group/145637/.

In my case, I clearly did not do due diligence as an author when I read and signed the agreement for the science literacy article, and neither the journal nor the book editor or publisher was under any legal obligation to notify me that my work was republished or retitled. I wonder, however, what would happen if we applied the concept of academic hospitality to our publishing relationships. Could a simple email notification when/if our work gets republished be a kind of professional courtesy we can expect? Or, should we as authors get more comfortable with less control over our work and choose to share our ideas more liberally in public domains in addition to academic journals, which have limited readership and at times draconian author agreements? Do institutions have any role to play in educating their faculty and graduate students about signing agreements?

In my quest to create a domain of my own, to “reclaim the web” and be an agent in crafting my own author identity online, I discovered that, in fact, I had given up control of some of my own work. Now, I’m aware of the need to balance going public with my work—both online and in print—with a thoughtful and informed understanding of my rights and responsibilities as an academic author.

[1] Gerdes, Louise, ed. National Service: Opposing Viewpoints. Greenhaven Press, 2011.

[2] Reynolds, J. and Ahern-Dodson, J. “Promoting science literacy through Research Service-Learning, an emerging pedagogy with significant benefits for students, faculty, universities, and communities.” Journal of College Science Teaching 39.6 (2010).


Planning for musical obsolescence

Gustavo Dudamel is one of the most celebrated conductors of his generation.  As Music Director of both the Los Angeles Philharmonic and the Simon Bolivar Orchestra of Venezuela, he has built a solid and enthusiastic following amongst lovers of symphonic music.  He is also, according to his website bio, deeply committed to “access to music for all.”  So it is particularly poignant that a recording by Dudamel should serve as the prime example of a new access problem for music.

When Dudamel and the Los Angeles Philharmonic released a new recording of a live performance of Hector Berlioz’s Symphonie Fantastique, it should have been a significant event, another milestone in the interpretation of that great work.  But in this particular case we are entitled to wonder whether the recording will really have any impact, or whether it will drop into obscurity, almost unnoticed.

Why would such a question arise?  Because the Dudamel/LA Philharmonic recording was released only as a digital file and under licensing terms that make it impossible for libraries to purchase, preserve and make the work available.  When one goes to the LA Philharmonic site about this recording of Symphonie Fantastique and tries to purchase it, one is directed to the iTunes site, and the licensing terms that accompany the “purchase” — it is really just a license — restrict the user to personal uses.  Most librarians believe that this rules out traditional library functions, including lending for personal listening and use in a classroom.  Presumably, it would also prevent a library from reformatting the work for preservation purposes in order to help the recording outlive the inevitable obsolescence of the MP3 or MP4 format.  Remember that the section 108 authorization for preservation copying by libraries has restrictions on digital preservation and also explicitly allows contractual provisions to override that part of the law.

At a recent consultation to discuss this problem, it was interesting to note that several of the lawyers in the room encouraged the librarians to just download the music anyway and ignore the licensing terms, simply treating this piece of music like any other library acquisition.  Their argument was that iTunes and the LA Philharmonic really do not mean to prevent library acquisitions; they are just using a boilerplate license without full awareness of the impact of its terms.  But the librarians were unwilling.  Librarians as a group are very law-abiding and respectful of the rights of others.  And as a practical matter, libraries cannot build a collection by ignoring licensing terms; it would be even more confusing and uncertain than it is to try to comply with the myriad licensing terms we encounter every day!

In the particular case of the Dudamel recording of Berlioz, we know rather more about the situation than is normal, because a couple of intrepid librarians tried valiantly to pursue the issue.   Judy Tsou and John Vallier of the University of Washington tracked the rights back from the LA Philharmonic, through Deutsche Grammophon, to Universal Music Group, and engaged UMG in a negotiation for library-friendly licensing.  The response was, as librarians have come to expect, both inconsistent and discouraging.  First, Tsou and Vallier were told that an educational license for the download was impossible, but that UMG could license a CD.  Later, they dropped the idea of allowing the library to burn a CD from the MP3 and said an educational license for download was possible, but only for up to 25% of the “album.”  For this 25% there would be a $250 processing fee as well as an unspecified additional charge that would make the total cost “a lot more” than the $250.  Even worse, the license would be limited to 2 years, making preservation impossible. The e-mail exchange asserts that UMG is “not able” to license more than 25% of the album for educational use, which suggests that part of the problem is that the rights ownership and licensing through to UMG is tangled.  But in any case, this is an impossible proposal.  The cost is absurd for one quarter of an album, and what sense does it make for a library to acquire only part of a performance like this for such a limited time? Such a proposal fundamentally misunderstands what libraries do and how important they are to our cultural memory.

Reading over the documents and messages in this exchange, it is not at all clear what role Maestro Dudamel and the LA Philharmonic have in this mess.  It is possible that they simply do not know how the recording is being licensed or that it is unavailable for libraries to acquire and preserve.  Or they may think that by releasing the recording in digital format only they are being up-to-date and actually encouraging access to music for everyone.  In either case, they have a responsibility to know more about the situation, because the state of affairs they have allowed impedes access, in direct contradiction to Maestro Dudamel’s express commitment, and it ensures that this recording will not be part of the ongoing canon of interpretation of Berlioz.

As far as access is concerned, the form of its release means that people who cannot afford an MP3 player will not be able to hear this recording.  Many of those people depend on libraries, and that option will be closed to them because libraries cannot acquire the album.  Also, access will become impossible at that inevitable point in time when this format for digital music becomes obsolete.  Maybe UMG and the Philharmonic will pay attention and release the recording on a different format before that happens, but maybe they won’t.  The most reliable source of preservation is libraries, and they will not be there to help with this one.  So access for listeners 20 or 30 years from now is very much in question.

This question of the future should have great consequence for Maestro Dudamel and the orchestra.  Without libraries that can collect their recording, how will it be used in classrooms in order to teach future generations of musicians?  Those who study Berlioz and examine the performance history of the Symphonie Fantastique simply may not know about this performance by Dudamel and the LA Philharmonic.  That performance, regardless of how brilliant it is, may get, at best, a footnote in the history of Berlioz — “In 2013 the Symphonie Fantastique was recorded by the LA Philharmonic under the baton of Gustavo Dudamel; unfortunately, that recording is now lost.”  These licensing terms matter, and without due attention to the consequences that seemingly harmless boilerplate like “personal use only” can produce, a great work of art may be doomed to obscurity.

A MOOC on copyright

It has taken a while to get here, but I am happy to be able to announce that two of my colleagues and I will be offering a four-week MOOC on copyright designed to help teachers and librarians deal with the daily challenges they encounter in regard to managing what they create and using what they need.

The MOOC will be offered on the Coursera platform and will run for the first time starting July 21.  It is available as of today for folks to sign up at https://www.coursera.org/course/cfel.

It has been a great pleasure working with Anne Gilliland from the University of North Carolina Chapel Hill and Lisa Macklin from Emory University to create this course.  I hope and believe that the course is much stronger because the three of us worked together than it could possibly have been if any one of us did it alone.

The course runs for four weeks and focuses on U.S. copyright law.  While we are well aware of all the MOOC participants from other countries — and welcome folks from all over to join us — we also wanted to keep the course short and as focused as possible.  We hope perhaps to do other courses over time, and a more in-depth attention to international issues and to how copyright works on the global Internet might be a good future topic.  In the meanwhile, this course deals with the U.S. law and the specific situations and issues that arise for librarians and educators at all levels.

We especially hope to attract K-12 teachers, who encounter many of the same issues that arise in higher education, and who often have even fewer resources to appeal to for assistance.  That is one reason for the summertime launch.

Another point about the focus in this course — our goal is to provide participants with a practical framework for analyzing copyright issues that they encounter in their professional work. We use a lot of real life examples — some of them quite complex and amusing — to help participants get used to the systematic analysis of copyright problems.

For many in the academic library community, the winding up of the courses offered by the Center for Intellectual Property at the University of Maryland University College has left a real gap.  This course is intentionally a first step toward addressing that gap.  It is, of course, free, and a statement of accomplishment is available for all participants who complete the course.  We hope this can assist our colleagues in education with some professional development and perhaps, depending on local rules, even satisfy continuing education requirements.

We very much hope that this course will be a service to the library and education community, and that it provides a relatively fun and painless way to go deeper into copyright than the average presentation or short workshop allows.

Copyright roundup 3 — Changes in UK law

In this final installment of the copyright roundup I have been doing this week, I want to note some remarkable developments in the copyright law of the United Kingdom, where a hugely significant revision of the statute received final approval this month and will be given royal assent, the last stage of becoming law, in June.

Readers may recall that the UK undertook a study of how to reform copyright law in ways that would encourage more innovation and economic competitiveness.  The resulting report, called the Hargreaves Report, made a number of recommendations, many of which were focused on creating limitations and exceptions to the exclusive rights in copyright so that the law would work more like it does in the U.S., including the flexibility provided by fair use.  The final results of this legislative process do not include an American-like fair use provision, but they do result in a significant expansion of the fair dealing provisions in U.K. law to better accomplish some of the same things fair use has allowed in the U.S.

Fair dealing is found in a couple of provisions of the British law and allows certain specified activities if those activities are done in a “fair” manner, with specified criteria for fairness.  Until now the categories have been narrow and few, but Parliament has just expanded them dramatically.  A description of this expansion from the Chartered Institute of Library and Information Professionals can be found on the CILIP site.  A number of activities that are probably permitted by fair use in the U.S. are now also encompassed by fair dealing in Britain, including private copying, copying by libraries in order to provide those copies to individual users, and some significant expansion of the ability to make copies for the purpose of education.

On this last point, I wonder if the two British university presses that are suing a U.S. university over educational copying have noticed that the tide is against them even at home.

There is an explanatory memo about these changes written by the U.K. Intellectual Property Office available here.  It is interesting to see how certain goals that have been accomplished by the courts in the U.S. and, importantly, in Canada are now intentionally being supported in this British legislation.  As I say, we are seeing a fairly strong international tide pushing towards expanded user rights in the digital environment, lest legacy industries use copyright to suppress economic development in their anxiety to prevent competition.

Several points about this legislative reform seem especially important to me.

First is the emphasis in several of the new provisions on supporting both research with and preservation of sound recordings and film.  This is one of several places where the U.K. may reasonably be said to have just leapfrogged over the United States, since the provisions about non-profit use and preservation of music and film remain a mess in our law.

Second, the British are now adopting an exception for text and data mining into their law.  This is huge, and reinforces the idea I have expressed before that libraries should be reluctant about agreeing to licensing terms around TDM; the rights are likely already held by users in many cases, so those provisions really would have the effect, despite being promoted as assisting research, of putting constraints (and sometimes added costs) on what scholars can already do.  This is probably true in the U.S., where fair use likely gets us further than vendor licenses would, and it has now been made explicit in the U.K.

Another major improvement in the U.K. over U.S. copyright is the fact, explained in the CILIP post, that

[M]any of these core “permitted acts” in copyright law given to us by parliament [will] not be able to be overridden by contracts that have been signed.  This is of vital importance, as without this provision, existing and new exceptions in law could subsequently simply be overridden by a contract. Also many contracts are based in the laws of other countries (often the US). This important provision means that libraries and their users no longer need to worry about what the contract allows or disallows but just apply UK copyright exceptions to the electronic publications they have purchased.

This type of approach is desperately needed in the U.S.  If we truly believe that the activities that are supported by core exceptions to the rights under copyright, like education, library services and fair use, are beneficial to society and part of the basic public purpose of copyright, they should remain in place regardless of provisions inserted into private law contracts.  Now that the British have made this acknowledgement, it is time for the U.S. to catch up.

Competitiveness is often an important part of the discussion over copyright law.  Rights holders argue that terms should be lengthened and enforcement improved in order to enhance competition with other nations.  The U.K. began its copyright reform process in order to improve its ability to compete for high-tech business.  And this new revision of the British law puts the U.S. back in a situation where we must continue to strengthen not the rights of legacy industries but the rights of users — which is where innovation will come from — because other parts of the world are moving past us in this area.  How do we do this, in the two key areas I have identified?  In the area of the right to mine text and data for non-profit research purposes, this is something our courts can do, through interpretation of the fair use provision.  We can hope that such an opinion might appear in the near future, although I am not aware of what case might prompt it.  But contract preemption is something that Congress will have to address.  If the U.S. Congress is serious about copyright reform, and really wants to help it to continue to be a tool of economic progress in the U.S., it should put the issue of making user rights exceptions impervious to contract provisions that attempt to limit or eliminate them at the top of the legislative agenda.

Walking the talk

All of the presentations at the SPARC Open Access meeting this week were excellent.  But one was really special: an early-career researcher named Erin McKiernan brought everyone in the room to their feet to applaud her commitment to open access.  We are sometimes told that only established scholars who enjoy the security of tenure can “afford” to embrace more open ways to disseminate their work.  But Dr. McKiernan explained to us both the “why” and the “how” of a deep commitment to OA on the part of a younger scholar who is not willing to embrace traditional, toll-access publishing or to surrender her goals of advancing scholarship and having an academic career.

Erin McKiernan holds a Ph.D. from the University of Arizona and is now working as a scientist and teacher in Latin America.  Her unique experience informs her perspective on why young scholars should embrace open access.  Dr. McKiernan is a researcher in medical science at the National Institute of Public Health in Mexico and teaches (or has taught) at a couple of institutions in Latin America.  For her, the issue is that open access is fundamental to her ability to do her job; she told us that the research library available to her and her colleagues has subscriptions to only 139 journals, far fewer than most U.S. researchers expect to be able to consult.  Twenty-two of that number are only available in print format, because electronic access is too expensive.  This group includes key titles like Nature and Cell.  A number of other titles that U.S. researchers take for granted as core to their work — she mentioned Nature Medicine and PNAS — are entirely unavailable because of cost.  So in an age when digital communications ought to, at the very least, facilitate access to information needed to improve health and treat patients, the cost of these journals is, in Dr. McKiernan’s words, “impeding my colleagues’ ability to save lives.”  She made clear that some of these journals are so expensive that the choice is often between a couple of added subscriptions or the salary of a researcher.

This situation ought to be intolerable, and for Dr. McKiernan it is.  She outlined for us a personal pledge that ought to sound quite familiar.  First, she will not write, edit or review for a closed-access journal. Second, she will blog about her scientific research and post preprints of her articles so that her work is both transparent and accessible.  Finally, she told us that if a colleague chose to publish a paper on which she was a joint author in a closed-access journal, she would remove her name from that work.  This is a comprehensive and passionately-felt commitment to do science in the open and to make it accessible to everyone who could benefit from it — clinicians, patients and the general public as well as other scholars.

Listening to Dr. McKiernan, I was reminded of a former colleague who liked to say that he “would rather do my job than keep my job.”  But, realistically, Dr. McKiernan wants to have a career as a teacher and research scientist.  So she directly addressed the concerns we often hear that this kind of commitment to open access is a threat to promotion and tenure in the world of academia.  We know, of course, that some parts of this assertion are based on false impressions and bad information, such as the claim that open access journals are not peer-reviewed or that such peer-review is necessarily less rigorous than in traditional subscription journals.  This is patently false and really makes little sense — why should good peer-review be tied to a particular business model?  Dr. McKiernan pointed out that peer-review is a problem, but not just for open access journals.  We have all seen the stories about growing retraction rates and gibberish articles.  But these negative perceptions about OA persist, and Dr. McKiernan offered concrete suggestions for early-career researchers who want to work in the open and also get appropriate credit for their work.  Her list of ideas was as follows (with some annotations that I have added):

1. Make a list of open access publication options in your particular field.  Chances are you will be surprised by the range of possibilities.

2.  Discuss access issues with your collaborators up front, before the research is done and the articles written.

3. Write funds for article processing charges for Gold open access journals into all of your grant applications.

4. Document your altmetrics.

5. Blog about your science, and in language that is comprehensible to non-scientists.  Doing this can ultimately increase the impact of your work and can even lead to press coverage, and to better press coverage at that.

6. Be active on social media.  This is the way academic reputations are built today, so ignoring the opportunities presented is unwise.

7. If for some reason you do publish a closed-access article, remember that you still have Green open access options available; you can self-archive a copy of your article in a disciplinary or institutional repository.  Dr. McKiernan mentioned that she uses FigShare for her publications.

The most exciting thing about Erin McKiernan’s presentation was that it demolished, for many of us, the perception of open access as a risky choice for younger academics.  After listening to her expression of such a heartfelt commitment — and particularly the pictures of the people for whom she does her work, which puts a more human face on the cost of placing subscription barriers on scholarship — I began to realize that, in reality, OA is the only choice.


Please propose to us

Later this year, the first in a new series of Scholarly Communication Institutes will be held here in the Research Triangle and we are looking for proposals from diverse and creative teams of people who are interested in projects that have the potential to reshape scholarly communications.

Last year the Andrew W. Mellon Foundation funded a three-year project to continue the long-running Scholarly Communications Institute, which has previously been held at the University of Virginia.  Starting in November, the new SCI will be hosted by Duke in close collaboration with UNC Chapel Hill, NC State University, North Carolina Central University and the Triangle Research Libraries Network.  This new iteration of the SCI will benefit, we believe, from the extraordinary depth and diversity of resources related to innovation in scholarly communications here in the Triangle, and it will also take on a new format, in which participants will have a major role in setting the agenda each year.

Starting this year — starting right now! — the SCI invites applications from working groups of 3-8 people that are organized around a project or problem that concerns scholarly communications.  These working groups can and should be diverse, consisting of scholars, librarians, publishers, technologists and folks from outside academia (journalism? museums? non-profits?).  We hope that proposals will be very creative about connections, and include people that would like to work together even if they have not previously been able to do so.

The SCI Advisory panel will select 3 to 5 of these working group proposals and cover the costs for those teams to travel to the Triangle and spend four days together in Chapel Hill in a setting that is part retreat, part seminar, part development sprint and part un-conference.  We want these groups to work together and to interact.  The groups will, we hope, jump-start their own projects and “cross-pollinate” ideas that will advance and challenge each other’s projects and discussions.

The theme for the 2014 SCI is Scholarship and the Crowd.  It will be held November 9-13 at the Rizzo Center in Chapel Hill, NC.  Proposals are due by March 24.

The goal of the SCI is not to schedule breakthroughs but to create conditions that favor them.  The Working Groups selected will set the agenda and define the deliverables.  The Institute will offer the space, the environment and the network of peers to foster creative thinking, with the hope of both advancing the specific projects and also developing ideas and perspectives that can give those projects a broader potential to influence the landscape of publishing, digital humanities and other topics related to scholarly communications.

If you or someone you know might be interested in developing a proposal for this first Triangle-based SCI, you will find the call for proposals and an FAQ at trianglesci.org.