Category Archives: Libraries

How useful is the EU’s gift to libraries?

On Thursday the European Union’s Court of Justice issued an opinion that allows libraries to digitize books in their holdings and make those digital copies accessible, on site, to patrons.  In a way, this is a remarkable ruling that recognizes the unique place of libraries in the dissemination and democratization of knowledge.  Yet the decision does not really give libraries a tool that promises to be very useful.  It is worth taking a moment, I think, to reflect on what this EU ruling is, what it is not, and how it compares to the current state of things for U.S. libraries.

There are news stories about the EU ruling here, here and here.

What the EU Court of Justice said is that, based on the EU “Copyright Directive,” libraries have permission to make digital copies of works in their collections and make those copies available to patrons on dedicated terminals in the library.  The Court is interpreting language already found in the Directive, and adding two points: first, that library digitization is implied by the authorization for digital copies on dedicated terminals contained in the Directive, and, second, that this is permissible even if the publisher is offering a license agreement.  Finally, the Court makes clear that this ruling does not permit further distribution of copies of the digitized work, either by printing or by downloading to a patron’s storage device.

As far as recognizing what this decision is not, it is very important to realize that it is not the law in the United States.  It is easy sometimes, when the media gets ahold of a copyright-related story, to forget that different jurisdictions have different rules.  The welter of copyright information, guidelines, suggestions and downright misinformation can make the whole area so complex that simple principles can be forgotten.  So let’s remind ourselves that this interesting move comes from the European Union Court of Justice and is the law only for the EU member states.

The other thing this ruling is not is broad permission for mass digitization.  The authorization is restricted to copies that are made available to library patrons on dedicated terminals in the library.  It does not permit widespread distribution over the Internet, just “reading stations” in the library.  That restriction makes it unlikely, in my opinion, that many European libraries would invest in the costs of mass digitization just for such a relatively small benefit.

So how does this ruling in the EU compare to the rights and needs of libraries in the U.S.?

Let’s consider section 108(c) of the U.S. copyright law, which permits copying of published works for preservation purposes.  That provision seems to get us only a little way toward what the EU Court has allowed.  Under 108(c), a U.S. library could digitize a book if three conditions were met.  First, the digitization must be for the purpose of preserving a book from the collection that is damaged, deteriorating, or permanently missing.  Second, an unused replacement for the book must not be available at a fair price.  Third, the digital copy may not be made available to the public outside of the library’s premises.  This last condition is similar, obviously, to the EU’s dedicated terminal authorization; a patron can read the digital copy only while present in the library.

Two differences between the EU ruling and section 108(c) are especially interesting:

  1. The works for which this type of copying is allowed in the U.S. are much more limited.  The EU says that libraries can digitize any book in their collection, even if it is not damaged or deteriorating, and even if another copy, even an electronic one, could be purchased.  This seems like the major place where the EU Court has expanded the scope for library digitization.
  2. On the other hand, the use of a digital copy may be less restricted in the U.S.  Instead of a dedicated terminal, a U.S. library could, presumably, make the copy available on a restricted network, so that more than one patron could use it at a time, as long as all of them were only able to access the digital copy while on the library premises.

In the U.S., of course, libraries also can rely on fair use.  Does fair use get us closer to being able to do in the U.S. what is allowed to European libraries?  Maybe a little closer.  Fair use might get us past the restriction in 108(c) about only digitizing damaged books; we could conceivably digitize a book that did not meet the preservation standard if we had a permissible purpose.  And the restriction of that digitized book to in-library use only would help with the fourth fair use factor, impact on the market.  But still we would have issues about the purpose of the copying and the nature of the original work.  Would general reading be a purpose that supports fair use?  I am not sure.  And what books could we (or could we not) digitize?  The specific book at issue in the case before the EU Court was a history textbook.  But textbooks might be especially hard for a U.S. library to justify digitizing for even limited access under fair use.

If we wanted to claim fair use for digitizing a work for limited, on site access, my first priority would be to ask why — what is the purpose that supports digitization?  Is a digital version superior for some articulable reason to the print copy we own (remembering that if the problem is condition, we should look to 108)?  One obvious purpose would be for use with adaptive software by disabled patrons.  Also, I would look at the type of original; as I said, I think a textbook, such as was at issue in the EU case, would be harder to justify under U.S. fair use, although some purposes, such as access for the disabled, might do it.  Finally, I would look at the market effect.  Is a version that would meet the need available?  Although the EU Court said that European libraries did not need to ask this question, I think in the U.S. we still must.

Ultimately, the EU Court gave European libraries a limited but useful option here.  Unfortunately, in the U.S. we have only pieces of that option available to us, under different parts of the U.S. law.  It will be interesting to see whether, in this age of copyright harmonization, U.S. officials begin to reconsider this particular slice of library needs because of what the EU has ruled.


Planning for musical obsolescence

Gustavo Dudamel is one of the most celebrated conductors of his generation.  As Music Director of both the Los Angeles Philharmonic and the Simon Bolivar Orchestra of Venezuela, he has built a solid and enthusiastic following amongst lovers of symphonic music.  He is also, according to his website bio, deeply committed to “access to music for all.”  So it is particularly poignant that a recording by Dudamel should serve as the prime example of a new access problem for music.

When Dudamel and the Los Angeles Philharmonic release a new recording of a live performance of Hector Berlioz’s Symphonie Fantastique, it should be a significant event, another milestone in the interpretation of that great work.  But in this particular case we are entitled to wonder if the recording will really have any impact, or if it will drop into obscurity, almost unnoticed.

Why would such a question arise?  Because the Dudamel/LA Philharmonic recording was released only as a digital file and under licensing terms that make it impossible for libraries to purchase, preserve and make the work available.  When one goes to the LA Philharmonic site about this recording of Symphonie Fantastique and tries to purchase it, one is directed to the iTunes site, and the licensing terms that accompany the “purchase” — it is really just a license — restrict the user to personal uses.  Most librarians believe that this rules out traditional library functions, including lending for personal listening and use in a classroom.  Presumably, it would also prevent a library from reformatting the work for preservation purposes in order to help the recording outlive the inevitable obsolescence of the MP3 or MP4 format.  Remember that the section 108 authorization for preservation copying by libraries has restrictions on digital preservation and also explicitly allows contractual provisions to override that part of the law.

At a recent consultation to discuss this problem, it was interesting to note that several of the lawyers in the room encouraged the librarians to just download the music anyway and ignore the licensing terms, simply treating this piece of music like any other library acquisition.  Their argument was that iTunes and the LA Philharmonic really do not mean to prevent library acquisitions; they are just using a boilerplate license without full awareness of the impact of its terms.  But the librarians were unwilling.  Librarians as a group are very law-abiding and respectful of the rights of others.  And as a practical matter, libraries cannot build a collection by ignoring licensing terms; doing so would be even more confusing and uncertain than trying to comply with the myriad licensing terms we encounter every day!

In the particular case of the Dudamel recording of Berlioz, we know rather more about the situation than is normal, because a couple of intrepid librarians tried valiantly to pursue the issue.  Judy Tsou and John Vallier of the University of Washington tracked the rights back from the LA Philharmonic, through Deutsche Grammophon to Universal Music Group, and engaged UMG in a negotiation for library-friendly licensing.  The response was, as librarians have come to expect, both inconsistent and discouraging.  First, Tsou and Vallier were told that an educational license for the download was impossible, but that UMG could license a CD.  Later, they dropped the idea of allowing the library to burn a CD from the MP3 and said an educational license for download was possible, but only for up to 25% of the “album.”  For this 25% there would be a $250 processing fee as well as an unspecified additional charge that would make the total cost “a lot more” than the $250.  Even worse, the license would be limited to 2 years, making preservation impossible. The e-mail exchange asserts that UMG is “not able” to license more than 25% of the album for educational use, which suggests that part of the problem is that the rights ownership and licensing through to UMG is tangled.  But in any case, this is an impossible proposal.  The cost is absurd for one quarter of an album, and what sense does it make for a library to acquire only part of a performance like this for such a limited time? Such a proposal fundamentally misunderstands what libraries do and how important they are to our cultural memory.

Reading over the documents and messages in this exchange, it is not at all clear what role Maestro Dudamel and the LA Philharmonic have in this mess.  It is possible that they simply do not know how the recording is being licensed or that it is unavailable for libraries to acquire and preserve.  Or they may think that by releasing the recording in digital format only they are being up-to-date and actually encouraging access to music for everyone.  In either case, they have a responsibility to know more about the situation, because the state of affairs they have allowed impedes access, in direct contradiction to Maestro Dudamel’s express commitment, and it ensures that this recording will not be part of the ongoing canon of interpretation of Berlioz.

As far as access is concerned, the form of its release means that people who cannot afford an MP3 player will not be able to hear this recording.  Many of those people depend on libraries, and that option will be closed to them because libraries cannot acquire the album.  Also, access will become impossible at that inevitable point in time when this format for digital music becomes obsolete.  Maybe UMG and the Philharmonic will pay attention and release the recording in a different format before that happens, but maybe they won’t.  The most reliable source of preservation is libraries, and they will not be there to help with this one.  So access for listeners 20 or 30 years from now is very much in question.

This question of the future should have great consequence for Maestro Dudamel and the orchestra.  Without libraries that can collect their recording, how will it be used in classrooms in order to teach future generations of musicians?  Those who study Berlioz and examine the performance history of the Symphonie Fantastique simply may not know about this performance by Dudamel and the LA Philharmonic.  That performance, regardless of how brilliant it is, may get, at best, a footnote in the history of Berlioz — “In 2013 the Symphonie Fantastique was recorded by the LA Philharmonic under the baton of Gustavo Dudamel; unfortunately, that recording is now lost.”  These licensing terms matter, and without due attention to the consequences that seemingly harmless boilerplate like “personal use only” can produce, a great work of art may be doomed to obscurity.

A MOOC on copyright

It has taken a while to get here, but I am happy to be able to announce that two of my colleagues and I will be offering a four-week MOOC on copyright designed to help teachers and librarians deal with the daily challenges they encounter in managing what they create and using what they need.

The MOOC will be offered on the Coursera platform and will run for the first time starting July 21.  It is available as of today for folks to sign up at https://www.coursera.org/course/cfel.

It has been a great pleasure working with Anne Gilliland from the University of North Carolina Chapel Hill and Lisa Macklin from Emory University to create this course.  I hope and believe that the course is much stronger because the three of us worked together than it could possibly have been if any one of us did it alone.

This course will be four weeks in duration and will focus on U.S. copyright law.  While we are well aware of all the MOOC participants from other countries — and welcome folks from all over to join us — we also wanted to keep the course short and as focused as possible.  We hope perhaps to do other courses over time, and more in-depth attention to international issues and to how copyright works on the global Internet might be a good future topic.  In the meanwhile, this course deals with the U.S. law and the specific situations and issues that arise for librarians and educators at all levels.

We especially hope to attract K-12 teachers, who encounter many of the same issues that arise in higher education, and who often have even fewer resources to appeal to for assistance.  That is one reason for the summertime launch.

Another point about the focus in this course — our goal is to provide participants with a practical framework for analyzing copyright issues that they encounter in their professional work. We use a lot of real life examples — some of them quite complex and amusing — to help participants get used to the systematic analysis of copyright problems.

For many in the academic library community, the winding up of the courses offered by the Center for Intellectual Property at the University of Maryland University College has left a real gap.  This course is intentionally a first step toward addressing that gap.  It is, of course, free, and a statement of accomplishment is available for all participants who complete the course.  We hope this can assist our colleagues in education with some professional development, and maybe, depending on local requirements, even continuing education requirements.

We very much hope that this course will be a service to the library and education community, and that it provides a relatively fun and painless way to go deeper into copyright than the average presentation or short workshop allows.

Please propose to us

Later this year, the first in a new series of Scholarly Communication Institutes will be held here in the Research Triangle and we are looking for proposals from diverse and creative teams of people who are interested in projects that have the potential to reshape scholarly communications.

Last year the Andrew W. Mellon Foundation funded a three-year project to continue the long-running Scholarly Communications Institute, which has previously been held at the University of Virginia.  Starting in November, the new SCI will be hosted by Duke in close collaboration with UNC Chapel Hill, NC State University, North Carolina Central University and the Triangle Research Libraries Network.  This new iteration of the SCI will benefit, we believe, from the extraordinary depth and diversity of resources related to innovation in scholarly communications here in the Triangle, and it will also take on a new format, in which participants will have a major role in setting the agenda each year.

Starting this year — starting right now! — the SCI invites applications from working groups of 3 – 8 people that are organized around a project or problem that concerns scholarly communications.  These working groups can and should be diverse, consisting of scholars, librarians, publishers, technologists and folks from outside academia (journalists? museums? non-profits?).  We hope that proposals will be very creative about connections, and include people who would like to work together even if they have not previously been able to do so.

The SCI Advisory panel will select 3 to 5 of these working group proposals and cover the costs for those teams to travel to the Triangle and spend four days together in Chapel Hill in a setting that is part retreat, part seminar, part development sprint and part un-conference.  We want these groups to work together and to interact.  The groups will, we hope, jump-start their own projects and “cross-pollinate” ideas that will advance and challenge each other’s projects and discussions.

The theme for the 2014 SCI is Scholarship and the Crowd.  It will be held November 9-13 at the Rizzo Center in Chapel Hill, NC.  Proposals are due by March 24.

The goal of the SCI is not to schedule breakthroughs but to create conditions that favor them.  The Working Groups selected will set the agenda and define the deliverables.  The Institute will offer the space, the environment and the network of peers to foster creative thinking, with the hope of both advancing the specific projects and also developing ideas and perspectives that can give those projects a broader potential to influence the landscape of publishing, digital humanities and other topics related to scholarly communications.

If you or someone you know might be interested in developing a proposal for this first Triangle-based SCI, you will find the call for proposals and an FAQ at trianglesci.org.


What week is it? Fair Use Week, of course!

Next week Fair Use Week will be observed on a number of university campuses.  I want to use this short post to bring some resources to my readers’ attention, make a comment on why we should all celebrate Fair Use Week, and provide a foretaste of my contribution to the festivities, which will appear in a different forum.

Harvard announces Fair Use Week with an homage to Justice Joseph Story, who originated the concept in 1841

In the poster above, Fair Use Week at Harvard is connected directly to the origins of this unique aspect of American copyright law, through a statue of Justice Joseph Story, who first defined what we came to call fair use in an 1841 case involving the letters of George Washington.  It is fitting that that case involved a scholarly work, a life of Washington, because fair use was then and still is today one of the most important underpinnings of scholarship.  We argue about its scope sometimes, but we rely on it every day.  The most basic relationship in academic writing, the quotation of a scholar in another scholar’s work, is a form of fair use that is so central and natural to scholarship that we forget what it really is and the body of law that it depends on.  Fair Use Week is worthy of celebration on our campuses because it is a reminder that this aspect of copyright law is a sine qua non for scholarship and has been for a great many years.

Two institutions that I know of will be featuring fair use information and opinion in blogs, and I wanted to draw these resources to the attention of readers.

Ohio State University hosts a “Copyright Corner” blog that has been providing basic information about fair use all month long.  During the next week it would be worthwhile for readers to review what has been written there.

At Harvard, a new blog called Copyright at Harvard Library will feature posts from invited guests for Fair Use Week.  I hope readers will keep up with that blog, partly because one of those posts — on Tuesday, I am told — will be a contribution from me. And I thought I would offer here a quick summary of what I will say in some detail over there.

My post focuses on a case decided by the Second Circuit Court of Appeals last month.  It was an odd case, involving a lawsuit brought over a surreptitious recording of a conference call made by the business news service Bloomberg.  The recording was distributed to Bloomberg subscribers and the company that held the call sued, claiming copyright infringement.  There are two fascinating issues in the case, I think.  The first involves that fundamental requirement for a copyright, fixation in a tangible medium of expression.  Since they recorded the phone call live, not from some prior fixation, Bloomberg tried to defend themselves by saying that there was no copyright in the call for them to infringe.  The second issue, of course, was fair use.  Both the lower court and the Second Circuit ultimately decided the issue based on fair use, and the analysis applied by the Appellate Court, in particular, is really fascinating.  In my post I try to explain how extraordinary the analysis is, and why it has potential implications for the still-pending appeal in the Georgia State University copyright case and its fair use defense.

I hope that is enough to whet your appetite and send you to the Harvard Fair Use Week blog repeatedly this week, to read my contribution and those from Kenny Crews, Krista Cox and others.

Reflections on the Future of the Research Library

Since September, the Duke University Libraries have been engaging in a set of conversations we are calling a seminar on the future of the research library.  Our University Librarian initiated this discussion with the deliberate intent that, in spite of the large size of our staff, we engage in the core activity of a seminar – a gathering of individuals who come together for intensive study and the active exchange of ideas.  Such a process has intrinsic value, of course, in the continuing professional development and mutual understanding it fosters in the Libraries’ staff.  It also is timed to help us be best prepared to welcome a new Provost in 2014, since Duke’s Provost over the past fifteen years – the only Provost many of us have known at Duke – will be retiring from that role.

Last week our seminar hosted a talk by Professor Ian Baucom, a Professor in Duke’s English department and Director of the John Hope Franklin Humanities Institute.  His talk, and the discussion that followed it, really helped me focus my thoughts about the future role of academic libraries, academic librarians and scholarly communications.  So I want to use this space, in hopes that readers will indulge this end of the year philosophizing, to share some of those thoughts.  These reflections grow out of Ian’s discussion of several constellations of issues that are important to universities today and how those “hot” issues might impact the place of libraries in a research university.

Given his role as the Director of an intentionally interdisciplinary center, it is not surprising that interdisciplinarity was the first constellation of issues Ian discussed.  He pointed out the evolution of the idea of interdisciplinarity over the past few decades, from conversations between disciplines, especially between the Humanities and the Social Sciences, to a more deeply transformative methodological commitment, which has been partly driven by advances in technology and the opportunities they have created.  In this environment, Ian talked about the special tools and skills that librarians could bring to teams pursuing interdisciplinary research.  Those tools could be technological; they could reflect expertise in the management of data; or they could involve helping to describe the product of a research project, make it findable and usable, and preserve it.  The changing role for librarians that this invitation suggests is toward serving as consultants and collaborators in the production of research results.

Ian challenged the Libraries to think about whether our fundamental commitment is to information or to knowledge.  This immediately struck me, as I think it was intended to, as a false dichotomy.  Libraries are not mere storage facilities for information, nor are they, by themselves, producers of knowledge.  Rather, they serve as the bridge that helps students and researchers use information to produce knowledge.  That role, if we will embrace it, implies a much more active and engaged role in the process of knowledge than has traditionally been accorded to (or embraced by) librarians.

Some of the most exciting ideas for me that Ian discussed were around the notion of civic engagement, which is, of course, another important topic on our campuses these days, especially when the Governors of several states (including our own) have challenged the value of higher education.  Ian pointed out that the library is often one of the most public-facing parts of a university, and suggested three ways in which this outward-looking aspect of the research library could help the university enhance its civic role.  The first — he called it the centrifugal aspect of this role — was to help the university find a public language for the expert knowledge that it produces.  As an example of this, I thought of the recent effort here at the Duke Libraries to get copies of articles that will be the subject of press releases or other news stories into our repository so that the public announcements can link to an accessible version of each article.  This is one way we help “translate” that expert knowledge for a wider public.

The second role for libraries in assisting the civic engagement of their parent universities that Ian cited, the centripetal aspect, was to pull the issues that are important to the communities around a university into the campus.  This we can do in a variety of ways: everything from exhibits in our spaces to seminars and events that we sponsor.  The role here is what Ian called “instigator,” being the focal point on campus where civic issues become part of the academic discourse, having an impact on and being impacted by that expert knowledge that is our fundamental goal and creation.

Finally, the third aspect of civic engagement for academic libraries returns us to the idea of collaboration.  In many instances, it is the library that is the point of first contact, or the most logical partner, for collaboration with civic organizations, NGOs, local advocacy groups and public institutions.

Three roles for librarians that move well beyond traditional thinking emerged for me from Ian Baucom’s talk — the librarian as consultant has long been on my mind, and the librarian as collaborator is a natural outgrowth of that.  But librarians as translators and as instigators were new to me, and helped to flesh out a vision of what the research library might aspire to in the age of global, digital scholarly communications.  In my second post on this event I will turn to issues of globalization and, especially, publishing.

Taking a stand

When I wrote a blog post two weeks ago about libraries, EBSCO and Harvard Business Publications, I was attending the eIFL General Assembly in Istanbul, and I think the message I wanted to convey — that librarians need to take a stand on this issue and not meekly agree to HBP’s new licensing fee — was partly inspired by my experiences at the eIFL GA.

Having attended two eIFL events in the past four years, I have learned that many U.S. librarians are not aware of the work eIFL does, so let me take a moment to review who they are.  The core mission of eIFL is to “enable access to knowledge in developing and transition countries.”  They are a small and distributed staff that work on several projects, including support for the development of library consortia in their partner countries, negotiating licenses for electronic resources on behalf of those consortia, developing capacity for advocacy focused on copyright reform and open access, and encouraging the use of free and open source software by libraries.  The key clientele for eIFL are academic libraries, and all of the country coordinators and delegates that I met at the General Assembly were from colleges and universities.  But eIFL also has a project to help connect communities to information through public libraries in the nations they serve.

The delegates at the General Assembly came from Eastern Europe, Central Asia and Africa.  These librarians face a variety of local conditions and challenges, but they share a common commitment to improving information access and use for the communities they serve.  It was the depth and strength of that commitment that I found so inspiring at the event.  I wanted to encourage U.S. librarians to take a stand because these librarians from all over the world are themselves so consistently taking a stand.

One way these librarians are taking a stand is in negotiations with publishers.  There were lots of vendor and publishing representatives at the General Assembly, and time for each delegation to speak with each vendor (“speed dating”) was built into the schedule.  Although these meetings were short, they were clearly intense.  One vendor rep told me that they were difficult because the librarians had diverse needs and were well-prepared for the negotiations.  He also told me that he enjoyed the intensity because it went beyond “just selling.”  And that is the key.  These librarians are supporting each other, learning from each other and from speakers at the event what they can expect and what they can aspire to with their electronic resources, and taking those aspirations, along with their local needs, into negotiations.  They are definitely not “easy” customers because they are well-informed and willing to fight for the purchases that best serve their communities.  Because they cannot buy everything, they buy very carefully and drive hard bargains.

Another area in which these eIFL librarians are taking a stand is in regard to copyright in their individual nations.  I saw several presentations, from library consortia in Poland, Uzbekistan, Mongolia and Zimbabwe, about how they had made their library consortia into recognized stakeholders in discussions of copyright reform on the national level.  One consortium is offering small grants for librarians to become advocates for fair copyright; another has established a copyright “help desk” to bring librarians up to speed.  One of the eIFL staff emphasized to me the importance of this copyright work.  Copyright advocacy is part of the solution, I was told, to the problem of burdensome licensing terms that have often been seen in those parts of the world.

One story was particularly interesting to me.  An African librarian told how publishers in her country often view libraries and librarians as a major “enemy” because it is believed that they encourage book piracy.  Through the consortium of academic libraries, librarians have now become actively involved in a major publishing event that is held annually in her country, and recently the libraries were asked to nominate a board member to that group.  As a result of these efforts, the relationship between librarians and publishers has improved, and there is much more understanding (thus less suspicion) about library goals and priorities.

eIFL librarians are also taking a stand amongst their own faculties by advocating for open access. There were multiple stories about new open access policies at different universities, and about the implementation of repository software.  There were also multiple presentations to convey the advantages that open resources offer to education.  These presentations discussed MOOCs (that was me), open data, alternative methods of research assessment and text-mining.  If these sound familiar, they should.  In spite of difficult conditions and low levels of resourcing, these librarians are investigating the same opportunities and addressing the same challenges as their U.S. counterparts.  Just to illustrate the breadth of the interest in the whole topic of openness, I wrote down the countries from which the librarians who grilled me about MOOCs came when I spent an hour fielding questions; they came from Azerbaijan, Lesotho, Kyrgyzstan, Lithuania, Malawi, Maldives, Macedonia, Fiji, China, Thailand, Ghana, Belarus, Armenia, Uzbekistan, Swaziland and Mongolia.  They came with questions that challenged my assumptions (especially about business models) and deepened my own understanding of the international impact of open online courses.

There is one last conversation I had that I want to report on, both for its own sake and because of how it illuminates the eIFL mission.  Mark Patterson, the editor of the open access journal eLife, was at the GA to talk about research assessment.  Later I sat and talked with him about eLife.  He told me that the most exciting thing about eLife was its model whereby scientists reclaim the decision about what is important to science.  While the editors of subscription-based journals must always strive for novelty in order to defend and extend their competitiveness, eLife and, presumably, other open access journals, have scientists making decisions about what is important to science, whether or not it is shiny and new.  Sometimes there is an article that is really important because it refines some idea or process in a small way, or because of its didactic value.  Such articles would escape the notice of many subscription journals, but the editors at eLife can catch them and publish them.  And the reason this seems to fit so well into the eIFL context is because it is about self-determination.  Whether I was talking about open access journals with Mark or to the country delegates at the GA, this was the dominant theme, the need to put self-determination at the center of scholarly communications systems, from publishing to purchasing.

A line in the sand

The Chronicle of Higher Education recently published an article about library outrage over the recent decision by Harvard Business Publishing to claw back some functionality for key Harvard Business Review articles that many libraries subscribe to on various EBSCO platforms, and to charge a separate licensing fee to recover that functionality.  I also will have an article dealing with this issue on the Library Journal Peer-to-Peer blog (to be published on Thursday). But I want to say one more thing about it.

Harvard Business Publishing is treating this as an issue between themselves and the institutions that subscribe to HBR via EBSCO.  They accuse faculty of using articles as course readings without paying the “required” extra fee, and are disabling the EBSCO versions to force that additional fee.  But this is a skewed perspective.  From the point of view of the subscribing institutions, what is happening is that they are getting less functionality from EBSCO and are now being asked to pay HBP to regain that function.

Properly viewed, I suggest, this is not a dispute between libraries, or faculties, and Harvard.  It is a dispute between Harvard Business Publications and EBSCO over how to divide up the pie.  And libraries should refuse to make the pie bigger just to settle that dispute.

To be clear, the functions that HBP says are being wrongly exploited — printing, downloading and persistent linking — have been a part of the EBSCO databases for years.  HBP would argue that their special licensing terms with EBSCO (which were impossible to convey to faculty, since they make no sense) have always forbidden classroom use.  But the truth is, these technological changes are intended to prevent faculty from even giving students a reference to an article and asking the students to read that article on their own.  HBP wants to recover a separate fee even for that.

So the demands made by HBP really do break the EBSCO database as it has been purchased for years.  If libraries are going to lose functionality they have been buying over that time, they must demand a reduction in the price that is paid to EBSCO.  What is remarkable in this case is that the value of the lost functionality is easily quantifiable; it is represented by the new licensing fee that HBP plans to charge.

This is what I mean by insisting that this is a dispute between EBSCO and Harvard.  Libraries should refuse to pay significantly more for the same functionality, especially since that functionality is so central to what we buy journal aggregator databases for.  If we have to pay Harvard this license fee, basic fairness suggests that what we pay EBSCO be reduced by the same amount.  EBSCO has been strangely silent during this controversy.  But libraries should draw this line in the sand — we will spend no more than some reasonable percentage increase — a single digit percentage, certainly — over our current EBSCO subscription to get the same functions we had last year.  Harvard and EBSCO can discuss any changes in the way that money is split between them, but that is not our problem.  If Harvard wants $200,000 more from us, we must pay EBSCO $200,000 less.

Few librarians would dispute the proposition that we cannot keep paying massive increases to get the same publications and same capabilities that we had before.  It is unsustainable, and it is unfair.  This price increase, for that is what it is, is especially massive.  If Harvard Business Publications cannot make do with the revenue they have had for decades and suddenly needs millions more, that is a problem with how they run their business, not with what EBSCO subscribers expect to get, and have gotten for years, for their subscription dollars.  And they need to take that demand up with the platform provider, since it is that platform that they are insisting be broken.

Nevertheless, librarians have not been good at actually saying no.  This is the moment to strengthen our spines and refuse to pour more money into the fraught relationship between Harvard and EBSCO; we must let them settle the matter between themselves.  If we do not draw this line in the sand, we will continue to get that sand kicked in our faces.


The big picture about peer-review

In many mystery novels, there is a moment when someone makes an attempt to frighten or even kill the detective.  Invariably, the detective reacts by deciding that the threat is actually a good thing, because it means that he or she is getting close to the truth and making someone nervous.  In a sense, the article in Science by John Bohannon reporting on a “sting” operation carried out against a small subset of open access journals may be such a moment for the OA movement.  Clearly the publishers of Science are getting nervous, when they present such an obviously flawed report that was clearly designed to find what it did and to exclude the possibility of any other result.  But beyond that, we need to recognize that this failed attempt on the life of open access does point us toward a larger truth.

A good place to start is with the title of Bohannon’s article.  It is not, after all, “why open access is bad,” but rather “Who’s afraid of peer-review?”  Putting aside the irony that Bohannon’s own article was, apparently, never subjected to peer-review (because it is presented as “news” rather than research), this is a real question that we need to consider.  What does it mean for a journal to be peer-reviewed and how much confidence should it give us in articles we find in any specific title?

In the opening paragraphs of his article, Bohannon focuses on the Journal of Natural Pharmaceuticals as his “bad boy” example that accepted the bogus paper he concocted.  He quotes a mea culpa from the managing editor that includes a promise to shut down the journal by the end of the year.  But I want to think about the Journal of Natural Pharmaceuticals and about Science itself for a little bit.

I was a bit surprised, perhaps naively, to discover that the Journal of Natural Pharmaceuticals is indexed in two major discovery databases used by many libraries around the world, Academic OneFile from Gale/Cengage and Academic Search Complete from EBSCO.  These vendors, of course, have a strong economic incentive to include as much as possible, regardless of quality, because they market their products based on the number of titles indexed and percentage of full-text available.  Open access journals are great for these vendors because they can get lots of added full-text at no cost.  But they do help us sort the wheat from the chaff by letting us limit our searches to the “good stuff,” don’t they?  Maybe we should not be too sanguine about that.

I picked an article published last year in the Journal of Natural Pharmaceuticals and searched on one of its key terms, after limiting my search in both databases to only scholarly (peer reviewed) publications.  The article I selected from this apparently “predatory” journal was returned in both searches, since the journal identifies itself as peer-reviewed.  This should not surprise us, because the indexing relies on how the journal represents itself, not on any independent evaluation of specific articles.  Indeed, I am quite confident that once this latest issue of Science is indexed in these databases, a search on “peer review” limited to scholarly articles will return Bohannon’s paper, even though it was listed as “news,” not subject to peer-review, and reports on a study that employed highly questionable methods.

Librarians teach students to use that ability to limit searches to scholarly results in order to help them select the best stuff for their own research.  But in reality it probably doesn’t do that.  All it tells us is whether or not the journal itself claims that it employs a peer-review process; it cannot tell us which articles actually were subjected to that process or how rigorous it really is.  From the perspective of a student searching Academic OneFile, articles from Science and articles from the Journal of Natural Pharmaceuticals stand on equal footing.

Of course, it is perfectly possible that there are good, solid research articles in the Journal of Natural Pharmaceuticals.  These indexes list dozens of articles published over the last four years, written by academic researchers from universities in Africa, India, Australia and Japan.  Presumably these are real people, reporting real research, who decided that this journal was an appropriate place to publish their work.  And we simply do not know what level of peer-review these articles received.  So the question remains — should we tell our students that they can rely on these articles?  If not, how do we distinguish good peer-review from that which is sloppy or non-existent when the indexes we subscribe to do not?

The problem here is not with our indexes, nor is it with open access journals.  The problem is what we think peer-review can accomplish.  In a sense, saying a journal is peer-reviewed is rather like citing an impact factor.  At best, neither one actually tells us anything much about the quality of any specific articles, and at worst, both are more about marketing than scholarship.

The peer-review process is important, especially to our faculty authors.  It can be very helpful, when it is done well, because its goal is to assist them in producing an improved article or book.  But its value is greatly diminished from the other side — the consumption rather than the production side of publishing — when the label “peer-reviewed” is used by readers or by promotion and tenure committees as a surrogate for actually evaluating the quality of a specific article. Essentially, peer review is a black box, from the perspective of the reader/user.  I don’t know if the flaws in the “bogus” article that Bohannon submitted were as obvious as he contends, but had he allowed its acceptance by the Journal of Natural Pharmaceuticals to stand, that article would look just as peer-reviewed to users as anything published in Science.  The process, even within a single journal, is simply too diverse and too subject to derogation on any given day because a particular editor or reviewer is not “on their game” that day to be used in making generalized evaluations.

So what are we to do once we recognize the limits of the label “peer-reviewed?”  In general, we need to be much more attentive to the conditions under which scholarship is produced, evaluated and disseminated.  We cannot rely on some of those surrogates that we used for quality in the past, including impact factor and the mere label that a journal is peer-reviewed.  Those come from a time when they were the best we could do, the best that the available technology could give us.  Perhaps it is ironic, but it is open access itself that offers a better alternative.  Open peer-review, where an article is published along with the comments that were made about it during the evaluation process, could improve the current situation.  The evaluations on which a publisher relies, or does not rely, are important data that can help users and evaluators consider the quality of individual works.  Indeed, open peer review, where the reviewers are known and their assessments available, could streamline the promotion and tenure process by reducing the need to send out as many portfolios to external reviewers, since the evaluations that led to publication in the first place would be accessible.

There are many obstacles to achieving this state of affairs.  But we have Bohannon’s article to thank for helping us consider the full range of limitations that peer-reviewed journals are subject to, and for pointing us toward open access, not as the cause of the problem, but potentially as its solution.

Can we “fix” open access?

The later part of this past week was dominated, for me, by discussions of the article published in Science about a “sting” operation directed against a small subset of open access journals that purports to show that peer-review is sometimes not carried out very well, or not at all.  Different versions of a “fake” article, which the authors tell us could easily be determined to be poor science, were sent to a lot of different OA journals, and the article was accepted by a large number of them.  This has set off lots of smug satisfaction amongst those who fear open access — I have to suspect that the editors of Science fall into that category — and quite a bit of hand-wringing amongst those, like myself, who support open access and see it as a way forward out of the impasse that is the current scholarly communications system.  In short, everyone is playing their assigned parts.

I do not have much to say in regard to the Science article itself that has not been said already, and better, in blog posts by Michael Eisen, Peter Suber and Heather Joseph.  But by way of summary, let me quote here a message I posted on an e-mail list the other day.

My intention is not so much to minimize the problem of peer-review and open access fees as it is to arrive at a fair and accurate assessment of it.  As a step toward that goal, I do not think the Science article is very helpful, owing to two major problems.

First, it lacked any control, which is fundamental for any objective study.  Because the phony articles were only sent to open access journals, we simply do not know if they would have been “caught” more often in the peer-review process of subscription journals.  There have been several such experiments with traditional journals that have exposed similar problems.  With this study, however, we have nothing to compare the results to.  In my opinion, there is a significant problem with the peer-review system as a whole; it is over-loaded, it tends toward bias, and, because it is pure cost for publishers, no one has much incentive to make it better.  By looking only at a sliver of the publishing system, this Science “sting” limited its ability to expose the roots of the problem.

The other flaw in the current study is that it selected journals from two sources, one of which was Jeffrey Beall’s list of “predatory” journals.  By starting with journals that had already been identified as problematic, it pre-judged its results.  It was weighted, in short, to find problems, not to evaluate the landscape fairly.  Also, it only looked at OA journals that charge open access article processing fees.  Since the majority of OA journals do not charge such fees, it does not even evaluate the full OA landscape.  Again, it focused its attention in a way most likely to reveal problems.  But the environment for OA scholarship is much more diverse than this study suggests.

The internet has clearly lowered the economic barriers to entering publishing.  In the long run, that is a great thing.  But we are navigating a transition right now.  “Back in the day” there were still predatory publishing practices, such as huge price increases without warning and repackaging older material to try to sell it twice to the same customer.  Librarians have become adept at identifying and avoiding these practices, to a degree, at least.  In the new environment, we need to assist our faculty in doing the same work to evaluate potential publication venues, and also recognize that they sometimes have their own reasons for selecting a journal, whether toll-access or open, that defy our criteria.  I have twice declined to underwrite OA fees for our faculty because the journals seemed suspect, and both times the authors thanked me for my concern and explained why they wanted to publish there anyhow.  This is equally true in the traditional and the OA environment.  So assertions that a particular journal is “bad” or should never be used need to be qualified with some humility.

At least one participant on the list to which I sent this message was not satisfied, however, and has pressed for an answer to the question of what we, as librarians, should do about the problem that the Science article raises, whether it is confined to open access or a much broader issue.

By way of an answer, I want to recall a quote (which paraphrases earlier versions) from a 2007 paper for CLIR by Andrew Dillon of the University of Texas — “The best way to predict the future is to help design it.”  Likewise, the best way to avoid a future that looks unpleasant or difficult is to take a role in designing a better one.

That the future belongs to open access is no longer really a question, but questions do remain.  Will open access be in the hands of trustworthy scholarly organizations?  Will authors be able to have confidence in the quality of the review and publication processes that they encounter?  Will open access publishing be dominated by commercial interests that will undermine its potential to improve the economics of scholarly communications?  If libraries are concerned about these questions, the solution is to become more involved in open access publishing themselves.  If the economic barriers to entering publishing have been lowered by new technologies, libraries have a great opportunity to ensure the continuing, and improving, quality of scholarly publishing by taking on new roles in that enterprise.

Many libraries are becoming publishers.  They are publishing theses and dissertations in institutional repositories.  They are digitizing unique collections and making them available online.  They are assisting scholars to archive their published works for greater access.  And they are beginning to use open systems to help new journals develop and to lower costs and increase access for established journals.  All these activities improve the scholarly environment of the Internet, and the last one, especially, is an important way to address concerns about the future of open access publishing.  The recently formed Library Publishing Coalition, which has over 50 members, is testament to the growing interest that libraries have in embracing this challenge.  Library-based open access journals and library-managed peer-review processes are a major step toward addressing the problem of predatory publishing.

In a recent issue brief for Ithaka S&R, Rick Anderson from the University of Utah calls on libraries to shift some of their attention from collecting what he calls “commodity” works, which many libraries buy, toward making available the unique materials held in specific library collections (often our “special” collections).  This is not really a new suggestion, at least for those who focus on issues of scholarly communication, but Anderson’s terminology makes his piece especially thought-provoking, although it also leads him astray, in my opinion. While Anderson’s call to focus more on the “non-commodity” materials, often unique, that our libraries hold is well-taken and can help improve the online scholarly environment, I do not believe that this is enough for library publishing to focus on.  Anderson’s claim that focusing on non-commodity documents will allow us to “opt out of the scholarly communication wars” misses a couple of points.  First, it is not just publishers and libraries who are combatants in these “wars;” the scholars who themselves produce those “commodity” documents are frustrated and ill-served by the current system.  Second, there is very little reason left for those products — the articles and books written by university faculty members — to be regarded as commodities at all.  The need for extensive investment of capital into publishing operations, which is what made these works commodities in the first place, was a function of print technology and is largely past.

So libraries should definitely focus on local resources, but those resources include content created on campuses that has previously been considered commodities.  The goal of library publishing activities should be to move some of that content — the needs and wishes of the faculty authors should guide us — off of the commodity market entirely and into the “gift economy” along with those other non-commodity documents that Anderson encourages libraries to publish.

If libraries refocus their missions for a digital age, they will begin to become publishers not just of unique materials found in “special” collections, but also of the scholarly output of their constituents.  This is a service that will grow in importance over the coming years, and one that is enabled by technologies that are being developed and improved every day.  Library publishing, with all of the attendant services that really are at the core of our mission, is the ultimate answer to how libraries should address the problem described only partially by the Science “sting” article.