Category Archives: Open Access and Institutional Repositories

Half-lives, policies and embargoes

Over the holidays I was contacted by a writer for Library Journal asking me what I thought about a study by Phil Davis, commissioned and released by the Association of American Publishers, that analyzed the “article half-life” for journals in a variety of disciplines and reported on the wide variation in that metric.  The main finding is that this idea of half-life — the point at which an article has received half of its lifetime downloads — varies a great deal from discipline to discipline. The writer asked me what I thought about the study, and about a blog post on the Scholarly Kitchen in which David Crotty argues that this study shows that the experience of the NIH with article embargoes — that public access after a one-year embargo does not harm journal subscriptions — cannot be generalized because the different disciplines vary so much.  I sent some comments, and the article in LJ came out early last week.
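Since the metric may be unfamiliar, here is a minimal sketch, under my own assumptions, of how a download half-life could be computed from monthly usage counts for a single article.  The numbers and the function are purely illustrative and are not drawn from Davis’ methodology:

```python
# Illustrative only: a toy calculation of "article half-life" from monthly
# download counts. Davis' actual method and data may differ.

monthly_downloads = [120, 90, 70, 55, 40, 30, 25, 20, 15, 10, 8, 7]  # months 1-12

def half_life_in_months(downloads):
    """Return the first month by which cumulative downloads reach half
    of the total downloads observed over the whole period."""
    total = sum(downloads)
    running = 0
    for month, count in enumerate(downloads, start=1):
        running += count
        if running >= total / 2:
            return month
    return None

print(half_life_in_months(monthly_downloads))  # prints 3 for this invented data
```

Even on this toy example it is clear that the number describes the shape of usage over time; it says nothing, by itself, about whether a library would cancel a subscription if articles became freely available after some embargo period.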

Since this exchange I have learned that the Davis study is being presented to legislators to prove the point Crotty makes — that public access policies should have long embargoes on them to protect journal subscriptions.  It is worth noting that Davis does not actually make that claim, but his study is being used to support that argument in the on-going debate over implementing the White House public access directive.  That makes it more important, in my opinion, to be clear about what this study really does tell us and to recognize a bad argument when we see it.

Here is my original reply to the LJ writer, which is based on the fact that this metric, “article half-life,” is entirely new to me and its relevance is completely unproved.  It certainly does not, in my opinion, support the much different claim that short embargoes on public access will lead to journal subscription cancellations:

I have to preface my comments by saying that I was only vaguely aware of Davis’ study before you pointed it out.  So my comments are based on only a very short acquaintance.

I have no reason to question Davis’ data or his results.  My question is about why the particular focus on the half-life of article downloads was chosen in the first place, and my issue is with the attempt to connect that unusual metric with the policy debate about public access policies and embargoes.

As far as I can tell, article half-life tells us something about usage, but not too much about the question of embargoes.  The discussion of how long an embargo should be imposed on public access is supposed to focus on preventing subscription cancellations.  What I do not see is any connection between this notion of article usage half-life and journal cancellation.  It is a big leap from saying that a journal retains some level of usefulness for X number of years to saying that an embargo shorter than X will lead to cancelled subscriptions, yet I think that is the argument that is being made.

Here are two paragraphs from Crotty’s Scholarly Kitchen post:

[snip]”As I understand it, the OSTP set a 12-month embargo as the default, based on the experience seen with the NIH and PubMed Central. The NIH has long had a public access policy with a 12-month embargo, and to date, no publisher has presented concrete evidence that this has resulted in lost subscriptions. With this singular piece of evidence, it made sense for the OSTP to start with a known quantity and work from there.

The new study, however, suggests that the NIH experience may have been a poor choice for a starting point. Clearly the evidence shows that by far, Health Science journals have the shortest article half-lives. The material being deposited in PubMed Central is, therefore, an outlier population, and many (sic) not set an appropriate standard for other fields.”[end quotation]

What immediately strikes me is the unacknowledged transition between the two paragraphs.  In the first he is talking about lost subscriptions, which makes sense.  But in the second he is talking about this notion of download half-life.  What neither Davis nor Crotty gives us, however, is the connection between these half-life numbers and lost subscriptions.  In other words, why should policy decisions about embargoes be made based on this half-life number?  At best the connection between so-called article half-life and cancelled subscriptions is based on a highly speculative argument that has yet even to be made, much less proved.  At worst, this metric is irrelevant to the debate.

My overall impression is that the publishing industry is unable to show evidence of lost subscriptions based on the NIH public access policy (which Crotty acknowledges), so they are trying to introduce this new concept to cloud the discussion and make it look like there is a threat to their businesses that still cannot be documented.  I think it is just not the right data point on which to base the discussion about public access embargoes.

A second point, of course, is that even if it were proved that there would be some economic loss to publishers with 6 or 12 month embargoes, that does not complete the policy discussion.  The government does not support scientific research in order to prop up private business models.  And the public is entitled to make a decision about return on its investment that considers the impact on these private corporate stakeholders but is not dictated by their interests.  It may still be good policy to insist on 6 month embargoes even if we had evidence that this would have a negative economic impact on [some] publishers.  Government agencies that fund research simply are not obligated to protect the existing monopoly on the dissemination of scholarship at the expense of the public interest.

By the way, Crotty is wrong, in the passage quoted above, to say that there is no evidence that short embargoes do not impact subscriptions other than the NIH experience.  The European Commission did a five-year pilot study testing embargoes across disciplines and concluded that maximum periods of six months in the life sciences and 12 months for other disciplines were the correct embargoes.

In addition to what I said in the long quote above, I want to make two additional points.

First, it bears repeating that Davis’ study was commissioned by the publishing industry and released without any apparent peer-review.  Such review might have pointed out that the actual relevance of this article half-life number is never explained or defended.  But the publishing industry is getting into the habit of attacking open access using “data” that is not subject to the very process that they tell us is at the core of the value that they, the publishers, add to scholarship.

The second point is that I have never heard of any librarian who used article half-life to make collecting or cancellation decisions.  Indeed, I had never even heard of the idea until the Davis study was released, and neither had the colleagues I asked.  We would not have known how to determine this number even if we had wanted to.  It is not among the metrics, as far as I can determine, that publishers offer to us when we buy their packages and platforms.  So it appears to be a data point cooked up because of what the publishing industry hoped it would show, which is now being presented to policy-makers, quite erroneously, as if it were relevant to the discussion of public access and embargoes.  Crotty says in his post that rational policy should be evidence-based, and that is true.  But we should not accept anything that is presented as evidence just because it looks like data; some connection to the topic at hand must be proved or our decision-making has not been improved one bit.

We cannot say it too often — library support for public access policies is rooted in our commitment to serve the best interests of scholarship and to see to it that all the folks who need or could use the fruits of scholarly research, especially taxpayer-funded research, are able to access them.  We are not supporting these policies in order to cancel library subscriptions, and the many efforts in the publishing industry to distract from the access issue and to claim, on the basis of no evidence or irrelevant data, that their business models are imperiled are just so many red herrings.

NB — After this was written I discovered the post on the same topic by Peter Suber from Harvard, which comes to many of the same conclusions and elaborates on the data uncovered by the European Commission and the UK Research Councils that are much more directly relevant to this issue.  You can read his comments here.

Taking a stand

When I wrote a blog post two weeks ago about libraries, EBSCO and Harvard Business Publications, I was attending the eIFL General Assembly in Istanbul, and I think the message I wanted to convey — that librarians need to take a stand on this issue and not meekly agree to HBP’s new licensing fee — was partly inspired by my experiences at the eIFL GA.

Having attended two eIFL events in the past four years, I have learned that many U.S. librarians are not aware of the work eIFL does, so let me take a moment to review who they are.  The core mission of eIFL is to “enable access to knowledge in developing and transition countries.”  They are a small and distributed staff that work on several projects, including support for the development of library consortia in their partner countries, negotiating licenses for electronic resources on behalf of those consortia, developing capacity for advocacy focused on copyright reform and open access, and encouraging the use of free and open source software by libraries.  The key clientele for eIFL are academic libraries, and all of the country coordinators and delegates that I met at the General Assembly were from colleges and universities.  But eIFL also has a project to help connect communities to information through public libraries in the nations they serve.

The delegates at the General Assembly came from Eastern Europe, Central Asia and Africa.  These librarians face a variety of local conditions and challenges, but they share a common commitment to improving information access and use for the communities they serve.  It was the depth and strength of that commitment that I found so inspiring at the event.  I wanted to encourage U.S. librarians to take a stand because these librarians from all over the world are themselves so consistently taking a stand.

One way these librarians are taking a stand is in negotiations with publishers.  There were lots of vendor and publishing representatives at the General Assembly, and time for each delegation to speak with each vendor (“speed dating”) was built into the schedule.  Although these meetings were short, they were clearly intense.  One vendor rep told me that they were difficult because the librarians had diverse needs and were well prepared for the negotiations.  He also told me that he enjoyed the intensity because it went beyond “just selling.”  And that is the key.  These librarians are supporting each other, learning from each other and from speakers at the event what they can expect and what they can aspire to with their electronic resources, and taking those aspirations, along with their local needs, into negotiations.  They are definitely not “easy” customers because they are well-informed and willing to fight for the purchases that best serve their communities.  Because they cannot buy everything, they buy very carefully and drive hard bargains.

Another area in which these eIFL librarians are taking a stand is in regard to copyright in their individual nations.  I saw several presentations, from library consortia in Poland, Uzbekistan, Mongolia and Zimbabwe, about how they had made their library consortia into recognized stakeholders in discussions of copyright reform on the national level.  One consortium is offering small grants for librarians to become advocates for fair copyright; another has established a copyright “help desk” to bring librarians up to speed.  One of the eIFL staff emphasized to me the importance of this copyright work.  Copyright advocacy is part of the solution, I was told, to the problem of burdensome licensing terms that have often been seen in those parts of the world.

One story was particularly interesting to me.  An African librarian told how publishers in her country often view libraries and librarians as a major “enemy” because it is believed that they encourage book piracy.  Through the consortium of academic libraries, librarians have now become actively involved in a major publishing event that is held annually in her country, and recently the libraries were asked to nominate a board member to that group.  As a result of these efforts, the relationship between librarians and publishers has improved, and there is much more understanding (thus less suspicion) about library goals and priorities.

eIFL librarians are also taking a stand amongst their own faculties by advocating for open access. There were multiple stories about new open access policies at different universities, and about the implementation of repository software.  There were also multiple presentations to convey the advantages that open resources offer to education.  These presentations discussed MOOCs (that was me), open data, alternative methods of research assessment and text-mining.  If these sound familiar, they should.  In spite of difficult conditions and low levels of resourcing, these librarians are investigating the same opportunities and addressing the same challenges as their U.S. counterparts.  Just to illustrate the breadth of the interest in the whole topic of openness, I wrote down the home countries of the librarians who grilled me about MOOCs during the hour I spent fielding questions: Azerbaijan, Lesotho, Kyrgyzstan, Lithuania, Malawi, Maldives, Macedonia, Fiji, China, Thailand, Ghana, Belarus, Armenia, Uzbekistan, Swaziland and Mongolia.  They came with questions that challenged my assumptions (especially about business models) and deepened my own understanding of the international impact of open online courses.

There is one last conversation I had that I want to report on, both for its own sake and because of how it illuminates the eIFL mission.  Mark Patterson, the editor of the open access journal eLife, was at the GA to talk about research assessment.  Later I sat and talked with him about eLife.  He told me that the most exciting thing about eLife was its model whereby scientists reclaim the decision about what is important to science.  While the editors of subscription-based journals must always strive for novelty in order to defend and extend their competitiveness, eLife and, presumably, other open access journals, have scientists making decisions about what is important to science, whether or not it is shiny and new.  Sometimes there is an article that is really important because it refines some idea or process in a small way, or because of its didactic value.  Such articles would escape the notice of many subscription journals, but the editors at eLife can catch them and publish them.  And the reason this seems to fit so well into the eIFL context is because it is about self-determination.  Whether I was talking about open access journals with Mark or to the country delegates at the GA, this was the dominant theme, the need to put self-determination at the center of scholarly communications systems, from publishing to purchasing.

Can we “fix” open access?

The later part of this past week was dominated, for me, by discussions of the article published in Science about a “sting” operation directed against a small subset of open access journals that purports to show that peer-review is sometimes not carried out very well, or not at all.  Different versions of a “fake” article, which the authors tell us could easily be determined to be poor science, were sent to a lot of different OA journals, and it was accepted by a large number of them.  This has set off lots of smug satisfaction amongst those who fear open access — I have to suspect that the editors of Science fall into that category — and quite a bit of hand-wringing amongst those, like myself, who support open access and see it as a way forward out of the impasse that is the current scholarly communications system.  In short, everyone is playing their assigned parts.

I do not have much to say in regard to the Science article itself that has not been said already, and better, in blog posts by Michael Eisen, Peter Suber and Heather Joseph.  But by way of summary, let me quote here a message I posted on an e-mail list the other day.

My intention is not so much to minimize the problem of peer-review and open access fees as it is to arrive at a fair and accurate assessment of it.  As a step toward that goal, I do not think the Science article is very helpful, owing to two major problems.

First, it lacked any control, which is fundamental for any objective study.  Because the phony articles were only sent to open access journals, we simply do not know if they would have been “caught” more often in the peer-review process of subscription journals.  There have been several such experiments with traditional journals that have exposed similar problems.  With this study, however, we have nothing to compare the results to.  In my opinion, there is a significant problem with the peer-review system as a whole; it is over-loaded, it tends toward bias, and, because it is pure cost for publishers, no one has much incentive to make it better.  By looking only at a sliver of the publishing system, this Science “sting” limited its ability to expose the roots of the problem.

The other flaw in the current study is that it selected journals from two sources, one of which was Jeffrey Beall’s list of “predatory” journals.  By starting with journals that had already been identified as problematic, it pre-judged its results.  It was weighted, in short, to find problems, not to evaluate the landscape fairly.  Also, it only looked at OA journals that charge open access article processing fees.  Since the majority of OA journals do not charge such fees, it does not even evaluate the full OA landscape.  Again, it focused its attention in a way most likely to reveal problems.  But the environment for OA scholarship is much more diverse than this study suggests.

The internet has clearly lowered the economic barriers for entering publishing.  In the long run, that is a great thing.  But we are navigating a transition right now.  “Back in the day” there were still predatory publishing practices, such as huge price increases without warning and repackaging older material to try to sell it twice to the same customer.  Librarians have become adept at identifying and avoiding these practices, to a degree, at least.  In the new environment, we need to assist our faculty in doing the same work to evaluate potential publication venues, and also recognize that they sometimes have their own reasons for selecting a journal, whether toll-access or open, that defy our criteria.  I have twice declined to underwrite OA fees for our faculty because the journals seemed suspect, and both times the authors thanked me for my concern and explained why they wanted to publish there anyhow.  This is equally true in the traditional and the OA environment.  So assertions that a particular journal is “bad” or should never be used need to be qualified with some humility.

At least one participant on the list to which I sent this message was not satisfied, however, and has pressed for an answer to the question of what we, as librarians, should do about the problem that the Science article raises, whether it is confined to open access or a much broader issue.

By way of an answer, I want to recall a quote (which paraphrases earlier versions) from a 2007 paper for CLIR by Andrew Dillon of the University of Texas — “The best way to predict the future is to help design it.”  Likewise, the best way to avoid a future that looks unpleasant or difficult is to take a role in designing a better one.

That the future belongs to open access is no longer really a question, but questions do remain.  Will open access be in the hands of trustworthy scholarly organizations?  Will authors be able to have confidence in the quality of the review and publication processes that they encounter?  Will open access publishing be dominated by commercial interests that will undermine its potential to improve the economics of scholarly communications?  If libraries are concerned about these questions, the solution is to become more involved in open access publishing themselves.  If the economic barriers for entering publishing have been lowered by new technologies, libraries have a great opportunity to ensure the continuing, and improving, quality of scholarly publishing by taking on new roles in that enterprise.

Many libraries are becoming publishers.  They are publishing theses and dissertations in institutional repositories.  They are digitizing unique collections and making them available online.  They are assisting scholars to archive their published works for greater access.  And they are beginning to use open systems to help new journals develop and to lower costs and increase access for established journals.  All these activities improve the scholarly environment of the Internet, and the last one, especially, is an important way to address concerns about the future of open access publishing.  The recently formed Library Publishing Coalition, which has over 50 members, is testament to the growing interest that libraries have in embracing this challenge.  Library-based open access journals and library-managed peer-review processes are a major step toward addressing the problem of predatory publishing.

In a recent issue brief for Ithaka S+R, Rick Anderson from the University of Utah calls on libraries to shift some of their attention from collecting what he calls “commodity” works, which many libraries buy, toward making available the unique materials held in specific library collections (often our “special” collections).  This is not really a new suggestion, at least for those who focus on issues of scholarly communication, but Anderson’s terminology makes his piece especially thought-provoking, although it also leads him astray, in my opinion. While Anderson’s call to focus more on the “non-commodity” materials, often unique, that our libraries hold is well-taken and can help improve the online scholarly environment, I do not believe that this is enough for library publishing to focus on.  Anderson’s claim that focusing on non-commodity documents will allow us to “opt out of the scholarly communication wars” misses a couple of points.  First, it is not just publishers and libraries who are combatants in these “wars;” the scholars who themselves produce those “commodity” documents are frustrated and ill-served by the current system.  Second, there is very little reason left for those products — the articles and books written by university faculty members — to be regarded as commodities at all.  The need for extensive investment of capital into publishing operations, which is what made these works commodities in the first place, was a function of print technology and is largely past.

So libraries should definitely focus on local resources, but those resources include content created on campuses that has previously been considered commodities.  The goal of library publishing activities should be to move some of that content — the needs and wishes of the faculty authors should guide us — off of the commodity market entirely and into the “gift economy” along with those other non-commodity documents that Anderson encourages libraries to publish.

If libraries refocus their missions for a digital age, they will begin to become publishers not just of unique materials found in “special” collections, but also of the scholarly output of their constituents.  This is a service that will grow in importance over the coming years, and one that is enabled by technologies that are being developed and improved every day.  Library publishing, with all of the attendant services that really are at the core of our mission, is the ultimate answer to how libraries should address the problem described only partially by the Science “sting” article.


An odd announcement

I did not initially pay much attention when publisher John Wiley announced early in September that they would impose download limits on users of their database “effective immediately.”  My first thought was “if they are going to disable the database, I wonder how much the price will decrease.”  Then I smiled to myself, because this was a pure flight of fantasy.  Like other publishers before it, Wiley, out of fear and confusion about the Internet, will reduce the functionality of its database in order to stop “piracy,” but the changes will likely do nothing to actually address the piracy problem and will simply make the product less useful to legitimate customers.  But it is foolish to imagine that, by way of apology for this act, Wiley will reduce the price of the database.  As contracts for the database come up for renewal, in fact, I will bet that the usual 6-9% price increase will be demanded, and maybe even more.

As the discussion of this plan unfolded, I got more interested, mostly because Wiley was doing such a bad job of explaining it to customers.  More about that in a moment.  But first it is worth asking the serious question of whether or not the plan — a hard limit of 100 article downloads within a “rolling” 24 hour period — will actually impact researchers.  I suspect that it will, at least at institutions like mine with a significant number of doctoral students.  Students who do intensive research, including those writing doctoral dissertations as well as students or post-docs working in research labs, sometimes choose to “binge” on research, dedicating a day or more to gathering all of the relevant literature on a topic.  Sometimes this material will be downloaded so that it can be reviewed for relevance to the project at hand, and a significant amount of it will be discarded after that review.  Nothing in this practice is a threat to Wiley’s “weaken[ed]” business, nor is it outside of the bounds of the expected use of research databases.  But Wiley has decided, unilaterally, to make such intensive research more difficult.  In my opinion, this is a significant loss of functionality in their product — it becomes less useful for our legitimate users — which is why I wondered about a decrease in the price.
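To see why such a cap bites on exactly this kind of legitimate work, here is a minimal sketch of a rolling 24-hour download limit of the sort Wiley describes.  The 100-article figure comes from the announcement; everything else is my own illustrative assumption and says nothing about Wiley’s actual implementation:

```python
# Illustrative only: a toy rolling-window cap of 100 downloads per 24 hours.
from collections import deque

WINDOW_SECONDS = 24 * 60 * 60
LIMIT = 100

class RollingDownloadCap:
    def __init__(self):
        self.timestamps = deque()  # times of recent downloads, oldest first

    def allow(self, now):
        # Evict downloads that have aged out of the 24-hour window.
        while self.timestamps and now - self.timestamps[0] >= WINDOW_SECONDS:
            self.timestamps.popleft()
        if len(self.timestamps) >= LIMIT:
            return False  # blocked until older downloads age out of the window
        self.timestamps.append(now)
        return True

# A "binge" literature-review session: one article every two minutes.
cap = RollingDownloadCap()
blocked_at = next(n for n in range(1, 500) if not cap.allow(n * 120))
print(blocked_at)  # prints 101: the 101st download, under four hours in, is refused
```

Under these invented but hardly extreme assumptions, a doctoral student is cut off a few hours into the day and cannot resume until the earliest downloads fall out of the 24-hour window.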

The text of the announcement was strangely written, in my opinion.  For one thing, I immediately distrust something that begins “As you are aware,” since it usually means that someone is about to assert categorically something that is highly dubious, and they do not wish to have to defend that assertion.  So it is here, where we are told that we are aware of the growing threat to Wiley’s intellectual property by people seeking free access.  I am very much aware that Duke pays a lot for the access that our researchers have to the Wiley databases, so this growing threat is purely notional to me.  As is so common for the legacy content industries, their “solutions” to piracy are often directed at the wrong target.  So it is with this one.  As a commenter on the LibLicense list pointed out, Wiley should be addressing automated downloads done by bots, not the varied and quite human research techniques of its registered users.

Another oddity was the second paragraph of the original announcement, which seems to apologize for taking this action “on our own,” without support from the “industry groups” in which Wiley is, they say, a “key player.”  As a customer, I am not sure why I should care about whether the resource I have purchased is broken in concert with other vendors or just by one “key player.”  But the fact that Wiley thought it needed to add this apology may indicate that it is aware that it is following a practice that has been largely shown throughout the content industry to be ineffective against piracy and alienating to genuine customers.  Perhaps, to look on the bright side, it means that other academic article vendors will not follow Wiley’s lead on this.

Things got even stranger when Wiley issued a “clarification” that finally addressed, after a 10 day delay, a question posed almost as soon as the first announcement was made, which was about exactly who would be affected by the limitation.  That delay, in fact, made me wonder if Wiley had not actually fully decided on the nature of the limitation at the time of the first announcement, and waited until a decision was made, belatedly, to answer the question.  In any case, the answer was that the limitation would only be imposed on “registered users.”  That clarification said users who accessed the database through IP recognition or via a proxy would not be affected, and that these non-registered users made up over 95% of the database usage.  So even as Wiley asserts that this change will make little difference, they also raise the question of why do it at all.  It seems counter-intuitive that registered users would pose the biggest threat of piracy, and no evidence of that is offered.  And I wonder (I really do not know) why some users register while most, apparently, do not.  If Wiley offers registration as an option, they must think it is beneficial.  But by the structure of this new limitation, they create a strong incentive for users not to register.  But then Wiley adds a threat — they will continue to look for other, presumably more potent, ways to prevent “systematic downloads.”  So our most intensive researchers are not out of the woods yet; Wiley may take further action to make the database even less usable.

All of this made me doubt that this change had really been carefully thought out.  And it also reminded me that the best weapon against unilateral decisions that harm scholarship and research is to stop giving away the IP created by our faculty members to vendors who deal with it in costly and irresponsible ways.  One of the most disturbing things about the original announcement is Wiley’s reference to “publishers’ IP.”  Wiley, of course, created almost none of the content they sell; they own that IP only because it has been transferred to them.  If we could put an end to that uneven and unnecessary giveaway, this constant game of paying more for less would have to stop. So I decided to write a message back to Wiley, modeled on their announcement and expressive of the sentiment behind the growing number of open access policies at colleges and universities.  Here is how it will begin:

As you are aware, the products of scholarship, created on our campuses and at our expense, are threatened by a growing number of deliberate attempts to restrict access only to those who pay exorbitant rates.  These actors weaken our ability to support the scholarly enterprise by insisting copyright be transferred to them so that they can lock up this valuable resource for their own profit, without returning any of that profit to the creators.  This takes place every day, in all parts of the world.

University-based scholarship is a key player in the push for economic growth and human progress.  While we strive to remain friendly to all channels for disseminating research, we have to take appropriate actions on our own to insure that our IP assets have the greatest possible impact.  Therefore, effective immediately, we will limit the rights that we are willing to transfer to you in the scholarly products produced on our campuses.


ETDs, publishing & policy based on fear

The July 2013 issue of College & Research Libraries contains an important article on the question “Do Open Access Electronic Theses and Dissertations Diminish Publishing Opportunities in the Social Sciences and Humanities?”  The article reports on a 2011 survey of publishers, which follows up and refines several previous surveys done to see if publishers really do decline to publish revised dissertations when the original work is available in an open access repository.  All of these surveys found the same thing — a large majority of publishers DO NOT treat the existence of an open access ETD as prior publication that disqualifies the revised version from publication.  In fact, while early studies asked publishers whether or not they considered ETDs to be prior publication and found that about 15-25% said they did, when the 2013 authors phrased the question differently — how do you approach revised versions of manuscripts derived from OA ETDs — the percentage of publishers who said they would not consider those manuscripts was even lower, consistently less than 10%.

By far the majority of journal and university press publishers told the authors that such manuscripts were either always welcome or considered on a case-by-case basis.  It is naturally true that dissertations have to be significantly revised prior to publication, but that is the case regardless of whether or not the dissertation is online.  Consider these two quotations from survey respondents, one from a university press and one from a scholarly journal:

We normally consider theses or dissertations for publication only if the author is willing to significantly revise them for a broader audience; this is our practice regardless of the availability of an ETD.

Readers will consider our article to be the version of record, the version they should read and cite, because (a) it will have been vetted by our double-blind peer review process, (b) it will have been professionally edited, and (c) it will be the most up-to-date version of the material.

This does not sound like publishers who are afraid of ETDs.  Indeed, even the very low numbers about publisher reluctance may need to be set in further context, since I think they still over-estimate the degree to which open access is the root cause of whatever difficulties there may be in getting a revised dissertation published.  More about that in a bit.

But first we need to address this farcical statement from the American Historical Association asking institutions that adopted ETDs to provide embargoes on OA of up to six years.  The AHA says it is afraid that ETDs may inhibit opportunities for students to publish their dissertations; they specifically claim that “an increasing number of university presses are reluctant to offer a publishing contract to newly minted PhDs whose dissertations have been freely available via online sources.”  But they offer no evidence for this claim, and the evidence that is out there, including this most recent survey, directly contradicts the assertion.  This is not the way a society of professional scholars should work; policy should be based on data, not merely fear and rumor.  And factual claims should be sourced.  Every scholar knows this, of course, but the AHA asserts an “increasing number” without citing any source, possibly because the available sources simply do not support the claim.  It is ironic that the AHA, in a statement purporting to defend the interests of graduate students, models such bad scholarly practice for those very students.

Speaking of inaccurate assertions, there is one quoted in the C&RL article from Texas A&M Press Director Charles Backus that needs to be addressed.  Mr. Backus is quoted as saying that his press is in that small minority that is more reluctant to consider a revised ETD “because most libraries and library vendors will not buy or recommend purchase of ensuing books that are based substantially on them.”  The authors of the article, themselves librarians, express surprise at this claim, and indicate that it needs further study.  They want to know how widespread this belief is in the publishing community, but we should also be doing research into whether or not the assertion has any foundation in fact.  Based on my experience, I do not believe that library selectors look at availability of an ETD when deciding whether or not to buy a monograph that is a revised dissertation; in fact, I doubt they usually know whether or not there even is such an ETD.  One librarian told me that she looked for “quality, coverage, currency and authority” when buying monographs and the claim that she might not buy a book because an earlier version was available as an ETD was “poppycock.”

If sales of monographs based on dissertations have declined, and I am prepared to believe that is the case, the reasons should be sought elsewhere than with ETDs.  This is really the other side of the publishers’ insistence that a dissertation must be substantially revised before it can be considered for publication.  Just as presses need to appeal to a broader audience to support sufficient sales, so libraries are looking at books with broad relevance to the curriculum they support when they allocate their shrinking monographs budget (that is the “coverage” criterion).  Even revised dissertations may be too focused on a specific niche, so they are quite likely to be the first things that get passed over.  But it is not open access that is the problem; the problem is that we have less and less money to spend on books because an ever-increasing share of our collection budgets is going to journal packages.  Your lunch, Mr. Backus, is being eaten by Elsevier and Wiley, not by ETDs.

Speaking of coverage and broad appeal gets me back to my suspicion that even the low numbers reported in this survey might over-report the degree to which ETDs inhibit publishing.  The question that was posed in the survey reported in C&RL was specifically about attitudes toward manuscripts that are revised ETDs.  But a larger question should be asked — are publishers accepting fewer revised dissertations overall, regardless of whether or not the original document is online?  I suspect the answer is that they are accepting fewer dissertations overall, and that is the context in which we should place any discussion about fear of ETDs.  In my own conversations with university presses, two criteria seem most important for them when deciding about any book manuscript — it must have “crossover” appeal, meaning that folks outside academia might want to buy it, and/or it must be suited for course adoption.  For many small academic presses, I think, the days of the purely academic monograph that will be read only by specialists are largely over, and they are over because of economic realities (like shrinking library budgets for books) that are independent of the movement toward ETDs.  It is this much more general set of conditions that spells bad news for revised dissertations.

Out of all of this, I hope for three next steps.  First, I hope that the next survey about ETDs will look at how librarians are actually making purchasing decisions, and thus rid us of the claim that libraries are reducing monograph purchases based on the availability of ETDs.  Second, broader research about the number of book manuscripts being accepted by academic presses and the criteria that guide those decisions should be undertaken.  And finally, I hope that the AHA will look at this issue more responsibly and issue a statement that is guided by real evidence rather than by fear and nostalgia.

Better than joining the CHORUS

Last week we saw two proposals about how the various federal agencies that fund research might implement the recent directive from the White House Office of Science and Technology Policy that mandates public access to the products of funded research.  A group of publishers unveiled (sort of) a proposal they call CHORUS, while the Association of American Universities, the Association of Research Libraries and the Association of Public and Land-grant Universities collaborated on a different proposal, referred to as SHARE.

The publishers’ proposal — the acronym stands for Clearing House for the Open Research of the United States — is described in glowing terms on the Scholarly Kitchen website and with a bit more restraint by the Chronicle of Higher Education.  The proposal from the education associations, dubbed Shared Access Research Ecosystem, is also described by the Chronicle and is the subject of a detailed draft proposal that can be found here.

For myself, I would rather SHARE than join the CHORUS, for a number of reasons.

First, I think CHORUS is being touted, at least in what I have read, by comparing it to a straw man.  Its principal virtue seems to be that it would not cost the government as much as setting up lots of government-run repositories, clones of PubMed Central.  But it is not clear that that option is being seriously suggested by anyone.  Certainly many of us encouraged the agencies to look at the benefits of PMC for inspiration and not sacrifice those benefits in their own plans, but that does not mean that each agency must “reinvent the wheel,” no matter how successful that wheel has been.  So the principal virtue of CHORUS seems to be that it does not do what no one is suggesting be done.

The most important thing to understand about CHORUS is that it is a dark archive.  The research papers in CHORUS would not be directly accessible to anyone; they would be “illuminated” only if a “trigger event” occurred.  Routine access would, instead, be provided on the proprietary platforms of each publisher, while the CHORUS site would simply collect metadata about the openly-accessible articles and point researchers to the specific publisher platforms.

It seems to me that the CHORUS proposal is “disabled” from the start, by which I mean that it lacks three fundamental abilities.  CHORUS, at least based on the descriptions we have seen, lacks findability, usability and interoperability.

Perhaps the most troubling remark in the description offered on the Scholarly Kitchen blog is that “Users can search and discover papers directly from CHORUS.gov or via any integrated agency site.”  Does this mean that even the collected metadata would not be available to Google?  We know how few researchers “walk through the front door” of our research tools, so limiting discovery to the CHORUS portal or “integrated agency sites” would make these open access papers virtually invisible (which, one suspects, is the point).  As things stand now, open access papers which reside on proprietary publisher platforms are difficult to find because there is no consistency in how they can be discovered.  That is the principal reason so many COPE fund institutions will not support so-called “hybrid” open access publishing that makes a few articles open on an otherwise toll-access site.  It does not seem that CHORUS would change that unfortunate situation at all, which is probably why Heather Joseph of SPARC calls CHORUS “a restatement of the status quo.”  The public would gain very little, since the major goal of the proposal is for the publishers to cling tightly to control over the research papers that have been entrusted to them.

Another ability that CHORUS would lack is usability, since as far as we know, all that a researcher or other user could do with these papers is read them.  It would not, of course, facilitate sharing, teaching or reuse, even though these abilities are vital to improving the speed and quality of research in the United States.  And then there is interoperability.  If the geographically dispersed archives are genuinely federated, searches across all of them, even keyword searches that are not dependent on the metadata created for each article, would be possible.  So would text and data mining across a large corpus of works.  We already know that such interoperability creates tremendous new opportunities for expanded research, collaboration, and previously impossible discoveries.  But there is no reason to believe that CHORUS would support interoperability, since the various publishers have a strong competitive interest in not allowing cross-platform activities.  Research and education, however, not only do not benefit from that competition, but are actively “disabled” by it.

On the other hand, the proposal from the universities and their libraries is for a genuinely federated system of university-based repositories.  Those repositories already exist, so if we are going to make a cost argument, it really favors SHARE.  And these repositories, unlike the publisher platforms, have a strong interest in facilitating discovery.  Also, the detailed proposal offered by these groups addresses text and data mining, semantic data, APIs for research and linked data.  All of these things make university-based research better, while they pose threats to the commercial publishers who have designed CHORUS to protect themselves, not to benefit research or the public.  So all the incentives line up between the public interest and the university-based SHARE system.

If the OSTP and the research-funding agencies take seriously all of the opportunities that were described in the comments they have solicited over the past year, it will be very obvious to them that CHORUS is singing flat, while it would be good to SHARE, just as our parents always told us.

The O in MOOC

I am generally a poor speller, but even I understand that there are two Os in MOOC.  So for added clarity, let me state up front that this post will focus on the first O — the one that stands for “open.”  But I want to get to the discussion about that O in a slightly round about (pun intended) way.

Let’s start with an insightful article from the recent issue of Nature that contained several pieces about open access.  The one that caught my attention is “Open Access: The true cost of science publishing.”  The author, Richard Van Noorden, provides a wealth of detail, and a very even-handed analysis, about the varying cost of publishing an academic article. He is hampered, unfortunately, by on-going secrecy on the topic. Neither Nature, which is publishing the article, nor PLoS would talk with him about actual costs.  Nevertheless, there is a great deal of information here, and it all points to the conclusion that logic alone would have suggested — open access publishing, especially by non-profit entities, is much the more efficient way to disseminate scholarship.

One thing Van Noorden is able to show very clearly is that almost all open access publication charges are lower than the average per-article revenue that traditional publishers earn.  The difference can be as much as between a $300 cost per OA article and the average $5000 revenue per toll-access one.  The difference can be accounted for in one of two ways: large corporate profit margins or inefficient publishing methods.  Whichever is the case, however, it is clear that open access is the better option.  These lower costs are among the many reasons that open access provides a much greater benefit to academia than the traditional, pre-Internet system can.

In spite of this documented good news about OA, however, the article ends on a discouraging note, or perhaps it is better to say a note of frustration.  Open access is obviously growing every year, but it is not growing as quickly, except where it is mandated, as its obvious superiority would suggest.  So at the end, the article leaves us to speculate on the incentives faculty authors have for choosing, or not choosing, OA.

And that brings me back to the “open” in Massive Open Online Courses.  The growing popularity of MOOCs, and their potential, parallel to that of open access itself, to revolutionize higher education, is a new and powerful incentive for scholarly authors to rethink access to their publications.

The fundamental driver behind the growth of MOOCs is the desire to expand the scope of our educational mission and to reach a global community of students we could not otherwise serve.  Seen in that light, the “open” in MOOC is key.  Part of our commitment as institutions participating in MOOCs is to try very hard not to erect financial barriers to participation in these courses.  We resist the normal urge to require textbook purchase, for example.  Our instructors are encouraged to recommend but not require books for purchase (with the result, BTW, that sales for the merely-recommended books nearly always skyrocket). But this commitment to keeping the courses open for students also means that we look for an increasing amount of open content for teaching.

When our instructors want to provide readings for students taking a MOOC, we generally pursue one of two options.  Either we negotiate with publishers, who are slowly figuring out the marketing advantage they gain by allowing small excerpts of books and textbooks to be made available freely, or we look for OA content.  Unfortunately, the negotiation option is slow and labor-intensive; often we must explain the purpose and the conditions over and over again, to ever-shifting groups of officials, before we can get a decision.  So open access is ever more important, because more efficient, for our MOOC instructors and their students.

One story will illustrate this growing interest in open access.  A faculty member who was recently preparing to teach his first MOOC wanted his students to be able to read several of his own articles.  When we asked his publisher for permission on his behalf, it was denied.  A rude awakening for our professor, but also an opportunity to talk about open access.  As it turned out, all of the articles were published in journals that allowed the author to deposit his final manuscripts, and this author had them all.  So we uploaded those post-prints, and he had persistent, no-cost links to provide to the 80,000 students who were registered for his course.  An eye-opener for the author, a missed opportunity for the publisher, and a small triumph for our OA repository.  Enough of a triumph that this professor has begun asking colleagues if they could deposit post-prints of their own articles in the repositories at their institutions so that he can use those for his MOOC students as well.

So when we are counting up incentives for open access publishing, whether Gold or Green, let’s remember that the massive opportunity that is represented by MOOCs is also a new reason to embrace open access.

Good government in action, and inaction

Governments are funny things.  No matter where we fall on the “more government, less government” political spectrum, it is inevitably the case that sometimes we applaud government actions, and sometimes we prefer government inaction.  Last week, however, the scholarly communications community got the opportunity to admire BOTH positive action taken by the Administration and a positive decision in favor of inaction.

Let’s start with the inaction.  On Friday, attorneys for the Department of Justice sent a letter to the 11th Circuit Court of Appeals, where the Georgia State copyright decision is being appealed, informing the Court that the DoJ would not be filing an amicus brief in that appeal.  Recall please that the appeal is brought by the three publishers who lost pretty comprehensively in the trial court, and the DoJ had asked the 11th Circuit for an extension of time to file a “friend of the court” brief that would either support the publishers or support neither party.  That generated a lot of consternation in the higher education community, and many calls for the DoJ to rethink its position.  Apparently the outcry had an effect, and the Justice Department decided that it should not take sides in this dispute.  It is even possible that the plaintiff publishers themselves realized that a brief supporting them from the DoJ would open up cans of worms best left closed and mobilize even greater opposition to their efforts to squeeze more money from college and university budgets.  They may have asked the DoJ to stay on the sidelines.  But that is pure speculation.

What is clear is that the folks at Justice decided that their original idea to get involved was a bad one.  Because of the way a litigation schedule works, however, it is not too late for the DoJ to file a brief on the other side — supporting the careful analysis that the trial court judge did on the issue of fair use.  But that is unlikely, I fear.  Now, however, is the time for the higher education community to mobilize its own passion for its mission, and its own lawyers, to file on behalf of the defendant/appellee — Georgia State.

Then there is the action that was announced on Friday, and it could hardly have been better.  The White House, after a long delay, issued a directive that instructs all federal agencies that have large research and development budgets to develop plans to make the articles that arise from such research funding publicly available within 12 months of publication.  In short, the White House has recognized the success of the NIH public access mandate and has committed to providing the same benefit to taxpayers for the other research efforts that they fund.  Additionally, the new directive also instructs agencies to examine data access and sharing, so it genuinely is seeking to improve the overall environment for research, and to give taxpayers a greater return on their investments.

Many readers of this blog responded to our appeal for signatures on the We the People petition that was begun last year on the White House web site.  Those signers will all have received a letter from the Office of Science and Technology Policy, which explicitly portrays the new directive as a response to the opinions put forward in that petition.  So congratulate yourselves, maybe even buy yourselves a drink, in celebration of the good work you did on behalf of making scientific research faster, more nimble and more widely usable.

There is lots of press coverage on this directive from the White House.  Check out these stories from Science, the Wall Street Journal, and the Washington Post for more details.  But here is a detail that you won’t find in any news coverage, because it is not yet decided.  What repositories will be used for public access to non-NIH funded research?  Unlike the NIH, most agencies do not already have repositories that can become the mandated site for deposit.  And it would be a travesty, as well as a sure way to undermine compliance, if open access were left for authors to arrange with their publishers, who will want to add new fees, which would inevitably be borne, eventually, by those same taxpayers.  The sensible alternative is to tell authors that they can use open access repositories at their home institutions or at other educational institutions.  Agencies might also specify some standards for reliability and access that could apply to acceptable institutional repositories.  This will facilitate access at a lower cost.  But it will also increase the urgency for institutions to develop or improve their repositories, since those IRs will very quickly become a vital service to assist faculty authors in complying with the broader mandates that are now so clearly on the way.

The White House directive might lead some people to assume that the FASTR Act, which was introduced in both houses of Congress a couple of weeks ago, is no longer necessary.  That would be a mistake.  FASTR, which stands for Fair Access to Science and Technology Research and is the latest version of the bill formerly known as FRPAA, will do some things that the directive will not, or, at least, may not.  FASTR, for example, directs each agency to investigate issues of reuse; although it does not mandate open licensing, it certainly sets genuine open access — including not just the right to read but also to reuse — as the ideal.  Also, an act of Congress is more difficult to reverse, whereas an executive order can be countermanded by a subsequent president.  So there is good reason to continue to urge our representatives in Congress to support the FASTR act.  We can celebrate a very good week last week while still recognizing that we have more work, both in turning away the attack on fair use in academia and in supporting open access, ahead of us.

Two steps to a revolution in scholarly publishing – a thought experiment

During the Berlin 10 conference on Open Access, the first instance of the Berlin Conference held in Africa, some of the most compelling speeches came from those who advocated a much more radical approach to breaking the hold over academic publishing currently exercised by commercial firms.  Especially from Dr. Adam Habib, the Deputy Vice-Chancellor for Research from the University of Johannesburg, the delegates heard a call for immediate and direct action, even if it provoked a fight with commercial publishers.  In his opinion, at least, that would be a fight worth having, particularly for an African university.

These calls for radical solutions got me thinking about which stakeholders had the power to break open the current structures, which were called feudal and iniquitous by one speaker, and what actions they might take.  The result was the following thought experiment.

First, it seems to me that only two sets of stakeholders really have both the will and the power to radically alter the terms and conditions of scholarly publishing.  One set is the academic authors themselves, who seem largely to recognize that the current system is grossly unfair to them and does not serve the best interests of scholarship or of scholarly authors.  Nevertheless, academic authors are limited in the kinds of action they can take, both because they are not well-organized and because of their dependence on the promotion and tenure system.

That leads to the other set of stakeholders who could shake up the system – the universities themselves.  Are there steps they could take that would reset the conditions for scholarly publication?  I think there are, and it seems like universities, if they were so minded, could create a revolution in scholarly communications with two simple, but by no means easy, policy changes.

First, universities could alter their intellectual property policies to assert that faculty scholarship is, in fact, work made for hire.  The legal argument here is simple and persuasive: faculty work is created by regular employees within the scope of their employment.  Courts have recognized this argument for years, but universities have rightly been unwilling to press the case, for fear of doing harm to relations with their faculty members.  But as the scholarly publication system increasingly fails to adapt to the radical new conditions created in the digital environment, it is possible to imagine a policy change like this undertaken with the cooperation of the faculty authors themselves.

To make this work, universities could assert ownership, under work for hire rules, and at the same time grant back to faculty authors a broad license to make customary scholarly uses of their own works.  This would actually be a similar situation to the one that now obtains, where authors give copyright away to publishers without compensation and receive back fairly limited licenses to use the works they created.  Universities would almost certainly prove better partners in this approach to intellectual property because they could extend much broader licenses to the original authors and because their values are much more congruent with those of academic authors in the first place.

If universities owned scholarly writings as work for hire, however, they would have control over the means of publication.  They would have the power, and the incentive, to refuse to have work published in commercial journals.  They could give complete preference, if they wished, to open access journals and specifically to those open access journals that are run by scholarly societies and university presses.  Almost overnight universities could put an end to the subscription model that worked so well for 300 years and has now become an obstacle to academic research and scholarly communications.

Of course, if universities were to cut out the commercial publishers, the second step in this revolution is obvious – the system of evaluation for academics, the standards for promotion and tenure review, would have to change.  And this would also be a very good thing.  The system we now have, dependent as it is on the impact factor of the journals in which articles are published and the reputations of the presses who publish books, is inefficient and under-inclusive.  It assesses a work of scholarship on only one type of impact, citations in other scholarly journals, and evaluates that very limited metric at the journal level rather than article by article.  Once we stop confusing what was, in the past, the only thing we could measure with what is truly important, we will see that in today's scholarly environment we can actually do much better.
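For readers who have not looked at the mechanics in a while, it may help to recall how the standard two-year impact factor is computed; this is the conventional formula in simplified form, not tied to any particular vendor's implementation:

\[
\text{Impact Factor}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}
\]

Because both the numerator and the denominator are summed over the whole journal, every article inherits the same averaged score, whether it was cited heavily or not at all, which is precisely the journal-level problem described above.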

So the second policy change that universities would need to undertake to effect this revolution would be to require that all assessments be based on article-level metrics applied to openly available works.  This change sounds very radical, but some institutions are already moving towards it.  At the University of Liege, in Belgium, it is already the case that faculty assessment is done only for articles that are in the university's open access repository; this was the way Liege decided to put teeth into their open access mandate.  But universities could require open access and article-level evaluation measures while still supporting a variety of publication models.

And that is the final point to be made about this make-believe revolution.  If universities carried it out, it would free up more money than it would cost.  Once academic publication in commercial journals was halted, library collection budgets could be redirected.  Instead of a long transition period during which costs would be expected to rise because both subscription models and open access based on article processing charges would have to be supported, which is what the Finch Report predicted in the U.K., this suggestion would allow for wholesale cancellation of commercial publications.  The money saved would then be available to build up the infrastructure for repositories and to support APCs for gold open access publication.  Authors would have a choice – they could publish in an OA journal or they could publish directly to the institution’s repository.  Peer review could be preserved in a distributed model;  OA journals would continue to support traditional peer review, while some of the money saved from commercial subscriptions could be redirected to a more independent, discipline-specific system of peer-review.  This would provide an important role for scholarly societies, and subventions provided to support such society-run peer-review would help protect those organizations from any negative consequences of this radical re-visioning of the publication system.  Societies and non-profit presses, of course, could also find support through the publication of gold OA journals and even monographs.  University funds from library collection budgets would be more distributed than they are now, able to be used more efficiently to support activities genuinely central to the academic mission, and they would, I believe, be more than adequate to the task.

So that is the thought experiment that began at the University of Stellenbosch as I listened to calls for radical action.  I don't know whether I think it would be a good idea to implement or not.  But I believe it would be the fastest way to dramatically fix the current, broken system of scholarly communications.  There would be many obstacles to these two policy changes, and the medicine might be worse than the disease.  But at the very least, thinking through this experiment in revolution has given me a better perspective on the power dynamics of the current system.  In libraries we are accustomed to thinking that the huge commercial publishing firms hold all the power and that there is nothing we can do to break their stranglehold over scholarship.  But upon reflection we can see that the real power over the system, which can either perpetuate it or revolutionize it, really does reside on our own campuses.

Is the Web just a faster horse?

On Monday the Duke Libraries celebrated Open Access Week with a talk by Jason Priem that was ostensibly about alternative metrics for measuring scholarly impact – so-called AltMetrics.  Jason is a Ph.D. student at the University of North Carolina School of Information and Library Science, a co-author of the well-regarded AltMetrics Manifesto, and one of the founders of the Web tool called ImpactStory.  In addition to his public talk, Jason gave a workshop for a small group of people interested in how ImpactStory works; you can read about the launch of that tool in this article.

So Jason is as qualified as anyone to talk about AltMetrics, and he did so very well.  But his talk was much broader than that subject implies; he really gave a superb summary of the current state of scholarly communications and a compelling vision of where we could go.  For me the most memorable moment came toward the end of his talk, when he said that the Web had been created to be a tool for scholarly communications, yet while it had dramatically changed many industries, from bookselling to pornography, it had not yet revolutionized scholarly publishing as it should have.  The problem is that publishers, and, to some extent, authors, are treating the Web as simply “a faster horse” and not truly exploiting the possibilities it offers to change the way scholarship is done.

Jason began with some history, pointing out that the earliest forms of scholarly communications were simply direct conversations, carried out through letters.  The first scholarly journals, in fact, were simply compilations of these sorts of letters, and you can still see that venerable tradition reflected in some modern journal titles, like “Physical Review Letters.”  But the modern journal, with its submission process, peer review, and extremely delayed publication schedules, was a revolution and a dramatic re-visioning of how scholars communicated with one another.  Certainly there were significant gains in that new technology for communication, but things were lost as well.  The sense of conversation was lost, as was immediacy.  Now, according to Jason, we have the ability to recapture some of those values from the older model.

Another dramatic change in scholarly communications was the movement called “bibliometrics,” which led to the creation, in the early 1960s, of the citation index and the journal impact factor.  Like the journal itself, the impact factor is so ingrained in our current thinking that it is hard to remember that it too was once a new technology.  And it is a system with significant problems.  As Jason said, the impact factor can track only one kind of person, doing one kind of research by making one kind of use.  The impact factor cannot track non-scholarly uses of scholarly works, or even scholarly uses that are not reflected in another journal article.  Also, true social impact, the kind of policy-changing impact that many scholars would see as an important goal, is seldom reflected in an impact factor.  The problem we face, Jason argued, is that we have confused the kind of use we can track with use itself.  In the process we often miss the real heart of scholarship, the conversation.

In the digital age, however, we can begin to track that conversation, because much of it is taking place online.  AltMetrics, by which we teach computers how to look for a variety of article-level citations and discussions, offers the chance to analyze the scholarly environment much more thoroughly, and to give individual scholars a much clearer and more comprehensive story of their real impact.
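To make the idea of article-level tracking concrete, here is a minimal, purely illustrative sketch of the kind of aggregation such a tool performs.  It is not how ImpactStory itself works; the DOIs, sources and counts are invented for the example, and a real system would harvest these events from live services rather than a hard-coded list.

```python
from collections import defaultdict

# Hypothetical "mention" events harvested from different online channels.
# Each event records which article (by DOI) was discussed and where.
events = [
    {"doi": "10.1234/example.001", "source": "twitter"},
    {"doi": "10.1234/example.001", "source": "twitter"},
    {"doi": "10.1234/example.001", "source": "blog"},
    {"doi": "10.1234/example.002", "source": "mendeley"},
    {"doi": "10.1234/example.002", "source": "twitter"},
]

def article_level_profile(events):
    """Tally mentions per article and per source, so the unit of analysis
    is the individual article rather than the journal it appeared in."""
    profile = defaultdict(lambda: defaultdict(int))
    for event in events:
        profile[event["doi"]][event["source"]] += 1
    return {doi: dict(sources) for doi, sources in profile.items()}

if __name__ == "__main__":
    for doi, sources in article_level_profile(events).items():
        print(doi, sources)
```

The design point is simply that attention is counted article by article and source by source, rather than being collapsed into a single journal-wide average.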

One connection that was hard for me in Jason’s talk, but ultimately persuasive, was his discussion of why Twitter is important.  I admit to being a reluctant and unenthusiastic Twitter user.  This blog post will be distributed via Twitter, and most of my readers seem to find what I write that way.  But still I was startled when Jason compared Twitter to that earliest form of scholarly communications, the conversation.  What was new to me was to think of Twitter as an opportunity to have a preselected jury suggest what is important to read.  If I follow people whose work is interesting and important to me, and they are all reading a particular article, isn’t it extremely likely that that article will be interesting and important to me as well?  And isn’t that peer review?  We sometimes hear that peer review is professional evaluation while Twitter is merely a popularity contest.  But Jason challenged that distinction, pointing out that if we follow the right users, the people whose work we know and respect, Twitter is a scholarly tool in which popularity becomes indistinguishable from professional evaluation.  Since many scholars already use Twitter, as well as blogs and e-mail lists, in this way, it is fair to say that new forms of peer-review have already arrived.  The AltMetrics movement aims to track those other forms of scholarly impact.

Jason ended his talk with a proposal to “decouple” the scholarly journal, to recognize that journals have traditionally performed several different functions — often identified as registration, certification, dissemination and archiving.  Some of those functions are now trivial; why pay anyone for dissemination in an age when an article can be distributed to millions with the click of a mouse?  Other functions, especially certification (peer-review), are changing dramatically.  Jason suggested that peer-review should be a service which could be offered independently of how an article was to be disseminated.  Scholarly societies especially are in a good position to provide peer-review as a service for which scholars and their institutions could pay when it was felt to be necessary.  But in an age when so much peer-review is already happening outside the structure of journal publication, it is clear that not all scholarship will require that formal service.  So in place of the rigid structure that we have now, Jason suggests, illustrates, and enables a more flexible, layered system of scholarly networks and services.

As should be obvious by now, I found Jason’s talk for Open Access Week provocative and thought-provoking.  I hope I have represented what he said fairly.  I have tried to indicate where I am paraphrasing Jason directly, and he should not be blamed for the conclusions I draw from his comments.  But for those who would like to hear from Jason directly, and I highly recommend it, he and several other leaders in the area of AltMetrics will take part in a webinar sponsored by NISO on November 14, which you can read about and register for here.  You can also find slides and a video from a presentation similar to the one he gave at Duke here.