Managing discontinuities

I spent a healthy portion of the long Independence Day weekend reading and chewing over a long blog post by David Rosenthal, who is a computer scientist involved with the LOCKSS digital preservation project.  The post was originally his keynote address at last month’s Joint Conference on Digital Libraries, and it is packed with complex and thought-provoking analysis of the scholarly communications system.

It is difficult to summarize Rosenthal’s arguments, but he essentially explains why all three of the major players in the current scholarly communications system — publishers, libraries and archives — are caught in unsustainable business models.  His analysis of the problems with publishing is fairly familiar, while his discussion of preservation and archiving was new and startling to me.  His discussion of libraries is the most cursory and the least compelling of his arguments for discontinuity.  Rosenthal follows these three discontinuities with a further discussion, largely over my head, of the technological discontinuity that may accompany these economic disruptions and create a perfect storm of opportunity.

In the course of his description of academic publishing, Rosenthal includes a fascinating discussion of how scholarship is changing, moving away from producing static content and toward dynamic services, where data, analysis and Web tools are combined and overlaid to create interactive and dynamic knowledge tools.  Here is Rosenthal’s analysis of this disruption:

What scholars are going to want to publish are dynamic services, not static content, whether it be papers or data.  The entire communication model we have is based on the idea that what is being communicated is static.  That is the assumption behind features of the current system including copyright, peer review, archiving and many others.

From the perspective of copyright this is certainly true; our legal system is struggling, and largely failing, to deal with overlays, mashups and other new products of the computer-assisted intellect.  As such dynamic creations proliferate, it is clear that the disruption to traditional publishing, dependent as it is on a static scholarly record and the legal monopoly over that record bestowed by copyright, will also be great.

Rosenthal’s point about scholarship, it should be noted, is true both in the sciences and in the humanities.  The shift toward dynamic knowledge production has moved more quickly in the sciences, but visualizations, digital text projects and GIS-enhanced research are moving the humanities and social sciences in the same direction.  Traditional publishing is already failing to capture important parts of the scholarly record.

Which brings me to the disruption in peer review, which Rosenthal mentions in the quotation above but does not elaborate upon.  These new kinds of dynamic scholarly productions will clearly force a change in peer review.  As more kinds of scholarship that cannot be published in traditional journals are produced, and more scholarly attention is focused on these productions, colleges and universities will have to find new ways to evaluate the quality of such works.  We will no longer be able to rely on the reputation of a particular journal title or publisher imprint as a surrogate for quality, since these productions will not be associated with publishing houses.

A new system of peer review is long overdue in any case.  Dissatisfaction with the current system is ubiquitous, and the expenses claimed by commercial publishers for managing the system, which is based on volunteerism after all, are absurd.  But these issues will not be the engines of change; digital knowledge production, which the current system simply cannot handle, will be.  A new system, distributed using the same network technologies as the productions it evaluates, will gradually replace the outsourcing of quality judgments to commercial firms simply because it must.

There has been some attention recently to claims that the promotion and tenure process keeps scholars loyal to traditional modes of publication.  Such claims rest on a very unstable foundation, because the same scholars who are surveyed to reach those conclusions are also the ones experimenting with new modes of teaching, research and scholarship.  It is these scholars, not libraries and their budgetary worries, who are driving the changes that should really worry those who make their livings from the traditional publishing system.

3 thoughts on “Managing discontinuities”

  1. I’m not sure they ARE the same people, honestly. My sense of the landscape is that the experimenters are few and widely scattered. So it’s not at all impossible to see BOTH the experiments AND the survey results you cite.

    I have issues with Rosenthal’s formulation too — not least that the data curation service model he proposes is not at all the one I see emerging — but I agree with you that what he says is well worth chewing over!

  2. Dynamic services, not static content, allow movement and range ~ creativity. Information lofted by technology is food for this engine of change and transformation. Locking up information as static content is, as Rosenthal points out, regressive, and capturing it while in flight is a challenge for preservation. Good to see that academics are driving the discussion and transparency of these challenges into the future as they impact their own livelihood. I think we are all challenged to see things from a bigger view, and we need to examine the business models which may keep us locked into stasis rather than growth and change.

  3. ps ~ Kevin ~ thanks for posting on David Rosenthal. He is brilliant and his own Big View so intelligent. Sharing it here for those of us who travel only the copyright highway is a real treat!
