
Is Blogging Scholarship?

It certainly can be, according to Margaret Schilt in "Is the Future of Legal Scholarship in the Blogosphere?," reposted on Law.com from the "Legal Times." Her article provides a very helpful thumbnail summary of the major legal blogs, but it also reflects on the trend in legal scholarship toward this more informal and community-centered form.

The recently released Ithaka report on university publishing noted that a growing share of scholarly communication takes place over informal channels, among which the blog is increasingly important. But who are legal bloggers, and do they think they are committing scholarship with their postings?

Schilt observes that most legal bloggers are not the "young turks" one might expect, but mid-career professors who have tenure. There has long been a debate about whether new modes of scholarly communication will be adopted more readily by the young, to whom they may be more familiar, or by older, tenured faculty who can afford the risk. In law, apparently, it is the latter who are turning to blogs.

This is good news for shared scholarship, since this group of bloggers tends to be very familiar with traditional scholarship and able to translate that level of work to the blogosphere. Schilt makes specific mention of my favorite legal blog in this regard, Balkinization, where Jack Balkin of Yale leads an in-depth discussion of current events and recent works of legal learning.

What are the benefits of blogging, as Schilt sees them? First and foremost, a blog reaches more readers than does traditional scholarship. Also, it encourages rapid feedback. Some comments may be inane, of course, but there is also the potential to open up the scholarly enterprise to participants long excluded and to make the dialogue amongst traditional participants more lively and immediate.

Interestingly, Schilt also suggests that there may be a “reputational bonus” in blogging, since it can increase name recognition amongst one’s peers. Finally, she points out the value of the blog in teaching, offering a chance to encourage class discussion to continue in a public and accountable forum.

Blogs, Schilt concludes, “are where scholarly dialogue increasingly takes place.” Although it looks different from the traditional journal article, and its pace is accelerated over that of conventional scholarship, the blogosphere “still looks and feels a lot like scholarly activity.”

By the way, the Ithaka report mentioned above has itself become the subject of this rapid and interactive process of “peer-review.” It is now available in a “CommentPress” version from The University of Michigan. This software allows the report to be read in its entirety, but also lets readers insert comments at different places. You can read as many or as few of the comments as you like, but the availability of this important report in a “2.0” version speaks volumes about the trend toward more collaborative scholarship.

Why we need to collaborate.

The report from Ithaka on "University Publishing in the Digital Age" is almost a month old now, but I delayed commenting on it until I had a chance to read it thoroughly. The report's principal author was Laura Brown, a former president of Oxford University Press USA, so it is clearly written with insider knowledge of the university publishing industry, and it subjects the roles of both university presses and libraries to careful scrutiny in the context of the changes taking place in scholarly communications.

University presses are criticized in the report for being slow to adapt to digital media, clinging instead to traditional models of business and distribution that are rapidly becoming out of date. Presses have also done a poor job of aligning themselves with the academic priorities of their parent institutions and of demonstrating to those institutions that publishing is itself a core function of a university. University presses, however, show important strengths in selecting and editing quality material, developing an elaborate network for credentialing scholarly work, and understanding the markets for the work they publish.

These strengths and weaknesses of university presses are the mirror image of the pluses and minuses found in university libraries, according to the report. Libraries recognized the importance of digital media early on – often pulled toward that recognition by the demands of users – and have maintained a consistently mission-focused position at the center of the university enterprise. They often have done a poor job, however, of selecting and evaluating material to be placed in digital collections; such collections are likened to attics in which random and unsorted materials are all too frequently found, chosen apparently for availability and for the absence of obstacles like copyright restrictions rather than from a sound evaluation of the "market" need.

In view of how complementary these flaws and strengths are, the most important recommendation the Ithaka report makes is that universities need not only to "remain actively involved in publishing scholarship," but also to recognize the strategic importance of developing a comprehensive framework to support a dynamic and multi-faceted system of scholarly communications. Only an institutional vision and commitment, the report suggests, can take advantage of the collaborative possibilities suggested by its analysis.

Clearly this report has generated, and will continue to generate, lots of discussion. There are overall descriptions and assessments of the report from Inside Higher Ed and the Chronicle of Higher Education. Amongst the blogs, the most interesting to me have been the comments at if:book and Media Commons that point out what the report does not address – the changes that will be needed in how universities understand authority and scholarly credentialing as we move to the more flexible digital world, where work can be subjected to comment and criticism long before it is submitted for formal publication.

Can Google inherit quality?

That is the question posed by Paul Duguid, a professor at UC Berkeley, the University of London, and Santa Clara University, about the Google Books Project. His article, "Inheritance and loss? A brief survey of Google Books," was just published in First Monday, a peer-reviewed online journal about the Internet.

Duguid’s point is that the sheer scale of the Google Books project will outstrip most other projects to digitize cultural artifacts, making them "appear inept or inadequate." But the authority and quality of the Google project, Duguid argues, rest on a kind of inheritance from the reputations of the libraries involved. So Duguid sets out to see whether Google really is the qualitative heir of Harvard and Stanford.

His results are disheartening. His search for a deliberately unconventional book, Sterne’s "Tristram Shandy," returns results likely to confuse and discourage a casual reader. The first result on Google’s list, a copy from Harvard, is so badly scanned that it is virtually illegible, with words cut off by the gutter on nearly every line. Elsewhere the text fades to indecipherable scratchings. And some of Sterne’s eccentricities are missing; the black page of mourning for the dead Parson Yorick simply is not included in the Google scan. When Duguid tries the second result from his search, things get worse. The first page of the scan is blank, and the second page puts the reader at the end of chapter one and the beginning of chapter two — of the second volume. Nothing other than comparison with a printed text informs readers that they have been plunged into the middle of the book.

Duguid’s judgments on Google Books are harsh: the project ignores essential metadata like volume numbers, the quality of the scans is often inadequate, and editions that are best consigned to oblivion are sometimes given undeserved prominence for no discernible reason (that is his conclusion regarding the second text he found, from Stanford). Rather than inheriting quality from Harvard and Stanford, he concludes, "Google threatens not only its own reputation for quality and technological sophistication, but also those of the institutions that have allied themselves to the project."

It is true that the real value of the Google Books Project is not so much to provide reading matter as to direct people to the books most likely to be of help or interest to them. Few people, one presumes, will try to read "Tristram Shandy" in the Google Books format. But the failures of visual quality and metadata control threaten even this more modest view of Google Books as a giant index. Without a higher degree of quality than Duguid discovered, it is hard to argue that Google is superior in any way to a comprehensive online catalog from a major library.

Friday’s bad news

UPDATE — What a difference a weekend makes! According to the Chronicle of Higher Education today (Wednesday), Senator Reid has withdrawn the proposed amendment after intense lobbying from the higher ed community. The issue, of course, has not gone away, and lawmakers seem determined to continue pressuring universities as if they were the primary source of this problem, which they are not. But at least this very bad idea has been abandoned for now.

The downside of Friday's news was an announcement, and an urgent appeal for action, from EDUCAUSE about Senator Harry Reid's intention to offer an amendment to the Higher Education Reauthorization Act that would place a grossly unfair burden on a few universities to address illegal file sharing, a burden no other online service provider would share.

Senator Reid’s amendment (there is a report on it here from the Chronicle of Higher Education) would require that 25 institutions identified each year by the music industry to the Secretary of Education, based on the number of copyright infringement notices sent to those schools, adopt a “technology-based deterrent to prevent the illegal downloading or peer-to-peer distribution of intellectual property.” Now, everyone agrees that sharing copyright protected music and video over P2P networks is illegal and ought to be discouraged, but this amendment is clearly the wrong way to approach the problem.

First, colleges and universities are only a small part of the file-sharing problem. Even the content industries admit that nearly three-quarters of all file sharing takes place over commercial networks not affiliated with higher education. In fact, the higher education community is the only major group of online service providers now actively taking steps to reduce file sharing on its networks. Why punish only those who are trying to prevent the activity and ignore the commercial providers? Why do the content industries continue to target higher ed and ignore AOL and Viacom, where the problem is much greater?

Second, the Secretary of Education is supposed to identify the 25 schools from information provided by the content industries. Thus a major financial burden could be created for institutions that have little way to anticipate being targeted or to defend themselves from what amounts to random selection. These notices are often inaccurate, and simply counting them up and picking out the top recipients is an unfair, and unfunded, mandate that will do little to actually address the problem.

Finally, this proposal continues the trend in Congress of attempting to apply technological solutions to infringement problems. Unfortunately, every technological barrier quickly becomes a challenge that some programming whiz wants to defeat. The barriers fall as quickly as they are erected. So schools would be required to spend lots of money to implement solutions that cannot realistically be expected to work for very long. These problems must be addressed with long-term market solutions, not with technological band-aids.

You can read a letter from EDUCAUSE about the proposed amendment here, and an article from Inside Higher Ed here. As the article notes, this amendment has not been offered yet, and the situation is "fluid." So perhaps good sense will prevail on this issue, and troubling news can become an opportunity to educate members of Congress on the real facts about file sharing.

Ineffective Technological Protection Measures?

Recently we have seen some music companies move away from technological protection measures that prevent copying songs onto multiple devices, or onto devices sold by different companies, in favor of a market solution that charges consumers slightly more for music that can be freely copied. Now another brick, albeit a tiny one, has fallen from the wall of electronic protection measures.

Both the DMCA in the United States and the European Union's Copyright Directive are designed to implement an international treaty that calls for legal enforcement of "effective technological protection measures." Both laws use that phrase, but they define it somewhat differently. The European definition, under which a technological protection measure "must achieve its protection objective" in order to be effective, was recently used by a court in Finland to declare that CSS (the Content Scrambling System), the protection code used on most DVDs, is ineffective and therefore no longer protected from circumvention by law. See the Electronic Frontier Foundation's posting on the case here.

The problem, according to the Helsinki District Court, is that the code for circumventing CSS is all over the Internet. Some consumers who download software for copying DVDs may not even know that they are circumventing a technological protection measure when they do so. In these conditions, the court said, CSS is simply not effective under the EU definition. It also mattered that CSS, it was argued, is intended not so much to protect copyrighted content as to enforce a monopoly on the manufacture of playback equipment; because that is not a legitimate "protection objective" under the EU directive, the argument supported the finding that CSS is not an effective measure. There is a short English-language article about the case here.

This case may have some symbolic significance, especially by pointing out the real monopolistic purpose behind much DRM, but it is not likely to have much impact in the United States. The definition of “effective” in the DMCA seems to rest more on the intent of the copyright owner than on the observable operation of the DRM system. And two US cases have already rejected the argument that the ubiquitous availability of “keys” renders the “lock” unenforceable. But this Finnish decision may help pressure the movie industry to move away from DRM and, like the music companies, consider market solutions to their copying problem.

RSS explained by Educause

A couple of months ago I wrote about Educause's "7 Things You Should Know About Creative Commons," which is part of a series designed to help faculty and administrators keep current with technologies that affect scholarly communications. Now a new virtual pamphlet is available, "7 Things You Should Know About RSS." RSS, usually said to stand for "Really Simple Syndication," is an XML-based web feed format that lets users subscribe to content from many blogs and other web resources and aggregate that content in a convenient reader.
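For readers curious about what is happening under the hood, here is a minimal sketch of what a feed reader does with an RSS 2.0 feed: it fetches the feed's XML and pulls out the title and link of each item. The sketch is written in Python using only the standard library, and the feed address shown is a hypothetical placeholder; any real feed URL, including this blog's, would work the same way.

    # Minimal sketch of an RSS 2.0 reader: download a feed and list its items.
    # The feed URL below is a hypothetical placeholder, not a real address.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.edu/blog/feed.rss"  # substitute any real RSS feed

    def read_feed(url):
        """Fetch an RSS 2.0 feed and return (title, link) pairs for its items."""
        with urllib.request.urlopen(url) as response:
            xml_data = response.read()
        root = ET.fromstring(xml_data)          # the <rss> element
        channel = root.find("channel")          # RSS 2.0 wraps items in <channel>
        items = []
        for item in channel.findall("item"):
            title = item.findtext("title", default="(untitled)")
            link = item.findtext("link", default="")
            items.append((title, link))
        return items

    if __name__ == "__main__":
        for title, link in read_feed(FEED_URL):
            print(title, "->", link)

An aggregator simply does this for many feeds at once, on a schedule, and presents the collected items in one reading interface.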

Many readers of this blog probably already know about RSS, since it is one of the ways to subscribe to our feed.   But it is worth keeping this simple, jargon-free explanation in mind, along with the other 2-page pamphlets in the series, because they are so useful for explaining to others those things that we ourselves might use frequently but have difficulty articulating.  As it does so often, Educause has provided an important service to the world of technology in higher education.