All posts by Kevin Smith, J.D.

What makes a journal valuable?

For almost 90 years, librarians, faculty authors, tenure review committees and publishers themselves have relied on a single measure – the impact factor – to determine the relative quality of different scholarly journals. Impact factors are based on the number of times articles from a particular journal are cited in other scholarly articles. The citations to articles in one journal are cumulated to calculate the impact factor. It is fairly obvious that this system has some problems, however. For one thing, frequency of citation is a poor marker for quality, since not all citations of a work are positive and approving. To posit an extreme example, many articles that cite one specific article as a particularly bad example will boost the citation rate for that article and could raise the impact factor of the journal that published the flawed study. Also, journals are not all of equal quality or influence (which is the point, after all), so many citations from peripheral journals may not be as important as one or two citations in the really influential and universally read publications. Impact factor can flatten these distinctions in regard to a single article, although cumulation over time should cause the “best” journals to rise to the top.
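In schematic form, and using invented numbers purely for illustration, the basic calculation looks like this:

```python
def impact_factor(citations, citable_items):
    """Simplified two-year impact factor: citations received this year
    to a journal's articles from the previous two years, divided by the
    number of citable items the journal published in those two years."""
    return citations / citable_items

# A hypothetical journal whose 200 articles from the prior two years
# drew 500 citations this year scores 2.5.
print(impact_factor(500, 200))
```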

A new measure of journal quality, called the Eigenfactor, tries to address this last problem by starting with an evaluation of journal quality and assessing article impact on that basis. As their explanation of their methods says,

“Eigenfactor provides a measure of the total influence that a journal
provides, rather than a measure of influence per article… To make our
results comparable to impact factor, we need to divide the journal
influence by the number of articles published.”

Leaving aside the complex mathematics explained at their site, the Eigenfactor is based on an algorithm that maps how a hypothetical researcher would move from article to article based on cited references. This mapping yields a measure of the amount of time that researcher would spend with each particular journal. The score of a journal is based on that finding, and the influence of articles is measured by the influence of the journal in which they are published. This method corrects for peripheral citations and, it is claimed, for different citation patterns in different disciplines.
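A toy version of that random-walk idea can be sketched in a few lines of Python. The citation matrix here is invented, and the real Eigenfactor calculation adds refinements (a “teleportation” term, exclusion of journal self-citations, per-article normalization) that are omitted from this sketch:

```python
import numpy as np

# Invented citation network: entry [i][j] counts citations from journal j
# to journal i. Normalizing each column gives the probability that the
# hypothetical researcher, currently reading journal j, follows a cited
# reference to journal i next.
citations = np.array([
    [0, 3, 1],
    [2, 0, 1],
    [1, 1, 0],
], dtype=float)
transition = citations / citations.sum(axis=0)

# Power iteration: follow citations repeatedly until the share of time
# spent with each journal stabilizes (the leading eigenvector).
time_spent = np.full(3, 1 / 3)
for _ in range(100):
    time_spent = transition @ time_spent

# time_spent now holds each journal's share of the researcher's reading;
# a larger share indicates a more influential journal.
```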

Both of these methods, however, measure the quality of journals only from within the relatively closed world of traditional periodical publication. Can we imagine ways of assessing journal quality that can account for external factors and hence for the changes that are occurring within scholarship?

The advent of online aggregators of journal content has offered one relatively simple external measurement of journal impact which librarians have been quick to embrace – cost per article download. It used to be very cumbersome to tally which print journals were most used in a library, based on how often copies were picked up and reshelved. Now databases offer constantly updated counts of downloads, and dividing the cost of a database by its download count provides a measure of where collections budgets are best spent. Since heavy use drives the cost per download down, this metric can also serve as a rough indication of quality or, at least, influence.
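The arithmetic is trivial; with made-up figures:

```python
def cost_per_download(annual_cost, downloads):
    """Cost per download: what the library pays for a resource divided
    by the number of article downloads recorded for it."""
    return annual_cost / downloads

# A hypothetical database costing $30,000 a year and logging 12,000
# downloads works out to $2.50 per article read.
print(cost_per_download(30000, 12000))
```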

The real question I have, however, is how to assess the importance of traditional journal publication vis-à-vis newer, informal means of communication that are growing in importance amongst scholars. As blogs, wikis and exchanges of working papers via e-mail grow, scholars are getting their inputs and influences from new sources, and web publication of various kinds often supplements, and occasionally supplants, traditional publication. As the ACRL’s recent paper on “Establishing a Research Agenda for Scholarly Communications” puts it,

“Extant measures may suffer from being tightly coupled to traditional
processes while also inhibiting the application of other measures of
value. In the new digital environment, activities other than traditional
or formal publication should be valued in the reward structure for scholarship.”

I know of no metric that can yet account for the variety of informal publications and their relative influence. That, of course, is why it is part of a research agenda. As these informal, digital means of sharing scholarly work become more common, one of the principal functions of traditional publication – that of communicating the finished products of research – may become less and less important. Other functions, such as registration, certification and preservation, may continue to rely on traditional journals for a longer time. But the academic world needs to look carefully for ways to evaluate and compare the influence of a variety of new communications if it is to value scholarship based on its true impact.

By any other name?

Last week Paul Courant, Dean of Libraries (and formerly Provost) at the University of Michigan, posted a thoughtful blog entry on “Why I hate the phrase scholarly communications.” He is kind enough to say some nice things about this blog in his post, for which we are grateful, but I don’t want to let the glow of flattery distract me from addressing the excellent point he is making. “Scholarly communications” is a confusing term that conveys very little information to anyone outside of a circle of initiates within academia.

Even amongst the handful of academic libraries that have appointed positions with scholarly communications in the title there is wide variation in how that role is understood. For some a scholarly communications officer is primarily a copyright consultant, for others an advocate for digital publishing, for some an advocate for legislative change and for yet others a collections librarian trying to deal with alternative publications and journal subscriptions. As Courant points out, what all the various tasks have in common is attention to the business of scholarly publishing – the economic, legal and physical mechanisms by which scholarship is disseminated. Functionally, one might call a scholarly communications program that point (or points) at which an academic library is engaged with scholarly publishing in a role other than as a consumer.  Attention to this bundle of concerns, however, extends well beyond the library at many institutions, and it must do so if real change is to occur.

At Duke we became aware of the naming problem when the new Libraries’ home page included a link for “Scholarly Communications” that was very seldom followed.  We decided to rename that link “Copyright and Publishing” — the topics actually discussed in this space — in hopes of attracting more readers. Certainly for faculty the latter name identifies concerns they often are very conscious of, while the former likely does not.  I sometimes wonder if “Copyright and Publication Librarian” might not be a more accurate and descriptive title for my position.  Yet in the final analysis I am not ready to scrap the phrase “scholarly communications” just yet.

Terms of art are always difficult to handle. To take an example from my other profession, which is laden with them, a lawyer writing a brief who wants to argue that some element of her case is so obvious that no evidence for it need be adduced will use the phrase “res ipsa loquitur”; if she does not, a court will think her poorly trained. But if she uses it when talking to a client, she is guilty of poor professional judgment; attorneys must avoid obfuscation when explaining law and strategy to lay people. Terms of art are shorthand means of communication within a community of practitioners but they require explanation and clarification outside that “inner circle.” If we were to adopt Courant’s suggestion that we simply speak of “publishing” instead of scholarly communications, we would encounter a different confusion, but the same need to explain to the uninitiated exactly what we mean. Scholarly communications is now a recognized term within much of the academy, but like many such terms it is foreign to those outside the ivy-covered walls. I plan to continue to treat my oddly uncommunicative title as a teaching opportunity and decide in each instance whether I am better served by using it (and often having to explain what it means) or by substituting a longer but more descriptive phrase in those situations where the term of art will fail entirely to gain attention from the audience I am seeking.

Should I register my copyright? (weekly widget)

It is no longer necessary to register in order to have copyright protection, just as it is not required anymore to have the symbol (c) attached to a work in order to protect it. Copyright protection is automatic, starting as soon as a work is fixed in tangible form. But registration is still important in some situations. You must register a work before you can sue someone else for infringement, and registration creates a presumption that you own a valid copyright. Also, registration within certain time limits makes it possible to receive a larger damage award and attorney’s fees if a copyright owner can prove infringement. So registration is a good idea to protect the economic value of a work, but it is not required; each creator can make a decision about whether or not registration will best serve that individual’s interests.

Registration is accomplished by filing a form, found on the Copyright Office’s web site, along with a copy of the work being registered and a fee, which is currently $45.

Desperate ploy, or copyright coup?

In the digital age, it is hard to imagine that personal photocopying still poses much of a worry for copyright owners. Isn’t the real problem, after all, the ability to make perfect copies and to share them instantly with thousands of others? Traditional photocopying poses neither of these dangers, and personal copying is a long settled fair use, isn’t it?

Not, apparently, for Access Copyright, the Canadian copyright licensing agency that, like its US counterpart the Copyright Clearance Center, collects and distributes permission fees for various uses of copyrighted material. Access Copyright has recently filed a lawsuit seeking 10 million dollars – the largest damages award ever sought for copyright infringement in Canada – from the office supply chain Staples. Their claim is that Staples should be liable for infringing copying done by customers on equipment provided by the stores. There is a news report on the suit from the Canadian Press here, a negative assessment from P2Pnet here, and a comment from a Canadian professor of IP and technology law here.

To prove secondary liability on the part of Staples, Access Copyright will have to convince a court that Staples should be held responsible for copying done by its customers. As Professor Geist points out, that may be a difficult hurdle to clear. In Canada, as in the US, liability for those who merely supply the equipment to make copies is rare; the US provides statutory protection for libraries in such cases and the Canadian Supreme Court has established a similar “presumption” in favor of Canadian libraries. Explaining why that presumption should not apply to Staples will be a challenge for this lawsuit.

But the issue that should really worry us, the issue that makes this a radical attempt to change the terms of the copyright bargain rather than merely a desperate ploy to protect a new source of revenue as traditional sources dry up, is that Access Copyright will have to show that the personal copying done by customers is direct infringement of copyright. Only if that is true can Staples be held secondarily liable for providing the means for that infringement. But personal copying has been almost universally believed to be fair use (or, in Canada, “fair dealing”). Students have made single copies of journal articles and book chapters for their own study for as long as photocopiers have existed, and consumers have made personal copies of TV shows with their own VCRs with the blessing of the US Supreme Court. So what has changed?

The clue is in the fact that this suit was brought by a licensing agency, not by publishers or authors. What we are seeing here is a new assertion that personal copying was never legal, only tolerated by copyright owners until they could create a mechanism to collect payments. The same digital technologies that have allowed so much infringement also now allow content owners to efficiently offer licenses and collect payments for individual uses that could never have supported a market before. Although it is still more efficient to sue the alleged contributory infringer instead of the consumer who is the direct infringer, this saber rattling by a licensing agency should tell us quite clearly that content owners intend to move toward a pay-per-use model. If such suits are successful, every consumer-made copy logged at a store or even at a library photocopier could be subject to small payments, which would be administered through an online licensing agency.

At a recent conference in Washington, DC, Cary Sherman, the President of the Recording Industry Association of America, refused to acknowledge that personal copying of a music CD for listening on an individual MP3 player was fair use. Instead he said that this likely was infringement, but that the industry had agreed internally not to pursue such cases. The Canadian lawsuit suggests that, if a precedent can be set regarding the much less contested area of personal photocopying, any such forbearance around consumer copying will quickly become a thing of the past.

So what is in the public domain? (weekly widget)

The public domain, according to Duke’s Center for the Study of the Public Domain, “is the realm of material—ideas, images, sounds, discoveries, facts, texts—that is unprotected by intellectual property rights and free for all to use or build upon.” In the United States, anything that was published before 1923 is in the public domain. Works published between 1923 and 1963 may be in the public domain, if they were published without notice (the symbol (c) with a date and name), or if the original copyright was not renewed after the first term of 28 years. It is often difficult to be certain about this, although the database of renewal records made available by Stanford University is a big help. Government works — works created by government employees (but not necessarily independent contractors working for the government) — are also in the public domain because the copyright law does not allow an initial claim of protection in such works. Works published with a Creative Commons license may also be in the public domain, although usually they are partially protected by copyright but available for non-profit reuse. Unpublished works are in the public domain if the author died over 70 years ago. It is important to note that all of these rules have some additional complexities; this chart by Peter Hirtle is very useful for sorting out the intricacies of copyright terms.
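Purely as an illustration of how these rules fit together (and deliberately ignoring the many complexities just mentioned), the decision tree might be sketched like this:

```python
def us_public_domain_status(pub_year=None, had_notice=True, renewed=True,
                            author_death_year=None, current_year=2007):
    """Very rough sketch of the US rules summarized above; a real
    determination has many more branches (see Peter Hirtle's chart)."""
    if pub_year is not None:
        if pub_year < 1923:
            return "public domain"
        if pub_year <= 1963 and (not had_notice or not renewed):
            return "public domain"
        return "possibly protected"
    # Unpublished work: public domain if the author died over 70 years ago.
    if author_death_year is not None and current_year - author_death_year > 70:
        return "public domain"
    return "possibly protected"

print(us_public_domain_status(pub_year=1920))                 # public domain
print(us_public_domain_status(pub_year=1950, renewed=False))  # public domain
```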

Most importantly, facts and ideas are in the public domain, since copyright only protects expression. Patents, however, do protect ideas, so the idea of a patented invention is not free for others to use without a license, while ideas contained in copyrighted expression are.

P2P and New Business Models

Peer-to-peer file sharing is usually not a scholarly communications issue in itself. Most such activity involves the infringing reproduction and distribution of music and video files, and it is more of a problem for colleges and universities than a benefit. Nevertheless, there are legitimate forms of file-sharing that happen at universities (and between them), and the big danger that recreational file swapping poses to schools is that draconian measures to control the illegal activity will also inhibit legal and productive collaboration.

Each time Congress proposes to address file-sharing at universities, this is one of the concerns that unites the higher education community against the proposals. Another concern is that the cost of implementing new mandates will be very high, even though university networks account for only a small portion of the overall problem. The recent proposal in Congress (see article here from the Chronicle of Higher Education) is a case in point. The proposal to require that universities develop a plan to address file-sharing is a little bit insulting – most schools already have a plan – and the instructions to offer alternatives to illegal music downloading and to explore technological solutions to the problem are unfunded mandates that could cost hundreds of millions of dollars. And filters that stop music sharing may also inhibit legitimate collaboration; the history of Internet filters suggests that they are often more effective at preventing legal activity than illegal.

The problem posed by illicit file-sharing will not be solved by increased enforcement measures; the genie is already out of the bottle in that regard — P2P swapping has grown beyond the bounds of any attempt to stop it using either law or technology. What is needed to curb the growth of illegal P2P is a set of business models that make legal acquisition of digital music and movies more attractive than the illegal alternatives. Georgia Harper from the University of Texas (see her blog here) has been a vocal advocate of business model development as a solution to some of our current copyright problems, and a conversation between Georgia and some speakers at a recent conference caused me to start wondering what such business models would look like.

One possibility came to my attention (rather belatedly, I suppose) while watching a football game on Saturday. Verizon Wireless was heavily advertising its V-Cast Song ID service, which allows a user who hears music that they like to capture a sample of the audio, identify the song and purchase a copy directly from their cell phone (see news report here). This, it seems to me, is exactly the kind of value-added service that can move listeners back to legal music downloading services, and it represents a much more positive solution to the problem of file-sharing than any of the legal remedies yet proposed.

How long does copyright last? (weekly widget, a little late)

The original term of copyright protection in England was 14 years. In the US it began, in 1790, at, potentially, 28 years (a 14 year term that could be renewed once), then went to a system of two terms of 28 years, so that a renewed copyright lasted for 56 years. In 1976 we changed our law dramatically. Copyright became automatic as soon as a work was “fixed in tangible form,” and the copyright term was based on the life of the author. After another term extension in 1998, copyright in the US now lasts for the life of the author plus 70 years. For works created anonymously, as works for hire, or by a corporate author the term is 95 years from first publication or 120 years from creation, whichever is shorter. These changes mean that the public domain is barely growing at all in the US, since everything is protected automatically and it is now protected for a very long time.
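Stated as a calculation, and as a bare-bones sketch of the current rule only (leaving aside the many special cases for older works), the term works out like this:

```python
def us_copyright_expiry(author_death_year=None, pub_year=None,
                        creation_year=None):
    """Post-1998 US term in simplified form: life of the author plus 70
    years; for anonymous, work-for-hire, or corporate works, 95 years
    from publication or 120 years from creation, whichever is shorter."""
    if author_death_year is not None:
        return author_death_year + 70
    return min(pub_year + 95, creation_year + 120)

print(us_copyright_expiry(author_death_year=2000))             # 2070
print(us_copyright_expiry(pub_year=1980, creation_year=1978))  # 2075
```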

To Assign or Not To Assign?

The International Association of Scientific, Technical and Medical Publishers issued a statement last month on the benefits to authors of assigning copyright to publishers. The thrust of the statement is that publishers are better placed than authors to defend against plagiarism and copyright infringement, to ensure broad dissemination of the articles in question, and to manage issues like requests to reprint and migration to new formats. Each of these points is very debatable, and Peter Suber provides both excerpts of the document (which is itself very short) and a comment that refutes the assertions listed above in a very concise and competent way. Not surprisingly, his conclusion is that publishers’ primary concern is to protect their own interests and that a concern for authors’ rights is, at best, secondary.

One point on which Suber and the STM publishers agree is that a complete assignment of copyright need not preclude authors from making their work available in open access through a personal webpage, institutional repository or disciplinary archive. Even when faced with a demand to assign the copyright, authors may negotiate to retain the right to deposit their work in the ways suggested, as well as to retain other rights. There seems to be little doubt, and the STM publishers do not even argue the point, that open access deposit is a benefit to scholarly authors. But authors will have to decide for themselves if assigning copyright while retaining that right really serves their best interests or whether they should negotiate to keep their copyrights and give the publisher a more limited permission to publish.

Second thoughts

On Google — the New Yorker has a learned and fascinating article on the Google Library project this month, by historian Anthony Grafton. The Google project has gotten inordinate praise in some quarters, as well as its share of criticism (see here, for my contribution to the latter). But Grafton’s article is neither wholly critical nor wholly laudatory; his is an attempt to place Google in the history of efforts at building a universal library and to realistically assess what can actually be accomplished. He points out that a truly comprehensive history of humanity, which some have claimed Google will provide, will still remain out of reach. For example, much “gray” literature and archival material will never see the light of scanning, nor will the cultural production of many of the world’s poorest countries.

This latter point is especially troubling. Poor countries are not just consumers of cultural production; they also produce it. The digitization of so much western/northern literature could have two negative effects on this production. One would be to push developing world literature further to the margins in the developed world. The other is that, in so far as technology is available within those developing countries, the easy access to material through Google could marginalize a country’s own cultural production even within its borders.

Nevertheless, Grafton is properly amazed at the level of access that digitization has made possible. As he says, picking up his opening theme, “Even [Alfred] Kazin’s democratic imagination could not have envisaged the hordes of the Web’s actual and potential users, many of whom will read material that would have been all but inaccessible to them a generation ago.” Digitization offers great things, but a realistic valuation of those benefits recognizes that no single means of access should replace all the others; the Internet will continue to coexist with libraries, archives and whatever the future holds that we can not yet imagine; all will be part of any genuinely comprehensive look at human history.

On Second Life — On a less exalted plane, the New York Post reported last week on a lawsuit filed by and against Second Life entrepreneurs alleging copyright infringement of products designed and sold entirely within the virtual environment. See another comment on the lawsuit here. As the comment points out, many educators are looking closely at the educational potential of Second Life and other virtual worlds. This lawsuit raises some interesting questions that will need to be answered in order to exploit that potential. For example, do real world laws protecting the rights of creators even apply to Second Life? Is copying someone else’s design in Second Life stealing, as the plaintiffs allege, or is it merely part of a giant “video game” that should not have real world legal consequences? The answer to that question should be a prerequisite to placing educational content into Second Life; teachers typically want to protect the content they produce, or at least share it on their own terms. Whether Second Life will be subject to real world laws, intra-world regulation amongst its members, or merely arbitrary decisions enforced by Linden Labs, its owner, will have a profound impact on how much time, money and content educators are likely to invest in Second Life.

Interestingly, the same defendant who argues that Second Life is a giant video game in which real world laws should not apply also claims that his home in Second Life was subject to an illegal search and seizure by the plaintiffs when they entered to photograph the allegedly infringing items. Just goes to show how hard it is for us to escape our real world notions of property and privacy.

What are the rights protected by copyright? (weekly widget)

Copyright is a set of exclusive rights. By exclusive we mean that the owner of the rights has the sole authority to permit or forbid covered activities. There are five basic things that a copyright holder can authorize or prevent — reproduction, meaning making copies of her work; distribution of the work; public performance of the work; public display of the work; and the preparation of derivative works. A derivative work is a work based on the original, like a translation or a film adaptation. All of these rights can be sold or transferred to others, and they can be divided up and sold to different parties.

It is important to note what rights are not given to the copyright holder. They do not have the ability to prevent private displays or performances, for example. Most importantly, there is no right to authorize or prevent uses of the work, as there is in patent law. A user is permitted to make use of a work they acquire without further permission as long as they do not copy it, make a derivative work, or offer a public performance or display. A user is also permitted to distribute the legally acquired copy of the work as they see fit.