UGC and you

A superb article has just been published in the American Bar Association Journal called “Copyright in the Age of YouTube.”  Starting from the story of the dancing baby, whose 30-second home movie was the subject of a DMCA takedown notice for allegedly infringing copyright (a story I have written about before), the article does a really nice job of outlining the current issues about how copyright law addresses, or fails to address, the explosive growth of user generated content on the Internet.

Steven Seidenberg, who is described as a lawyer and freelance journalist, has done an enviable job of describing complex issues in a readable and even enjoyable way.  I recommend the article for anyone interested in how the law struggles to keep up with rapid and dramatic changes in technology.  The analogy between trying to police user generated content and playing “whac-a-mole” is just right.  But apart from its general quality and usefulness, there are two points made in this article that I want to highlight.

First, it is fascinating to see the suggestion made by Seidenberg that the Digital Millennium Copyright Act, which was passed in 1998 and took effect in 2000, may already be out-of-date.  I have often remarked about the difficulty of applying a copyright law written for the photocopier to the Internet age; Seidenberg goes a step further by noting that even the DMCA was written for a simpler set of Internet technologies — bulletin boards and Usenet groups — and may not be able to account for the huge popularity of user generated content that is now born on the Internet.  Seidenberg’s neat summary and comparison of two court cases dealing with UGC — the as-yet-undecided case of Viacom v. YouTube and a less well-publicized decision involving Veoh Networks — explains the interplay of laws and the uncertainty of results in this environment.

The second point that seems significant to me is the complaint, sometimes heard about the DMCA and the ruling in the dancing baby case, that they place too great a burden on content companies by requiring them to evaluate the possibility of fair use before they send a takedown notice.  Although I understand the argument that a business needs to streamline those processes that merely protect profits and do not generate them, it is hard to be too sympathetic when, in the higher education environment, we are called on to make many decisions about fair use every day, usually without recourse to a stable of lawyers.  The law of fair use is intentionally, and I think correctly, open-ended and flexible; it does not lend itself to streamlined or automated processes precisely so that it can remain useful as new innovations and technologies come along.  The burden of having to make individual fair use decisions is shared by users and content creators alike; it may not be efficient, but in the end the social benefit of doing things this way clearly outweighs the costs.

The last part of Seidenberg’s article addresses this general point as it discusses attempts to convince Congress to shift more of the burden for copyright enforcement off of the big content companies.  If they have not yet been successful in getting courts to force Internet providers like YouTube to police copyright for them, legislative efforts like the PRO-IP Act to get the government more involved in protecting these private rights have borne more fruit.  Whether taxpayer-funded enforcement of private copyrights will ultimately squelch the growth of user generated content is an open question, and it is one that may play a huge role in defining the future for creativity and innovation.

Don’t let this happen to you.

I admit that what caught my eye in this story is the unique name of the band involved — Death Cab for Cutie — and the fact that I know this to be one of my twenty-year-old niece’s favorite acts.

All that aside, the story is an object lesson in the problems with transferring copyright without careful consideration, and versions of this problem are occurring every day in academia.

What happened to Death Cab for Cutie is that they posted an embedded YouTube video of themselves singing one of their own songs on their own website.  Except, of course, that they do not own the rights in their own music, having transferred those rights, in one way or another, to their record label.  So, as this report indicates, they received a “takedown” notice alleging copyright infringement from their own label, Warner Music Group.  The video is gone now, and DCFC is not able to share their own music with their fans, even though all sides must realize that doing so would increase sales.

Likewise, numerous academics have assumed that they can post their own work to personal websites, even after they have signed publication agreements.  When those agreements transfer copyright, however, this assumption is likely to be wrong.  There are lots of stories, unfortunately, of academic authors receiving “cease and desist” letters similar to the one the band got, in which their own publishers inform them that, as the (now) owners of the copyright in the scholars’ work, they do not permit the authors to post the works they wrote.

The lesson here is twofold.  First, once you sign a publication agreement, it controls the distribution of rights and it is dangerous to assume you can continue to use your own work as you wish.  It is important to read these agreements and to abide by their terms.  Second, however, is the equally important lesson that one can negotiate the distribution of rights within these agreements.  Don’t wait till after the fact to read the agreement; read it before you sign and negotiate for the right to use your own work in ways you will want or need in the future.

Death Cab for Cutie probably had little flexibility in their relationship with their record label and, unfortunately, they did not learn until late in the game that they had sold their rights.  For academic authors it is much more likely that, with a little forethought, similar problems can be avoided.  All it takes is attention to the terms of a publication agreement and consideration of exactly what rights will be beneficial for you, as an author, to retain.

Obama, (c) and the CCC

As a number of media outlets reported, the White House webpage changed over to an Obama presidency before nearly any other action was taken by the new administration.  In fact, the initial posts on the new blog that fronts the page were dated Jan. 20 at 12:01 pm, only one minute after the Twentieth Amendment says that the new president’s term began.

From my (warped) perspective, the most interesting part of this new page is its copyright policy.  For one thing, it does not appear that the Bush administration had any copyright policy on its web page; at least, there was no copyright policy link on the three instantiations of WhiteHouse.gov that I looked at on the Internet Archive’s Wayback Machine (yes, the Bush administration is already “waybacked”).  There is a privacy and security policy linked to those pages, but nothing about copyright.  The presumption, therefore, is that the White House was claiming copyright in whatever was open to protection on those sites, which probably was not much.

By contrast, the Obama White House begins its policy by noting that materials on their site that are produced by government employees as part of their official duties are not subject to copyright protection.  This was true during previous administrations, of course, but it is nice to see official recognition of the fact.

Perhaps most exciting, however, is the use of a Creative Commons Attribution license for all content found on the White House page that is created by third parties, “except where otherwise noted.”  For the White House to employ a Creative Commons license is, I think, a wonderful affirmation of the value of the CC license as a tool and a recognition of the fact that there are more ways for creators to serve their own interests than simply to rely on “one size fits all” copyright protection.  Here the White House sets an example that many in the academy ought to follow; I am very proud of the fact that the Duke University Libraries placed most of their web content under a CC license over a year ago and now the Obama administration is, presumably, emulating us.

I am much less comfortable, however, with the last paragraph of the White House policy, which asserts a unilateral right under the DMCA to terminate access for “repeat infringers.”  There is no indication of what kind of infringement could get one banned from the White House site, only a vague admonition to “respect the intellectual property of others.”  The concern is that, if the White House takes the same approach as many other site owners, mere accusations of infringement could make one a repeat infringer, and little account would be taken of the possibility of fair use.  Also, no mention is made of the possibility, allowed for in the statute (see, especially, section 512(g)), that an accused infringer could dispute the accusations and have his or her content re-posted until such time as the accuser decides to file suit.

The Obama administration, in its website and in other policy statements, has indicated a new commitment to openness and accountability.  The embrace of a CC license, and campaign statements supporting fair use, offer real signs of a balanced approach to intellectual property protection.  It is to be hoped that these signs point in the direction the administration will move, and that that last paragraph of the copyright policy is just obligatory and carelessly worded boilerplate.

Debating Internet regulation

Last week Federal Communications Commissioner Robert McDowell spoke with a small group of Duke administrators about a wide range of topics.  In response to one question (which was, I have reason to know, deliberately provocative), Commissioner McDowell, who is a Duke alum, gave a pretty ringing endorsement of the unregulated Internet.  He referred approvingly to the Internet as an open environment that has, throughout its history, been free of government regulation.

McDowell chose to ignore, in these comments, the pre-history of the Internet as DARPAnet, a creation of the Defense Department’s Advanced Research Projects Agency.  But really his position is the one from which I want a government regulator to start; a stance of healthy skepticism toward regulation is the best way to ensure that careful thought precedes regulatory enactment.  While suspicion of regulation is almost always a justified foundation, however, it is not necessarily the final word on the matter.

The context of the question Commissioner McDowell was answering was ‘Net neutrality, and in that context it is particularly easy for the FCC to oppose regulation, since that is the position favored by the major telecoms.  But it is far too simple to say that as long as the government keeps its hands off, the Internet will stay unfettered and equally accessible to all.  Commissioner McDowell clearly knows this, and his assertion was that competition is the best way to prevent private entities from closing off the Internet pipes to certain types of traffic.  But he also noted that the economic downturn has delayed the implementation of additional pipes, and it is still true today that the backbone of the Internet is in the hands of only a few major corporations.

The fear here is that these companies may find it desirable to implement differential pricing — charging more for certain kinds of traffic — and that regulation might be necessary to preserve the openness that has, so far, been a hallmark of the Internet.  ISPs might, for example, decide that voice-over-internet phone services compete with another part of the business of their parent telecoms and introduce higher prices for VOIP to choke off such services.  UPDATE — As this report indicates, this is a very real concern that the FCC continues to monitor.

A similar decision to charge more for high-bandwidth uses could be implemented in a misguided attempt to prevent video piracy.  Illegal video downloads, of course, use a lot of bandwidth, but so do perfectly legal file transfers.  The danger with these kinds of “solutions,” whether they are differential pricing, filtering or agreements between ISPs and content companies, is that they are likely to exclude too much content and too many users.  When this happens, the free speech goals which copyright is meant to serve are undermined, often in the name of copyright protection.

The recent announcement of a new anti-piracy strategy from the RIAA, and the continued behavior of YouTube toward repeat notices of copyright infringement, illustrate this danger.  The RIAA has agreements with some ISPs (it is not saying which) to cut off Internet access for those accused of repeated illegal downloading.  But we know that the RIAA has not been very careful about its accusations in the past, so there is a real concern that users will lose access based on inaccurate information and poorly substantiated charges.  And even before the RIAA’s new strategy is put in place, we know this kind of abuse is happening.  Here is a report from the Ars Technica blog about a case where what is quite likely to be fair use — the posting of film clips on YouTube to augment an online critical essay — has led to the author having his YouTube account shut down because of DMCA notices that claim infringement but do not have to prove it or take into consideration any of the myriad ways the uses on YouTube might be justified.  By disconnecting users after “three strikes” based on mere accusations, YouTube is already implementing the practice the RIAA is negotiating with ISPs.  And we can see that that process is ripe for abuse.
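To make the structural problem concrete, here is a minimal sketch, entirely hypothetical (the names, numbers and logic are mine, not YouTube’s or the RIAA’s actual systems), of what an accusation-driven “three strikes” policy amounts to:

```python
# A toy sketch (hypothetical; not any real platform's system) of an
# accusation-driven "three strikes" policy.  The flaw is structural:
# the strike counter increments on the notice itself, never on an
# adjudicated finding of infringement.

STRIKE_LIMIT = 3

class Account:
    def __init__(self, name):
        self.name = name
        self.strikes = 0
        self.disabled = False

def receive_notice(account):
    """Count a takedown notice as a strike without testing its accuracy
    or weighing the possibility of fair use."""
    account.strikes += 1
    if account.strikes >= STRIKE_LIMIT:
        account.disabled = True  # access cut off on unproven claims

critic = Account("film-essayist")
for _ in range(STRIKE_LIMIT):  # three unfounded accusations suffice
    receive_notice(critic)
print(critic.disabled)  # True: the user is silenced without any proof
```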

The moral here is that regulation of the Internet is a complex topic.  Reliance on the market alone will not always guarantee that the ‘Net will remain open and accessible on an equal basis for all.  As more and more basic and vital information and services become Web-based, such access must be preserved.  The trick will be to figure out the right moment and the right way to intervene, but the time will come when those decisions must be faced.

What is DRM really good for?

As the Library of Congress considers new exceptions to the anti-circumvention rules that legally protect the DRM systems many companies use to lock up digital content of all kinds, it is helpful to consider whether those protections really accomplish what they were intended to.

Digital Rights Management systems, or electronic protection measures, are technological locks that “physically” prevent uses that are infringing, as well as many uses that would be perfectly lawful if the locks did not stand in the way.  The justification for using DRM is that it is necessary to prevent the widespread infringement that the digital environment enables, and thus to protect the revenues of content creators.  Those revenues, it is argued, provide the incentive that keeps Americans creating more movies, music, books, etc.  This purpose seemed so important in 1998 that the Digital Millennium Copyright Act included rather draconian legal protection for DRM systems, making it illegal to circumvent them even when the underlying purpose of the use would itself be legal.  But the juxtaposition of two recent blog posts raises an interesting question about whether DRM really does what is claimed, and whether what is claimed is really its purpose in any case.

First is this report from Canadian copyright professor Michael Geist noting that for the third straight year sales of digital music (a prime type of content “protected” with DRM) have grown faster in Canada than they have in the United States.  This growth comes in spite (?) of the fact that Canada does not have the same legal protections for DRM systems that the US does.  Apparently the incentives for creativity are just as strong, or stronger, in Canada, where circumvention is not punishable, as they are in the US, where we are told that those who circumvent and those who market the technology to circumvent must be stopped lest creativity grind to a halt.  The reality, as Geist points out, is that “copyright is simply not the issue,” and government intervention to drastically strengthen the copyright monopoly has not provided the promised benefit.

So why is DRM really so important to big content companies?  On the Electronic Frontier Foundation’s blog, Richard Esguerra gives us a more convincing answer when he notes that Apple is finally dropping DRM from the music files it sells through its iTunes store.  The timing, he suggests, shows that the big content companies really use DRM to eliminate competition and maintain a captive market; as soon as that purpose becomes moot, the companies drop the DRM.  It is no surprise that DRM is a marketing liability, especially for music, where it often prevents users from moving files from one device to another.  As long as the expected benefits of reduced competition outweigh the loss of sales, DRM is defended as a vital part of the copyright system.  But it is abandoned without a qualm once it no longer serves that anti-competitive purpose and threatens instead to hamper profits.

If DRM systems really are being used primarily to suppress competition and prevent innovation, they are working directly in opposition to the fundamental purpose of copyright law that they were sold to us as supporting.  Read together, these two reports suggest that tinkering with exceptions, as the Library of Congress is charged to do every three years, is not enough; instead, the value of the whole idea of providing legal protection to DRM should be reexamined.

OA, RNA and Wikipedia

The recent announcement made on NatureNews that the journal RNA Biology will require authors writing for one of its sections to also post a page describing the work in Wikipedia set me wondering, and debating with a colleague, about the motivation here.  Bloggers at the Fischbowl and O’Reilly Radar see this as a big step for open access, and I am initially inclined to agree.  But the cynic in me has some questions.  Why Wikipedia, for example?  Couldn’t openness be achieved just as effectively by posting OA abstracts on the journal website, as many other publications do?  The answer, of course, is no, once one recognizes that the purpose is not openness for its own sake, but openness as a driver of commercial sales.  Wikipedia is a first stop for many seeking information on the Internet, and it is a top hit on Google for many searches.  If a large number of Wikipedia pages direct seekers of biological information to RNA Biology, presumably subscriptions and individual article sales will increase.

In these post-Google Books settlement days, we should not be surprised to see that limited open access is beginning to be seen as a technique to push more eyeballs onto pay-per-use sites.  And as with the Google settlement agreement, I find myself very conflicted in my reaction to this trend.

It is worth noting that RNA Biology, which is published by Landes Bioscience, is not an unqualified supporter of open access.  Based on its copyright policies, the SHERPA RoMEO database lists it as a “white” journal, meaning archiving is not formally supported.  Like all of RoMEO’s color categories, however, this does not adequately convey the complexity of the situation.  RNA Biology does make its entire contents available in open access one year after publication, and it offers an “author pays” immediate OA option for a relatively low price — $750, reduced to $500 if the author’s institution subscribes to the journal (at institutional rates that are 9x higher than individual subscriptions).  Finally, RNA Biology acknowledges, in its copyright transfer agreement, that authors retain the right to deposit their manuscripts in PubMed Central, as required by the NIH’s Public Access Policy.

All of this sounds good, but it is in that same copyright transfer agreement that one finds the policies that cause SHERPA to give this journal its lowest OA rating.  The journal requires a complete transfer of copyright from its authors, and essentially gives back only two very limited rights.  Authors are allowed to use their article in subsequent publications, such as a dissertation or monograph, and are allowed to make photocopies of the article (not digital copies) to distribute in classes they teach (even though this could well be a fair use anyway).  Notably, there is no provision for self-archiving either pre-prints or post-prints (which is why it is a RoMEO white journal), and it seems that only Landes Bioscience, not the author(s), is entitled to create derivative works from the article.  That provision (or lack of provision) is, to me, the most worrisome for scholarly authors, who seldom drop a topic “cold turkey” after publishing one article.

From all this I think there is an ambiguous message to be gleaned.  On the one hand, it is a good sign that publishers are beginning to see open access as a supporter of scholarly publishing rather than a competitor to it.  The recent experiment by university presses in publishing traditional books alongside on-line OA versions — James Boyle’s “The Public Domain” is an example — will show, I believe, that OA can increase awareness of a book or a journal and thus support sales of traditional publications.  But on the other hand, if OA is structured entirely with this purpose in mind it can prove to be a detriment rather than a support to the interests of scholarship and scholarly authors.  The CTA that authors for RNA Biology must sign suggests that this is the case here, and the Wikipedia mandate seems unlikely to ultimately benefit the academy as a whole, although I hope to be proved wrong in that prediction.  In any case, I think a close examination of all of the conditions around publication in this journal supports the continuing need for authors to negotiate and retain a right to self-archive, since that alone is a sure guarantee that OA will genuinely serve the interests of the author.

From foreign courts,

come two cases that offer interesting lessons for US observers of the copyright environment.

First, there is a case from Canada that allows us for once to be grateful for at least one aspect of US copyright law. In a case involving a parody newspaper that made fun of the coverage of the Middle East provided by the newspapers of Canwest, one of Canada’s leading media conglomerates, the Supreme Court recently ruled that there is no exception to infringement for parody. The court went on to cite its own earlier decision that freedom of expression is not a defense to copyright infringement. In that case, the court wrote that “the Charter [the Canadian Charter of Rights and Freedoms] does not confer the right to use private property – the plaintiff’s copyright – in the service of freedom of expression.”

The law in the US is entirely the other way on this point – parody is a well-established purpose that is favored in the fair use analysis, and fair use as a whole has been recognized as the safety valve in copyright law that supports free speech. It is easy to see why this is so; if a copyright owner could prevent parody by claiming copyright infringement, it would be possible to suppress a lot of speech that would otherwise be constitutionally protected. Consider the case of the parody of “Gone With the Wind” that tells the story from a slave’s perspective and is called “The Wind Done Gone.” The estate of Margaret Mitchell wanted to prevent publication of “The Wind Done Gone” and succeeded in convincing a district court to enjoin the book on the basis of a copyright infringement claim. Fortunately, the 11th Circuit Court of Appeals understood that our copyright law is not intended to suppress speech, and that sometimes use of another’s copyrighted work is necessary in order to express a particular point, especially when parody is afoot. Free speech, in this sense, trumps copyright ownership, and, on that point at least, Canada could take a lesson from the US.

The other case is from Great Britain and involves ownership of the copyright in a classic song from the 1960s – A Whiter Shade of Pale by Procol Harum. Unlike the Canadian case, this one applies a law very similar to US copyright law, but it does so to a very unusual and unexpected set of facts. Apparently, the justly famous organ part played at the beginning and in the middle of the song was written during studio rehearsals by a new member of the band, Matthew Fisher, who was hired to play a Hammond organ. After almost 40 years, Mr. Fisher challenged the copyright ownership in the song, claiming that he was a joint author with Gary Brooker, who originally wrote the song and has collected royalties all these years. There is a nice summary of the case here.

I rather think that I disagree with the result reached by the Court of Appeal back in April, but the decision is interesting and instructive for several reasons.

First, the judgment tries to divide the attribution right (a declaration that Fisher is, indeed, a co-author) from the right to receive royalties on the song in the future or to enjoin its exploitation without Fisher’s permission. This fundamental point is where the decision seems unwieldy and mistaken to me, but it is an interesting reminder of how other countries view the “moral” right of attribution, which the US does not recognize.

Second, Lord Justice Mummery (really!) does a very nice, careful job of picking apart the various threads of creativity and copyright in order to arrive at a reasonable decision about joint authorship. His decision is worth reading in order to understand the complexity of music copyright and the ways in which copyrights in different versions can layer on top of one another.  Since joint authorship is often a very important and debated issue for scholars, the careful and clear treatment of it in this case can be very useful.

Finally, there is a fascinating suggestion in the decision that “proprietary estoppel” might apply to defeat Fisher’s claim. This notion is ultimately rejected (although the same result is reached by a different route), but the discussion itself seems very significant to me. The concept being used here is very similar to “adverse possession” in the law of real property, and the effect of its application would be to give a user some real interest in a copyright if no objection has been made to the use over a long period of time. Essentially, Mr. Brooker argued that his exploitation of the whole copyrighted work over forty years gave him a “constructive” right in the organ portion that Fisher could no longer reclaim. I have suggested several times that something similar to adverse possession could be applied to copyright law to solve the orphan works problem, but this is the first case in which I have seen a court (albeit not an American court) take that possibility seriously.

Security blankets?

The report that some major music companies are considering a blanket licensing arrangement with college campuses, whereby the schools would pay into a central collecting agency and the music industry would stop its campaign of litigation, has, quite understandably, generated a lot of Internet buzz. Neither the technorati nor academia really seem sure how to react. To the folks at Techdirt, this is a terrible idea that would amount to a “music tax.” At the Electronic Frontier Foundation, on the other hand, this is official sanction for an idea they have been advocating for a long time. Ars Technica is more cautiously optimistic, and warns against a knee-jerk reaction that anything proposed by the music industry must be bad.

What caught my eye in all the debate, however, was a quote in the Ars Technica piece, attributed generally to EDUCAUSE, in which the licensing scheme was described as “a covenant not to sue.” Certainly an end to the lawsuits and threats of lawsuits directed against students who share music across campus networks would be welcome. But I find the phrase a dangerous gloss on blanket licensing schemes, and it prompts me to consider just how much security these blankets really offer.

The starting point, of course, is to recognize that a license, even a blanket license, is private law — a contract between parties that is binding only on those parties. Such licenses do not affect the rights or obligations of anyone who is not part of the agreement. Thus if three of the four major record labels signed up for a blanket license and promised not to sue college students, the fourth label could continue to pursue such lawsuits unimpeded.

Recognizing this legal situation is especially important for understanding the Annual Campus License offered to institutions by the Copyright Clearance Center, which is a blanket license that much more directly impacts the scholarly communications system. Under that license, some percentage of the publishers who license their works through the CCC (thus a percentage of a portion) agree to accept a blanket payment in exchange for permission to make all of the uses of covered content that the campus wants for the year. Uses covered by the license include classroom distribution, e-reserves, inclusion in a course management site and course packs, but not interlibrary loan. The license applies only to textual material produced by the participating publishers; music and video, as well as text published by non-participants, are simply outside its scope.

Campuses seem to approach this license in two different ways. Some see it realistically as but one tool in the struggle to use copyrighted content in responsible ways. For those campuses, the license may help save time and make costs more predictable, but it will not necessarily streamline the permissions process, since it still requires that each work for which permission is needed be checked against the list of participants in the license, with individual arrangements sought for items not included therein. Other campuses regard the ACL as a kind of covenant not to sue, assuming that they are safe, or at least safer, regardless of how careful (or not) their permissions process is, as long as they pay the large cost of the license. By this logic, finding the items used that are both infringing (i.e., not fair use) and not covered by the license would be a difficult and unrewarding task for potential plaintiffs, even if they do not participate in the ACL. And there might even be peer pressure within the publishing world not to go after ACL subscribers, since they have an agreement with, and are paying lots of money to, many of the big players. The CCC is careful not to endorse this view of the license, but there are persistent whispers along these lines in the academy and even some anecdotes about individual marketers making these types of assurances “on the QT.”
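For campuses taking the first, realistic approach, the triage the license still requires might look something like the following sketch (the participant list, field names and works are invented for illustration; the CCC’s actual coverage rules and tools are more involved):

```python
# Hypothetical permissions triage under a blanket campus license.
# Publisher names and works are invented; only text from participating
# publishers is covered, so every item must still be checked.

ACL_PARTICIPANTS = {"Publisher A", "Publisher B"}

def route_permission_request(work):
    """Decide how a requested course use gets cleared."""
    if work["publisher"] in ACL_PARTICIPANTS and work["format"] == "text":
        return "covered by the annual license"
    return "outside the license: seek individual permission (or rely on fair use)"

readings = [
    {"title": "Journal article", "publisher": "Publisher A", "format": "text"},
    {"title": "Book chapter", "publisher": "Publisher C", "format": "text"},
    {"title": "Documentary clip", "publisher": "Publisher A", "format": "video"},
]

for work in readings:
    print(f'{work["title"]}: {route_permission_request(work)}')
```

Note that even content from a participating publisher falls outside the license when it is not text, so the checking step never goes away.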

One aspect of these blanket licenses that I think deserves attention on college campuses is their resemblance to the “big deals” for periodical databases that many academic libraries signed onto in the past decade or so. In those deals, a high but relatively predictable price is paid for access to lots of content, a significant portion of which is probably not really usable in the specific setting. Some libraries have come to regret these deals and to long for the days of disaggregated purchases of content, when decisions could be made based on actual expected use: the classic cost/benefit analysis. A similar dynamic seems likely around blanket licenses, and institutions may have even less control over their costs with copyright licenses than they did with the big serials purchases. In the serials world, we typically negotiated around two cost control mechanisms – the ability to cancel some small percentage of little-used journal titles within the scope of the larger collection and/or a cap on the annual increase in subscription fees. It seems unlikely that the licensors for either the music or the publishing industry would agree to let campuses delete from the license some providers whose content they do not expect to use in order to reduce costs. And we just do not know what annual price increases will look like for these licenses. Finally, we should remember that these licensing deals, backed as they are by the threat of lawsuit, will be even harder to get out of, once a campus has signed up, than the big serials agreements have been. That fact, and its implications for budget planning, should give us pause as we consider how much security these blankets really offer.
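To see why those unknown annual increases matter for budget planning, consider a toy projection (all figures invented) of a blanket license fee with and without a negotiated cap on increases:

```python
# Toy five-year projection of a blanket license fee (all figures invented).
# A negotiated cap on annual increases is one of the few cost controls
# libraries have had in "big deal" negotiations.

BASE_FEE = 100_000  # hypothetical first-year fee, in dollars

def project(base, annual_increase, years=5):
    """Compound the fee at a fixed annual increase rate."""
    return [round(base * (1 + annual_increase) ** y) for y in range(years)]

capped = project(BASE_FEE, 0.05)    # e.g., a negotiated 5% cap
uncapped = project(BASE_FEE, 0.12)  # e.g., increases set by the licensor

for year, (c, u) in enumerate(zip(capped, uncapped), start=1):
    print(f"Year {year}: capped ${c:,} vs. uncapped ${u:,}")
```

Small differences in the assumed rate compound quickly, which is exactly why the absence of any cap, combined with the difficulty of exiting the deal, should worry budget planners.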

What is “value” in publishing?

The Scholarly Kitchen, a blog sponsored by the Society for Scholarly Publishing, is a source of opinion and debate that I have wanted to point out for some time.  I have finally been prodded to do so, or one might better say provoked, by this post from Kent Anderson called “Are Publishers Anti-Publishing?”, which cites a stream of news about how various publishers are abandoning their traditional business and challenges scholarly publishers to find ways to innovate their businesses.  In addition to the instances that Anderson mentions, one could note the report that the Christian Science Monitor has decided to give up its daily print publication and move predominantly on-line.

It is interesting to compare Anderson’s post with the op-ed that appeared in the Sunday New York Times from author James Gleick on “Publishing Without Perishing.”  Both pieces urge publishers to step up to the challenge posed by online availability.  Gleick points to a return to beautiful, durable books as the best hope of traditional publishing, while Anderson clearly envisions a very different response, although his advice is less clear than Gleick’s nostalgic vision.  Anderson suggests emulating Google, Facebook and Amazon, so he is clearly asking for a digital solution, not a return to producing print artifacts.

There are several points I agree with heartily in Anderson’s post, especially the call for traditional publishers to look for the value they can add to content, rather than trying to pare their offerings down to bare bones as so many newspapers have done.  Yet he does not seem consistent about that point when he cites Google as one of the successful models that publishing should emulate.  What value, we might ask, does Google add to content beyond ease of access?  Anderson refers to Google’s “appropriation” of the “STM impact factor model,” but surely Google’s relevance-ranking algorithm is a very different thing, employed for the very different purpose of facilitating access to the content that a searcher is most likely to want.  The impact factor model will not really have been “appropriated” until academic institutions start recognizing that downloads of an online work are themselves a legitimate metric for evaluating the quality of the work and the career of the creator.

Which brings me to where Anderson really goes wrong — his comments about how open access and institutional repositories are “anti-publishing.”  To get to this claim one must define publishing very narrowly, based on a traditional, “the way we have done it in the past,” standard; Anderson sounds a lot like Gleick at this point.  On-line, open access distribution IS publishing, of course, as the many peer-reviewed open access journals clearly prove.  What is most astonishing about Anderson’s discussion of these “anti-publishing” trends, however, is his claim that open access “devalues” scholarly content by “treating it as less than a commodity.”  How can one make such a claim about scholarly content when authors have been expected to give their writings away for free to publishers for many years?  Scholarly authors are used to thinking about the value of their work in terms other than economic, and those terms have been dictated, in part, by the business model of traditional scholarly publishing.

The value of scholarly work, for scholars, has never been based on the money it could earn, since they never saw a penny of that money and were, in fact, expected to pay for access to their own writings.  Often they were even expected to pay “page charges,” which makes the author-side fees now charged by many publishers for open access seem very familiar.  The point is that access and use, not economic gain, define the value of scholarly writing because they serve the scholarly authors’ need for recognition and impact; the cost of the wrapper in which the work was contained (the commodity) has never been a marker for value in the academic world, and it has lately become an impediment.

I fervently hope that scholarly publishers can find ways to add value to academic content, as Anderson challenges them to do.  But that task will be much more difficult if it is based on a narrow view of the value of academic work that begins and ends with the traditional way publishers have done business.  The search for new models of scholarly publishing will have to take into account the things that actually matter to academic authors and scholarly institutions.

Two cases that could shape copyright

Two interesting cases were reported in the past weeks by Zohar Efroni, a non-resident fellow at Stanford’s Center for Internet and Society.  One could have significant impact on the shape of US copyright law, especially as it serves to encourage or hamper technological innovation, and the other suggests, to me at least, an interesting international model that could help mold US thinking about fair use.

The first case, decided in August by the Second Circuit and recently appealed to the Supreme Court, involves the legality of the remote storage digital video recorder (RS-DVR) services that large cable companies want to offer to their customers.  In Cartoon Network v. Cablevision, the 2d Circuit reversed a lower court’s ruling and injunction, and held that Cablevision was not directly infringing the copyrights held by television companies when it allowed consumers to select programs that would be recorded and stored remotely, on servers owned by Cablevision, for playback to the specific consumer on request.

The RS-DVR argument raises once again the Sony Betamax case, in which the Supreme Court held that a substantial non-infringing use of a video cassette recorder — the fair use by consumers of “time-shifting” TV programs recorded for personal viewing — meant that the device manufacturer was not guilty of copyright infringement, even though it was acknowledged that there were potential infringing uses of the VCR.  This was a landmark case that made possible a great deal of technological innovation, since it removed the chilling effect that would have inhibited the invention of any new device or service that might be used to infringe a copyright.  The Grokster case about filesharing software gave the Court a second look at the Sony ruling, and opinions differ, even on the Court, about how much that doctrine was changed.  In Grokster, the Court ruled against a technology that did have potential non-infringing use because its actual use was overwhelmingly infringing and its marketing actively and knowingly encouraged infringement.  Now the Supreme Court might once again revisit the fair use underpinnings of so much technical innovation if it elects to hear an appeal of the Cablevision case; Efroni’s comments on the arguments asking the Court to intervene are here.

The other case is one from Germany and involves music sampling.  The judge in this case, reported by Efroni here, found that sampling does not infringe the copyright in the original music.  The result is much more sensible than similar approaches to the issue in the US, although it has several “catches” attached to it, as Efroni notes.  It is one of those catches, in fact, which captured my attention.  The decision is based on a provision of the German copyright law translated as “free use” (freie Benutzung), which is not the same as fair use in the US.  Efroni suggests that we think of free use in Germany as “an extreme version of the transformativeness element familiar from the U.S. fair use analysis.”  Since one of the problems with any discussion of fair use is separating the notion of transformativeness (which is not a defined element of fair use) from the exclusive right granted to copyright holders to authorize derivative works, I was struck by the possibility of separating transformative uses (under this category of “free use”) from the four-factor analysis.  The many recent decisions upholding transformative uses as fair use have pretty well jettisoned the four-factor approach in any case; once they decide that the purpose of the use is transformative, things like commercial versus non-commercial use, the nature of the original and the amount used are usually held to be irrelevant.  Impact on the market for the original is also usually discounted, since courts hold that the market for a transformative use does not compete with that for the original.

This transformative approach to fair use has been extraordinarily fruitful, supporting all kinds of scholarship and creativity.  But it threatens to swallow up the fair use analysis as it has been carried out for almost two centuries and as it was codified in section 107, automatically dismissing as an infringement any use at all that is not transformative and struggling to find transformation in any use that a court wants to allow.  So I think it might be worth considering adopting a provision like free use that would allow an uncompensated use of a prior work to create something new under a set of conditions more appropriate to that situation than are the four fair use factors.  Then fair use could be left in place to permit those socially beneficial uses that do not meet the standard of transformativeness but still serve the fundamental purpose of copyright law to encourage and protect creativity and innovation.
