An international perspective on statutory damages

It has been a long time since we discussed statutory damages in this space.  Statutory damages are, of course, the heightened monetary damages that rights holders can elect when they sue someone for infringement.  Instead of having to prove the actual harm they suffered, rights holders who elect statutory damages benefit from a presumption of harm that makes proof unnecessary.  In the U.S., statutory damages can be as high as $150,000 per infringing act (see 17 U.S. Code section 504(c)).  This is a number that the content industries love to throw around, especially as part of the highly fictionalized warning you see at the beginning of DVDs.

Back in 2009, when the recording industry was actively suing its own customers for copyright infringement because of file-sharing, statutory damages were briefly a hot topic after juries returned million-dollar verdicts against ordinary individuals for downloading files whose actual value was less than $100.  At the time I wrote about this issue, and also linked to another lawyer’s blog post which argued that these statutory damages were likely unconstitutional.  Since the RIAA has taken its campaign for ever-stronger copyright enforcement and ever-steeper penalties in a different direction, there has been less conversation about these disproportionate penalties.

This week, however, a development in Europe has reminded me that we should not let this issue drop.  Last week Poland’s Constitutional Court released a ruling which effectively declares Poland’s own version of statutory damages a violation of the Polish Constitution.  Polish law, it seems, enacts a similar policy of heightened damages for copyright infringement, well beyond ordinary judicial remedies, through a provision that allows tripling of the “respective remuneration that would have been due at the time of claiming it in exchange for the rights holder’s consent for the use of the work” (see Article 79 of the English translation of Poland’s copyright law here, on the WIPO site).  What this essentially says is that triple the actual harm done (the amount the rights holder would have been due) can be awarded as a form of statutory damages.  And the Polish Constitutional Court has now decided that that provision must be changed because it violates a constitutional provision ensuring equality of protection for property ownership.  It seems the judges are concerned that Polish copyright law gives copyrighted property a level of protection much greater than that given to other forms of property.

It is interesting to compare this situation to what we find in U.S. law.  We do not, of course, have the same provision about equal protection for property ownership in our Constitution.  The constitutional case against statutory damages is made more on the grounds of due process: the damages are so far in excess of the harm that they are unreasonable, disproportionate, and unfair to defendants.  Still, there is intuitive sense to the idea that copyrighted works are protected far more comprehensively and stringently than most other kinds of property.  If this is true in Poland, and the Polish Court thinks it is, it is certainly even more the case in the U.S.

Consider two points.  First, in the Jammie Thomas file-sharing case, the relationship between the actual harm — it would have cost about $24 for her to buy the songs at issue — and the 1.9 million dollar verdict against her was far more disproportionate than the triple damages that concerned the Polish court.  If a factor of 3 was too much for the Polish court to accept, a multiplier of nearly 80,000 ought to shock every U.S. court and every U.S. citizen.  Second, it is important to notice the different types of parties involved in the Polish case; it involved a cable TV network that apparently rebroadcast some films without a license.  So corporate entities were involved, and the Polish Court still felt that tripling the damages was unfair.  Yet in the U.S. we have allowed grossly more disproportionate damages to be awarded against private citizens.

The content industry often looks to Europe and to other international laws and agreements for leverage it can use to convince U.S. lawmakers to increase protection for copyrighted works.  Here we have an international court pointing the other way, showing us in the U.S. how out of whack our copyright law has become in the area of statutory damages.  Something tells me this will not be an example cited by the MPAA or the U.S. Trade Representative.  But as Congress and the Copyright Office discuss reforming the copyright law, this finding from the Constitutional Court of Poland should shame us into looking at statutory damages here in America and recognizing that this is a problem in desperate need of a remedy.


This is a solution?

Ever since it appeared, I knew I should write about this new report concerning orphan works that the Copyright Office issued earlier this month.  But, to be honest, I have been on vacation and have not had a chance to read the full report yet, only excerpts.  Fortunately, on Monday Mike Masnick from Techdirt posted about the report and absolutely nailed it.  So I have little to add, and simply want to direct readers to Mike’s post.

As Mike observes, the new CO report would mostly make a serious problem worse, in that it would make the use of orphan works more difficult rather than less.  The idea of creating a registry for users to register their proposed use is positively Kafkaesque; the real need is to be able to better identify rights holders, not users.  So why not provide incentives for rights holders to register, rather than creating a new registry that will probably not be used, since it is so counter-intuitive and will be unknown to the vast majority of putative users?

The Techdirt post correctly notes that the problem of orphan works increased exponentially after the U.S. made two changes in its law — the elimination of formalities and the extension of the copyright term of protection to life plus 70 years.  These changes were made because the U.S. joined the Berne Convention and other international treaties on copyright in the 1980s, so reversing them would be very difficult.  Still, the problem is worldwide, so maybe someday sanity will prevail at WIPO and these issues will be addressed directly, instead of through a kind of backwards approach that tries to solve the problem without addressing its root causes, and so ends up making things worse.  See the suggestions I made several years ago for solving the “Berne Problem” here and here.

The most troubling aspect of the Copyright Office’s new report is the disdain with which it treats fair use.  The U.S. is actually in a better position as far as uses of orphan works are concerned than most nations, because our judges were wise enough to create this doctrine over 150 years ago.  But today’s Copyright Office thinks it knows better; it believes that fair use is “of limited utility” in solving the orphan works problem.  Instead, we need more bureaucratic apparatus.  Worse, to get to this position, the CO presents the HathiTrust case, with its strong affirmation of fair use, as being about “the digitization of millions of non-orphaned works” (p.42).  This is ridiculous, of course; the HathiTrust corpus contains both orphan works and works for which rights holders can be identified.  The CO seems to take the position that since specific uses of orphan works were not ultimately adjudicated in the HathiTrust case, that case is not relevant to the application of fair use to the orphan works problem.  So although the report does recommend that any legislative “solution” to the orphan works problem should preserve the users’ ability to rely on fair use, the CO does not seem to feel that fair use is very helpful.  But that simply reflects the prejudice that the CO has about fair use, a prejudice that makes it an unreliable guide to copyright law in the U.S.

Who pays, and what are we paying for?

[ guest post by Paolo Mangiafico ]

I wasn’t at the Society for Scholarly Publishing’s annual meeting in Virginia last week, but was able to follow some of the presentations and discussions via the #SSP2015 hashtag on Twitter and some follow-up blog posts. Something that caught my eye yesterday was a post on Medium by @CollabraOA titled “What exactly am I paying for?” that summarized a panel discussion at SSP on the topic of “How Much Does it Cost?” versus “What are you Getting for/doing with the Money?” An Overview and Discussion of the Open Access Journal Business Model, (lack of) Transparency, and What is Important for the Various Stakeholders.

The post has summaries (and links to slides) of the presentations by panelists Dan Morgan (University of California Press), Rebecca Kennison (K|N Consultants), Peter Binfield (PeerJ), and Robert Kiley (The Wellcome Trust), as well as links to other readings on the topic, such as this article from a couple of years ago titled “Open access: The true cost of science publishing” by Richard Van Noorden in Nature.

A few things from the summary of the panel discussion that stood out to me (excerpted or paraphrased here):

  • From Robert Kiley’s discussion of the Wellcome Trust’s experience with paying article processing charges (APCs) on behalf of their funded authors: the average APC levied by hybrid journals (which publish both subscription and OA [open access] articles) is 64% higher than the average APC charged by wholly OA, or “born OA”, journals. Despite these higher prices, some of the problems the Trust has encountered, such as articles not being deposited to Europe PubMed Central, incorrect or contradictory licenses appearing on articles, and confusion as to whether the APC has been paid, were almost exclusively related to articles in hybrid journals. Robert asked: “Are we getting what we pay for?”
  • From Rebecca Kennison’s discussion on transparency of publishing costs, and how the initial APC for PLOS Biology was set when it was launched: it was based on the average price paid by authors publishing in that era’s top science journals, for page and color charges, etc. The thinking was that if biology authors were used to paying around $3000 USD to get published in a subscription journal, they would be able to transfer this to pay the APC for PLOS Biology instead. She noted how much of a role this $3000 price point has played in OA price-setting since the early 2000s. This is fascinating when you consider that it was a “What the Market Will Bear” price point, and not based on publishing costs. The desire for transparency is not so much to make publishers reveal all costs, or push publishers to offer services “at cost”, but to ensure that librarians and funders, or anyone paying an OA charge, are simply more aware, and sure, of what they are paying for, and whether it is the best use of funds. It is not a matter of caveat emptor, but emptor informari.
  • From Peter Binfield’s discussion of the relationship between cost and prestige: despite the fact that “born OA” publishers can be much more efficient, authors still seem to be willing to pay for things like “prestige” and “the best venue for discoverability,” where more traditional publishers are still perceived to have an advantage because of established “brands.”

This discussion resonated with a different one that has been playing out among anthropologists in the past few weeks, regarding whether and when to transition the long established journals of the American Anthropological Association (AAA) to open access, a process that has already begun with the high profile Cultural Anthropology journal.

In an editorial in the February 2015 issue of American Anthropologist, the editor, Michael Chibnik, argued that while he “cannot disagree with the rhetoric of those advocating open access for American Anthropologist” he also could not see how to make the finances work without continuing to rely on the existing subscription model via a publisher like Wiley Blackwell. While admitting “I do not know all the details of the financial arrangements between AAA and WB” (see discussion about the lack of transparency explored in the panel mentioned above) he briefly outlines why several alternative funding models he has heard about are unlikely to work, concluding “The obstacles to AA becoming open access in the near future may be difficult to overcome.”

This elicited several responses, from Martin Eve, who challenged many of the assertions in the piece, one by one; from the Board of the Society for Cultural Anthropology, who argued in a commentary titled “Open Access: A Collective Ecology for AAA Publishing in the Digital Age” that open access was the right thing to do despite the difficulties; and from Alex Golub, who wrote a blog post titled “Open access: What Cultural Anthropology gets right, and American Anthropologist gets wrong.”

The Society for Cultural Anthropology commentary points out that research libraries are key stakeholders in the emerging OA landscape, and potential partners with scholarly societies for new models of scholarly publishing. Both SCA and Golub reference some new projects like Collabra, Open Library of the Humanities, Knowledge Unlatched, and SciELO, that, in Golub’s words, “blur the distinction between journal, platform, and community the same way Duke Ellington blurred the boundary between composer, performer, and conductor” and are examples of “experiments to move beyond cold war publishing institutions.”

It’s not clear yet what financial models will ultimately prove successful and sustainable for scholarly publishing and scholarly societies going forward, but simply maintaining the status quo with its hidden and inflated costs and frequently vestigial practices is almost certainly not the answer. As Alex Golub concludes in his post:

The AAA wasn’t always structured the way it is today, and it may not be structured this way in the future. The question now is whether the AAA can change quickly enough to be relevant, or whether institutions like the SCA are the true future of our discipline. These are issues tied up with a lot more than just publishing: The shrinking of academe, the growing role of nonacademic stakeholders in academic practices, and much besides. Does Cultural Anthropology face a lot of issues down the road? Absolutely. Is complete and total failure on the menu? Yes. But I reckon that in ten years when I sit down to reblog this post, we will look back on this debate and say: The people who did the right thing and took a leap of faith fared far better than the ones who clung to a broken solution. Cultural Anthropology acted like Netflix, while American Anthropologist acted like Blockbuster. Except, of course, no one will remember what Blockbuster was.

A distinction without a difference

The discussion of the new Elsevier policies about sharing and open access has continued at a brisk pace, as anyone following the lists, blogs and Twitter feeds will know.  On one of the most active lists, Elsevier officials have been regular contributors, trying to calm fears and offering rationales, often specious, for their new policy.  If one of the stated reasons for the change was to make the policy simpler, the sheer number of these “clarifying” statements indicates that it is already a dismal failure.

As I read one of the most recent messages from Dr. Alicia Wise of Elsevier, one key aspect of the new policy documents finally sank in for me, and when I fully realized what Elsevier was doing, and what they clearly thought would be a welcome concession to the academics who create the content from which they make billions, my jaw dropped in amazement.

It appears that Elsevier is making a distinction between an author’s personal website or blog and the repository at the institution where that author works. Authors are, I think, able to post final manuscripts to the former for public access, but posting to the latter must be restricted to internal users only for the duration of the newly imposed embargo periods. In the four-column chart that was included in their original announcement, this disparate treatment of repositories and other sites is illustrated in the “After Acceptance” column, where it says that “author manuscripts can be shared… [o]n personal websites or blogs,” but that sharing must be done “privately” on institutional repositories. I think I missed this at first because the chart is so difficult to understand; it must be read from left to right and understood as cumulative, since by themselves the columns are incomplete and confusing.  But, in their publicity campaign around these new rules, Elsevier is placing a lot of weight on this distinction.

In a way, I guess this situation is a little better than what I thought when I first saw the policy. But really, I think I must have missed the distinction at first because it was so improbable that Elsevier would really try to treat individual websites and IRs differently. Now that I fully understand that intention, it provides clear evidence of just how out of touch with the real conditions of academic work Elsevier has become.

Questions abound. Many scientists, for example, maintain lab websites, and their personal profiles are often subordinate to those sites. Articles are most often linked, in these situations, from the main lab website.  Is this a personal website? Given the distinction Elsevier makes, I think it must be, but it is indicative of the fact that the real world does not conform to Elsevier’s attempt to make a simple distinction between “the Internet we think is OK” and “the Internet we are still afraid of.”

By the way, since the new policy allows authors to replace pre-prints on arXiv and RePEc — those two are specifically mentioned — with final author manuscripts, it is even clearer that this new policy is a direct attack on repositories, as the Chronicle of Higher Education perceives in this article.  Elsevier seems to want to broaden its ongoing attack on repositories, shifting from a focus on just those campuses that have an open access policy to inhibiting green self-archiving on all university campuses.  But they are doing so using a distinction that ultimately makes no sense.

That distinction gets really messy when we try to apply it to the actual conditions of campus IT, something Elsevier apparently knows little about and did not consider as they wrote the new policy documents.  I am reminded that, in a conversation unrelated to the Elsevier policy change, a librarian told me recently that her campus Counsel’s Office had told her that she should treat the repository as an extension of faculty members’ personal sites.  Even before it was enshrined by Elsevier, this was clearly a distinction without a difference.

For one thing, when we consider how users access these copies of final authors’ manuscripts, the line between a personal website and a repository vanishes entirely. In both cases the manuscript would reside on the same servers, or, at least, in the same “cloud.” And our analytics tell us that most people find our repositories through an Internet search engine; they do not go through the “front door” of repository software. The result is that a manuscript will be found just as easily, in the same manner and by the same potential users, if it is on a personal website or in an institutional repository. A Google or Google Scholar search will still find the free copy, so trying to wall off institutional repositories is a truly foolish and futile move.

For many of our campuses, this effort becomes even more problematic as we adopt software that helps faculty members create and populate standardized web profiles. With this software – VIVO and Elements are examples that are becoming quite common — the open access copies that are presented on a faculty author’s individual profile page actually “reside” in the repository. Elsevier apparently views these two “places” – the repository and the faculty web site – as if they really were different rooms in a building, and they could control access to one while making the other open to the public. But that is simply not how the Internet works. After 30 years of experience with hypertext, and with all the money at their disposal, one would think that Elsevier should have gained a better grasp on the technological conditions that prevail on the campuses where the content they publish is created and disseminated. But this policy seems written to facilitate feel-good press releases while still keeping the affordances of the Internet at bay, rather than to provide practical guidelines or address any of the actual needs of researchers.

From control to contempt

I hope it was clear, when I wrote about the press release from Elsevier addressing their new approach to authors’ rights and self-archiving, that I believe the fundamental issue is control.  In a comment to my original post, Mark Seeley, who is Elsevier’s General Counsel, objected to the language I used about control.  Nevertheless, the point he made, about how publishers want people to access “their content,” but in a way that “ensures that their business has continuity,” actually reinforced my sense that the language I used was right on the mark.

My colleague Paolo Mangiafico has suggested that what these new policies are really about is capturing the ecosystem for scholarly sharing under Elsevier’s control.  As Paolo points out, these new policies, which impose long embargo periods on do-it-yourself sharing by authors but offer limited opportunities to share articles when a link or API provided by Elsevier is used, should be seen alongside the company’s purchase of Mendeley; both provide Elsevier an opportunity to capture data about how works are used and re-used, and both reflect an effort to grab the reins of scholarly sharing to ensure that it is more difficult to share outside of Elsevier’s walled garden than it is inside that enclosure.

I deliberately quote Mr. Seeley’s phrase about “their content” because it is characteristic of how publishers seem to think about what they publish.  I believe it may even be a nearly unconscious gesture of denial of the evident fact that academic publishers rely on others — faculty authors, editors and reviewers — to do most of the work, while the publisher collects all of the profit and fights the authors for subsequent control of the works those authors have created. That denial must be resisted, however, because it is in that gesture that the desire for control becomes outright disrespect for the authors that publishing is supposed to serve.

Nowhere is this disrespect more evident than in publisher claims that the works they publish are “work made for hire,” which means, in legal terms, that the publisher IS the author.  The faculty member who puts pen to paper is completely erased from the transaction.  To be clear, as far as I know Elsevier is not making such a claim with its new policies.  But these work made for hire assertions are growing in academic publishing.

Three years ago I wrote about an author agreement from Oxford University Press that claimed work made for hire over book chapters; that agreement is still in use as far as I am aware.  At the time, I pointed out two reasons why I thought OUP might want to make that claim.  First, if something is a work made for hire, the provision in U.S. copyright law that allows an author or her heirs to terminate any license or transfer after 35 years simply does not apply.  More significantly, an open access license, such as is created by many university policies, probably is not effective if the work is considered made for hire.  This should be pretty obvious, since our law employs the legal fiction that says the employer, not the actual writer, is the author from the very moment of creation in work made for hire situations.  So we should read these claims, when we find them in author agreements, as pretty direct assaults on an author’s ability to comply with an open access policy, no matter how much she may want to.

As disturbing as the Oxford agreement is, however, it should be said that it makes some legal sense.  When a work is created by an independent contractor (and it is not clear to me whether an academic author should be defined that way), there are only certain enumerated types of works that can even be considered work made for hire; one of them is “contribution[s] to a collective work.”  So a chapter in an edited book is at least plausible as a work made for hire, although the other requirement — an explicit agreement, which some courts have said must predate the creation of the work — may still not be met.  In any case, the situation is much worse with the publication agreement from the American Society of Mechanical Engineers (ASME), which was recently brought to my attention.

ASME takes as its motto the phrase “Setting the Standard,” and with this publication agreement they may well set the standard for contemptuous maltreatment of their authors, many of whom are undoubtedly also members of the society.  A couple of points should be noted here.  First, the contract does claim that the works in question were prepared as work made for hire.  It attempts to “back date” this claim by beginning with an “acknowledgement” that the paper was “specially ordered and commissioned as a work made for hire and, accordingly, ASME is the author of the Paper.”  This acknowledgement is almost certainly untrue in many, if not most, cases, especially since it appears to apply even to conference presentations, which are most certainly not “specially commissioned.”  The legal fiction behind work made for hire has been pushed into the realm of pure fantasy here.

What’s more, later in the agreement the “author” agrees to waive all moral rights, which means that they surrender the right to be attributed as the author of the paper and to protect its integrity.  Basically, an author who is foolish enough to sign this agreement has no relationship at all to the work once the agreement is in place.  They are given back a very limited set of permissions to use the work internally within their organization and to create some, but not all, forms of derivative works from it (they cannot produce or allow a translation, for example).  Apparently ASME has recently started to disallow some students who publish with them from using the published paper as part of a dissertation, since most dissertations are now online and ASME does not permit the writer to deposit the article, even in such revised form, in an open repository.

To me, this agreement is the epitome of disrespect for scholarly authors.  Your job, authors are told, is not to spread knowledge, not to teach, not to be part of a wider scholarly conversation.  It is to produce content for us, which we will own and you will have nothing to say about.  You are, as nearly as possible, just “chopped liver.”  It is mind-boggling to me that any self-respecting author would sign this blatant slap in their own face, and that a member-based organization could get away with demanding it.  The best explanation I can think of is that most people do not read the agreements they sign.  But authors — they are authors, darn it, in spite of the work for hire fiction — deserve more respect from publishers who rely on them for content (free content, in fact; the ASME agreement is explicit that writers are paid nothing and are responsible for their own expenses related to the paper).  Indeed, authors should have more respect for themselves, and for the traditions of academic freedom, than to agree to this outlandish publication contract.

Learning how fair use works

How many cases about fair use have been decided in the U.S. since the doctrine was first applied by Justice Story in 1841?  Take a minute to count; I’ll wait.

If you came up with at least 170, the Copyright Office agrees with you.  Last week they announced a fascinating new tool on the CO website, an index of fair use cases.  That index contains summaries of approximately 170 cases, along with a search tool.  The introductory message, however, acknowledges that the index is not complete, so those of you who thought there were more than 170 cases are almost certainly correct.

This index is potentially a very useful tool, and it also raises some interesting questions.  I want to consider the questions first, then discuss how the new fair use index might be used by someone who wanted to learn more about how fair use works (which, by the way, is one of the avowed purposes behind its development).

Coverage is the obvious first question, and, as I said, the CO acknowledges that it is incomplete.  Specifically, it seems heavily weighted toward more recent cases.  There are only 11 cases listed in the index dating to before 1978, and two older cases (1940 and 1968) that are presented in my law school casebook on copyright as important fair use precedents are not included.  So it looks like there are some pretty significant gaps, which one hopes the Copyright Office will address as it continues to develop this tool.

By the way, the issue of continuing development also brings up the question of why the C.O. thought this was a worthwhile investment. It looks useful, to be sure, but there are other sources for similar data, so it is a bit curious that the C.O. chose this among all its potential priorities.

To return to the issue of coverage, it is always important to ask which specific cases were chosen and how they are characterized.  Of the 170 cases, there are 78 for which the result is listed as “Fair use not found,” and 64 in which the C.O. says that fair use was found.  The remaining 29 are listed as “Preliminary ruling, mixed result or remand.”  This last category is rather unhelpful.  For example, the Authors Guild v. HathiTrust case is listed this way, even though it was a strong affirmation of fair use and the remand involved a fairly unimportant issue of standing.  Even more surprising is the fact that this “mixed result” tag is applied to Campbell v. Acuff-Rose Music, the “Oh, Pretty Woman” case from the Supreme Court that is at the heart of modern fair use jurisprudence.  Again, this was a clear fair use win; the case was remanded only because that is what the Supreme Court usually does when it reverses a Court of Appeals’ decision.  So the representation of the holdings is technically accurate, it seems, but not as helpful as it might be in actually focusing on the fair use aspect; while some “mixed result” cases genuinely were that — fair use was found on one issue but not on another — many of the remanded cases actually did involve a clear yea or nay about fair use, and it would be more helpful to categorize them that way.

A particularly useful feature of this index is the ability to limit the listings by jurisdiction (the Appellate Circuits) and by topic.  For example, limited to just cases out of the Fourth Circuit, where I reside, I find that the Court has ruled on seven fair use cases and upheld fair use in six of them.  The seventh was one of those genuinely mixed results, in which one challenged use was held to be fair and another was not.

If we limit the subject area of the cases to those labeled “Education/Scholarship/Research,” fair use seems to fare better than it does overall.  In that category of 42 total cases there are 18 findings in favor of fair use and 16 rejections.  Of the remaining 8 mixed results, at least two of them — the HathiTrust Case and the GSU case — should be seen as affirmations of fair use, even if the parameters of that use are still unsettled in GSU.  So the impression many of us have that educational and scholarly uses are a bit more favored in the fair use analysis than other types of cases seems to be confirmed.

Things get more interesting when we look just at the Supreme Court in this index, and the issue of how cases are chosen is again highlighted.  The index shows four fair use cases, with one holding in favor of fair use (Sony v. Universal City Studios), one mixed result (Campbell, as discussed above), and two rejections of fair use (Harper v. Nation and Stewart v. Abend).  This last case, Stewart v. Abend, is actually almost never treated as a fair use case; while fair use was dismissed as a potential defense in the case, the real issue involved assignments of copyright and who could exercise the renewal right that existed at that time.  And this case was remanded, just as Campbell was.  So it is odd that Campbell, with its central finding in favor of fair use, is shown as a mixed result, while Stewart v. Abend, where fair use was tangential and there was also a remand, is tagged as a rejection of fair use.  This suggests at least an unconscious bias against fair use findings.

A different listing of Supreme Court fair use cases, on the IP Watchdog site, includes several additional cases — nine, in all — but does not list Stewart v. Abend as one of them. Several of the cases included by IP Watchdog do not seem to me to really focus on fair use, so I am not saying that the C.O. has under-reported the cases.  But the very different lists do suggest that it is a surprisingly subjective undertaking just to identify the cases that should be included in a fair use index.

Finally, the analysis provided in the C.O.’s case summaries needs to be considered carefully.  To take one example, for the recent case of Kienitz v. Sconnie Nation, about which I wrote earlier this year, the short note about the holding ignores the thing that may be most significant about the case — the reluctance of Judge Frank Easterbrook to apply a “transformation” analysis to the fair use question (HT to my friend and colleague David Hansen for pointing this out).  Again, this is not necessarily a problem, and the case summary of Kienitz at the Stanford Copyright & Fair Use site has a similar synopsis, but it is a reminder that these projects are always created by individuals with specific perspectives, viewpoints and limitations.

Even with all these caveats, I think the Copyright Office has created a useful tool, which can be used by those interested to learn a lot about how fair use is applied, especially by looking at the different categories.  The Stanford site, linked above, and especially its own, much shorter list of cases, might usefully be consulted alongside the C.O. index.  The Stanford descriptions are very tightly focused on the fair use issue, so reading them in conjunction with the C.O. summaries, with their attention to procedural matters that sometimes obscure the fair use holding, might produce a more balanced approach.

In any case, this new tool from the Copyright Office, and some of the tools that predate it, remind us that the best way to understand fair use, and to become comfortable with it, is to look closely at the cases, both in the aggregate and individually.  This C.O. database offers a statistical perspective, as well as the ability to focus on parody, or music, or format-shifting, while the Stanford summaries emphasize in a few words the core of the fair use analysis.  Both point the interested reader to full opinions, where the analysis can be understood in the context of all the facts.  Combined in this way, these resources offer a terrific opportunity for librarians, authors, and others to dig deeply into the nuances of fair use.

Stepping back from sharing

The announcement from Elsevier about its new policies regarding author rights was a masterpiece of doublespeak, proclaiming that the company was “unleashing the power of sharing” while in fact tying up sharing in as many leashes as they could.  This is a retreat from open access, and it needs to be called out for what it is.

For context, since 2004 Elsevier has allowed authors to self-archive the final accepted manuscripts of their articles in an institutional repository without delay.  In 2012 they added a foolish and forgettable attempt to punish institutions that adopted an open access policy by purporting to revoke self-archiving rights from authors at such institutions.  This was a vain effort to undermine OA policies; clearly Elsevier was hoping that their sanctions would discourage adoption.  This did not prove to be the case.  Faculty authors continued to vote for green open access as the default policy for scholarship.  In just a week at the end of last month the University of North Carolina, Chapel Hill, Penn State, and Dartmouth all adopted such policies.

Attempting to catch up to reality, Elsevier announced last week that it was doing away with its punitive restriction that applied only to authors whose institutions had the temerity to support open access. They now call that policy “complex” — it was really just ambiguous and unenforceable — and assert that they are “simplifying” matters for Elsevier authors.  In reality they are simply punishing any authors who are foolish enough to publish under these terms.

Two major features of this retreat from openness need to be highlighted.  First, it imposes an embargo of at least one year on all self-archiving of final authors’ manuscripts, and those embargoes can be as long as four years.  Second, when the time finally does roll around when an author can make her own work available through an institutional repository, Elsevier now dictates how that access is to be controlled, mandating the most restrictive form of Creative Commons license, CC-BY-NC-ND, for all green open access.

These embargoes are the principal feature of this new policy, and they are both complicated and draconian.  Far from making life simpler for authors, they now must navigate through several web pages to finally find the list of different embargo periods.  The list itself is 50 pages long, since each journal has its own embargo, but an effort to greatly extend the default expectation is obvious.  Many U.S. and European journals have embargoes of 24, 36 and even 48 months.  There are lots of 12-month embargoes, and one suspects that that delay is imposed on those journals that are deposited in PubMed Central, for which 12 months is the maximum embargo permitted.  Now that maximum embargo is also being imposed on individual authors.  For many others an even longer embargo, which is entirely unsupported by any evidence that it is needed to maintain journal viability, is now the rule.  And there is a handful of journals, all from Latin America, Africa, and the Middle East, as far as I can see, where no embargo is imposed; I wonder if that is the result of country-specific rules or simply a cynical calculation of the actual frequency of self-archiving from those journals.

The other effort to micromanage self-archiving in this new policy is the requirement that all authors who persevere and wish, after the embargo period, to deposit their final manuscript in a repository, must apply a non-commercial and no derivative works limitation on the license for each article.  This, of course, further limits the usefulness of these articles for real sharing and scholarly advancement.  It is one more way in which the new policy is exactly the reverse of what Elsevier calls it; it is a retreat from sharing and an effort to hamstring the movement toward more open scholarship.

The rapid growth of open access policies at U.S. institutions and around the world suggests that more and more scholarly authors want to make their work as accessible as possible.  Elsevier is pushing hard in the opposite direction, trying to delay and restrict scholarly sharing as much as they can.  It seems clear that they are hoping to control the terms of such sharing, in order both to restrict its putative impact on their business model and ultimately to turn it to their profit, if possible.  This latter goal may be a bigger threat to open access than the details of embargoes and licenses are.  In any case, it is time, I believe, to look again at the boycott of Elsevier that was undertaken by many scholarly authors a few years ago; with this new salvo fired against the values of open scholarship, it is harder than ever to imagine a responsible author deciding to publish with Elsevier.

Copyright follies

The joy of being a copyright specialist is the amazing array of cool, beautiful, and profound things that make up the raw material of what we do.  It is a privilege to be granted even a tiny window into the creativity of the many people we get to work with.  And even the cases we only read about share in this astonishing diversity.

But let’s be honest.  There is also a lot of nonsense in the copyright world.  The idea of “owning” creative expression just makes some folks go a little nuts, and some pretty absurd claims get made about copyright (monkey selfies, anyone?).  So here is a quick review of some recent bizarre cases, although by the end of it we will have the opportunity to review some important principles about copyright law.

Perhaps a good place to begin is with the claim by descendants of Nazi Propaganda Minister Joseph Goebbels that they are entitled to licensing fees for quotes from Goebbels’ diaries that are used in a new biography of him.  One of the strangest things about this case is that it may well be a valid claim, although there is some dispute over who actually is the copyright holder, since most assets of the Nazi leaders were seized by the Allies after the war.  But the very fact that it is being raised suggests some interesting questions.  How much money would make it worthwhile to publicly identify oneself as a descendant of one of the world’s most vilified war criminals, an architect of the “final solution?”  And will there be a fair use/fair dealing defense raised, as the blog Techdirt has suggested?  It certainly seems like we should avoid a situation where a war criminal’s family would be in a position to censor a biography of him, which would be one possibility if they were found to hold copyright.  Random House seems mostly to assert the “no money to a war criminal” defense against the claim, but it is worth remembering that copyright is not only about money, it is about control.

Just before publishing this post I saw an excellent analysis of the issue of royalties for ex-Nazis or their descendants here, in Inside Higher Ed.

Another development this past week was the filing by John Deere in regard to a proposed exception to DMCA anti-circumvention rules, in which the company claimed that the software in a tractor is only licensed to a consumer, not owned by them.  It was inevitable that such a claim would be made eventually, and I predicted it somewhat eerily in this post from last year (substituting John Deere for Ford).  John Deere wants to sell you a tractor, and they are fine with you using it as you wish, unless you decide to modify the software.  At that point they assert that you, the purchaser, only have an implied license to use the software and that anti-circumvention rules would prevent modification, and should continue to do so.  What makes this claim more dangerous than absurd is that it raises the idea of new limitations on what we mean by ownership.  We thought that the doctrine of First Sale was sufficient to protect the traditional idea of ownership in regard to copyrighted material, but the DMCA, and the desire of some companies to suppress competition, have changed that.  What new and un-imagined restrictions on my use of the tractor in my driveway might be down the line from John Deere?  We are getting ever closer to the point where our courts will need to develop clear guidelines about what it means to own a machine that incorporates copyrighted material.  In the meantime, I would think twice before I “bought” a John Deere tractor; I like to know what I am getting for my money, and John Deere seems to think they can upend my reasonable expectations whenever it suits them.

Most readers are likely already familiar with the next of the follies I want to discuss: the claim made on behalf of the bystander who filmed the police shooting of Walter Scott in South Carolina that he, the bystander, is entitled to a licensing fee — apparently as much as $10,000 — every time the media replays the video.  There are two especially troubling aspects of this claim.  The first is the absurd misunderstanding that leads to a statement that the fair use “period” has “expired” for this video.  There is not a time limit on fair use, of course.  It seems to me that a few people are confusing fair use, a statutory boundary on copyright that lasts as long as the rights do, with the so-called “hot news” doctrine.  The latter was the creation of courts, is of uncertain application, and was largely preempted by the 1976 copyright act.  In fact, the hot news doctrine was a limit on the exclusivity that a news organization could have over its report of newsworthy events, so the doctrine acted in the reverse of how it is being asserted, under the wrong name, in the Scott shooting video case.  Fair use continues to exist in spite of the lapse of time, and only a very poorly-advised news organization would accept this idiotic argument.

Which brings me to the most troubling aspect of this case, the apparent fact that the New York Times agrees that fair use can expire.  According to the Forbes report linked above, the NYT claimed that “copyright experts” agreed that this alleged fair use period has passed.  They quote a lawyer for the Copyright Clearance Center (hardly a disinterested party) whose argument, while using temporal language, can only sensibly refer to the specific conditions surrounding a particular use (i.e. whether the use is for the purpose of news reporting or not).  I wonder if the CCC can cite any case law for this proposition that fair use can expire?  If not, then they and the NY Times are just spreading FUD which, at least for the Times, is unexpected and reprehensible.

Finally, I want to briefly comment on this story about a former researcher who is suing his former post-doctoral adviser at Brown University for having published an article that they apparently wrote together without giving authorship credit to the former post-doc.  There are complicated details to the case, and I would not like to offer an opinion about who is right or wrong in the overall dispute.  But the controversy raises one issue that I do want to comment on: the situation between joint authors.  So much of the scholarship produced today is written by multiple authors — I recently saw an article with 102 listed authors — that it is increasingly important to understand a couple of points.

First, to qualify as a joint author in the copyright sense, each author must contribute protectable expression to the preparation of the overall work.  That means that some, at least, of those 102 authors are not co-owners of the copyright because their contributions did not involve the creation of protectable expression.  We don’t worry too much about this distinction in the academic world, but it could be an issue if a dispute over publication arises, as it has in the Brown University case described above.

Second, it is important to understand that each co-owner of the copyright, each joint author, is entitled to exercise the rights in the copyright bundle independently.  That means that one author can conceivably authorize publication without the permission of the other authors, as seems to have happened in this situation.  On the copyright issue, at least, it seems clear that the post-doc cannot object to publication simply because the article he worked on with others was published by one of them without his knowledge or consent.  The author who published would be obligated to account to all co-authors for any profits from the publication, but it would not be infringement to simply publish the article without consent from the others.

This precise situation, also involving a dispute about how authors were listed, was considered by the Seventh Circuit Court of Appeals in 1987 in a case called Weinstein v. University of Illinois, and the panel of judges, two of whom were themselves well-known academics, came to the same conclusion — no infringement when one co-owner of the copyright publishes without permission from the others.  So whatever the other details are in the Brown case, a copyright claim against the former adviser from one of his co-authors is unlikely to be successful.  This is why it is so important (especially in cases like this involving commercial sponsorship) for all of the authors to agree together about the use and publication of any intellectual property that arises from the project.  That sort of agreement, worked out calmly and in advance of any conflict, is still the best way to avoid being involved in any copyright follies.

Steps toward a new GSU ruling

It looks more and more like we will get a new ruling from the trial court in the Georgia State case about what is or is not fair use for digital course readings.  The case, of course, was reversed and remanded to the trial court after the publishers appealed the initial decision to the 11th Circuit, with instructions to produce a new opinion consistent with the Court of Appeals ruling.  The publisher plaintiffs then asked the trial court to reopen the record in the case and apply the putative new fair use analysis to a different, more recent, set of readings employed by the GSU faculty.  The University opposed this motion, arguing that what would amount to a whole new trial was not necessary.

Last week, District Court Judge Orinda Evans dismissed the motion to reopen the record and issued an order about briefing the court on what a new analysis of fair use for the original excerpts considered in the trial should look like.  Judge Evans wrote that “It does not make sense at this juncture to spend months, probably longer, on what considerations might govern if Plaintiffs prove they are entitled to injunctive relief by virtue of the claimed 2009 infringements.”  The motion is dismissed without prejudice, meaning that the plaintiffs can renew it at a more appropriate time, although I must admit that I do not see what that would mean if the case is to go forward on the original set of readings.

It appears that once again the publishers have failed in an effort to broaden the scope of the case beyond the item-by-item fair use analysis that has already been done and to possibly reintroduce some of the broad principles that they really want, which have so far been rejected at every stage.  Now Judge Evans has explicitly told them, in her scheduling order, that what is required is “consideration and reevaluation of each of the individual claims” in order to redetermine “in each instance… whether defendants’ use was a fair use under 17 U.S.C. section 107.”  Her schedule for the briefs is tight, with briefing now scheduled to conclude just two and a half months from now.  Presumably we would still have a long wait while Judge Evans applies revised reasoning about fair use to each of the individual excerpts, but it looks a bit more like that is what is going to happen.

A new home for copyright?

The idea that the Copyright Office should move out of the Library of Congress was first raised some years ago by Bruce Lehman, who was, at the time, the head of the Patent and Trademark Office.  The idea seemed to be that the Copyright Office should join the PTO as an agency within the Commerce Department.  That idea did not seem to be very well-received by many, and I had not heard any discussion of it for a while.  But apparently the possibility of moving the CO is still kicking around, and last month current Register of Copyrights Maria Pallante sent a letter about the topic to Rep. John Conyers, the Ranking Member of the House Committee on the Judiciary.  Her letter was requested after a hearing about the functions and resources of the CO held back in February.

Pallante’s letter makes interesting reading, especially if one is interested in the inside politics of Executive Branch appointments, separation of powers, and the like.  The bottom line, however, is that Register Pallante thinks that the Copyright Office should be separated from the Library of Congress, should not move into the Commerce Department, and should instead become an independent agency with its leader directly appointed by the President and confirmed by the Senate.  There has been some discussion about this letter and the ramifications of the debate among my colleagues, and I want to consider two issues that I think are of interest to a wider audience, while admitting that I am shamefully cribbing ideas from those colleagues.

The first issue is why the Copyright Office should leave the Library of Congress in the first place.  Register Pallante offers several reasons in her letter.  One is the claim that the Library of Congress is in a Constitutionally awkward position, since it is apparently an Executive branch agency (the Librarian is appointed by the President), but its functions, including advising Congress about copyright law, are at least partially legislative.  While I see the issue, it is not clear to me why it is more pressing for the CO than it is for other offices within the Library, including, for example, the Congressional Research Service.  Nor do I fully understand why making the CO an independent agency, with its head still appointed by the President, would solve this dilemma.  There is certainly an issue of prestige here, but I am not convinced that it is enough to justify a new Federal agency.

The other reason Pallante offers for moving out of the Library of Congress is the “operational challenges,” including staffing and pay.  All bureaucracies are difficult, of course, and rumor has it the LoC is more difficult than most these days.  But, again, it is not obvious that a new agency would necessarily be better.  Everything would depend on the personnel and the budget.  More troubling, however, are the footnotes in Pallante’s letter that refer to the “conflict of interest” between the CO and the Library, which apparently was mentioned by some witnesses during those February hearings.

Is there a conflict of interest between a library and the office that administers our national copyright policy?  If there is, what does that tell us?  To my mind, it suggests that our copyright policy has gotten out of line.  We may be developing an approach that sees copyright as a trade regulation that protects specific industries, not as a policy decision about how best to ensure the continuous creation of new works of knowledge and culture.

This concern was clearly raised during the hearings, where Rep. Zoe Lofgren challenged the assumption that the Copyright Office was no longer a good fit with the Library of Congress by suggesting that over the years, the librarians have been better at understanding copyright than some staff at the CO.  To her credit, in her letter Pallante does not endorse the idea of moving the CO to Commerce, where the symbolism of copyright as a sort of trade regulation would be even stronger.  But I would argue that our predecessors knew what they were doing when they centralized copyright services inside the Library of Congress.  Libraries epitomize the social benefits that copyright is supposed to support, and the “optics” of moving the Office, at least, would inevitably undermine that long-standing commitment to the public good.

In fact, if the CO was located in the Commerce Department, as my colleague Brandon Butler points out, it would have to consider all aspects of commerce related to copyright, including those industries that depend on fair use and other copyright exceptions.  The wrong-headed narrative about the competition between the content industry and the technology sector, with the former held up as copyright dependents and the latter as modern-day pirates, would be harder to sustain.  The unfortunate possibility exists that the CO’s desire for independence represents a desire to become even less balanced in its approach than it has been in the past, focusing entirely on its perceived role as enforcer of rules that protect Hollywood from the threatening innovations of Silicon Valley.  An office in the Commerce Department would be less able to take sides.

In terms of rationale and purpose, the Library of Congress is a good fit for the Copyright Office, even if the CO does not, under its current leadership, recognize this.  If a new home is really necessary, Butler makes the wonderful suggestion that the Department of Education should be considered.  The DoE, more than Commerce and maybe even more than the Library of Congress, could refocus copyright policy on the reason we have these laws in the first place — to promote the progress of knowledge and science.  If we lose track of that purpose, it becomes an open question whether we need the law or the CO at all.
