All posts by Kevin Smith, J.D.

Defining derivatives

It is frequently interesting, and sometimes appalling, to see how a court that does not usually deal with copyright issues reacts when confronted with one.  Judge Amy Berman Jackson of the U.S. District Court for the District of Columbia is certainly not a complete novice regarding copyright, but the issue she confronted in Drauglis v. Kappa Map Group must certainly have been a new one for her.  It is a case involving the scope and interpretation of a Creative Commons license, and Judge Jackson deals quite well with it, in my opinion, in a decision issued last week.  But she is drawing lines in this case that I am quite sure will be discussed and readjusted over the next few years.

The facts in the case pose a situation that anyone familiar with CC licensing could probably have seen coming.  Art Drauglis, the plaintiff, took a quite lovely photograph of a scene in rural Maryland that he calls “Swain’s Lock.”  There is no indication that Drauglis is a professional photographer, and he uploaded the image to a Flickr account he shares with his wife, using a CC Attribution Share Alike license.  Several years later he was upset to find that a publisher, Kappa Map Group, had used his photograph on the front cover of their Montgomery County Street Atlas, which, of course, they offered for sale.  Drauglis sued, alleging that the commercial use infringed his copyright, that the share alike provision of his license had been violated, and that he was not given appropriate attribution.  The court dismissed all of these claims on a motion for summary judgment, in the process making some important rulings about how we should understand Creative Commons licenses.

As an initial matter, Judge Jackson cites several authorities for the general proposition that a copyright holder who licenses his or her work can only sue for copyright infringement if the use is outside the scope of that license.  The issue of commercial reuse is an easy matter for the court, since the license Drauglis chose allows such commercial reuse.  He could have chosen a non-commercial (NC) term for this photo, but did not.  So the fact that Kappa Maps took the photo and used it as the cover illustration for a commercial product does not, by itself, support a claim of copyright infringement.

The lesson from this part of the story is clear: rights holders need to think carefully about the terms of the licenses they choose.  Creative Commons offers a very flexible set of licenses, and those of us who use them can adjust them to suit our particular needs.  But we need to think through what those needs are and select the appropriate terms.

If this was an easy call, three other points raised in this decision are more complicated, and offer some insights into how we might think about some of the aspects of Creative Commons licenses.

First, Drauglis asserted that the terms of his Share Alike provision had been violated because the Atlas was not offered for free and under similar conditions.  To decide this issue, Judge Jackson notes that a CC SA provision applies only to derivative works, and she then examines the definition of a derivative in section 101 of the copyright law.  Noting that a derivative is defined as a work that is “based upon” a preexisting work and that modifications appear to be required, the judge determines that simply reprinting the photo as the cover illustration of a map book does not create a derivative work.  Instead, she considers the atlas to be a compilation, which is treated quite differently in our copyright law.

In general I think this aspect of the case raises a distinction that many users of CC licenses and CC licensed material do not think about — the fact that a share alike provision applies only to derivative works, and not all uses would qualify as derivatives.  The reason I think this will be a matter for future debate is because the line is not very clear.  If I use a preexisting image as an illustration for a web page I create, at what point does that use result in a derivative work?  If the image is reproduced in its entirety and not changed, but merely surrounded by the other parts of the site, it sounds from this decision like the website might be a compilation of sorts, and the SA provision would not come into play.

One thing that is clear, and this is my second point, is that a Share Alike provision does not require that the second work be made available for free, as long as a derivative is not created.  The compilation atlas containing Drauglis’ photo was sold, of course, and the court said that was OK because there was no non-commercial restriction on the license and the commercial work was not a derivative (which would activate the share alike restriction).  This was an important issue for me when I was deciding which CC license to use for my book that was published last year.  I considered a CC-BY-SA license in the mistaken belief that this would prevent commercial exploitation of the book without my explicit consent, something I wanted to prevent in order to protect the publisher.  My colleague Paolo Mangiafico corrected my thinking on this point, pointing out that someone could, conceivably, download the book and sell unmodified copies without violating the SA provision.  In this, Paolo correctly anticipated the district court, and his argument convinced me to use an attribution/non-commercial license.  Thus non-commercial derivatives are allowed, which was important to me, but all commercial uses other than by the ACRL, who published the book, would require my additional permission.

The third aspect of the case that I think will generate ongoing discussion is the small controversy about appropriate attribution.  This is an issue that I hear about frequently from users of CC-licensed material, who want to do the right thing.  In the case against Kappa Map Group, Drauglis argued that the credit given for his photo, on the outside of the back cover of the Atlas, was inadequate and conflicted with the general statement claiming copyright in the work as a whole that was inside of the front cover.  To decide this issue the judge looked at how copyright notices for the individual maps were handled and what the general standards for appropriate attribution were, and concluded that Kappa had behaved correctly.  This aspect of the case offers a clear lesson for other users of CC licenses.  It is long established that works can have multiple copyright statements, and an assertion of copyright in an overall work does not negate attribution and assertion of rights for incorporated material as long as that attribution is given in a manner appropriate to the medium and the work.  So I think the message for users of CC-licensed materials from this part of the decision is to relax, give attribution in the best way appropriate to the new work, and not worry too much about the fact that a new work may have multiple attributions and copyright statements.

There is additional analysis and discussion of this case over at TechDirt.  To me the bottom line is that Kappa Map Group behaved responsibly and Mr. Drauglis, suffering from licensor’s remorse, tried to use the courts to remedy his own mistaken selection of a license.  Nevertheless, whether this case is finished or not, I think we will hear more about what the boundaries of a derivative work really are.


Conspiracy theories, copyright term, and the TPP

I try to resist the urge to find conspiracies behind political developments; I tell myself that politicians and bureaucrats are just too unorganized and divided to really conspire about much of anything.  But sometimes that conviction gets tested.

Consider this: under our current, very confusing, set of rules for copyright term, at least some published works will begin entering the public domain in 2019.  No published works have been rising into the public domain for a long time in the U.S., but a work published in the U.S. in 1923, which had its term extended to 95 years from publication by the 1976 Copyright Act, would enter the public domain on Jan. 1, 2019, since 1923 + 95 equals 2018, and for administrative simplicity, works become PD on the first day of the year after their term expires (hence January 1 is celebrated as Public Domain Day).  Many works are protected for even longer, but this is the earliest we could hope to see published works entering the U.S. public domain.  And because this date is drawing near, it probably will not surprise many, including those who have always believed that protecting Mickey Mouse lies behind the U.S. term of protection, that Congress is once again beginning the process of formulating a new copyright act.  Do we really have any doubts that an extended term will be high on the wish list of the entertainment industry lobbyists?
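The arithmetic described above can be sketched in a few lines of Python.  This is purely an illustration of the calculation, not legal advice; it assumes the flat 95-years-from-publication term that applies to the category of works discussed here, and ignores the many other term rules that can apply.

```python
# Illustrative sketch of the 95-years-from-publication term discussed above.
# U.S. copyright terms run through the end of the calendar year, so a work
# enters the public domain on January 1 of the year AFTER its term expires.

def public_domain_year(publication_year: int, term_years: int = 95) -> int:
    """Return the year on whose January 1 the work becomes public domain."""
    expiration_year = publication_year + term_years  # e.g. 1923 + 95 = 2018
    return expiration_year + 1                       # PD on Jan. 1, 2019

print(public_domain_year(1923))  # 2019
print(public_domain_year(1924))  # 2020
```

So under this rule, each Public Domain Day from 2019 onward would release one more year's worth of published works, unless the term is extended again.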

But — and here is the conspiracy part — in case term extension can’t get through Congress, or cannot get through fast enough (the last copyright revision took over 20 years), there is an alternative route to prolonged protection for valuable icons from the mid-twentieth century — the Trans-Pacific Partnership agreement.

The TPP, as many readers will know, is a massive free trade agreement being negotiated in secret (more or less) between the U.S. and eleven other countries.  Included in the TPP is a chapter on intellectual property, and leaks of the text of that chapter have shown us that negotiators are considering binding the 12 parties to a copyright term that might significantly exceed, possibly even double, the minimum requirement under the Berne Convention of life of the author plus 50 years.  The longest term being proposed in the TPP, and the negotiators do not seem to have agreed on this point yet, is life plus 100 years.

Back in June, Congress renewed so-called fast track authority for the President over trade agreements, so no matter what is in the final version of the TPP, Congress will not get to amend the provisions; they will have to vote either yes or no on the text as it comes from the hands of the negotiators.  Obviously this gives a great deal more influence to the lobbyists, who must work only with the very friendly folks from the U.S. Trade Representative’s office, not with the splintered and demanding Congress.  So perhaps this is a backdoor way to protect Mickey well into the 21st century.

We should note, however, that trade agreements are generally not “self-executing,” meaning that they do not become law automatically even once they are ratified.  Each nation must still make changes to its own national laws to bring them in line with the negotiated requirements.  So if a TPP with a copyright term requirement of life plus 100 years did emerge, Congress would still have to act to make that part of U.S. law.  But it would obviously be much easier to get Congress to do so if it were a requirement of a ratified trade agreement; that is a great platform on which the lobbyists can stand.

Krista Cox from the Association of Research Libraries has done a wonderful analysis of the latest leaked IP chapter from the TPP, which is worth reading by anyone interested in these issues.  For me, there are three important points to take away from Krista’s analysis.  First, the current version of the IP chapter is an improvement in several ways over what we had seen before.  Second, there is still significant cause for concern in two areas — the proposal to make the issue of technological protection measures independent of the underlying copyright rules regarding a work and the issue discussed above — the potential for a much-extended copyright term to emerge from the negotiation.  And, finally, there is cause for real disappointment, if not much surprise, at the fact that the U.S. is one of only two countries that is apparently opposing a very innocuous provision that acknowledges the importance of the public domain.

With this final point we seem to come full circle.  The U.S., for which the public domain is a Constitutionally-required aspect of our copyright law, opposes even an acknowledgement of the public domain’s importance to the public interest.  How can this be?  I am afraid it reminds us again that the folks responsible for copyright policy in our government do not much like the idea that copyright is a bargain with the public.  Rather than listening to the Constitution or to the 225 years of experience of copyright in this country, those officials turn their ears exclusively to the needs and concerns of the legacy entertainment and publishing industries.  They seem to hear only the siren song of the lobbyist, and indeed, to move freely in and out of the ranks of those lobbyists.  Perhaps that is the conspiracy I would rather not believe in, a conspiracy to cut the public, and the public interest, out of their discussions about how copyright should work.

What happens when there is no publication agreement?

Scholarly communication discussions and debates usually focus, quite obviously, on the terms of publication agreements and the licenses those agreements often give back to authors to use their own work in limited and specific ways.  This is such a common situation that it is hard to realize that it is not universal for scholarly authors.  But recently it has come to my attention that some authors actually never sign any agreement at all with their publishers, and in one situation that I will explain in a moment, that led to a dispute with the publisher about whether or not the author could place her article in an institutional repository.  The issue, broadly speaking, is when an implied license can be formed and what such licenses might permit.

In a couple of previous posts, I have discussed the idea of implied licenses: licenses that are formed without an explicit signature, usually because someone takes an action in response to a contractual offer, and the action is clear enough to manifest acceptance of that offer.  One of the most common implied licenses that we encounter underlies the transaction every time we open a web page.  Our browsers make a copy of the web page code, of course, and that copy implicates copyright.  But our courts have held that when someone makes a web page accessible, they are offering an implied license that authorizes the copying necessary to view that webpage.  No need to contact the rights holder each time you want to view the page, and no cause of action for infringement based simply on the fact that someone viewed a page and therefore copied the code, temporarily, in their browser cache.

It is important to recognize that such licenses are quite limited.  An implied license can, at best, be relied upon when doing the obvious acts that must have been anticipated by the offeror, such as viewing a web page.  An implied license would not, for example, authorize copying images from that website into a presentation or brochure; that would be well beyond the scope of a license implied by merely making the site available.  For those sorts of activities, either permission (an explicit license) or an exception in the copyright law would be needed.

So how might implied licensing help us untangle the situation where an author has submitted her work to a journal, and the journal has published it without obtaining an explicit transfer of rights or a license?  As I said, this is a reversal of the normal situation, and it caught me by surprise.  But I have heard of it now from three different authors, all publishing in small, specialized journals in the humanities or social sciences.

The way the question came to me most recently was from an author who had published in a small journal and later asked, because she had no documentation that answered the question, if she could deposit her article in an open repository.  The publisher told her that she could do so only after obtaining permission from the Copyright Clearance Center, and she came to me, through a colleague, asking how the publisher could insist on her getting permission if she had not signed a transfer document.  Could the publisher, she asked, claim that the transfer had taken place through some kind of implied contract?

The answer here is clearly no; the copyright law says explicitly, in section 204, that “A transfer of copyright ownership… is not valid unless an instrument of conveyance, or a note or memorandum of the transfer, is in writing and signed by the owner of the rights conveyed or such owner’s duly authorized agent.”  So an implied transfer of rights is impossible; all that can be conveyed implicitly is a non-exclusive license (as in the web site example).

In the case of my author with no publication agreement, she remains the copyright holder, whatever the publisher may think.  At best, she has given the publisher a non-exclusive license, by implication from her act of submitting the article, to publish and distribute it in the journal. This is not really all that unusual. I have written opinion pieces for several newspapers in the past and never signed a copyright transfer; the pressure of daily publication apparently leads newspapers to rely on this kind of implied license quite frequently.  But it is unusual in academia, and requires some unpacking.  No transfer of copyright could have occurred by implication, so the rights remain with the author, who is free to do whatever she likes with the article and to authorize others to do things as well.  The publisher probably does have an implied license for publication, but that license is non-exclusive and quite limited.

As we worked through this situation, three unanswered questions occurred to me, and I will close by offering them for consideration:

  1. Are authors always correct when they tell us they did not sign a publication agreement?  Sometimes an agreement may have been forgotten amidst all the paperwork of academic life, or the agreement might have been online, a “click-through” contract at the point of submission.  We need to probe these possibilities when confronted with the claim that no agreement was signed, but those are very delicate conversations to have.
  2. Returning for a moment to the possibility of a click-through agreement that the author could have forgotten, we might also ask whether this type of arrangement, increasingly common among academic publishers, is really valid to transfer copyright.  I am well aware that courts are becoming quite liberal in accepting online signatures and the like, but is there a limit?  Where a statute explicitly requires a signed writing for a specified effect, as Title 17 does for assignment of copyright, could an author challenge the sufficiency of a (non-negotiable) click-through agreement?  I expect that this issue will eventually come before a court (if any readers know of such cases, please add the information in the comments), and I will be very interested in that discussion.
  3. Finally, what do we make of the journal’s claim, in the situation I was asked about, that the author must purchase permission to use her own work from the Copyright Clearance Center?  If there was no transfer of rights, the journal has no right to make such a demand and the CCC has no right to sell a license.  This is one more situation where it seems that the CCC is sometimes used to sell rights that are not actually held by the putative licensors, and it renews my concern about whether, and when, we actually are getting value for the money we spend on licensing.

What is “extended” about Extended Collective Licensing?

There has been a lot of discussion recently about the new proposals sent out for comment by the Copyright Office about orphan works and mass digitization, and numerous library groups are drafting responses to the Notices of Inquiry as I write.  Part of what the CO proposes in regard to mass digitization is an “extended collective licensing” scheme, which prompts the question in my title.

Before turning to that, however, let’s look a moment at the whole picture of what the CO is suggesting here.  The proposals address the overall problem of orphan works.  Unfortunately they do not do so by taking steps to reduce the number of orphan works or to make finding rights holders easier.  Instead, they create significant new obstacles for users who want to make use of an orphan work.  If you are looking to use just one or two works for which a rights holder cannot be determined, the CO wants you to go through a poorly-defined process of making a “reasonably diligent” search AND they will insist that you register your use.  If this seems backwards, that is because it is.  The goal here seems to be to discourage use, and hence new creation, by placing the onus on the user rather than the rights holder to make themselves known.  The excuse for such lousy policy would probably be the prohibition of formalities in the Berne Convention, but other countries have adopted voluntary registries for rights holders. Our Copyright Office, however, has been blinded by staring into the brilliance of Hollywood for so long, and can only see copyright on their terms.  Hence the necessity of burdening users who, we know, are significant threats to the “creativity” of the music and movie companies.

Alongside this proposal for how to deter use of individual orphan works is a grander scheme to deter mass digitization projects, called extended collective licensing.  So what does “extended” mean in this context?  A normal collective licensing scheme means that rights holders get together and create a collective organization to administer the rights that they own.  Such organizations are usually inefficient and sometimes prone to corruption, but there is nothing inherently wrong with the idea behind them.  They could, if done well, increase efficiency for both rights holders and users (that is, for new creators).  When a collective licensing scheme is extended, however, it means that licenses are being sold for rights not held by any of the members of the collective society.  That is the point about orphan works — a collective society representing the traditional content industries would sell licenses for the use of works for which they do not, by definition, hold the rights.  They would collect licensing fees “on behalf” of the unknown owners.  According to the proposed pilot, such a collecting agency would have an obligation to look for the correct rights holders in materials for which they collect fees, but there is no indication of how they would do this and no indication that a significant success rate could be achieved.  After a certain period of time, of course, the money collected would belong to the usual suspects; they would reap where they did not sow.

So what does “extended” really mean here?  If the situation were reversed, the content industries themselves would have a perfect word for it.  They would call it theft, or piracy.  Traditional rights holders love to use analogies with real property, claiming, ad nauseam, that downloading an unlicensed movie is equivalent to driving off in someone else’s car.  Now they are proposing, through the Copyright Office, to sell other peoples’ property for their own benefit.  Wouldn’t the “extended” analogy then be me trying to sell a car parked in front of my house, just because I do not know who the owner is?  It seems that in this context, “extended” just means parasitic — claiming an unjust and undeserved benefit from someone else’s labor.

Economists, of course, have less harsh, but no less pejorative, terms for this sort of arrangement.  One is “rent seeking,” which refers to efforts to gain a profit without making any reciprocal contribution to society by creating value.  When a company records and distributes a song or an album, and collects money for it, that is a normal economic exchange — value for value.  But when Warner Music Group continues to collect fees for the use of “Happy Birthday to You” long after any incentive for creativity is being fostered and, apparently, long after the rights have been abandoned, that is pure rent seeking — the pursuit of an undeserved profit without a normal exchange of value.  ECL is a similar exercise in rent seeking, asking to benefit from the labors of unknown others without the obligation to provide value in exchange.

Another economic term is relevant here — deadweight loss.  Taxation is often held to be responsible for deadweight loss, when, because of taxes, it becomes too expensive to make or sell some good.  In that situation, the manufacturer will stop making the good, and society will lose all the potential benefits — no goods and no tax revenue.  Deadweight loss.  In the case of extended collective licensing, the risk of deadweight loss, and the analogy to a tax, is the same.  ECL is a form of tax on using orphan works.  The revenue from that tax will have no benefit in providing an incentive for further creation, because it will not go to the creators who made the works in the first place.  But a requirement to pay such an unproductive tax will certainly deter many digitization projects that could make rare historic materials available for research, study and teaching.  Thus productivity is lost without the benefits of an economic incentive.  It is, truly, a lose-lose situation.

Can this gulf be bridged?

Litigants in court cases often disagree sharply about the law and its application to the facts, so it is probably not a surprise that the briefs filed in the District Court’s re-examination of its ruling in the Georgia State copyright infringement trial should see the issues in such starkly different terms.

If you read the publishers’ brief, the 11th Circuit decision that sent the case back to the District Court changed everything, and every one of those 70 excerpts found to be fair use at trial now must be labeled infringement.  This is absurd, of course, and I don’t actually believe that the publishers expect, or even hope, to win the point.  They want a new ruling that they can appeal.  In my opinion the publisher strategy has now shifted from an effort to “win” the case, as they understand what winning would mean, to a determination to keep it going, in order to profit from ongoing uncertainty in the academic community (and, possibly, to spend so much money that GSU is forced to give up).

On the other hand, the brief from Georgia State, filed last Friday, argues that all 70 of those challenged excerpts are still fair use.  It seems likely that the actual outcome will be somewhere in the middle, and, to be fair to them, GSU does recognize this, by making a concession the publishers never make.  For a number of excerpts where a digital license was shown to be available at the time of the trial, GSU argues that the available licenses were not “reasonable” because they force students to pay based on what they are getting access to, whether or not the specific excerpt is ever actually used.  This is an interesting argument, tracking a long-standing complaint in academic libraries.  If the court accepts it, it would dramatically restructure the licensing market.  But GSU also seems to recognize that this is a stretch, and ends several of its analyses of specific excerpts by saying that the specific use “should be found to be fair if the Court finds the licensing scheme unreasonable, and unfair if the Court finds the licensing scheme reasonable.”  So it seems GSU is prepared for what I believe is the most likely outcome of this reconsideration on remand — a split between fair uses and ones that are not fair that is different from the original finding — probably with some more instances of infringement — but still a “split decision.”

The availability of licenses is one of the interesting issues in these briefs.  The publisher plaintiffs now argue that licenses were available, back in 2009, for those excerpts where the judge said no licenses were “reasonably available.”  They are continuing to try to introduce new evidence to this effect, which GSU vigorously opposes.  But those of us who have been involved in e-reserves for a while remember clearly that such licenses were not available at all through the CCC from Cambridge University Press and only occasionally from Oxford.  So what is this new evidence (which the publishers’ brief says was not offered before because they were so surprised that it was being requested)?  It is an affidavit from a VP at the CCC, and my best guess is that it would argue that licenses were “reasonably available” because it was possible, through the CCC system, to send a direct request to the publisher in those instances where standard licenses for digital excerpts were not offered.  GSU argues that the evidence gathering phase of the case is over, a ruling about licenses has been made and affirmed by the Court of Appeals, and the issue settled.  A lot will depend on how Judge Evans views this issue; so far she has ruled against admitting new evidence.

Another controversy, about which I wrote before, involves whose incentive is at stake.  The Court of Appeals wrote a lengthy discussion of the incentive for authors to write, and its importance for the fundamental purpose of copyright.  To this they appended an odd sentence that says they are “primarily concerned… with [publisher’s] incentive to publish.”  The publishers, of course, hang a lot of weight on this phrase, and take it out of context to do so.  GSU, on their side, make a rather forced argument intended to limit the impact of the sentence.  Neither side can admit what I believe is the truth here: that that one sentence was inserted into an opinion where it does not fit because doing so was a condition of the dissenting judge for keeping his opinion as a “special concurrence” rather than the dissent it really was.  If I am right, this compromise served the publishers well, since they can now cite the phrase from the actual opinion of the Court; it is seldom useful to cite a dissent, after all.  So the publishers quote this phrase repeatedly and use it to argue that all of the factors really collapse into the fourth factor, and that any impact at all, no matter how small, on their markets or potential markets effectively eliminates fair use.  Authors, and the reasons that academic authors write books and articles, do not appear in the publishers’ analysis, as, indeed, they could not if the argument for publisher hegemony over scholarship is to be maintained.

GSU, as we have already seen, takes a more balanced approach.  For the first factor, they discount the publishers’ attempt to make “market substitution” a touchstone even at that point in the analysis, and focus instead on the 11th Circuit’s affirmation that non-profit educational use favors fair use even when transformation is not found.  The GSU brief fleshes this out nicely by discussing the purpose of copyright in relationship to scholarship and teaching.  On the second factor, GSU discusses author incentives directly, which in my opinion is the core of the second factor, even though courts seldom recognize this.  GSU also points out that the publishers have ignored the 11th Circuit’s instruction, both here and in the third factor analysis, to apply a case-by-case inquiry to those factors; instead, the publishers assert that since every book contains some authorial opinion, the second factor always disfavors fair use, and that no amount is small enough to overcome the possibility of “market substitution.”  For their part, GSU introduces, albeit briefly, a discussion of the content of each excerpt (they are often surveys or summaries of research) for the discussion of factor two, and of the reason the specific amount was assigned, in regard to factor three.

As I said, these differences in approach lead to wildly different conclusions.  Consider these paragraphs by which each side sums up its fair use analysis for each of the excerpts at issue:

The publishers end nearly every discussion of a specific passage with these words — “On remand, the Court should find no fair use as to this work because: (1) factor one favors fair use only slightly given the nontransformativeness of the use; (2) factor two favors Plaintiffs, given the evaluative/analytical nature of the material copied; (3) factor three favors Plaintiffs because even assuming narrow tailoring to Professor _____________’s pedagogical purpose, it is counterbalanced by the threat of market substitution, especially in light of the repeated use; and (4) factor four “strongly favors Plaintiffs,” and is entitled to “relatively great weight,” which tips the balance as to this work decidedly against fair use.”

On the other side, GSU closes many discussions (although there is more diversity in their analysis and their summations than in the publishers’) this way — “Given the teaching purpose of the use, the nature of the work and the decidedly small amount used, the fact that this use did not supplant sales of the work, and the lack of digital licensing, the use of this narrowly tailored excerpt constituted fair use.”

These are starkly contrasting visions of what is happening with these excerpts and with electronic reserves, as practiced at a great many universities, as a whole.  It will be interesting, to say the least, to see how Judge Evans decides between such divergent views.

An international perspective on statutory damages

It has been a long time since we discussed statutory damages in this space.  Statutory damages are, of course, the heightened monetary damages that rights holders can elect when they sue someone for infringement.  Because statutory damages presume harm, the rights holder does not have to prove any actual loss.  In the U.S., statutory damages can be as high as $150,000 per infringing act (see 17 U.S. Code section 504(c)).  This is a number that the content industries love to throw around, especially as part of the highly-fictionalized warning you see at the beginning of DVDs.

Back in 2009, when the recording industry was actively suing its own customers for copyright infringement because of file-sharing, statutory damages were briefly a hot topic after juries returned million-dollar verdicts against ordinary individuals for downloading files the actual value of which was less than $100.  At the time I wrote about this issue, and also linked to another lawyer’s blog post which argued that these statutory damages were likely unconstitutional.  Since the RIAA has taken its campaign for ever-stronger copyright enforcement and ever-steeper penalties in a different direction, there has been less conversation about these disproportionate penalties.

This week, however, a development in Europe has reminded me that we should not let this issue drop.  Last week Poland’s Constitutional Court released a ruling which effectively declares Poland’s own take on statutory damages a violation of the Polish Constitution.  Polish law, it seems, enacts the same policy of allowing increased damages, well beyond ordinary judicial remedies, for copyright infringement with a provision that allows tripling of the “respective remuneration that would have been due at the time of claiming it in exchange for the rights holder’s consent for the use of the work” (see Article 79 of the English translation of Poland’s copyright law here, on the WIPO site).  What this essentially says is that triple the actual harm done (the amount the rights holder would have been due) can be awarded as a form of statutory damages.  And the Polish Constitutional Court has now decided that that provision must be changed because it violates a constitutional provision ensuring equality of protection for property ownership.  It seems they are concerned that the Polish copyright law gives a level of protection to copyrighted property that is much greater than other forms of property.

It is interesting to compare this situation to what we find in U.S. law.  We do not, of course, have the same provision about equal protection for property ownership in our Constitution.  The constitutional case against statutory damages is made more on the grounds of due process, where the damages are so far in excess of the harm that they are unreasonable, out of proportion, and unfair to defendants.  Still, there is intuitive sense to the idea that copyrighted works are protected far more comprehensively and stringently than most other kinds of property.  If this is true in Poland, and the Polish Court thinks it is, it is certainly even more the case in the U.S.

Consider two points.  First, in the Jammie Thomas file-sharing case, the relationship between the actual harm — it would have cost about $24 for her to buy the songs at issue — and the $1.9 million verdict against her was far more disproportionate than the triple damages that concerned the Polish court.  If a factor of 3 was too much for the Polish court to accept, a multiplier of nearly 80,000 ought to shock every U.S. court and every U.S. citizen.  Second, it is important to notice the different types of parties involved in the Polish case; it involved a cable TV network that apparently rebroadcast some films without a license.  So corporate entities were involved, and the Polish Court still felt that tripling the damages was unfair.  Yet in the U.S. we have allowed grossly more disproportionate damages to be awarded against private citizens.
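For readers who want to see the scale of that disproportion spelled out, here is a quick back-of-the-envelope calculation.  The figures are the approximate ones cited above (a $1.9 million verdict against roughly $24 in retail value); they are illustrative, not precise court numbers:

```python
# Rough comparison of the Thomas verdict with the actual retail value
# of the songs at issue (approximate figures, for illustration only).
verdict = 1_900_000      # jury award, in dollars
actual_value = 24        # rough retail cost of the songs at issue
polish_cap = 3           # multiplier the Polish court found unconstitutional

multiplier = verdict / actual_value
print(f"U.S. multiplier: about {round(multiplier):,}x the actual harm")
print(f"Provision struck down in Poland capped damages at: {polish_cap}x")
```

Even allowing generous rounding in either direction, the U.S. multiplier comes out four orders of magnitude larger than the treble-damages rule that troubled the Polish court.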

The content industry often looks to Europe and to other international laws and agreements it can use to convince U.S. lawmakers to increase protection for copyrighted works.  Here we have an international court pointing the other way, showing us in the U.S. how out of whack our copyright law has become in the area of statutory damages.  Something tells me this will not be an example cited by the MPAA or the U.S. Trade Representative.  But as Congress and the Copyright Office discuss reforming the copyright law, this finding from the Constitutional Court of Poland should shame us into looking at statutory damages here in America and recognizing that this is a problem in desperate need of a remedy.


This is a solution?

Ever since it appeared, I knew I should write about this new report concerning  orphan works that the Copyright Office issued earlier this month.  But, to be honest, I have been on vacation, and have not had a chance to read the full report yet, only excerpts.  Fortunately, on Monday Mike Masnick from Techdirt posted about the report and absolutely nailed it.  So I have little to add, and simply want to direct readers to Mike’s post.

As Mike observes, the new CO report would mostly make a serious problem worse, in that it would make the use of orphan works more difficult rather than less.  The idea of creating a registry for users to register their proposed use is positively Kafkaesque; the real need is to be able to better identify rights holders, not users.  So why not provide incentives for rights holders to register, rather than creating a new registry that will probably not be used, since it is so counter-intuitive and will be unknown to the vast majority of putative users?

The Techdirt post correctly notes that the problem of orphan works increased exponentially after the U.S. made two changes in its law — the elimination of formalities and the extension of the copyright term of protection to life plus 70 years.  These changes were made because the U.S. joined the Berne Convention and other international treaties on copyright in the 1980s, so reversing them would be very difficult.  Still, the problem is world-wide, so maybe someday sanity will prevail at WIPO and these issues will be addressed directly, instead of taking a kind of backwards approach that tries to solve a problem without addressing its root causes, which has the result of making things worse.  See the suggestions I made several years ago for solving the “Berne Problem” here and here.

The most troubling aspect of the Copyright Office’s new report is the disdain with which it treats fair use.  The U.S. is actually in a better position as far as uses of orphan works are concerned than most nations  because our judges were wise enough to create this doctrine over 150 years ago.  But today’s Copyright Office thinks it knows better; it believes that fair use is “of limited utility” in solving the orphan works problem.  Instead, we need more bureaucratic apparatus.  Worse, to get to this position, the CO presents the HathiTrust case, with its strong affirmation of fair use, as being about “the digitization of millions of non-orphaned works” (p.42).  This is ridiculous, of course; the HathiTrust corpus contains both orphan works and works for which rights holders can be identified.  The CO seems to take the position that since specific uses of orphan works were not ultimately adjudicated in the HathiTrust case, that case is not relevant to the application of fair use to the orphan works problem. So although the report does recommend that any legislative “solution” to the orphan works problem should preserve the users’ ability to rely on fair use, the CO does not seem to feel that fair use is very helpful.  But that simply reflects the prejudice that the CO has about fair use, a prejudice that makes them an unreliable guide to copyright law in the U.S.

A distinction without a difference

The discussion of the new Elsevier policies about sharing and open access has continued at a brisk pace, as anyone following the lists, blogs and Twitter feeds will know.  On one of the most active lists, Elsevier officials have been regular contributors, trying to calm fears and offering rationales, often specious, for their new policy. If one of the stated reasons for their change was to make the policy simpler, the evidence of all these many “clarifying” statements indicates that it is already a dismal failure.

As I read one of the most recent messages from Dr. Alicia Wise of Elsevier, one key aspect of the new policy documents finally sank in for me, and when I fully realized what Elsevier was doing, and what they clearly thought would be a welcome concession to the academics who create the content from which they make billions, my jaw dropped in amazement.

It appears that Elsevier is making a distinction between an author’s personal website or blog and the repository at the institution where that author works. Authors are, I think, able to post final manuscripts to the former for public access, but posting to the latter must be restricted only to internal users for the duration of the newly-imposed embargo periods. In the four column chart that was included in their original announcement, this disparate treatment of repositories and other sites is illustrated in the “After Acceptance” column, where it says that “author manuscripts can be shared… [o]n personal websites or blogs,” but that sharing must be done “privately” on institutional repositories. I think I missed this at first because the chart is so difficult to understand; it must be read from left to right and understood as cumulative, since by themselves the columns are incomplete and confusing.  But, in their publicity campaign around these new rules, Elsevier is placing a lot of weight on this distinction.

In a way, I guess this situation is a little better than what I thought when I first saw the policy. But really, I think I must have missed the distinction at first because it was so improbable that Elsevier would really try to treat individual websites and IRs differently. Now that I fully understand that intention, it provides clear evidence of just how out of touch with the real conditions of academic work Elsevier has become.

Questions abound. Many scientists, for example, maintain lab websites, and their personal profiles are often subordinate to those sites. Articles are most often linked, in these situations, from the main lab website.  Is this a personal website? Given the distinction Elsevier makes, I think it must be, but it is indicative of the fact that the real world does not conform to Elsevier’s attempt to make a simple distinction between “the Internet we think is OK” and “the Internet we are still afraid of.”

By the way, since the new policy allows authors to replace pre-prints on arXiv and RePEc — those two are specifically mentioned — with final author manuscripts, it is even clearer that this new policy is a direct attack on repositories, as the Chronicle of Higher Education perceives in this article.  Elsevier seems to want to broaden its ongoing attack on repositories, shifting from a focus on just those campuses that have an open access policy to now inhibiting green self-archiving on all university campuses.  But they are doing so using a distinction that ultimately makes no sense.

That distinction gets really messy when we try to apply it to the actual conditions of campus IT, something Elsevier apparently knows little about and did not consider as they wrote the new policy documents.  I am reminded that, in a conversation unrelated to the Elsevier policy change, a librarian told me recently that her campus Counsel’s Office had told her that she should treat the repository as an extension of faculty members’ personal sites.  Even before it was enshrined by Elsevier, this was clearly a distinction without a difference.

For one thing, when we consider how users access these copies of final authors’ manuscripts, the line between a personal website and a repository vanishes entirely. In both cases the manuscript would reside on the same servers, or, at least, in the same “cloud.” And our analytics tell us that most people find our repositories through an Internet search engine; they do not go through the “front door” of repository software. The result is that a manuscript will be found just as easily, in the same manner and by the same potential users, if it is on a personal website or in an institutional repository. A Google or Google Scholar search will still find the free copy, so trying to wall off institutional repositories is a truly foolish and futile move.

For many of our campuses, this effort becomes even more problematic as we adopt software that helps faculty members create and populate standardized web profiles. With this software – VIVO and Elements are examples that are becoming quite common — the open access copies that are presented on a faculty author’s individual profile page actually “reside” in the repository. Elsevier apparently views these two “places” – the repository and the faculty web site – as if they really were different rooms in a building, and they could control access to one while making the other open to the public. But that is simply not how the Internet works. After 30 years of experience with hypertext, and with all the money at their disposal, one would think that Elsevier should have gained a better grasp on the technological conditions that prevail on the campuses where the content they publish is created and disseminated. But this policy seems written to facilitate feel-good press releases while still keeping the affordances of the Internet at bay, rather than to provide practical guidelines or address any of the actual needs of researchers.

From control to contempt

I hope it was clear, when I wrote about the press release from Elsevier addressing their new approach to authors’ rights and self-archiving, that I believe the fundamental issue is control.  In a comment to my original post, Mark Seeley, who is Elsevier’s General Counsel, objected to the language I used about control.  Nevertheless, the point he made, about how publishers want people to access “their content,” but in a way that “ensures that their business has continuity,” actually reinforced my sense that the language I used was right on the mark.

My colleague Paolo Mangiafico has suggested that what these new policies are really about is capturing the ecosystem for scholarly sharing under Elsevier’s control.  As Paolo points out, these new policies, which impose long embargo periods on do-it-yourself sharing by authors but offer limited opportunities to share articles when a link or API provided by Elsevier is used, should be seen alongside the company’s purchase of Mendeley; both provide Elsevier an opportunity to capture data about how works are used and re-used, and both  reflect an effort to grab the reins over scholarly sharing to ensure that it is more difficult to share outside of Elsevier’s walled garden than it is inside that enclosure.

I deliberately quote Mr. Seeley’s phrase about “their content” because it is characteristic of how publishers seem to think about what they publish.  I believe it may even be a nearly unconscious gesture of denial of the evident fact that academic publishers rely on others — faculty authors, editors and reviewers — to do most of the work, while the publisher collects all of the profit and fights the authors for subsequent control of the works those authors have created. That denial must be resisted, however, because it is in that gesture that the desire for control becomes outright disrespect for the authors that publishing is supposed to serve.

Nowhere is this disrespect more evident than in publisher claims that the works they publish are “work made for hire,” which means, in legal terms, that the publisher IS the author.  The faculty member who puts pen to paper is completely erased from the transaction.  To be clear, as far as I know Elsevier is not making such a claim with its new policies.  But these work made for hire assertions are growing in academic publishing.

Three years ago I wrote about an author agreement from Oxford University Press that claimed work made for hire over book chapters; that agreement is still in use as far as I am aware.  At the time, I pointed out two reasons why I thought OUP might want to make that claim.  First, if something is a work made for hire, the provision in U.S. copyright law that allows an author or her heirs to terminate any license or transfer after 35 years simply does not apply.  More significantly, an open access license, such as is created by many university policies, probably is not effective if the work is considered made for hire.  This should be pretty obvious, since our law employs the legal fiction that says the employer, not the actual writer, is the author from the very moment of creation in work made for hire situations.  So we should read these claims, when we find them in author agreements, as pretty direct assaults on an author’s ability to comply with an open access policy, no matter how much she may want to.

As disturbing as the Oxford agreement is, however, it should be said that it makes some legal sense.  When a work is created by an independent contractor (and it is not clear to me if an academic author should be defined that way), there are only selected types of works that can even be considered work made for hire; one of them is “contribution[s] to a collective work.”  So a chapter in an edited book is at least plausible as a work made for hire, although the other requirement — an explicit agreement, which some courts have said must predate the creation of the work — may still not be met.  In any case, the situation is much worse with the publication agreement from the American Society of Mechanical Engineers (ASME), which was recently brought to my attention.

ASME takes as its motto the phrase “Setting the Standard,” and with this publication agreement they may well set the standard for contemptuous maltreatment of their authors, many of whom are undoubtedly also members of the society.  A couple of points should be noted here.  First, the contract does claim that the works in question were prepared as work made for hire.  It attempts to “back date” this claim by beginning with an “acknowledgement” that the paper was “specially ordered and commissioned as a work made for hire and, accordingly, ASME is the author of the Paper.”  This acknowledgement is almost certainly untrue in many, if not most, cases, especially since it appears to apply even to conference presentations, which are most certainly not “specially commissioned.”  The legal fiction behind work made for hire has been pushed into the realm of pure fantasy here.

What’s more, later in the agreement the “author” agrees to waive all moral rights, which means that they surrender the right to be attributed as the author of the paper and to protect its integrity.  Basically, an author who is foolish enough to sign this agreement has no relationship at all to the work, once the agreement is in place.  They are given back a very limited set of permissions to use the work internally within their organization and to create some, but not all, forms of derivative works from it (they cannot produce or allow a translation, for example).  Apparently ASME has recently begun refusing to let some students who publish with them use the published paper as part of a dissertation, since most dissertations are now online and ASME does not permit the writer to deposit the article, even in such revised form, in an open repository.

To me, this agreement is the epitome of disrespect for scholarly authors.  Your job, authors are told, is not to spread knowledge, not to teach, not to be part of a wider scholarly conversation.  It is to produce content for us, which we will own and you will have nothing to say about.  You are, as nearly as possible, just “chopped liver.”  It is mind-boggling to me that any self-respecting author would sign this blatant slap in their own face, and that a member-based organization could get away with demanding it.  The best explanation I can think of is that most people do not read the agreements they sign.  But authors — they are authors, darn it, in spite of the work for hire fiction — deserve more respect from publishers who rely on them for content (free content, in fact; the ASME agreement is explicit that writers are paid nothing and are responsible for their own expenses related to the paper).  Indeed, authors should have more respect for themselves, and for the traditions of academic freedom, than to agree to this outlandish publication contract.

Learning how fair use works

How many cases about fair use have been decided in the U.S. since the doctrine was first applied by Justice Story in 1841?  Take a minute to count; I’ll wait.

If you came up with at least 170, the Copyright Office agrees with you.  Last week they announced a fascinating new tool on the CO website, an index of fair use cases.  That index contains summaries of approximately 170 cases, along with a search tool.  The introductory message, however, acknowledges that the index is not complete, so those of you who thought there were more than 170 cases are almost certainly correct.

This index is potentially a very useful tool, and it also raises some interesting questions.  I want to consider the questions first, then discuss how the new fair use index might be used by someone who wanted to learn more about how fair use works (which, by the way, is one of the avowed purposes behind its development).

Coverage is the obvious first question, and, as I said, the CO acknowledges that it is incomplete.  Specifically, it seems heavily weighted toward more recent cases.  There are only 11 cases listed in the index dating to before 1978, and two older cases (1940 and 1968) that are presented in my law school casebook on copyright as important fair use precedents are not included.  So it looks like there are some pretty significant gaps, which one hopes the Copyright Office will address as it continues to develop this tool.

By the way, the issue of continuing development also brings up the question of why the C.O. thought this was a worthwhile investment. It looks useful, to be sure, but there are other sources for similar data, so it is a bit curious that the C.O. chose this among all its potential priorities.

To return to the issue of coverage, it is always important to ask which specific cases were chosen and how they are characterized.  Of the roughly 170 cases, there are 78 for which the result is listed as “Fair use not found,” and 64 in which the C.O. says that fair use was found.  The remaining 29 are listed as “Preliminary ruling, mixed result or remand.”  This last category is rather unhelpful.  For example, the Authors Guild v. HathiTrust case is listed this way, even though it was a strong affirmation of fair use and the remand involved a fairly unimportant issue of standing.  Even more surprising is the fact that this “mixed result” tag is applied to Campbell v. Acuff-Rose Music, the “Oh, Pretty Woman” case from the Supreme Court that is at the heart of modern fair use jurisprudence.  Again, this was a clear fair use win; the case was remanded only because that is what the Supreme Court usually does when it reverses a Court of Appeals’ decision.  So the representation of the holdings is technically accurate, it seems, but not as helpful as it might be in actually focusing on the fair use aspect; while some “mixed result” cases genuinely were that — fair use was found on one issue but not for another — many of the remanded cases actually did involve a clear yea or nay about fair use, and it would be more helpful to categorize them that way.

A particularly useful feature of this index is the ability to limit the listings by jurisdiction (the Appellate Circuits) and by topic.  For example, limited to just cases out of the Fourth Circuit, where I reside, I find that the Court has ruled on seven fair use cases and upheld fair use in six of them.  The seventh was one of those genuinely mixed results, where one challenged use was held to be fair and another was not.

If we limit the subject area of the cases to those labeled “Education/Scholarship/Research,” fair use seems to fare better than it does overall.  In that category of 42 total cases there are 18 findings in favor of fair use and 16 rejections.  Of the remaining 8 mixed results, at least two of them — the HathiTrust Case and the GSU case — should be seen as affirmations of fair use, even if the parameters of that use are still unsettled in GSU.  So the impression many of us have that educational and scholarly uses are a bit more favored in the fair use analysis than other types of cases seems to be confirmed.

Things get more interesting when we look just at the Supreme Court in this index, and the issue of how cases are chosen is again highlighted.  The index shows four fair use cases, with one holding in favor of fair use (Sony v. Universal City Studios), one mixed result (Campbell, as discussed above), and two rejections of fair use (Harper v. Nation and Stewart v. Abend).  This last case, Stewart v. Abend, is actually almost never treated as a fair use case; while fair use was dismissed as a potential defense in the case, the real issue involved assignments of copyright and who could exercise the renewal right that existed at that time.  And this case was remanded, just as Campbell was.  So it is odd that Campbell, with its central finding in favor of fair use, is shown as a mixed result, while Stewart v. Abend, where fair use was tangential and there was also a remand, is tagged as a rejection of fair use.  This suggests at least an unconscious bias against fair use findings.

A different listing of Supreme Court fair use cases, on the IP Watchdog site, includes several additional cases — nine, in all — but does not list Stewart v. Abend as one of them. Several of the cases included by IP Watchdog do not seem to me to really focus on fair use, so I am not saying that the C.O. has under-reported the cases.  But the very different lists do suggest that it is a surprisingly subjective undertaking just to identify the cases that should be included in a fair use index.

Finally, the analysis provided in the C.O.’s case summaries needs to be considered carefully.  To take one example, for the recent case of Kienitz v. Sconnie Nation, about which I wrote earlier this year, the short note about the holding ignores the thing that may be most significant about the case — the reluctance of Judge Frank Easterbrook to apply a “transformation” analysis to the fair use question (HT to my friend and colleague David Hansen for pointing this out).  Again, this is not necessarily a problem, and the case summary of Kienitz at the Stanford Copyright & Fair Use site has a similar synopsis, but it is a reminder that these projects are always created by individuals with specific perspectives, viewpoints and limitations.

Even with all these caveats, I think the Copyright Office has created a useful tool, which can be used by those interested to learn a lot about how fair use is applied, especially by looking at the different categories.  The Stanford site, linked above, and especially its own, much shorter list of cases, might usefully be used alongside the C.O. index.  The Stanford descriptions are  very tightly focused on the fair use issue, so reading them in conjunction with the C.O. summaries, with their attention to procedural matters that sometimes obscure the fair use holding, might produce a more balanced approach.

In any case, this new tool from the Copyright Office, and some of the tools that predate it, remind us that the best way to understand fair use, and to become comfortable with it, is to look closely at the cases, both in the aggregate and individually.  This C.O. database offers a statistical perspective, as well as the ability to focus on parody, or music, or format-shifting, while the Stanford summaries emphasize in a few words the core of the fair use analysis.  Both point the interested reader to full opinions, where the analysis can be understood in the context of all the facts.  Combined in this way, these resources offer a terrific opportunity for librarians, authors, and others to dig deeply into the nuances of fair use.