How efficient is our licensing system?

Two letters landed on my desk a few weeks ago, both from the Copyright Clearance Center.  I have written before about concerns over what we are actually paying for when we pay permission fees to CCC, and my experience with these two letters deepened that concern.

The first letter asked us to give permission for another university to use in class a newsletter from the 1970s that is on our digital collections web site as part of the holdings of the Sallie Bingham Center for Women’s History.  We had to deny the request because we do not hold the rights in this particular newsletter; we have digitized and displayed it based on a belief that doing so is fair use.  In any case, the original author cannot be located since the work was pseudonymous.  The result is that CCC, having actually failed to contact the rights holder, will now send the other university a denial of permission message.  This disturbed me, and I decided to get in touch with the professor directly, suggesting that she link to our digital collection so her students would still be able to read the work.  Unfortunately, the permission mechanism at CCC is an either/or switch, which cannot accommodate all of the nuances of copyright in higher education.

The second letter started a much more complicated chase.  It informed us that someone at Duke was entitled to a royalty check, but that the original check had gone uncashed for 6 months.  The letter offered to reissue a check (for a $25 fee that was nearly half the total amount) or gave us the option to refuse the payment altogether.  Since the letter was sent to an impossible address – the name of one University entity but the box number for a different one – it was no surprise that the first check had been lost.  When this second notice found its way to my desk, I decided to investigate.

What I found was very troubling.  The folks at the CCC told me that the royalties had been collected from a South African university and they cited two titles.  Unfortunately, neither title seems to be something for which anyone at Duke has the rights in the first place.  One appeared to be a Harvard Business case study; the other was an article from a journal for which the ISSN I was given does not exist.  None of the authors named by CCC appear to have any connection to Duke, and I cannot locate the specific article at all.  When I asked for more details, two different representatives at CCC promised to call me back.  That was three weeks ago, and I have heard nothing further.

My concern here is not to collect the $56 being offered to us.  Instead, I am wondering just what that South African university actually paid for.  Obviously a substantial fee was collected and permission granted.  Yet the CCC seems to have based that fee on a mistaken perception of who the rights holder is and on the assumption that it was authorized by that rights holder to sell the permission.

We are often told that the application of fair use on campus can be quite narrow because there is an efficient mechanism for licensing reuse and rewarding authors.  This experience reinforces my perception that that mechanism is not as efficient as is often claimed, and that a great deal of the money we spend on permissions never does get to the authors who are supposed to be rewarded.  The fact is that locating rights holders is very difficult, and the Copyright Clearance Center is as much at the mercy of those difficulties as are the rest of us.  One of the reasons for reinstating a renewal provision for US authors into our copyright law would be to make locating the real holder of specific rights much easier.  Until we have such a system in place, we must be wary of relying too heavily on any licensing organization to actually know who each rights holder is and how to get them the fees that are supposed to motivate them.

Should the court consider the new Google patent?

On Tuesday Google was formally granted a patent on software to selectively control access to content, based on access rules and geographical locations.  There is a story on Ars Technica here that explains the patent and its potential application very nicely.  Basically, this is a technique for filtering what users can see based on where they are in the world.  Such filtering is not new; Yahoo! famously lost a court case in France and had to begin controlling access to its auction sites to prevent Nazi memorabilia from being sold in that country.  Different international laws about all kinds of topics can force Internet services to distinguish what can and cannot be seen in different parts of the world.
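The basic mechanism is simple enough to sketch in a few lines.  The toy Python below is only a hypothetical illustration of geography-based filtering, not Google’s patented method: it fakes the geolocation step with a lookup table (real services consult IP-to-country databases) and then checks each item’s access rules against the viewer’s inferred country; all of the item names and addresses are invented for the example.

```python
# Hypothetical sketch of geography-based content filtering.
# Real systems resolve a client IP to a country via a GeoIP database;
# here that step is faked with a small lookup table.

IP_TO_COUNTRY = {
    "203.0.113.5": "FR",   # example address in France
    "198.51.100.7": "US",  # example address in the United States
}

# Per-item access rules: the set of countries where each item may be shown.
ACCESS_RULES = {
    "nazi-memorabilia-auction": {"US"},            # blocked in France
    "public-domain-book": {"US", "FR", "CA"},
}

def may_view(client_ip: str, item: str) -> bool:
    """Return True if the client's inferred country may access the item."""
    country = IP_TO_COUNTRY.get(client_ip)
    allowed = ACCESS_RULES.get(item, set())
    return country in allowed

print(may_view("198.51.100.7", "nazi-memorabilia-auction"))  # True: US viewer
print(may_view("203.0.113.5", "nazi-memorabilia-auction"))   # False: French viewer
```

The same shape of rule table, scaled up and combined with licensing metadata, is what would let a service show a digitized book to users in one country while withholding it from another.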

This story is interesting, however, for at least three reasons, the last one very relevant to the fairness hearing being held tomorrow in regard to the proposed settlement of the copyright lawsuit over the Google Books project.

The first thing that is interesting in this story is the fact that the patent application for this software was filed in September of 2004.  The five and a half year gap between initial filing and final approval is not necessarily unusual, but it gives me a chance to remind readers how long and costly the patent process is.  This is a huge difference between copyright and patents, and indicates why copyright is usually so much more important to higher education.  Every scholar, administrator, librarian and student owns copyrights, but relatively few can afford the time and money to obtain a patent, even if they have an invention that meets the much higher standards for patentability.

Which brings me to the second point: should software of this type even be eligible for patent protection?  Software patents were controversial for a long time because they were alleged to represent “only” abstract ideas – algorithms based on zeros and ones.  And until at least the mid-1990s, the Patent Office and the courts would not recognize patents for business methods.  All of that seemed to be resolved in favor of patenting business method software, but a case currently before the Supreme Court, called Bilski v. Kappos, has the potential to alter the rules on this issue.

But it is the impact of this patent on the Google Books settlement that really interests me.  Should the court considering the settlement tomorrow take notice of this patent?  If it did, what impact would it have?  Given the objections to the settlement from international copyright holders and the promises Google has made to exclude works from outside the US, UK and Canada, the need for some filtering system seems obvious.  So from one point of view, this patent is indicative of Google’s good faith efforts to do what it has promised to do.  Nevertheless, there are some less charitable interpretations that could be applied.

For one thing, this software could enable censorship of the type Google first practiced, then rejected, in China.  We should never forget that the origin of copyright was as a tool for censorship; anything that automates copyright enforcement runs the risk of facilitating repression.

Of most interest to the GB settlement, however, is the question of whether this patent ratchets up the worry about a Google monopoly over digital books.  Much comment has been made about the possibility that Google will have a monopoly over digital access to orphan works.  It is unlikely that any other party will be able to get the same kind of compulsory license for all orphan works that Google stands to gain in this class action settlement.  Now we must face the possibility that, even if a competitor could get such a license, in order to effectuate the necessary access restrictions it would have to seek a license from Google for this patented process.

The Ars Technica article points out that Google has promised not to use its patent portfolio for offensive purposes, that is, to limit competition, and that its record so far suggests that it is serious about that promise.  Nevertheless, courts need to look beyond promises to “do no evil” and think about long-term consequences.  As the court considers whether the proposed settlement will give Google too much market power, it might do well to consider this patent on geographical filtering software as one more reason to keep a sharp eye on the project as it proceeds.

OSTP comments and the issue of compensation

I wanted to post this earlier, but intervening events got the better of me.  As most readers will know, the White House Office of Science and Technology Policy recently collected a wide range of very useful and specific comments in response to a request for information about public access policies for federally-funded research.  I wanted to point readers to two sets of comments, those that I wrote on behalf of the Duke University Office of Scholarly Communications, which are available here, and the superb comments from Harvard Provost Steven Hyman, which are linked from Harvard’s Office of Scholarly Communications blog, the Occasional Pamphlet.

One issue that arises in some of the OSTP comments is “compensation” for publishers when the final published version of articles based on federally-funded research is made publicly accessible.  I was recently part of a conversation on public access in which several academic publishers from scholarly societies raised this term.  I bit my tongue at the time to keep from yelling because I thought it was an idiosyncratic notion with no legs.  But when I looked at the full set of OSTP comments, I noticed that compensation is brought up by numerous publishers.  See, for example, the comments from the Association of American Publishers, from Elsevier, and from STM: the Association of Scientific, Technical and Medical Publishers.  This last set of comments is very explicit in suggesting that publishers deserve financial compensation (not just “compensation” in the form of embargoes) for the value they add to scholarly articles through managing peer-review and copy editing (see page 10).

I continue to be amazed that scholarly publishers are willing to make this demand in this language, but all I can say is that it is a conversation I am anxious to have.  I hope we can discuss the failures of compensation that occur throughout the scholarly publication system.  Publishers, of course, are usually the only ones who actually profit financially from scholarly journal articles.   Taxpayers often underwrite the work upon which those articles are based, and universities also supply resources and salaries to the authors.  Where is compensation for those entities, which rightfully should be paid out of subscription income?  Instead, of course, it is the universities that pay to get back the products of their own research, through their library budgets.  Peer-review is, of course, managed by publishers, but the actual intellectual work is again done by university faculty members, who donate their time and labor to improve scholarship.  If we are going to talk about compensation, let’s discuss how they should be compensated from the profits publishers make from their work.  And finally, how should scholars be compensated for transferring their copyrights to publishers?  There are, after all, substantial lost opportunity costs whenever an author surrenders control over their work.  These transfers have almost always been gratuitous in the past, but if we are going to talk about compensation, perhaps that can change.

In short, I hope we can have a conversation about compensation, because such a conversation can only reveal how exploitative and unsustainable the current model is.  If we discuss the full range of compensation issues, and not only narrow questions about copy editing, perhaps we can make progress towards a fair system of scholarly communications.

LOL at the Federal Register

It sounds very strange, but I really did find myself laughing as I read a long notice from the Copyright Office in volume 75, number 15 (Jan. 25, 2010) of the Federal Register.  Admittedly, some background is necessary to acquit me of the suspicion of insanity.

The Copyright Office notice details an interim rule that makes an interesting change in Office policy.  Essentially, the Office is now giving itself the right to ask for mandatory deposit of a certain class of online-only publications.  Deposit is mandatory for all published works, although failure to comply with this rule does not impact the availability of copyright protection itself.  But until now the Office has exempted online-only publications and, indeed, has not taken a position on whether or not such works are actually “published.”  In this notice they do acknowledge, based on several court rulings, that online material is published, and they extend the mandatory deposit requirement to a circumscribed class of online-only works — essentially formal periodicals that are published online without a print equivalent.  The requirement of deposit is not automatic even for these works; it will be triggered only if the Copyright Office makes a demand.  And the notice is careful to exclude things like blog posts, although this blogger wonders how successful that effort really is.

For me the real humor in this dry Federal Register notice concerns the issue of Digital Rights Management, or DRM — the technological protections used by many rights holders to control access to their works.  Such protection measures are fiercely protected by the copyright law itself; the Digital Millennium Copyright Act added provisions that make it illegal to circumvent these electronic locks even for otherwise legal purposes.  Some courts have even held, in spite of plain language in the statute, that fair use of a work can be prevented if the user would have to break through a digital lock.

So the irony that made me laugh out loud was the discovery that the Copyright Office will require that DRM be removed from the copy of an online publication that is deposited subject to this new rule.  Why?  Because “copies of works submitted to the Copyright Office under this interim rule must be accessible to the Office, the Library, and the Library’s users.”  In order to ensure this accessibility, the interim rule makes it part of the definition of the “best edition” that must be deposited that “technological measures that control access to or use of the work should be removed” (see page 3867 of the Register, column 3, for these quotes).

As if to soften the blow to publishers from this rule, the Copyright Office goes on to detail the special security measures that will be taken regarding use of these deposited copies.  Users will have to be in a Library of Congress facility, and only two users at a time will be given access, over a secure network.  Yet the notice goes on to say that “Authorized users may print from electronic works to the extent allowed by the fair use provisions of the copyright law . . . as is the case with traditional publications.”

Here is the problem.  DRM systems prevent users from exercising their rights under copyright law; we often cannot print from DRM-protected works even “to the extent allowed by law.”  If we disable technological protections to do so, we may be subject to draconian penalties.  No other library in the world has the power to exempt its users from these burdens.  The Library of Congress seems to recognize the unfair restriction placed on users by DRM and is using its unique position to mitigate that problem.  But what about the rest of us?

Of course, the Library of Congress does have the authority to relieve the rest of us of some of the burden created by the anti-circumvention rules.  The Library is given rule-making authority by the DMCA to declare exceptions to these rules.  Unfortunately, when the new exceptions were due in late 2009, the Library punted, issuing an indefinite extension of the old, inadequate exceptions and promising new rules in “no more than a few weeks.”  The new exceptions have still not been announced, three months later.  With the Library’s clear recognition in last week’s rule that DRM protected works are not acceptable to meet the needs of library users, we can only hope that these long-delayed exceptions, when announced, will recognize the widespread harm done by DRM and will declare broad exceptions  — like the one the Library has given itself — to help the rest of us.

“Renewing copyright” and a reflection on versions

In a post about two months ago I promised that I would offer a link to the article I wrote on reforming copyright law from the perspective of academic libraries.  That article was published this month in portal: Libraries and the Academy, and is now also available in DukeSpace, the open access repository at the Duke Libraries.

The full citation for the article is:  Kevin L. Smith, “Copyright Renewal for Libraries: Seven Steps Toward a User-friendly Law,” portal: Libraries and the Academy, Volume 10, Number 1, January 2010, pp. 5-27.

The published version is available on Project Muse at http://muse.jhu.edu/journals/portal_libraries_and_the_academy/summary/v010/10.1.smith.html

If you cannot access Project Muse, this is the link to the DukeSpace version, which is my final manuscript but lacks the formatting and copy editing done by the good folks who publish portal:

http://hdl.handle.net/10161/1702

As I said in the original post linked to above, I hope my suggestions will be read in combination with those made by Professor Jessica Litman in her wonderful article on “Real Copyright Reform.”

I had intended to end this post with the information above, but a recent discovery has caused me to change that plan.  Late last week I discovered that a small error, an extra clause made up of words from elsewhere in the sentence, was inserted into the HTML version of the article.  It does not appear in my manuscript, nor in the PDF of the published article, only in the HTML version.  I contacted the editorial folks at portal and expect that the error will be fixed shortly, perhaps even before I publish this post (Note on 2/2 — the error has been corrected).  But it does raise some questions about some of the assertions made on behalf of traditional publication.

First, we are often told that copy editing adds value to an article and that publishers deserve compensation for adding that value whenever the public is given access to the final published version of an article.  On the compensation issue I shall write more later.  But here I want to note that the editorial process can insert errors as well as eliminate them.  I found the editorial assistance from portal to be superb, but, in spite of their best efforts, the multiple stages of the publication process are not all within their control.  The result was that an error that I was not responsible for, albeit a minor one, found its way into my work.

Second, this small incident raises questions about the assertion that publishers provide the scholarly community with the “version of record” that assures consistent quality.  In fact, there are two different versions of my article available at this moment (on 2/1) on the Project Muse site for this journal — the HTML is different from the PDF in at least this one respect.  So which is the version of record?  To make that determination, I am the final arbiter, and I hope that the error I caught in the HTML will be corrected based on my request.

This suggests that there is at least an argument that the “version of record” should be the one that is closest to the author’s hand.  Who else has a greater incentive to ensure accuracy, after all?  A serious error may impact the publisher’s reputation to some degree, but it can be devastating to that of the author.  And I would certainly hope that a significant error, such as an incorrect calculation or formula, would never be “corrected” by a copy editor without first consulting the author; it is easy to imagine cases where what looks like an error — a deviation from the expected — is in fact the heart of the argument.  Thus significant corrections should always be made with input from the author, and the author would then be free to correct any versions she has made available to the public.  So I would like to see discussions of “version of record” include the likelihood that the version nearest to the author may, at least sometimes, be the most accurate version available.

Can we stream digital video?

I had not even had a chance to open my daily e-mail from Inside Higher Ed yesterday before four colleagues had sent me a link to this story about an educational video trade association forcing UCLA to halt its practice of streaming digitized video on course Web sites.  Several suggested that I would surely want to blog about the story, and they were, of course, correct.

The story contains some chilling rhetoric from the representative of the Association for Information and Media Equipment – intentional, I am sure – about their plans to investigate and threaten other colleges and universities that are doing the same thing.  Many schools, of course, have explored these options because the pressure from faculty and students to provide greater digital access to our film collections is intense.  Some have concluded that the legal risk is too great and are resisting that pressure, at least for now.  Others have tried various justifications, clearly hoping to “fly under the radar.”  This story will certainly strike fear into many, and will give more ammunition to faculty members who complain that copyright law prevents them from teaching effectively in the media-saturated world of 21st Century America.

In response to the story, I want to suggest here what the major alternatives for legal streaming of digital video might be and the problems inherent in each alternative.  I know from conversations with colleagues that each of these strategies is being tried somewhere.

The first, and most obvious, possibility is to rely on the TEACH Act, which amended one of the Section 110 exceptions to the public performance right in copyright in order to allow “transmissions” of certain performances for distance education.  TEACH (or Section 110(2)) has a lot of specific requirements that must be met (see this TEACH ACT toolkit from NC State University), although many of those requirements would appear to be satisfied when digital video is streamed through a closed-access course management system.  The real problem with relying on TEACH is the portion limits it imposes; it permits transmission of entire “non-dramatic musical and literary works” and “reasonable and limited portions” of other audio-visual works. This second provision seems to apply to films and to disallow the transmission of entire films.  Some institutions would argue, I think, that an entire film is often the only “reasonable” portion to use for a particular teaching purpose, but that argument ignores the word “limited.” The point about a reasonable portion is well-taken, in my opinion, but only proves that TEACH was never an adequate solution to this problem.

Other institutions could assert fair use as the justification for streaming digital video.  These schools would point out, I imagine, that courts have often held that the use of an entire work can be a fair use, based on the overall balancing of the fair use factors and the totality of circumstances.  The trade group clearly disagrees, although the comments about fair use in the article are not really on target.  It is correct that password-protection alone is not enough to guarantee fair use, but it does strengthen the university’s position in the complete analysis of the factors.  Simply to say that a password does not make something fair use is as incomplete as asserting that an educational purpose always means a use is fair; both assertions miss the need for a complete examination and balancing of the factors.

The problem I see for a fair use justification is that courts would be likely, in my opinion, to look at the portion limits in the TEACH Act and say that that legislation was Congress’ opportunity to provide guidance on educational transmissions, and it selected a limited standard.  A court that took that approach would be unlikely to let a school “shoehorn” the entire film in under fair use, simply in order to avoid the inconvenient limits imposed by TEACH.  But I have to add that there is no agreement on this point — even the intern in my office this year, who is also a lawyer, disagrees with me — and some universities clearly have decided to rely on fair use to stream entire videos.

Perhaps the most interesting argument, however, is the one that UCLA seems to be making, according to the article, based on the performance exception that precedes TEACH in Section 110 and permits performances in face-to-face teaching situations.  Section 110(1) is clearly meant to have some “give” in it, since it refers to “teaching situations” rather than classes and to “classroom[s] or similar place[s] devoted to instruction [emphasis mine].”  UCLA seems to want to stretch these terms to include the course Web site as part of the face-to-face instruction.  I know of other institutions, less bold than UCLA, perhaps, but still unwilling to accept unworkable limitations, that read 110(1) to permit streaming to designated sites like language labs, but not to course sites that can be accessed from anywhere.  These efforts to clarify the fuzzy boundaries of 110(1) are fascinating and seem to invite a court to step in and clarify; it is just that no one wants to be the defendant in that case if they can help it.  While I admit to lingering doubts, this last approach seems to me to be the most surprising, yet most promising, of the three.

There is still another obstacle, however, posed by the anti-circumvention rules added to copyright law by the Digital Millennium Copyright Act.  This provision prevents the circumvention of technological protection measures even, in some cases, when the purpose of the use would be permitted.  So ripping a DVD protected with CSS (Content Scramble System) may violate these rules even if it is otherwise legal.  The DMCA specifically stated that these rules should not inhibit fair use, but courts have been inconsistent about that provision in circumvention cases.

Also, the Library of Congress is charged with declaring categories of exceptions to anti-circumvention in a rulemaking process every three years.  New rules, which are desperately needed, were due in October 2009, but the Library punted, extending the old rules while giving itself unlimited time to try to craft new ones.  The situation is getting worse on university campuses, and we have to ask when the Library of Congress is going to clarify the situation.

In the end, I agree with Tracy Mitrano from Cornell, quoted in the article, that this is one more place where copyright law is not up to the technological challenges posed in higher education today.  The need for revision “before [the law] does any more damage” is clear.  We can only hope that the educational media industry will eventually come to understand this (they are supposed to be educational, after all) and move away from threats and towards real dialogue.

ScienceOnline and copyright anxiety

I attended parts of the ScienceOnline 2010 conference, held here in the Research Triangle this weekend.  There was a fascinating array of topics discussed and an interesting crowd of 270+ that included many working scientists, librarians and even journalists.  It was a great opportunity to listen to scientists talk about how they want to communicate with one another and with the general public.

There are some excellent discussions of what went on at this year’s conference, especially here and here on Dorothea Salo’s blog.  Those with a real passion for more information can check out this growing list of blog posts about the conference.  I won’t try to compete with these comprehensive recaps, especially because my selection of events to attend was rather idiosyncratic, and perhaps even ill-advised.  But I do want to make three quick observations about what I personally learned from the conference.

First, I discovered one more argument for open science that had not occurred to me before, but has the potential to be very compelling for scientists on our faculties.  One reason academic research should be online is that “junk” science is already there.  If the general public — including the proportion thereof who vote or require health care — do not make good decisions in regard to matters involving scientific knowledge, we can only blame ourselves when the best research is not available to them, hidden behind pay walls.

Second, I was fascinated to discover that health science bloggers have developed a code of ethics to try and account for the many issues that arise when scientists put important and potentially life-altering information onto the open web.  The benefits of this openness are indisputable, but so are some of the risks.  This code of ethics represents an attempt to address some of those risks and minimize them (there is a somewhat different discussion of a similar issue from the conference here).  The criteria applied to evaluate health care blogs (see the text of the code itself) — clear representation of perspective, confidentiality, commercial disclosure, reliability of information and courtesy — encapsulate standards that all of us who try to share information and opinion online need to be aware of.

Third, I was amazed at how important, and problematic, copyright issues were to this group.  I attended seven sessions at the conference, and five of them dealt with copyright as a major (although often unannounced) topic of discussion.  Even recognizing my tendency to gravitate toward such sessions, this is a high percentage.  I asked a fellow attendee why so many sessions raised copyright and was told, albeit with tongue in cheek, that it is “ruining our lives.”  More seriously, one scientist described trying to put his classroom lecture slides online and being told by his university’s counsel that all material that he did not create had to be removed first.  Apparently there was no discussion of the applicability of fair use and how to decide what was and was not allowable; just a wholesale rule that would discourage most scientists interested in sharing.  This suggested to me that it really is very important to improve the quality of copyright education on campus — for faculty, librarians (who are often the ones asked for advice) and even legal counsel.  We cannot reasonably advocate more online open access unless we also give our scholars the resources to accomplish that goal.  In many ways the technological infrastructure is becoming trivial, and it is the policy and legal questions that must be addressed directly if we really want to encourage openness.

An amusing chance to review some key ideas

There are probably many readers out there who know who Vanessa Hudgens is.  I did not, until I saw some blog posts reporting on her ongoing lawsuit against website owners who apparently posted nude photos of the actress and singer without her permission; see this report (without the pictures) on the TechDirt site.  Not, I admit, a serious issue of scholarly communications, but it does offer a chance to review some key points about copyright law about which there seem to always be questions and confusion.

First, the subject of a photograph does not have a copyright claim in the picture.  As the post linked to above points out, this has some counter-intuitive results.  One of the best photos of my wife and me together taken in recent years was snapped by a stranger we met in Istanbul who asked me to take a picture of him and his new bride and then returned the favor.  Oddly, he has a copyright claim in the photo of my wife and me, while I would have a claim in the picture of him, on his camera.  This is a result of the automatic nature of copyright protection, which showers down on a creator as he or she creates.  No such right in copyright accrues to the person whose picture is taken.  In the case of Ms. Hudgens, who is suing for copyright infringement, the ability to make that claim depends on the asserted fact that she took the photos herself, using a cellphone camera.  Were they taken by another person, Ms. Hudgens would not have any copyright claim.

This brings us to the need to distinguish copyrights, which are granted and enforced by federal law, from a right of publicity, which is a state law claim.  Merely as the subject of these photos, Hudgens might still have a claim that her right of publicity has been infringed, even if she had no copyright claim.  There could be a dispute about whether posting these pictures to a website was a commercial use, which is usually necessary to trigger the right of publicity, but I suspect that the website sells advertising and expected the photos to drive up both hits and revenues.  So posting the pictures might well have been the kind of use that would violate Vanessa’s right to control commercial use of her image (as well as other privacy rights, perhaps).

This need to distinguish between the owner of a copyright interest in a photograph (the photographer) and the owner of the publicity right (usually the subject) is the first lesson we can tease out of this case.

There are other ways a subject might get a copyright interest in a photograph, by the way.  First, the photo might be a work made for hire.  In that case, the employer owns the copyright from the start, and the employer might well also be the subject of the photo.  But just paying for a photograph does not make it a work for hire; the photographer must either be a regular employee of the employer/subject or an independent contractor who explicitly agrees in writing that the work will be a work for hire.  Alternatively, the photo might be a derivative work based on a copyrighted work that is part of its subject.  Suppose an artist poses in front of one of her paintings, or that Vanessa Hudgens had been wearing a dress she had designed herself (clearly counter-factual).  In those cases, the subject would have a copyright interest in the photo because of the derivative representation of an original work.

Finally, we can also take from this case a reminder about the role of registration in copyright protection.  The blog post notes that it is “odd” that these photos are registered with the Copyright Office if they really were private self-portraits, as claimed.  Not really.  We should remember that registration is not required for protection — copyright is bestowed automatically as soon as the pictures were snapped — but it is required to bring an infringement action into court.  Thus it is perfectly possible to hold a copyright, have it infringed, then go and register that right before bringing a lawsuit.  In fact, a quick review of the Copyright Office’s records suggests that this was the case here, since the registration date of the photos is October 2009.  If one follows that sequence of events, the range of damages is limited, since statutory damages are not available.  I suppose, however, that if the copyright had been registered before the infringement took place (so that statutory damages could be sought), one might well doubt the assertion that the photos were intended to be private.

Let the user beware

If the box that says “I Accept” (regarding a website’s terms of use)  really is the most dangerous place on the web, as I wrote several weeks ago, it is getting even riskier out there.  For a long time, a relatively safe rule-of-thumb has been that EULAs (end user license agreements) that forced you to see their terms were enforceable, while those that merely offered you a chance to see them by clicking on a link you did not have to follow were not.  That has never been a hard-and-fast rule, and its utility has been seriously eroded by several court cases in recent years.

The most recent case involved a challenge to a “choice of forum” clause contained in the EULA for a site called ServiceMagic.  A lawsuit by Victoria Major was dismissed because it was filed in Missouri, while the EULA says that all lawsuits must be filed in Colorado.  The Missouri Court of Appeals upheld the dismissal, even though Ms. Major never read the EULA and was never forced to see it and click “I Accept.”  The court held, as have several others in recent years, that the link was placed prominently enough for the terms to be enforced, even though there was no technological requirement to actually click through the license.  Some details about the case can be found here, on ArsTechnica.

There was an uproar several years ago about a proposed change in the Uniform Commercial Code called UCITA (the Uniform Computer Information Transactions Act) that would have made it too easy, its opponents felt, for consumers to commit themselves to licensing terms about which they had no knowledge and no chance to negotiate.  UCITA was adopted only in Maryland and Virginia, and it has since been withdrawn by its sponsors.  But the goal of UCITA — to speed up Internet commerce by simplifying licensing, even at the cost of consumer protection — is being accomplished by courts around the country anyway.  This latest case is one in a line of similar cases that make it even more imperative that users look for and read licensing agreements, even if the site itself does not force them to do so.

There are a couple of caveats to this trend.  First, it is not universal.  A similar case against online retailer Overstock, reported here, went the other way, apparently because the link to the terms was not sufficiently prominent.  And in the ServiceMagic case, one judge on the appeals court wrote that she would only uphold reasonable and expected terms like choice of forum, not terms that were “unconscionable.”  So perhaps we can expect a more active review of licensing terms when the license is merely a “clickwrap” or “browsewrap.”  Nevertheless, the most important caveat raised by these cases may also be the oldest — “caveat emptor.”

Taxing culture

Happy New Year to all.

January 1 is traditionally Public Domain Day, in addition to being a day for parades, bowl games and hangovers.  That is because most copyright laws stipulate that all copyrights whose terms would expire at any point during a particular year actually expire on Dec. 31.  Thus, on January 1, lots of works should enter the public domain.  In Europe, January 1, 2010 sees free public access to the poetry of William Butler Yeats and the works of Sigmund Freud.  But here in the U.S., the gerrymandering of our law over the past decades has resulted in almost no new works in our public domain.

There is a great web page on Public Domain Day 2010 here, from the Center for the Study of the Public Domain at Duke’s Law School.

I am particularly struck by the quote on the CSPD page that reminds us that we are the first generation of Americans to deny ourselves access to our own culture.  Almost nothing created in our lifetime will be available to support new creation and innovation by us.  If we are not vigilant, these works will be denied to our children and grandchildren as well.  Nothing except some unpublished works will enter the US public domain through expiration of the copyright term until 2019, and possibly later than that, if the term is extended retroactively again.

These reflections demonstrate very clearly that copyright protection really is, as Lord Macauley said many years ago, a tax on the public.  We continue to pay higher prices for works by Yeats, Freud and thousands of others because copyright prevents free market forces from operating.  We must seek permission, often from descendants who neither know nor care about the control they hold, to make new works based on old.  All these costs are imposed on us by the government, which grants the copyright monopoly ostensibly for the benefit of authors.  But there is no sign that the descendants of Freud or Yeats are benefiting from this absurdly long protection in the US.  Only intermediaries continue to make money, because they do not have to compete in a free market but can charge the public monopoly prices.

Perhaps when the next proposal to extend copyright’s term comes before Congress, we can be intentional about labeling it what it really is — a tax that benefits private interests at the expense of the public.  If its impact were clearly understood, it would be much harder for legislators to vote for the copyright tax.