Public access and protectionism

By now many folks have commented on the announcement from Nature Publishing Group early this week about public access to all of its content, and most have sussed out the fairly obvious fact that, in spite of the rah-rah headline in the Chronicle of Higher Education, this is not open access, nor even public access as it is defined by many national or funder mandates.  Just to review quickly the major points about why this announcement actually gives the scholarly community so much less than is implied by either of those terms, consider these limitations:

  1. A putative reader can only get to an article if they are sent a link by a subscriber, or the link is present in a news article written by one of the 100 news organizations that NPG has chosen to “honor.”
  2. Articles can only be read inside NPG’s proprietary reader.
  3. No printing or downloading is possible, so a non-subscriber hoping to use one of these articles to further her own research better have a darn good memory!
  4. No machine processing will be possible; no text or data mining.

In short, all of the inconveniences of print journals are preserved; what NPG is facilitating here is essentially a replica of loaning a colleague your copy of the printed magazine.  Or, at best, the old-fashioned system whereby authors were given paper “off-prints” to send to colleagues.  Although, honestly, off-prints had more utility for furthering research than this “now you see it, now you don’t” system has.

If this is not open or public access, what is it?  I like the term “beggar access,” which Ross Mounce applied to NPG’s scheme in a recent blog post, since it makes clear that any potential reader must ask for and receive the link from a subscriber.  Some suggest that this is a small step forward, but I am not convinced.  There is nothing public or open about this “ask a subscriber” model; all it really does is prevent scholars from downloading PDFs from their subscription access to NPG journals and emailing them to colleagues who lack a subscription.  In short, it looks like another stage in a major publisher’s ongoing comedy of fear and incomprehension about how digital scholarship works.  But Mounce’s post suggests that the move is more than that; he points out ways in which it may be designed to prop up digital businesses that Nature and its parent Macmillan have invested in — specifically ReadCube and Altmetric.com.  The byzantine scheme announced by Nature will drive readers to ReadCube and will generate data for Altmetric.com, helping ReadCube compete with, for example, Elsevier and its proprietary reading and sharing tool, Mendeley.

That is, this looks like another move in the efforts by the large commercial publishers to buy up and co-opt the potential of open access. On their lips, open access does not mean greater potential for research and the advancement of science; it means a new market to exploit.  If administrators, researchers and librarians allow that to happen, they will have only themselves to blame.

My colleague Haley Walton, who recently attended OpenCon 2014, told me about a presentation made by Audrey Watters that included the idea of “openwashing,” which Watters defines like this:

Openwashing: n., having an appearance of open-source and open-licensing for marketing purposes, while continuing proprietary practices.

This is exactly what is happening in this announcement from NPG; old business models and awkward exploitation of new markets are being dressed up and presented as a commitment to access to scholarship, but the ruse is pretty transparent.  It may quack like a duck, or be quacked about, but this plan is really a turkey.

If NPG were really committed to better access for scientific research, there is a simple step they could take — put an end to the six-month embargo they impose on author self-archiving.  Much of their competition allows immediate self-archiving of an author’s final manuscript version of articles, but Nature does not.  Instead, they require a six-month embargo on such distribution.  So this new move does very little to ameliorate the situation; the public still cannot see Nature-published research until it is old news.

Speaking of news, at Duke we have a relationship between the office of Scholarly Communications and that of News & Communications whereby we are notified of upcoming articles about research done at Duke.  In many cases, we are able to work with authors to get a version of the article in question into our repository and provide an open link that can be included in the article when it is released, or added shortly after release.  Our researchers find that putting such links in news stories leads to much better coverage of their discoveries and increased impact on their disciplines.  We always do this in accordance with the specific journal policies — we do not want to place our authors in a difficult position — which means that we cannot include Nature-published articles in this program.  To be frank, articles published in Nature remain highly valued by promotion and tenure committees, but relatively obscure in terms of their ability to advance science.  NPG seems to understand this problem, which is why they have selected a small number of news outlets to be allowed to use these tightly-restricted, read-only links.  They want to avoid increasing irrelevance, but they cannot quite bring themselves to take the necessary risk.  The best way they could advance science would be to eliminate the six-month embargo.

It is interesting to consider what might happen if Nature embraced a more comprehensive opportunity to learn what researchers think about open access by tying their “get a link from a subscriber” offer with an announcement that they were lifting the six-month embargo on self-archiving.  That would demonstrate a real commitment to better access for science, and it would set up a nice experiment.  Is the “version of record” really as important to researchers as some claim?  Important enough to tolerate the straightjacket created by NPG’s proprietary links?  Or will researchers and authors prefer self-archiving, even though an earlier version of the article must be used? This is not an obvious choice, and NPG might actually win its point, if it were willing to try; they might discover that their scheme is more attractive to authors than self-archiving.  NPG would have little to lose if they did this, and they would gain much more credit for facilitating real openness.  But the only way to know what the real preference among academic authors is would be for Nature Publishing to drop its embargo requirement and let authors choose.  When they make that announcement, I will believe that their commitment to finding new ways to promote research and learning is real.

Going all in on GSU

On Friday the publishers who are suing Georgia State University for allegedly infringing copyright by scanning short excerpts from academic books to provide students with access through electronic reserves and learning management systems filed a petition for a rehearing by the entire Eleventh Circuit Court of Appeals.  As most will recall, the panel of the Eleventh Circuit essentially did what the publishers wanted — reversal of the lower court judgment — but the appeals panel denied those plaintiffs most of the principles by which they hope to radically reshape copyright law.  The publishers clearly understand that, whatever they can gain from additional lower court proceedings on remand, they will not get what they wanted when they brought the lawsuit.  The panel ruled that the first fair use factor favors an educational, non-profit use even if the use is not transformative, that an item-by-item analysis is appropriate, and that it matters in the fair use analysis whether or not a license for digital excerpts is available.  The publishers have decided they cannot live with these conclusions, so they have asked that those specific issues be reconsidered by the entire Eleventh Circuit court.  Their “petition for en banc rehearing” lays out their arguments.

GSU also has filed a petition for rehearing.  They are seeking some corrections to inaccurate statements about what list of alleged infringements was considered by the lower court, as well as a ruling that the risk of market harm from electronic reserves is a question of fact that the lower court should be instructed to consider.  That risk, GSU argues, should be proved; it is not something the appeals panel should have presumed.

It is important to understand that there is little chance that these petitions will be granted.  When a case is appealed from the lower court to a Circuit Court of Appeals, we call that an “appeal as of right.”  That is, that first appeal must be heard by an appellate panel.  But thereafter, all subsequent appeals are discretionary; the court does not have to take the case, and it has the option to deny the petition.  Most people are familiar with the idea that the Supreme Court actually reviews only a tiny percentage of the cases for which it receives a petition for a hearing.  Besides asking for Supreme Court review, the other option, after losing (or feeling like you lost) an appeal in front of a Circuit panel, is to ask that the entire group of judges in that Circuit reconsider the case.  Like Supreme Court petitions, these petitions for en banc rehearing are rarely granted.  In fact, the Federal Rules of Appellate Procedure explicitly say that petitions for rehearing en banc are “not favored and ordinarily will not be granted”  (FRAP 35(a)).  For more information about these post-appeal options, interested readers should see this article from the law firm Reed Smith.

So does the petition from the publishers stand a chance?  There are two reasons a petition for rehearing might be granted: when there is a split within the courts of the Circuit or when a “question of exceptional importance” is involved.  In their petition, the publishers rely on the latter claim, but it is not very convincing.  They try to drum up controversy by suggesting that the panel ruling contradicts some Supreme Court precedents, but, again, the effort is weak.  The petition relies on the 1985 decision in Harper & Row v. Nation Enterprises, which the Supremes themselves have seriously modified in later rulings.  So when the publishers object that the panel ignored Harper’s emphasis on the importance of the fourth factor, they are deliberately ignoring language from the later Campbell v. Acuff-Rose case.

The other source that the publisher petition puts a lot of weight on is the “special concurrence” by Judge Roger Vinson.  Essentially, Judge Vinson dissented on every major point in the majority opinion, but concurred in the result.  Taken together, the two opinions indicate that a lot of negotiation took place in the 11 months it took to produce the ruling.  It suggests, in fact, that the other two panel judges — Tjoflat and Marcus — were actually more sympathetic to fair use than is expressed in the majority opinion.  But what is important about the heavy reliance on Judge Vinson in the petition for rehearing is the fact that Judge Vinson is not a regular member of the Eleventh Circuit.  He is a senior judge at the District Court level (in Florida) who was on the Appeals Court panel to fill a vacant seat (called “sitting by designation”).  That means that he presumably will have no role in deciding whether or not to grant the petition, or in any actual rehearing, in the unlikely event the petition is granted.  So the publishers have found a friend in Judge Vinson, but he is not a friend who can help them all that much.

This petition for rehearing is thus a long shot, and it reveals the stark opposition of these three publishers to fair use as it has traditionally been interpreted throughout the long history of U.S. copyright law.  Let’s look at the three principles the publishers say that they want and that the appeals panel got wrong.

The first point from the panel decision that the publishers say is wrong involves the idea of “media-neutrality.”  This is a huge red herring that the publishers have been waving around to distract the various courts from the weakness of their case, and they lead off with it in the rehearing petition. Judge Vinson was convinced by this argument that if courts do not treat electronic reserves the same way print course packs were treated in the “copyshop” cases from the 1990s, they are violating a principle of media-neutrality.  The majority opinion, on the other hand, tried to define the limited role that media neutrality has in copyright law, a definition the new petition claims was an error.  There are a couple of important points that are getting overlooked in this discussion.

For one thing, there are many ways in which copyright is not media neutral.  Many exceptions refer to specific media and specific technologies; there is a provision, for example, that deals only with royalties on digital audio recording devices.  The TEACH Act refers to transmission over a digital network, and is inapplicable to other types of distance learning.  Broadcast television is treated differently than cable, and terrestrial radio differently than Internet radio.  Since the law is so often media-specific, it was not irrational for the panel majority to try to define what media neutrality does, and does not, mean.  The publishers want it to mean something very specific in order to benefit their case, but the panel looked at a principle-based definition that took account of how the copyright law as a whole really works, and rejected the publishers’ ad hoc claim.

The reason for pushing this broad and self-serving definition of media-neutrality, of course, is to convince courts that the “course pack” cases are good analogies for electronic reserves.  Since those cases found against fair use, the publishers’ argument goes, the principle of media neutrality demands that fair use also be rejected for electronic reserves.  But, in fact, neither the lower court nor the appellate panel set the course pack cases aside because of a perceived difference between electronic and print fair use.  This is just sand being thrown in the face of the courts to confuse them (it worked with Judge Vinson).  The course pack cases are distinguishable instead on first factor grounds that have nothing to do with the media involved; those cases involved a commercial intermediary making and selling the course packs, which is an entirely different situation from the one at issue in the GSU case.

The second claim the publishers make in their petition attempts to undermine the first fair use factor more directly by asserting that it should not favor non-profit educational uses unless they are transformative.  Although the publishers assert that this is the meaning of the Supreme Court’s ruling in Campbell v. Acuff-Rose Music, that is simply not true.  Although that case laid great weight on transformation for many fair use decisions, it explicitly stated that not all fair use must be transformative, and cited “multiple copies for classroom use” as the paradigmatic case of a non-transformative use that is likely still fair.  The Court took that phrase directly from section 107, of course.  So the publishers are asking for a pretty radical reconfiguration of the copyright law here, one that would directly defy the Supreme Court and the text of the law.  It would be pretty audacious of the Eleventh Circuit to accept this argument, but the publishers are clearly going all in with their fight against fair use.  It seems they are reasoning that if they can persuade the Eleventh Circuit to accept this radical new view of copyright, they could at least get a shot at Supreme Court review by provoking a split in the circuits where none has previously existed.

Finally, the most troubling claim the publishers make is in their argument that the fourth fair use factor’s emphasis on market harm, including “potential” markets, gives them the right to decline to offer a license for digital excerpts without tipping the fourth factor toward favoring fair use.  The appellate panel correctly noted that this argument would demolish fair use, since it would allow a rights holder to say “we could have licensed this use if we wanted to, so allowing fair use damages the potential market we have chosen not to enter.”

In one sense, I would like to see a discussion of this idea of potential markets.  It should be seen as a gateway to consider the incentive purpose of copyright law.  How would it create additional incentive for creation to permit publishers to refuse to license uses of academic works?  These markets are not an end in themselves, but a vehicle to produce such incentives.  Establishing a right to refuse to license does not serve this purpose at all.  It is a selfish and antisocial argument put forward by the publishers to protect the artificial scarcity that they believe they must create in order to make money.  In short, the publishers want the right to limit access to knowledge because they do not have the vision needed to run successful businesses in a changing environment.

What do we lose if that argument is accepted?  Only our most cherished democratic value, the freedom of expression.  Fair use has always been considered a “safety valve” for free expression that prevents a rights holder from suppressing speech he or she doesn’t like by asserting copyright.  If we were to accept this potential market argument, a rights holder would be a step closer to preventing scholarly commentary by denying a license for the quotations used in, for example, a review (as Stephen James Joyce famously tried to do regarding his grandfather’s work).  That might seem extremely unlikely on a larger scale, but we should remember that publishers often require their authors to obtain permission for the use of quotations beyond an artificially imposed word limit.  If that practice were combined with the idea that denying a license should not improve the fair use argument, the conditions for such suppression would be ideal.

The truly shocking thing about this petition is how openly Oxford University Press, Cambridge University Press and Sage Publishing are now attacking free speech and the dissemination of knowledge.  These are not “academic” presses anymore; their profit motive and short-sighted focus on protecting old business models have led them to assume an anti-academic stance that the scholarly community should not tolerate.  They are demanding nothing less than a right to suppress and inhibit the spread of knowledge, simply by refusing to offer a license, whenever they believe that doing so is to their commercial advantage.  I have often been asked if I think scholars, libraries, and others should boycott these publishers because of the lawsuit, and I have always said that we should wait and see where the case goes.  To me, it has now gone in an intolerable direction, one that threatens core principles of academic discourse.  Everyone must make their own decision, of course, but I am now willing to say that I will neither publish with these three plaintiff publishers nor buy their products.  They have declared war on teaching and the dissemination of scholarship, and I will not help them buy the ammunition.

Free speech, fair use, and affirmative defenses

On an e-mail list to which I do not subscribe, there was recently a long exchange about fair use and large-scale digitization.  Part of the exchange was forwarded to me by a friend seeking comment about a specific issue that was raised, but in the course of looking back at the thread I discovered this comment:

Fair use doesn’t “allow” large scale digitization and didn’t “allow” digitization in the case of HathiTrust. The fair use provision does not allow anything up front- it has to be won through litigation. The fair use provision was used as an affirmative defense in litigation concerning the HathiTrust et al., and after much time and money spent in litigation, the court ruled, and the appeals court ruled, that HathiTrusts’s activity could be considered fair.

This comment repeats a mistake that is very common in discussions of fair use — while noting correctly that fair use is an “affirmative defense,” it concludes from that fact that fair use must be something unusual, a privilege that we rely on rarely because it is risky and difficult to prove.  But, as I hope to show with the rest of this post, affirmative defenses are quite common; in fact, almost all positive rights have to be treated as affirmative defenses in litigation.  We rely on things that are “allowed” by affirmative defenses all the time.

Basically, to call something an affirmative defense is to make a technical point about how it functions in a court case.  We should not be frightened by the phrase or invest it with too much significance.  Some of our most cherished rights would have to be called affirmative defenses in the technical sense that is the only proper usage of that phrase.

Consider the case of Cohen v. California (1971), one of our most important cases about the meaning of the First Amendment.  Mr. Cohen entered the Los Angeles County courthouse wearing a piece of clothing on which he had written a profane anti-war message — “F**K the Draft” — and was arrested for disturbing the peace because that message was considered “offensive conduct.”  The Supreme Court ultimately held that it was protected speech, in spite of the profanity, and that Cohen’s arrest was therefore improper. But let’s imagine for a moment how the trial over this issue must have proceeded.  The state would provide evidence that Cohen did wear the jacket, had deliberately painted the words on it, and knew what was written on his jacket when he entered the courthouse.  Cohen’s defense would then be to admit all of those points, but raise an additional fact — his words were political speech protected by the First Amendment to the U.S. Constitution.  Free speech would thus function as an affirmative defense to vindicate Mr. Cohen’s right.

I hope this example illustrates two things.  First, all an affirmative defense means is that the defendant must raise additional facts or legal principles in addition to what the plaintiff or prosecution has asserted.  This is not uncommon; anytime a defendant does more than simply deny the truth of everything the plaintiff says, they are raising an affirmative defense.  Second, all of our most cherished rights in America can function as affirmative defenses in court, but that does not mean they are unusual or unreliable.

In any court case, the plaintiff has to prove some facts in order to establish that a “cause of action” exists.  A defendant then has two avenues — he can simply deny the truth of some or all of what the plaintiff has said (we call that arguing that the plaintiff failed to meet her “burden of proof”) or he can produce additional facts that show that what he is accused of doing is actually permissible (which is the defendant’s “burden of proof”).  If we take the Georgia State copyright case as an example, we can see both strategies at work.  For over 20 of the challenged excerpts, GSU successfully argued that the publishers had not met their burden of proof by showing that they owned a valid copyright in the works in question.  Since the publishers could not produce valid transfers of copyright, there was no further need for a defense.  For 40+ other excerpts, however, GSU successfully argued some additional facts and showed that their use was fair use (although the Appeals Court has now told the trial court to reanalyze this).  Just like Mr. Cohen in the free speech case, GSU invoked a positive right that is precious to all Americans, but in the context of the lawsuit that right was presented as an affirmative defense.

I can’t say it often enough — when one is sued for doing something one believes is actually allowed by the law, that “right,” whether it is free speech or fair use, is always presented in the form of an affirmative defense.  All that means is that it is something which the defendant must raise to justify herself (something for which she bears the burden of proof), but these things are not rare, disreputable or frightening; they are the very rights that define our citizenship.

Fair use is one such right, and the copyright law very clearly calls it a right (in section 108(f)(4) of Title 17).  It is a key and indispensable component of our system of copyright, as the Supreme Court has reminded us many times (e.g., Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 575 (1994)).  It is, especially, a “safety valve” that protects free speech from encroachment by copyright holders, and it is useful to think of those two rights — free speech and fair use — together.

So it is simply wrong to say that fair use does not “allow” anything because it is an affirmative defense, just as it would be wrong to say that about free speech.  The First Amendment allows me to have campaign signs on my lawn during this election, even if my neighbors disagree strongly with me.  It allows me to carry a placard down a public sidewalk proclaiming that “The End is Near,” if I am so inclined.  It would allow me even to wear a swastika tattoo, as offensive as that would be to many.  In the same way, there are many activities that we can say with assurance are allowed by the right of fair use.  When we use a quotation from a previous work in a new article we are writing, we do not stop to do an individualized analysis because we know, and pretty much everyone agrees, that this is a settled instance of fair use.  Nor do we need to re-litigate the Sony v. Universal City Studios case every time we want to record a TV show to watch at a later time; the Supreme Court has confirmed for us that doing this is allowed by fair use.  And in the HathiTrust case, the Second Circuit told us that fair use supports large-scale digitization for the purpose of indexing and access for persons with disabilities.  It is possible that a rights holder could challenge such an activity again, just as some government entity could again try to outlaw profanity in political speech.  Possible, but unwise and very unlikely.

When we say something is an affirmative defense, all we are doing is indicating how it would be raised in litigation.  Many of our most cherished freedoms would be raised as affirmative defenses.  So we must resist the urge to allow ourselves to be frightened by that phrase or to accept arguments intended to make fair use seem odd, unusual, or risky.  Fair use is no more unusual or dangerous than free speech is.

Swimming in muddy waters

Since the ruling from the Eleventh Circuit Court of Appeals in the Georgia State copyright case came out two weeks ago, most commentators have come to the same conclusions.  It is a mostly negative ruling, in which publishers actually lost a lot of what they were fighting for.  Georgia State also lost, in the sense that the case is not over for them and they are no longer assured of being reimbursed for their attorney’s fees and court costs, as Judge Evans had originally ordered.  But apart from those parties to the case, has the library community lost by this decision, or gained?  Once again, what we have gained is mostly negative — we know that we do not have to strictly observe the 1976 Classroom Copying Guidelines, we know that the cases involving commercially-produced course packs do not dictate the fair use result for e-reserves, we know that 10% or one chapter is not a bright line rule.  But there is little benefit in knowing how NOT to make fair use decisions; it is easy to see why one commentator has pled for bright line rules.

One affirmative point we can take from the case is that we know we still can, and must, do item-by-item analyses to make fair use decisions.  But what exactly should the process for those decisions look like?

As Brandon Butler from American University has pointed out, the decision-making processes will be different when we are assessing uses for teaching that are transformative versus those, like e-reserves, that are not.  We still have a good deal of freedom when the use is transformative — when the original material becomes part of a new expression, a new meaning, or a new purpose.  And this is important for a great deal of scholarship and teaching.  We should not lose sight of this important application of fair use, or assume, incorrectly, that the 11th Circuit ruling creates new limits on transformative fair use.

But when we must make decisions about digital course readings, we need to apply the “old-fashioned” four factor test.  What does it look like after the Appeals Court ruling?  I am afraid it has gotten pretty muddy:

The first fair use factor — the purpose and character of the use — continues to favor fair use whenever that use is undertaken by a non-profit educational institution.  If a commercial intermediary is involved, as was the situation in the course pack cases, this is no longer true.  But where there is no profit being made and the user is the educational institution itself, the first factor supports a claim of fair use.  And that is where the clarity ends.

The second factor — the nature of the original — can go either way, depending on the specifics of the work involved.  Is it more factual or interpretive?  This is a judgment call, and one which librarians may be hard-pressed to make when processing a number of e-reserve requests in a discipline they are unfamiliar with.  The good news is that the Court said that this factor is relatively unimportant, so the safest course may be to consider this factor neutral; call it a draw — at least where the item is not clearly creative — and move on.

On the third factor we thought we had a rule, even if many of us didn’t like it — 10% or one chapter was the amount that Judge Evans said was “decidedly small” and therefore OK for fair use for digital course readings.  The bad news is that we no longer have that rule.  The good news is that we no longer have that rule.  The 11th Circuit panel wanted a more nuanced approach, one that balances amount with the other factors and especially looks at how appropriate the amount used is in relation to the educational purpose.  When the other factors line up in favor of fair use, this approach could well allow more than 10%.  If the other factors tend to disfavor fair use, only a much shorter portion might be permissible.  It is just very hard, after this ruling, to have clear standards about the amount that is usable, and that makes things difficult for day-to-day decisions.

With the fourth factor — impact on the market for the original — the 11th Circuit made things even more unclear.  The panel actually affirmed the lower court in its analysis of this factor, emphasizing that it is permissible to take into account the availability of a license for the specific use as part of evaluating this factor.  So if a license for a digital excerpt is unavailable, does that mean this factor favors fair use, as Judge Evans said?  Maybe, but the 11th Circuit added two complications.  First, it said that the Judge should have included the importance of license income to the value of the work in her fourth factor reasoning, rather than treating it as an additional consideration for breaking “ties.”  Second, they said that the fourth factor should have more weight in non-transformational settings.  How are we to put these instructions into practice?  Libraries do not have access to publishers’ accounts, as the judge did, so we cannot assess the importance of licensing income (nor can we trust publishers to give us straight answers about that importance).  And what does more weight mean?  If there is no digital license available, does more weight on this factor mean more room for fair use, perhaps of a larger excerpt?  Again, maybe.  But it also seems to mean that where such a license is available, even 10% or one chapter might be too much for fair use.

How does one swim in water that is this muddy?  The answer, of course, is very carefully.  We must keep on making those decisions, and we do have space to do so.  The fair use checklist, by the way, received a relatively sympathetic description from the 11th Circuit, but not a definite embrace.  At this point, my best advice is to keep on doing what we have been doing, thinking carefully about each situation and making a responsible decision.  I would recommend a somewhat more conservative approach, perhaps, than I might have done three weeks ago, especially when a license for a digital excerpt is available.  But the bottom line is that the situation is not much different than we have always known it to be, there is just a little more mud in the water.

GSU appeal ruling — the more I read, the better it seems

Those of us who heard the oral arguments in the Eleventh Circuit Court of Appeals last November, in which the publishers’ appeal of the District Court ruling favoring fair use in their copyright infringement lawsuit against Georgia State was heard, mostly expected a discouraging result from the appellate panel. An initial or cursory reading of the opinion issued by the panel of the 11th Circuit on Friday might even confirm those fears.  After all, the fairly positive ruling from the District Court is reversed, the injunction and the order for the publishers to pay GSU’s costs and attorney’s fees are both vacated, and the whole case is remanded back to the District Court to reconsider.  But once one begins to read carefully, the panel opinion gets much better.  All of the big points that the publishers were pushing, which consequently are the really bad potential rulings for higher education, go against the publishers.  In many ways, they won a reversal but lost the possibility of achieving any of their most desired outcomes.  Higher ed didn’t exactly win on Friday, but the plaintiff publishers lost a lot.

A thorough and smart analysis of the ruling from Nancy Sims of the University of Minnesota can be found here.

For those of us struggling to make responsible fair use decisions on a day-to-day basis, this Appeals Court ruling doesn’t actually change much.  The message for us is that it could have been much worse, the case is far from over, and we must just keep on making the same kind of reasoned and reasonable fair use decisions we have been making for years.

What we got from the three-judge panel of the Eleventh Circuit on Friday is a mostly negative ruling that outlines where both Judge Evans of the District Court and the plaintiff publishers are wrong, as the panel thinks, about fair use.  To be exact, two judges — Tjoflat and Marcus — tell us those things.  The third judge, Vinson, concurs in the result — the reversal and remand of the case — but would have accepted virtually all of the publishers’ arguments and closed the door on fair use for even very small classroom readings.  His concurrence suggests that getting an opinion together was probably a difficult process involving a lot of compromise (it took eleven months), and it also tells us how bad the opinion could have been for universities.  Instead, what the panel majority issued is mostly bad for the publishers.

In my opinion, there were five major principles that publishers wanted to get from this lawsuit, and in the Appeals Court ruling they lost on all of them:

  1. Publishers wanted the Appeals Court to hold that Judge Evans should have ruled based on the big picture — the large number of electronic reserve items made available to students without permission — rather than doing an item-by-item analysis for each reading.  Instead, the Appeals Court affirmed that the item-by-item approach was the correct form the analysis should take.  This, of course, is the key that allows universities to make individualized fair use decisions, and it rejects the attempt to force all schools to purchase a blanket license from the Copyright Clearance Center (which was, in my opinion, the fundamental goal for which the case was filed in the first place).
  2. The plaintiffs wanted a ruling that non-profit educational use did not mean that the first fair use factor always favored fair use.  They wanted the Appeals Court to hold that where the copying is non-transformative, and both Judge Evans and the Appeals Court felt that the copying at issue was non-transformative, the first fair use factor does not favor the defendants, even when they are non-profit educational institutions.  But the Court of Appeals correctly applied Supreme Court precedent and held that the first fair use factor still favors fair use for such “verbatim” copying when it is done for an educational purpose without profit.
  3. The Appeals Court held that the so-called “course pack” cases, which rejected fair use for course packs made for a fee by commercial copy shops, were not controlling precedent in the situation before it, where GSU was doing the copying itself and made no profit from it.
  4. The publishers wanted a clear statement that the Classroom Copying Guidelines were a limit on fair use for multiple copies made for classroom use, defining a maximum amount for such copying of 1000 words.  They lost there too; the panel held that the Guidelines were intended as a minimum safe harbor and did not define a limit on fair use.  Therefore they do not control the decision for this type of copying.  Instead, the panel rejected the 10% or one chapter rule applied by Judge Evans as too rigid and instructed her to use a more flexible approach that takes into account the amount appropriate for the pedagogical purpose.
  5. Finally, the publishers were hoping that the Appeals Court would reject the idea that the availability of a license for a digital excerpt was relevant to the fourth fair use factor; they wanted a rule that says that any unlicensed use is an economic loss for them, even if they have decided not to make the desired license available.  They lost that too; the panel affirmed that the District Court was correct to consider the availability of a license for the specific use when evaluating market harm.

These losses, which constitute the heart of what the publishers were hoping to achieve when they brought the lawsuit, are probably final.  They are now binding precedent in the 11th Circuit, and persuasive throughout the country.  The publishers could presumably appeal to the Supreme Court, but it seems unlikely the current Court would take the case because there is no split amongst the Circuit Courts, only a growing consensus about fair use.

So if the publishers lost on everything that really mattered to them, why was the case reversed?

First, as I have said, this is a big loss for the publisher plaintiffs, but it is not a win for GSU.  With the reversal of the District Court ruling and the injunction and fee award vacated, their copyright policy is again up in the air.  And, of course, they have lost the immediate prospect of collecting about $3.5 million in costs.  But for now, they, like all libraries, should probably just carry on with their normal practices and wait to see what happens on remand.

When (if?) the case gets back to Judge Evans, who I very much doubt wanted it back, the fair use analysis will look somewhat different. The Appeals panel found specific errors in her analysis of the second and third fair use factors. On the second factor they have told her that she cannot presume that the works in question are all “informational.” She has been told to do a work-by-work evaluation, but also told that this factor is not very important.  On the third factor, the amount used, the panel said that her bright-line rule of no more than 10% or one chapter was too rigid (as would have been the much lower bright-line rule the publishers wanted).  Here too, the Appeals Court wants a more nuanced and fact-specific analysis, looking at both quantity and quality (the heart of the work).  Significantly, they have told Judge Evans to look at the pedagogical appropriateness of the excerpt when determining how the amount factors into a fair use analysis.  Since this corresponds with what many of us tell campus faculty — use only what you really need and no more —  it is nice that the panel approved.  Finally, Judge Evans has been told to give the fourth factor — market harm — more weight, rather than counting all the factors equally.  This would probably, but not certainly, result in fewer findings of fair use.  Instead of the split we got — 43 fair uses versus 5 infringements (plus 26 for which there was no prima facie showing of infringement) — there would probably be a different division.  Maybe 30 excerpts would be fair use and 18 infringing; who knows?  But I think we should consider whether or not getting to that point really benefits anyone.

Which brings me to considering what the various players should do now, in light of this ruling.

The publishers, as I say, have pretty much lost even as it looks like they were winning.  There is no good that can come out of a remand for them.  At best they will get that different division between fair use and infringement as a result, and will be able to use it to spread a little more fear amongst academic libraries about the uncertainty of fair use.  But that is not their real goal, I hope.  They were hoping to radically change the landscape, and they have failed spectacularly.  If there is any common sense left in their board rooms and executive suites, they need to consider settling with Georgia State and then engaging in real, good-faith negotiations with higher education and library groups.  Don’t open those negotiations with threats, as you did before.  We now know how toothless your threats are.  But it is still the case that libraries and faculties would like some standards they can follow that are realistic in light of what the courts have told us.  There is no windfall for publishers in such negotiations, but there might be some stability, not to mention the savings they will realize if they stop wasting money on foolish and unavailing litigation.

As for academic libraries, this long, drawn-out case, although not over yet, appears to have been much ado about nothing.  We still should be making careful, responsible and good-faith decisions about copyright and fair use, just as we have done for years.  We need to educate ourselves, look at the array of precedents we have from the federal courts, and continue to do our best.  There has been no revolution, and no dramatic alteration of the conditions under which we do our work.  The bottom line is that, after this ruling, libraries should just keep calm and carry on.

A reversal for Georgia State

The Eleventh Circuit Court of Appeals has issued its ruling in the publisher appeal of a district court decision that found most instances of electronic reserve copying at Georgia State to be fair use.  The appellate court ruling is 129 pages long, and I will have much more to say after I read it carefully.  But the hot news right now is that the Court of Appeals has reversed the District Court’s judgment and remanded the case back for proceedings consistent with the new opinion.  The injunction issued by the District Court and the order awarding costs and attorney’s fees to GSU have been vacated.

Looking at its analysis of the four fair use factors suggests that applying the Court of Appeals’ ruling will be challenging.  The panel has held that Judge Evans of the District Court was correct to find that the first factor favored fair use, even though, both courts say, the use is not transformative.  The non-profit educational character of the use seems to carry the day on that factor.  On the other hand, the Appeals Court finds error in the District Court’s sweeping finding that the second factor favored fair use.  The panel also disagrees with the 10% or 1 chapter standard used by Judge Evans to decide about the third fair use factor, the amount used; they object to any mechanical standard and want a more nuanced, work-by-work analysis.  The Court of Appeals also agrees with the District Court about the fourth factor — largely favoring plaintiffs when a digital license is available.  But the 11th Circuit wants the factors to be balanced with a different touch, not treated as all equal.

What is interesting here is that it looks like a considerable victory for the publishers, but there is still a lot of work to do.  Judge Evans will need to do her analysis over again, and the results will be different.  But given the way the Appeals Court agrees with some parts of her initial analysis and corrects her on others, it is hard to predict, on first glance, what the final result will be.

A copy of the opinion can be found here.  I am sure there will be lots more written about it, including by me, in the days to come.

Jury instructions go missing

Jury instructions are one of those things that few people, including most lawyers, think about very often.  But if you are involved in a trial, they can be vitally important.  The ways in which juries are instructed on particular points of law can determine the outcome of a case, so litigants and their lawyers must spend a lot of time arguing over which instructions should be given and how they should be worded.  Many appeals revolve around the accuracy and appropriateness of the instructions that were given to a jury.

Because they are so important and potentially so contentious, most states develop model jury instructions, or “pattern” instructions, as they are called in North Carolina, for a great many legal doctrines and situations.  Access to these pattern instructions is important for litigants.  Although their use is not required in North Carolina, using a pattern instruction creates a presumption that the jury was properly instructed.  Also, specific variations that a litigant wants a judge to make in an instruction are best proposed as amendments to the pattern.

I admit that I have never consulted the North Carolina Pattern Jury Instructions.  Nevertheless, I was dismayed to receive this announcement from the North Carolina Bar Association telling me that the free access I had to the Pattern Instructions as a member of the Bar would soon be ending. The story it tells is interesting and instructive.

First, the Pattern Jury Instructions are developed by the North Carolina Conference of Superior Court Judges.  That is, they are created by state officials as part of their official duties.  But unlike documents created by federal employees, these instructions are protected by copyright, and that copyright has been given to the University of North Carolina School of Government.  The SOG, in turn, has given a vendor called CX Corp an exclusive license to distribute the instructions.  For over a decade the Bar Association contracted with CX to provide access as a member benefit to NC lawyers, but that contract is expiring and efforts to renew it have failed.  A familiar story to librarians, of course, and probably due to the same familiar reason that most licensing renewals fail — a demand for more money than the licensee feels it can pay.

For many North Carolina lawyers, the impact of this new regime will be minimal.  Medium and large firms will just factor in the cost and effort of getting an individual subscription to the Pattern Jury Instructions.  Presumably the UNC School of Government will realize more revenue from this gifted copyright.  But I worry about pro se litigants and those who are represented by solo practitioners or small firms.  Those folks might well be at a disadvantage in a crucial phase of a trial, the debate about how the jury will be instructed.  They will either have to scrape up the money to purchase a print or electronic version of the pattern instructions, find one of the relatively few public law libraries where they can consult them, or else risk advocating for instructions that will be more easily challenged and undermined.

From this unfortunate situation, I want to draw two points that I think are of general relevance to those of us who think about copyright matters.

First, it is an important reminder that we cannot assume that public documents intended for a public purpose are necessarily public domain.  The familiar provision in U.S. copyright law that puts documents created by the federal government in the public domain is, obviously, only about federal documents.  The states can and sometimes do claim copyright in official state documents.  They are often used as revenue sources, especially if there is a target audience of professionals, like building contractors or lawyers, from whom payment can be readily expected.

The problem with this situation is that the different states take different approaches to what is and is not in the public domain, and also that a single state may be wildly inconsistent about its approach to different types of documents it creates.  In most states, most official documents are public domain, at least in the practical sense that they are reasonably accessible without cost.  But odd things, like building codes or jury instructions or even an electronic database of the state’s laws, may sit exclusively behind a paywall.

The other instructive point in this is the realization that copyrights, especially those held by state and local government, may be used to enact policy goals that have nothing to do with copyright.  The purpose of copyright is, explicitly, to incentivize creation.  Presumably the NC Conference of Superior Court Judges did not need the promise of royalties in order to compile the Pattern Jury Instructions; creating those instructions was simply a part of good judicial practice for the state.  So here we had a copyright that wasn’t doing any work.  What can we do with it; how can we use it to make some money?  Here’s an idea, let’s give it to the UNC School of Government as an added revenue source!  They can exploit it in a way that would be inappropriate for the judges, and the state’s flagship public university can reap the benefit.

This isn’t necessarily a bad idea; at least, the purpose is worthwhile.  But it is a public policy that copyright was not intended to serve, and it is worth noting that this can happen, and that not all the policies that are supported this way will be equally laudable.  In the ideal world, states would be more transparent about what they claim a copyright in and why.  And elected representatives should be given the chance to approve, or not, those policy ends that are furthered by the exploitation of copyrights claimed in state documents.  At least that way, there would be some accountability when copyright is used for purposes other than that for which it was instituted.

While I was preparing this post, I encountered a somewhat parallel story about another legal document — the Bluebook that is the required citation manual for law students, lawyers, and litigants in many U.S. courts.  As the blog TechDirt reports, this is another case where access for impoverished litigants may be important, but copyright protection allows access restrictions that impose financial barriers.  Of course, unlike the Pattern Jury Instructions, the Bluebook is a privately created document, so there is less confusion about the appropriateness of the initial copyright.  Nevertheless, Carl Malamud and his Public Resource allies have mounted a campaign asking that the Bluebook be made more accessible and, as it turns out, finding grounds to challenge the continued existence of a copyright in the work.  Fascinating reading.


Are fair use and open access incompatible?

There has been a spirited discussion on a list to which I subscribe about the plight of this graduate student who is trying to publish an article that critiques a previously published work.  I’ll go into details below, but I want to start by noting that during that discussion, my colleague Laura Quilter from the University of Massachusetts, Amherst captured the nub of the problem with this phrase: “the incompatibility of fair use with the policies of open content publishers.”  Laura’s phrase is carefully worded; the problem we need to unpack here is about the policies of open content publishers, and the solution is to help them understand that fair use and open licensing are NOT incompatible.

Briefly, the situation is this.  An author has written a paper that critiques previous work, specifically about the existence, or not, of “striped nanoparticles.”  In order to assess and refute evidence cited in some earlier papers, the author wants to reproduce some figures from those earlier publications and compare them to imagery from his own research.  He has encountered two obstacles that we should consider.  First, his article was rejected by some traditional publications because it was not groundbreaking; it merely reinterpreted and critiqued previously published evidence.  Then, when it was accepted by PLoS One, he encountered a copyright difficulty.  PLoS requires permission for all material not created by the author(s) of papers they publish.  One of the publishers of those previous papers — Wiley — was willing to give permission for reuse but not for publication under the Creative Commons Attribution (CC BY) license that PLoS One uses.  Wiley apparently told the author that “We are happy to grant permission to reproduce our content in your article, but are unable to change its copyright status.”

It is easy to see the problem that PLoS faces here.  Once the article is published under a CC license, it seems that there is little control over downstream uses.  Even if the initial use of the Wiley content is fair use — and of course it probably is — how can we ensure that all the downstream uses are fair use, especially since the license permits more types of reuse than fair use does?  Isn’t this why fair use and open licensing are incompatible?

But this may be an overly simplistic view of the situation.  Indeed, I think this researcher is caught up in a net of simplified views of copyright and scholarly publication that creates an untenable and unnecessary dilemma.  If we start by looking at where each player in this controversy has gone wrong, we may get to a potential solution.

Let’s start with Wiley.  Are they in the wrong here in any way?  I think they are.  It is nice that they are willing to grant permission in a general way, but they are probably wrong, or disingenuous, to say that they are “unable” to change the copyright status of the material.  Under normal agreements, Wiley now owns the copyright in the previously published figures, so they are perfectly able to permit their incorporation into a CC licensed article.  They can “change the copyright status” (if that is really what is involved) if they want to; they simply do not want to.  The author believes this is a deliberate move to stifle his criticism, although it is equally possible that it is just normal publishing myopia about copyright.

There is also some blame here for the system of scholarly publishing.  The roadblock this author hit with traditional publishers — that they do not want articles that are “derivative” of prior work — is one most scientists have encountered.  In order to generate high impact factors, journals want new, exciting and sexy discoveries, not ongoing discussions that pick apart and evaluate previously announced discoveries.  We have found striped nanoparticles!  Don’t dispute the discovery, just move on to the next big announcement.

This attitude, of course, is antithetical to how science works.  All knowledge, in fact, is incremental, building on what has gone before and subject to correction, addition and even rejection by later research.  The standard of review applied by the big and famous scientific journals, which is based on commercial rather than scholarly needs, actually cuts against the progress of science.  On the other hand, the review standard applied by PLoS One — which is focused on scientific validity rather than making a big splash, and under which the article in question was apparently accepted — better serves the scientific enterprise.

But this does not let PLoS off the hook in this particular situation.  It is their policies, which draw a too-sharp line between copyright protection and open content, that have created a problem that need not exist.

First, we should recognize that the use the author wants to make of previously published figures is almost certainly fair use.  He is drawing small excerpts from several published articles in order to compare and critique as part of his own scholarly argument.  This is what fair use exists to allow.  It is nice that Wiley and others will grant permission for the use, but their OK is not needed here.

Second, the claim that you cannot include material used under fair use in a CC-licensed article is bogus.  In fact, it happens all the time.  I simply do not believe that no one who publishes in PLoS journals ever quotes from the text of a prior publication; the ubiquitous academic quotation, of course, is the most common form of fair use, and I am sure PLoS publishes CC-licensed articles that rely on that form of fair use every day.  The irony here is that PLoS is applying a standard to imagery that it clearly does not apply to text.  But that differential treatment is not called for by the law or by the CC licenses; fair use is equally possible for figures, illustrations and text from prior work, and the CC licenses do not exclude reliance on such fair uses.

Next, we can look at the CC licenses themselves to see how downstream uses can be handled.  If we read the text of the Creative Commons license “deed” carefully, we find these lines:

Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright.

Obviously, the CC licenses themselves expect that not everything that is part of a licensed work will be equally subject to the license; they realize that authors will — indeed must — rely on fair use as one of those exceptions and limitations to copyright.  How should licensors mark such material?  The most usual way is a footnote, of course.  But a caption to the figure that indicates the source of the different pieces and even says that copyrights may be held by the respective publishers would work as well.
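To make this concrete, a marking along the following lines would satisfy the deed’s instruction (the wording here is purely hypothetical, not language prescribed by Creative Commons, PLoS or any publisher): “Panels b and c of this figure are reproduced from [original article and publisher]; they are included under fair use, or by permission, and are not covered by the CC BY license that applies to the remainder of this article.”  A footnote carrying the same information would work just as well.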

Finally, let’s acknowledge that there is nothing new or unusual in the procedure recommended above. Traditional publishers have done things this way for years.  When Wiley publishes an article or a textbook that asserts that they, Wiley, own the copyright, they are not asserting that they own copyright over the text of every quotation or the images used by permission as illustrations.  Such incorporated material remains in the hands of the original rights holder, even after it is included in the new work under fair use or a grant of permission.  The copyright in the new work applies to what is new, and downstream users are expected to understand this.  Likewise, the partial waiver of copyright accomplished by a CC license applies to what is new in the licensed work, not to material that is legally drawn from earlier works.

So I think there is a way forward here, which is for PLoS to agree to publish the article with all of the borrowings under fair use or by permission clearly marked, just as they would do if those borrowings were all in the form of textual quotations.  And I think we can learn two lessons from this situation:

  1. The standard of review applied by open content publishers is more supportive of the true values of science than that used by traditional publishers.  Overreliance on the impact factor hurts scholarship in many ways, but one of them is that it pushes publishers to focus on the next big thing instead of the ongoing scientific conversation that is the core of scholarship.  The movement toward open access has given us a chance to reverse that unfortunate emphasis.
  2. Open content licenses should not be seen as all-or-nothing affairs, which must either apply to every word and image in a work or not be used at all.  To take this stance is to introduce rigidity that has never been a part of our copyright system or of traditional publishing.  It would be a shame if excessive enthusiasm for openness were allowed to actually undermine the value of research by making the scientific conversation, with all its reliance on what has gone before, more difficult.

How useful is the EU’s gift to libraries?

On Thursday the European Union’s Court of Justice issued an opinion that allows libraries to digitize books in their holdings and make those digital copies accessible, on site, to patrons.  In a way, this is a remarkable ruling that recognizes the unique place of libraries in the dissemination and democratization of knowledge.  Yet the decision does not really give libraries a tool that promises to be very useful.  It is worth taking a moment, I think, to reflect on what this EU ruling is, what it is not, and how it compares to the current state of things for U.S. libraries.

There are news stories about the EU ruling here, here and here.

What the EU Court of Justice said is that, based on the EU “Copyright Directive,” libraries have permission to make digital copies of works in their collections and to make those copies available to patrons on dedicated terminals in the library.  The Court is interpreting language already found in the Directive and adding two points: first, that library digitization is implied by the Directive’s authorization for digital copies on dedicated terminals, and second, that this is permissible even if the publisher is offering a license agreement.  Finally, the Court makes clear that this ruling does not permit further distribution of copies of the digitized work, either by printing or by downloading to a patron’s storage device.

As far as recognizing what this decision is not, it is very important to realize that it is not the law in the United States.  It is easy sometimes, when the media gets ahold of a copyright-related story, to forget that different jurisdictions have different rules.  The welter of copyright information, guidelines, suggestions and downright misinformation can make the whole area so complex that simple principles can be forgotten.  So let’s remind ourselves that this interesting move comes from the European Union Court of Justice and is the law only for the EU member states.

The other thing this ruling is not is broad permission for mass digitization.  The authorization is restricted to copies that are made available to library patrons on dedicated terminals in the library.  It does not permit widespread distribution over the Internet, just “reading stations” in the library.  That restriction makes it unlikely, in my opinion, that many European libraries would invest in the costs of mass digitization for such a relatively small benefit.

So how does this ruling in the EU compare to the rights and needs of libraries in the U.S.?

Let’s consider section 108(c) of the U.S. copyright law, which permits copying of published works for preservation purposes.  That provision seems to get us only a little way toward what the EU Court has allowed.  Under 108(c), a U.S. library could digitize a book if three conditions were met.  First, the digitization must be for the purpose of preserving a book from the collection that is damaged, deteriorating, or permanently missing.  Second, an unused replacement for the book must not be available at a fair price.  Third, the digital copy may not be made available to the public outside of the library’s premises.  This last condition is similar, obviously, to the EU’s dedicated terminal authorization; a patron can read the digital copy only while present in the library.

Two differences between the EU ruling and section 108(c) are especially interesting:

  1. The works for which this type of copying is allowed in the U.S. are much more limited.  The EU says that libraries can digitize any book in their collections, even if it is not damaged or deteriorating, and even if another copy, including an electronic one, could be purchased.  This seems to be the main place where the EU Court has expanded the scope for library digitization.
  2. On the other hand, the use of a digital copy may be less restricted in the U.S.  Instead of a dedicated terminal, a U.S. library could, presumably, make the copy available on a restricted network, so that more than one patron could use it at a time, as long as all of them were only able to access the digital copy while on the library premises.

In the U.S., of course, libraries also can rely on fair use.  Does fair use get us closer to being able to do in the U.S. what is allowed to European libraries?  Maybe a little closer.  Fair use might get us past the restriction in 108(c) about only digitizing damaged books; we could conceivably digitize a book that did not meet the preservation standard if we had a permissible purpose.  And the restriction of that digitized book to in-library use only would help with the fourth fair use factor, impact on the market.  But still we would have issues about the purpose of the copying and the nature of the original work.  Would general reading be a purpose that supports fair use?  I am not sure.  And what books could we (or could we not) digitize?  The specific book at issue in the case before the EU Court was a history textbook.  But textbooks might be especially hard for a U.S. library to justify digitizing for even limited access under fair use.

If we wanted to claim fair use for digitizing a work for limited, on site access, my first priority would be to ask why — what is the purpose that supports digitization?  Is a digital version superior for some articulable reason to the print copy we own (remembering that if the problem is condition, we should look to 108)?  One obvious purpose would be for use with adaptive software by disabled patrons.  Also, I would look at the type of original; as I said, I think a textbook, such as was at issue in the EU case, would be harder to justify under U.S. fair use, although some purposes, such as access for the disabled, might do it.  Finally, I would look at the market effect.  Is a version that would meet the need available?  Although the EU Court said that European libraries did not need to ask this question, I think in the U.S. we still must.

Ultimately, the EU Court gave European libraries a limited but useful option here.  Unfortunately, in the U.S. we have only pieces of that option available to us, under different parts of the U.S. law.  It will be interesting to see whether, in this age of copyright harmonization, U.S. officials begin to reconsider this particular slice of library needs because of what the EU has ruled.

MOOCs and student learning

Now that the MOOC on Copyright for Educators and Librarians has finished its first run, it seems like a good time to post some reflections on what I learned from the experience.

The first thing I learned is that offering a MOOC takes a lot of work, and it is easier when that work is shared.  In my case, I was working with two wonderful colleagues — Anne Gilliland from the University of North Carolina, Chapel Hill and Lisa Macklin from Emory — who made the effort of putting the course together much more pleasant.  Both are lawyers and librarians with lots of experience teaching the issues we were dealing with, and we are all friends as well, which made the whole process a lot easier.  We also benefited from the terrific support we got from consultants working for Duke’s Center for Instructional Technology, which may be the single most MOOC-savvy group at any university.

That we had a great team was not really a surprise.  I was a bit more surprised, however, and quite pleasantly so, by the quality of the student discussion in our MOOC.  I had heard from other instructors about how effective the online discussion forums could be, but was just a bit skeptical.  Then I was able to watch as MOOC participants would pose difficult questions or struggle with the application of copyright law to a particular situation, and repeatedly the other course participants would work through the problem in the forums and arrive at surprisingly good answers.  Peer-to-peer teaching is a reality in MOOCs, and is certainly among the best features of these courses.

One thing we know about MOOCs is that they often have participants with considerable background in the topic; often they have enrolled for a refresher or to see how someone else teaches it.  These people are a great asset in the MOOC.  Even if they are not among the participants most likely to complete a course under whatever formula for completion is in place, they are tremendously important to the success of the course because of the contribution they make to peer learning in the discussion forums.

Acknowledging the contribution of “expert students” also offers a reminder to MOOC instructors to take a more humble approach to the standards we set for completion of our courses.  The open and online nature of these courses means that students enroll with a wide variety of goals in mind.  As I just said, some are experts looking to see how others teach the topic.  Completion of quizzes and such may be unimportant to such participants, even though they are getting valuable career or personal development from the course.

Along these lines, I agree wholeheartedly with this essay by Jeff Pomerantz about apologies for failing to complete a course.  Like Jeff, my colleagues and I got multiple e-mails in which participants explained their “failure” to complete the course.  Like Jeff, we often smiled to ourselves and chalked those messages up to a misunderstanding of what MOOCs are.  And like Jeff, we learned that there are so many reasons for taking a course, so many different goals that participants bring to their involvement, that it is more likely we instructors who need to get a better understanding of MOOCs.

Many of the participants in our specific course were librarians and educators; they were our target audiences, so that makes sense.  These are groups that take assignments and course completion very seriously, which was reflected in our very high completion rate (over 15%).  But it also means that these were folks who wanted to explain to us when they were not going to complete the course according to official standards.  Maybe they did not realize that we were unable to track participation at an individual level due to the technology and the volume of students.  Nevertheless, we needed to treat their desire to explain with respect, and to recognize that many of those who did not earn a certificate of completion probably got what they wanted from the course, and also very likely made important contributions to what other participants learned.

Last week I attended a meeting of Duke’s MOOC instructors, which focused on discussions about how we can use data available about the MOOCs to learn more about the teaching and learning process.  It was a fascinating meeting on several levels, but one thing I took away from it was a pair of stories about the kinds of goals that MOOC participants might have.

  • One faculty member who had taught a MOOC explained, in passing, his own motivation for taking a different online course.  His career as a student had been so focused on his own specialty that he had never gotten a chance to take a basic course in a different field that had always interested him.  “There was so much to learn,” he said, “and so little time.” A MOOC gave him a chance to fill that long-felt gap, and I will bet that he was a valuable student to have in the course: highly motivated, like so many MOOC participants, whether or not he finished the assignments that lead to completion.
  • One of the administrators of Duke’s online initiative told a story about overhearing two students discussing the fact that each was taking a MOOC, and about interrupting the conversation to ask why they had enrolled.  One of the women was a Ph.D. student who explained that certain areas of study or skills she needed to complete her dissertation were most efficiently gained by taking parts of a MOOC or two.  She registered in order to listen to selected videos that were relevant to her specific research.  She is a perfect example of someone who will not count toward a completion statistic but who is gaining something very valuable through her participation.

The other thing I learned from this meeting about potential research enabled by MOOCs is the myriad ways that these online courses can help improve teaching and learning on our own campus.  Duke has said all along that improving the experience of our own students was an important goal of our involvement with MOOCs.  When I heard this, I usually thought about flipped classrooms.  But that is a very small part of what MOOCs can do for us, I discovered.  I was privileged to listen to a comprehensive discussion about how the data we gather from MOOCs can be used to improve the student experience in our regular classrooms.  Very specific questions were posed about the role of cohorts, the impact of positive and negative feedback, how we can harness the creative ideas students raise during courses, and how to better assess the degree to which individual students have met the unique goals they brought to the course.  All of this has obvious application well beyond the specific MOOC context.

The most important thing I learned from the experience of teaching a MOOC actually has little to do with online courses as such.  It is a renewed respect for the complexity and diversity of the learning process itself, and a sense of awe at being allowed to play a small role in it.