Essays on law, leadership, technology, and "things" – and arts, culture, intellectual property, and commons – and entrepreneurship and innovation – and higher education – and Pittsburgh and urbanism. One law professor's views of the world.
Two years to the day since my last blog post on this subject, the Federal Circuit has reversed Judge Alsup’s ruling that the Java API (the list of function and variable (a/k/a parameter) names) is uncopyrightable. The Federal Circuit held that the structure, sequence, and organization of the APIs render them sufficiently original and non-functional to be copyrightable. As such, the case is remanded to determine whether Google’s wholesale use of them to make Android is fair use. For more background, see my prior post.
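To make concrete what is at stake, here is a minimal sketch (my own toy example, not Oracle's or Google's actual code) of what "reimplementing an API" means: the declaration line is the kind of thing Google reused so that existing caller code keeps working, while the method body is written independently.

```java
// Illustrative sketch only. The class and method names here are chosen to
// mirror java.lang.Math.max(int, int); nothing below is actual Oracle or
// Google code.
public class MyMath {
    // The *declaration* (name, parameter types, return type) matches the
    // familiar API, so callers written against the original keep compiling.
    public static int max(int a, int b) {
        // The *implementation* is independent work.
        return (a >= b) ? a : b;
    }

    public static void main(String[] args) {
        System.out.println(MyMath.max(3, 7)); // prints 7
    }
}
```

The copyright fight is over the first kind of line, not the second: whether copying thousands of such declarations, to preserve interoperability, requires a fair use defense at all.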
The problem with this ruling is twofold. First, it is surely correct. Second, it is surely wrong. Why is it correct? Because structure, sequence, and organization can be creative. This has long been true, and well should be. I won’t relitigate that here, but holding that these APIs were simply not copyrightable was a stretch in the 9th Circuit, and the Federal Circuit is correct to say so.
Why is it wrong? Because Google should surely be privileged to do what it did without having to resort to fair use. The court says: “We disagree with Google’s suggestion that Sony and Sega created an ‘interoperability exception’ to copyrightability.”
It is here that framing is important. The court’s statement is accurate; we don’t get rid of copyrightability just to allow interoperability. But Sega is crystal clear that we do allow interoperability reuse: “To the extent that a work is functional or factual, it may be copied, Baker v. Selden, as may those expressive elements of the work that ‘must necessarily be used as incident to’ expression of the underlying ideas, functional concepts, or facts….” This is not the merger doctrine that the court applied, but rather a defense to infringement.
In short, this should have been an abstraction-filtration-comparison case, and the Federal Circuit makes clear that Judge Alsup did not perform that analysis. The appeals court also makes clear that if the APIs are directly taken, you can jump directly to filtration, but this does not mean you need to hold the APIs uncopyrightable in order to filter them out in the infringement analysis. Instead, Oracle gets its copyright, and Google gets interoperability. It is here that the appellate decision misses the boat.
I hate to be critical after the fact, but this case should never have gone to the jury. It should have been decided as a non-infringement summary judgment case pre-trial where Oracle kept its copyright but infringement was denied as a matter of law due to functional reuse. Maybe that would have been reversed, too, but at least the framing would have been right to better support affirmance.
May 12, 2014 update: Two commenters have gone opposite ways on Sega, so I thought I would expand that discussion a bit:
Sega is about intermediate copying fair use, yes. But that intermediate copying was to get to the underlying interoperability. And I quote the key sentence from Sega above – even if that functionality is bound up with expression (as it is in this case), we still consider that a privileged use (and thus a worthy end to intermediate copying, which is not a privileged use).
Now, in this case, we don’t need to get to the intermediate copying part because the interoperability information was published. But the privileged use that allowed the intermediate copying didn’t suddenly go away simply because Google didn’t have to reverse engineer to expose it. So, to say Sega doesn’t apply because it is a fair use case completely misunderstands Sega. The fair use there was not about fair use of the APIs. That use was allowed with a simple hand wave. The fair use was about copying the whole program to get to those APIs, something that is not relevant here. So sending this case back for a fair use determination is odd.
That said, Sega pretty clearly makes the use a defense to infringement, rather than a 102(b) ruling that there can be no copyright.
As patent system followers eagerly await the outcome of Alice v. CLS Bank, it occurred to me that the Court has already heard this exact case – back in the 1970s. My prior discussion of Alice is here as background.
In Dann v. Johnston, the applicant sought a patent on software that allowed banks to report account spending by category (e.g., rent, utilities) rather than having customers calculate this themselves. This patent is, in concept, little different from the patent in Alice. The Alice patent covers software that allows banks to reconcile transactions through the use of “shadow” accounts kept in data records.
The Court took the case up on two questions: subject matter and obviousness. In the end, the Court dodged the subject matter question, but instead ruled that the patent was obvious, in large part because it simply implemented something that already existed in paper:
Under respondent’s system, what might previously have been separate accounts are treated as a single account, and the customer can see on a single statement the status and progress of each of his “sub-accounts.” Respondent’s “category code” scheme, see supra, at 221, is, we think, closely analogous to a bank’s offering its customers multiple accounts from which to choose for making a deposit or writing a check. Indeed, as noted by the Board, the addition of a category number, varying with the nature of the transaction, to the end of a bank customer’s regular account number, creates “in effect, a series of different and distinct account numbers. . . .” Pet. for Cert. 34A. Moreover, we note that banks have long segregated debits attributable to service charges within any given separate account and have rendered their customers subtotals for those charges.
The utilization of automatic data processing equipment in the traditional separate account system is, of course, somewhat different from the system encompassed by respondent’s invention. As the CCPA noted, respondent’s invention does something other than “provide a customer with . . . a summary sheet consisting of net totals of plural separate accounts which a customer may have at a bank.” 502 F. 2d, at 771. However, it must be remembered that the “obviousness” test of § 103 is not one which turns on whether an invention is equivalent to some element in the prior art but rather whether the difference between the prior art and the subject matter in question “is a difference sufficient to render the claimed subject matter unobvious to one skilled in the applicable art. . . .” Id., at 772 (Markey, C. J., dissenting).
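The "category code" scheme the Court describes is easy to see in code. Here is a minimal sketch (the class and method names are mine, not the patent's): appending a category code to the customer's account number creates, in effect, a series of distinct sub-accounts, each of which can be subtotaled on a single statement.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the Dann v. Johnston category-code idea.
// Key insight: account number + category code acts as a distinct
// "sub-account" number, exactly as the Board described.
public class CategoryLedger {
    private final Map<String, Integer> subAccounts = new HashMap<>();

    // e.g. account "1234" + category "utilities" -> key "1234-utilities"
    public void record(String account, String category, int cents) {
        String key = account + "-" + category;
        subAccounts.merge(key, cents, Integer::sum); // accumulate the subtotal
    }

    public int subtotal(String account, String category) {
        return subAccounts.getOrDefault(account + "-" + category, 0);
    }

    public static void main(String[] args) {
        CategoryLedger ledger = new CategoryLedger();
        ledger.record("1234", "rent", 90000);
        ledger.record("1234", "utilities", 5000);
        ledger.record("1234", "utilities", 7500);
        System.out.println(ledger.subtotal("1234", "utilities")); // prints 12500
    }
}
```

The Court's obviousness point falls right out of the sketch: the computer adds nothing that a bank offering its customers multiple separate accounts did not already do on paper.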
You could cut something like this out and plop it right into the Alice opinion. Except now, obviousness is not on the table. The question is whether the Court got it wrong by not ruling on subject matter in 1976. I don’t think so, but I do think that Dann v. Johnston is the most underused Supreme Court opinion in the software area.
Professors are in an uproar over Aspen Publishers’ new rules for textbooks. In short, if you thought you could buy a book and do what you wanted after that (i.e., sell it used), Aspen wants to change that system. Instead of a true, unbundled digital option, it has a system where students buy both a physical textbook and a “lifetime” digital book. Too bad, as there is a market opportunity that they might be missing. On the legal doctrine front, Josh Blackman called it out. James Grimmelmann jumped on the bashing. Rebecca Tushnet has poked at the offer too. But where is the market here? Is there a way Aspen could make this shift work well? If so, would authors (i.e., professors with deals with Aspen) like it? And why not use dollars to tell Aspen what to do? Assign a different casebook from a competitor (FYI, there is a free one out there; see below). There are some specific issues that illustrate some of the problems in this space.
First, what about time and artificial editions? Rebecca nails this point by calling out that some areas of law (e.g., IP) change so fast that new editions and coverage issues make staying up with casebooks a problem. In those areas, does first sale do much work? Maybe it does in the few years between editions. But after that, the text is somewhat obsolete. Dusting off an IP text in digital or hardcopy from the 1990s would be dangerous except for fundamentals (and maybe even for those). Still, there are now seven editions of the Dukeminier casebook. Are the updates every four or so years needed? Even in other areas, are authors updating to add value, or to create a new text that undercuts the used market? Do publishers lean on authors to issue new editions when there is not much to say, as a matter of market windows or version control? If so, the publisher is setting up the demand for secondary or alternate markets that cut out the publisher.
So is this system functioning? As I noted before, the OpenStax system offers high-quality texts for free and in a modular way. That means sections are updated for free and folks can assemble material as they wish. Law does not have that yet. The folks at Semaphore Press are close, however. That press happens to publish a property text by Steve Semeraro (disclosure: I am friends with the folks at Semaphore and introduced them to Steve). It is not quite OpenStax, but it is an interesting model with a shareware feel.
Second, what about the cost to write and update a text? I know it takes tons of time. Whether RAs do some work or it is all done by the professors, the time to write a good casebook is real. I am grateful for the good books. A great teacher’s manual is also a huge help. For new teachers and even experienced ones, a rich manual provides insights about how the author(s) teach the material and where they see the comments headed. One can then choose to follow that lead or modify it. But is the price point for texts (as many noted, often close to $200) sustainable? Would the market collapse if the cost dropped to low or no charge? OpenStax indicates that the system could shift, and that a small crowd of experts would be able to offer an excellent, up-to-date text. And as Pam Samuelson and many others have noted, scholarly works pay off in reputation. So having the most-assigned text (or specific chapter on a subject) may stimulate just enough competition for reputation to get great texts (or chapters) but not a glut of roughly the same material from many high-priced publishers.
Third, what about that market opportunity? Would a publisher that offered A) a true digital copy for $40, $50, or even $100 take share from others? B) What if the publisher said rent the hard copy for a reduced price (again, it should be low)? Some might hate that idea as a matter of doctrine, but that market is emerging on Amazon and at least lets the student know what is going on. (Though I think a rental model poses some issues for libraries, in that no one should say libraries should just be rental depots, that is another debate for another time.)
So Aspen, if you’d like to survive, I am betting your authors would like that too. But I am also betting they want to work with you to offer much better solutions than the ones you have right now. The lifetime digital edition and the high price insult the authors and the marketplace. I think others will find ways to route around you. But you could take your current position and parlay it into the future. If not, I think you may have pushed the law text market to Semaphore or OpenStax. Hmm, maybe Aspen should stay with its model after all.
As many who follow such things know, ABC v. Aereo was argued today before the Supreme Court. My writeup on it last year provides some background about the case and my views at the time (which have changed a bit since I have more closely studied the technology, the case law, and the statute). In this post, I want to discuss the three pictures of Aereo – the big picture, the little picture, and the side picture.
The Big Picture
The basic gist of this case is this: Aereo grabs programming off the airwaves – programming that anyone with an antenna in their home can grab. Aereo then sends this content to servers – sort of remote DVRs. From those servers, the content is then sent to users over the internet. (There is some transcoding and compressing that occurs, but most are ignoring this feature).
It turns out that a court passed judgment on a similar system a few years ago in the Cablevision case. There, the court held that a remote DVR was not a public performance even though it transmitted shows from the remote DVR to subscribers. The basis for the ruling was eminently reasonable: the user decides what to record, what to watch, when to watch, and such decisions are separate from other users. As such, the transmission from the DVR to the user is a private performance, no different than if the DVR were in the user’s own home attached to the television.
There is one primary difference between Cablevision and Aereo. Cablevision grabs the signals off the air with one (or a few) big antennas. Aereo grabs the signals off the air with thousands of tiny, dime-size antennas. Aereo assigns one to each user for each channel watched or recorded. This difference becomes important, but only in the little picture.
Indeed, this difference was irrelevant to the circuit court that ruled in favor of Aereo. That court said Aereo is no different from Cablevision, and as a result, the performance is private.
And there lies the angst. At oral argument today, several justices struggled with what might happen if the circuit court is reversed. Will all cloud computing be at risk because it is considered public? That’s a scary thought, and what I call the big picture of Aereo.
I think the big picture is mostly a sideshow. Justice Kagan asked the best, most pointed questions about whether a storage locker might be publicly performing if people upload their own content and then share it with others. The answer, of course, is that it depends on how widely and with whom the content is shared (and whether it is shared for viewing or downloading, since only viewing would be a performance). After all, YouTube is simply a storage locker with worldwide shared viewing – and no one doubts that YouTube publicly performs. Further, we have safe harbors to protect such services that perform content coming from their users.
So, then why is the big picture a sideshow? Because the Court is actively thinking about it, and will work hard to find a way to rule without harming cloud services.
This leads to the little picture.
The Little Picture
It turns out that all of the hand-wringing about the cloud is caused by the Second Circuit’s fixation on the wrong part of the transmission chain.
Consider the system described above. All of the concern appears to be about the leg from the cloud DVR to the viewing device. But, as Cablevision makes clear, it doesn’t make a difference where the DVR is. You can put it in the user’s home, or push it back up the pipeline to the provider, and the result should be the same. Indeed, the same is true for Aereo, and for Dropbox, and iCloud, and so on. So long as there is one-to-one correspondence and control between the DVR storage and the viewer, the performance is private. This is why YouTube, Roku, and shared lockers might be public performances – not due to the location, but due to the connections between users and the storage.
If the DVR is private, then, what should we be looking at? The other part of the system above: the re-transmission (a “secondary transmission” in the words of 17 USC 111) from the antenna to the DVR. And it doesn’t matter how long or short that cable is – whether the transmission is from the antenna to the DVR at the user’s home or to the DVR at the provider’s facility, the result is the same. The transmission is to many DVRs, not from one DVR to one user.
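The distinction the last two paragraphs draw can be modeled as a simple fan-out check (a sketch of my own framing, not the court's test, with hypothetical names): a transmission leg is one-to-one, and thus plausibly private under Cablevision's logic, only if its source reaches a single recipient.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative model of the "fan-out" question: DVR-to-viewer legs are
// one-to-one, while the antenna-to-many-DVRs leg is one-to-many.
public class FanOut {
    // transmission source -> distinct recipients it feeds
    private final Map<String, Set<String>> recipients = new HashMap<>();

    public void transmit(String source, String recipient) {
        recipients.computeIfAbsent(source, k -> new HashSet<>()).add(recipient);
    }

    // One-to-one iff the source reaches exactly one recipient.
    public boolean isOneToOne(String source) {
        return recipients.getOrDefault(source, Set.of()).size() == 1;
    }

    public static void main(String[] args) {
        FanOut system = new FanOut();
        // Each user's stored copy feeds only that user: private on this model.
        system.transmit("userA-DVR-copy", "userA");
        // The broadcast feed reaches many DVRs: the one-to-many leg.
        system.transmit("broadcast-feed", "userA-DVR");
        system.transmit("broadcast-feed", "userB-DVR");
        System.out.println(system.isOneToOne("userA-DVR-copy")); // true
        System.out.println(system.isOneToOne("broadcast-feed")); // false
    }
}
```

On this model, the interesting question is exactly the one in the next paragraph: whether Aereo's thousands of individual antennas turn the broadcast-feed leg back into many one-to-one legs.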
This is where, unlike the treatment by the Second Circuit, the individual antennas become relevant. Aereo claims that because the user controls a unique dime-size antenna, that re-transmission is also private – it is still one-to-one. For that reason, Aereo argues, it is not publicly performing, whereas Cablevision is (though Cablevision pays a licensing fee).
And that question, I submit, is little picture. It is a small, non-earthshattering, non-economy-harming statutory interpretation issue: do multiple antennas constitute a secondary transmission/public performance or not? I think they do, based on my reading of the statute. Section 111 makes very clear that “secondary transmissions” are considered public, and the definition of such transmissions is not limited to a single antenna: the signal is being grabbed and sent to many users by Aereo, not by the users themselves with user equipment. Aereo claims that its antennas are like a user running an antenna at home (essentially a rental), but the antennas are not really leased. They are reused by others when not in use (unlike stored shows on the DVR), they are maintained by Aereo, they only feed Aereo equipment, and they are controlled by Aereo at the behest of an IP packet received over the internet (rather than by a user-owned device that actually tunes to a frequency).
Of course, others (and the Court) might disagree with me, but the ruling will not bring down the cloud. A secondary transmission is defined as a simultaneous retransmission of a primary transmission. There is no primary transmission that is simultaneously retransmitted in most cloud applications. It’s just not the same thing, nor should it be treated as such. And, perhaps surprisingly, the complexity of the copyright statute actually considered and handled this issue.
This brings us to the side picture, which many academics and media outlets have discussed, but which has generally been left out of judicial discussion: the business of broadcasting is changing, and this case is one of many to come that will test broadcasters, service providers, and consumers in how television entertainment will be delivered. Aereo is a piece of a puzzle that allows people to enjoy live shows while streaming serial shows on services like Netflix and Amazon Instant Video. The sum total of those services costs less than cable due to unbundling.
But even if Aereo wins, broadcasters might change their behavior to avoid the harms of Aereo. They might stop broadcasting, they might move more live television to cable (like ESPN and NFL Network showing more pro football), they might offer competitive services, or they might offer free streaming to cable customers (some already do). Thus, earlier today, I boldly argued that this case would not be a big deal – that the parties would adapt to any ruling or lobby Congress.
Here’s what I said in my blog post last year:
I’m not sure the Aereo ruling [allowing Aereo] is the right one in the long run. One of the thorny issues with broadcast television is range. Broadcasters in different markets are not supposed to overlap. Ordinarily, this is no issue because radio waves only travel so far. When a provider sends the broadcast by other means, however, overlap is possible, and the provider keeps the overlap from happening. DirecTV, for example, only allows a broadcast package based on location.
Aereo is not so limited, however. Presumably, one can record broadcast shows from every market. Why should this matter? Imagine the Aereo “Sunday Ticket” package, whereby Aereo records local NFL games from every market and allows subscribers to stream them. Presumably this is completely legal, but something seems off about it. While Aereo’s operation seems fine for a single market, this use is a bit thornier. I’m reasonably certain that Congress will close that loophole if any service actually tries it.
My thinking is still the same today. Indeed, one of the justices asked whether Aereo could just abandon geographic restrictions if it wins. Counsel had no real answer to that question.
The problem is that Aereo is caught between a rock and a hard place. As Bruce argues, Aereo should be considered a cable system and should pay the compulsory license fee (which is relatively inexpensive – cable companies offer basic OTA channels for $12 or so a month). But prior precedent has held that an internet distributor cannot be considered a cable system eligible for a statutory license. Perhaps the best solution is to revisit that rule.
This week, the Court heard oral arguments in Alice Corp. v. CLS Bank. There are lots of writeups, and I, of course, prefer the one that quotes me. For that reason, I’m going to skip a lot of the detail. The gist of the case is that the Supreme Court must again decide whether patent claims are eligible subject matter – that is, whether the claims should be tested on their merits, or thrown out because we just don’t patent that sort of thing.
The last time around, in Bilski v. Kappos, the Court said that abstract ideas were not patentable, and that a claim to hedging was an abstract idea. That’s all well and good, until the next claim comes around, and the next claim, and the next claim. In Mayo v. Prometheus, the court added a little meat to the bones, saying that once you have your “natural law,” the inventor has to add some non-conventional steps in addition to the natural law.
That’s all well and good, except: 1) for reasons I can’t understand, lower courts aren’t tying the Prometheus conventional steps idea to abstract ideas systematically; 2) the conventional steps view imports 102 and 103, but without any of the rigor those sections require; and 3) lower courts and litigants can’t agree on what the base “idea” or “law” is, to which we might add steps. More on that later. Justice Breyer, at oral arguments in Alice, made clear that the Court was providing a “shell” in prior cases and that he was hopeful that the lower courts and litigants would figure it out.
How wrong he was. As I’ve written several times over the past seven years, since before subject matter challenges were in vogue, this is a morass that cannot be solved in a principled way. I think the best shot we had was our argument in Life after Bilski that we should be looking for claim overbreadth, and for ideas untethered from any application. The Court did not adopt this approach, though there is hope it will reconsider in this case.
But, despite my misgivings, my viewpoint that we should just abandon all attempts to limit patentable subject matter beyond the statute has been expressly rejected by the Court. In short, the Court really, really wants 101 to do something, and it is now struggling with what and how.
This leads to my view of how the Court should rule. Despite everyone who knows anything about patents (including me) thinking the Court is going to invalidate some of the claims of this patent, my view is that the Court must affirm patentability of the Alice Corp. claims. This doesn’t mean I think it’s a good patent.
Why do I argue this? Because we need goalposts. As long as the Court continues to hold patents invalid, we will see uncertainty. This is directly parallel to the Diehr case of 1981. Prior to that time, there was great uncertainty about what could be patented, and it did not end until Diehr. That uncertainty led to under-applications for patents, which led to less prior art, which made it harder to invalidate weak patents of the 1990s, the very same patents that give us so much trouble now.
My view is that Alice Corp. can be a new floor, that patent applications can be rejected on their merits but also become part of the public domain to wipe out other patent applications on weak inventions of the future. We need goalposts.
As for the actual subject matter of the patent, saying it is not an abstract idea is not a stretch. The broadest claim at issue is a system configured for settling multiparty transactions on a computer, getting information from a “device” and “generating instructions.” It’s written fancily, but it’s a pretty simple claim. Public Knowledge claims it can be implemented in 7 lines of Basic code. This is a bit of an exaggeration that takes some liberties with claim construction.
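To see why the claim is simple but still an application rather than a bare idea, here is a minimal sketch of the shadow-account scheme (my own simplification with hypothetical names, not the patent's claim language): an intermediary keeps "shadow" balances for each party and approves an exchange only if the payer's shadow balance covers it, eliminating settlement risk; the real accounts are adjusted later by generated instructions.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of intermediated settlement with shadow records.
// Not the claimed system; just the gist of the idea discussed in the post.
public class ShadowSettlement {
    private final Map<String, Long> shadow = new HashMap<>();

    // Mirror a party's real-world balance into the shadow record.
    public void setShadowBalance(String party, long amount) {
        shadow.put(party, amount);
    }

    // Approve the exchange only if the payer's shadow balance covers it;
    // the adjustment here is what later drives settlement instructions.
    public boolean approve(String payer, String payee, long amount) {
        long balance = shadow.getOrDefault(payer, 0L);
        if (balance < amount) {
            return false; // would create settlement risk; refuse
        }
        shadow.put(payer, balance - amount);
        shadow.merge(payee, amount, Long::sum);
        return true;
    }

    public long shadowBalance(String party) {
        return shadow.getOrDefault(party, 0L);
    }
}
```

That a working sketch fits in a few dozen lines says something about the claim's breadth, but, as argued below, breadth and simplicity sound in obviousness, not abstractness.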
But that’s the point. First, we should probably construe what a claim covers before deciding that it’s an abstract idea. Oral argument in Alice, and general discussion in this area, pay too little attention to the actual limitations of the patent claims. Second, the claims here include a device, a computer, a storage unit, etc. This is an application of how to do escrow – and not even the general idea of escrow, but a specific type of intermediated escrow. Yes, I understand that “do idea x on a computer” is a claim we worry about, but we have cases like Dann v. Johnston that deal with that on obviousness, and if it wasn’t obvious to “do idea x on a computer,” then that might be the type of claim we want. Bonus trivia: cert. was granted in Dann on a subject matter challenge as well, and the Court declined to rule on it, instead ruling only on the obviousness point.
Third, the patent has been attacked as simple and as not doing anything earth-shattering, but CLS Bank itself claims that this idea was a long time coming – some 20 years. If it could really be implemented with 7 lines of code, why didn’t CLS Bank just do it? Why did it take a couple of years even after Alice sought patent protection to implement such an “abstract idea” that everyone claims to have known was necessary? Then again, maybe it’s just a simple invention and Alice was merely the first to write it down when computers and networks got faster. But these are all questions about obviousness – questions that we should be asking and that may well render the patent invalid. Just because we have a gut feeling that something might be obvious doesn’t mean we should use hindsight many, many years after the patent filing to call it an abstract idea.
Admittedly, this is a broad claim. It might even satisfy our definition of “overly broad” from Life After Bilski and the factors of abstract idea that we identify there. But the Court did not adopt our framework. I worry that more harm than good will come of further nebulous definitions of abstract ideas, which will come until the Court affirms a borderline claim as good enough. Everyone watching this case, whether they favor broad patenting or limited patenting, thinks the lack of rule clarity is a problem. The Court should use this case to set the lowest bar for non-abstractness – it is the only way left to get some clarity.