Author: Karl Fogel

The conference “Copyright, DRM Technologies, and Consumer Protection”, on March 9 and 10 at UC Berkeley, was a great example of competing assumptions jockeying for mental floor space.

(DRM is the set of software and hardware handicaps that prevents computers and media players from sharing files freely with each other. It’s why when you download a song to your iPod, you can’t copy it to another iPod or upload it to somewhere else, for example; or why when you burn a CD with a standalone CD burner, you often have trouble making copies from the new CD.)

The panelists at the conference were varied: lawyers and professors of law, some economists and other academics, executives from content-owning companies and content-carrying companies, officials from governmental and quasi-governmental bodies (e.g., WIPO), someone from the British Library, from the Electronic Frontier Foundation, from Public Knowledge, etc. Each panel held five or six people, facing the audience. A moderator introduced each one, and then that panelist spoke for fifteen to twenty minutes on the panel’s topic. After all had spoken, the panel as a whole took questions from the audience.

Given the disagreements in the room, the conference understandably didn’t come to any conclusions about DRM. But it did two other useful things: it gave an opportunity for every possible analysis of DRM to be heard and debated (there were some non-obvious ones), and, perhaps more importantly, it revealed the rhetorical terms in which different parties want the public to think about DRM.

One good point a few panelists made is that successful DRM is likely to weaken the user’s privacy. All DRM prevents computers and media devices from sharing files freely with each other. But in order to merely curb freedom, rather than end it entirely, DRM must identify which files can be shared and which can’t, and which methods of sharing are permissible. The more sophisticated this process of determination becomes, the more it is necessary for devices to analyze information about the files in complex ways. The burden of this analysis will often be too great to implement in typical consumer electronics — so instead the data will be sent to an online server, which will figure out your rights and tell the client device what to do. But step back and consider where this is going: devices all over your house, sending information about your viewing and listening habits to a central server. Is this data certain to be subpoena-able someday? You bet. It probably already is.
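To make the privacy concern concrete, here is a deliberately toy sketch of the data flow just described. Everything in it (the file IDs, the policy table, the field names) is invented for illustration; the point is simply that when the device cannot decide locally, every playback decision routes through a server, and the server necessarily accumulates a log of who did what, when.

```python
# Toy sketch of a client/server DRM rights check.
# All names and policies here are hypothetical.

import json

# The server's side: a policy lookup plus, crucially, a log of every query.
POLICIES = {"song-123": {"play": True, "copy": False}}
QUERY_LOG = []  # each rights check leaves a record: device, file, action


def rights_server(request_json):
    """Decide whether the requested action is allowed -- and log the request."""
    req = json.loads(request_json)
    QUERY_LOG.append((req["device_id"], req["file_id"], req["action"]))
    allowed = POLICIES.get(req["file_id"], {}).get(req["action"], False)
    return json.dumps({"allowed": allowed})


# The device's side: it cannot decide on its own; it must ask (and be logged).
def device_attempt(device_id, file_id, action):
    request = json.dumps(
        {"device_id": device_id, "file_id": file_id, "action": action})
    return json.loads(rights_server(request))["allowed"]


print(device_attempt("living-room-player", "song-123", "play"))  # True
print(device_attempt("living-room-player", "song-123", "copy"))  # False
print(QUERY_LOG)  # the server now has a record of your listening habits
```

Notice that the log is not an optional extra: it is a structural byproduct of any design where rights decisions happen server-side.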

Another point (made by Peter Swire among others) was the computer security implications of running DRM. The code in a DRM system must be a black box: it cannot be open source, because if the user could understand and change it, she could disable it and copy her files without restriction. But if the code is opaque, it cannot be examined for security flaws — and in fact, the Digital Millennium Copyright Act makes it illegal to even attempt such an examination in most circumstances. Basically, you have to run this code, for even if you are technically capable of modifying it, doing so would be illegal. (In response to this situation, Jim Blandy proposed a new slogan: “It’s my computer, damn it!”)

There was also some discussion of DRM in terms of consumer law and contract law, rather than copyright law. Consumer law takes into account the “reasonable expectations” of the consumer (for example, that having obtained a copy of a movie, you might reasonably expect your television or computer to actually play it when asked). But this solid-sounding phrase slowly disintegrated as people pointed out that the expectations of consumers are not static: as people experience digital restrictions more and more often, they begin to accept them — their “reasonable expectations” begin to incorporate DRM behaviors.

It was fascinating to see how determinedly the representatives from content-owning companies used the words “balance” and “choice”. Over and over again, we heard that the best DRM systems are those that strike an appropriate “balance” between the rights of content owners and the rights of consumers (also sometimes called “users”). The invocation of “choice” as a guiding principle often came as part of a self-addressed call-and-response formula, as in “What do consumers really want? They want choices.” I don’t see any way to understand that other than as a fake question designed to make DRM deployment appear to be a response to some market need — which it isn’t, because users have been indicating pretty clearly what they want: bits that flow freely.

These sorts of assertions often came in the context of the broader claim that DRM has the potential to enable a wide variety of new business models (see “An Economic Explanation For Why DRM Cannot Open Up New Business Model Opportunities” for a rebuttal of this point of view). The business-model argument was repeated by several panelists, and it’s worth some attention for the assumption underlying it: that enabling any particular business model is a positive good, a prima facie justification for whatever DRM mechanism might be required to enable it. My friend Ben Gross has an intensely practical answer to this kind of thinking. He objects to draconian copyright laws on the grounds that it’s simply not the government’s job to prop up failing business models, and he applies the same reasoning to DRM. Panelist Andrew Bridges (of Winston & Strawn, LLP) said essentially the same thing, in a memorable comment on DRM’s essential role in the marketplace: “There are two ways to make money by connecting supply and demand: by making it easy, or making it hard.”

Some panelists made reference to DRM protecting “integrity” (e.g., Victoria Bassetti of EMI: “DRM preserves our products’ integrity”), but we never got a concrete explanation of how it does so, or even what precisely it would mean. “Integrity” is a loaded word here, because whenever it is used in a conversation about filesharing and copyright, one can easily imagine that it refers at least partly to plagiarism. I don’t know whether that’s how these people meant it, but the inference is hard to avoid, and it’s completely backwards: DRM works against the detection of plagiarism, because it impedes digital technologies’ ability to arbitrarily examine and compare files, and prevents people from uploading files to locations where they can be viewed and downloaded publicly. Plagiarism cannot flourish where there is transparency, but DRM prevents transparent behavior at a technical level, and thus drives people toward non-transparent methods of sharing.

There were also various attempts to talk of DRM-restricted products as being essentially the same as physical products or limited-resource services. Thus, Thomas Rubin of Microsoft said that DRM has been accepted for years as a means of controlling access to satellite TV (and now satellite radio), to websites that require login accounts, to cell phone networks, and even to traditional libraries! At some point during the reading of that list, he mentioned that he was being deliberately provocative and tossing up some examples as fodder for thought. I’m glad he included that caveat, because his list didn’t contribute anything constructive to the debate, except to outrage more than one person in the audience (as I learned chatting in the hallway afterwards). Web sites and cell phone networks have limited bandwidth and computational resources, so their products really can be used up, if too many people log on. And libraries deal with physical objects, so access control is as understandable for them as for jewelry stores.

Is it so much to ask, at this late date, that everyone debating issues of copyright and DRM agree to stop talking about digital data as though it were a limited resource? You can’t “steal” songs and movies, you can only copy them. They’re not like library books, or cell phone bandwidth, or an artist’s reputation (all of which are, in one way or another, diminishable resources). Talking about data in that way is a disservice to logic. I think, deep down, Thomas Rubin knew this, which is why he inserted his disclaimer.

If I were to take away two lessons from the conference, they’d be that language matters, and assumptions matter. The rhetorical advantage gained by being in favor of “balance” is nearly unbeatable. I think the only way to deal with it is to redefine “balance”, to start using that word to talk about balancing new things, for example, the right of the public to copy and make derivative works versus the right of the artist to have (very) temporary control over the initial distribution of her work.

As for assumptions: I heard Victoria Bassetti of EMI respond to a questioner by asking (paraphrasing, as I don’t have the transcript) if he cared whether artists earned any money or not. Since artists mostly don’t make money from copyright royalties anyway, her response was a non-sequitur, but it effectively placed her on the artists’ side and the questioner on the side of those lazy, freeloading filesharers. Because she knows that most people share a certain assumption — one which she may even sincerely believe herself — about artists earning their livings from copyright royalties, she’s able to use this kind of response to deflect attention from the problems DRM creates. It would be a bad outcome indeed for these to be the terms under which the public considers DRM.

I’ve concentrated mostly on the remarks of unreservedly pro-DRM panelists here, but I don’t want to give the impression that they set the tone of the conference. There were impressive critical presentations and questions from the aforementioned Andrew Bridges, from Gigi Sohn (of Public Knowledge), Cindy Cohn (of the EFF), Ian Kerr (University of Ottawa), Deirdre Mulligan (Berkeley Center for Law and Technology, Samuelson Clinic, and Boalt Hall School of Law), Peter Swire (Ohio State University), and others. I went to the conference partly to see how people were talking about DRM, from all points of view, in preparation for being on a similar panel in Montréal next month. I was not disappointed. If there remains any major point about DRM not raised at this conference, I’d be very surprised; kudos to the organizers for that.

[References: my notes from the conference are here.]

On April 18th, 2007, I’ll be a panelist at a session with the provocative (and somewhat enigmatic) title of “Interoperability: computer industry giants versus music?”, at Les Rencontres québécoises de l’industrie de la musique, at the Bonsecours Market in Montréal, Quebec. The other panelists will be from the music distribution industry, plus at least one from the Electronic Frontier Foundation. I’m looking forward to a lively discussion! Full report here afterwards…


Well, this isn’t really a “full report”, but the panel discussion was terrific, and not at all the slugfest one might expect — genuine discussion took place. But the most interesting thing about the conference as a whole was that although it was mainly composed of people from the recording and radio industries, many were very receptive to the message that copyright is not always good, and many also showed signs of giving up on DRM as a strategy for controlling copying. It may be that the industry is starting to see the light, at least in Québec, Canada.

Portrait of Joyce Hatto

By now, the whole classical music world has heard of the Joyce Hatto scandal (Wikipedia’s article is excellent).

Joyce Hatto was a pianist who died in June, 2006. She didn’t play many concerts, but she recorded prolifically — or so everyone thought, until it was discovered, in early 2007, that most of her recordings were plagiarized from the records of other pianists. She never knew about it, apparently: the plagiarism was the work of her recording engineer and husband, William Barrington-Coupe.

The best part is how the deception was uncovered: when someone put her recordings onto a computer, automated comparison routines kept stubbornly identifying them as other pianists’ tracks!

It’s a great example of what we’ve been saying about artists putting their work online: sharing files widely prevents plagiarism, by making it much easier to detect. Forget Hatto herself for a moment — think instead of all those other pianists, whose recordings were passed off as her work: the reason the hoax was detected at all was because their track information was available online. And if the recordings themselves had been available online, the problem would only have been detected more quickly, probably years ago.
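The comparison behind the unmasking reportedly worked from track information: a disc can be identified by matching its track lengths against a database of known recordings, the way CDDB/Gracenote-style lookups label CDs in music software. Here is a minimal sketch of that idea; the track timings, titles, and tolerance below are all made up for illustration, not the actual data or algorithm involved.

```python
# Toy sketch: identify a disc by comparing its track lengths (in seconds)
# against a database of known discs. All timings and names are invented.

def signature_distance(tracks_a, tracks_b):
    """Sum of per-track length differences; large means 'different disc'."""
    if len(tracks_a) != len(tracks_b):
        return float("inf")
    return sum(abs(a - b) for a, b in zip(tracks_a, tracks_b))


def identify(disc, database, tolerance=2):
    """Return the closest known disc within tolerance, or None."""
    best_name, best_dist = None, float("inf")
    for name, tracks in database.items():
        d = signature_distance(disc, tracks)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= tolerance else None


known_discs = {
    "Pianist X, Liszt Etudes": [312, 245, 401, 198],
    "Pianist Y, Chopin Preludes": [120, 95, 130, 210],
}

# A "new" disc whose track lengths match Pianist X almost exactly:
suspect = [312, 246, 401, 198]
print(identify(suspect, known_discs))  # → "Pianist X, Liszt Etudes"
```

The key point for the argument above is the database: the matching only works because other pianists’ track information was publicly available to match against.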

The unmasking had nothing to do with DRM, by the way. DRM is the set of software and hardware handicaps that prevents computers and music players from sharing files freely with each other. It’s true that some of the programs that detected the similarities between Hatto’s recordings and other pianists’ also have built-in DRM, but the DRM is utterly irrelevant to the comparison techniques that spotted the correlations. In fact, if DRM were as effective as the record companies wish it were, it would only have hindered the comparisons, since then the other pianists’ track information might not have been readily available for examination.

But we’ve still got a long way to go. The Wikipedia article on Hatto had the following sentence, as of early February 28th:

Meanwhile the British Phonographic Industry (BPI) has begun an investigation. If the allegations are true, it would be one of the most extraordinary cases of copyright infringement the record industry had ever seen, according to a BPI spokesman.

Notice how the hoax is identified as “copyright infringement”, not “plagiarism”. I checked the reference: the BPI spokesman apparently referred to “piracy”, so in the interests of accurate quoting, I’ve changed the Wikipedia article to say “piracy”. But that’s not really satisfactory: the word “piracy” is often used to refer to both unauthorized printing and plagiarism, as though the two were the same offense. The word thus provided a semantic pivot, around which some Wikipedian was able to turn from one of the word’s meanings to the other, making it into a case of “copyright infringement”.

Who was this mysterious misquoter?

We’ll never know, because they did it anonymously, though clearly on purpose. For when the sentence was originally added, it quoted the BPI representative correctly. Later, someone came along and changed just one thing: “piracy” to “copyright infringement”. You can see the edit here. Probably they felt that “piracy” was too loaded a term, and that “copyright infringement” would be more accurate. Unfortunately, this is exactly the conflation — equating unauthorized copying with stealing credit — that the record industry promotes; the pity is that their effort has been so successful.

I’ve fixed the text to say “piracy” again. But the BPI spokesman should have talked about “plagiarism” in the first place, because that’s what we’re dealing with here, and the more we let digital files circulate freely, the less plagiarism there will be.

Portrait of Matthew Gertner

Back in late 2006, Matthew Gertner (of AllPeers) and I did a mutual interview about copyright reform. It was a fascinating and wide-ranging conversation, and he’s posted it on PeerPressure, the AllPeers blog.

From Matthew’s introduction:

“…Rather than assuming that some copyright is necessary and trying to decide exactly how much is optimal, [Fogel] suggests that we imagine a world without copyright and take it from there.

He contends at the beginning of the podcast that, not only does he not know personally what the right level of copyright is, but that it isn’t possible to know this based on current evidence, a view that I find eminently reasonable. I also agree wholeheartedly with the way he concludes our discussion:

I think that there is some built-in exclusivity there but I also think… whatever change is going to happen is going to happen essentially through a market process. It’s not going to be that Congress suddenly wakes up and drastically reconsiders copyright law. Instead, some number of artists, just as some number of software developers did a couple of decades ago, will by choice release their stuff under these liberal copyrights. And they will create this little fertile safe space for sharing that will grow, and basically we’ll have two parallel streams: there’s the old stream and the free stream. And people will just start choosing stuff based on what they like, not based on ideological concerns about how it was produced. And we’ll just see what happens.

At the end of the day, we need to create an environment where individuals can test their own approaches to copyright and let the market decide what works best. I don’t necessarily see as strong a connection as Karl between liberal copyright terms and free content, however, and I hope that this makes our discussion more dynamic and thought-provoking.”

There are both download and streaming links available — listen to it here.

Click the image above to watch QuestionCopyright.Org Executive Director Karl Fogel delivering a talk at the Stanford University Library’s Technology Chalk Talk Series on October 19, 2006. The video is available to view and download on the Internet Archive.

The talk is about 90 minutes long, including the question-and-answer session after the presentation. The audience members’ backgrounds were in library science, computer science, publishing, and law, so the Q&A was particularly good in this talk.

On Thursday, October 19th, from 2:00-3:30 p.m., I’ll be giving a SULAIR Technology Chalk Talk on the history of copyright, hosted by the Stanford University Library, in Room 102 of the Hewlett building (on the other side of the quad from Green library). Here’s a map to the building:

I’ll talk about the history of copyright, its original goals and its effects today, and will leave plenty of time for questions and discussion afterwards.

Many thanks to James Jacobs, User Services Technology Specialist at the Cubberley Education Library, for arranging this.

Still composite from street interviews, Chicago, 2006

In order to document the public perception of copyright today, we went around Chicago with a video camera over two days in the summer of 2006, asking strangers what they think copyright is for, how it got started, how they feel about filesharing, and for any other thoughts they have on copyright. We didn’t tell the interviewees about this website or the nature of our project until after each interview was over.

The points that showed up consistently were:

  1. Most people felt that copyright is mainly about credit, that is, about preventing plagiarism.

  2. Everyone was on the artist’s side — everyone wants to feel that they’re treating the artists right. Over and over again, we heard the sentiment that when someone goes to a concert they’ll buy the CD “to support the band”, even if they already have all those songs on their computer.

  3. Many people felt that copyright was about giving creators the means to make a living, but that in recent times it’s been abused and corrupted by corporate interests.

  4. No one — not even the interviewee who had just read a book on copyright — knew where copyright comes from. Most people had the feeling it had been around for a while, though estimates varied widely on how long. One interviewee knew of the Constitutional clause that is the legal basis for copyright in the United States, but wasn’t familiar with the history leading up to that clause.

  5. People were ambivalent about filesharing. They don’t feel like it hurts anyone, except perhaps the music distributors, but they still feel some residual guilt about it anyway.

You can view the video at the Internet Archive. It is also available at Google Video and YouTube.

The video is in the public domain; all participants signed a release form permitting their footage to be used. Many thanks to Ben Collins-Sussman and Brian Fitzpatrick for their help filming, and to Ben for huge amounts of help with editing.

“The Surprising History of Copyright” is coming to LinuxWorld at San Francisco’s Moscone Center, on Wednesday, 16 August 2006, 3:30pm at the O’Reilly Media exposition booth (#928). I’ll be giving a talk about the history of copyright, how the economics of replication and distribution have changed radically since the 1700’s, and how both creators and publishers can flourish using different strategies than the traditional copyright monopoly.

The slides for the talk “The (Surprising) History of Copyright and What It Means for Open Source” are available in PDF and OpenOffice Presentation formats. The slides are meant to be accompanied by the talk, and don’t really stand by themselves, but I’m providing them for those who didn’t have time to write down the URLs and the bibliography at the talk.

The Google Book Search Library Project promises to be, among other things, the greatest plagiarism detector ever created. So why are the Association of American Publishers and the Authors Guild suing Google over its plan to digitize millions of books?

In the case of the AAP, it’s probably because they understand that copyright law really exists to subsidize distributors, not writers or readers. They’re just looking out for their own interests. Or at least they think they are: it’s much more likely that Google search results will improve book sales than hurt them. In any case, one has to pause at the spectacle of a publishers’ association coming out against readers being able to locate the books they’re looking for more efficiently than ever before.

But what’s more interesting, if not exactly unexpected, is that the Authors Guild is reacting in the same way. Here’s what the Guild’s president, Nick Taylor, had to say:

“This is a plain and brazen violation of copyright law. It’s not up to Google or anyone other than the authors, the rightful owners of these copyrights, to decide whether and how their works will be copied.”

How odd. Mostly, authors are not the owners of the copyrights in their work — publishers are. And even in those cases where the author retains copyright, she has usually signed a contract granting exclusive printing and distribution rights to a particular publisher. Nick Taylor’s comment might make sense in some idealistic world where authors typically retain control of their work, but for the authors he represents, the world is rarely like that.

Meanwhile, the Authors Guild ignores an amazing possibility opened up by Google’s project: we will be able to detect plagiarism with a thoroughness hitherto unthinkable. Google is the world’s premier search engine; they have made billions of dollars matching snippets of text together and displaying the results. After digitizing these texts, the natural thing to do is to start looking for ways to cross-reference them. For legitimate citations, the effect of this will be mere convenience: instead of trudging to the library or bookstore, you can click on a link. But for cases of plagiarism, the effect will be a revolution: whereas in the past, discovering plagiarism required that the same person read both books, it will now be possible to flag potential instances of unattributed copying automatically!

So why isn’t the Authors Guild cheering Google on? A clue can be found in the Guild’s self-description, as given at the end of their press release about the Google lawsuit:


“The Authors Guild is the nation’s largest and oldest society of published authors and the leading writers’ advocate for fair compensation, effective copyright protection, and free expression.”

There’s a subtle bit of cognitive slippage going on there. They start out stating (accurately) that they are the largest society for published authors. But then they go on to claim that they are the leading writers’ advocate for fair compensation, effective copyright protection, and free expression. Where did that slide from representing published authors to representing all authors happen?

Anyone who writes is a writer; and thanks to the Internet, any writer who wants to be published can be, by simply making their work available on the Web. This is not wordplay, it is a fundamentally important fact of modern information distribution, as many popular bloggers have learned. The Authors Guild does not represent most authors anymore, if it ever did. It represents a tiny minority of authors: those whose works have been found fit for distribution by a certain kind of publisher, the kind that makes a massive initial investment in a print run and then depends on strict monopoly control of the copyright to recover that investment.

Tellingly, the Guild’s identifying statement doesn’t contain a word about plagiarism, a threat faced by all authors. While texts may be shareable resources, reputation and credit are not: plagiarism is a concern for all writers, no matter how their work is distributed. Yet the Guild’s omission isn’t limited to that one press release. A search for the word “plagiarism” across their entire web site returns only this:

Search word: plagiarism 0 results found.

Perhaps the Guild thinks that the phrase “effective copyright protection” includes plagiarism, but as we have noted elsewhere, copyright “protection” is really not about plagiarism: one can permit limitless attributed copying without approving of or permitting plagiarism. The two are separate, and the Authors Guild, of all organizations, should know this.

The Authors Guild’s heart is in the right place; the problem is just that they’ve bought the industry myth: that authors’ interests are always the same as publishers’. If the AG really wants to look out for the interests of all authors, not just the small percentage with successful monopoly-based publishing arrangements, they’ll knock on Google’s door and ask how they can help. Instead, they’re suing for copyright violation, even though what Google is doing is both well within the bounds of so-called “fair use” and enormously beneficial to the Guild’s members.

The Great Cross-Referencing has begun. Let us hope the Authors Guild sees the light and allows it to continue.
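To make the cross-referencing idea concrete: flagging potential unattributed copying between two digitized texts can be done by comparing overlapping word sequences (“shingles”). This is only an illustrative sketch of the general technique, not Google’s actual method, and the sample sentences are invented.

```python
# Toy sketch of shingle-based overlap detection between two texts.
# The example texts below are invented.

def shingles(text, n=4):
    """Set of n-word sequences ('shingles') in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def overlap_score(text_a, text_b, n=4):
    """Jaccard similarity of the two texts' shingle sets (0.0 to 1.0)."""
    a, b = shingles(text_a, n), shingles(text_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)


original = "the quick brown fox jumps over the lazy dog near the riverbank"
copied = "as we all know the quick brown fox jumps over the lazy dog"
unrelated = "copyright law exists to encourage the creation of new works"

print(overlap_score(original, copied))     # substantial overlap: flag it
print(overlap_score(original, unrelated))  # 0.0: nothing in common
```

The revolution described above is just this comparison run across millions of digitized books at once, something no human reader could ever do.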

[Postscript: When I first wrote this article, I wasn’t aware that Amazon had already been doing in-book searching for some time. This means that Amazon could do automated plagiarism detection as well, and perhaps there are other organizations in the same position. But note that Amazon is not the target of publishing industry lawsuits, probably because Amazon negotiated with publishers for access to book text, rather than just scanning it in the way Google did.]