As we’ve pointed out before, copyright began as a compromise censorship law, and is still frequently used as a means of censorship today.
The U.S. Senate is not being overly subtle about it lately, either. The so-called “SMART Copyright Act of 2022” would effectively force online platforms to use content-monitoring software designated by the U.S. Copyright Office to detect user-uploaded content that some monopolist asserts violates their monopoly. Well, technically, the platforms aren’t required to use the content-monitoring software — they would just face absurdly, existentially high penalties if they were found guilty of copyright infringement and had not been cooperating with Big Brother. To be fair, the platforms can make their own choice, right?
So we would now have compelled use of government-designated code, on top of the already well-documented problems that have plagued automated content-monitoring software for years:
- Lots of false positives, causing content to be mistakenly taken down by the hosting service with no practical way for people to contest the error. Here at QCO we argue that censoring non-confidential content is inherently a mistake anyway, but even if you think copyright justifies that censorship, the fact is that automated monitoring systems make lots of mistakes even on their own terms, and no platform provides adequate recourse to the victims of those mistakes, because…
- …this bill, like all the others before it, contains no meaningful penalties for false claims of copyright ownership or of infringement. Instead, all of the terms favor the content monopolists: if you share things that they legally monopolize, you (or the service provider) pay a price, but if the monopolists wrongly claim that you have done so, they face no penalty for being wrong (and, in general, you will still pay the price: your stuff gets censored anyway).
Public Knowledge has already put out a good piece explaining what’s wrong with this bill, and you can easily find other groups opposing it too.
We would add:
An automated system to detect, flag, and take down content from online services is, by definition, a technical system for implementing censorship. As it increasingly becomes a government-directed censorship system (which is what this bill is the start of), the temptation will become irresistible to use it for purposes beyond copyright-based censorship. “Oh, hey, we’ve got this great content ID system in place, so now we can use it to flag all this other bad stuff too.” For “other bad stuff”, substitute pretty much anything you think a DOJ lawyer might be able to persuade a judge to set aside her First Amendment concerns for: illegal (ahem) foreign propaganda trying to influence elections, medical misinformation, information about the activities of U.S. military forces overseas…
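To make that temptation concrete, here is a deliberately simplified sketch in Python. All of the names are invented for illustration, and real deployed systems are far more elaborate (they use fuzzy perceptual fingerprinting of audio and video rather than exact hashes), but the essential point holds: the matching machinery has no idea what it is censoring, and repurposing it is just a matter of handing it a different blocklist.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Reduce content to a compact identifier. Real systems use fuzzy
    perceptual hashes so that re-encoded copies still match, which is
    also exactly where the false positives come from."""
    return hashlib.sha256(data).hexdigest()

# Whoever operates the system (not the uploader) decides what goes in here.
COPYRIGHT_BLOCKLIST = {fingerprint(b"some monopolized song")}
OTHER_BAD_STUFF = {fingerprint(b"some disfavored speech")}

def should_take_down(upload: bytes, blocklist: set[str]) -> bool:
    """The flag-and-remove decision: opaque to the user, no appeal built in."""
    return fingerprint(upload) in blocklist

# Today: copyright enforcement.
assert should_take_down(b"some monopolized song", COPYRIGHT_BLOCKLIST)

# Tomorrow: the same function, a different list.
assert should_take_down(b"some disfavored speech", OTHER_BAD_STUFF)
```

Nothing in `should_take_down` is specific to copyright. That is the whole problem.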
Government-chosen, government-mandated automated censorship technology. If that sounds like a bad idea to you, then (if you’re in the U.S.) please get on the horn and let your senators and representative know.
By the way, in addition to the bill’s basic flaws of premise and design, it is also extremely poorly drafted: it’s full of gaping meaning voids like “…a broad consensus of copyright owners and service providers in an open, fair, voluntary, multi-industry process…” and “…a broad consensus of relevant copyright owners and relevant service providers, in an open, fair, voluntary process…”, etc. It is, essentially, an invitation to judges to slap arbitrarily high penalties on any platform that accepts user-generated content but does not fall in line with whatever software and content rules Uncle Sam dictates.
If there’s a bright spot here, it’s that legislation like this will only drive people and platforms toward peer-to-peer encryption and platform-opaque systems even faster than they were already being driven there by pervasive surveillance and eroding civil liberties.