Every day, more than 400,000 hours of new video are uploaded to YouTube. In the same time span, more than 30 million visitors stop by, making it the third-most-visited website in the world. Combined with Netflix, the two sites account for more than half of all internet traffic in North America. Yes, YouTube is huge. While people watch everything from cat videos to presidential speeches on the platform, they also use it for a wide range of purposes — including pirating movies, music, and television shows. Therein lies a quandary: big copyright holders cannot depend on contractors or employees to sift through all the videos looking for violations, as it would take more than 50,000 people working full time just to watch everything. Instead, YouTube's copyright control system runs on autopilot, helmed by an algorithm called Content ID.
For some, this presents a serious problem. “Content ID disrupts the Internet community by blocking or removing lawful videos,” says Laura Zapata-Kim, writing in the Boston College Law Review. Like most massive, automated systems, Content ID loses in nuance what it gains in scale. According to Zapata-Kim, these algorithms “thereby test which values are at the heart of a business: efficiency or accuracy. The automated systems save huge businesses time and money in detecting and removing infringing material, but they frequently err in identifying that material.” Without humans reviewing Content ID’s actions, the policy is effectively “shoot first, ask questions never.”
One of the nuances that gets lost in the process is whether certain material constitutes fair use. Content ID has no way of knowing the answer to this question; it cannot tell if a snippet of music comes from a pirated upload or from a music critic discussing a recent album. One of those is illegal, the other is not, but Content ID gets rid of both.
YouTube is like a large landowner in a growing town. To demonstrate that its property is crime-free, YouTube establishes its own private law enforcement on-site. This private force does not enforce the same laws as the public police; rather, it goes above and beyond to ensure that nothing gets out that could even look like a crime on YouTube's property. If that means putting handcuffs on everyone with a pocket knife, then so be it (and even then, it is imperfect). Anyone who steps on YouTube's property has to agree to be governed by the rules of this private force as well — and visitors cannot run to the local police when one of YouTube's robots takes hasty action. This pseudo-judicial approach, rather than protecting parties according to the law, protects YouTube from the law.
That law is the Digital Millennium Copyright Act (DMCA), enacted in 1998, a full seven years before YouTube existed. The DMCA, designed to address copyright issues in digital spaces, outlines a set of procedures for handling problems, including a system of official take-down notices and counterclaims. It also affords some protections to media content platforms like YouTube against charges of copyright infringement. However, judges have found these protections to be conditional. YouTube’s fear — and the reason why Content ID exists — was that if it did not do something to proactively control its platform, then it could indeed be held liable for any copyright-violating material uploaded (and record companies are notoriously litigious).
That control is far-reaching. Zapata-Kim points out just how powerful Content ID can be: “The effects of Content ID are thus similar to the effects of an official takedown request through the [DMCA], yet content creators are not allowed to hold parties liable for misrepresenting a false or fraudulent Content ID match.” If your parody video with just a few seconds of copyrighted music goes up, it will get muted, monetized on the claimant's behalf, or blocked in the blink of an eye. The only option is to appeal to YouTube directly, and YouTube is under no obligation to reinstate the video, even if the material is legal.
Given the choice, YouTube would much rather not devote the resources to running Content ID and handling its fallout. Abandoning it, however, would mean opening the company up to lawsuits big and small (mostly big), because YouTube is expected to provide this extrajudicial police work. James Boyle, author of the 2008 book The Public Domain: Enclosing the Commons of the Mind, describes how copyright became what it is and where it might go. He mentions a concept that he terms “the Internet Threat — the belief that control must rise as copying costs fall.” Because YouTube makes video uploading as easy as dragging a file and clicking a couple of buttons, pirating and mass-sharing the newest Marvel superhero movie is a piece of cake. To counter this threat, big copyright holders argue, copyright needs to control not only what gets distributed, but also how we distribute it. “Cheaper copying and the logic of the Internet Threat will always drive us toward giving more control over our communications architecture to the content industries,” says Boyle.
What happens in the future regarding YouTube and Content ID matters for more than just online videos. Ideas and creative works constitute both a growing share of the economy and a significant part of people’s everyday lives. Content ID presages an environment in which copyright is enforced with blinding, and sometimes blind, speed. Though Zapata-Kim frames the conflict as between efficiency and accuracy, it might be better to think of these problems as a tension between justness and completeness: should the laws first and foremost protect the innocent, or punish the guilty?
Peter Hunt is a graduate of the University of Texas at Dallas.