Ashley Ulrich is a J.D. candidate, Class of 2021, at NYU School of Law.

For several years, pressure has been building in courts and Congress for limitations on or repeal of Section 230 of the Communications Decency Act (“CDA”) of 1996. This section, which largely protects website hosts from liability arising out of third-party posts, helps to ensure broad protections for speech online. However, the section has also rendered victims of defamation, harassment, revenge porn, and other crimes and abuses online virtually powerless against anonymous aggressors, as they lack any means to pressure websites to issue takedowns and bar malicious users from creating new accounts.

Background to Section 230 of the CDA

In defamation law, both the speaker of a defamatory statement and her publisher or distributor may face liability. A publisher may be held liable only if he was at least negligent in communicating the defamatory material, whereas a distributor is liable only if she knew that the material she distributed, transmitted, or broadcast was defamatory. A publisher is characterized as exercising editorial control over the material he disseminates, akin to the publisher of a typical newspaper, whereas a distributor lacks editorial control, akin to a public library, bookstore, or newsstand.

In the mid-1990s, whether to treat interactive computer services providers—mostly message board hosts and early social networking websites—as publishers or distributors of their users’ posts for the purpose of assigning liability in defamation cases fell into a legal gray area. Two New York cases, one federal and one state, split on the issue. In Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135 (S.D.N.Y. 1991), the court held that an internet-based communications platform was a distributor and thus not liable for a user’s defamatory post where it lacked knowledge that the material was defamatory. In Stratton Oakmont, Inc. v. Prodigy Services Co., 23 Media L. Rep. 1794 (N.Y. Sup. Ct. 1995), the court went the other way, finding a provider of online bulletin boards to be a publisher and holding it liable for a failure of ordinary care. In the first case, the court was swayed by the defendant’s lack of editorial control over users’ content, whereas in the latter, the court determined that the defendant’s attempts to moderate offensive content demonstrated editorial control.

The decision in Stratton Oakmont resulted in significant industry backlash, as website hosts complained of the heightened cost of reviewing many thousands of user posts online, as well as the significant liability risk they would face for failing to do so adequately. Within a year, Congress devised a solution: as part of the Communications Decency Act (“CDA”) of 1996, Congress granted interactive computer services providers broad immunity from claims arising out of users’ posts to their websites. Specifically, Section 230 required that interactive computer services providers not be treated as publishers or speakers of content posted by third parties to their websites. Section 230 also immunized these businesses from civil liability for voluntary steps taken to restrict content they deemed obscene, lewd, excessively violent, harassing, or otherwise objectionable, regardless of whether restricting the same content would be unconstitutional if the government were to undertake it. Further, Section 230 preempted state tort claims inconsistent with the section.

According to U.S. Sen. Ron Wyden (D-OR), then a congressman and co-author of the bill that became the CDA, Section 230 was supposed to offer interactive computer services providers a “sword and shield”: the legislation would allow these businesses to moderate content they deemed inappropriate without fear of heightened liability exposure as a publisher, i.e., the “sword,” and would generally protect them from liability arising out of users’ defamatory posts to their websites, i.e., the “shield.” Lawmakers feared that without the former protection, firms would forgo any self-moderation of user posts online, lest they fall into a heightened-liability trap like the defendant in Stratton Oakmont. Lawmakers also feared that without the latter protection, smaller start-up tech firms would be saddled with a costly, time-consuming requirement to monitor users’ posts and investigate any content they believed to be defamatory. Such a requirement might be largely ineffective and produce a chilling effect on online speech.

Early cases applying Section 230 expanded on the already broad protections for interactive computer services providers from defamation suits. In Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997), the court held that Section 230 gave providers of interactive computer services not just immunity from publisher liability in defamation suits, but also immunity from distributor liability. Thus, the plaintiff’s repeated notices to defendant America Online that posts about him on its platform were defamatory were insufficient for a finding of liability, as the platform owner owed the plaintiff no duty to remove or correct these posts.

Since Zeran, courts have continued to find that Section 230 grants immunity to website hosts that disseminate user posts later determined to be criminal or illegal. Website hosts have been found to be immune where users made use of their services to post messages that were in furtherance of a financial crime, in furtherance of sexual crimes against minors, and intended to coordinate terrorist activities.

Section 230 Today: The “Material Contributions” Test and “Neutral Tools” Doctrine

Despite Section 230’s broad sweep, an interactive computer services provider may be liable for a third party’s posts to its website that are found to be criminal or illegal where it made a “material contribution” to those posts. In this context, the provider’s conduct must extend beyond merely providing a forum for third-party posts and instead demonstrate that the website host contributed to the aspect of the posts that is improper.

In Fair Housing Council of San Fernando Valley v. Roommate.com, LLC, 521 F.3d 1157 (9th Cir. 2008), the website host was found to have made a material contribution to illegal content on its website, in violation of the Fair Housing Act and California’s anti-discrimination law, where it required users to disclose their race, sex, sexual orientation, and familial status in an application to be matched with roommates, and asked users to rank potential roommates according to these characteristics. The court determined that, by specifically querying users for housing preferences that were illegal under fair housing and anti-discrimination law, the website had made a material contribution to the aspect of the posts determined to be illegal.

A website that provides features and utilities that may be used for proper or improper purposes, however, will retain its Section 230 immunity in the case of user misuse, on the theory that these features and utilities are “neutral tools.” In Roommate.com, for example, the website’s open text box, in which users could list additional roommate characteristics for Roommate.com to use in creating housing matches, was found to be a neutral tool, as users could enter information in accord with or in violation of fair housing and anti-discrimination law. Because Roommate.com was merely responsible for providing the text box and not for writing the users’ messages found to implicate the relevant law, it remained protected by Section 230. Other recognized neutral tools include a rating system for consumers to rate businesses and the ability for users to create profile pages and invite members.

Under the neutral tools doctrine, a website host is under no obligation to monitor or remove improper content created with these tools. Nor is it under any obligation, following actual or constructive notice of misuse, to change the design or implementation of its tools. In Goddard v. Google, Inc., 640 F. Supp. 2d 1193 (N.D. Cal. 2009), the website host’s provision of digital advertising services and tools to businesses that advertised fraudulent products was found to be protected under the neutral tools doctrine, even where the website host knew or should have known that some third parties were misusing its tools.

Applying the “Material Contributions” Test and “Neutral Tools” Doctrine to Recent Cases

What appears to be a mostly fair and workable balance for determining interactive computer services provider liability has in fact produced findings of no liability in two recent appellate cases in which websites’ business practices put consumers at especially foreseeable and grave risk of harm.

In the first case, Daniel v. Armslist, LLC, 926 N.W.2d 710 (Wis. 2019), cert. denied, 140 S. Ct. 562 (2019), the plaintiff alleged that the defendant, an internet-based platform for arms sales, knew or should have known that its website features facilitated the sale of guns to dangerous buyers, including those barred by law from buying a gun. In particular, the website allowed users to filter active listings by whether the seller required a background check, as would be required at standard firearms stores. Private sellers, however, were not required to conduct a background check under Wisconsin state law. Thus, buyers like the third party at issue in the case, who were barred from purchasing a gun at standard stores, could easily use the website’s tools to evade otherwise prevailing restrictions on their gun purchases. The defendant also did not require any disclosure of website users’ identities to broker a sale, nor did it verify whether a buyer was legally able to buy a gun. It also undertook no effort to inform sellers of relevant federal and state laws applying to gun sales. Despite foreseeable misuse by buyers barred from purchasing guns at standard firearms stores, the court found that the defendant’s search tool was a neutral tool because it could be used by a buyer who was permitted to purchase guns from a private seller as well as by one who was not. The court reiterated that even if the defendant were on actual notice that users were misusing this tool, it was under no obligation to design its website to be safer.

In the second case, Herrick v. Grindr LLC, 306 F. Supp. 3d 579 (S.D.N.Y. 2018), aff’d, 765 F. App’x 586 (2d Cir. 2019), cert. denied, 140 S. Ct. 221 (2019), the plaintiff alleged that the defendant, an online dating platform, failed to take even ordinary care to prevent users from facing harassment, stalking, threats of violence, and various privacy torts in their interactions with other users on the platform. The plaintiff faced harassment, stalking, and threats of violence from many dozens of men after his ex-boyfriend created accounts impersonating him on Grindr. The plaintiff also experienced a disclosure of private information and defamation arising from an untrue statement about his HIV status. Yet, despite roughly one hundred requests by the plaintiff that the defendant remove the impersonating accounts, Grindr failed to take any action. In reasoning similar to that in Daniel, the court characterized Grindr’s internet-based service as providing neutral tools and functionalities to users that could be used for proper or improper purposes: drop-down menus for sexual preferences and open fields for user biographical information could be used for bona fide matchmaking purposes, as well as for harassment and privacy torts. The court did not find that Grindr had made a material contribution to the third party’s misuse of these tools, nor did it find that Grindr was under any obligation to make its platform safer.

A Better Way Forward

There are a couple of ways that Section 230 might be refashioned to better protect consumers online. Given the more than 20 years of jurisprudence flowing from Zeran, these solutions would likely need to be taken up by Congress in a new statute.

One option would be to modify Section 230’s general rule of no duty to monitor or remove content posted by third-party users, and instead require website hosts to facilitate notice and takedown procedures in exchange for a safe harbor for compliance, much like the method for handling copyright disputes online under the Digital Millennium Copyright Act (DMCA). Under the DMCA, websites must provide a procedure by which copyright owners can flag content as infringing. To trigger a takedown, the person or group giving notice must attest that it holds an exclusive right to the work claimed to be infringed, must identify the allegedly infringing material, and must state that the third party’s use was not authorized. Upon receiving a compliant notice, the website host must remove or disable access to the allegedly infringing content and then make a good-faith attempt to notify the person or group that submitted the material, who may respond with a counter-notification seeking reinstatement. So long as a website host administers this process in accordance with the statutory guidelines, it will not face liability for errors made in doing so.
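To make this sequence concrete, below is a minimal sketch, in Python, of how a website host might model a DMCA-style notice-and-takedown workflow. It is purely illustrative: the class, field, and method names (TakedownNotice, HostedItem, NoticeAndTakedownService, and so on) are invented for this sketch, and the logic simplifies the statutory requirements considerably.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical, simplified model of a DMCA-style notice-and-takedown flow.
# Names and rules are invented for illustration; this is not a description of
# the statute's exact requirements or of any real platform's implementation.

class Status(Enum):
    POSTED = auto()
    REMOVED = auto()    # taken down after a facially valid notice
    RESTORED = auto()   # reinstated after an unrebutted counter-notification

@dataclass
class TakedownNotice:
    claimed_work: str        # identification of the work claimed to be infringed
    infringing_url: str      # identification of the allegedly infringing material
    owner_attestation: bool  # notifier claims to hold the exclusive right at issue
    no_authorization: bool   # notifier states the third party's use was not authorized

    def is_facially_valid(self) -> bool:
        return bool(self.claimed_work and self.infringing_url
                    and self.owner_attestation and self.no_authorization)

@dataclass
class HostedItem:
    url: str
    poster_contact: str
    status: Status = Status.POSTED

class NoticeAndTakedownService:
    """The host's side of the process: remove on a valid notice, notify the
    poster so that a counter-notification can be made, and restore the material
    if the complaining party does not pursue the claim."""

    def receive_notice(self, item: HostedItem, notice: TakedownNotice) -> None:
        if not notice.is_facially_valid():
            return  # an incomplete notice triggers no obligation in this sketch
        item.status = Status.REMOVED
        self.notify_poster(item)

    def notify_poster(self, item: HostedItem) -> None:
        print(f"Notice to {item.poster_contact}: content at {item.url} was removed; "
              f"a counter-notification may be submitted.")

    def receive_counter_notice(self, item: HostedItem, complainant_filed_suit: bool) -> None:
        if item.status is Status.REMOVED and not complainant_filed_suit:
            item.status = Status.RESTORED
```

The point of the sketch is the ordering: removal follows a facially valid notice, the poster is then notified and may counter-notify, and the material comes back only if the complaining party does not pursue the claim.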

A notice and takedown procedure would work well to protect internet users from privacy torts such as public disclosure of private facts, from defamation, and from criminal or otherwise illegal conduct like harassment and revenge porn. A website user harmed by this conduct is in the best position to identify the material online and could easily alert the website host to it. A notice and takedown procedure would then ensure that defendants like Grindr quickly acknowledge and respond to notice from plaintiffs like Herrick of obviously defamatory material or harassing conduct. A plaintiff like Herrick, for example, could easily validate that his image was being misused by providing the website host with a photo of his government-issued ID and affirming that he did not consent to his image’s use. Many dating websites note that they already implement tools to flag suspicious conduct based on the number of messages sent, keyword matches, and recent location changes. Requiring a notice and takedown procedure would ensure that plaintiffs at least have grounds for recourse where some websites fail to keep up with best practices.
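For illustration only, the fragment below sketches the kind of heuristic flagging described above, combining message volume, keyword matches, and implausible location changes. The thresholds, keyword list, and function name are assumptions made up for this sketch, not a description of any platform’s actual system.

```python
# Toy abuse-flagging heuristic; every threshold and keyword here is invented.
SUSPICIOUS_KEYWORDS = {"send money", "wire transfer", "meet me now"}

def should_flag(messages_sent_last_hour: int,
                message_text: str,
                km_moved_last_day: float) -> bool:
    keyword_hit = any(k in message_text.lower() for k in SUSPICIOUS_KEYWORDS)
    too_many_messages = messages_sent_last_hour > 50
    implausible_travel = km_moved_last_day > 1000
    # Route the account to human review if at least two independent signals fire.
    return sum([keyword_hit, too_many_messages, implausible_travel]) >= 2
```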

A second option would be to read into the “neutral tools” doctrine a requirement that the website host have a good-faith belief that its tools are being used for a proper purpose. Thus, where a website host is on actual or constructive notice of significant misuse, it would be required to alter the design of that aspect of its website. Especially since a website host has broad access to information about user practices and controls both the look and the function of its tools, such a requirement is eminently fair. In Roommate.com, the website host could have reduced misuse of its open text box by posting a single sentence above the box informing users not to enter roommate preferences describing race, sex, or sexuality, as doing so would violate federal and state housing and anti-discrimination laws. It could also have screened roommate applications using keyword searches and removed this information before it was used to generate roommate matches. Finally, as in Daniel, slight changes like informing users of federal and state statutes affecting gun sales, or requiring verification that interested buyers were legally able to buy guns, could have ensured that users were making use of its tools for a proper purpose.
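As a rough illustration of the keyword-screening idea, the snippet below strips references to protected characteristics from a free-text preference field before it is passed to a matching engine. The pattern list and function name are placeholders invented for this sketch; a real filter would require far more care and legal review.

```python
import re

# Illustrative only: remove references to protected characteristics from the
# open text box before matching. The pattern list is a made-up placeholder.
PROTECTED_TERMS = [
    r"\bno\s+(men|women|kids|children)\b",
    r"\b(white|black|asian|latino)s?\s+only\b",
    r"\b(straight|gay|lesbian)s?\s+only\b",
]

def screen_preferences(free_text: str) -> str:
    cleaned = free_text
    for pattern in PROTECTED_TERMS:
        cleaned = re.sub(pattern, "[removed]", cleaned, flags=re.IGNORECASE)
    return cleaned

# Example: the matching engine receives the screened text, not the raw entry.
print(screen_preferences("Quiet professional wanted, straight only, no kids"))
```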
