In November 2014, Kim Kardashian posted two photographs to Instagram sourced from her cover shoot for the winter 2014 issue of Paper magazine. One photo displays Kardashian peering over her shoulder as she slips her sparkly black evening gown beneath her entirely nude, albeit heavily oiled, posterior. The caption reads: “#BreakTheInternet.” The Kardashian photograph quickly stirred conversation, inspiring many social media users to post their own images. For example, comedian Chelsea Handler took to Instagram with a selfie mimicking Kardashian’s pose in a bathroom mirror. The Metropolitan Museum of Art chimed in on Twitter, comparing the Kardashian cover to a picture of a prehistoric Cycladic statue from the Museum’s collection.
Our image-saturated culture is incredibly susceptible to social media spectacles such as this one. On a seemingly regular basis, we are bombarded with depictions of nudity or semi-nudity as we scroll through our News Feeds. However, unlike the example of the Kardashian “khaos,” reactions to posting depictions of nudity or semi-nudity are not always limited to comedic commentary. In some instances, moderators for social media services either remove the content or, in more extreme cases, deactivate the parent account entirely. In fact, Instagram’s most recent Basic Terms of Use explicitly prohibit the posting of “nude, partially nude, . . . pornographic or sexually suggestive photos or other content.” The Basic Terms of Use are supplemented by Community Guidelines, which provide more detail about the types of content users are banned from posting on the app. Instagram’s definition of nudity includes “photos, videos, and some digitally-created content that show sexual intercourse, genitals, and close-ups of fully-nude buttocks.” Interestingly, Instagram’s guidelines carve out an exception for “[n]udity in photos of paintings and sculptures,” but not for nudity in artistic photographs. Terms like these give social media services broad latitude to self-censor content in order to meet the goal of ensuring a “safe and open environment” for their users.
As a general rule, users are bound by the terms of use delineated by social media sites. Social media services are governed by electronic contracts that commonly take the form of browsewrap or clickwrap agreements. In a browsewrap agreement, the terms are posted somewhere on the website, often accessible by clicking on a hyperlink; essentially, the user agrees to the terms by accessing the service or browsing the website. A clickwrap agreement, in contrast, requires that the user do something to assent to the terms (e.g., clicking a box to indicate acceptance). When users create an Instagram account, for example, the app says that “by tapping to continue, you are indicating that you have . . . agree[d] to the Terms of Service.” The effort required to assent is minimal, but the potential impact on users’ ability to post freely is noteworthy.
To aid in enforcement, social media sites put their own users to work. Users on Instagram can report what they deem to be inappropriate content by selecting from a list of criteria that includes, among other things, “this photo puts people at risk” and “this photo shouldn’t be on Instagram.” If the unhappy user indicates that he merely does not like the photo, the app suggests that he block the user posting the disagreeable content. By collecting reports directly from their users, social media services can arguably get a better sense of what their users find offensive versus entertaining. The availability of user-generated reports may also reduce the need for moderators to scour their ever-growing networks for impermissible content, which in turn reduces enforcement costs.
However, many users have continued to push back against takedowns and other forms of self-censorship on various social media sites. Earlier this year, art critic Jerry Saltz wrote an article about being “kicked off” Facebook after users reported his posts of Greek and Roman art and medieval manuscripts. The art featured nudity, “defecation, plague, . . . torture,” and other similarly graphic content. Saltz implored users to unfollow or block him rather than report him. In another example, a French Facebook user sought a remedy in court after Facebook deactivated his account when he posted an image of Gustave Courbet’s famous painting, L’Origine du monde (The Origin of the World). Numerous proponents of the Free the Nipple (#freethenipple) campaign have also tested limits on social media with the goal of drawing attention to the imbalanced practice of censorship. Criticism of takedowns is a persistent problem facing social media services. Oftentimes, negative feedback does result in a clarification of the terms of use, giving users a sharper sense of their rights.
In any event, the persistence of conflict illustrates that regulating content through browsewrap and clickwrap agreements is a thorny matter. This is important because social media plays such an integral role in our increasingly digital culture. Despite best efforts to provide clarity in terms of use and community guidelines, defining the boundary between permissible and impermissible content is fraught with complexities and will undoubtedly continue to be a contentious issue for some social media users.
Katherine C. Nemeth is a J.D. candidate, 2017, at NYU School of Law.