About Michael J. Kasdan

Posts by Michael J. Kasdan:

Is Facebook Killing Privacy Softly? The Impact of Facebook’s Default Privacy Settings on Online Privacy

By Michael J. Kasdan*

A pdf version of this article may be downloaded here.

“IMPORTANT!!  Tomorrow, Facebook will change its privacy settings to allow Mark Zuckerberg to come into your house while you sleep and eat your brains with a sharpened spoon.  To stop this from happening go to Account > Home Invasion Settings > Cannibalism > Brains, and uncheck the “Tasty” box.  Please copy and repost.” – Satirical post from a friend’s Facebook status, February 15, 2011.

Introduction

Since launching its now-ubiquitous social networking website out of Mark Zuckerberg’s Harvard dorm room in early 2004, Facebook has rapidly become one of the most dominant websites on the planet.  And “rapid” doesn’t quite do it justice.  It has been estimated that over 40% of the U.S. population has a Facebook account.[FN1] Facebook now boasts over 600 million active user accounts[FN2] and was recently estimated to be adding user accounts at the unbelievable clip of well over half a million new users per day.[FN3]

The very nature of a social networking site like Facebook is to provide its users with a platform through which they can share massive amounts of personal information.  Facebook has created a platform where users can post personal data such as their contact information, birthdays, and favorite movies, books, music, and news articles; share scads of written comments and notes; post pictures and videos of themselves and others; associate themselves with various products, services, and groups; and post information about where they are and what they are doing.

Over its six-year existence, Facebook’s privacy policy – the set of rules that dictates which information is shared and with whom – has undergone significant systemic revisions that have collectively had the effect of encouraging, and in some cases requiring, users to share more personal information with bigger groups of people and companies.
Facebook’s original privacy policy provided that no personal information would be shared with any other user who did not belong to a group specified in the user’s privacy settings.  The principle behind this policy was one of user control over personal information.  By contrast, under today’s Facebook privacy policy, owners of numerous websites and applications may access broad categories of user information, and the default settings are such that many categories of user information will be widely accessible unless users carefully review and modify them.

This article explores the background, impact, and legal and policy challenges posed by Facebook’s evolving privacy policy.

Background

Facebook (www.facebook.com) is a social-networking website that is privately owned and operated by Facebook, Inc.  Facebook is free to use.  Once registered, users of the Facebook.com website may create a personal profile and can then build their “social network” by inviting other users to be their “friends.”  Users can upload photos and albums and update their “status” to inform their friends of their whereabouts, actions, and thoughts.  Users and their friends may communicate with each other through private and/or public messages (i.e., privately, through email, or publicly, by writing or posting a comment on another user’s “wall”), view and comment on each other’s status updates and postings, and share and comment on each other’s pictures, videos, and other Internet content.[FN4]

Facebook users can also associate with and recommend (i.e., “like”) brands, products, services, web pages, and articles posted all over the Internet by clicking a “like” button on Facebook or on those web pages.  When a user’s friend views that same web page, he can see which of his friends have “liked” the page.  A user’s “likes” are also posted in that user’s “newsfeed,” which is a running list of comments, pictures, status updates, etc. of that user and his friends that is visible to friends.
These “likes” are, of course, also recorded by Facebook’s business partners that are associated with brands, products, and services.  Most recently, Facebook has added a “check-in” or “places” feature.  Using this feature, Facebook users can indicate that they are currently at a restaurant, store, bar, or other real-world location.  This information is posted onto their profile and is also recorded by Facebook for use by its business partners, which may include, for example, the restaurant or bar at which the user has checked in.

In addition, Facebook has partnered with certain third-party websites, such as Yelp, to provide Facebook “personalization features” for its users.  Specifically, if a user has a Facebook account and goes to the Yelp website, a site that collects user reviews about businesses such as restaurants and bars, that user will be able to see which of his Facebook friends have reviewed a particular business and which friends have “liked” a particular business, and to review his Facebook friends’ Yelp reviews and “likes.”[FN5]

Facebook users can also access third-party applications (“apps”) on the Facebook site.  These apps include trivia quizzes, games, and other interactive content.  Many of these applications gather personal information about the user and his Facebook friends.

There clearly are tremendous benefits to the social networking experience on Facebook.  The broad disclosure by users and their friends of all sorts of personal details about themselves is central to Facebook’s functionality.  It is in large part what makes Facebook interesting, interactive, and fun for its users.  It is equally (if not more) important to Facebook as a business, and the key to its ability to monetize Facebook.com.  Indeed, much of the perceived value of Facebook as a business[FN6] lies in its ability to gather personalized information about its massive user base and to leverage that user base.
The costs of these same activities, in terms of the sacrifices to one’s own personal privacy, may be harder to spot at first, but they are also significant.[FN7]

Facebook’s Privacy Policy – A Brief History

Facebook’s privacy policy has undergone a significant shift over its relatively short existence.  Its original policy limited the distribution of user information to a group of that user’s choice (thus creating a private space for user communication).  By contrast, its current policy makes much user information public by default and requires other information to be public.  This public information is accessible by Facebook and its business partners and advertisers.  The shift in Facebook’s default privacy settings over time is perhaps most strikingly illustrated by an info-graphic created by Matt McKeon, a developer at the Visual Communication Lab at IBM Research.[FN8] The blue shading indicates the extent to which the viewing of various categories of information is limited to a user’s friends, friends of friends, all Facebook users, or the entire Internet.  Heavier shading towards the outer part of the circle indicates that the information is more widely accessible.
The History of Facebook's Default Privacy Settings
Facebook has been criticized by privacy advocates and industry watch groups for its revisions of its privacy policies.  For example, after Facebook rolled out its revised privacy settings in late 2009, the Electronic Frontier Foundation (“EFF”) correctly concluded that these changes reduced the amount of control users have over their personal data while at the same time pushing Facebook’s users to publicly share more of their personal information than before.[FN9] As the EFF put it, viewing Facebook’s successive privacy policies from 2005-2010 “tell[s] a clear story.  Facebook originally earned its core base of users by offering simple and powerful controls over their personal information.  As Facebook grew larger and became more important . . . [it] slowly but surely helped itself – and its advertising and business partners – to more and more of its users’ information, while limiting the user’s options to control their own information.”[FN10]

Under Facebook’s current privacy policy, certain personal information, such as a user’s name, profile pictures, current city, gender, networks, and pages that user is a “fan” of (now, pages that user “likes”), is deemed “publicly available information.”  And this user information is now accessible by Facebook applications that are added by any of that user’s Facebook friends, even if that user does not use those applications.
In March 2011, Facebook announced that it would be moving forward with a plan to give third-party developers and external websites the ability to access Facebook users’ home addresses and cell phone numbers.[FN11] Facebook users may not restrict access to this information to a more controlled group or prevent application developers from accessing it.[FN12]

In addition, when a Facebook user “likes” a product or service or “checks in” to a place, such as Starbucks, Facebook displays that information both in the user’s news feed and as part of a paid advertisement for that business.  This functionality is called “Sponsored Stories,” and Facebook users cannot opt out of the use of their information in Sponsored Stories if they “like” or “check in” to a business or service.[FN13] As a result of these changes, Facebook users are now sharing a great deal of personal information with the third-party companies that partner with Facebook to develop applications and advertisements.[FN14]

Finally, Facebook’s “privacy transition tool,” which guides users through the configuration of privacy settings, will “recommend” (i.e., preselect by default) that each user’s privacy settings for sharing information posted to Facebook, including status messages and wall posts, be set to share with “everyone” on the Internet.  The prior default setting for such information had been limited to each user’s “Networks and Friends” on Facebook.  As discussed in the following section of this article, default settings are often outcome-determinative.  It is human nature to accept and not change the suggested default settings.  In this way, Facebook’s “privacy transition tool” results in more users shifting their privacy level to share their information with more people than before.[FN15]

This erosion of privacy should come as no great surprise.
Social networks like Facebook benefit from loose privacy rules:  “the more incentives [Facebook] create[s] for people to share data, the more valuable the network . . . because [Facebook] ha[s] data you can resell or study for marketing trends.”[FN16] Controlling, storing, using, and providing access to or analytics concerning vast stockpiles of user data is tremendously lucrative.  Because Facebook makes money through targeted advertising and the like, reducing the privacy settings of its service is to its financial benefit.[FN17] To this end, Mark Zuckerberg and Facebook have taken the position that “Facebook has always been about friends and community and that therefore the default has been skewed towards sharing information rather than restricting it.”[FN18] This position also aligns with Facebook’s profit motive, monetization end-game, and growing valuation.[FN19]

Default Settings Matter a Great Deal

What is important to keep in mind in the ongoing debate about Facebook’s privacy settings is the significant power of default settings in affecting user behavior and outcomes.  When defending its increasingly “public” default privacy settings, Facebook often focuses on the fact that it gives its users the ability to change these privacy settings to control information (though not all information) more tightly, if they so choose.  But the reality is that defaults are often determinative.  Most users surely clicked through the new default settings without realizing it.  And while users could, theoretically, change these more public “recommended” settings by navigating through the detailed privacy settings, doing so takes more effort.[FN20]

Defaults have a particularly strong influence in software.  System or device defaults are rarely altered by users.
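The outcome-determinative effect of defaults can be made concrete with a toy model.  The function and the numbers below are purely illustrative assumptions, not Facebook's actual figures; the point is only that when a small minority of users ever changes a setting, the chosen default fixes the result for nearly everyone.

```python
def public_share_rate(default_is_public: bool, fraction_who_change: float) -> float:
    """Fraction of users whose information ends up public, in a toy model
    where the minority of users who act always switch away from the default.
    Both the model and the numbers used below are illustrative assumptions."""
    if default_is_public:
        # Everyone shares publicly except the minority who opt out.
        return 1.0 - fraction_who_change
    # Only the minority who act bother to opt in to public sharing.
    return fraction_who_change

# Suppose, hypothetically, that 15% of users ever touch their settings:
opt_out_regime = public_share_rate(True, 0.15)   # roughly 85% end up public
opt_in_regime = public_share_rate(False, 0.15)   # roughly 15% end up public
```

Under this sketch the user base and its preferences are identical in both regimes; only the default differs, yet the population-level outcome flips from overwhelmingly public to overwhelmingly private.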
And commentators have observed that “psychological studies have shown that the tiny bit of extra effort needed to alter a default is enough to dissuade most people from bothering, so they stick to the default despite their untapped freedom.”[FN21] With the rise of ubiquitous network software systems like Facebook, the outcome-determinative nature of defaults has the ability to fundamentally influence social concerns, such as privacy.[FN22]

Indeed, the evolution of Facebook’s privacy settings demonstrates the company’s understanding of the importance of default settings.  On the one hand, Facebook does provide a good deal of granular control to its users in terms of privacy settings.  But on the other hand, as studies in human-computer interaction and behavioral economics show, users tend to favor the status quo or default settings.[FN23] In the case of Facebook, these are the privacy recommendations and default settings that are provided.  Furthermore, Facebook’s programs that pass information to its third-party business partner sites, such as Yelp, require users to “opt out,” which means that Facebook will freely disseminate user information unless the user affirmatively objects.  Therefore, even though Facebook offers detailed privacy options, by pre-selecting the default settings for those user privacy settings and requiring users to affirmatively opt out, Facebook is effectively “dictating what kind of privacy [users] will or will not have.”[FN24]

The Risks

So what?  Aside from throwing around important-sounding words like “privacy issues,” what is the big deal? Recently, industry watch groups, like the EFF and Consumer Reports, as well as the U.S. government, have articulated a host of real-world concerns.
For example, posting personal information (including birthdates, street addresses, and whether you are home or away) can expose a user to crime of either the cyber- or real-world variety.[FN25] In addition, the privacy settings of users and their Facebook friends can expose users to harassment, malware, spyware, identity theft, viruses, and scams.  For example, a recent article estimated that of the 18 million Facebook users who used “apps” of Facebook’s business partners and advertisers, roughly 1.8 million (or 10%) had their computers infected by these applications.  Many of these applications access a large swath of personal information, often without the user realizing it.[FN26]

Aside from these crime risks, there are serious “social” and “commercial” risks as well.  Sharing the likes and dislikes of users and their friends, as well as the places they go and the products they recommend, could lead to a world where companies, advertising agencies, and others who seek to influence your behavior are able to track each individual user to such an extent that they can compile a set of incredibly granular and personal details about each person, including what time he gets up, where he goes, what he buys, what he reads, what his political views are, etc.[FN27] For many, that may be an uncomfortable place to be.

Addressing the Issue of Online Privacy – Personal Choice, Regulation and Enforcement, or Both?

Broadly speaking, there are two general approaches to addressing the implications of online privacy settings, such as Facebook’s.  It is unlikely that either one of these approaches alone will adequately address the privacy concerns raised above.

The first approach is to rely on the market and users to drive changes to privacy settings, when required.
This approach relies on users to recognize that their privacy settings are important and to take the time and responsibility to set them.[FN28] This laissez-faire approach relies on individuals to take more care about what default settings they are agreeing to and to demand change in areas of paramount importance.  Users who care should certainly take more care in setting their privacy options.  However, there are limitations to relying on users alone.  When settings and choices are not apparent to users, or defaults are repeatedly set in such a way that the vast majority of users are unlikely to understand the consequences of their selections or be able to demand change, it seems that more may be required.[FN29]

The second approach is to rely on government regulation and enforcement to ensure that there are clearly laid-out privacy options.[FN30] In this regard, the U.S. government has recently begun to raise questions about Facebook’s privacy policy.  For example, when Facebook announced plans to enable its partners to access users’ addresses and phone numbers, Congressmen Edward Markey (D-Mass.) and Joe Barton (R-Texas), the Co-Chairmen of the House Bipartisan Privacy Caucus, sent a letter to Facebook CEO Mark Zuckerberg seeking answers about the company’s plans.[FN31] Similarly, in May of 2010, the Article 29 Data Protection Working Party, a coalition of European data protection officials, sent a letter to Facebook criticizing the changes it made to its privacy policy and default privacy settings.[FN32] The Working Party argued that significant changes to a privacy policy and settings relating to sharing of user information should require the active consent of users rather than mere notice of the changes.

The Federal Trade Commission (FTC) has likewise become more active in investigating online privacy violations.
Section 5 of the FTC Act grants the FTC the power to pursue claims against entities that engage in unfair or deceptive acts or practices in interstate commerce with respect to consumers.[FN33] In the past, the FTC has taken action against websites for violating their own privacy policies as a deceptive trade practice.  The FTC has also used its Section 5 powers to pursue claims against online companies related to spyware, adware, and the like.[FN34] Most significantly, last month the FTC settled its Section 5 investigation into the privacy practices of Google in relation to Google Buzz, a social networking tool in Gmail that Google introduced last year.  As part of the settlement, Google agreed to start a privacy program, to undergo privacy audits for a period of 20 years, and to obtain user consent before changing the way that any Google product shares personal information.[FN35]

As relevant to Facebook, privacy interest groups led by the Electronic Privacy Information Center (“EPIC”) have filed multiple Complaints with the FTC, accusing Facebook of violations of the privacy interests of Internet users.[FN36] EPIC’s first FTC Complaint against Facebook focuses on Facebook’s practices relating to the sharing of user information with third-party app developers.  In particular, it alleges that the mandatory disclosure of certain user information to the public, including third-party app developers, is an unfair practice.
The Complaint also alleges that Facebook’s policies regarding third-party app developers are misleading and deceptive, and provide for more information sharing and less user control of that information, without a clear way for users to opt out.[FN37] EPIC’s second Complaint against Facebook focuses on newer changes to Facebook, including the “like” feature and the “instant personalization” feature, both of which, it is alleged, cause the sharing of user information in ways that are deceptive to the user.[FN38] EPIC’s Facebook Complaints may provide the FTC with a vehicle to take on Facebook, should it perceive the need to do so. At the very least, the FTC’s recent landmark settlement with Google signals that the FTC is ready and willing to use its Section 5 powers to remedy privacy violations in connection with social networking, where it deems appropriate.

The FTC has also provided guidance by issuing a privacy report entitled “Protecting Consumer Privacy in an Era of Rapid Change: A Proposed Framework for Businesses and Policymakers,” which seeks to provide a framework for consumers, businesses, and policymakers to address online privacy issues.[FN39] The FTC Report concludes that industry efforts to address privacy through self-regulation have been too slow and have failed to provide adequate and meaningful protection to consumers.  Among the recommendations in the FTC’s proposed framework is that consumers be presented with a clear and easy-to-understand choice about the collection and sharing of their data, at the time and in the context in which they are making decisions.
The FTC framework also addresses the tracking, collection, and sharing of user data with advertisers, recommending the adoption of a universal mechanism for implementing a user’s choice to opt out of such practices.[FN40] In response, Facebook has argued against excessive regulation, indicating that Internet companies should be self-regulated so as not to stifle innovation.[FN41] It is unclear what ultimate impact, if any, the FTC Report and the proposals of other commentators will have on the industry or on policymakers.  These developments at least have the effect of bringing about public debate over many core privacy issues implicated by social networks and other online companies.

In this regard, some of the policies championed by the FTC and others are making their way before Congress in newly proposed privacy bills.  Specifically, Representative Jackie Speier (D-California) introduced a bill, H.R. 654, that would direct the FTC to put forth standards providing an online mechanism for consumers to opt out of the collection and use of their personal information online, and that would require online advertisers and third-party website operators to disclose their practices with respect to data collection and use.[FN42] These regulatory and legislative efforts may provide some baseline requirements for privacy policies and provide users with a privacy bill of rights.  As with the FTC Report, it is not yet clear what shape such privacy legislation will take, or the extent to which legislators will seek to address such privacy issues through legislation.[FN43]

Conclusion

In the age of instantaneous sharing of information on Facebook, it is fair to ask whether privacy is dead or dying, and whether online social networks like Facebook are killing it.
Despite what may be seen as an unstoppable cultural imperative to socialize, connect, share, communicate, and post information about oneself at a dizzying pace, it is important not to lose sight of the risks of handing over control of our personal information.  As we status-update our way through the information age, users and regulators alike must continue to closely monitor companies that receive access to the information we share.  At the same time, we must also carefully weigh the benefits of increased interconnectivity against the costs of reduced privacy.
* Michael J. Kasdan is a Partner at Amster, Rothstein & Ebenstein LLP and is a 2001 graduate of NYU School of Law.  He is a Facebook user.  The views and opinions expressed in this article are his own.  Mr. Kasdan also authored Student Speech in Online Social Networking Sites: Where to Draw the Line, https://jipel.law.nyu.edu/2010/11/22/student-speech-in-online-social-networking-sites-where-to-draw-the-line/ (November 22, 2010).

[FN1] See Roy Wells, 41.6% of the U.S. Population Has a Facebook Account, socialmediatoday (Aug. 8, 2010), http://socialmediatoday.com/index.php?q=roywells1/158020/416-us-population-has-facebook-account.

[FN2] See Nicholas Carlson, Goldman to Clients: Facebook Has 600 Million Users, Business Insider (Jan. 5, 2011), http://www.msnbc.msn.com/id/40929239/ns/technology_and_science-tech_and_gadgets/.

[FN3] See Justin Smith, Facebook Now Growing by Over 700,000 Users a Day, and New Engagement Stats, Inside Facebook (July 2, 2009), http://www.insidefacebook.com/2009/07/02/facebook-now-growing-by-over-700000-users-a-day-updated-engagement-stats/.

[FN4] For the BBC’s truly hilarious take on the Facebook paradigm, see Facebook in Real Life, http://www.youtube.com/watch?v=BYNYLq_KvW4 (last visited March 22, 2011).

[FN5] See Yelp Partners With Facebook For A Personal Experience, Yelp Web Log (Apr. 21, 2010), http://officialblog.yelp.com/2010/04.

[FN6] Facebook is presently a privately held company.  Recently, a consortium including Goldman Sachs invested $500 million “in a transaction that values [Facebook] at $50 billion.” Susanne Craig & Andrew Ross Sorkin, Goldman Offering Clients a Chance to Invest in Facebook, DealBook (Jan. 2, 2011), http://dealbook.nytimes.com/2011/01/02/goldman-invests-in-facebook-at-50-billion-valuation/.

[FN7] It is significant to note that there are clear generational differences at work as to how a particular person will assess these types of trade-offs.
For example, I use Google’s Gmail service because it is slick and functional.  My father, however, will not, because he is greatly bothered by the fact that Google pushes context-based advertising at its Gmail users based upon the content of a user’s emails.  Likewise, on Facebook, the younger generation is more apt to publicly share private details through status updates or to publicly share embarrassing pictures of their Saturday night escapades.  In online social networks like Facebook, there is a quid pro quo in which privacy is gladly exchanged in favor of social interaction.  To these users, the social and community benefits of sharing such information far outweigh the subtler-to-perceive downside of diminished privacy.  See Bruce Schneier, Google and Facebook’s Privacy Illusion, Forbes (Apr. 6, 2010), http://www.forbes.com/2010/04/05/google-facebook-twitter-technology-security-10-privacy.

[FN8] See http://www.allfacebook.com/infographic-the-history-of-facebooks-default-privacy-settings-2010-05 (May 9, 2010) & http://www.goso.blog/2010/06/facebook-default-privacy-settings-over-time.

[FN9] Kevin Bankston, Facebook’s New Privacy Changes: The Good, The Bad, and The Ugly, Electronic Frontier Foundation (Dec. 9, 2009), http://www.eff.org/deeplinks/2009/12/facebooks-new-privacy-changes-good-bad-and-ugly.

[FN10] For a complete timeline of the changes to Facebook’s privacy policy from 2005 to present, see Facebook’s Eroding Privacy Policy: A Timeline, Electronic Frontier Foundation (April 28, 2010), http://www.eff.org/deeplinks/2010/04/facebook-timeline/.

[FN11] See Bianca Bosker, Facebook To Share Users’ Home Addresses, Phone Numbers With External Sites, HuffPost Technology (Feb. 28, 2011), http://www.huffingtonpost.com/2011/02/28/facebook-home-addresses-phone-numbers_n_829459.html.

[FN12] “When you connect with an application or website it will have access to General Information about you.
The term General Information includes you and your friends’ names, profile pictures, gender, user IDs, connections, and any content shared using the Everyone privacy settings . . . . The default privacy setting for certain types of information you post on Facebook is set to ‘everyone.’”  See Facebook’s Eroding Privacy Policy: A Timeline, Electronic Frontier Foundation (April 28, 2010), http://www.eff.org/deeplinks/2010/04/facebook-timeline/ (quoting Facebook’s Privacy Policy).

[FN13] Clint Boulton, Facebook Invites Privacy Concerns with Sponsored Story Ads, eWeek (Jan. 26, 2011), http://www.eweek.com/.

[FN14] Bankston, supra note 9.

[FN15] Id.

[FN16] Privacy: The Slow Tipping Point, Carnegie Mellon University (2007) (transcript of podcast interview with Alessandro Acquisti).

[FN17] Bruce Schneier, Google and Facebook’s Privacy Illusion, Forbes (Apr. 6, 2010) (quoting Mark Zuckerberg), http://www.forbes.com/2010/04/05/google-facebook-twitter-technology-security-10-privacy.

[FN18] Memmott, Zuckerberg: Sharing Is What Facebook Is About, NPR (May 27, 2010), http://www.npr.org/alltechconsidered/2010/05/27/127210855/facebook-zuckerberg-.

[FN19] See Craig, supra note 6.

[FN20] Forbes Magazine notes that companies like Facebook are driven by market forces “to kill privacy” by controlling defaults, limiting privacy options, and making it difficult to change such settings.  This results in making it “hard . . . to opt out.”  Schneier, supra note 17.

[FN21] See Pat Coyle, Triumph of the Default in Sports Social Networks, Technium Blog (Aug. 18, 2010), http://www.coylemedia.com/2010/08/18/power-of-the-default-in-sports-social-networks.

[FN22] Jay Kesan & Rajiv C. Shah, Establishing Software Defaults: Perspectives from Law, Computer Science, and Behavioral Economics, 82 Notre Dame L. Rev. 583 (2006), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=906816.
[FN23] Examples that illustrate the power of defaults are found not only in the technology field but across many other fields.  One oft-cited study of defaults is the work of Madrian and Shea, who studied the impact of defaults on money-saving tendencies by changing the defaults of 401(k) retirement plans. Specifically, they changed the default enrollment rule so that new employees had to choose to opt out of contributing to the 401(k) plan rather than to opt into it.  The results were striking.  Changing this one simple default rule brought participation in the 401(k) plan from less than 40% to over 85%.  Furthermore, those who participated made few subsequent changes to their default plan.  This study indicates that defaults can strongly influence real-life decision-making, and that people generally defer to defaults in their decision-making.  Whether the cause of this behavior is momentum, laziness, procrastination, passivity, a tendency to follow the guidance or advice of experts, or some other phenomenon, the effect of defaults is very powerful and very real.  See Simon Kemp, Psychology & Economics in Regulation, Institute of Policy Studies (Feb. 19, 2010); Sendhil Mullainathan, Psychology and Development Economics (June 2004) (unpublished manuscript) (on file with Harvard University Department of Economics); see also James M. Poterba, Behavioral Economics and Public Policy: Reflections on the Past and Lessons for the Future, in Policymaking Insights from Behavioral Economics (Christopher L. Foote et al. eds., 2007); Kesan, supra note 22.

[FN24] Acquisti, supra note 16.  For humorous satirical commentary on this phenomenon, see Entire Facebook Staff Laughs As Man Tightens Privacy Settings, The Onion (May 26, 2010), http://www.theonion.com/articles/entire-facebook-staff-laughs-as-man-tightens-priva,17508/.
[FN25] See Jeff Fox, Why Facebook Users Need Protection, HuffPost Technology (May 4, 2010), http://www.huffingtonpost.com/jeff-fox/why-facebook-users-need-p_b_562945.html; see also Protecting Your Computer from Online Threats, Consumer Reports (June 2010), http://www.consumerreports.org/cro/magazine-archive/2010/june/electronics-computers/.

[FN26] Id.

[FN27] See, e.g., JC Raphael, Facebook Privacy: Secrets Unveiled, PC World (May 16, 2010), http://www.pcworld.com/article/196410/facebook_privacy_secrets_unveiled.html.

[FN28] “It’s very simple: Facebook is a business and their goal is to make money.  They make money through advertising and selling virtual goods.  The more of your personal information they can mine, the more likely their advertising will result in revenue for Facebook and to their clients. . . . What about privacy settings?  You need to set them, it’s your responsibility and no one else’s.  Facebook wants you to share as much as possible since it helps them monetize your account.  Consequently the default settings tend to be “opt out” rather than “opt in,” knowing that most people [won’t] review their privacy settings. . . .  You are responsible for what information you post about yourself, the Facebook friends you link to, the privacy settings and the applications you use.” Howard Steven Friedman, You Are Responsible for Your Own (Facebook) Privacy, HuffPost Technology (Mar. 3, 2011) (emphasis added), http://www.huffingtonpost.com/howard-steven-friedman/you-are-responsible-for-y_b_830652.html.

[FN29] Another aspect of a laissez-faire approach to dealing with online privacy would be to rely on the market to provide competing social networking systems that address privacy differently than Facebook. In other words, if clearer and tighter privacy controls are something that consumers want and value, a market competitor to Facebook should offer a competing alternative. Cf.
Hiroki Tabuchi, Facebook Wins Relatively Few Friends in Japan, New York Times (January 9, 2011), http://www.nytimes.com/2011/01/10/technology/10facebook.html (noting Facebook’s relative lack of success in Japan, whose Internet users are “fiercely private.” In Japan, Facebook’s competitors, which “let members mask their identities, in distinct contrast to the real-name, oversharing hypothetical user on which Facebook’s business model is based,” have been far more successful). However, because the value of a social network is largely based on the fact that all of one’s friends are members, the sheer size and momentum of Facebook in the U.S. market may well prevent viable competitors from easily emerging. [FN30] Schneier, supra note 18 (stating that “[i]f we believe privacy is a social good, something necessary for democracy, liberty, and human dignity, then we can’t rely on market forces to maintain it” and calling for broad legislation that would protect personal privacy by giving people control over their personal data). [FN31] See Thomas Claburn, Facebook Faces Congressional Privacy Interrogation, InformationWeek (Feb. 5, 2011), http://www.informationweek.com/news/internet/social_network/showArticle.jhtml?articleID=229201226. Facebook responded to this inquiry in a letter on February 23, 2011 (available at http://markey.house.gov/docs/facebook_response_markey_barton_letter_2.2011.pdf) in which it highlighted that Facebook users have various levels at which they can set their privacy options and that users must give applications seeking to access their personal information permission to do so. Information concerning the Congressmen’s response to Facebook can be found at Markey, Barton Respond to Facebook (Feb. 28, 2011), http://markey.house.gov/index.php?option=content&task=view&id=4238&Itemid=125. 
[FN32] Article 29 Data Protection Working Party, Press Release, May 12, 2010, available at http://ec.europa.eu/justice/policies/privacy/news/docs/pr_12_05_10_en.pdf. [FN33] Act of March 21, 1938, ch. 49, § 3, 52 Stat. 111 (codified at 15 U.S.C. § 45(a)(1) (1994)). [FN34] See, e.g., Federal Trade Commission, Privacy Initiatives, http://business.ftc.gov/legal-resources/29/36 (last visited February 18, 2011). [FN35] See C. Miller and T. Vega, Google Unveils New Social Tool as It Settles Privacy Case, New York Times (March 20, 2011); Google Agrees to Implement Comprehensive Privacy Program to Protect Consumer Data, http://www.ftc.gov/opa/2011/03/google.shtm (March 30, 2011).  A copy of the consent order is available at http://www.ftc.gov/os/caselist/1023136/110330googlebuzzagreeorder.pdf (last visited April 6, 2011). [FN36] The EPIC Complaints are available at http://epic.org/privacy/inrefacebook/EPIC-FacebookComplaint.pdf (“EPIC I”) and http://epic.org/privacy/facebook/EPIC_FTC_FB_Complaint.pdf (“EPIC II”).  For additional general background regarding the EPIC Complaints, see http://epic.org/privacy/inrefacebook and http://epic.org/privacy/facebook/in_re_facebook_ii.html. [FN37] EPIC I, supra note 36. [FN38] EPIC II, supra note 36, at ¶¶ 65-94. [FN39] See generally FTC Staff Releases Privacy Report, Offers Framework for Consumers, Businesses and Policymakers, Federal Trade Commission (Dec. 1, 2010), http://www.ftc.gov/opa/2010/12/privacyreport.shtm; FTC Staff, FTC Staff Report: Protecting Consumer Privacy in an Era of Rapid Change (2010), http://www.ftc.gov/os/2010/12/101201privacyreport.pdf. 
[FN40] Similarly, the EFF recently proposed a “Bill of Privacy Rights for Social Network Users.” The proposed Bill of Privacy Rights includes (i) “the right to informed decision-making” about who sees their personal data, (ii) “the right to control” the use and disclosure of their data, including requiring a default opt-in permission by users, so that user data is not shared unless a user makes an informed decision to share it, and (iii) “the right to leave” a social network, at which point the user data is permanently deleted from the social network’s databases and those of its partners. See Kurt Opsahl, A Bill of Privacy Rights for Social Network Users, Electronic Frontier Foundation (May 19, 2010), http://www.eff.org/deeplinks/2010/05/bill-privacy-rights-social-network-users; see also Dani Manor, Proposed New Bill of Rights for Facebook Users, All Facebook (May 21, 2010), http://www.allfacebook.com/eff-proposes-new-bill-of-rights-for-facebook-users-2010-05. [FN41] See, e.g., Katie Kindelan, What You Should Know About Facebook’s Response to the FTC, Social Times (Feb. 25, 2011), http://www.socialtimes.com/2011/02/what-you-should-know-about-facebooks-response-to-the-ftc/; see also Bianca Bosker, Facebook Response to FTC’s Privacy Plans, HuffPost Technology (Feb. 23, 2011), http://www.huffingtonpost.com/2011/02/23/facebook-responds-to-ftcs_n_827260.html; Leigh Goessl, Facebook Response to FTC Privacy Investigation, Helium (Feb. 27, 2011), http://www.helium.com/items/2103119-facebook-response-to-ftc-privacy-investigation. [FN42] See H.R. 654, 112th Cong. (2011); see also Bert Knabe, Two Privacy Bills Introduced by Representative Jackie Speier, Lubbock Avalanche-Journal (Feb. 14, 2011), http://lubbockonline.com/interact/blog-post/bert-knabe/2011-02-14/two-privacy-bills-introduced-representative-jackie-speier-d. [FN43] Cf. 
Farhad Manjoo, No More Privacy Paranoia, Slate (April 7, 2011), http://www.slate.com/id/2290719/pagenum/all/#p2 (noting that regulators must carefully balance the costs of privacy protection with its benefits).

Student Speech in Online Social Networking Sites: Where to Draw the Line

By Michael J. Kasdan* A pdf version of this article may be downloaded here. Introduction The move toward online communication has the potential to throw off the historically careful balance that has been struck regarding First Amendment issues in the realm of “student speech.”  In a seminal trilogy of cases, the Supreme Court balanced the free speech rights of students with school districts’ ability – and even responsibility – to regulate student speech that disrupts the learning environment.  Before the proliferation of instant messaging, SMS texts, and social networking sites, the Court allowed schools to regulate on-campus speech in limited circumstances (i.e., when the speech disrupts the learning environment) but did not extend the school’s authority to regulate speech that occurs off-campus (i.e., speech subject to traditional First Amendment protection).  Electronic communication blurs the boundary between on- and off-campus speech.  While a student may post a Facebook message from the seeming privacy of his or her own home, that message is widely accessible and could have a potentially disruptive effect on campus. Because the Supreme Court has not yet addressed this particular issue, courts are struggling to define the proper place of so-called “student internet speech.”  Indeed, two different Third Circuit panels recently came to exactly opposite conclusions on the very same day about the ability of schools to regulate student internet speech: in one, the Third Circuit upheld a school’s ability to discipline a student for creating a fake MySpace profile mocking the school’s principal; in the other, the Third Circuit held the school could not regulate conduct (again, creation of a fake MySpace profile about the school’s principal) that occurred within the student’s home.  
Both opinions have since been vacated pending a consolidated rehearing en banc, but the message is clear: courts throughout the country require guidance on the appropriate legal principles applicable to student internet speech. The remainder of this Article introduces the relevant Supreme Court precedent, explores in greater depth the two contradictory Third Circuit opinions, and offers some preliminary analysis as to how the Third Circuit (and perhaps ultimately the Supreme Court) may clarify the law in the pending en banc decision. Background – Supreme Court Precedent The Supreme Court’s seminal pronouncement that set the limits of a school’s ability to regulate student speech came down in 1969.  In Tinker v. Des Moines Independent Community School District, the Supreme Court addressed the issue of “First Amendment rights, applied in light of the special characteristics of the school environment.” [FN1] The Court reasoned that while students do not “shed their constitutional rights to freedom of speech or expression at the schoolhouse gate,” the right to free speech must be balanced against “the comprehensive authority of the States and of school officials, consistent with fundamental constitutional safeguards, to prescribe and control conduct in the schools.”[FN2] The so-called Tinker rule holds that in order for a school district to suppress student speech (by issuing a punishment or discipline relating to that speech), the speech must materially disrupt the school, involve substantial disorder, or invade the rights of others: “conduct by the student, in class or out of it, which for any reason — whether it stems from time, place, or type of behavior — materially disrupts class work or involves substantial disorder or invasion of the rights of others is, of course, not immunized by the constitutional guarantee of freedom of speech.”[FN3] Since Tinker, the Supreme Court has addressed free speech issues in the context of schools in several cases.  
In each case, the Court resolved the tension between the students’ right to free expression and the schools’ need to regulate school conduct in favor of the schools.  In Bethel School District v. Fraser,[FN4] the Court distinguished Tinker and found that a school’s discipline of a student for his sexual-innuendo-charged assembly speech was not a violation of the student’s First Amendment rights. [FN5] More recently, in Morse v. Frederick, the Court held that the First Amendment does not prevent school officials from suppressing student speech that was reasonably viewed as promoting illegal drug use at a school-supervised event.[FN6] Today’s Online Student Speech Cases The degree to which student online speech may be regulated is an increasingly significant issue.  As stated in a recent New York Times article, “the Internet is where children are growing up.  The average young person spends seven and a half hours a day with a computer, television, or smart phone . . . suggesting that almost every extracurricular hour is devoted to online life.” [FN7] And today’s online speech has characteristics that distinguish it from “ordinary speech.”  It is extremely public.  It may be distributed to a wide group of people extremely quickly.  And it may potentially be saved forever. A recent series of cases demonstrates that courts are grappling with how to apply Supreme Court free-speech precedent to student speech that has moved to online mediums such as the now-ubiquitous Facebook or Twitter.  None of the triumvirate of Supreme Court student speech cases maps easily to the arena of online student speech.  As one state supreme court noted,
“[u]nfortunately, the United States Supreme Court has not revisited this area [of the First Amendment rights of public school students] for fifteen years.  Thus, the breadth and contour of these cases and their application to differing circumstances continues to evolve.  Moreover, the advent of the Internet has complicated analysis of restrictions on speech.  Indeed, Tinker’s simple armband, worn silently and brought into a Des Moines, Iowa classroom, has been replaced by [today’s student’s] complex multi-media website, accessible to fellow students, teachers, and the world.” [FN8]
A recent pair of cases from the Third Circuit demonstrates these complexities.  In one case, a Third Circuit panel found a school’s discipline of a student for his online speech to be a violation of the First Amendment and that the school’s authority could not extend to such off-campus behavior.  That very same day, a different Third Circuit panel addressing an almost identical fact pattern came to the opposite conclusion, finding no First Amendment violation when a school district punished a student for online speech. Recent Online Student Speech Cases J.S. ex rel. Snyder v. Blue Mountain School District In J.S., the Third Circuit affirmed a district court ruling that a school district had acted within its authority in disciplining a student for creating an online profile on her MySpace page that alluded to “sexually inappropriate behavior and illegal conduct” by her principal. [FN9] The student was a 14-year-old eighth-grader who, along with a friend, had been disciplined by the principal for a dress code violation.  A month later, the students created a fictitious MySpace profile for the principal from a home computer.  The profile, which included a picture of the principal taken from the school’s website, described him as a pedophile and a sex addict whose interests included “being a tight ass,” “[having sex] in my office,” and “hitting on students and their parents.”  Word of the MySpace profile soon spread around school.  Eventually, the principal found out about it.  In response, the principal issued the students a ten-day suspension for violating the school’s rule against making false accusations against members of the school staff. [FN10] The students’ parents sued the school district, claiming that the suspension was a violation of their children’s First Amendment rights.  
The district court disagreed and found for the school board, concluding that the school had acted properly in suspending the students and that their First Amendment rights had not been violated.[FN11] On appeal, the Third Circuit affirmed.  The panel majority noted that although the Supreme Court “has not yet spoken on the relatively new area of student internet speech,” courts can derive the relevant legal principles from traditional student speech cases, such as Tinker, Bethel, and Morse.[FN12] Drawing from the Tinker standard that a school may discipline students for speech that “create[s] a significant threat of substantial disruption” within the school,[FN13] the Third Circuit found that discipline was appropriate and permissible based primarily on the fact that the profile targeted the principal in a manner that could have undermined his authority by referencing “activities clearly inappropriate for a Middle School principal and illegal for any adult.”[FN14] The court also found that the online context of the speech, which allowed for quick and widespread distribution, exacerbated the situation and increased the likelihood of “substantial disruption.”[FN15] In a strongly worded dissent, one of the panel judges concluded that the Tinker standard had not been met: Tinker requires a showing of “specific and significant fear of disruption, not just some remote apprehension of disturbance.”[FN16] While acknowledging the general power of school officials to regulate conduct at schools, the dissent concluded that the majority decision vests school officials with dangerously over-broad censorship authority in that it “adopt[s] a rule that allows school officials to punish any speech by a student that takes place anywhere, at any time, as long as it is about the school or a school official . . . 
and is deemed ‘offensive’ by the prevailing authority.” [FN17] The dissent further noted that “[n]either the Supreme Court nor this Court has ever allowed schools to punish students for off-campus speech that is not school-sponsored and that caused no substantial disruption at school.” [FN18] Layshock v. Hermitage School District Curiously, a different panel of judges of the Third Circuit reached the opposite conclusion on the very same day in a similar case, Layshock v. Hermitage School District.[FN19] In Layshock, the Third Circuit panel affirmed a district court ruling that Hermitage School District’s suspension of high school student Justin Layshock for his “parody profile” of the high school principal on his MySpace page was improper.  The Layshock panel concluded that the high school’s discipline of the student for his online behavior violated his First Amendment free speech rights and that the school’s authority did not reach such off-campus behavior.[FN20] The student, a 17-year-old high school senior, created a fake MySpace profile in the name of his principal, using a picture of the principal from the school’s website.  The profile mocked the principal, indicating that he was a “big steroid freak,” a “big hard ass” and a “big whore” who smoked a “big blunt.”  When the principal learned of the profile, he issued a ten-day suspension and barred Justin from extracurricular activities for disruption of school activities, harassment of a school administrator over the Internet, and computer policy violations.[FN21] Layshock’s parents sued the school district and the principal, asserting violations of the First and Fourteenth Amendments.  
The district court ruled in their favor on the First Amendment claim, concluding that the school was unable to establish “a sufficient nexus between Justin’s speech and a substantial disruption of the school environment, which is necessary to suppress students’ speech per Tinker.” [FN22] On appeal, the Third Circuit agreed that “it would be an unseemly and dangerous precedent to allow the state in the guise of school authorities to reach into a child’s home and control his/her actions there to the same extent that they can control that child when he/she participates in school sponsored activities.” [FN23] The court refused to allow the School District to exercise authority over a student “while he is sitting in his grandmother’s home after school.” [FN24] On April 9, 2010, shortly after issuing the seemingly contradictory rulings in J.S. and Layshock, the Third Circuit agreed to rehear the two cases en banc.  Given the factually similar circumstances of the two cases and their opposite results, it is not surprising that the Third Circuit found it necessary to provide clear guidance delineating what type of speech may be punished and how far school districts may go in punishing online speech.  Argument was heard by the full court on June 3, 2010, and a ruling is expected sometime this year.  The Third Circuit en banc review of the J.S. and Layshock cases may also be a precursor to a Supreme Court pronouncement on the topic of school regulation of online student speech. Clarifying The Law? One key issue raised in these en banc appeals – and in other cases around the country addressing similar issues [FN25] – is whether online speech by a student that is generated off school property and not during school hours, but is nonetheless directed at the school, can be regulated by a school district at all.  
That is, is such speech “student speech” that may be regulated under appropriate circumstances or is it “off-campus speech” that is out of the reach of school regulation under Tinker, Bethel, and Morse? In the en banc appeals, the school districts argued in their briefing papers and at oral argument that the Supreme Court’s reasoning in Bethel regarding the ability of schools to regulate disruptive student speech should likewise apply to online speech that is directed at school faculty.  They argued that although such “speech” may be created outside of school, it is student speech, because it is specifically aimed at the school or a school administrator.  Further, they argued that such speech may be restricted because it has a sufficient impact on the proper functioning of the school. [FN26] The districts reason that because students today create, send, and access communication using multiple methods including online social media sites, email, and text messaging, the proper focus is not where the speech was made, but whether its impact is felt in school.[FN27] On the other hand, the students argued that a school district’s ability to regulate disruptive student speech should not extend to speech outside of school and that the curtailment of students’ off-campus speech is doctrinally indefensible.[FN28] In my view, extending school districts’ intentionally limited authority to off-campus speech — whether online or otherwise — would set a dangerous precedent.  Indeed, during oral argument of the en banc appeals in June, Chief Judge McKee of the Third Circuit asked if a group of students could be punished if they were overheard in a baseball stadium calling their principal a “douchebag.”  The clear answer is no.  Judge Rendell similarly noted that “the First Amendment allows people to say things that aren’t nice.” [FN29] These seem to be the right points to be making.  In other words, how are the online profiles in the J.S. 
and Layshock cases any different from distasteful jokes or mocking speech about school officials made outside of school?  The Tinker, Bethel, and Morse trilogy of cases allows for limited regulation of speech in school; those cases simply do not contemplate otherwise limiting speech outside of school.  While online speech undoubtedly has some characteristics that distinguish it from Judge McKee’s example — i.e., a mocking online profile can be rapidly accessed by a wide group of students and lasts longer than the spoken word — these differences do not justify redrawing the line in order to allow a school to regulate a student’s out-of-school online speech. A second key issue is, if schools were allowed to regulate such speech, how substantial must a disruption be to be considered a “substantial disruption” for which discipline is permitted?  Is a school district’s judgment that there is potential to cause disruption enough, or should more be required? The school districts argue that they should have the authority to regulate speech when it is reasonably foreseeable that it would cause a substantial disruption in school.[FN30] But the students argue that if a school district is authorized to punish students’ off-campus online speech based on a presumed “reasonable possibility” of future disruption, this would eviscerate the careful balance drawn in Tinker. In my view, if schools are allowed to regulate online off-campus speech merely because it is directed towards school officials (a dubious proposition under Supreme Court First Amendment precedents), it is critical that this authority remain as limited as possible.  One way to do that is to tie the school’s authority to the presence of an actual in-school disruption.  Giving schools the authority to determine that, in their view, there is a “reasonable potential” for a future disruption, even if there is no evidence of any disruption, seems to give them too much power.  
For instance, in the Third Circuit cases discussed above, it seems likely that anyone who viewed the fake MySpace profiles would know they were intended as jokes.  And there was no evidence of any disruption at all.  Still, the school districts punished the speech.  This gives school districts too much power to discipline speech that occurs off-campus. The principles set forth in the seminal Supreme Court student speech cases should favor the students in online speech cases – unless the courts adopt the view that online speech is inherently different from traditional speech.  If so, then the rules regarding school regulation of student speech will change in turn.  The Third Circuit en banc cases – and perhaps one day the Supreme Court – must now grapple with that issue.[FN31]  
* Michael Kasdan is an associate at Amster, Rothstein & Ebenstein LLP and is a 2001 graduate of NYU School of Law.  The views and opinions expressed in this article are his own. [FN1] 393 U.S. 503, 506 (1969).  Tinker involved an in-school passive display of political “speech”: students wearing black armbands in school to protest the Vietnam War.  The Court found that while there is a need to provide for authority to regulate disruptive speech in schools, in this case the speech was silent and passive, and there was no “evidence that the authorities had reason to anticipate that the wearing of the armbands would substantially interfere with the work of the school or impinge upon the rights of other students.”  Id. at 509.  Accordingly, the discipline was found to be a violation of the First Amendment.  Id. at 510-11. [FN2] Id. at 506-07. [FN3] Id. at 513. [FN4] 478 U.S. 675 (1986). [FN5] Bethel, unlike Tinker, did not involve political speech, nor was it of the silent variety.  In Bethel, a student delivered a speech at a school event that was based wholly on “explicit sexual metaphor.”  Id. at 676.  The speech, supporting the candidacy of the speaker’s friend for student council, used repeated sexual innuendo to comic effect.  In finding that the First Amendment did not prevent the school from disciplining the student for the speech, the Court remarked that it was “perfectly appropriate for the school to . . . make the point to pupils that vulgar speech and lewd conduct is wholly inconsistent with the ‘fundamental values’ of public school education.”  Id. at 685-86.  The in-school nature of the speech was central to this case.  Indeed, Justice Brennan was careful to note in his concurrence that the holding should be narrowly limited to in-school circumstances.  
Brennan argued that under applicable Supreme Court precedent, if the same speech had been given “outside of the school environment, he could not be penalized simply because government officials considered his language to be inappropriate” because the speech was far removed from the category of “obscene” speech that is unprotected by the First Amendment.  Id. at 688.  Moreover, the discipline was not based on the fact that the school district disagreed with the political viewpoint of the speech; rather, the basis for the discipline was the school’s interest in ensuring that a school event proceeded in an orderly manner.  Accordingly, Justice Brennan cast the Court’s holding narrowly: “the Court’s holding concerns only the authority that school officials have to restrict a high school student’s use of disruptive language in a speech given at a high school assembly.”  Id. at 689. [FN6] Morse v. Frederick, 551 U.S. 393 (2007).  In Morse, the Court found that a school district may discipline a student for speech at a school event that was regarded as encouraging illegal drug use without running afoul of the First Amendment.  Id. at 408.  There, a student was suspended from school after refusing to take down a banner stating “BONG HiTS 4 JESUS” that he unfurled at a school event.  Id. at 393.  Under these circumstances, the Court found that even though there was no “substantial disruption” caused, id., the discipline by the school was nevertheless appropriate in view of “the special characteristics of the school environment,” id. (quoting Tinker), because schools are entitled to take steps to safeguard the students entrusted into their care from speech that could be reasonably regarded as encouraging illegal drug use. [FN7] Stephanie Clifford, Teaching About Web Includes Troublesome Parts, N.Y. Times, Apr. 8, 2010, at A15. [FN8] J.S. ex rel. H.S. v. Bethlehem Area School Dist., 807 A.2d 847, 863-64 (Pa. 2002). [FN9] J.S. ex rel. Snyder v. Blue Mountain Sch. 
Dist., 593 F.3d 286, 308 (3d Cir. 2010). [FN10] Id. at 291-93. [FN11] Id. at 290-95. [FN12] Id. at 295-97. [FN13] Id. at 298. [FN14] Id. at 300. [FN15] Id. [FN16] Id. at 312 (Chagares, J., concurring in part and dissenting in part). [FN17] Id. at 318. [FN18] Id. at 308. [FN19] 593 F.3d 249 (3d Cir. 2010). [FN20] Id. at 252-54. [FN21] Id. [FN22] Id. at 259-60. [FN23] Id. at 260. [FN24] Id. [FN25] The Third Circuit cases discussed in depth in this article are merely illustrative of the differing results courts addressing this issue have reached.  Similar cases have arisen across the country.  See, e.g., Evans v. Bayer, 684 F. Supp. 2d 1365 (S.D. Fla. 2010) (holding, where a student created a fake and harassing Facebook profile of a teacher, that school districts may discipline off-campus speech only where such speech “raises on-campus concerns”). [FN26] See J.S., 593 F.3d at 298 n.6 (“Electronic communication allows students to cause a substantial disruption to a school’s learning environment even without being physically present.  We decline to say that simply because the disruption to the learning environment originates from a computer off-campus, the school should be left powerless to discipline the student.”). [FN27] The District also noted that several other appellate courts have held that online speech created by a student at a home computer constitutes “student speech” for First Amendment purposes.  See, e.g., J.S. v. Bethlehem Area Sch. Dist., 807 A.2d 847 (Pa. 2002); Wisniewski v. Bd. of Educ. of Weedsport Cent. Sch. Dist., 494 F.3d 34 (2d Cir. 2007); Doninger v. Niehoff, 527 F.3d 41 (2d Cir. 2008).  In each of those cases, the speech at issue was created at the students’ homes outside the physical presence of the schools they attended. [FN28] See J.S., 593 F.3d at 318 n.23 (explaining that Pennsylvania state law clearly intended Bethel to apply only to in-school speech). [FN29] Shannon P. 
Duffy, 3rd Circuit Mulls Student Suspensions for MySpace Postings, Law.com, June 4, 2010, available at http://www.law.com/jsp/article.jsp?id=1202459201824. [FN30] “[B]oth the United States Supreme Court and this Court have held that a school district can act to restrict student speech based on a reasonable belief the speech would, in the foreseeable future, substantially disrupt or materially interfere with school activities.  See Tinker, 393 U.S. at 514 (“the record does not demonstrate any facts which might reasonably have led school authorities to forecast substantial disruption of or material interference with school activities”) (emphasis added); Morse, 551 U.S. at 403 (“Tinker held that student expression may not be suppressed unless school officials reasonably conclude that it will materially and substantially disrupt the work and discipline of the school”) (emphasis added) (internal quotations omitted).