Yeseul Do *
Conversations about artificial intelligence (AI) regulation in lawmaking and legal scholarship typically focus on data privacy issues. This Note breaks from that tendency, engaging AI regulation from an educational perspective that focuses instead on the pedagogical implications of AI use. In particular, it examines the role of ChatGPT in educational settings for young adults, a group that is often overlooked in regulatory discussion. Rather than emphasizing the privacy risks posed by AI, this Note argues that AI regulation is too privacy-focused, to the detriment of other important risks that young adults face in educational settings. Too little attention is given to regulating ChatGPT for young adults and in education, especially in recognizing the risks of inequitable access to these technologies. Although regulators, policymakers, educators, and students recognize the disruptive potential of ChatGPT in secondary classrooms, many of the concerns raised by lawmakers do not align with those of educators. In recognizing these challenges, I offer several strategies for beginning to think about how to effectively regulate ChatGPT by harnessing the technology's benefits while simultaneously safeguarding against its risks.
Introduction1
Brian, my seventeen-year-old cousin and a high school student, asked me to look over his college entrance essay for the Common App. Upon reading the first few sentences, however, something felt amiss. The sentences were grammatically correct, and there was not a single typo throughout the essay. Perhaps my former experience working as a high school English teacher meant I knew how seventeen-year-olds normally write, or maybe it was just intuition that the writing sounded off. I couldn't quite put my finger on exactly why, but it was clear after the first two paragraphs that Brian was not the author. As I skimmed the rest of the essay, I paused and asked Brian, "Did you use ChatGPT to write this?" Bewildered, my teenage cousin responded, "Is it that obvious?"
My experience with Brian's essay is neither unusual nor rare. An online survey of around a thousand high school students in the fall of 2023 showed that one in five teenagers has used ChatGPT for school assignments.2 Brian's sheepish reaction when I asked whether his essay was AI-generated also affirms a finding from the same survey, which showed that 57% of these teens felt it was not acceptable to write essays using ChatGPT.3 My anecdote with Brian opens the door to a slew of questions and concerns: How are young adults using AI? Is AI safe? Do teachers know about it? Is independent essay writing a thing of the past?
A deeper dive into AI regulation in schools shows that research in this area is very limited, and the research that does exist in the legal space focuses nearly exclusively on information privacy questions. Legal discourse says surprisingly little about AI's effects on education and young adults. Schools and the education space, on the other hand, are largely focused on pedagogical questions, such as how AI fits into teaching students and how it bears on academic integrity. There is also a significant disconnect between policymakers' views of AI and educators' concerns surrounding AI.
Part I of this Note begins by discussing the current educational landscape and its approach to technology as schools begin to grapple with and manage the availability of AI. In this section, I also offer different definitions of AI and explore the challenges that come with defining the technology, along with an explanation of how ChatGPT (currently the most widely used generative AI chatbot app) functions.
Part II discusses the multifaceted risks that AI poses for young adults in educational settings beyond the commonly discussed privacy concerns. This includes concerns about accessibility, AI literacy, pedagogical effectiveness, and ethical use.
Part III explores the current regulatory and legislative landscape for AI and criticizes AI regulation’s narrow obsession with privacy concerns. I argue that the current regulatory environment for AI is overly preoccupied with privacy issues, at the expense of addressing a broader spectrum of ethical, social, and pedagogical challenges presented by AI technologies. This oversight becomes increasingly evident when scrutinizing AI regulation for young adults in educational contexts.
Part IV sets forth suggestions for future AI regulation for young adults in educational contexts, utilizing the 2023 Biden Executive Order on AI as a baseline. By drawing on its guiding principles, this section lays out strategies for formulating effective regulation of ChatGPT and other generative AI apps for the future.
I. Current Educational Landscape for Young Adults
A. The Recent EdTech Landscape
K-12 schools have experienced rapid technological developments thanks to being forced into virtual learning by the COVID-19 pandemic. Had it not been for the pandemic, many schools would have remained decades behind technologically, with laptops only seldom used during class time even as students Snapchatted their friends on smartphones during bathroom breaks.
However, despite virtual learning having forced some schools to integrate more technology into their classrooms, the world of secondary education remains resistant to change. While many young adults (ages 13-17) seem to own a smartphone, many of them also struggled to adapt when education moved online. When the world was forced into solely digital communication during the lockdowns of March 2020, I personally saw many of my students struggle.
Most classes at my school did not use laptops in the classroom prior to the pandemic, and I observed firsthand my high school students' poor technological literacy. Due to a lack of familiarity with computers and online classroom tools, students struggled to use basic internet tools during the pandemic. For example, I regularly had students send entire emails in the subject line. Gen Z and Gen Alpha young adults are presumed to be more tech-savvy because they grew up with screens from a young age, but my experience as a teacher revealed that technological literacy remains extremely low.4 Most of my students in my English I class (ages 13-17) frequently fell for phishing scams and did not know how to use the Google search bar.
My school also faced serious barriers to technological access. Despite my school not qualifying for Title I funding (federal financial assistance to schools that record at least 40% of their students as low-income),5 nearly none of my students had access to a computer during the pandemic shutdowns of March 2020. My students logged into class on Google Meet on their phones, and the few who were lucky enough to have access to a computer frequently had network connectivity issues or had to disconnect from the call midway, as their siblings also needed to use the single computer in the house to attend their respective classes.
The technology access issue was so severe that the Hawaii Department of Education (HIDOE) made school optional for the rest of that school year. In the span of weeks, learning moved from my classroom of thirty desk chairs to just five students logging onto Google Meet. The state purchased pre-loaded freshman English curricula on Blackboard, an online education management system, that ended up being unsuitable for my integrated classroom of disabled and non-disabled students. Neither I nor my students could figure out how to navigate Blackboard's clunky interface, and the investment in Blackboard thus felt like a huge waste of money. Sadly, my experience during the pandemic was not unique. My classroom was just one of thousands experiencing this type of difficulty nationwide.6
Today, student learning and teaching in some schools are more or less the same as they were before the pandemic. My former colleagues share that many schools have returned to a fully in-person environment. Classrooms remain largely technology-free when it comes to learning. Students still access their smartphones in their free time to socialize or contact their families, but Chromebooks are not used much beyond word processing or an occasional research project.7 Although a post-Covid policy report showed that more homes now have broadband internet access,8 that is only one step towards increasing technological access for young adults.
B. Education’s Resistance to Change
Despite the rapid switch to virtual learning during the pandemic, education is typically a stagnant space that is extremely resistant to change. Education's resistance to innovation has been documented across all levels, from secondary to higher education.9 Teachers are often suspicious of innovations, not because the innovations lack value, but because they introduce additional workload, such as extensive training. Teaching is already a difficult profession, notorious for requiring that teachers juggle an endless number of tasks, and innovation brings more responsibilities, often without sufficient support for teachers.10
Education's resistance to change is also rooted in cost concerns. School funding is historically lacking, making it challenging for institutions to justify experimenting with unproven tools. Implementing new technology demands significant investments in teacher training and an evaluation period to assess its effectiveness. Schools tend to be risk-averse, afraid of investing already scarce resources in a new technology that ends up being impractical to implement or pedagogically ineffective.11 Consequently, schools are wary of wasting time, energy, and money on an initiative that may ultimately be abandoned.
These change-averse attitudes in education contribute to the odd EdTech landscape, explaining why EdTech lags so far behind as the rest of society speeds ahead. Students, especially those from poorer school districts, continue to fall behind in tech literacy because schools with fewer resources are even less able to take risks with new educational tools. This situation exacerbates the digital divide between wealthier and poorer schools, widening the gap in education opportunities and outcomes.
Illustrating this dynamic, special education records in Hawaii are still filed on paper and shredded annually to ensure student privacy. This careful approach reflects a serious commitment to federally mandated protections for students, especially special needs students with IEPs and 504 plans.12 While it is important that student privacy be taken seriously, this also leads to a strangely archaic system of physical document handling. Years' worth of IEPs are stored in giant cabinets, and paper copies are painstakingly transferred between schools whenever a student transfers to another institution.
This juxtaposition of stringent student privacy requirements and outdated technology reveals the reality of our secondary education system. While the paper-based system ensures student privacy, it feels incredibly antiquated. Sensitive data is transferred across a multitude of other industries and institutions (such as law firms and hospitals) via more technologically advanced cloud management systems. The lack of technological advancement in schools is frustrating and reflects the lack of resources and challenges affecting the education system, rather than a commitment to student privacy.
The variation in how teachers utilize technology in classrooms is also affected by the socioeconomic factors surrounding their schools. More affluent schools have more resources, with certain private schools even renting a computer device to each student during the school year, whereas classrooms like mine, with underserved populations, had a computer cart that was shared with four other classrooms serving 400 students. I was lucky to have the laptop cart "housed" permanently in my classroom, but it was usually borrowed by other teachers. Teachers fought for laptops, with students from neighboring classrooms knocking at my door asking if they could take the laptop cart.
C. The Advent of ChatGPT in Schools
Within this strange post-Covid world of education, schools and students are encountering what seems like a magic tool: ChatGPT. A 2023 national survey showed that around 20% of young adults have used ChatGPT for schoolwork.13 AI seems to be touching a part of everyone’s lives, so it is no surprise that it is also affecting schools and young adults. There are, most prominently, concerns that kids are using ChatGPT to write their papers and cheat on schoolwork, along with fears over the future of education.14
1. Defining ChatGPT
ChatGPT is a generative AI chatbot developed by OpenAI and built on a large language model (LLM), meaning it can generate new content based on what the model is trained on. In contrast to the basic chatbots many of us have experienced, such as customer support on a product's website, ChatGPT's data-driven approach allows it to continually improve its output over time as the model is exposed to more data. Because ChatGPT relies on vast amounts of text data to "learn" (identifying patterns, making predictions, and generating responses based on the data it processes), the app can handle a wide range of queries with considerable accuracy.
ChatGPT essentially functions as an advanced word predictor, and the technology is good at predicting words because it is trained on enormous amounts of publicly available text from the internet.15 It is so good at statistically predicting which word will appear next that it outputs word sequences in remarkably coherent sentences, as if you were chatting with another person. So despite popular notions of how "smart" ChatGPT seems to be, and popular tendencies to anthropomorphize the chatbot, ChatGPT does not "understand" our conversations or "learn" like the human brain.
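To make the "word predictor" framing concrete, consider the following illustrative sketch. It is a deliberately tiny, hypothetical toy model, not OpenAI's actual system or training data: it simply counts which word most often follows another in a small sample of text and then "predicts" the next word from those counts. ChatGPT performs an analogous prediction at vastly larger scale and with far more sophisticated statistics, which is why its output reads like fluent conversation even though no understanding is involved.

```python
# Hypothetical toy sketch of next-word prediction (not OpenAI's model or data).
# It counts which word most often follows each word in a small text sample,
# then "predicts" the next word by picking the most frequent continuation.
from collections import Counter, defaultdict

sample_text = "the student wrote the essay . the teacher graded the essay .".split()

# Count which word follows each word in the sample.
next_word_counts = defaultdict(Counter)
for current, following in zip(sample_text, sample_text[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word` in the sample."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

print(predict_next("student"))  # -> "wrote": the only word seen after "student"
print(predict_next("the"))      # -> "essay": seen twice, vs. "student"/"teacher" once
```

The sketch has no notion of meaning; it only reproduces statistical regularities in its sample text. That, scaled up enormously, is the sense in which ChatGPT "writes" coherent prose without understanding it.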
OpenAI, the company that created ChatGPT, offers free and paid versions of its chatbot service. GPT-4o mini is the most recent freely available version of ChatGPT and is easily accessed with a web browser and an internet connection. GPT-4o, the newest version as of May 2024, is described on OpenAI's website as being "faster" and as the "newest flagship model."16 Although all free-tier users can access all three versions (GPT-4, GPT-4o mini, and GPT-4o), free users can only use GPT-4o a limited number of times a day, beyond which they are automatically reverted to GPT-4o mini.17 Greater access to GPT-4o requires an individual $20/month "Plus" subscription and improves accuracy by integrating Bing search results. GPT-4 and GPT-4o are more factually accurate than GPT-3.5 from 2023, which relies only on pre-trained data sets. There is additionally a highest "Pro" subscription tier for $200/month, which gives users "the highest level of access."18
The immense computing power ChatGPT uses runs on remote servers rather than on a device's hard drive, so one can access ChatGPT through a browser on a desktop, a laptop, or the convenience of a smartphone. However, there are other free LLMs: OpenAI's rivals include chatbots like Google's Gemini and Meta's Llama 2. Unless otherwise specified, from here on all references to "ChatGPT" mean GPT-4o or the most recent freely accessible version.
ChatGPT has enjoyed its place in the relatively new AI chatbot space, with approximately 400 million weekly users as of 2025.19 ChatGPT has become a valuable tool for summarizing text and generating written content, and users can interact with it for various tasks, such as creating workout plans or meal suggestions.
2. Social Definitions of AI
While AI is technically defined by its algorithms, data, and computing power, AI embodies a significant social dimension that influences how it is perceived and accepted by the general public. Thanks to ChatGPT’s recent popularity, the broader social perception of AI centers around generative AI chatbots like ChatGPT that help people compose essays, create to-do lists, and generate images from textual prompts.
Another side to the social definition of AI is that it is often categorized as something almost mystical or otherworldly, a perception shaped not just by what AI can do, but also by how it is presented and understood in popular culture and media. This phenomenon is also recognized as Tesler's Theorem, referring to the famous phrase by Larry Tesler: "AI is whatever hasn't been done yet."20 Tesler's Theorem reflects a moving goalpost when it comes to AI and technological innovations generally: once a particular technology becomes commonplace, it is no longer seen as "AI" in the mystical sense.
For example, Google Translate is, definitionally, an advanced AI application. Google Translate's algorithm implements machine learning by continually improving its translation service based on user input data.21 A decade ago, Google Translate's ability to break down language barriers by translating text in real time would have been considered AI. Imagine time traveling to 2000 and telling a friend that there was a way to translate text in real time: they would have found it miraculous or even frightening. Yet many of us today do not think of Google Translate when AI is mentioned in conversation. Thanks to the ChatGPT boom, Google Translate is likely not the first thing regulators are scrutinizing when discussing AI law. As Tesler's Theorem suggests, when AI technology evolves and becomes more integrated into our everyday lives, the public's perception shifts.
Thus, it is important for regulators to understand ChatGPT and other AI apps by both their technical and social definitions. With these varying definitions shaping what qualifies as AI in the public's mind, it is exceedingly difficult to create laws around the technology. For those who are not familiar with the technological underpinnings of ChatGPT, these social definitions muddle their understanding of AI. There is more to AI than ChatGPT, but lawmakers, and much of the world, are caught up in the ChatGPT craze, clouding their views on how to effectively regulate AI.
To regulate AI effectively, whether through existing legal frameworks or new ones, lawmakers need to think beyond just ChatGPT and understand both the technological and social impacts of AI, as well as the direction that AI is headed. Regulators should not be misled by the hype surrounding AI nor allow these perceptions of AI to affect the urgency and focus of regulatory measures.
D. ChatGPT in Education
The AI explosion has affected educators as well, as they confront questions about whether AI could replace teachers and transform the way schools are run. At the classroom level, overworked teachers want guidance on AI, like whether their students are allowed to use it and for what purposes. The ChatGPT obsession in education also follows the cyclical trends of fearmongering surrounding new technologies. We were once afraid of calculators and the internet, which left teachers with pedagogical questions about whether there was still a purpose in teaching children how to do arithmetic. Teachers in the 1970s worried that calculators would make math class obsolete, but that is far from true today.22 Currently, schools are having similarly worried reactions to ChatGPT, leading some to implement outright bans on the technology.
New York City Public Schools’ decision to ban ChatGPT on school devices illustrates common concerns about AI in education.23 The spokesperson for the school system cited “negative impacts on student learning,” emphasizing that ChatGPT provides quick answers but does not foster critical thinking skills.24 This concern reflects a broader apprehension about the role of AI in education: that it might diminish essential cognitive skills rather than enhance them.
Interestingly, there is a disconnect between the regulation of AI use in K-12 schools and in the professional world. Although some professionals are being warned not to use ChatGPT, such as lawyers being disciplined for misusing the chatbot,25 many companies have provided no guidelines for their workforce.26 There are even jobs where ChatGPT has become an essential tool because it has made working so much easier (e.g., management consulting, software engineering, marketing, etc.).27 From updating a resume to coming up with talking points for client slide decks, ChatGPT has made work much simpler for some professions. Studies show that many young adults in the workforce use AI to assist with their work,28 yet teens are often not allowed to use it in school.
Higher education's approach towards AI tools also differs from secondary education's. Higher education tends to skew more in favor of allowing students to use ChatGPT in certain contexts,29 whereas secondary education has been largely against its use. ChatGPT use may be more acceptable in higher education because, in contrast to their younger high school peers, college-aged students are more mature and may have better judgment when it comes to using these tools ethically.
Views on whether ChatGPT should be allowed in the secondary classroom are deeply divided.30 These differing attitudes towards AI tools in secondary and higher education create a strange paradigm for young adults: some may not have been allowed to use ChatGPT at all in K-12 school, only to reach adulthood and post-secondary education where they are suddenly encouraged or even expected to use the tool. A student who had ChatGPT fully integrated into their classroom will have a very different attitude towards the technology than a peer whose school district banned ChatGPT.
One survey suggests that these differences in use and perception of AI tools are also correlated with race and socioeconomic status. Seventy-two percent of White teens had heard about ChatGPT, compared to 64% of Hispanic and 56% of Black teens, and a greater percentage of teens from households making at least $75,000 annually had heard of ChatGPT than their peers from lower-income backgrounds.31 The "digital divide" is well documented in socio-educational research,32 and the decisions that schools make in restricting access to ChatGPT are likely to further exacerbate the technological divide between students of different socioeconomic backgrounds.
1. Education’s Misunderstanding of ChatGPT
Education research on AI has received significant attention in the past five years. Large governmental and intergovernmental organizations, including the Office of Educational Technology at the U.S. Department of Education (US DOE) and UNESCO, have conducted education research as policy drivers to determine guidelines for AI use in education.33 However, a significant flaw in these AI guides is their assumption of a certain level of technological literacy in students. For example, the US DOE mentions cultivating AI literacy but does not address the prerequisite technological literacy that effective AI literacy requires.34 The guide speaks only to those who are already at a certain level of technological literacy and assumes the ability to teach efficient AI usage.
The National Center for Education Research (NCER) also lists dozens of funded opportunities to research AI's pedagogical efficacy in K-12 classroom learning.35 These projects are especially focused on developing AI chatbots and other types of AI apps to be integrated into classroom learning. There has also been an explosion of published education research on ChatGPT in particular, with 357 articles published between 2022 and 2023, a steep increase from 148 between 2021 and 2022.36 Popular topics include developing and scaling AI tutoring applications and teachers using AI to reduce their workload via AI-powered grading.37
However, much of the education research on AI is misguided, reflecting a sector-wide misunderstanding of how ChatGPT and other generative AI tools work. Too many studies examine how accurately ChatGPT answers prompts about specific subjects such as science or language education. Although ChatGPT's factual accuracy is a valid concern, these types of studies do not meaningfully capture what the technology is capable of or how it should be evaluated. For example, one study criticized ChatGPT for producing factually inaccurate answers to science-related questions and for its tendency to fabricate sources.38 Although the paper's introduction briefly explained how ChatGPT's technology works, the research in the rest of the paper demonstrates a lack of AI literacy by focusing on the accuracy of a chatbot's output. These "factual accuracy" studies are useless because they overlook that ChatGPT is a constantly evolving technology. Because these AI chatbots are regularly updated through user feedback, yesterday's ChatGPT results may not reflect ChatGPT's performance tomorrow. Thus, the AI "accuracy" studies dominating the education research scene demonstrate education researchers' misunderstanding of generative AI technology.
E. Unique Challenges to Regulating Education
Regulating AI in education involves navigating historical, social, and legal complexities. Effective regulation must balance federal oversight with the need for local autonomy, ensuring that policies are both practical and sensitive to the diverse needs of various schools.
First, the disconnect between educators and lawmakers creates challenges in education policymaking. Many education policymakers lack classroom experience and do not understand the daily realities faced by teachers. Consequently, even well-intentioned legislative efforts can backfire or face protest from teachers. For example, in 2017 the Hawaii Department of Education approved funding for solar-powered air-conditioning units at a local high school without realizing the school's electrical system could not handle the upgrades.39 Although the state aimed to improve students' concentration and the quality of classrooms, it overlooked the fact that the high school was located in a small town with limited electricity generation. Thus, millions of dollars were wasted.
Second, lawmakers tend to avoid micromanaging schools, often providing only broad curricular requirements. Specific teaching methods and content delivery are left to individual teachers to address the diverse educational needs across different communities. Attempting to regulate AI use through one sweeping education law could overstep into an area that is usually self-regulated. Furthermore, education laws can prove very difficult to enforce. For example, during COVID-19, Hawaii's attempt to implement a single online curriculum for all public schools failed to meet varied classroom needs and was not implemented successfully across the state.40 I recall not using the state-provided English curriculum because it was not suited to the needs of my special education students, and this caused them to fall behind in the coursework. A broad AI law would likely face similar challenges in execution and application, mirroring the issues during the pandemic.
Regulating education at the federal level is also challenging because of the sheer number of schools across the country. Each school has unique needs and contexts, making one-size-fits-all regulations impractical. This diversity not only challenges broad regulatory efforts but also highlights the difficulties of implementing consistent standards across such a varied landscape. For example, the landmark federal education law, the No Child Left Behind Act, faced criticism and pushback from educators and families alike.41 Stakeholders highlighted the difficulty of satisfying diverse education needs when sweeping education laws of this type put too much emphasis on standardized testing. It is difficult to satisfy such a large number of affected constituents with overarching federal education legislation.
II. AI Risks for Young Adults in Education
A. Overlooked and Under-Protected: Young Adults as a Protected Group
Young adults, whom I define as ages 13-17, are overlooked in education and legal regulation, perhaps because they occupy an awkward space between childhood and adulthood and display a wide range of maturity levels. This developmental stage is marked by significant variability: a 13-year-old might still look and behave like a child, while a 17-year-old may be on the cusp of adulthood.
One reason for the under-regulation of young adults stems from societal attitudes toward parenting and education. Once young adults reach high school, many parents mistakenly believe that they require less supervision. As a result, young adults are often delegated adult-like duties, like working part-time jobs or caring for younger siblings. This perception that young adults are “grown” can lead adults to treat them as adults prematurely, despite their ongoing development. Psychologists note that young adults often suffer from being “given too much responsibility at too early a stage of their development” due to misguided expectations from parents.42
Although young adults are generally more developed than children under 13, they still lack the full cognitive and emotional maturity of legal adults over 18. While young adults can exhibit cognitive capacities similar to adults, such as when providing informed consent, their decision-making skills are not fully developed due to the ongoing maturation of the prefrontal cortex.43
The law also recognizes the need to treat young adults differently from younger children, especially in juvenile sentencing. For example, the Supreme Court has recognized that juveniles, defined as individuals under 18 (or under 21 in some cases), have developmental differences from adults.44 This acknowledgement has led to the rejection of extreme sentencing for youth in recognition of young adults' ongoing development.
Critics may point out that defining an exact age group for regulation is difficult because maturity varies widely amongst young adults. Child developmental experts highlight the unpredictability of how “much a young person can manage on her own,”45 while also disagreeing on whether adolescence is a “distinctive stage” or a “matter of gradual progression.”46 Others may fear that regulating young adults could be perceived as over-policing, infringing on their growing independence.
Another challenge is that different laws define the age of majority differently for different purposes. This existing patchwork of laws, with different ages of consent for teens and different definitions of a legal adult, complicates regulation. While 18 is widely accepted as the age of majority, the Children's Online Privacy Protection Act (COPPA) defines children as those under 13, and the age of consent also varies across states depending on the subject matter, such as for marriage, sexual activity, and medical procedures.47
However, the fact that young adults are still developing is precisely the reason we need to have separate regulation considerations for them. The in-between status of young adults calls for regulations uniquely tailored to their developmental stage, rather than an approach that lumps them in with either young children under 13 or legal adults. The regulatory mechanisms that I suggest for young adults are specific to education contexts and do not require overly harsh parental monitoring of every AI interaction. It is really a matter of whether young adults can adequately understand the risks and benefits of ChatGPT and whether they are old enough to consent to the processing of their data when they use AI tools.
B. Protecting Young Adults Through Educational Contexts
The best way to protect young adults from generative AI’s risks is by addressing these issues within the education context. My earlier story with Brian is not an unusual situation. Young adults represent a significant user base for ChatGPT, and they are likely to continue using similar generative AI tools in their post-secondary education, especially because ChatGPT is already an integral part of many jobs. Thus, the best place to begin managing AI risks for young adults is where young adults spend most of their time – in educational settings like school.
Although younger students may have the capacity to use ChatGPT, I hypothesize that older students use ChatGPT more because typing inputs into ChatGPT requires a certain level of literacy, both written and technological. It would be helpful for regulators to survey K-12 children or secondary students on their use of ChatGPT and other AI tools, determining which apps students feel familiar with, which age groups use AI tools, and how students are using them. Furthermore, integrating AI tools into school curricula would also align with the pedagogical movement towards teaching relevant, 21st-century life skills.48
C. An Educator’s Perspective on the Benefits and Risks of ChatGPT
As a former high school teacher, I recognize both benefits of and concerns about ChatGPT’s usage by young adults. However, my risk analysis diverges from the current focus of lawmakers and other legal scholars on data privacy and content moderation. The privacy risks that current AI regulations seek to address are valid, as training, running, and using AI apps like ChatGPT involves a gargantuan amount of data processing. Content moderation concerns are valid as well, since ChatGPT has the ability to generate harmful content. However, I argue that the intense focus on privacy as a means to regulate AI is too narrow-minded.
The digital transformation of our world has made us realize how much information is generated, stored, processed, and transmitted through digital systems. Consequently, privacy law has become the law of everything – a catch-all framework because data underpins so much of our information-based society. Privacy issues related to AI include whether data collection is consensual, the implications of scraping data to train AI systems, and the potential for automated decision-making by AI systems to make harmful inferences about data subjects.
It makes sense that concerns about regulating AI are so focused on privacy legislation, as we are in a moment that tends to define privacy broadly. Scholars like Ryan Calo critique this method, calling for a more concrete understanding of privacy law and privacy harms.49 Nevertheless, the emphasis on privacy overlooks other harms to young adults (mis)using AI, including inequitable access, educational harms, developmental harms, and unethical use.
1. ChatGPT’s Benefits
New York City made headlines when its public schools banned ChatGPT from school devices, only to reverse the ban three months later.50 I speculate that this reversal indicates that New York’s public schools recognized the impracticality of prohibiting its use, but also began to recognize ChatGPT’s benefits. ChatGPT is undeniably advantageous for young adults to learn to use.
One of ChatGPT's greatest potentials is to lessen the workload of teachers while enhancing student learning. Numerous education studies are exploring how AI can transform learning outcomes, such as by developing AI-powered tutoring chatbots in specific subjects.51 For example, ChatGPT has been studied for its potential to tutor students in various subjects, providing individualized learning experiences and improving academic performance. This support is particularly beneficial for students with disabilities, as ChatGPT's scaffolding abilities can break learning into digestible chunks and offer tailored tools to help varied learners more effectively.52 Additionally, teachers may hope to use ChatGPT to create and grade assignments, saving time and allowing them to focus on more important tasks like in-class instruction and professional development opportunities.
However, many of these benefits focus on how teachers can utilize ChatGPT, with students receiving only residual benefits from having more efficient educators. What is less emphasized is the direct benefit to young adults using ChatGPT. From my observations while teaching during the pandemic, learning to use current technologies is a critical life skill for young adults. Regardless of a young adult's career trajectory, technological literacy will be important as technology becomes more enmeshed in our everyday lives.
While it is easy to take technological literacy for granted, I observed many of my high school students struggle with basic digital skills in 2019. These 14-year-olds did not know how to use Google Search or a word processor. A digital literacy program must cover how AI tools like ChatGPT work, as well as guidance on how to use such tools effectively. This knowledge is critical to every young adult's educational experience in order to prepare them for a technology-driven future.
2. ChatGPT’s Risks
While ChatGPT offers promising educational benefits, its usage also poses several risks. Beyond the mainstream privacy and content moderation risks percolating through the existing AI regulatory space, I recognize four additional risks related to young adults and education: inequitable access to AI technologies, pedagogical harms, developmental harms, and AI misuse.
2.1. AI Access and Literacy
One of the foremost challenges is ensuring equitable access to AI technologies like ChatGPT. OpenAI markets itself as a company aiming for digital equity by making its technology as accessible as possible.53 ChatGPT is currently a very popular product, but what happens when the next best AI product is prohibitively expensive? From an educator's perspective, AI's accessibility to young adults is concerning. AI accessibility has two parts: availability of the product and an understanding of how to use the product effectively. The fact that an educational tool is available to a student does not mean that the student knows how to use it. Without proper integration and use guidelines, AI's introduction into the K-12 education space will only exacerbate the digital divide in education. Schools around the world will experience a repetition of the pandemic-era shift to digital learning, with students from underserved communities unable to fully realize the benefits of AI in education due to a lack of resources.
Barriers to AI (hardware, technological, geographic, financial, etc.) will prevent young adults' access to ChatGPT and other AI tools. For instance, while a basic version of GPT is available for free now, advanced iterations like GPT-4o require payment, effectively limiting full access to wealthier students. Additionally, although ChatGPT is currently accessible with just a smartphone and an internet connection, the need for different hardware (such as more powerful computers rather than smartphones) and software (browser compatibility) may further exacerbate access issues as the technology advances. These barriers may prevent a uniform educational experience for young adults across different socioeconomic demographics and regions, thereby worsening existing educational inequities.
Second, the misguided content of ChatGPT studies suggests that education researchers lack a fundamental understanding of ChatGPT's functions. If they understood the utility of the tool, they would not be testing the accuracy of ChatGPT's outputs. Misunderstanding ChatGPT is dangerous because it leads to inefficient use. People often treat it like a search engine (though it is not optimized for that purpose), which is why many education studies attempt to record the factual accuracy of ChatGPT's output. One study misguidedly examines the efficacy of ChatGPT in TESOL (teaching English to speakers of other languages), stating that ChatGPT is useful for compiling information and learning about unfamiliar topics for language learners because it "scours information on a common topic."54 Unfortunately, much of this type of research overlooks that ChatGPT was designed to "interact in a conversational way," not for precise fact-checking or serving as an exhaustive knowledge source.55
Studies on ChatGPT's pedagogical effectiveness thus often demonstrate researchers' lack of technological literacy about how generative AI chatbots work. Even if the best AI EdTech companies release AI-powered products, young adults (and the adults educating them) will not be able to benefit from these tools unless AI access and AI literacy accompany their rollout.
2.2. Pedagogical Harms
Students may also face educational harm from improper use of AI tools like ChatGPT. The current skepticism towards ChatGPT stems from concerns that students will not adequately learn critical thinking skills if they become over-reliant on AI to provide easy answers. Students may also query and take in inappropriate information, as well as misunderstand the limitations and nature of AI. Another critical concern is teachers' over-reliance on ChatGPT: if teachers design lessons with ChatGPT, students use ChatGPT to complete assignments, and teachers then use ChatGPT again for grading, how much genuine education is happening? This may be more a critique of the education system than of ChatGPT itself, but the goal should be to teach students the subject matter, not to teach kids how to craft the most efficient prompts to extract the response they want from ChatGPT.
A third risk of ChatGPT use in education for young adults is bias in ChatGPT's output, stemming from both the training data and the developers' biases. OpenAI states that ChatGPT is trained on publicly available internet text,56 which often reflects demographic imbalances in content creation. This raises the question of who creates content on the internet. For instance, although the internet is lauded as a space where everyone is allowed to post and become "equal," educational research on digital inequity suggests that white populations, due to greater access to broadband internet, are more able to produce online content rather than merely consume it.57 The skewed demographics of internet content, in combination with the algorithm developers' biases, inevitably produces an AI technology with biased output.
Despite developmental research and legal precedent pointing to young adults being distinct from both younger children and adults, young adults remain overlooked in platform regulation through privacy and AI laws. However, improper and overbroad regulation, such as a ban, will cause more harm than good. When schools penalize students for using AI tools, students lag behind in technological literacy, leaving them unprepared for a digital workforce where such tools are increasingly expected to be used. Inconsistent guidelines on ChatGPT usage can also lead to uneven disciplinary actions among students, causing confusion and unfair punishment.58 As some schools encourage AI use while others prohibit it, the lack of clear rules about educational AI use creates inconsistent enforcement both within and across schools. Adding to the confusion for K-12 teachers, many higher-education institutions have declined to ban ChatGPT.59 Rather than solving the pedagogical concerns that teachers have about AI use (like cheating and output bias), simple bans create more problems down the line for educators. Instead, proper guidance should include training both teachers and students on how AI chatbots process data, how to recognize that chatbot outputs may be biased or inaccurate, and how to teach students to integrate AI tools with their own critical thinking.
2.3. Developmental Harms
From a developmental perspective, there is also the risk of students forming emotional attachments to AI systems and mistaking them for human interactions, which could impact their social development and understanding of human relationships. ChatGPT does not actually "know" anything or differentiate between true and false information. Rather, ChatGPT generates text based on statistical probabilities, leading to human-like but sometimes inaccurate responses. This tendency to anthropomorphize AI can be misleading, as we must remember that ChatGPT is not capable of true intelligence as humans are.60 This risk warrants further research on how young adults' attachment to human-like technology impacts their development.
2.4. Unethical AI Misuse
Young adults will also continue to misuse AI without proper guardrails from adults. As the anecdote of my cousin using ChatGPT to write his college essay illustrates, there are serious concerns about whether using generative AI in life-altering documents like college applications is ethical, raising questions about the integrity of a young adult's demonstrated capabilities.
Another concern is the use of AI to promote violence or bullying. There have been incidents where teenagers have used generative AI to bully others by creating deepfake pornography.61 Young adults need continuous guidance from adults on which uses of AI are unethical so that these horrible incidents can be reduced.
D. A Critical Look at AI Regulation – Continued Onus on Teachers
One criticism of regulating AI through education is that regulating education often really means regulating the adults working in the schools. Teachers are already overburdened, and this approach continues to place the onus on teachers to determine AI usage standards. Leaving individual teachers to decide how AI tools should be utilized in their classrooms can also lead to a lack of uniformity in how AI is integrated into educational practices, causing inconsistency across educational institutions.
Although burdensome on teachers, AI regulation remains crucial because young adults lack the maturity to recognize the risks of AI use themselves. Without proper guidance, young adults are unable to use AI tools effectively or ethically, leading to issues like academic dishonesty. With proper training, teachers can help students use tools like ChatGPT in their learning. ChatGPT is an excellent tool for summarizing long text and brainstorming ideas, but many young adult students are still developing critical thinking skills and technological literacy.
The potential risks of AI in education warrant a structured regulatory approach. Regulation can help ensure that all students have equal access to these technologies, preventing a digital divide. It can also set standards for ethical use, safeguarding against misuse that could disadvantage or unfairly penalize students. Thoughtful regulation can prevent schools from lagging in technological literacy, ensuring that students are adequately prepared for a future where AI plays a prominent role.
III. Current AI Regulations and Their Gaps in Protecting Young Adults
Regulatory bodies are not immune to the AI craze sweeping across all parts of our society. The rapid advancement and widespread adoption of generative AI technologies like ChatGPT have introduced new challenges and risks, particularly in the realms of data security and content moderation. AI requires a nuanced regulatory approach to ensure that the benefits of AI can be realized while minimizing potential harms. However, regulating AI is uniquely difficult because AI can be so broadly used. This can be analogized to data privacy regulation – we live in a data economy, where everything we do generates “data” of some sort in this virtual world. Similarly, AI feels so disruptive as a product because it can be applied in so many different areas.
Beyond the four Executive Orders (E.O.s) – two issued by President Joe Biden62 and two by President Donald Trump (one of which revoked Biden's first E.O.)63 – the United States has yet to pass any federal legislation specifically targeting AI. Some states have broader information privacy laws that also encompass AI, given the technology's data-heavy nature and the related need for transparency in how information is collected, stored, and processed. Although the information privacy and security of AI systems are valid concerns, I argue that this focus on privacy overlooks the other risks AI poses to young adults, particularly within educational contexts.
The regulation of AI in education is not merely a precaution but a necessity. It is crucial to address these issues proactively, setting guardrails that will protect and enhance the educational experiences of young adults. By doing so, we can harness the benefits of AI while mitigating its risks, ensuring it serves as a tool for educational enhancement rather than a source of inequality or ethical breaches.
A. Federal Education Privacy Laws’ Shortcomings: COPPA and FERPA
There is no federal law regulating AI directly, but there are a few federal laws that touch on AI use by young adults or in education broadly. As we see with COPPA and FERPA, current laws end up regulating AI despite not being AI-specific. However, the laws currently encompassing AI regulation either fail to adequately protect young adults or do not adequately protect against AI's educational risks. Existing legal mechanisms such as the Biden E.O. (prior to its revocation), state privacy laws, and content moderation laws also fall short of effectively protecting young adults. Below, I discuss laws that have been proposed to specifically regulate AI, as well as laws that happen to encompass AI because of features of the technology.
1. FERPA
In 1974, Congress passed the Family Educational Rights and Privacy Act (FERPA), an educational privacy law to give parents control over their children’s privacy through a student’s educational records. FERPA treats students under 18 as a protected class and does not clarify the difference between “student” and “children.”64 When a student turns 18 or enters a postsecondary institution at any age, the rights under FERPA transfer from the parent to the student.65
FERPA has been criticized for failing to protect students' educational privacy in practice.66 Furthermore, FERPA is more a law about how the adults in a young adult's life can delegate control over the young adult's educational data. Young adults cannot consent for themselves when it comes to educational records; parents or guardians are supposed to consent on their behalf.
Although FERPA covers young adults, it is not clear whether AI is implicated. FERPA has been criticized for its shortcomings when it comes to technological infrastructure in EdTech: educators are overwhelmed by data tracking practices, leading to an unsustainable system that burdens students, parents, and educators while failing to ensure "meaningful transparency, accountability, and scrutiny over schools' information practices."67 FERPA does not adequately address AI risks. FERPA's potential AI application involves schools and adults inputting young adults' data into an AI system, rather than young adults themselves using the AI. FERPA does, however, raise the concern of how an adult's use of an AI tool can implicate a young adult's data.
2. COPPA
The Children's Online Privacy Protection Act (COPPA) is a federal children's privacy law that defines "children" as those under 13 and regulates digital platforms through its restrictions on the collection of children's data.68 Because COPPA defines children as those under age 13, it creates a gap for young adults aged 13-17. Young adults are instead treated as adults who can consent for themselves without parental involvement for the purposes of online services. Given that AI's main regulatory mechanism is currently privacy law, COPPA's gap in young adult privacy regulation translates to a gap in young adult AI regulation.
COPPA is also confusingly inconsistent with FERPA in defining who qualifies as a protected minor.69 Reading FERPA and COPPA together presumes an age distinction between "child" and "student," as COPPA defines "child" as under 13 while FERPA defines students as under 18. This protected class distinction between "child" and "student" under COPPA and FERPA seems arbitrary. For example, FERPA gives parents full power over their 14-year-old's educational data, but COPPA gives them no legal power to see the data their 14-year-old has shared with an online service or video game.
For COPPA, protecting children's privacy effectively translates to giving parents control over their children's privacy. The legislative history shows that Congress developed COPPA intending for parents to be in control of their children's data.70 Furthermore, even if we agree that young adults do have the cognitive capacity to provide their own consent, it does not matter in educational contexts because schools can consent on behalf of parents for their students.71 Because young adults fall outside of COPPA's protections but remain under parental or institutional authority until the age of 18 or high school graduation, it is unclear what privacy rights young adults can actually exercise in educational contexts. Merely broadening the definition of "child" under COPPA to capture all minors under 18 would also not adequately protect young adults because privacy laws will not address the educational risks of AI for young adults (i.e., the developmental, pedagogical, and accessibility risks identified in Part II).72 COPPA's scope is also limited to online services targeting children, and it is unclear whether ChatGPT or other AI chatbots would fall under this definition. Interestingly, OpenAI's privacy policy has a section titled "Children," which contains a disclaimer explicitly stating that "[o]ur Services are not directed to, or intended for, children under 13," and that users between 13 and 18 years of age "must have permission from their parent or guardian to use our Services."73 OpenAI's privacy policy regarding children is clearly influenced by COPPA and exemplifies how private companies have little incentive to create AI protections for young adults over the age of 13.
B. The 2023 Biden AI Executive Order
On October 30, 2023, President Joe Biden signed Executive Order 14110 (hereinafter the “Biden E.O.”), titled “Safe, Secure, and Trustworthy Development and Use of AI,” taking a more sectoral approach to AI regulation.74 Although the Biden E.O. was ultimately revoked by President Trump in January 2025, the Biden E.O. still warrants discussion for being the only Executive Order to explicitly acknowledge AI’s role in education (albeit only briefly). In this section, I examine the Biden E.O. and its shortcomings to highlight areas where future AI regulation can be improved beyond the Biden Administration’s efforts. By analyzing its approach and gaps, I aim to show how AI policy in education can be more comprehensive in addressing the unique challenges that young adults face when using AI technologies.
By formally naming education as a focus area of AI’s impact, the Biden E.O. established an important foundation for how future administrations may tackle AI in schools. Though no longer in place, it also offers policymakers and schools a glimpse into how the executive branch may use the law to shape AI governance in education. Given the overall lack of legal guidance on AI regulation in education, examining the Biden E.O.’s intentions – even in retrospect – serves as a valuable starting point for developing a framework that acknowledges the unique risks posed by AI in educational contexts.
Section 8 of the Biden E.O., titled "Protecting Consumers, Patients, Passengers, and Students," notably dedicated a subsection to the Secretary of Education. Subsection (d) mandated that the Secretary of Education "develop resources, policies, and guidance" regarding the "safe, responsible, and nondiscriminatory use of AI in education" and recognized a need to focus on the impact on vulnerable and underserved communities.75 The order also called for the development of an "AI toolkit" for education leaders, which included guidelines for human review of AI decisions, AI systems designed to enhance trust and safety, and alignment with privacy laws and regulations in the educational context.76
The Biden E.O. Fact Sheet further explained that one of Section 8’s goals was to “[s]hape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools.”77 It is promising that the Biden E.O. recognized the need to support educators in using AI tools, yet also disappointing that the order did not seem to recognize the need to make these tools more readily accessible for all educators and students.
Although Section 8 explicitly addressed the use of AI in education, the Biden E.O. still felt frustratingly vague in its lack of concrete details. Like the rest of the document, Section 8 contained a lot of "fluff": broad strokes touching on policy issues, but few actionable steps. And of the entire E.O., only one sentence was dedicated to education. Beyond directing the Secretary of Education to come up with this "AI toolkit," there was no other commentary on AI's impact on education or on how Section 8 should specifically guide the Secretary of Education's AI Toolkit. Educators and students were left to wait a year for a response from the Secretary of Education, while school continued and ChatGPT became further integrated into our society.
Near the end of the Biden presidency, on October 24, 2024, the Department of Education’s Office of Educational Technology published an AI Toolkit pursuant to the 2023 Biden E.O.78 The now-revoked Toolkit was divided into three sections: Mitigating Risk, Building a Strategy for Integration, and Maximizing Opportunity and Guiding Effective Use.79 The AI Toolkit seemed to provide a guide for teachers on designing curricular content around AI, making AI safe for students, and addressing potential barriers to students’ access to AI. It was promising to see that the AI Toolkit incorporated issues of access and pedagogical effectiveness, a departure from the initial Biden E.O., which generally focused on the information privacy aspects of AI technology.
However, the AI Toolkit was revoked alongside the Biden E.O. when President Donald Trump took office and is no longer available online.80 It is unclear whether the AI Toolkit had any discernible practical impact, as it was released at the very end of Biden’s presidency, and then immediately revoked on Trump’s first day back in office.81 Nonetheless, the AI Toolkit’s brief publication still highlights the urgent need for a federal resource or directive to help schools navigate AI in the classroom.
Complementing the Biden E.O., the White House Office of Science and Technology Policy under the Biden Administration also introduced the Blueprint for an AI Bill of Rights, a white paper report outlining five key principles as a framework for designing, using, and executing ethical AI systems:82
- (1) safe and effective systems;
- (2) algorithmic discrimination protections;
- (3) data privacy;
- (4) notice and explanation; and
- (5) human alternatives, consideration, and fallback.
Because the AI Bill of Rights is not an Executive Order or a legally binding document, it was not swept up in the revocation of the Biden E.O., and there is no indication that it has been withdrawn.83 As of now, it remains available as a guiding framework. But again, besides signaling to the public that AI regulation is on America’s agenda, the Bill of Rights remains as vague and general as the revoked E.O. – these statements lay the groundwork for future policies but lack specific, actionable directives.
The “EducateAI” initiative, an update to Section 8, was launched to help fund educators creating high-quality, inclusive AI educational opportunities at the K-12 through undergraduate levels.84 However, EducateAI also seemed to be tied to another part of the Biden E.O. focused on labor goals, specifically “to prioritize AI-related workforce development.”85 Like the initial Trump Executive Order in 2019, the Biden E.O. highlighted a recurring theme of prioritizing the short-term economic benefits of labor and employment over the long-term benefits of educational transformation. Given that the Biden E.O. has been revoked, it also remains unclear whether EducateAI remains active or whether the millions of dollars already invested in the initiative will be withdrawn.86
A future Executive Order inspired by Section 8 of the Biden E.O. would require more detailed guidelines to be truly effective. The integration of AI in education needs to address risks beyond children’s data privacy, algorithmic fairness, and AI’s impact on the labor force. We must also tread carefully in creating regulatory guardrails to ensure effective AI integration and equitable access to AI technologies through educational systems.
Yet, I acknowledge the benefits of these broad types of executive orders. AI is such a fast-growing space that hasty and narrow regulations may inadequately address new technologies. In the time it takes for a regulation to pass through the legislative process, the technology is likely to have advanced and changed. Given education’s historical challenges with federal regulation, the more flexible, guideline-based approach that an instrument like the AI Executive Order promises is likely to be more effective than a strict formal law restricting particular uses of AI in education. This malleability is helpful, and in Part IV, I offer factors that the Secretary of Education and other local education regulators should consider in thinking about the types of AI guidelines that would be most helpful for their classrooms or schools.
C. State Laws
In a regulatory landscape without comprehensive AI-specific legislation at either the federal or state level, state laws covering data privacy, children’s privacy, and social media moderation fill some gaps in regulating AI use among young adults. However, given the heavy privacy focus of AI regulation, the existing state laws that encompass AI remain inadequate in addressing risks beyond privacy, such as accessibility and developmental considerations for young adults.
1. Privacy Laws
Most AI-related regulations are primarily found within the realm of information privacy. Yet, there remain ongoing debates among privacy scholars on what constitutes a “privacy” concern in this growing legal area. Helen Nissenbaum’s theory of contextual integrity offers one perspective, characterizing privacy as the appropriate flow of information within specific contexts.87 In contrast, Daniel Solove categorizes different types of privacy harms, reflecting the complexity of privacy issues.88 Recently, Angel and Calo criticized Solove’s social-taxonomic approach, suggesting that the wide range of information-based harms, such as consumer manipulation and algorithmic bias, has diluted the core principles of privacy law.89
Given the extensive data processing involved in AI technology, as well as privacy law’s tendency to reach into every aspect of our lives, understanding privacy concerns and the desire for transparency around data processing is integral to understanding AI’s legal challenges. State privacy laws are crucial because generative AI models like ChatGPT require vast amounts of data to function effectively, often incorporating personal and sensitive information.90 Consequently, state data privacy laws directly impact how AI technologies can be developed and used.
For instance, the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA), which amended and expanded it, set rules for companies engaged in data collection, storage, and processing.91 These laws require companies to disclose what data they collect and allow consumers to opt out of data sharing.92 Such regulations push AI developers to implement robust privacy measures, safeguarding user data from misuse or unauthorized access.
However, the application of these laws to AI technologies is of little help in addressing the specific educational and developmental risks posed to young adults. Laws like the CCPA and CPRA regulate how companies collect, store, and share personal data, but they do not address the ways AI tools affect critical thinking, introduce algorithmic bias, or create unclear consent requirements for young adults. State privacy laws also typically include a carve-out for COPPA so that the state law does not face a preemption issue. These COPPA carve-outs perpetuate the disjunction described earlier: children under 13 are subject to full parental oversight of their data, while young adults are treated like adults despite being under 18, overlooking their unique developmental needs. Furthermore, existing data protection frameworks do not address the unethical uses of AI tools by their users.
Lastly, in the educational context, state education surveillance laws, as well as Fourth Amendment jurisprudence, show that K-12 students have a lesser expectation of privacy in schools.93 School officials may search students within reason and may place surveillance cameras in classrooms. This leaves open the question of whether young adults will receive separate protections as more advanced technologies like AI become further integrated into school surveillance systems.94
2. Child Design Laws and Social Media Moderation
Several states have attempted to pass laws on children’s privacy and on teen social media and gaming use to address risks like harmful content exposure, addictive app design, and data-sharing practices.95 However, these child design and social media laws only provide indirect regulation of AI and do not sufficiently address the risks to young adults I describe in relation to the educational context.
One example is California’s Age-Appropriate Design Code Act (CAADCA), which regulates online services and products used by children by requiring specific privacy protections.96 Although the future of CAADCA is uncertain due to First Amendment concerns, it is worth noting that CAADCA is one of the first laws to recognize that different age groups of minors have different needs for protection.97 Unlike COPPA, which defines a child only as a person under 13, CAADCA tailors its safety measures to different age groups and acknowledges that minors will access different online products and services depending on their age group.
But otherwise, CAADCA’s primary protection mechanism focuses on privacy controls for minors.98 It may be interesting to follow how AI technologies become integrated into social media and gaming platforms, such as Meta AI being integrated into Instagram’s search feature, but this does not touch upon the specific educational context which is the focus of this Note.
Similarly, Florida, Utah, and Montana have attempted to pass laws restricting access to certain social media platforms or content deemed harmful to children or teenagers, rather than addressing how AI might shape their educational experience.99 While social media laws inadvertently regulate AI to the extent that social media platforms and online gaming utilize AI-driven algorithms,100 again these laws are not designed with education in mind and thus do not offer the nuanced regulatory approach I call for regarding young adults. Thus, the move towards social media regulation does not adequately address my concerns with AI tools like ChatGPT, especially because ChatGPT is neither a social media platform nor intended for minors. If policymakers are looking to protect young adults, they must acknowledge the risks and benefits of AI in educational contexts, which are distinct from AI applications in social media and gaming.
IV. Suggestions
As AI continues to permeate various sectors of our society, it is imperative that regulatory frameworks evolve to address the unique challenges and opportunities presented by these technologies. One challenge of AI regulation is the number of laws that already encompass AI in some capacity. It is more effective to regulate around AI’s specific use cases – that is, to think about AI regulation sectorally – and, for purposes of this Note, to consider how best to equip young adults to engage with AI safely and effectively in the education context.
Given education’s unique regulatory challenges, traditional forms of top-down lawmaking are unlikely to be effective. Instead, flexible regulation through suggestions and guidelines is more practical, while outright bans are ill-advised. Building on the groundwork laid by the now-revoked Biden E.O., I propose suggestions to the Secretary of Education, as well as to other regulatory bodies like schools and teachers, for developing an AI Toolkit tailored to the unique needs and risks posed by AI in educational contexts. These recommendations reflect my own perspective on what a future regulatory body should consider when formulating a new AI Toolkit, with a special focus, informed by my experience as a high school teacher, on managing AI use among young adults. The suggestions are not exhaustive but serve as a starting point so that our society is better prepared to realize AI technology’s potential. The goal is to enable young adults to use AI safely and effectively, fostering an AI-ready generation.
A. AI Regulation is Necessary
One may question whether AI regulation is necessary at all. We could instead hope for the market to self-regulate, allowing AI companies, schools, and young adults to figure out for themselves how best to utilize AI. However, the risks are too numerous, intervention in educational technology is long overdue, and regulators must recognize that young adults require considerations distinct from those for adults over 18 and children under 13. It is unjust to infantilize young adults by denying them the agency to use AI tools, and irresponsible not to give them proper guidance in using those tools.
Current AI regulation already occurs through laws that happen to encompass AI because of AI’s diverse technological features. As AI progresses and integrates into other areas, more regulatory challenges will arise. Just as privacy has become a “law of everything” in a world that relies on a huge data economy, we will find ourselves in an AI economy in which all parts of our lives, and the technologies we use, incorporate generative AI.
Instead of continuing to overlook the multitude of risks that AI holds for young adults, we must think about these issues and set guardrails now. Otherwise, we may see another crisis like the pandemic – which exposed the severity of the digital divide in education – this time with AI technologies.
B. The Case Against Banning ChatGPT – No Putting the Genie Back in the Bottle
Banning ChatGPT should be avoided because bans are an ineffective regulatory mechanism. Banning ChatGPT is more harmful than allowing students to use it, and enforcement challenges will arise out of attempting to police its use.
First, any form of a ChatGPT ban will be unsuccessful, whether it involves restricting the app from school devices or networks or prohibiting the use of ChatGPT for classroom assignments. Enforcing a ban on school devices or school networks is simple in theory – if a teacher notices a student using ChatGPT at school, the student is disciplined. However, young adults can simply access ChatGPT on their phones, as 95% of teens aged 13 and older owned a cell phone in 2022.101 Similarly, banning ChatGPT from school networks is ineffective because a student can use cellular data on their smartphone or find proxy sites. For example, a common “hack” that my own high school students used was entering a URL into Google Translate to bypass school firewall restrictions.
A broader ban on ChatGPT use for classroom assignments is also ineffective because of enforcement challenges. It is difficult to determine whether a student’s work was created with the help of ChatGPT. There is also a line-drawing question of whether any use of ChatGPT is contraband, such as brainstorming paper topics, or whether only copying the output directly into an assignment constitutes the prohibited behavior.
Second, the burden of enforcing a ban shifts to teachers, becoming another item on a teacher’s never-ending to-do list. Teachers will have to use their judgment to determine which assignments seem AI-generated or AI-assisted, and such work is increasingly difficult to detect. AI detection tools, often described as “snake oil,” are highly inaccurate.102 There is currently no reliable way to detect AI-generated writing, and purported AI-detection products such as ZeroGPT have dismal accuracy rates.103 AI detection is also more complicated than other cheating-detection methods: it is easier for a teacher to physically observe a student looking at another’s work during an exam, or for a plagiarism detector to find matches in preexisting text.
If a ban is enacted, I also anticipate numerous student complaints about being falsely accused of using ChatGPT, which will strain already burdensome disciplinary processes. Schools are normally hesitant to accuse a student of academic misconduct unless the cheating is blatant, as over-disciplining is not in the best interest of either the students or the school. I recall one of my own students with a second-grade literacy level submitting a flawless essay completed at home. I was unable to “prove” that the parent wrote the assignment for the student, but I very much suspected this was the case. Although I secured support from a vice principal to address the issue with the parent, I could not outright accuse them of cheating without concrete evidence.
Finally, banning ChatGPT deprives young adults of valuable resources widely used by adults and higher education institutions. Prohibiting ChatGPT in secondary schools fails young adults by denying them the opportunity to learn about appropriate AI use. To prepare young adults for post-high school careers, it is better to promote responsible AI integration into the high school classroom rather than impose a ban.
C. Building on the Legacy of the Biden AI E.O.: Creating an Education-Informed AI Toolkit
That said, ChatGPT is not risk free. The Biden AI E.O. and the previously published AI Toolkit, though revoked, can still serve as a flexible foundation for creating AI guidelines geared towards demystifying AI for young adults. The repeal of these resources effectively leaves the United States back at square one, underscoring the urgency of developing a new AI Toolkit that properly addresses ethical, developmental, and pedagogical considerations. Instead of focusing solely on teachers’ use of AI to streamline classroom practices, the Secretary of Education and other policymakers should design resources that help students understand how to engage responsibly with AI, ensuring thoughtful AI integration into the classroom.
1. Addressing Ethical and Pedagogical Considerations
AI tools like ChatGPT can be tempting for teachers looking to reduce their workload. However, relying too heavily on AI for tasks like grading and lesson planning raises significant ethical and pedagogical questions.
Teachers may find it appealing that ChatGPT can create and grade assignments, and thus lower their workload, but they should be cautious about using AI in this way. If ChatGPT can generate an assignment and a student can then generate a perfect response to it, the assignments teachers are giving may be too formulaic – a shortcoming of the education system rather than of ChatGPT. Furthermore, the potential for students to use AI to complete assignments highlights the need for more meaningful assessments. Reliance on AI by teachers and students to complete and evaluate work undermines the fundamental purpose of education: to develop independent thinking and problem-solving skills. The rise of AI tools like ChatGPT thus reveals existing shortcomings in the education system; ChatGPT may not be the root cause of these educational challenges, but is merely uncovering them.
Since banning AI is not an option, educators should compile a list of appropriate, or even encouraged, ChatGPT uses for students. This would help set a regulatory floor, identifying AI applications deemed inappropriate for education and young adults. This list should be an ongoing discussion backed by various stakeholders, such as, but not limited to: AI specialists, EdTech specialists, teachers, students, and parents. These stakeholders should brainstorm ideas collectively and determine suitable uses of ChatGPT, providing young adults with proper guidance on using AI tools. This approach will also better inform educators on how to integrate AI into their curricula effectively and responsibly.
Some appropriate ways to utilize ChatGPT may be:
- Brainstorming ideas for writing prompts
- Summarizing large text
- Finding templates for writing prompts
- Requesting organizational support such as chunking, or breaking assignments down into more manageable pieces, for students with disabilities
Inappropriate uses for ChatGPT may be:
- Having ChatGPT generate an assignment for you and then copying it with minimal personal effort
- Relying solely on ChatGPT for factual research
- Asking questions about sensitive information like depression
This list is not meant to be exhaustive, as stakeholders will have different opinions on each use case even within this list. Some uses of ChatGPT might clearly constitute cheating or inappropriate conduct, but many more will fall into a gray area.
Institutions may find it appropriate to draw bright lines around which situations are unacceptable. For example, for college admissions essays, universities and high school college counselors should clearly communicate to their students that student essays should not be AI-generated, to ensure that college applications are largely or completely representative of the individual student’s work.
But these types of bright lines also bring us back to questions about what, really, the difference is between ChatGPT and other existing tools. For example, what is the difference between an elite college consulting company that heavily assists in writing student essays and ChatGPT? How about other study aids or learning tools, like CliffsNotes or Google Search? Opponents of ChatGPT would argue that these traditional aids are supplemental, designed to assist learning without replacing formal educational structures. The Secretary of Education and schools across the nation must answer complex questions like these, and one reality may be that each school district feels differently about the various use cases.
2. Stakeholders in Charge of Creating AI Guidelines
Despite the absence of a current federal AI directive – the briefly published AI Toolkit and the Biden E.O. having been revoked by the new Trump administration in 2025 – local schools, districts, cities, and states need not delay in forming their own AI guidelines. One way they can begin this process is by forming AI Councils, establishing a democratic process that can include various stakeholders such as parents, teachers, and students. Including a technologist, similar to state-level privacy enforcement roles, could also bridge the gap between lawmakers and educators. Instead of depending purely on the federal government to act, creating these councils can offer valuable testimony to legislators and departments of education, ensuring policies are informed by those directly affected. Additionally, these councils can remain flexible, adapting to changes without needing to adhere to the rigidity of law.
However, challenges include finding technologists or AI specialists willing to participate for free. Conflicts may arise, particularly with the power dynamics between adults and students or even amongst adults with different backgrounds. For example, a school teacher with classroom expertise will have different opinions than a school counselor or a privacy specialist. Without the force of law, enforcement will likely be inconsistent and will require reliance on self-regulation by teachers and schools. Varied opinions within districts or states could also lead to disparate guidelines, complicating standardization.
3. Increasing Access: A Public Utility Argument
Access to advanced AI tools like ChatGPT remains a contentious issue due to AI’s broad applicability spanning every sector of society. If the accessibility issue remains unaddressed, I worry that we risk repeating the pandemic’s impact on education, where thousands of children were left out of opportunities and fell further behind due to inadequate resources.
Although the Biden AI E.O. has been revoked and there is no longer any federal mandate guiding AI use in education, the Secretary of Education could still take initiative on this matter by investing in ongoing research to better understand how students use AI and to keep up with the fast-paced nature of the AI landscape. Deciding whether AI companies should make their products more accessible, or whether the Department of Education should invest in AI resources, is essential to ensuring that AI tools are equitable and effective for learning. Of course, one platform’s decision alone cannot solve educational inequity. EdTech companies and the greater education community must work together to determine how to continually improve access to the most effective AI tools so that all learners can benefit.
One proposed solution to address the access disparity is to treat AI in schools as a public utility, democratizing student access to AI tools.104 AI has become an integral part of modern society, significantly impacting various sectors, including education, healthcare, finance, and entertainment. Currently, advanced AI models like GPT-4o are fully accessible only to those who can afford premium services. As AI becomes more advanced, access might be further constrained to those who have the infrastructure to support high computational requirements.105 Making AI a public utility could standardize and broaden its accessibility, ensuring that everyone, regardless of socioeconomic status, has equal opportunity to benefit from these technologies.
However, implementing ChatGPT as a public utility raises several critical questions: who should carry the responsibility for managing and regulating this public utility, how should it be funded, and how can the loss of innovation be prevented once market pressure is removed? Government oversight might emphasize equity and adherence to the public interest but would risk bureaucratic inefficiencies, whereas a consortium of private companies might drive innovation but prioritize profit over accessibility and fairness.
Funding is another issue: who would bear the financial burden of maintaining and upgrading the AI infrastructure to ensure it remains cutting-edge and efficient? Potential funding sources could include government allocations, public-private partnerships, or even a model similar to how utilities like water and electricity are funded through user fees. The challenge is to devise a funding mechanism that ensures sustainability without compromising accessibility. Furthermore, a baseline for the public AI resource will need to be defined, such as determining whether the standard should be based on existing private services like ChatGPT or if it should include a whole new set of AI tools.
A prime example of a public service declining in quality due to reduced incentive to innovate is Hawaii’s electronic Comprehensive Student Support System (eCSSS), a centralized database for K-12 student information.106 Although eCSSS was initially a cutting-edge program, it has become frustratingly outdated.107 Without market pressure, innovation stalled, and it is now an inferior product. Additionally, expecting private companies like OpenAI to ensure product accessibility may overstep governmental authority, raising questions about the appropriate balance between government mandates and a private company’s autonomy.
And while not exactly a public utility, many students are familiar with Google Suite. Higher education institutions are already experiencing issues with Google: its early approach of offering free services to schools led to widespread adoption, but subsequent service reductions and storage limits have prompted schools to seek cheaper alternatives.108 A similar issue may arise with ChatGPT if AI tools eventually become prohibitively expensive or inaccessible due to infrastructural limitations. If the next disruptive AI tool requires expensive hardware or high subscription costs, only wealthy young adults from resource-rich backgrounds may be able to use it, further deepening the digital divide.
Testing the feasibility of AI as a public utility in a smaller, controlled setting, such as a single city or school district, may provide valuable insights. Individual schools or classrooms also offer an ideal experimental ground, as their students would form a manageable pool for revealing the benefits and challenges of AI integration. However, due to education’s risk-averse tendencies, convincing schools to budget for such a project will be challenging.
4. Increasing AI Literacy
Increasing AI education through targeted learning and research initiatives is crucial for both educators and young adults. The current lack of AI guidance has led to a fragmented approach to AI literacy in education. As AI becomes more incorporated into young adults’ classroom experience, a more streamlined approach through an AI literacy program is essential to ensure that all stakeholders properly understand how AI works. Conducting surveys on how educators and students use ChatGPT, and on their perceptions of AI, can also provide valuable insights, informing the development of AI educational programs and policies. These insights can also guide an “AI Tech Council” or the Secretary of Education in developing comprehensive guidelines and policies for AI integration in education.
Education researchers should focus on conducting surveys that can reveal patterns in AI usage, identify areas where additional training is needed, and highlight common misconceptions. Survey results can gauge how students, teachers, and parents actually engage with ChatGPT, providing a grounded picture of its use rather than the current fearmongering over improper uses.
These insights into how AI is used by young adults and in classrooms can inform the Secretary of Education or an “AI Tech Council” in developing policies and strategies for integrating AI in education. Addressing these aspects will foster a well-informed, ethical, and safe AI environment, ensuring AI technologies align with educational goals and ethical standards.
Proper AI literacy will also help educators determine effective and ethical AI uses for their respective schools. Any future governmental guidance on AI use in education should include a comprehensive AI Toolkit offering strategies to integrate AI into teaching methods responsibly, as well as guidance for administrators to establish clear, agreed-upon protocols outlining appropriate AI uses. This includes understanding how to use AI to supplement learning without replacing critical thinking and creativity, as well as ethical considerations like avoiding over-reliance on AI and recognizing how bias affects AI chatbot outputs. These considerations should be embedded into professional development programs to equip educators with the skills necessary to navigate AI effectively in the classroom.
Parents and guardians should also be informed, but teachers are best suited to guide young adults in using ChatGPT, since young adults spend most of their day in school. An AI literacy program for teachers and students should cover the basics of what AI is, how tools like ChatGPT work, and how to use these tools effectively. This would enable young adults to use AI effectively and ethically, while also helping teachers determine appropriate uses for their classrooms. Additionally, increasing AI literacy in young adults may help foster an early understanding of intellectual property (IP) rights in the digital era, where understanding proper sourcing and copyright protocols is becoming increasingly important. There are already some efforts by educators and the U.S. Patent and Trademark Office to increase IP literacy in children,109 and AI literacy offers an excellent opportunity to increase young adults’ understanding of IP protections. By forming ethical AI habits, young adults can better understand that although AI can be used as a tool to boost creativity and learning, AI tools are trained on other people’s work. This basic AI literacy can help prevent students from unknowingly plagiarizing or misusing AI-generated content from an early age. Additionally, for budding artists and creatives, an understanding of how their work may one day be used to train these AI tools may help them gain the skills to advocate for protecting their original works.
Lastly, guidelines to prevent emotional attachment to AI chatbots, by teaching students that AI is a tool rather than a human-like entity, are essential. Young adults need to be reminded that ChatGPT does not care for or feel anything, and to be wary of being manipulated by a machine’s output no matter how “human” it sounds. This distinction is vital to avoid potential emotional harm and to foster a healthy relationship with technology. Teaching students about the ethical implications of AI, including privacy concerns and data security, can also contribute to more responsible use.
D. Developing a “School-Safe ChatGPT”: Responsible AI Integration
During the coronavirus pandemic, EdTech experienced a brief surge of new products. Educational games like Kahoot and virtual classroom management systems like Class Dojo were very popular in the two years of hybrid teaching. To harness AI’s advantages while safeguarding students’ privacy and promoting ethical use, developing a “School-Safe ChatGPT” could be a safe and controlled option for schools to monitor student AI use and teach young adults how to use the tool. The success of a School Safe ChatGPT will also depend on its accessibility, including how expensive and scalable the product is. This AI chatbot would be designed specifically for educational settings, incorporating additional safeguards and functionalities, similar to how Zoom created a more protective version tailored for K-12 schooling.
A School-Safe ChatGPT would need stringent data containment measures, akin to AI chatbots used in legal and business settings that keep information within a firm’s closed system.110 Schools could establish agreements on data containment requirements, with stricter deletion protocols and data anonymization or encryption practices to minimize risks for their young adult users. Limiting chatbot access to specific schools or districts and ensuring the deletion of sensitive information could protect student data, and data minimization strategies could ensure that only necessary information is collected and retained. These measures should comply with regulations like FERPA and IDEA, which currently set guidelines on educational data privacy and reflect our society’s wariness of processing minors’ information out of fear of misuse.
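To make the data containment and minimization ideas above more concrete, the short sketch below illustrates one way a hypothetical school-side proxy might redact obvious identifiers from a student’s prompt before anything leaves the school’s closed system. This is a minimal sketch under stated assumptions: the roster, the regular expressions, and the forward_to_model function are invented for illustration and are not features of any existing product.

```python
import re

# Illustrative only: a hypothetical school-side proxy that strips obvious
# identifiers from a student prompt before it is forwarded to an AI service.
# The roster, patterns, and forward_to_model() are invented for this sketch;
# a real deployment would need far more robust de-identification.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def minimize(prompt: str, roster: list[str]) -> str:
    """Redact emails, phone numbers, and known student names from a prompt."""
    redacted = EMAIL.sub("[EMAIL]", prompt)
    redacted = PHONE.sub("[PHONE]", redacted)
    for name in roster:
        redacted = re.sub(re.escape(name), "[STUDENT]", redacted, flags=re.IGNORECASE)
    return redacted

def forward_to_model(prompt: str) -> str:
    # Placeholder for the call to the school's contracted AI service.
    raise NotImplementedError

# Example: only the redacted text ever leaves the school's systems.
# forward_to_model(minimize("I'm Jane Doe, email jane@school.org ...", ["Jane Doe"]))
```

The point of the sketch is the ordering, not the particular rules: minimization happens inside the school’s environment, before any external AI service ever sees the prompt.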
Currently, “Kids ChatGPT” is a “COPPA-compliant chatbot platform-as-a-service for education, youth, and children’s related companies.”111 In line with how privacy-centered platform and AI regulation is, one main difference between Kids ChatGPT and OpenAI’s ChatGPT is the privacy protections. Interestingly, Kids ChatGPT also emphasizes its “natural and informative responses” to child inquiries and its safety as a platform.112 The website does not explain what makes interactions with an AI system more “natural,” but the “safe” portion is tied to its content moderation (“[k]ids ChatGPT is designed to filter out inappropriate content, ensuring a wholesome and educational experience for your child”) and data privacy (“[n]o personal information is ever saved, and our chats vanish once your child leaves the website”).113 This seems like a great starting point, but given that COPPA is directed at children under 13, such an app would be too basic for the needs of a young adult accessing ChatGPT.
Kids ChatGPT also raises the question of how the data students input into the chatbot should be reviewed by adults. Kids ChatGPT states that an adult is “sent the unidentifiable raw chat logs bi-weekly. . . .”114 Designing an AI chatbot to give adults oversight of the type of information students are putting into it is helpful, but it raises questions about the extent of access and monitoring. Although it is important for teachers to observe how students are using the app, greater transparency into student data collection can be seen as a negative, as it contributes to the school as a surveillance state.115 Furthermore, as a teacher, I rarely looked at the swaths of student data collected on online learning platforms like Canvas and Google Classroom. Although I had access to the data, I was not given proper training on utilizing this information. Data is only as useful as what you know to do with it. An effective AI Toolkit will need to balance data transparency with proper training for teachers to use the data collected on students for the benefit of student learning.
Additionally, a “School-Safe ChatGPT” should incorporate functionalities to prevent misuse and promote ethical use. For example, the chatbot could include warnings or restrictions on certain prompts. If a student attempts to prompt the chatbot to “Write me an essay about To Kill a Mockingbird’s symbolism,” the chatbot could be trained to respond with a warning about academic integrity and discourage direct copying and pasting. Alternatively, it could be programmed to block such requests entirely, encouraging students to use AI for guidance and brainstorming ideas rather than as a learning shortcut.
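As a rough illustration of the warn-or-block behavior described above, the sketch below shows a hypothetical rule-based screen that flags prompts asking the chatbot to produce a finished assignment. The pattern list, warning text, and function names are invented for illustration; a real system would need far more nuanced classification and human review.

```python
# Illustrative only: a hypothetical rule that flags prompts asking the chatbot
# to produce a finished assignment. The patterns and warning text are invented.

ASSIGNMENT_PATTERNS = [
    "write me an essay",
    "write my essay",
    "write a paper for me",
    "do my homework",
]

ACADEMIC_INTEGRITY_WARNING = (
    "This looks like a request to complete an assignment for you. "
    "I can help you brainstorm, outline, or review your own draft instead."
)

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, message). Blocked prompts get an integrity warning."""
    lowered = prompt.lower()
    if any(pattern in lowered for pattern in ASSIGNMENT_PATTERNS):
        return False, ACADEMIC_INTEGRITY_WARNING
    return True, ""

# Example:
# screen_prompt("Write me an essay about To Kill a Mockingbird's symbolism")
# -> (False, ACADEMIC_INTEGRITY_WARNING)
```

Whether such requests are warned against or blocked outright is a policy choice for the school or district, which is precisely why stakeholder input on the list of acceptable uses matters.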
Lastly, preventing algorithmic bias is critical in creating an AI education app. EdTech companies must rigorously test AI models for accuracy and bias and train their models on diverse data sets. For example, voice-enabled tools assessing literacy skills should account for the diverse ways students speak. Without diverse training data, AI chatbots may inadequately assess a child’s language skills when the child speaks a non-standardized English such as Hawaii Creole English or African-American Vernacular English. One study has already shown algorithmic bias in automated essay scoring, which inaccurately gave higher scores to 11th grade Hispanic and Asian-American students “while being more accurate for White and African-American students” when compared to human-graded scores.116 Continuous evaluation will be necessary to maintain algorithmic inclusivity across different student demographics, and chatbots will likely need to be customized for each school’s particular demographics.
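One simple way to make this kind of continuous evaluation concrete is to compare automated scores against human scores by demographic group, as in the hypothetical sketch below. The data, group labels, and function name are made up for illustration; an actual audit would rely on validated rubrics, much larger samples, and more sophisticated fairness metrics.

```python
# Illustrative only: comparing automated essay scores against human scores by
# demographic group to surface scoring disparities. The records are invented.

from collections import defaultdict
from statistics import mean

def score_gap_by_group(records):
    """records: iterable of (group, human_score, ai_score) tuples.
    Returns each group's mean signed gap (ai - human); values far from zero
    suggest the model over- or under-scores that group relative to humans."""
    gaps = defaultdict(list)
    for group, human, ai in records:
        gaps[group].append(ai - human)
    return {group: mean(values) for group, values in gaps.items()}

# Example with made-up numbers: Group A is over-scored by one point on
# average relative to human graders, while Group B shows no gap.
sample = [("Group A", 4, 5), ("Group A", 3, 4), ("Group B", 4, 4), ("Group B", 5, 5)]
print(score_gap_by_group(sample))
```

Running a check like this regularly, per school and per demographic group, is what “continuous evaluation” would look like in practice.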
Conclusion
Current AI regulation has neglected young adults by failing to adequately respond to AI’s disruptive effects in education. But instead of trying to fill this gap with more privacy legislation, lawmakers must realize that AI involves broader risks affecting the education, development, and futures of young adults. Privacy regulation does not adequately consider educator concerns of implementation, accessibility, equity, literacy, and pedagogical effectiveness. These young adults are our world’s future, and the rise of generative AI tools will lead to greater inequities if we don’t consider these issues now.
Educational institutions should implement structured AI literacy programs designed to explain AI’s mechanics, capabilities, and limitations. By demystifying AI, educators and students can develop a more nuanced understanding of these tools, leading to more informed and responsible usage. Recognizing the technological access issues is also crucial, as leaving this aspect of AI tools unregulated will only deepen the inequalities faced by young adults in various socioeconomic groups. Simply banning ChatGPT will not resolve these issues – it is a lazy and ineffective approach that will lead to more harm for young adults and their teachers.
By confronting the technological access issue, I hope regulators will recognize that leaving ChatGPT unregulated only further entrenches the inequalities faced by young adults across socioeconomic groups and communities. By shedding light on the disparities between regulatory approaches and the realities that young adults face in classrooms and beyond, I hope to underscore the need for further research and to emphasize that advocacy at the intersection of education and AI technologies is critically underexplored.