Several years after ChatGPT’s release raised ethical concerns for education, schools are still wrestling with how to adopt artificial intelligence.
Last week’s batch of executive orders from the Trump administration included one that advanced “AI leadership.”
The White House’s order emphasized its desire to use AI to boost learning across the country, opening up discretionary federal grant money for educator training and signaling a federal interest in teaching the technology in K-12 schools.
But even with a new executive order in hand, those interested in incorporating AI into schools will look to states — not the federal government — for leadership on how to accomplish this.
So are states stepping up for schools? According to some, what they leave out of their AI policy guidances speaks volumes about their priorities.
Back to the States
Despite President Trump’s emphasis on “leadership” in his executive order, the federal government has really put states in the driver’s seat.
After taking office, the Trump administration rescinded the Biden-era executive order on artificial intelligence, which had spotlighted the technology’s potential harms, including discrimination, disinformation and threats to national security. It also shuttered the Office of Educational Technology, a key federal source of guidance for schools. And it hampered the Office for Civil Rights, another agency central to helping schools navigate AI use.
Even under the Biden administration’s plan, states would have had to helm schools’ attempts to teach and utilize AI, says Reg Leichty, a founder and partner of Foresight Law + Policy advisers. Now, with the new federal direction, that’s even more true.
Many states have already stepped into that role.
In March, Nevada published guidance counseling schools in the state on how to incorporate AI responsibly, joining the more than half of states (28, counting the territory of Puerto Rico) that have released such a document.
The documents are voluntary, but they offer schools critical direction on how to navigate the sharp pitfalls AI raises and ensure the technology is used effectively, experts say.
The guidances also send a signal that AI is important for schools, says Pat Yongpradit, who leads TeachAI, a coalition of advisory organizations and state and global government agencies. Yongpradit’s organization created a toolkit that he says was used by at least 20 states in crafting their guidelines for schools.
(One of the groups on the TeachAI steering committee is ISTE. EdSurge is an independent newsroom that shares a parent organization with ISTE. Learn more about EdSurge ethics and policies here and supporters here.)
So, what’s in the guidances?
A recent review by the Center for Democracy & Technology found that the state guidances broadly agree on the benefits of AI for education. In particular, they tend to emphasize the technology’s usefulness for personalizing learning and for making burdensome administrative tasks more manageable for educators.
The documents also concur on the technology’s perils, especially threats to privacy, the weakening of students’ critical thinking skills and the perpetuation of bias. Further, they stress the need for human oversight of these emerging technologies and note that AI-detection software is unreliable.
At least 11 of these documents also touch on the promise of AI in making education more accessible for students with disabilities and for English learners, the nonprofit found.
The biggest takeaway is that both red and blue states have issued these guidance documents, says Maddy Dwyer, a policy analyst for the Center for Democracy & Technology.
It’s a rare flash of bipartisan agreement.
“I think that’s super significant, because it’s not just one state doing this work,” Dwyer says, adding that it suggests sweeping recognition across states of the issues of bias, privacy harms and the unreliability of AI outputs. It’s “heartening,” she says.
But even though the state guidance documents show a high level of agreement, the CDT argued that states have, with some exceptions, missed key topics in AI: most notably, how to help schools navigate deepfakes and how to bring communities into conversations around the technology.
Yongpradit, of TeachAI, disagrees that these have been missed.
“There are a bazillion risks” from AI popping up all the time, he says, many of them difficult to anticipate. Nevertheless, some state documents do show robust community engagement, and at least one addresses deepfakes, he says.
But some experts perceive bigger problems.
Silence Speaks Volumes?
Relying on states to craft their own rules for this emergent technology raises the possibility of a patchwork of differing rules across states, even if the documents broadly agree.
Some companies would prefer a uniform set of rules to a patchwork of differing state laws, says Leichty, of Foresight Law + Policy advisers. But absent fixed federal rules, he says, it’s valuable to have these documents.
But for some observers, the most troubling aspect of the state guidelines is what’s not in them.
It’s true that these state documents agree about some of the basic problems with AI, says Clarence Okoh, a senior attorney for the Center on Privacy and Technology at Georgetown University Law Center.
But, he adds, when you really drill down into the details, none of the state AI guidances tackles police surveillance in schools.
Across the country, police use technology in schools, such as facial recognition tools, to track and discipline students. The surveillance is widespread: an investigation by Democratic senators into student monitoring services surfaced a document from GoGuardian, one such company, asserting that, as of 2021, roughly 7,000 schools around the country were using that one company’s products. These practices exacerbate the school-to-prison pipeline and deepen inequality by exposing students and families to greater contact with police and immigration authorities, Okoh believes.
States have introduced legislation that broaches AI surveillance, but in Okoh’s eyes, these laws do little to prevent rights violations and often exempt police from their restrictions. Indeed, he points to only one bill this legislative session, in New York, that would ban biometric surveillance technologies in schools.
Perhaps the state AI guidance closest to raising the issue is Alabama’s, which notes the risks presented by facial recognition technology in schools but doesn’t directly discuss policing, according to Dwyer, of the Center for Democracy & Technology.
Why would states underemphasize this in their guidances? State legislators are likely focused only on generative AI when they think about the technology, and are not weighing concerns about surveillance tools, speculates Okoh, of the Center on Privacy and Technology.
With a shifting federal context, that could be meaningful.
During the last administration, there was some attempt to regulate this trend of policing students, according to Okoh. For example, the Justice Department reached a settlement with Pasco County School District in Florida over claims that the district had used a predictive policing program with access to student records to discriminate against students with disabilities.
But now, civil rights agencies are less primed to continue that work.
Last week, the White House also released an executive order to “reinstate commonsense school discipline policies,” targeting what Trump labels “racially preferential policies.” Those policies were meant to combat what observers like Okoh see as the disproportionate punishment of Black and Hispanic students.
Combined with the shifting priorities of the Office for Civil Rights, which investigates these matters, the discipline executive order makes it tougher to challenge uses of AI technology for discipline in states that are “hostile” to civil rights, Okoh says.
“The rise of AI surveillance in public education is one of the most urgent civil and human rights challenges confronting public schools today,” Okoh told EdSurge, adding: “Unfortunately, state AI guidance largely ignores this crisis because [states] have been [too] distracted by shiny baubles, like AI chatbots, to notice the rise of mass surveillance and digital authoritarianism in their schools.”