States scramble to publish ‘human-centered’ artificial intelligence guidelines for schools

A growing number of states are producing guidelines for the use of artificial intelligence (AI) technology in their public school systems.

Arizona State University’s research entity, the Center on Reinventing Public Education (CRPE), reported in October that only two states had offered guidance for school districts regarding the use of AI.

Now, several more states, including Washington, North Carolina, and Virginia, have published guidance on the topic.

“Like many of the innovations in technology that came before it, the world of AI is evolving at lightning speed. Also like many of the technology innovations that came before it, young people are accessing these tools and wanting to use them in their daily lives,” reads Washington’s “Human-Centered AI Guidance for K-12 Public Schools” document. 

Though the guidance documents highlight promising uses for the technology, one prominent concern emphasized by multiple states is the necessity of human oversight of AI tools.

“Our educators need to validate and trust AI-generated content before use and ensure there is always a human in the loop,” read North Carolina’s guidelines, which ask educators to consider, “What human oversight and quality control measures are used?” and “How is feedback from teachers/students being collected and actioned?”

Virginia’s guidance likewise explains, “AI cannot and should not ever replace human judgement. Although synthesis and analysis of information can be expedited through AI, it will never replace teachers who provide wisdom, context, feedback, empathy, nurturing and humanity in ways that a machine cannot.”

The states also ask educators to weigh a range of related concerns, such as protecting sensitive and confidential data from cybersecurity risks, collecting evidence that AI tools actually improve students’ educational experience, and prioritizing academic integrity.

Meanwhile, K-12 classrooms are not the only educational settings introducing unprecedented AI technology. 

Arizona State University announced last week that it will be the first university to partner with OpenAI, the artificial intelligence company responsible for the popular tool ChatGPT, giving students and staff access to a specialized version of the program. The university said it will create an ethics committee tasked with monitoring the partnership.

State governments across the country face mounting pressure from constituents to head off the potential dangers of unregulated AI.

A Pew Research Center report shows that 52% of U.S. adults are more concerned than excited about the prospect of the technology entering their daily lives.

AI regulation has also garnered bipartisan political support, with one report indicating that 85% of voters agree AI companies should be required to demonstrate their products are harmless before releasing them to the public.