New AI guidance for developers in educational technology released by Department of Ed

The Department of Education’s (DOE) Office of Educational Technology has released new guidance for tech developers integrating artificial intelligence into their educational products.

The agency cites a need for “shared responsibility” between public and private sectors to mitigate the potential risks of AI technology.

The 49-page paper, “Designing for Education with Artificial Intelligence: An Essential Guide for Developers,” responds to President Biden’s executive order on artificial intelligence last October, which called for resources to “support educators deploying AI-enabled educational tools.” It also follows up on the DOE’s May 2023 report on artificial intelligence in K-12 schools.

The guidance outlines several potential concerns for AI in ed-tech, including the “race to release,” “bias and fairness,” “data privacy and security,” “harmful content,” “cyberbullying,” and “transparency and explainability.”

The agency outlined five areas tech developers should consider as they partner with schools:

  • Designing for Teaching and Learning: developers should understand educational values and adopt “humans-in-the-loop” approaches that implement feedback from educators and students. 
  • Providing Evidence for Rationale and Impact: developers should use evidence-based practices and be able to clearly articulate the application and efficacy of AI-related products. 
  • Advancing Equity and Protecting Civil Rights: ed-tech providers should be wary of “algorithmic discrimination,” which the guidance defines as “automated systems contribut[ing] to unjustified different treatment or impacts disfavoring people.”
  • Ensuring Safety and Security: all AI technology should be developed with students’ and teachers’ data security and privacy in mind.
  • Promoting Transparency and Earning Trust: tech developers must work together with educators, students, and other people in the AI ecosystem to “expand the strength of shared responsibility.”  

The agency’s new guidance comes amid growing concern among parents, educators, and policymakers over the use of artificial intelligence in the classroom.

The rise of cyberbullying in the form of deepfakes, the potential for plagiarism, and the impact of political bias have all contributed to concerns over AI.

A growing number of states are issuing guidelines for the use of AI technology in the public school system. As of June, at least 18 states had created guidance resources about AI in education.