‘Aider and abettor’: Florida to investigate possible ChatGPT link to college shooting
The state of Florida is launching an investigation into ChatGPT and its parent company, OpenAI, over whether the company’s product contributed to the mass shooting at Florida State University last spring.
“Unfortunately, what we’ve seen in our initial review is that ChatGPT offered significant advice to the shooter before he committed such heinous crimes,” Florida Attorney General James Uthmeier said at a press briefing Tuesday.
Gunman Phoenix Ikner opened fire on FSU’s Tallahassee campus April 17, 2025, killing two vendors and wounding six students, according to the New York Post. After the shooting, OpenAI discovered a ChatGPT account believed to be associated with Ikner and shared the communication with law enforcement, Tampa Bay affiliate FOX 13 reported.
In the dialogue between ChatGPT and Ikner, the bot advised Ikner on the type of gun and ammunition to use for short-range, lethal results. ChatGPT also recommended the best time of day and the locations on campus where the shooter would encounter the greatest number of people, Uthmeier said at the press conference.
“My prosecutors have looked at this, and they’ve told me: If it was a person on the other end of that screen, we would be charging them with murder,” he said.
Florida law states that anyone who aids, abets or counsels someone in the commission of a crime that is then committed or attempted may be considered a principal to the crime, according to the state’s press release.
“The ‘aider and abettor’ is just as responsible for the crime as the perpetrator,” the release states.
Even though ChatGPT is not a person, Uthmeier said his office can still investigate whether the corporation holds criminal culpability.
“We are going to look at who knew what, designed what or should have done what,” he said. “And if it is clear that individuals knew that this type of dangerous behavior might take place, that these types of unfortunate, tragic events might take place and nevertheless still turned to profit – still allowed this business to operate – then people need to be held accountable.”
Florida issued subpoenas Tuesday requiring information from March 1, 2024, to April 17, 2026, regarding OpenAI’s policies to monitor users’ threats of harm to self or others. The subpoenas also require the company’s media communications and all cooperation with law enforcement on past, present and future crimes, FOX 13 reports. OpenAI has until May 1 to respond.
“We’re hoping that the engineers, the officials at OpenAI, will provide information on the design, their internal policies, the engineers and officials that were used in creating this application, what kind of flags they get, how they work with law enforcement,” Uthmeier told Fox & Friends on Wednesday. “This information will be helpful for all of us as we learn how to regulate AI and how to prevent it from hurting people. Technology should help mankind, not lead to its demise.”
OpenAI told FOX 13 it will continue to cooperate with authorities, but that ChatGPT provided “factual responses to questions” and “did not encourage or promote illegal or harmful activity.”
“ChatGPT is a general-purpose tool used by hundreds of millions of people every day for legitimate purposes,” the statement continues. “We work continuously to strengthen our safeguards to detect harmful intent, limit misuse and respond appropriately when safety risks arise.”
Florida Department of Law Enforcement Commissioner Mark Glass said AI is built by fallible people and requires regulation.
“It is important that all are aware of the risks of this new technology and the harms it can and has already caused in our communities,” Glass said in the press release. “The more we can educate ourselves, the better we can protect ourselves, our loved ones and our communities from scams, fraud and much worse.”
Uthmeier told Fox & Friends the United States must continue to lead in AI development while ensuring protection for its citizens.
“AI is not going anywhere,” he said. “As Americans, we need to lead on this issue, but at the end of the day, we need to regulate and we need to hold people accountable where they know that wrongs and harms could take place and don’t do enough to stop it. So we look forward to working with them and all other companies to protect our families, protect our kids and stop violence where we have the ability to do it.”