Florida Attorney General James Uthmeier has opened a criminal investigation into OpenAI, focusing on the role its ChatGPT chatbot may have played in the recent shooting at Florida State University. The investigation marks an unprecedented legal action against an artificial intelligence company in connection with a violent crime.

Authorities are examining whether the gunman used ChatGPT to plan or carry out the attack at FSU, and whether OpenAI bears any criminal liability for that alleged use. Investigators have not fully disclosed how the shooter reportedly used the AI tool, but the inquiry signals a significant escalation in scrutiny of AI companies' responsibility for harms facilitated by their products.

OpenAI has not publicly responded in detail to the investigation. The company has previously stated that its products include safety guardrails designed to prevent misuse, though critics have long argued those measures are insufficient. The FSU shooting, which resulted in multiple casualties, prompted swift calls from Florida officials for accountability.

The investigation raises broad legal questions about how far AI developers can be held criminally responsible when individuals allegedly use their tools to commit violent acts. Legal experts note that existing law offers little precedent for such prosecutions, making the Florida inquiry a potential landmark in AI liability.

The case is drawing national attention as policymakers and law enforcement agencies grapple with how to regulate rapidly advancing AI systems. Florida's move could prompt other states to consider similar actions, and may intensify pressure on Congress to establish clearer federal standards for AI company accountability.