Confidential information about OpenAI’s AI technologies was compromised in a 2023 security breach.

A hacker gained access to OpenAI’s internal messaging systems last year and stole details about the design of the company’s artificial intelligence technologies, the New York Times reported on Thursday.

The hacker lifted details from discussions in an internal online forum where employees talked about OpenAI’s latest technologies, according to two people familiar with the matter.

They did not, however, get into the systems where OpenAI, the company behind the popular chatbot ChatGPT, houses and builds its artificial intelligence, the report added.

OpenAI executives informed both employees, at an all-hands meeting in April of last year, and the company’s board of directors about the breach. They decided not to disclose it publicly, however, because no customer or partner information had been stolen.

OpenAI executives did not consider the incident a national security threat, believing the hacker was a private individual with no known ties to a foreign government, according to the report. The San Francisco-based company therefore did not inform federal law enforcement agencies about the breach.

In May, OpenAI said it had disrupted five covert influence operations that sought to use its AI models for “deceptive activity” across the internet. The episode underscores growing concern about the potential misuse of AI technology and the need for robust safeguards.

The Biden administration is weighing new measures to protect U.S. AI technology from threats posed by China and Russia. According to preliminary plans reported by Reuters, the administration is considering placing guardrails around the most advanced AI models, including the technology behind ChatGPT.

At a global meeting in May, 16 companies developing AI pledged to build the technology safely, at a time when regulators are scrambling to keep pace with rapid innovation and emerging risks.