OpenAI breached by hacker in 2023 – report
A hacker is said to have gained access to the internal messaging systems at the AI giant last year, stealing details about the design of its products.
A hacker gained access to the internal messaging systems of ChatGPT maker OpenAI last year, stealing details about the design of the firm’s AI products, it has been reported.
The New York Times said the incident saw a hacker lift details from discussions in an internal forum between OpenAI employees about the technologies being worked on by the company.
But the hacker did not get into the systems where OpenAI’s products are built and housed, the report said.
Since OpenAI launched ChatGPT in late 2022, many of the world’s largest technology companies have moved into the sector, with many experts identifying generative AI as the key innovation of this generation.
According to the report, OpenAI executives told staff and the company’s board about the breach in April last year, but did not make the details public because no customer or partner data had been stolen.
OpenAI also did not inform US law enforcement agencies of the incident, the report said, because the company believed the hacker was a private individual with no known ties to a foreign government.
“As we shared with our board and employees last year, we identified and fixed the underlying security issue and continue to invest in strengthening our security,” a company spokesperson said.
Dr Ilia Kolochenko, cybersecurity expert and chief executive at security firm ImmuniWeb, warned that attacks on AI firms are likely to continue and intensify, given the growing importance of the technology.
“While the details of the alleged incident are not yet confirmed by OpenAI, there is a strong possibility that the incident actually took place and is not the only one,” he said.
“The hackers mostly focus their efforts on the theft of valuable intellectual property, including technological research and know-how, large language models (LLMs), sources of training data, as well as commercial information such as AI vendors’ clients and novel use of AI across different industries.
“More sophisticated cyber-threat actors may also implant stealthy backdoors to continually control breached AI companies, and to be able to suddenly disrupt or even shut down their operations, similar to the large-scale hacking campaigns targeting critical national infrastructure (CNI) in Western countries recently.
“All corporate users of GenAI vendors should be particularly careful and prudent when they share, or give access to, their proprietary data for LLM training or fine-tuning, as their data – spanning from attorney-client privileged information and trade secrets of the leading industrial or pharmaceutical companies to classified military information – is also in the crosshairs of AI-hungry cybercriminals that are poised to intensify their attacks.”