OpenAI Systems Exploited by Hackers from China, Russia, and Other Nations, Report Reveals

Nations Strategically Using A.I. Access To Plan Cyberattacks

Research from OpenAI and Microsoft suggests that hackers working for foreign governments have used OpenAI’s systems to help plan and execute cyberattacks. The companies’ partnership allowed them to identify hacking groups with ties to China, Russia, North Korea and Iran that had used OpenAI’s technology for malicious purposes.

Rather than using the A.I. to create novel, sophisticated attacks, the hackers employed it for routine tasks such as drafting emails, translating documents and debugging computer code. Tom Burt, who leads Microsoft’s cybersecurity efforts, said the hackers were using the technology simply to work more efficiently.

Microsoft has invested $13 billion in OpenAI, and the two companies have a close partnership. They have joined forces to document how various hacking groups have leveraged OpenAI’s technology for their attacks. OpenAI has since terminated the access of these groups after learning about their illicit use of the technology.

When OpenAI released ChatGPT in November 2022, experts, the media and government officials raised concerns that adversaries could exploit such advanced A.I. tools to create new and harmful cyberattacks. The reality, however, may not be as severe as feared. Bob Rotsted, who leads cybersecurity threat intelligence at OpenAI, said there is no evidence that these tools give adversaries a significant advantage beyond what a better search engine could provide.

Microsoft has identified specific instances of hacking groups utilizing OpenAI’s systems for malicious purposes. For example, a group connected to the Islamic Revolutionary Guards Corps in Iran used the technology to generate phishing emails and avoid antivirus scanners. Additionally, a Russian-affiliated group used OpenAI’s systems to research satellite communication protocols and radar imaging technology in the context of the conflict in Ukraine.

Microsoft and OpenAI track more than 300 hacking groups, and they have found that open-source A.I. technology is especially challenging to police: it is difficult to determine who is deploying such tools and whether those users follow policies for responsible and safe use.

Separately, Mr. Burt said that the recent Russian hack of top Microsoft executives did not involve the use of generative A.I.

Cade Metz reported from San Francisco.