
Nation-State Hackers Are Using A.I. Access to Plan Cyberattacks
Research from OpenAI and Microsoft suggests that hackers working for foreign governments have used OpenAI’s systems to help plan and execute cyberattacks. Working together, the two companies traced the activity to hacking groups with ties to China, Russia, North Korea and Iran that had used OpenAI’s technology for malicious purposes.
Rather than using the A.I. to create advanced and novel attacks, the hackers employed it for simple tasks such as drafting emails, translating documents and debugging computer code. Tom Burt, who leads Microsoft’s cybersecurity efforts, said the hackers were using the technology to be more efficient in what they already do.
Microsoft, which has invested $13 billion in OpenAI, is a close partner of the company. The two joined forces to document how various hacking groups had used OpenAI’s technology in their attacks, and OpenAI cut off the groups’ access after learning of the illicit use.
When OpenAI released ChatGPT in November 2022, experts, the media and government officials worried that adversaries could exploit such advanced A.I. tools to create new and harmful cyberattacks. So far, the reality appears less severe than feared. Bob Rotsted, who leads cybersecurity threat intelligence at OpenAI, said there was no evidence that these tools significantly accelerate attackers beyond what a better search engine could do.
Microsoft identified specific instances of hacking groups using OpenAI’s systems for malicious purposes. A group connected to the Islamic Revolutionary Guards Corps in Iran, for example, used the technology to generate phishing emails and to evade antivirus scanners. A Russian-affiliated group used OpenAI’s systems to research satellite communication protocols and radar imaging technology, apparently in connection with the war in Ukraine.
Microsoft and OpenAI track more than 300 hacking groups. Both companies said open-source A.I. technology is harder to police: it is difficult to determine who is deploying it and whether those users follow any policies for responsible and safe use.
In a separate disclosure, Mr. Burt said the Russian hack of top Microsoft executives did not involve generative A.I.
Cade Metz reported from San Francisco.