Microsoft Reveals Hackers from China, Russia, and Iran Are Using AI Tools
International News Agencies Report Microsoft's Findings
Microsoft's report reveals that hackers from Russia, China, and Iran have been using OpenAI's tools to pursue their objectives more effectively.
According to a report by international news agency Reuters, Microsoft said it is closely tracking hacking groups affiliated with Russian military intelligence, Iran's Revolutionary Guards (Pasdaran-e Enghelab), and the governments of China and North Korea, all of which have been attempting to refine their hacking campaigns by using large language models.
Tom Burt, Microsoft's Vice President for Customer Security, told reporters ahead of the report's release that, regardless of whether any law or terms of service had been violated, Microsoft simply does not want actors it has identified as potential threats to have access to this technology.
Diplomatic officials from Russia, North Korea, and Iran promptly denied Microsoft's allegations.
Liu Pengyu, spokesperson for the Chinese Embassy in the United States, dismissed the allegations as groundless accusations against China and stressed the need for the safe, reliable, and controllable use of AI technology.
Senior cybersecurity officials have warned since last year that some groups are misusing AI tools, but detailed information on the matter remains limited.
The report outlined the various ways in which hacking groups have employed large language models.
Microsoft said that hackers working for a Russian military intelligence agency used the models to research satellite and radar technologies that could be relevant to conventional military operations in Ukraine.
Microsoft stated that North Korean hackers used the models to generate content for spear-phishing campaigns targeting regional experts.
Microsoft also noted that Iranian hackers were using the models to draft emails.
Dave DeWalt, CEO of software security firm Dewey, said that Chinese hackers were also experimenting with large language models, for example using them to ask questions about rival intelligence agencies, cybersecurity issues, and 'significant individuals'.