April 14, 2024

While AI can drive productivity in the workplace, it can also pose a threat to companies fending off cyber attackers.

Like business teams, cyber attackers are using large language models (LLMs) to become more productive, Vasu Jakkal, corporate vice president of security, compliance, identity, and management at Microsoft, told Quartz. That “productivity” includes doing reconnaissance on people and companies to find vulnerabilities, as well as learning how to code.

“It fundamentally boils down to finding information and directly launching these attacks to strengthen their own positions of influence and get economic advantage,” Jakkal said of nation-state and financial-crime actors targeting companies in cyberattacks. She added that cyber attackers can also use LLMs to improve password-cracking efforts, make deepfakes, and spread misinformation.

In February, Microsoft and OpenAI said they had found and shut down OpenAI accounts belonging to “five state-affiliated malicious actors” using AI tools, including ChatGPT, to carry out cyberattacks. The accounts were associated with China-affiliated Charcoal Typhoon (CHROMIUM) and Salmon Typhoon (SODIUM), Iran-affiliated Crimson Sandstorm (CURIUM), North Korea-affiliated Emerald Sleet (THALLIUM), and Russia-affiliated Forest Blizzard (STRONTIUM), according to OpenAI and Microsoft Threat Intelligence.

As Jakkal said, the threat actors used OpenAI’s services to look up open-source information, translate text, find coding errors, and perform other coding tasks, the companies found. According to a Microsoft report, Emerald Sleet used LLMs to research think tanks and experts on North Korea, and to create content for spear-phishing campaigns. Meanwhile, Microsoft found that Charcoal Typhoon had used LLMs to research specific technologies, platforms, and vulnerabilities as part of its information-gathering. Jakkal said Microsoft’s research with OpenAI didn’t find any novel techniques.

In addition to AI-related cyber attack risks, Jakkal said the “battleground of security” lies in identity. According to Jakkal, 4,000 password spray attacks happen per second as cyber attackers attempt to break into organizations and use identity-related information to access still more information. Companies also face cyber attack risks around data security, because information can leak if it doesn’t carry the right sensitivity labeling, or due to insider risk, where someone inside the organization unintentionally or intentionally leaks information.

“What makes us human is our vulnerability, our curiosity,” Jakkal said. “Everything that makes us human makes us vulnerable to these phishing attacks, so we continue to see those attacks, which are pretty prominent.”

