The Dark Side of AI: OpenAI's Groundbreaking Report Exposes Nation-State Cyber Threats

How State Actors Are Weaponizing ChatGPT for Espionage, Fraud, and Influence Operations

In a watershed moment for AI security, OpenAI has released its June 2025 quarterly threat intelligence report, one of the most detailed public disclosures yet by a major tech company of how nation-state actors are weaponizing artificial intelligence tools. The report reveals a disturbing pattern: in the three months since its previous report, OpenAI detected, disrupted, and exposed abusive activity spanning social engineering, cyber espionage, deceptive employment schemes, covert influence operations, and scams. This transparency sheds light on the dark underbelly of AI democratization and its exploitation by malicious actors worldwide.

Full report (PDF): https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf

Key Topics Covered:

  • North Korean IT Worker Schemes: How operatives use AI to create fake profiles, infiltrate Fortune 100 companies, and funnel earnings to weapons programs
  • ScopeCreep Malware: Russian hackers using ChatGPT to develop and refine Windows malware
  • Operation Uncle Spam: China's campaign to amplify divisive content on both sides of US political debates
  • Four Chinese Operations: Including "Sneer Review" and intelligence collection operations targeting US senators
  • AI-Powered Social Engineering: The evolution of deepfake attacks targeting government officials

The Threat Landscape: A Global Overview

OpenAI's investigation uncovered at least 10 malicious AI campaigns already this year, with state-sponsored actors from China, Russia, Iran, and North Korea leading the charge. A significant share appeared to originate in China: four of the ten cases in the report, spanning social engineering, covert influence operations, and cyber threats, likely had a Chinese origin.

Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, emphasized the scope of the threat: "What we're seeing from China is a growing range of covert operations using a growing range of tactics". These operations are not isolated incidents but part of a coordinated effort by nation-states to exploit AI capabilities for strategic advantage.

North Korean IT Workers: The Million-Dollar Deception

Perhaps the most audacious scheme involves North Korean operatives infiltrating Western tech companies through an elaborate fake employment operation. The Justice Department said North Korea has potentially made hundreds of millions of dollars through the scheme, where workers living in Southeast Asia or China obtain remote IT jobs at U.S. or European companies.

The Sophisticated Scam

The operation is remarkably sophisticated. North Korean operatives use generative AI to mass-produce LinkedIn profiles and applications for remote jobs that appeal to Western companies. The scheme involves:

  • AI-Enhanced Personas: Research by cybersecurity firm Okta into online services used by individuals identified by U.S. authorities and third parties as agents of the Democratic People's Republic of Korea (DPRK) revealed multiple AI-enhanced services used to manage the email and phone communications of numerous personas, translate and transcribe communications, generate resumes and cover letters, and conduct mock job interviews
  • Laptop Farms: To get around the IP-address problem, laptop farms are springing up all over America. When an applicant lands a job, the firm usually ships a laptop, at which point the new hire claims to have moved or to have a family emergency and asks for it to be sent to a different address
  • Team Operations: During an interview, multiple team members work the technical challenges behind the scenes while a "front man" handles the on-camera portion

The Scale of Infiltration

The scope is staggering. Charles Carmakal, CTO of Mandiant, said in a statement that he has spoken to "dozens of Fortune 100 organizations that have accidentally hired North Korean IT workers." The Justice Department has in recent months arrested and charged several U.S. citizens for running these laptop farms; in one documented instance, an American used 60 stolen identities to facilitate North Korean employment at more than 300 U.S. companies.

The KnowBe4 Incident

One of the most publicized cases involved cybersecurity firm KnowBe4, which hired a Principal Software Engineer who turned out to be a North Korean operative. On July 15, 2024, a series of suspicious activities was detected on that user's account: the moment the new hire received their Mac workstation, it immediately started loading malware. The company discovered the operative had used a valid but stolen US-based identity, with a profile picture that was AI "enhanced".

ScopeCreep: Russian Hackers' AI-Powered Malware Campaign

In a separate but equally concerning development, OpenAI identified a Russian-speaking actor who used its models to assist with developing and refining Windows malware, debugging code across multiple languages, and setting up command-and-control infrastructure. This campaign, codenamed ScopeCreep, represents a new evolution in AI-assisted malware development.

Technical Sophistication

The ScopeCreep malware demonstrates advanced capabilities:

  • "The malware is designed to escalate privileges by relaunching with ShellExecuteW and attempts to evade detection by using PowerShell to programmatically exclude itself from Windows Defender, suppressing console windows, and inserting timing delays"
  • The threat actor used temporary email accounts to sign up for ChatGPT, using each of the created accounts to have one conversation to make a single incremental improvement to their malicious software

Operational Security Focus

What makes ScopeCreep particularly notable is the threat actor's emphasis on operational security. After each incremental improvement, they abandoned the account and moved on to the next; this practice of cycling through a network of throwaway accounts to fine-tune code highlights the adversary's focus on operational security (OPSEC). The malware's primary objectives include:

  • Harvesting credentials, tokens, and cookies from web browsers
  • Exfiltrating data to attacker-controlled servers
  • Sending alerts to Telegram channels when new victims are compromised
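
Because the report describes Defender exclusions as a core evasion step, a useful defensive counter is to audit those exclusions for entries nobody authorized. Below is a minimal sketch, assuming a Windows host, Python's standard winreg module, and administrative rights; the registry paths shown are where Defender records exclusions, though in practice teams would more likely rely on the PowerShell Get-MpPreference cmdlet or EDR telemetry.

```python
# Minimal defensive sketch: enumerate Windows Defender exclusions,
# the setting ScopeCreep reportedly abused via PowerShell to evade detection.
# Assumes Windows, Python 3, and sufficient rights to read the Defender hive.
import winreg

# Registry keys where Defender stores exclusions (value names are the entries).
EXCLUSION_KEYS = [
    r"SOFTWARE\Microsoft\Windows Defender\Exclusions\Paths",
    r"SOFTWARE\Microsoft\Windows Defender\Exclusions\Processes",
    r"SOFTWARE\Microsoft\Windows Defender\Exclusions\Extensions",
]

def list_defender_exclusions():
    """Yield (category, excluded_item) pairs from the Defender registry hive."""
    for key_path in EXCLUSION_KEYS:
        try:
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path)
        except FileNotFoundError:
            continue  # No exclusions of this category exist.
        category = key_path.rsplit("\\", 1)[-1]
        with key:
            num_values = winreg.QueryInfoKey(key)[1]  # count of values under the key
            for i in range(num_values):
                name, _data, _type = winreg.EnumValue(key, i)
                yield category, name

if __name__ == "__main__":
    for category, item in list_defender_exclusions():
        # Any entry not set by policy deserves investigation.
        print(f"{category}: {item}")
```

Comparing the output against a known-good baseline turns the malware's own evasion step into a detection signal.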

Operation Uncle Spam: China's Divisive AI Campaign

One of the most insidious operations uncovered is "Uncle Spam," in which Chinese actors generated highly divisive content aimed at widening the political divide in the US, including social media accounts that posted arguments both for and against tariffs, and accounts that mimicked US veteran support pages.

Multi-Platform Strategy

The Uncle Spam operation demonstrates China's sophisticated approach to influence operations:

  • Creating content on both sides of divisive issues
  • Mimicking legitimate American interest groups
  • Exploiting political polarization to sow discord

China's Four Major AI-Powered Operations

OpenAI's report detailed four distinct Chinese operations, each with unique objectives and tactics:

1. Sneer Review

One Chinese operation, which OpenAI dubbed "Sneer Review," used ChatGPT to generate short comments that were posted across TikTok, X, Reddit, Facebook and other websites, in English, Chinese and Urdu. Most notably, this operation targeted a Taiwanese game in which players work to defeat the Chinese Communist Party.

What's particularly revealing is that the actors behind Sneer Review also used OpenAI's tools for internal work, including creating "a performance review describing, in detail, the steps taken to establish and run the operation".

2. Intelligence Collection Operations

Another operation that OpenAI tied to China focused on collecting intelligence by posing as journalists and geopolitical analysts. It used ChatGPT to write posts and biographies for accounts on X, to translate emails and messages from Chinese to English, and to analyze data. Alarmingly, this included "correspondence addressed to a US Senator regarding the nomination of an Administration official".

3. Social Engineering Campaigns

The report reveals sophisticated social engineering targeting government officials and defense contractors, using AI to craft persuasive phishing messages and impersonate trusted contacts.

4. Multi-Domain Influence Operations

The China-linked operations "targeted many different countries and topics, even including a strategy game. Some of them combined elements of influence operations, social engineering, surveillance. And they did work across multiple different platforms and websites".

AI-Powered Social Engineering: A New Threat Vector

The rise of AI-powered social engineering represents a paradigm shift in cyber threats. "In 2025, social engineering will cement itself as the top security threat – supercharged by generative AI. Criminals won't just rely on phishing emails anymore", warns Kevin Tian, CEO of Doppel.

The Evolution of Deepfakes

"This technology will impersonate critical individuals such as CEOs, government officials, or even loved ones, making it nearly impossible to distinguish between genuine and fabricated communications", according to Irfan Shakeel, VP of training and certification services at OPSWAT.

Targeting High-Value Individuals

The report indicates that these AI-enhanced attacks are specifically targeting:

  • US senators and government officials
  • Defense contractors
  • Technology executives
  • Financial institutions

The Limited Impact Paradox

Despite the sophistication of these operations, OpenAI's findings include a surprising revelation: "We didn't generally see these operations getting more engagement because of their use of AI". Ben Nimmo noted that "For these operations, better tools don't necessarily mean better outcomes".

This suggests that while AI can amplify the scale and speed of malicious operations, it doesn't automatically guarantee their success. Human vigilance and robust security measures remain effective defenses.

Implications for National Security

The Talent War

Thousands of skilled North Korean IT workers use stolen identities to hold high-paying remote jobs at Western companies, illegally making money for Kim Jong Un's regime. Experts estimate the DPRK receives hundreds of millions of dollars each year from the fake IT worker scheme, directly funding the nation's illegal weapons program.

Critical Infrastructure Risk

"North Korean IT workers often have multiple jobs with different organizations concurrently, and they often have elevated access to production systems, or the ability to make changes to application source code". Carmakal warned: "There is a concern that they may use this access to insert backdoors in systems or software in the future".

The Attribution Challenge

The use of AI tools makes attribution increasingly difficult. Threat actors can:

  • Generate content in multiple languages fluently
  • Create convincing personas at scale
  • Obfuscate their origins through AI-generated content

Defense Strategies and Recommendations

For Organizations

  1. Enhanced Vetting: "My favorite question is something to the effect of, 'How fat is Kim Jong Un?'" said Adam Meyers of CrowdStrike, noting that North Korean operatives will "terminate the call instantly, because it's not worth it to say something negative" about their leader.
  2. Identity Verification: Companies are increasingly turning to specialized identity verification services to combat fake workers.
  3. Behavioral Analysis: Look for red flags such as the following (an illustrative triage sketch appears after this list):
    • Reluctance to engage in video calls
    • Requests to send equipment to different addresses
    • Below-average work quality despite impressive credentials
    • Unusual working hours or communication patterns
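
To make these signals actionable, a hiring-security team might fold them into a simple triage score. The sketch below is purely illustrative: the field names, weights, and threshold are assumptions rather than validated detection logic, and any hit should trigger manual identity verification, not an automated decision.

```python
# Illustrative sketch: aggregate the interview red flags listed above
# into a triage score. Weights and threshold are assumed values for
# demonstration, not validated detection logic.
from dataclasses import dataclass

@dataclass
class CandidateSignals:
    declines_video_calls: bool = False          # reluctance to appear on camera
    equipment_reship_request: bool = False      # laptop sent to a different address
    quality_below_credentials: bool = False     # work doesn't match the resume
    unusual_hours_or_comms: bool = False        # odd time zones or patterns

# Assumed weights: reshipping equipment is treated as the strongest signal here.
WEIGHTS = {
    "declines_video_calls": 2,
    "equipment_reship_request": 3,
    "quality_below_credentials": 1,
    "unusual_hours_or_comms": 1,
}

ESCALATE_AT = 3  # assumed threshold for routing to manual identity verification

def triage(signals: CandidateSignals) -> bool:
    """Return True if the combined red-flag score warrants human review."""
    score = sum(w for name, w in WEIGHTS.items() if getattr(signals, name))
    return score >= ESCALATE_AT

if __name__ == "__main__":
    suspect = CandidateSignals(declines_video_calls=True,
                               equipment_reship_request=True)
    print(triage(suspect))  # True: score 5 exceeds the escalation threshold
```

The point is not the arithmetic but the workflow: codifying the red flags forces consistent screening across hiring managers instead of ad hoc gut calls.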

For Policymakers

The report underscores the urgent need for:

  • International cooperation on AI security standards
  • Enhanced information sharing between AI companies and government agencies
  • Updated legal frameworks to address AI-enabled threats
  • Investment in AI security research and defense capabilities

The Future of AI Security

As Bruce Schneier noted in his analysis: "last year the models weren't good enough for these sorts of things, and next year the threat actors will run their AI models locally—and we won't have this kind of visibility". This narrow window of visibility makes OpenAI's report particularly valuable.

The Arms Race Accelerates

The report signals the beginning of an AI arms race between defenders and attackers. As AI capabilities advance, we can expect:

  • More sophisticated deepfake attacks
  • Automated vulnerability discovery and exploitation
  • AI-powered disinformation campaigns at unprecedented scale
  • Increasingly difficult attribution challenges

The Need for Collective Defense

More robust information sharing between AI companies and the U.S. government can help disrupt adversarial influence and intelligence operations. Max Lesser of the Foundation for Defense of Democracies (FDD) emphasizes: "Companies can supplement government actions to indict malicious actors or seize malicious domains with their own enforcement actions — such as suspending accounts engaged in malign behavior."

Conclusion: A Call to Action

OpenAI's groundbreaking report represents a critical moment in the evolution of AI security. By exposing how nation-state actors are weaponizing AI tools, it provides invaluable intelligence for defenders while highlighting the urgent need for comprehensive security measures.

The report reveals that we are at an inflection point. The same AI technologies that promise to revolutionize productivity and creativity are being exploited by adversaries for espionage, fraud, and influence operations. The international community must act swiftly to establish norms, enhance defenses, and ensure that AI's benefits aren't overshadowed by its misuse.

As we move forward, the lessons from this report are clear: transparency is essential, vigilance is paramount, and collective action is necessary. The dark side of AI is real, but with proper awareness and preparation, we can work to ensure that AI remains a force for good rather than a tool for harm.

The battle for AI security has only just begun, and OpenAI's report serves as both a warning and a roadmap for the challenges ahead. The question now is whether governments, organizations, and individuals will heed this warning and take the necessary steps to protect themselves in an AI-powered world.
