10 Global AI Threat Campaigns Revealed


In a threat report released on June 5, OpenAI warned that malicious actors are increasingly using its AI tools to support scams, cyber intrusions, and global influence campaigns.

OpenAI detailed 10 recent campaigns that used ChatGPT to craft malware, fake job credentials, automate propaganda, and carry out other threats. The findings underscore AI’s growing role in modern cyber operations and the urgent need for collective safeguards against its abuse.


AI abuse tactics uncovered in six countries

OpenAI said it disrupted coordinated activity originating from six countries: China, Russia, North Korea, Iran, Cambodia, and the Philippines. Most of the operations were newly identified, and in at least 10 cases AI models were used to scale fraud, manipulate public opinion, and assist cyberespionage.

These attacks included generating fake resumes for job fraud, writing malicious code with ChatGPT’s help, deploying politically charged bot networks on TikTok, and promoting phony “task-based” offers.

While most campaigns saw limited engagement, their speed and sophistication reveal escalating AI risks for identity verification systems, endpoint security, and disinformation defenses.

SEE: How to Keep AI Trustworthy (TechRepublic Premium)

Disrupted operations with connections to Russia, North Korea, and China

In its report, titled “Disrupting malicious uses of AI: June 2025,” OpenAI detailed three prominent examples. The report emphasized that OpenAI’s detection systems flagged unusual behavior in all three campaigns, leading to account terminations and intelligence sharing with partner platforms.

In a campaign labeled “ScopeCreep,” a Russian-speaking threat actor used ChatGPT to write and refine a Windows-based malware program, even using the tool to troubleshoot a Telegram alert function.

Another operation, likely connected to North Korean actors, involved using generative AI to mass-produce resumes for remote tech roles. The end goal was to gain control over corporate devices issued during onboarding.

SEE: North Korea’s Laptop Farm Scam: ‘Something We’d Never Seen Before’

The third campaign, dubbed “Operation Sneer Review,” involved a Chinese-linked network that flooded TikTok and X with pro-Chinese propaganda, using fake digital personas posing as users of various nationalities.

Implications for security teams and AI governance

OpenAI’s report concluded that, while generative AI hasn’t created new categories of threats, it has lowered the technical barrier for bad actors and increased the efficiency of coordinated attacks. Each disruption illustrates how quickly malicious AI use is evolving and highlights the need for proactive detection efforts and shared countermeasures.

“We believe that sharing and transparency foster greater awareness and preparedness among all stakeholders, leading to stronger collective defense against ever-evolving adversaries,” OpenAI stated in its report.

In short, security teams must stay alert to how adversaries are adopting large language models in their operations and engage with the real-time intelligence shared by companies such as OpenAI, Google, Meta, and Anthropic.

“By continuing to innovate, investigate, collaborate, and share, we make it harder for malicious actors to remain undetected across the digital ecosystem and improve the experience for everyone else,” the report concluded.

