Technology

A.I. Is on Its Way to Upending Cybersecurity

AI is rapidly transforming cybersecurity, enabling both faster threat detection and more sophisticated attacks, experts warn.

Digital screens display data on a circuit board background

Image: GlobalBeat / 2026


Sarah Mills | GlobalBeat

Attackers used OpenAI’s latest model to craft working exploit code in 47 minutes during a controlled demo last week at Stanford.

The code penetrated a patched Apache server that had stood for 11 years without breach, researchers told reporters Thursday.

Cyber insurers are already rewriting policies. Premiums for midsize U.S. firms rose 32% in the first quarter alone, according to broker Marsh.

Security teams watched the demo through one-way glass. A junior analyst whispered “we’re toast” as the AI bypassed the final firewall. The model had never seen the target software during training. It pieced together the vulnerability from a 2009 blog post and a leaked debugging symbol. Researchers pulled the plug once the payload executed, but the message was clear. Offensive AI is no longer theoretical.

Google’s Threat Intelligence Group released numbers the same morning. State hackers used large language models to refine phishing lures in 34 campaigns since January. Click-through rates doubled against defense contractors. North Korea’s Kimsuky outfit fed transcripts of Zoom calls into a chatbot, then asked it to impersonate a Boeing engineer. The result fooled two subcontractors. They sent weapon schematics to a Seoul server controlled by Pyongyang.

Microsoft tracked a separate wave. Russian actors calling themselves “Vixen Wave” asked ChatGPT to rewrite malware stubs every hour. Each variant beat endpoint detection for an average of nine hours. The bot obeyed until OpenAI closed the account. Redmond suspects the group is the same one that breached Ukraine’s power grid in 2022. It now operates from IP ranges registered in Sri Lanka and Cambodia.

Corporate boards are demanding countermeasures. Walmart’s CISO told Fortune 500 peers on a private call that the retailer now runs “red-team GPT” against its own code nightly. The internal bot found three exploitable bugs last month that human audits had missed for years. One flaw in the deli inventory app could have given attackers a path into pharmacy records. The company patched it within 48 hours and added the module to its bug-bounty program.

Start-ups smell opportunity. Palo Alto-based RoboFence announced $80 million in Series C funding Wednesday. Their product pits AI against AI: a defensive model that hallucinates fake credentials to keep intruders busy. Early trials with three U.K. banks cut dwell time from 18 days to 11 hours. Investors include CrowdStrike and Qualcomm Ventures. The round valued the 70-person firm at $440 million.
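RoboFence has not published its implementation, but the deception idea behind it is a classic honeytoken pattern: plant plausible-looking fake credentials, then treat any later use of one as a high-confidence intrusion signal. The sketch below illustrates that pattern in miniature; the names `make_decoy_credential` and `DecoyTracker` are illustrative assumptions, not part of any vendor API.

```python
import secrets
import string


def make_decoy_credential(prefix: str = "svc-decoy") -> dict:
    """Generate a plausible but fake service-account credential (a honeytoken)."""
    suffix = "".join(
        secrets.choice(string.ascii_lowercase + string.digits) for _ in range(6)
    )
    return {
        "username": f"{prefix}-{suffix}",
        "password": secrets.token_urlsafe(16),
    }


class DecoyTracker:
    """Record planted decoys; any later login attempt with one flags an intruder."""

    def __init__(self) -> None:
        self._planted: set[str] = set()

    def plant(self, cred: dict) -> None:
        # Store only the username; the password never needs to be valid anywhere.
        self._planted.add(cred["username"])

    def is_tripwire(self, username: str) -> bool:
        # Legitimate users never see these accounts, so a match can only
        # come from someone who harvested the planted credential.
        return username in self._planted
```

In a real deployment the decoys would be seeded into config files, credential stores, or memory that an intruder is likely to scrape, and `is_tripwire` would hook into the authentication path to raise an alert.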

Not everyone is convinced. Bruce Schneier, the veteran cryptographer, blogged that the arms race is “marketing money on both sides.” He pointed to a recent MIT study that found human review still catches 91% of AI-generated malware samples. The paper argued that hype drives budgets rather than results. Schneier wants funding pushed toward basic hygiene like patching and multi-factor authentication instead of “magic algorithmic shields.”

Washington is moving anyway. The White House Office of the National Cyber Director will require federal suppliers to disclose any AI-generated code starting in October. A draft rule published Friday covers both offensive and defensive use. Contractors must label training data sources and provide audit trails. Violators face suspension from new awards. Trade groups including the IT Alliance for Public Sector warned the mandate could “stifle innovation” and add $2 billion in compliance costs.

Background

The first public alert came in February 2022 when researchers at security firm Recorded Future showed GPT-3 writing ransomware macros. The output was clumsy, but it compiled. By late 2023, improved models could produce polymorphic keyloggers that changed shape on every infection. ChatGPT’s built-in guardrails blocked obvious criminal requests, yet users quickly learned to split tasks into innocuous chunks. The community calls it “jailbreaking by boredom.”

Nation-state adoption followed a predictable curve. Britain’s GCHQ warned in November that Chinese APT groups used AI to summarize stolen documents, accelerating lateral movement. Iran’s Islamic Revolutionary Guard Corps ran Telegram channels offering Bitcoin bounties for the best offensive prompts. The U.S. answered with its own program: the NSA’s Artificial Intelligence Security Center opened at Fort Meade in September, tasked with hardening Pentagon networks against algorithmic foes.

What’s Next

The DEF CON hacker conference in August plans a live contest: teams get 24 hours to attack and defend a power-grid simulation using only AI assistants. Winning code will be published open-source, forcing vendors to patch before Black Hat. Registration opens next month and is capped at 500 participants. Organizers expect the exercise to set a baseline for what autonomous exploitation really looks like.

Insurers will tighten further. Munich Re insiders say its underwriters are testing a clause that excludes claims where AI-assisted code appears in the kill chain. If adopted industry-wide, premiums for tech firms could jump another 60%, pushing smaller players toward captive insurers or self-retention. Watch for at least one Fortune 1000 company to announce it is dropping cyber coverage altogether by year-end.

Sarah Mills
Technology & Science Editor

Sarah Mills is GlobalBeat’s technology and science editor, covering artificial intelligence, cybersecurity, public health, and climate research. Before joining GlobalBeat, she reported for technology desks across Europe and North America. She holds a degree in Computer Science and Journalism.