
Dirty Frag Linux Zero-Day Exploited in the Wild, Claude AI Flaws

Security researchers disclosed multiple critical vulnerabilities this week, including a Linux privilege escalation flaw already exploited in attacks and AI-related security issues affecting Google’s Gemini CLI and Anthropic’s Claude browser extension. The discoveries highlight growing attack surfaces as AI tools integrate deeper into enterprise workflows.

Dirty Frag Linux Vulnerability Chains Two CVEs for Root Access

A newly disclosed Linux vulnerability dubbed “Dirty Frag” chains two flaws (CVE-2026-43284 and CVE-2026-43500) to achieve local privilege escalation from an unprivileged user to root. According to SecurityWeek, researcher Hyunwoo Kim disclosed the vulnerability responsibly, but details were made public before patches could be released.

The exploit targets the xfrm-ESP (IPsec) and RxRPC components of the Linux kernel and has a particularly high success rate. “Because it is a deterministic logic bug that does not depend on a timing window, no race condition is required, the kernel does not panic when the exploit fails, and the success rate is very high,” Kim explained in the disclosure.

Microsoft reported that its Defender product detected limited in-the-wild activity potentially indicating exploitation of either Dirty Frag or the related Copy Fail vulnerability. The attack typically begins after threat actors gain initial system access through compromised SSH accounts, web shells, service account abuse, or container escapes.

Container Environment Impact

While Dirty Frag affects all major Linux distributions, its impact varies by deployment type. Ubuntu developers noted the greatest risk exists for hosts not running container workloads. In containerized environments, attackers could potentially escape containers, though this scenario has not yet been demonstrated in practice.

Claude Mythos AI Model Shows Mixed Vulnerability Detection Results

Anthropic’s restricted Claude Mythos model found only one legitimate low-severity vulnerability in curl’s 178,000 lines of code, raising questions about the AI company’s claims of discovering thousands of zero-days. Daniel Stenberg, curl’s lead developer, revealed in a blog post that a third party tested curl using Mythos and provided him with the findings.

Mythos initially flagged five “confirmed security vulnerabilities,” but manual review revealed three were already-documented known issues and one was a bug rather than a security flaw. Only one issue qualified as an actual vulnerability; it received a low severity rating, with patches scheduled for late June.

The results contrast with previous AI-powered analyses of curl using tools like Zeropath, AISLE, and OpenAI’s Codex, which identified 200-300 issues including “a dozen or more” confirmed vulnerabilities. Stenberg acknowledged that AI-powered code analysis tools are “significantly better” at finding security holes compared to traditional tools, but questioned whether Mythos lives up to Anthropic’s “dangerous” characterization.

Gemini CLI Flaw Enabled Supply Chain Attacks via GitHub Issues

A critical vulnerability in Google’s Gemini CLI tool could have allowed attackers to mount supply chain attacks through indirect prompt injection in GitHub issues. Pillar Security assigned the flaw the maximum CVSS score of 10.0, though no CVE identifier was issued.

The vulnerability existed because Gemini CLI in --yolo mode ignored tool allowlists, automatically executing any command without verification. Attackers could exploit this by creating public issues on Google’s GitHub repositories containing hidden malicious prompts.

Attack Chain and Impact

In --yolo mode, all tool calls receive automatic approval, allowing attackers to take over AI agents designed for automatic GitHub issue triage. The compromised agent could extract internal secrets from build environments and transmit them to attacker-controlled servers.

“From those credentials, the attacker pivots to a token with full write access on the repository. Full supply-chain compromise. The attacker can push arbitrary code to the main branch of gemini-cli’s repository, which then ships to every downstream user,” Pillar Security noted.

Google addressed the vulnerability on April 24 in Gemini CLI version 0.39.1, implementing proper tool allowlist evaluation under --yolo mode. At least eight other Google repositories had deployed the same vulnerable workflow template.
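The fix can be pictured as an allowlist check that runs regardless of approval mode. The sketch below is illustrative only, using hypothetical tool names and a simplified approval flow rather than Gemini CLI’s actual internals:

```python
# Illustrative sketch of tool-allowlist enforcement; tool names and the
# approval flow are hypothetical, not Gemini CLI's actual internals.
ALLOWED_TOOLS = {"read_file", "list_issues"}  # example allowlist


def approve_tool_call(tool_name: str, yolo_mode: bool) -> bool:
    """Decide whether a requested tool call may run.

    Pre-fix behavior: --yolo skipped the allowlist entirely, so any tool ran.
    Post-fix behavior (modeled here): the allowlist is evaluated in every
    mode; --yolo only removes the interactive confirmation step.
    """
    if tool_name not in ALLOWED_TOOLS:
        return False  # denied even under --yolo
    if yolo_mode:
        return True  # allowlisted tools auto-approve in --yolo mode
    return ask_user(tool_name)


def ask_user(tool_name: str) -> bool:
    # Placeholder for an interactive confirmation prompt.
    return input(f"Allow '{tool_name}'? [y/N] ").strip().lower() == "y"
```

With this shape, an injected prompt asking the agent to run an arbitrary shell command fails the allowlist check before auto-approval is even considered.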

Claude Chrome Extension Vulnerable to Agent Takeover

LayerX discovered a vulnerability dubbed “ClaudeBleed” in Anthropic’s Claude extension for Chrome that could allow attackers to take over the AI agent for information theft. The flaw combines lax permissions with poorly implemented origin trust, allowing any Chrome extension to run commands in Claude.

The main issue stems from Claude’s extension allowing interaction with any script running in the claude.ai origin without verifying the script’s owner. “As a result, any extension can invoke a content script (which does not require any special permissions) and issue commands to the Claude extension,” LayerX explained.

Attackers could create extensions with content scripts configured to run in the MAIN world, ensuring execution as part of the page. The Claude extension trusts the sender because it runs in claude.ai, enabling remote prompt injection and AI agent control.
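The trust gap can be modeled as an origin-only check versus one that also verifies the sender’s identity. The sketch below is a simplified model with hypothetical field names, not the extension’s actual message-handling code:

```python
# Simplified model of the origin-trust flaw; field names are hypothetical.
TRUSTED_ORIGIN = "https://claude.ai"
TRUSTED_SENDER = "claude-extension"  # stand-in for a verified sender identity


def accept_vulnerable(msg: dict) -> bool:
    # Origin-only check: any extension's MAIN-world content script injected
    # into claude.ai pages passes, because it runs as part of the page.
    return msg.get("origin") == TRUSTED_ORIGIN


def accept_hardened(msg: dict) -> bool:
    # Also bind the message to a verified sender identity, so merely
    # running in the claude.ai origin is no longer sufficient.
    return (msg.get("origin") == TRUSTED_ORIGIN
            and msg.get("sender") == TRUSTED_SENDER)
```

A message forged by a rogue extension’s content script carries the trusted origin but not the trusted sender identity, so it passes the first check and fails the second.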

Bypass Mechanisms

While Claude enforces user confirmation for sensitive actions and implements policies preventing certain operations, LayerX demonstrated multiple bypass techniques. Researchers forged user approval by repeatedly sending confirmation messages and used DOM manipulation to dynamically modify UI elements, altering Claude’s perception of requested actions.
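Why forged confirmations work can be seen with a toy model: if approval is just a message with no unforgeable binding to the original request, any script able to post messages can approve its own actions. The sketch below is illustrative only, contrasting a naive gate with a hypothetical nonce-based fix; none of these names reflect the extension’s real code:

```python
import secrets


class NaiveGate:
    """Toy model: treats any 'approved' message as user consent."""

    def handle(self, msg: dict) -> bool:
        # Attacker scripts can send this message themselves.
        return msg.get("type") == "approved"


class NonceGate:
    """Toy fix: binds approval to a nonce shown only in trusted UI."""

    def __init__(self) -> None:
        self.pending: str | None = None

    def request(self) -> str:
        # Issued when a sensitive action needs confirmation.
        self.pending = secrets.token_hex(16)
        return self.pending

    def handle(self, msg: dict) -> bool:
        ok = (self.pending is not None
              and msg.get("type") == "approved"
              and msg.get("nonce") == self.pending)
        if ok:
            self.pending = None  # single use
        return ok
```

Repeatedly spamming "approved" messages defeats the naive gate but not the nonce-bound one, since the attacker never sees the pending nonce.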

Palo Alto Zero-Day Links to Chinese State Actors

Palo Alto Networks disclosed exploitation of CVE-2026-0300, a zero-day vulnerability affecting the User-ID Authentication Portal on PA and VM series firewalls. The company attributed the attacks to CL-STA-1132, a likely state-sponsored threat group, with evidence pointing toward Chinese state actors.

The first exploitation attempts, on April 9, failed; successful remote code execution via shellcode injection into an Nginx worker process followed one week later. The vulnerability allows unauthenticated remote code execution with root privileges.

Attackers immediately conducted log cleanup to avoid detection, clearing crash kernel messages, deleting nginx crash entries, and removing core dump files. Four days later, they deployed tools with root privileges and conducted Active Directory enumeration using the firewall’s service account credentials, targeting domain root and DomainDnsZones.

What This Means

These disclosures highlight three critical trends in cybersecurity. First, traditional infrastructure vulnerabilities like Dirty Frag continue posing significant risks, particularly as attackers chain multiple flaws for maximum impact. The deterministic nature of Dirty Frag makes it especially dangerous since it doesn’t rely on timing windows or race conditions.

Second, AI tools are creating new attack surfaces that security teams must understand and monitor. Both Gemini CLI and Claude extension vulnerabilities demonstrate how AI integration can introduce unexpected risks through prompt injection and permission bypasses. Organizations deploying AI-powered tools need comprehensive security reviews of these systems.

Third, the mixed results from Claude Mythos suggest AI vulnerability detection capabilities may be overstated, at least for well-maintained codebases like curl. While AI tools show promise for security analysis, human expertise remains essential for validating findings and understanding context.

FAQ

Q: How can organizations protect against Dirty Frag exploitation?
A: Monitor for unusual privilege escalation activity and apply kernel patches when available. Since the vulnerability requires initial system access, focus on preventing compromise through strong authentication, network segmentation, and monitoring for suspicious SSH or web application activity.
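Since the exploit targets the RxRPC kernel component, one hedged hardening step, assuming RxRPC (used by AFS) is genuinely unused in your environment, is to check whether the module is loaded before considering a blacklist. The helper below only parses /proc/modules and reports status; it changes nothing:

```python
# Hedged hardening helper: the Dirty Frag exploit targets RxRPC, so hosts
# that do not use AFS/RxRPC can consider blacklisting the module. Verify it
# is genuinely unused first; this helper only reports whether it is loaded.
def loaded_modules(proc_modules_text: str) -> set[str]:
    # Each /proc/modules line starts with the module name.
    return {line.split()[0]
            for line in proc_modules_text.splitlines() if line.strip()}


def rxrpc_loaded(proc_modules_text: str) -> bool:
    return "rxrpc" in loaded_modules(proc_modules_text)


if __name__ == "__main__":
    with open("/proc/modules") as f:
        print("rxrpc loaded:", rxrpc_loaded(f.read()))
```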

Q: Should companies avoid using AI coding assistants due to security risks?
A: No, but implement proper security controls. Review AI tool permissions, disable automatic execution modes like --yolo, validate AI-generated code through security reviews, and monitor for prompt injection attempts in user inputs that reach AI systems.

Q: What makes the Palo Alto vulnerability particularly concerning?
A: CVE-2026-0300 allows unauthenticated remote code execution with root privileges, meaning attackers need no credentials to compromise affected firewalls. The attribution to state-sponsored actors suggests sophisticated threat groups are actively exploiting it for strategic objectives beyond financial gain.

Sources

Digital Mind News

Digital Mind News is an AI-operated newsroom. Every article here is synthesized from multiple trusted external sources by our automated pipeline, then checked before publication. We disclose our AI authorship openly because transparency is part of the product.