OpenAI on Monday began emailing more than 8,000 developers who applied for its invite-only GPT-5.5 party with a surprise consolation prize: a tenfold increase in Codex rate limits on their personal ChatGPT accounts, effective immediately and lasting through June 5.
“We had over 8,000 people express interest in just 24 hours, and while we wish our office was big enough to welcome everyone, we weren’t able to make space for every person who applied,” the company wrote in the email, according to VentureBeat. The gift applies to everyone who raised their hand — whether they were accepted, waitlisted, or turned away.
CEO Sam Altman telegraphed the move on X shortly before inboxes started lighting up. “We are gonna do something nice for everyone who applied for the GPT-5.5 party and that we didn’t have space for,” he wrote. “Hope you enjoy!” The post amassed more than 521,000 views within hours.
https://x.com/sama/status/2051318922805436896
What 10x Codex Access Means for Developers
The practical implications are substantial for the developer community. Codex, OpenAI’s AI-powered coding agent, operates under daily usage caps that vary by subscription tier. A tenfold increase to those caps gives developers dramatically more room to prototype, debug, and ship code using GPT-5.5 — which OpenAI says matches GPT-5.4’s per-token latency while performing at a higher level of intelligence.
The timing coincides with intensifying competition in AI coding tools. GitHub Copilot, Microsoft’s flagship coding assistant, has dominated the market since 2021. Cursor, an AI-first code editor, has gained significant traction among developers seeking more integrated AI experiences. The expanded Codex access puts OpenAI’s tool directly in developers’ hands for extended testing periods.
Developers responded with enthusiasm on social media. “I’m literally not taking my Codex hat off for the month,” one developer declared on X. Others kicked themselves for not signing up, with one writing, “That’s the last time I don’t sign up just because I’m not in SF.”
Security Concerns Shadow AI Coding Tools
The Codex boost comes amid growing security concerns about AI coding assistants. Recent research has exposed vulnerabilities across major platforms, with attackers consistently targeting credentials rather than the AI models themselves.
On March 30, BeyondTrust demonstrated that a crafted GitHub branch name could exfiltrate Codex’s OAuth token in cleartext, a flaw OpenAI classified as Critical (P1). Two days later, Anthropic’s Claude Code source code was inadvertently published to the public npm registry, and researchers at Adversa found that Claude Code silently ignored its own deny rules once a command chained more than 50 subcommands.
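The BeyondTrust finding belongs to a familiar class of bug: an agent interpolates an attacker-controlled string — here, a branch name — into a command it executes. A minimal defensive sketch of the principle, assuming a Python-based wrapper (the function names are illustrative, not from Codex itself):

```python
import re
import subprocess

# Conservative allow-list for branch names; anything containing shell
# metacharacters (";", "$", backticks, spaces) is rejected outright.
_BRANCH_RE = re.compile(r"^[A-Za-z0-9._/-]+$")

def is_safe_branch(branch: str) -> bool:
    # A leading "-" could be parsed by git as an option, so reject it too.
    return bool(_BRANCH_RE.match(branch)) and not branch.startswith("-")

def checkout(branch: str) -> None:
    # Validate first, then invoke git as an argument vector (no shell),
    # so the branch name can never be interpreted as extra commands.
    if not is_safe_branch(branch):
        raise ValueError(f"suspicious branch name: {branch!r}")
    subprocess.run(["git", "checkout", branch], check=True)
```

The same principle applies in any language: treat repository metadata as untrusted input, validate it against an allow-list, and never pass it through a shell.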
These incidents follow a pattern identified by security researchers. “An AI coding agent held a credential, executed an action, and authenticated to a production system without a human session anchoring the request,” according to VentureBeat’s analysis of six disclosed exploits against major AI coding platforms.
Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, identified the core issue: “Enterprises believe they’ve ‘approved’ AI vendors, but what they’ve actually approved is an interface, not the underlying system.”
The Rise of “Vibe Coding” in Education
As AI coding tools proliferate, researchers are studying how students interact with these systems. A recent arXiv study analyzed 19,418 interaction turns from 110 undergraduate students, revealing distinct patterns in what researchers call “vibe coding” — collaborating with AI via natural language rather than writing code line-by-line.
The study found that top-performing students engaged in “instrumental help-seeking,” using inquiry and exploration to elicit tutor-like AI responses. In contrast, low performers relied on “executive help-seeking,” frequently delegating tasks and prompting the AI to assume an executor role focused on ready-made solutions.
“Currently generative AI mirrors student intent (whether productive or passive) rather than optimizing for learning,” the researchers concluded. They argue for pedagogically aligned design that detects unproductive delegation and adaptively steers educational interactions toward inquiry.
Supply Chain Threats Target Developer Credentials
Beyond individual security concerns, AI coding tools face sophisticated supply chain attacks. Security firm Trend Micro recently identified Quasar Linux (QLNX), a backdoor built specifically to steal developer credentials across the software supply chain.
The malware targets AWS credentials, Kubernetes tokens, Docker Hub credentials, Git access tokens, NPM authentication tokens, and PyPI API keys. “An attacker who successfully deploys QLNX against a package maintainer gains access to that maintainer’s publishing pipeline,” Trend Micro warns. “A single compromise can be silently leveraged to trojanize packages, inject backdoors into build artifacts, or pivot into cloud environments.”
QLNX uses multiple persistence and detection evasion mechanisms, including memory execution, process name spoofing, and system log clearing. It deploys Pluggable Authentication Module (PAM) backdoors to harvest credentials and gathers extensive system information, including clipboard contents, SSH keys, and browser profiles.
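Defenders can at least inventory the files such a stealer would grab. The sketch below is illustrative rather than a QLNX detector: it checks a list of common credential paths (the set is an assumption based on the targets Trend Micro reports) and flags any that are readable by group or other users.

```python
from pathlib import Path

# Credential files QLNX-style stealers are reported to target;
# adjust this list for your own hosts.
CREDENTIAL_PATHS = [
    "~/.aws/credentials",
    "~/.kube/config",
    "~/.docker/config.json",
    "~/.git-credentials",
    "~/.npmrc",
    "~/.pypirc",
    "~/.ssh/id_rsa",
]

def audit(paths=CREDENTIAL_PATHS):
    """Return (path, mode, loose) for each credential file that exists.

    `loose` is True when any group/other permission bit is set,
    i.e. the file is readable beyond its owner.
    """
    findings = []
    for p in paths:
        f = Path(p).expanduser()
        if not f.exists():
            continue
        mode = f.stat().st_mode & 0o777
        loose = bool(mode & 0o077)  # any group/other bits set?
        findings.append((str(f), oct(mode), loose))
    return findings

if __name__ == "__main__":
    for path, mode, loose in audit():
        print(f"{path}: {mode} [{'LOOSE PERMS' if loose else 'ok'}]")
```

An audit like this catches sloppy permissions, not an active backdoor; pairing it with integrity monitoring of PAM modules and shell profiles covers more of QLNX’s reported persistence surface.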
What This Means
OpenAI’s mass Codex giveaway signals the company’s commitment to developer mindshare in an increasingly competitive AI coding market. The monthlong access boost gives thousands of developers extended exposure to GPT-5.5’s coding capabilities, potentially building loyalty ahead of enterprise sales cycles.
However, the security vulnerabilities plaguing AI coding platforms reveal systemic issues beyond individual tool flaws. The consistent targeting of credentials rather than AI models suggests attackers understand these systems’ true attack surface lies in their integration points with developer infrastructure.
For organizations adopting AI coding tools, the lesson is clear: security review must extend beyond the AI interface to encompass credential management, authentication flows, and supply chain integrity. The tools that promise to accelerate development may also accelerate security risks if not properly configured and monitored.
FAQ
What exactly did OpenAI give to developers who applied for the GPT-5.5 party?
OpenAI increased Codex rate limits by 10x for all 8,000+ applicants, regardless of whether they were accepted to the event. The boost lasts through June 5 and applies to personal ChatGPT accounts.
How serious are the security vulnerabilities in AI coding tools?
Very serious. Six research teams disclosed exploits against major platforms in nine months, with attackers consistently targeting credentials rather than AI models. BeyondTrust found Codex could leak GitHub tokens through crafted branch names, which OpenAI classified as Critical P1.
What is “vibe coding” and how does it affect learning?
Vibe coding refers to collaborating with AI through natural language rather than writing code line-by-line. Research shows top students use it for inquiry and exploration, while struggling students delegate tasks entirely, potentially hindering learning outcomes.
Related news
- Spotify wants to become the home for AI-generated personal audio – TechCrunch
- OpenAI trial live updates: Proceedings resume with more Helen Toner testimony – NBC Bay Area
- Worries about AI’s risks to humanity loom over the trial pitting Musk against OpenAI’s leaders – Pittsburgh Post-Gazette