Poolside AI on Thursday launched two new open-source Laguna models optimized for agentic coding workflows, while Xiaomi released MiMo-V2.5 and MiMo-V2.5-Pro under MIT licensing. Both releases highlight the growing momentum behind open-source alternatives to proprietary AI systems from OpenAI and Anthropic.
The San Francisco-based Poolside, founded in 2023, positioned its Laguna XS.2 models as cost-effective solutions for autonomous coding agents that can write code, use third-party tools, and execute tasks independently. According to VentureBeat, the models compete directly with expensive proprietary offerings while maintaining open licensing terms.
Poolside’s Agentic Coding Focus
Poolside’s Laguna models target a specific use case: agentic workflows where AI systems perform complex coding tasks autonomously rather than simple chat interactions. The company released both standard and enhanced versions alongside new development tools.
The startup introduced “pool,” a coding agent harness, and “shimmer,” a web-based mobile-optimized development environment for interactive code preview. According to Poolside’s announcement, these tools integrate directly with the Laguna models to create end-to-end agentic coding solutions.
Poolside post-training engineer George Grigorev explained on X that government agencies might prefer Poolside over proprietary labs like OpenAI due to transparency and control over the underlying model architecture. This positioning reflects broader enterprise concerns about vendor lock-in with closed-source AI systems.
Xiaomi’s Efficiency-Focused Models
Xiaomi’s MiMo-V2.5 series emphasizes token efficiency for agentic “claw” tasks — AI agents that complete user requests through third-party messaging apps and services. The models are available on Hugging Face under MIT licensing for commercial use.
According to Xiaomi’s ClawEval benchmarks, the Pro model achieved 63.8% performance on standardized agentic tasks while consuming fewer tokens than competing open-source alternatives. This efficiency translates to lower operational costs for enterprises moving to usage-based AI billing models.
The models support systems like OpenClaw, NanoClaw, and Hermes Agent for tasks including marketing content creation, email organization, and automated scheduling. VentureBeat noted that both versions appear in the top-left quadrant of efficiency charts, indicating high performance with minimal token consumption.
Security Challenges in Open Model Distribution
The proliferation of open models has created new security vectors, according to research from Acronis. The cybersecurity firm identified nearly 600 malicious “skills” distributed through AI platforms including Hugging Face and ClawHub, targeting both Windows and macOS systems.
Threat actors exploit trust relationships between users and AI distribution platforms by embedding trojanized code in shared model files. Two developer accounts — hightower6eu with 334 malicious skills and sakaen736jih with 199 — contained most of the identified threats, including the Atomic macOS Stealer (AMOS).
The attacks use indirect prompt injection to instruct AI agents to download and execute malicious code without user awareness. This technique leverages the modular architecture of AI ecosystems where agents can execute external code with elevated privileges.
Cisco’s Model Provenance Solution
Cisco on Thursday released the Model Provenance Kit, an open-source tool addressing supply chain risks in third-party AI models. The toolkit helps organizations track model lineage, verify developer claims, and assess security vulnerabilities before deployment.
According to Cisco, enterprises often lack visibility into model modifications, training biases, and potential vulnerabilities when sourcing from repositories like Hugging Face. The provenance kit provides documentation and verification capabilities to support incident response and regulatory compliance.
The tool addresses specific enterprise concerns including licensing risks, regulatory documentation requirements, and supply chain integrity verification. Cisco emphasized that without proper provenance tracking, organizations cannot trace security incidents to root causes or identify affected models in their deployment stack.
Fine-Tuning Accessibility Improvements
Hugging Face published new guidance for fine-tuning large language models using PyTorch, making advanced customization techniques more accessible to developers. The blog post represents “Chapter 0” of an upcoming handbook covering practical implementation approaches.
The documentation focuses on practical fine-tuning workflows rather than theoretical foundations, addressing enterprise demand for customizable open-source models. This educational push supports broader adoption of open alternatives to proprietary AI services.
Fine-tuning capabilities represent a key differentiator for open models, allowing organizations to adapt pre-trained systems for specific use cases without relying on API-based services. The improved documentation lowers technical barriers for teams implementing custom AI solutions.
What This Means
The simultaneous releases from Poolside and Xiaomi signal intensifying competition in open-source AI, particularly for specialized use cases like coding and agentic workflows. Both companies are targeting enterprise users seeking alternatives to expensive proprietary models while maintaining performance standards.
However, the security research from Acronis highlights real risks as open model ecosystems expand. Organizations adopting open-source AI must implement robust verification processes and supply chain security measures. Cisco’s provenance toolkit represents one approach to managing these risks systematically.
The efficiency focus of Xiaomi’s models and the specialized positioning of Poolside’s coding agents suggest open-source development is moving beyond general-purpose chat models toward task-specific optimization. This trend could accelerate enterprise adoption by addressing specific business requirements more cost-effectively than broad-capability proprietary systems.
FAQ
What makes Poolside’s Laguna models different from other open-source AI models?
Laguna models are specifically optimized for agentic coding workflows where AI systems autonomously write code, use tools, and execute tasks. Unlike general-purpose models, they’re designed for specialized development environments and coding agent applications.
How do Xiaomi’s MiMo-V2.5 models achieve better token efficiency?
The models are optimized for “claw” tasks through specialized training that reduces token consumption while maintaining performance. This efficiency translates to lower costs for enterprises using usage-based AI billing, especially for repetitive agentic workflows.
What security risks should organizations consider when using open-source AI models?
Key risks include trojanized model files, indirect prompt injection attacks, and lack of provenance tracking. Organizations should verify model sources, implement security scanning, and use tools like Cisco’s Model Provenance Kit to track model lineage and vulnerabilities.






