San Francisco-based AI startup Poolside on Thursday launched two new Laguna large language models optimized for agentic coding workflows, marking a significant entry from a U.S. company into the increasingly competitive open-source AI market dominated by Chinese firms like DeepSeek and Xiaomi.
The Laguna XS.2 and a larger variant target developers seeking affordable alternatives to proprietary models from OpenAI and Anthropic. According to VentureBeat’s coverage, Poolside’s models focus specifically on autonomous coding tasks, tool usage, and action execution rather than general chat applications.
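The agentic pattern described here, where a model emits actions rather than chat replies, can be sketched as a loop that dispatches model-chosen tools. Everything below (`fake_model`, the JSON action format, the tool names) is a hypothetical illustration of the general pattern, not Poolside’s actual interface:

```python
# Toy sketch of an agentic coding loop: the "model" emits a structured action
# (tool name + arguments), and a dispatcher executes it and feeds the result
# back as the next observation. All names here are illustrative assumptions.
import json

def fake_model(observation: str) -> str:
    """Stand-in for an LLM call; returns a JSON-encoded action."""
    if "tests failed" in observation:
        return json.dumps({"tool": "edit_file",
                           "args": {"path": "app.py", "patch": "fix off-by-one"}})
    return json.dumps({"tool": "run_tests", "args": {}})

# Tool registry: each tool takes the action's args and returns an observation.
TOOLS = {
    "run_tests": lambda args: "tests failed: 1 error",
    "edit_file": lambda args: f"patched {args['path']}",
}

def agent_step(observation: str) -> str:
    action = json.loads(fake_model(observation))
    return TOOLS[action["tool"]](action["args"])

obs = agent_step("start")  # model chooses run_tests, sees a failure
obs = agent_step(obs)      # model reacts by editing the failing file
```

The key design point is that the model’s output is treated as data (a parsed action) rather than text shown to a user, which is what distinguishes agentic workflows from chat applications.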
Xiaomi Leads Open Source Efficiency Race
Xiaomi simultaneously strengthened its position in open-source AI with the release of MiMo-V2.5 and MiMo-V2.5-Pro under the MIT License. Both models are available on Hugging Face for commercial use without restrictions.
The Pro model achieved a 63.8% success rate on ClawEval benchmarks for agentic tasks while maintaining high token efficiency. According to Xiaomi’s published data, both versions rank among the top performers for powering systems like OpenClaw and NanoClaw, where AI agents complete tasks autonomously on users’ behalf.
This efficiency advantage becomes critical as services like Microsoft’s GitHub Copilot shift to usage-based billing models, charging users per token consumed rather than offering unlimited subscriptions.
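Under per-token billing, cost scales directly with how many tokens a model needs to finish a task, so a back-of-the-envelope estimate makes the efficiency stakes concrete. The rates and volumes below are made-up placeholders, not actual Copilot or API pricing:

```python
# Rough monthly-cost estimate under usage-based billing.
# All numbers are illustrative assumptions, not real pricing.
def monthly_cost(tokens_per_request: int, requests_per_day: int,
                 usd_per_million_tokens: float, days: int = 30) -> float:
    total_tokens = tokens_per_request * requests_per_day * days
    return total_tokens / 1_000_000 * usd_per_million_tokens

# A model that completes the same tasks in fewer tokens costs
# proportionally less at the same per-token rate.
verbose = monthly_cost(8000, 50, 2.00)    # 8k tokens/request -> $24.00/month
efficient = monthly_cost(3000, 50, 2.00)  # 3k tokens/request -> $9.00/month
```

This is why token efficiency, not just benchmark accuracy, becomes a purchasing criterion once billing is metered.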
Security Challenges Plague Model Repositories
The rapid expansion of open-source AI models has created new attack vectors for cybercriminals. Acronis researchers identified nearly 600 malicious “skills” across 13 developer accounts on ClawHub, with threat actors distributing trojans, cryptominers, and information stealers targeting Windows and macOS systems.
Two accounts contained the majority of malicious content: hightower6eu hosted 334 malicious skills, while sakaen736jih contained 199. The attacks exploit users’ trust in AI distribution platforms through social engineering rather than compromising the AI models themselves.
Hugging Face Also Targeted
Similar abuse patterns emerged on Hugging Face, where attackers upload trojanized files disguised as legitimate AI resources. The attacks use indirect prompt injection techniques, embedding hidden instructions that AI systems execute without user awareness.
One identified payload targeting macOS users deployed the Atomic macOS Stealer (AMOS), demonstrating how threat actors increasingly target trusted AI distribution channels rather than traditional malvertising vectors.
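Indirect prompt injection works by hiding instruction-like text in content an AI agent will later read, such as a README or model card. A crude defensive sketch is to scan fetched text for such phrases before handing it to a model; the phrase list below is an illustrative assumption, and real defenses need far more than keyword matching:

```python
# Naive heuristic scan for indirect prompt injection in downloaded text.
# The patterns are illustrative only; keyword matching is easily bypassed
# and serves here just to show where such a check would sit in a pipeline.
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"do not (tell|inform) the user",
    r"send .* to http",
]

def flag_injection(text: str) -> bool:
    """Return True if the text contains instruction-like phrases
    apparently addressed to an AI agent rather than a human reader."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

The broader lesson from the Acronis findings is that content pulled from model repositories should be treated as untrusted input, not as documentation.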
Cisco Addresses Model Provenance Gap
Cisco on Thursday released the Model Provenance Kit, an open-source tool designed to help organizations track third-party AI model lineage and modifications. The tool addresses critical gaps in model accountability that affect millions of models available on repositories like Hugging Face.
According to Cisco, organizations often cannot verify claims made by model developers regarding sources, vulnerabilities, and training biases. This lack of transparency creates security, compliance, and liability risks, particularly when models contain poisoned data or manipulation vulnerabilities.
“Without provenance, organizations have no easy way to trace an incident back to its root cause, and no way to determine which other models in their stack are also affected,” Cisco explained in its announcement.
Enterprise Adoption Challenges
The provenance problem becomes more acute as enterprises deploy AI models in customer-facing applications and internal systems. Unverified models can perpetuate biases, introduce security vulnerabilities, or violate licensing requirements.
Regulatory compliance adds another layer of complexity, as government requirements for documenting AI system usage become more stringent. Supply chain integrity risks emerge when organizations cannot verify developer claims about model capabilities and limitations.
Growing Fine-Tuning Ecosystem
The open-source AI landscape continues expanding beyond pre-trained models. Hugging Face published comprehensive guidance on fine-tuning large language models with PyTorch, indicating growing enterprise interest in customizing models for specific use cases.
Fine-tuning allows organizations to adapt general-purpose models like Llama or Mistral for domain-specific applications while maintaining the cost advantages of open-source alternatives. This approach enables smaller companies to compete with tech giants without building models from scratch.
The availability of tools like Hugging Face’s Transformers library and PyTorch integration lowers technical barriers for organizations seeking to implement custom AI solutions.
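Conceptually, fine-tuning means starting from pretrained weights and running further gradient steps on a small domain dataset. Real LLM fine-tuning uses PyTorch and the Transformers library as the article notes; the pure-Python linear model below only mirrors that idea at toy scale, with made-up data:

```python
# Toy illustration of fine-tuning: begin from "pretrained" parameters and
# take gradient-descent steps on a small domain-specific dataset.
# The model, data, and learning rate are all illustrative assumptions.
pretrained_w, pretrained_b = 1.8, 0.3               # weights from "pretraining"
domain_data = [(1.0, 3.1), (2.0, 5.0), (3.0, 7.2)]  # domain targets ~ 2x + 1

def loss(w, b):
    """Mean squared error on the domain dataset."""
    return sum((w * x + b - y) ** 2 for x, y in domain_data) / len(domain_data)

w, b = pretrained_w, pretrained_b
lr = 0.02
for _ in range(200):  # a short "fine-tuning" run
    gw = sum(2 * (w * x + b - y) * x for x, y in domain_data) / len(domain_data)
    gb = sum(2 * (w * x + b - y) for x, y in domain_data) / len(domain_data)
    w, b = w - lr * gw, b - lr * gb
```

After the loop, the adapted weights fit the domain data much better than the pretrained starting point, which is the entire economic appeal: reuse expensive pretraining, pay only for the cheap adaptation.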
What This Means
The simultaneous launch of Poolside’s Laguna models and Xiaomi’s MiMo updates signals intensifying competition in open-source AI, with U.S. companies finally challenging Chinese dominance in affordable, high-performance models. However, security concerns around model repositories highlight the need for better verification and provenance tracking as adoption accelerates.
Cisco’s Model Provenance Kit addresses a critical infrastructure gap, but widespread adoption will depend on organizations recognizing the risks of unverified AI models. The tension between rapid deployment and security validation will likely shape enterprise AI adoption patterns in the year ahead.
For developers and enterprises, the expanding ecosystem of open-source models offers unprecedented choice and cost savings, but requires more sophisticated evaluation and security practices than proprietary alternatives.
FAQ
What makes Poolside’s Laguna models different from other open-source options?
Laguna models specifically target agentic coding workflows and autonomous task execution, unlike general-purpose models. They’re designed for developers building AI agents that write code, use tools, and take actions independently.
How do security risks in model repositories affect enterprises?
Malicious models can introduce trojans, cryptominers, and data stealers into enterprise systems. Threat actors exploit trust in platforms like Hugging Face and ClawHub through social engineering, making verification tools like Cisco’s Model Provenance Kit essential.
Why are Chinese companies dominating open-source AI development?
Chinese firms like Xiaomi and DeepSeek focus on efficiency and affordability while maintaining competitive performance. Their models often match proprietary alternatives at significantly lower costs, appealing to price-sensitive enterprises and developers.
Related news
- Building long-horizon SWE environments on Hugging Face: Frontier SWE × OpenEnv – HuggingFace Blog
- NVIDIA Isaac GR00T N1.7: Open Reasoning VLA Model for Humanoid Robots – HuggingFace Blog
Sources
- Fine-Tuning Your First Large Language Model (LLM) with PyTorch and Hugging Face – HuggingFace Blog
- American AI startup Poolside launches free, high-performing open model Laguna XS.2 for local agentic coding – VentureBeat
- Hugging Face, ClawHub Abused for Malware Distribution – SecurityWeek
- Open source Xiaomi MiMo-V2.5 and V2.5-Pro are among the most efficient (and affordable) at agentic ‘claw’ tasks – VentureBeat
- Cisco Releases Open Source Tool for AI Model Provenance – SecurityWeek