
OpenAI Breaks Microsoft Exclusivity, Poolside Ships Open AGI Models

OpenAI and Microsoft on Monday dismantled the exclusive partnership that has defined commercial AI since 2019, while startup Poolside launched open-source reasoning models that challenge proprietary AGI development. The restructuring eliminates Microsoft’s revenue share from Azure OpenAI customers and allows OpenAI to deploy on AWS and Google Cloud, effective immediately.

Microsoft-OpenAI Partnership Restructure Reshapes Cloud AI

The amended agreement marks the most significant change since Microsoft’s initial $1 billion investment in 2019. Under new terms disclosed in simultaneous OpenAI and Microsoft blog posts, Microsoft will no longer receive revenue share when customers access OpenAI models through Azure.

OpenAI continues paying Microsoft a 20% revenue share through 2030, but that obligation is now subject to an overall cap. Microsoft retains non-exclusive licensing rights to OpenAI’s intellectual property through 2032. Most significantly for enterprise customers, OpenAI can now serve all products on any cloud provider, including Amazon Web Services and Google Cloud.

According to AWS CEO Andy Jassy, OpenAI services will be available on AWS “within weeks.” Technology commentator Jehangeer Hasan called the change a “notable shift in the cloud AI landscape,” ending an arrangement that had previously forced enterprise customers onto Azure for OpenAI access.

Poolside Launches Open-Source AGI Reasoning Models

San Francisco startup Poolside released two open-source Laguna language models optimized for autonomous reasoning workflows, challenging the proprietary model dominance of OpenAI and Anthropic. The company announced Laguna XS.2 alongside new development tools including the “pool” coding agent harness and “shimmer” web-based development environment.

Poolside’s models target agentic AI capabilities — systems that write code, use third-party tools, and take autonomous actions beyond simple chat generation. According to VentureBeat, the release represents a significant U.S. open-source challenger in a space dominated by Chinese companies like DeepSeek and Xiaomi.

The timing coincides with enterprise demand for alternatives to expensive proprietary models. Poolside post-training engineer George Grigorev noted that government agencies increasingly seek domestic open-source options over leading proprietary labs for security and cost considerations.

Breakthrough in Efficient Reasoning Model Training

Researchers at JD.com introduced Reinforcement Learning with Verifiable Rewards with Self-Distillation (RLSD), a training method that dramatically reduces compute requirements for custom reasoning models. The technique combines reinforcement learning’s performance tracking with self-distillation’s granular feedback, addressing what co-author Chenxu Yang called the “signal density problem” in standard training.

Traditional Reinforcement Learning with Verifiable Rewards (RLVR) provides only binary feedback — a model receives identical credit for every token in a multi-thousand-token reasoning trace, whether pivotal or irrelevant. RLSD enables models to learn which intermediate steps contribute to success or failure.
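The contrast between the two feedback schemes can be sketched in a few lines. This is a minimal illustration of the credit-assignment difference only; the function names, the teacher probabilities, and the toy trace are hypothetical, not drawn from the RLSD paper.

```python
import math

# Toy reasoning trace: each entry stands for one generated token.
tokens = ["step1", "step2", "step3", "answer"]

def rlvr_credit(trace, answer_correct):
    """RLVR-style: one binary, verifiable reward for the whole trace.
    Every token receives identical credit, pivotal or not."""
    reward = 1.0 if answer_correct else 0.0
    return [reward] * len(trace)

def distill_credit(trace, teacher_probs):
    """Self-distillation-style: a dense per-token signal, here the
    log-probability a teacher model assigns to each generated token."""
    return [math.log(p) for p in teacher_probs]

coarse = rlvr_credit(tokens, answer_correct=True)
dense = distill_credit(tokens, teacher_probs=[0.9, 0.2, 0.8, 0.95])

# Binary reward: no per-token distinction at all.
assert len(set(coarse)) == 1
# Dense signal: each intermediate step gets its own feedback.
assert len(set(dense)) == len(tokens)
```

The second scheme is what lets a trainer tell that, say, "step2" was weak even when the final answer was correct, which is the signal-density gap the article describes.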

According to VentureBeat, experiments show RLSD-trained models outperform those built on classic distillation and reinforcement learning. The approach lowers technical and financial barriers for enterprise teams building custom reasoning models tailored to specific business logic.

Research Challenges Core AGI Assumptions

New research from arXiv challenges a fundamental assumption in neuro-symbolic AI development — that compositional reasoning emerges automatically from successful symbol grounding. The study introduces the Iterative Logic Tensor Network (iLTN), designed for multi-step deduction, to test whether grounding alone produces reasoning capabilities.

Researchers tested models across three generalization categories: novel entities, unseen relations, and complex rule compositions. Models trained solely on grounding objectives failed to generalize, while iLTN models trained jointly on perceptual grounding and multi-step reasoning achieved high zero-shot accuracy across all tasks.
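The difference between the two training regimes can be sketched as a loss function. This is a hypothetical illustration, not the paper's code; the function names, the toy data, and the weighting term `lam` are assumptions made for clarity.

```python
def grounding_loss(pred_symbols, true_symbols):
    # Fraction of perceptual inputs mapped to the wrong symbol.
    wrong = sum(p != t for p, t in zip(pred_symbols, true_symbols))
    return wrong / len(true_symbols)

def reasoning_loss(pred_conclusions, true_conclusions):
    # Fraction of multi-step deductions reaching a wrong conclusion.
    wrong = sum(p != t for p, t in zip(pred_conclusions, true_conclusions))
    return wrong / len(true_conclusions)

def joint_loss(pred_symbols, true_symbols,
               pred_conclusions, true_conclusions, lam=1.0):
    # Optimizing both terms makes multi-step reasoning an explicit
    # training objective rather than an assumed byproduct of grounding.
    return (grounding_loss(pred_symbols, true_symbols)
            + lam * reasoning_loss(pred_conclusions, true_conclusions))

# Toy check: perfect grounding and perfect deduction give zero loss.
loss = joint_loss(["cat", "dog"], ["cat", "dog"],
                  ["A->C"], ["A->C"])  # → 0.0
```

A grounding-only regime optimizes just the first term; the study's finding is that models trained that way fail to generalize to unseen relations and rule compositions, while jointly trained models do.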

The findings provide “conclusive evidence that symbol grounding, while necessary, is insufficient for generalization,” establishing that reasoning requires explicit learning objectives rather than emerging as a byproduct. This challenges current AGI development approaches that prioritize grounding over explicit reasoning training.

NVIDIA and Google Cloud Expand AGI Infrastructure

NVIDIA and Google Cloud announced expanded collaboration for agentic and physical AI development at Google Cloud Next in Las Vegas. The partnership introduces NVIDIA Vera Rubin-powered A5X bare-metal instances and preview access to Google Gemini running on NVIDIA Blackwell and Blackwell Ultra GPUs.

New offerings include confidential VMs with NVIDIA Blackwell GPUs and agentic AI capabilities on Gemini Enterprise Agent Platform with NVIDIA Nemotron open models. The collaboration targets production deployment of agents managing complex workflows and robots operating in factory environments.

According to NVIDIA’s blog, the full-stack platform spans “performance-optimized libraries and frameworks to enterprise-grade cloud services,” enabling developers to move agentic and physical AI from laboratory research into production systems.

What This Means

These developments signal a fundamental shift in AGI development from closed, exclusive partnerships toward open, competitive ecosystems. OpenAI’s break from Microsoft exclusivity democratizes access to frontier models across cloud providers, while Poolside’s open-source release challenges the assumption that AGI requires massive proprietary investments.

The research findings on reasoning versus grounding suggest current AGI approaches may be fundamentally flawed. If reasoning doesn’t emerge from grounding alone, AGI development must explicitly train for multi-step logical capabilities — potentially explaining why current models struggle with complex reasoning despite impressive grounding performance.

For enterprises, these changes create new opportunities and challenges. Organizations gain cloud provider choice and access to cost-effective open models, but must navigate increased complexity in model selection and deployment strategies. The RLSD training breakthrough particularly benefits companies needing custom reasoning capabilities without frontier-model budgets.

FAQ

When will OpenAI models be available on AWS and Google Cloud?
AWS CEO Andy Jassy stated OpenAI services will launch on AWS “within weeks” of the Monday announcement. Google Cloud availability timing wasn’t specified but is expected shortly after AWS deployment.

How does Poolside’s open model compare to proprietary alternatives?
Poolside claims performance competitive with proprietary models for coding and agentic tasks, but independent benchmarks aren’t yet available. The key advantage is local deployment without usage fees or data sharing requirements.

What does the reasoning research mean for current AI development?
The findings suggest that training AI systems solely on pattern recognition and symbol grounding won’t automatically produce reasoning capabilities. AGI development may need explicit reasoning objectives, potentially requiring new training methodologies and architectural approaches.

Sources

Digital Mind News

Digital Mind News is an AI-operated newsroom. Every article here is synthesized from multiple trusted external sources by our automated pipeline, then checked before publication. We disclose our AI authorship openly because transparency is part of the product.