UL Launches AI Safety Standard as Tech Regulation Expands

UL Solutions on Monday unveiled UL 3115, a structured framework for evaluating AI-based products before and during deployment, marking the safety testing giant’s entry into artificial intelligence regulation. The standard arrives as governments worldwide accelerate technology oversight, with seven U.S. states now enforcing comprehensive right-to-repair laws and federal agencies expanding regulatory frameworks across multiple tech sectors.

According to UL’s announcement, the standard will need “a lot of companies and regulators to buy in,” and it aims to establish whether it is possible to “reliably safety test AI at all.” UL CEO Jennifer Scanlon told The Verge that the century-old company, known for its ubiquitous safety marks on electronics, sees AI testing as a natural extension of its fire and electrical safety heritage.

State-Level Tech Regulation Gains Momentum

California, Colorado, Minnesota, New York, Connecticut, Oregon and Washington have all passed comprehensive right-to-repair regulations covering consumer electronics, farm equipment, wheelchairs and automobiles. CNBC reported that Maine and Texas are expected to follow, with major manufacturers including Apple, Samsung, IBM and John Deere facing compliance requirements.

The right-to-repair movement represents what industry observers call a “populist wave” against the “captive repair economy.” These laws mandate that manufacturers provide repair manuals, spare parts and diagnostic tools to independent repair shops and consumers.

Repair advocates argue the legislation breaks manufacturer monopolies on device servicing. Apple previously opposed such measures but reversed course in 2023, announcing support for federal right-to-repair legislation after California’s bill passed.

Federal Agencies Expand Technology Oversight

The Commodity Futures Trading Commission granted Gemini approval to operate its own regulated derivatives clearinghouse, allowing the cryptocurrency exchange to clear and settle trades in-house rather than relying on outside infrastructure. The decision gives Gemini “greater control over how its prediction market products function and scale,” according to company statements.

Prediction markets have emerged as a regulatory focus area as platforms like Polymarket and Kalshi gain mainstream adoption. The CFTC approval signals federal willingness to accommodate crypto derivatives trading within existing regulatory frameworks.

Meanwhile, enterprise AI deployment faces increasing scrutiny. Industry research indicates over 40% of agentic AI projects will be abandoned by 2027 due to high costs, unclear value and operational complexity. This backdrop makes UL’s AI safety standard particularly relevant for enterprises seeking regulatory compliance.

International AI Regulation Landscape

The European Union’s AI Act, which entered into force in August 2024, established the world’s first comprehensive AI regulation framework. The law classifies AI systems by risk level and imposes requirements ranging from transparency disclosures to outright prohibitions on certain applications.

China has implemented multiple AI regulations since 2022, including rules for algorithmic recommendations and deep synthesis technologies. The country’s approach emphasizes content control and data security over safety testing.

Canada’s proposed Artificial Intelligence and Data Act (AIDA) would establish mandatory risk assessments for high-impact AI systems. The bill remains under parliamentary review but could influence U.S. federal AI legislation.

Industry Response to Regulatory Pressure

Major tech companies are adapting business models to accommodate regulatory requirements. Mistral AI launched Workflows, a production-grade orchestration layer designed to move enterprise AI systems “out of proofs of concept and into business processes that generate revenue.”

The Paris-based company, valued at €11.7 billion ($13.8 billion), positions Workflows as infrastructure to “run AI systems reliably across business-critical processes.” Mistral head of product Elisa Salamanca told VentureBeat that “the bottleneck for organizations adopting AI is no longer the model itself, but the infrastructure required to run it reliably at scale.”

Open source AI development continues despite regulatory uncertainty. Xiaomi released MiMo-V2.5 and MiMo-V2.5-Pro under the MIT License, making both models available for commercial use. The models are reported to excel at agentic “claw” tasks, achieving a 63.8% score while using fewer tokens than competing systems.

Compliance Costs and Market Impact

Regulatory compliance represents a growing expense for technology companies. Right-to-repair laws require manufacturers to maintain parts inventories, publish repair documentation and train service networks. These costs particularly impact smaller manufacturers with limited compliance resources.

AI safety testing adds another layer of expense. UL’s framework requires ongoing evaluation “before and during deployment,” suggesting continuous monitoring rather than one-time certification. Companies developing AI products must now budget for safety testing alongside traditional quality assurance.
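To make the “before and during deployment” distinction concrete, here is a minimal sketch of a continuous safety-evaluation loop. The check names, stub model and pass criteria are illustrative assumptions, not part of UL 3115, whose actual test methods have not been published in this article.

```python
def run_safety_suite(model, checks):
    """Run every safety check against the model; return name -> pass/fail."""
    return {name: check(model) for name, check in checks.items()}


def gate_deployment(results):
    """Pre-deployment gate: release is blocked unless every check passes."""
    return all(results.values())


# Stub model and illustrative checks (placeholders, not UL 3115 criteria).
def stub_model(prompt):
    return "I can't help with that." if "unsafe" in prompt else "ok"


checks = {
    "refuses_unsafe_request": lambda m: "can't" in m("unsafe request"),
    "answers_benign_request": lambda m: m("benign request") == "ok",
}

results = run_safety_suite(stub_model, checks)
assert gate_deployment(results)  # block the release if any check fails

# Under a continuous-monitoring regime, the same suite would be re-run on a
# schedule after deployment, with alerts when a previously passing check fails.
```

The design point is that the suite is a reusable artifact: the identical checks gate the initial release and then run on a schedule in production, rather than being a one-time certification step.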

The agentic AI market reached $10.9 billion in 2026 and is projected to grow to $199 billion by 2034. However, regulatory compliance costs could slow adoption rates and favor larger companies with dedicated legal teams.
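For context, the growth those two market-size figures imply can be checked with a quick calculation; the figures come from the article, and the eight-year span is simply 2034 minus 2026.

```python
def implied_cagr(start_value, end_value, years):
    """Compound annual growth rate implied by two market-size figures."""
    return (end_value / start_value) ** (1 / years) - 1


# $10.9B in 2026 growing to $199B by 2034 is an 8-year span.
growth = implied_cagr(10.9, 199.0, 2034 - 2026)
print(f"Implied CAGR: {growth:.1%}")  # → Implied CAGR: 43.8%
```

A compound annual growth rate near 44% is aggressive by any historical standard, which is why compliance-cost drag on adoption matters to the projection.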

What This Means

Technology regulation is shifting from reactive enforcement to proactive standard-setting. UL’s AI safety framework represents industry self-regulation ahead of federal mandates, while state-level right-to-repair laws demonstrate grassroots pressure for technology accountability.

The convergence of AI safety standards, repair rights and financial technology oversight suggests a comprehensive regulatory approach emerging across multiple government levels. Companies operating in these sectors must now navigate overlapping compliance requirements while maintaining competitive positioning.

Successful navigation requires early engagement with standard-setting bodies, proactive compliance planning and business model adaptation. Organizations that view regulation as an operational constraint rather than a market opportunity risk falling behind competitors who integrate compliance into product development.

FAQ

What is UL 3115 and why does it matter for AI companies?
UL 3115 is a safety testing standard for AI-based products that provides structured evaluation methods before and during deployment. It matters because it could become an industry requirement for AI products, similar to how UL electrical safety certifications are now standard for electronics.

Which states have passed right-to-repair laws and what do they require?
California, Colorado, Minnesota, New York, Connecticut, Oregon and Washington have comprehensive right-to-repair laws. These require manufacturers to provide repair manuals, spare parts and diagnostic tools to independent repair shops and consumers for various products including electronics and farm equipment.

How might AI regulation affect smaller companies differently than large tech firms?
Smaller companies may struggle more with compliance costs for safety testing and documentation requirements. However, they could benefit from standardized frameworks that level the playing field and open source models that reduce development costs while meeting regulatory requirements.

Sources

Digital Mind News

Digital Mind News is an AI-operated newsroom. Every article here is synthesized from multiple trusted external sources by our automated pipeline, then checked before publication. We disclose our AI authorship openly because transparency is part of the product.