On April 22, 2026, UL Solutions released UL 3115, a new safety testing standard for AI-based products that provides “a structured framework to evaluate AI-based products before and during deployment.” The safety certification company, whose marks have appeared on electrical products for more than a century, aims to bring the same rigorous testing approach to artificial intelligence systems.
UL CEO Jennifer Scanlon told The Verge that the standard addresses growing enterprise demand for AI safety validation as companies move beyond pilot programs to production deployments. The UL logo, familiar on consumer electronics for fire and electrical safety certification, will now extend to AI systems that pass the new testing protocols.
Enterprise AI Safety Gap Drives Standard Development
The timing reflects mounting enterprise concerns about AI deployment risks. According to MIT Technology Review, many organizations discover “the biggest obstacle to meaningful adoption is the state of their data” and underlying infrastructure readiness.
“The quality of that AI and how effective that AI is, is really dependent on information in your organization,” Bavesh Patel, a senior vice president at Databricks, told MIT Technology Review. Without proper data governance and safety frameworks, businesses risk what Patel calls “terrible AI.”
UL 3115 addresses this by establishing testing criteria for AI model behavior, data handling practices, and deployment safeguards. The standard requires AI systems to demonstrate consistent performance under various conditions and maintain audit trails for compliance verification.
Microsoft and Google Report Massive Enterprise AI Adoption
The safety standard launch coincides with unprecedented enterprise AI deployment rates. Google Cloud reported tracking 1,302 real-world generative AI use cases across leading organizations as of April 2026, up from 101 cases when the company first published its tracking list in 2024.
“This almost certainly is the fastest technological transformation we’ve seen, and customers are driving it,” Google Cloud President Matt Renner wrote. The company noted that “production AI and agentic systems are now deployed in meaningful ways across virtually every one of the thousands of organizations” attending its Next ’26 conference.
Microsoft similarly emphasized that enterprise customers have moved “quickly from experimentation to production” and now want “measurable business outcomes, along with security, governance and responsible AI built in from day one.”
Microsoft defines this shift as “Frontier Transformation,” where AI becomes “a repeatable, governed capability embedded into the flow of work, business processes and customer engagement.”
Standard Addresses Trust and Governance Requirements
UL 3115 targets what Microsoft identifies as two essential elements for enterprise AI success: intelligence and trust. The standard provides frameworks for organizations to ensure AI systems remain “observable, managed and secured across the technology stack.”
The certification process evaluates AI systems across multiple dimensions:
- Model reliability testing under varied input conditions
- Data governance compliance including privacy and access controls
- Output validation to prevent harmful or biased responses
- Deployment monitoring capabilities for ongoing performance tracking
- Incident response procedures for AI system failures
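To make the output-validation and audit-trail dimensions above concrete, here is a minimal sketch of how a deployment team might wrap a model call with both safeguards. Everything here (the `AuditLog` class, the blocklist policy, the function names) is a hypothetical illustration, not anything UL 3115 prescribes:

```python
import hashlib
import time

class AuditLog:
    """Append-only record of model calls for compliance review.
    (Hypothetical illustration -- UL 3115 does not prescribe a format.)"""
    def __init__(self):
        self.entries = []

    def record(self, prompt, response, passed):
        # Hash the prompt so the trail is reviewable without storing raw user input.
        self.entries.append({
            "timestamp": time.time(),
            "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
            "response": response,
            "validation_passed": passed,
        })

BLOCKED_TERMS = {"ssn", "credit card"}  # placeholder policy terms

def validate_output(text):
    """Reject responses containing blocked terms before they reach users."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def guarded_generate(model_fn, prompt, log):
    """Wrap a model call with output validation and audit logging."""
    response = model_fn(prompt)
    passed = validate_output(response)
    log.record(prompt, response, passed)  # every call is audited, pass or fail
    return response if passed else "[response withheld by safety filter]"

# Usage with a stand-in model function
log = AuditLog()
result = guarded_generate(lambda p: "Your SSN is on file.", "account status?", log)
print(result)  # the blocked response is withheld
```

The point of the sketch is that validation and logging sit outside the model itself, which is what lets a third party like UL audit the safeguard independently of the model's weights.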
UL Solutions plans to offer both pre-deployment certification and ongoing monitoring services. Companies that meet UL 3115 requirements will receive certification marks similar to the familiar UL electrical safety logos.
Industry Adoption Challenges and Opportunities
The success of UL 3115 depends on widespread industry adoption and regulatory acceptance. Unlike electrical safety standards that developed over decades, AI safety certification faces rapidly evolving technology and unclear regulatory frameworks.
“That kind of standard requires a lot of companies and regulators to buy in — and for there to be a way to even reliably safety test AI at all,” The Verge noted in its coverage of the announcement.
Early adoption may come from regulated industries like healthcare and finance, where compliance requirements already drive safety certification demand. CNBC reported that even entertainment companies are implementing AI governance as they deploy generative tools for production workflows.
Innovative Dreams, a new Amazon Web Services-backed production company, exemplifies this trend by “combining cameras and a giant LED wall on a soundstage with tools to apply AI from pre-production” while maintaining safety protocols.
What This Means
UL 3115 represents the first major attempt to standardize AI safety testing using established certification methodologies. The standard arrives as enterprise AI deployments accelerate beyond pilot phases into production systems affecting business operations and customer interactions.
For enterprises, UL certification could provide third-party validation that helps satisfy compliance requirements and reduce deployment risks. Insurance companies may eventually require AI safety certification for coverage, similar to how electrical safety standards became insurance prerequisites.
The standard’s success will depend on whether UL can adapt century-old testing principles to rapidly evolving AI technology. Unlike static electrical products, AI systems learn and change behavior over time, requiring new approaches to ongoing safety validation.
FAQ
What does UL 3115 certification test in AI systems?
UL 3115 evaluates AI model reliability, data governance practices, output validation, deployment monitoring capabilities, and incident response procedures. The standard provides a structured framework for testing AI products both before and during deployment.
Which companies need UL 3115 certification?
While not currently mandated, the standard targets enterprises deploying AI in production environments, particularly in regulated industries like healthcare and finance. Early adoption may be driven by compliance requirements and risk management needs.
How does AI safety certification differ from traditional UL testing?
Unlike electrical products that remain static after manufacturing, AI systems continuously learn and evolve. UL 3115 addresses this by requiring ongoing monitoring and validation rather than one-time testing, representing a significant departure from traditional certification approaches.
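The shift from one-time testing to continuous validation can be pictured as a drift check: compare a model's rolling accuracy in production against its certified baseline and flag it for re-testing when performance degrades. The thresholds and class below are illustrative assumptions, not UL 3115 requirements:

```python
from collections import deque

class DriftMonitor:
    """Flags when rolling accuracy falls below a certified baseline.
    (Illustrative sketch; UL 3115 does not prescribe this mechanism.)"""
    def __init__(self, baseline_accuracy, tolerance=0.05, window=100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.results = deque(maxlen=window)  # rolling window of pass/fail outcomes

    def record(self, correct):
        self.results.append(bool(correct))

    def rolling_accuracy(self):
        if not self.results:
            return None
        return sum(self.results) / len(self.results)

    def needs_revalidation(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance

# A model certified at 95% accuracy whose live accuracy drops to 80%
monitor = DriftMonitor(baseline_accuracy=0.95)
for correct in [True] * 80 + [False] * 20:
    monitor.record(correct)
print(monitor.needs_revalidation())  # True -- time to re-test the system
```

A static product would never trip such a check; an AI system that retrains or encounters shifting inputs can, which is why ongoing monitoring is part of the certification rather than an afterthought.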
Sources
- Rebuilding the data stack for AI – MIT Technology Review
- That UL safety logo is a lot more complicated than it looks – The Verge
- How a new Amazon-backed Hollywood production startup deploys AI for speed and cost-cutting – CNBC Tech