Peer-archived research

Seven Physical AI Safety papers, archived on Zenodo with persistent DOIs. The research spine for the imprint catalog.

Published

Each paper is freely available on Zenodo under a persistent DOI. Cite by DOI.


Physical AI Safety: A Definitional Framework

Physical AI is moving from research into industrial deployment at scale. The International Federation of Robotics reports more than four million industrial robots in operation globally, and recent national strategy documents identify Physical AI as a priority area for the coming decade. Three disciplinary traditions inform the safety of such systems: AI Safety, Functional Safety, and Robot Safety. None of them, individually or collectively, addresses the failure modes that arise when machine-learning components drive physical actuators with assurance levels comparable to legacy automation. This paper introduces Physical AI Safety as a distinct discipline. It is the discipline of designing,…

Published 2026-05-06

The Software Safety Ceiling: Why Physical AI Cannot Be Made Safe in Software Alone

Physical AI systems control actuators that can cause physical harm. Current safety practice in this domain leans heavily on software mechanisms: monitors, watchdogs, redundant channels, and formal methods. This paper argues that software-only safety has a fundamental architectural limit. The mechanism is fate-sharing: a software safety monitor and the components it monitors share a substrate — CPU, memory, operating system, firmware, power, clock, supply chain, and design assumptions. Failures in any shared resource can affect both the monitor and the monitored. The probability of correlated failure can be reduced by careful engineering, but cannot be eliminated by software design…
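The fate-sharing argument can be made concrete with a small Monte Carlo sketch. The probabilities below are illustrative placeholders, not figures from the paper: however reliable the monitor is on its own, undetected dangerous failures cannot fall below the rate at which the shared substrate takes out the monitor and the monitored system together.

```python
import random

def undetected_failure_rate(p_substrate: float, p_monitor: float,
                            p_system: float, trials: int = 100_000,
                            seed: int = 0) -> float:
    """Estimate the probability that a dangerous system failure goes
    undetected when a software monitor shares a substrate (CPU, OS,
    power, clock) with the system it watches. A substrate fault fails
    both at once, so the monitor's own reliability sets no floor.
    All probabilities are per-trial and purely illustrative."""
    rng = random.Random(seed)
    undetected = 0
    for _ in range(trials):
        substrate_down = rng.random() < p_substrate
        # Substrate failure forces a system failure; otherwise the
        # system fails independently with probability p_system.
        system_fails = substrate_down or rng.random() < p_system
        # The monitor only works if the substrate is up AND the
        # monitor itself has not failed independently.
        monitor_works = (not substrate_down) and rng.random() >= p_monitor
        if system_fails and not monitor_works:
            undetected += 1
    return undetected / trials

# Even a near-perfect monitor (p_monitor = 1e-6) cannot drive the
# undetected rate below the substrate's own failure probability.
print(undetected_failure_rate(p_substrate=0.01, p_monitor=1e-6,
                              p_system=0.001))
```

The result hovers near `p_substrate`: the shared substrate, not the monitor's quality, dominates the undetected-failure floor.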

Published 2026-05-06

Physical AI Safety Maturity Model (PAS-MM): A Five-Level Framework for Industry Readiness

Physical AI deployments — humanoid robots, autonomous mobile robots, surgical systems, autonomous vehicles — are scaling faster than the safety vocabulary used to describe them. Safety claims across organizations are not comparable. Adjacent fields solved this problem with maturity models: CMMI for software process, AI Safety Levels (ASL) for AI safety posture, ISO/IEC 33001 for process assessment, PCI DSS for payment-industry security. Physical AI lacks an equivalent. This paper proposes the Physical AI Safety Maturity Model (PAS-MM), a five-level classification system: Ad-hoc, Documented, Compliant, Certified, and Defense-in-depth. Levels are anchored to recognized…

Published 2026-05-06

Mapping the Physical AI Safety Gap: A Multi-Source Industry Survey

Robotics venture investment reached approximately US$14 billion in 2025, up from approximately US$7.8 billion in 2024. The International Federation of Robotics reports an installed base of 4.664 million industrial robots, with 542,000 new installations in 2024. Investment in the safety infrastructure that governs these systems is, by contrast, not separately tracked in any public funding or industry index. This paper maps the resulting Physical AI Safety Gap using public-source data and a hedged, multi-measure framework. The base survey covers 155 organizations: 123 Israeli companies drawn from the Israel Innovation Authority national strategy report of…

Published 2026-05

Common-Cause Failures in Physical AI: Estimating β-Coefficients for Redundant Safety Architectures

Physical AI systems commonly claim 'redundant' or 'dual-channel' safety architectures. Per IEC 61508-6 Annex D, the efficacy of redundancy depends on the β-coefficient: the fraction of channel failures that are common-cause. A redundancy claim without β disclosure is therefore unverifiable. This paper presents a methodology for estimating β from publicly available architecture information, applied to five anonymized Physical AI architectures and a wider survey of approximately 30 cases. Most claimed-redundant architectures show estimated β > 5%, with software-only configurations approaching 100% for operating-system-level common-cause failures. We provide a 12-question evaluator's checklist…
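The role of β can be illustrated with a simplified beta-factor calculation, after the general shape of the IEC 61508-6 models. The formula and the numbers below are an illustrative sketch, not the paper's own methodology: a β fraction of dangerous failures defeats both channels at once, while only the remaining (1 − β) fraction benefits from redundancy.

```python
def pfd_1oo2(lambda_d: float, beta: float, proof_test_interval_h: float) -> float:
    """Simplified average probability of failure on demand for a
    1oo2 (dual-channel) architecture under the beta-factor model
    (repair time neglected; illustrative, not a certified formula).
    - Independent term: both channels must fail separately within
      the proof-test interval.
    - Common-cause term: a beta fraction of failures disables both
      channels simultaneously, behaving like a single channel."""
    lam_ind = (1.0 - beta) * lambda_d   # independent dangerous failures
    lam_ccf = beta * lambda_d           # common-cause dangerous failures
    t = proof_test_interval_h
    independent = (lam_ind * t) ** 2 / 3.0
    common_cause = lam_ccf * t / 2.0
    return independent + common_cause

# Same hardware (lambda_d = 1e-6 /h, yearly proof test), different beta:
for beta in (0.01, 0.05, 1.0):  # diverse hardware, typical, software-only CCF
    print(f"beta = {beta:.2f}: PFD ~ {pfd_1oo2(1e-6, beta, 8760):.1e}")
```

At β near 1 (the software-only, shared-OS case the paper describes) the common-cause term dominates and the second channel adds almost nothing, which is why a redundancy claim without a β disclosure is unverifiable.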

Published 2026-05-15

Regulatory Convergence in Physical AI: A Comparative Survey of the European Union, United States, Japan, and Israel (2025-2027)

Regulatory regimes for Physical AI are emerging across major markets in the period 2025-2027. This paper surveys four jurisdictions (the European Union, the United States, Japan, and Israel) and identifies a convergence pattern. Despite different legal traditions, four common requirements appear in each jurisdiction: third-party-certifiable hardware safety mechanisms, AI-specific risk assessment, traceability of decision-making, and incident reporting. The window opens with two EU regulations: Regulation (EU) 2023/1230 (the Machinery Regulation, effective January 20, 2027) and the EU AI Act (Regulation 2024/1689). The United States layer is sectoral, anchored on the NIST AI Risk…

Published 2026-05-15

Towards a Physical AI Safety Certification Framework: A Synthesis and Proposal

The deployment of artificial intelligence in physical systems — robots, autonomous vehicles, surgical platforms, and other actuator-driven machines — has outpaced the safety standards meant to govern it. This paper synthesizes seven prior contributions into a proposed certification framework for Physical AI: the Physical AI Safety Certification Framework, or PAS-CF. The argument proceeds from convergence. Three factors point to the same gap. Regulatory pressure is the first: the EU Machinery Regulation, the EU AI Act, and parallel work in other jurisdictions. The second is technical: common-cause failure analysis in machine-learning-bearing safety channels. The third is…

Published 2026-05-06
