The Gap in Physical AI
There's a structural gap in Physical AI that almost no one is addressing. AI gets smarter every quarter. Robot hardware gets cheaper every year. Safety infrastructure has barely moved since the 1990s.
That gap has a deadline, and it's closer than most operators realise.
The gap
The conversation around Physical AI right now is dominated by capability. Vision-language-action models. Humanoid platforms. Real-time motion planning that would have been science fiction a decade ago. The progress is real and the pace is genuine.
Underneath that conversation, three curves are moving at very different speeds:
- Robot intelligence — the ability of a machine to perceive, reason, and act — is compounding on the same trajectory as general AI. It improves with every model release.
- Robot hardware — actuators, sensors, compute, batteries — is getting cheaper, smaller, and more capable on a curve that looks a lot like consumer electronics.
- Robot safety infrastructure — the layer of systems, standards, and independent hardware that is supposed to guarantee a machine won't hurt the person next to it — has barely changed since the 1990s.
That third curve is the one the industry has stopped talking about. And it's the one that's going to matter most in the next eighteen months.
Why 20 January 2027 matters
On 20 January 2027, the EU Machinery Regulation 2023/1230 applies in full across the European Union. It replaces the 2006 Machinery Directive (2006/42/EC) and, for the first time, brings AI-enabled safety functions explicitly inside the regulatory perimeter.
A few things change in a concrete way on that date. Machines placed on the EU market must meet the new essential health and safety requirements. The list of "high-risk" machinery is expanded. Self-evolving behaviour — software that changes after deployment — requires a conformity reassessment, not a version bump. And the regulation applies to anyone placing machinery on the EU market, including importers.
This is not a voluntary framework. Non-compliant machinery cannot legally be placed on the market. The teeth are there.
Most of the Physical AI industry is not ready for it. Talk to any safety integrator and the same pattern emerges: the technology is ahead, the documentation is behind, and the hardware safety layer that would actually make certification achievable does not exist as a standard component.
What "software-only safety" misses
The default posture across the industry today is what you could call software-only safety: the safety logic lives in the same compute stack as the perception and planning logic. It runs on the same CPU. It trusts the same inputs. It fails the same way when those inputs are wrong.
That approach has a ceiling, and the ceiling was mapped out decades ago by IEC 61508 and the industries that built safety engineering into their DNA: aviation, nuclear, rail. The short version: if you want a high-integrity guarantee that something dangerous will not happen, the thing providing the guarantee has to be independent of the thing being guaranteed. Independent compute. Independent power. Independent reasoning. That is what hardware fault tolerance and diverse redundancy mean in the standard, and neither is optional at the higher safety integrity levels.
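The independence principle is easier to see in code than in prose. Here is a minimal, illustrative sketch of the pattern the standard points at: a watchdog that keeps its own clock and trusts nothing from the controller except the arrival of a heartbeat. All names here are hypothetical, and the separation is only simulated in one process — in a real system the watchdog would run on separate hardware with its own power supply.

```python
import time

class MainController:
    """Stands in for the perception/planning stack. If this loop hangs
    or crashes, anything running inside it hangs too — including any
    'safety' check that lives in the same process."""
    def __init__(self):
        self.healthy = True

    def step(self):
        # Perception, planning, and actuation would happen here.
        return self.healthy  # heartbeat: True while the loop is alive

class SafetyWatchdog:
    """Independent monitor. It asks exactly one question — did a
    heartbeat arrive recently? — using its own clock. It never reads
    the controller's sensors, state, or reasoning."""
    def __init__(self, timeout_s: float, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock
        self.last_beat = clock()
        self.estop_tripped = False

    def heartbeat(self):
        self.last_beat = self.clock()

    def check(self) -> bool:
        # Silence for longer than the timeout trips the e-stop.
        # On real hardware this is where motor power would be dropped.
        if self.clock() - self.last_beat > self.timeout_s:
            self.estop_tripped = True
        return not self.estop_tripped
```

The point of the sketch is what the watchdog does *not* do: it runs no model, parses no camera frames, and makes no judgment about whether the robot's plan is sensible. Its guarantee survives precisely because it shares nothing with the stack it supervises.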
Robotics has mostly ignored this. Not because the engineers don't know it — many of them came from those older industries — but because the commercial pressure to ship is immense and because, so far, the regulatory pressure has been lower than the commercial pressure.
That changes on 20 January 2027. At that point, the regulatory pressure catches up. Machines that cannot produce a coherent safety case lose access to one of the largest robotics markets in the world.
Why I'm writing this
I spent the last twelve months reading the standards, the incident reports, the regulatory text, and the academic literature behind this gap. I looked at where the money is going and where it isn't. I looked at the ecosystem of robotics companies being funded in Israel, in Europe, in the US. The consistent thread across all of that reading is that very few of the people actually building the machines are thinking about the safety layer as a first-class engineering problem. It gets added at the end, or it gets outsourced, or it gets left for the integrator to sort out.
That is a systems problem. It's not going to be solved by any one company or any one paper. But it is going to get solved, because the deadline is real and the economic consequence of missing it is significant.
Over the next six months I'll be writing one analysis post every week on this gap. Each post is short, sourced, and focused on one concrete aspect of the problem. No sales pitch. No brand. Just the work of someone who thinks this matters.
If it's your problem too, follow along.