
Six incidents. Five years. One pattern.

Mati Melchior

Six publicly-reported robot incidents from the last five years. Different companies. Different sectors. Different countries. From factory floors to living rooms to public roads.

I've read the reports and cross-checked them against court filings, regulatory disclosures, and trade-press coverage. Every one of these is on the record.

What they share is not a single company, a single failure mode, or a single sector. What they share is architectural.

The six incidents

2021 · Tesla Giga Texas

During commissioning work at the Tesla Gigafactory in Austin, an industrial robot arm pinned an engineer against a surface, leaving an open wound. Two nearby robots had been shut down for the maintenance procedure; a third had not. The incident was not publicly reported until December 2023, more than two years after the event, when it surfaced in OSHA filings and subsequent Associated Press coverage.

2023 · Tesla Fremont, California

A technician performing disassembly work was struck by a Fanuc industrial robot arm and knocked unconscious. The incident resulted in a $51 million lawsuit filed in 2024, and the public court filings are the primary source. The specific failure mode reported in the coverage was that the arm had not been effectively locked out while the work was performed on it.

2023 · Goseong, South Korea

In a vegetable-packaging plant in Goseong County, a worker was fatally crushed when a robot arm, apparently treating him as an item to be packed, grabbed him and pinned him against a conveyor belt. Manufacturing Dive and multiple international outlets covered the incident. This one is not a near-miss. It is a fatality.

2024 · Ecovacs Deebot X2 · multiple US cities

Security researchers demonstrated — and then criminal actors replicated — a remote takeover of consumer robot vacuums. Attackers accessed onboard cameras, chased pets around victims' homes, and yelled slurs through the device speakers. Malwarebytes and ABC News documented the exploit chain in detail. The affected model was the Ecovacs Deebot X2.

2024 · OSHA study published · US workplace robotics

A peer-reviewed 2024 study analysing OSHA's Severe Injury Reports database identified 77 robot-related workplace accidents across 2015–2022, producing 93 injuries. Roughly sixty percent of those incidents involved unexpected activation: the robot moved when the humans around it believed it was safe. That's not a human-in-the-work-envelope failure. That's the machine doing what it wasn't asked to do.

2026 · Tesla Austin, Texas

The autonomous robotaxi fleet operating in Austin has accumulated 14 publicly-reported crashes in the first 8 months of deployment, with injuries and at least one hospitalization reported through the public incident channels that exist today. Coverage is ongoing and the regulatory response is still developing.

The common thread

The surface-level causes look different. A lockout procedure missed. An arm that wasn't shut down. A robot that pattern-matched a human as an item to be packed. A remote-code-execution chain in a consumer device. Unexpected activation in an industrial cell. A perception stack that didn't recognise a scenario on a public road.

At a slightly deeper level, though, every one of these incidents has the same architectural signature.

In each case, the safety behaviour of the machine — the thing that was supposed to prevent the outcome — was implemented as a layer on top of the machine's normal operating stack. Not as an independent system running on independent compute with an independent view of the physical world. Not as a hardware-enforced constraint that would remain true even if every other subsystem was compromised, confused, or switched off.

A layer. An add-on. Something that, when the main stack misbehaved or saw something it didn't understand, went down with it — or was simply not present when the main stack needed to be overruled.

This is what I mean when I say safety was treated as a layer added at the end, rather than a constraint designed in from the start. It's a pattern the older safety-critical industries learned to avoid a long time ago.
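
To make the anti-pattern concrete, here is a minimal sketch in C. It is hypothetical code, not taken from any of the vendors above: a control loop in which the "safety layer" is just a speed clamp inside the operational stack, sharing its CPU, its process, and its world model.

```c
/* Hypothetical sketch of the anti-pattern: the "safety layer" is a check
 * inside the operational stack itself. It shares the planner's CPU, its
 * process, and its world model, so every failure of the main stack is also
 * a failure of the safety check. All names are invented for illustration. */
#include <stdbool.h>

#define SAFE_SPEED 0.25 /* m/s, reduced speed near humans */

typedef struct { bool human_detected; } WorldModel;
typedef struct { double speed; } Motion;

/* Stubs standing in for the vendor's real stack. */
static WorldModel read_shared_perception(void) { return (WorldModel){ false }; }
static Motion plan_next_motion(const WorldModel *w) { (void)w; return (Motion){ 1.0 }; }
static void execute(const Motion *m) { (void)m; }

/* Task entry point for the combined operational-plus-"safety" loop. */
void control_loop(void) {
    for (;;) {
        WorldModel w = read_shared_perception(); /* same cameras as the planner */
        Motion m = plan_next_motion(&w);         /* if this hangs, nothing below runs */

        /* The "safety layer": a clamp bolted on after planning, trusting the
         * very world model whose mistakes it is supposed to catch. */
        if (w.human_detected && m.speed > SAFE_SPEED)
            m.speed = SAFE_SPEED;

        execute(&m);
    }
}
```

If `read_shared_perception()` mislabels a person, the clamp never fires; if `plan_next_motion()` deadlocks, it never runs at all. That shared fate is the architectural signature described above.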

What "safety by design" actually means

"Safety by design" has become a marketing phrase. It's worth going back to what the engineering community actually meant by it before it became one.

The principle comes from functional safety — the body of standards developed for chemical, nuclear, aviation, and rail systems over the last forty years. The core idea is that for a safety guarantee to be meaningful, the thing providing the guarantee must be independent of the thing being guaranteed. Different compute. Different power supply. Different sensor inputs where possible. Different people designing each side.

That independence is what gives a safety claim its weight. Without it, the safety layer is only as reliable as the operational layer it sits on top of — which is exactly the failure mode we see in the six incidents above.

In robotics, that translates into concrete architectural choices. The safety system should not share a CPU with the motion-planning stack. The safety system should not depend solely on the same cameras the perception stack is using. The safety system should have its own, simpler, independently-verifiable view of the world — usually from a separate set of sensors — and its own way to bring the machine to a safe state without asking permission from the main stack.

If the main stack has crashed, got confused, or been compromised, the safety system should still do its job.
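
To show what the independent side can look like, here is a small host-side simulation of the logic a separate safety MCU might run. It assumes its own sensor input (a light-curtain contact wired straight to a pin), a heartbeat line from the main computer, and a hardware contactor in the motor power path. All names are illustrative, and a production design would use dual-channel, safety-rated hardware assessed against ISO 13849 or IEC 62061.

```c
/* Minimal sketch of the independent safety side, simulated on a host so it
 * runs end-to-end. The shims stand in for real hardware on a separate MCU:
 * a dedicated sensor input, a heartbeat line from the main computer, and a
 * contactor in series with motor power. Names are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HEARTBEAT_TIMEOUT_MS 50 /* main stack must pulse within this window */

static uint32_t clock_ms = 0; /* simulated time */

static bool light_curtain_clear(void) { return true; }            /* own sensor, not the main cameras */
static bool heartbeat_pulse_seen(void) { return clock_ms < 200; } /* simulate: main stack dies at t=200 */
static void motor_contactor(bool closed) {
    printf("t=%3u ms  contactor %s\n", (unsigned)clock_ms, closed ? "CLOSED" : "OPEN");
}

int main(void) {
    uint32_t last_heartbeat = 0;
    for (; clock_ms <= 300; clock_ms += 25) {
        if (heartbeat_pulse_seen())
            last_heartbeat = clock_ms;

        bool main_stack_alive = (clock_ms - last_heartbeat) < HEARTBEAT_TIMEOUT_MS;

        /* The safe state needs no cooperation from the main stack: losing the
         * heartbeat or breaking the light curtain opens the contactor. */
        motor_contactor(main_stack_alive && light_curtain_clear());
    }
    return 0;
}
```

Note the direction of the logic: the contactor stays closed only while everything checks out, so a dead safety controller, a cut wire, or a stalled main computer all land in the same de-energised state.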

What this means for operators

If you're operating robotic equipment today, the practical questions to ask the vendor are narrower than the marketing brochures would suggest.

  • What's the independent safety system on this machine? Not what software can detect a fault — what independent hardware brings the machine to a safe state when the main stack is wrong?
  • What is the safety certification based on? Is it a claim about the final integrated system, or is it a claim about a subsystem that the integrator is expected to make safe?
  • What's the evidence that the safety layer is actually independent of the operational layer? Separate compute? Separate power? Separate sensors? If the answer is "we run redundant processes on the same CPU", that is not independence in any useful sense.
  • What does the machine do when the main stack is not healthy? Specifically, what is the deterministic failure mode? "It logs an error and waits for reset" is not a safety answer; a sketch of what one looks like follows below.

These are the questions the sixty-percent-unexpected-activation statistic is quietly telling us to ask.
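
On that last question, here is what a deterministic answer can look like, sketched as a tiny state machine in C. The states loosely follow the stop categories of IEC 60204-1; the enum names and the health signal are illustrative, not any vendor's API.

```c
/* A deterministic failure policy as a pure-function state machine. Losing
 * main-stack health always leads, in bounded steps, to a latched safe state
 * (a category-1 stop in IEC 60204-1 terms: decelerate, then cut torque). */
#include <stdbool.h>
#include <stdio.h>

typedef enum { RUN, CONTROLLED_STOP, SAFE_TORQUE_OFF } SafetyState;

static SafetyState next_state(SafetyState s, bool main_stack_healthy, bool at_standstill) {
    switch (s) {
    case RUN:
        /* Any loss of health leaves RUN immediately; no "log and wait". */
        return main_stack_healthy ? RUN : CONTROLLED_STOP;
    case CONTROLLED_STOP:
        /* Decelerate under control, then remove torque. */
        return at_standstill ? SAFE_TORQUE_OFF : CONTROLLED_STOP;
    case SAFE_TORQUE_OFF:
        /* Latched: only a deliberate, human-initiated reset leaves this state. */
        return SAFE_TORQUE_OFF;
    }
    return SAFE_TORQUE_OFF; /* unreachable, but fail toward the safe state */
}

int main(void) {
    SafetyState s = RUN;
    s = next_state(s, /*healthy=*/false, /*standstill=*/false); /* -> CONTROLLED_STOP */
    s = next_state(s, false, /*standstill=*/true);              /* -> SAFE_TORQUE_OFF */
    printf("final state: %s\n", s == SAFE_TORQUE_OFF ? "SAFE_TORQUE_OFF" : "?");
    return 0;
}
```

The point is that the transitions are total and latched: any loss of health has exactly one destination, and nothing short of a deliberate human reset brings the machine back.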

The root cause

Every incident above shares one root cause — no hardware-enforced safety layer between the AI and the physical world.

None of the six incidents needed an EU regulation to be preventable. All of them could have been prevented by an architectural choice that the older safety-critical industries settled on decades ago. But on 20 January 2027, when the EU Machinery Regulation applies in full, that architectural choice moves from "good engineering" to "prerequisite for market access."

That's the conversation the industry needs to have in public now, not in January 2027.
