Zero‑trust, infinite reasoning: Securing AI‑native physical security operations
Physical security is no longer just about locked doors and patrolling guards. With AI-driven cameras, patrol robots, and autonomous access control, the modern perimeter now mirrors the complexity and risk profile of the data centre. To achieve true end-to-end resilience, organisations must apply zero-trust principles ("never trust, always verify") across every layer, from silicon to security-guard contracts. This approach demands a clear span-of-responsibility model that aligns with legal, HR, and union requirements, ensuring both compliance and operational excellence.
"We’re seeing firsthand how AI is already leapfrogging our industry through reduced response times and greater consistency across thousands of sites. But autonomy without oversight is not resilience. The future of security is agentic, but governed, human-connected, and always auditable."
- Martin Althen, President, Securitas Digital
From perimeter to premises
The line between cyber and physical domains is gone.
Consider this example: an employee taps a smart card; an edge-compute camera performs AI-powered face recognition; a software-defined lock opens, seconds before reception even registers the arrival.
This means AI-based cyber threats now extend into the physical world. Deep-fake credentials, prompt-injection against computer-vision models, and API scraping of badge readers already feature in ENISA’s 2024 Threat Landscape [1], which warns that access-control stacks now face the same cyber security threats as cloud applications.
The zero-trust principle (verify everything, trust nothing) is now just as relevant to doors as to data centres. Every badge swipe, iris scan, or patrol-bot ping is a cryptographically signed transaction inside your zero-trust architecture.
The legacy perimeter model falters
This shift unlocks new capabilities. AI-powered systems now detect threats faster [2], reduce false alarms, and free up human guards for more critical tasks. But they also change the threat landscape. As AI becomes the first line of defence, the assumptions behind legacy physical security [3] - air gaps, static credentials, manual intervention - no longer hold.
The table below shows where traditional thinking must evolve to match the speed, complexity, and accountability demands of AI-native environments.
Two design principles for AI‑era premises
These shifts demand a new design philosophy. As physical infrastructure becomes intelligent, connected, and increasingly autonomous, the way we secure it must reflect the same architectural principles that underpin secure digital systems. From smart locks to patrol bots [4], we’re no longer protecting fixed assets; we’re governing dynamic agents.
Asset equals a micro‑service. Treat every lock, camera, and patrol robot as a mutually authenticated service with mTLS, signed firmware, and continuous identity attestation.
Action equals accountability. Tie every automated decision to a named owner. If a door unlocks, audit trails must map to a Data Controller under GDPR Article 32 and satisfy the incident‑handling duties of the NIS2 Directive.
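The first principle (every asset as a mutually authenticated service with signed firmware) can be sketched in a few lines. This is an illustrative sketch only: the key, device name, and function names are hypothetical, and a real deployment would anchor the key in a hardware root of trust and use asymmetric signatures rather than a shared HMAC key.

```python
import hashlib
import hmac

# Hypothetical attestation key; in practice this would live in a
# hardware root of trust (TPM / secure element), never in source code.
ATTESTATION_KEY = b"demo-root-of-trust-key"

def sign_firmware(firmware: bytes) -> str:
    """Produce a keyed tag over a firmware image (stand-in for a signature)."""
    return hmac.new(ATTESTATION_KEY, firmware, hashlib.sha256).hexdigest()

def attest_device(firmware: bytes, reported_tag: str) -> bool:
    """Continuous identity attestation: re-verify the signed firmware
    before the device is allowed back onto the control plane."""
    expected = sign_firmware(firmware)
    return hmac.compare_digest(expected, reported_tag)

image = b"lock-controller-v2.4.1"   # hypothetical firmware blob
tag = sign_firmware(image)
assert attest_device(image, tag)            # untampered device passes
assert not attest_device(b"tampered", tag)  # modified firmware fails
```

Because attestation is re-run continuously, a device that fails the check is dropped from the trust fabric immediately rather than at the next maintenance window.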
By collapsing the gap between cyber and on‑site security, organisations gain a single line of sight, and a single line of accountability, from silicon to works‑council agreements. The next section dissects how an attacker weaponises AI to pierce that new perimeter.
Anatomy of an AI‑native physical breach
AI has elevated physical security: false alarms are down, response times are faster, and intelligent systems now cover more ground than ever. But when AI becomes the first line of decision-making, the nature of risk shifts from technical failure to architectural oversight. The challenge is not whether AI works, but whether it’s governed with the right loop of human judgment and escalation.
Consider a standard scenario: a facility deploys an AI-enabled camera to detect unauthorized equipment, supported by a patrol robot and an autonomous access gate. When the system classifies the environment as safe and verifies credentials, the gate opens, no manual intervention required.
Now imagine a breach. An attacker presents a phone screen displaying an adversarial image designed to mislead the vision model. The AI misclassifies it as harmless, and the gate opens. No malware. No insider threat. Just a model fooled at the edge.
This is where governed human oversight becomes critical. In high-risk environments, systems are designed with both human-in-the-loop (HITL) safeguards, requiring operator confirmation before gate activation, and human-on-the-loop (HOTL) oversight, where anomalies flagged by AI are escalated to live security analysts via command centers.
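The HITL/HOTL loop described above can be expressed as a small decision function. A minimal sketch, assuming a hypothetical classification object and a confidence threshold; the threshold value, labels, and callback names are illustrative, not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Classification:
    label: str          # e.g. "safe" or "unauthorized_equipment"
    confidence: float   # model confidence in [0, 1]

def gate_decision(cls: Classification,
                  credentials_valid: bool,
                  operator_confirms: Callable[[], bool],
                  escalate: Callable[[str], None],
                  hitl_threshold: float = 0.95) -> bool:
    """Open the gate only when AI, credentials, and (if needed) a human agree.

    - Any non-safe classification is escalated to analysts (HOTL).
    - Below the confidence threshold, an operator must confirm (HITL).
    """
    if not credentials_valid:
        return False
    if cls.label != "safe":
        escalate(f"anomaly: {cls.label} ({cls.confidence:.2f})")
        return False
    if cls.confidence < hitl_threshold:
        return operator_confirms()   # HITL: human has the final word
    return True                      # high-confidence safe path

alerts = []
# Adversarial image: model says "safe", but with suspiciously low confidence,
# so the gate stays shut until an operator confirms.
assert gate_decision(Classification("safe", 0.62), True,
                     operator_confirms=lambda: False,
                     escalate=alerts.append) is False
```

The design choice worth noting: low confidence degrades to a human decision rather than to a default-open or default-closed rule, which is what "the loop is designed, not assumed" means in practice.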
Securitas emphasizes that while AI enhances capabilities, human judgment remains irreplaceable [5]. Their approach ensures that AI serves as a tool to augment human decision-making, not replace it. By integrating AI with human oversight, Securitas aims to provide a holistic and effective security solution.
This hybrid model ensures that AI acts fast, humans act wisely, and the loop between them is designed, not assumed.
"In a world where AI now decides who enters the building, physical security must be treated with the same architectural discipline as the cloud. That means zero-trust by design, continuous verification, and accountable automation—down to every sensor and policy decision."
- Jens Eriksvik, CEO, Algorithma
Defining the span of responsibility
In digital systems, the concept of a “logged action” is familiar, every API call leaves a trail. In AI-native physical environments, we must treat every sensor trigger, agent decision, and human override the same way: as traceable, attributable, and accountable.
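Treating every sensor trigger, agent decision, and human override as traceable and attributable can be sketched as an append-only log where each entry names both the actor and the accountable owner, and chains to the previous entry. The field names and chaining scheme here are hypothetical; production systems would use a proper signed audit store.

```python
import hashlib
import json
import time

def record_action(log: list, actor: str, owner: str,
                  action: str, detail: dict) -> dict:
    """Append a traceable, attributable record; each entry references the
    hash of the previous one so after-the-fact tampering is detectable."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "ts": time.time(),
        "actor": actor,    # the device or person that acted
        "owner": owner,    # the named role accountable for the action
        "action": action,
        "detail": detail,
        "prev": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

log = []
record_action(log, "camera-07", "site-security-lead", "sensor_trigger",
              {"event": "unauthorized_equipment"})
record_action(log, "operator-jane", "soc-shift-manager", "human_override",
              {"decision": "hold_gate"})
assert log[1]["prev"] == log[0]["hash"]   # entries are chained
```

Separating `actor` from `owner` is the point: the camera fired the trigger, but a named human role answers for it.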
This is where Algorithma’s “span of responsibility” model applies directly [6]. As we argued in an earlier article, the key to AI governance isn’t just visibility or control; it’s ownership. Without clear role mapping, the system works until it doesn’t, and no one knows who’s responsible.
To meet the standards of GDPR Article 32, workplace safety law, and collective-bargaining agreements, each AI-enabled action should map to a named person or role:
Who signed off on the model that opens the gate?
Who is accountable if it misclassifies an intruder?
Who reviews escalations when the system overrides a human?
This is not theoretical. Securitas, for instance, already integrates structured escalation paths and human validation points into its remote services architecture [7], delivered by its local and global SOCs. Its workflows combine HITL safeguards with live operator teams who act on-the-loop, monitoring autonomous systems and intervening when thresholds are crossed.
To operationalise this, we introduce an AI-on-prem RACI matrix, assigning roles for:
Access decisions
Sensor calibration and override
Policy tuning and exception handling
Model retraining and audit readiness
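Such a RACI matrix is ultimately just structured data that access and audit tooling can query. A minimal sketch, with hypothetical role names, covering the four activities listed above.

```python
# Hypothetical AI-on-prem RACI matrix: for each activity, who is
# Responsible, Accountable, Consulted, and Informed.
RACI = {
    "access_decisions":   {"R": "soc_operator",      "A": "security_director",
                           "C": "legal",             "I": "hr"},
    "sensor_calibration": {"R": "field_tech",        "A": "site_manager",
                           "C": "vendor",            "I": "soc_operator"},
    "policy_tuning":      {"R": "security_engineer", "A": "ciso",
                           "C": "works_council",     "I": "site_manager"},
    "model_retraining":   {"R": "ml_engineer",       "A": "data_controller",
                           "C": "dpo",               "I": "audit"},
}

def accountable_for(activity: str) -> str:
    """Return the single named role answerable for an activity."""
    return RACI[activity]["A"]

assert accountable_for("access_decisions") == "security_director"
```

Keeping exactly one "A" per activity is what makes incident evidence legally usable: there is never ambiguity about who took responsibility.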
When zero-trust controls are combined with this matrix, every incident generates legally relevant evidence: who made the decision, who had oversight, and who took responsibility. These outputs feed directly into HR and legal playbooks [8, 9], enabling faster internal investigations, clearer union communication, and improved regulatory posture.
The new control plane for guards and gates
As AI-native infrastructure matures, physical security moves beyond static automation. The modern premises stack now operates as a distributed control plane, one where doors, patrol bots, identity systems, and edge sensors collaborate in real time, under continuous verification.
At the core are three capabilities:
Continuous identity attestation for all devices, not just users.
Policy-as-code execution for gates, drones, and patrol robots.
Runtime trust scores, computed from real-world conditions, for autonomous decision-making at the edge.
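The second and third capabilities can be combined into a single sketch: a runtime trust score computed from real-world signals, evaluated by a policy function at the edge. The signal names, weights, and thresholds below are illustrative assumptions, not a standard.

```python
def trust_score(signals: dict) -> float:
    """Runtime trust score from real-world conditions (illustrative weights)."""
    score = 1.0
    if not signals.get("firmware_attested", False):
        score -= 0.5                      # unattested device: heavy penalty
    if signals.get("local_threat_level", 0) > 2:
        score -= 0.3                      # e.g. an OSINT alert tier
    score -= 0.2 * signals.get("anomaly_flags", 0)
    return max(score, 0.0)

def evaluate_policy(device: str, signals: dict, threshold: float = 0.7) -> str:
    """Policy-as-code: allow, step up verification, or deny at the edge."""
    s = trust_score(signals)
    if s >= threshold:
        return "allow"
    if s >= 0.4:
        return "step_up"   # require HITL confirmation before acting
    return "deny"

assert evaluate_policy("gate-01", {"firmware_attested": True}) == "allow"
assert evaluate_policy("gate-01", {"firmware_attested": True,
                                   "anomaly_flags": 2}) == "step_up"
```

Expressing policy as versioned code, rather than device configuration, is what lets the same rules be tested, audited, and rolled back like any other software artifact.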
The defining traits of modern physical security are predictive analytics, anomaly detection, and edge inference [10]. Together, they shift operations from rule-based control to adaptive, context-aware enforcement [11].
But it’s not just about hardware logic. Increasingly, the control plane ingests external intelligence alongside internal telemetry, a blend often called Operational Intelligence (OI). Securitas Risk Intelligence [13], for instance, fuses OSINT and HUMINT to inject geopolitical, criminal, and environmental alerts directly into control APIs [14]. When conditions change, such as civil unrest or a spike in local break-ins, trust scores adapt and patrol routes reweight in real time. Securitas calls this intelligence-led security.
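The real-time reweighting of patrol routes can be illustrated with a small sketch. The route names, priority numbers, and the shape of the alert feed are all hypothetical; the point is only that an external alert changes ordering without any rule rewrite.

```python
def reweight_patrols(routes: dict, flagged_zones: list) -> list:
    """Reorder patrol routes when external intelligence raises local risk.

    `routes` maps route name -> base priority; `flagged_zones` lists zones
    currently flagged by the intelligence feed (hypothetical shape).
    """
    def effective_priority(item):
        name, base = item
        boost = 5 if name in flagged_zones else 0   # illustrative boost
        return -(base + boost)                      # higher priority first
    return [name for name, _ in sorted(routes.items(), key=effective_priority)]

routes = {"north_fence": 2, "loading_dock": 3, "lobby": 1}
# A spike in local break-ins flags the north-fence zone, which jumps the queue.
assert reweight_patrols(routes, ["north_fence"]) == [
    "north_fence", "loading_dock", "lobby"]
```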
This fusion enables a shift from detect-and-respond to anticipate-and-preempt. Instead of reacting to breaches, the system forecasts volatility, and reshapes posture accordingly.
In this architecture, guarding becomes a coordinated layer of services, where AI and human operators share state, context, and escalation logic. Gates no longer just open; they reason. Patrol bots don’t just roam; they reprioritise. The perimeter isn’t just monitored; it’s continuously redefined.
What the numbers say
AI is rapidly reshaping physical security. Adoption is accelerating, legacy assumptions are eroding, and the biggest gains appear when systems are continuously verified and mapped to clear lines of accountability.
The cyber-physical attack surface is now mainstream. 90% of chief security officers say cyber-originated threats already endanger their physical-security systems, and the same 90% expect AI to have the single biggest impact on physical-security operations in the next five years [15].
Cloud-class tactics are hitting doors and cameras. ENISA’s Threat Landscape 2024 lists prompt injection, deep-fake credentials, and API scraping as “high-likelihood, high-impact” risks [16].
Connectivity brings exposure. In Claroty’s 2024 global CPS survey of 1,100 security professionals, 45% said at least half of their cyber-physical assets are internet-connected, 82% endured at least one attack that began via third-party remote access, and 49% lost over 12 hours of operations to downtime [17].
AI slashes noise when models are hardened. Field data from Reconeyez and Actuate shows up to 95% false-alarm reduction after deploying AI video analytics on live sites [18, 19].
But readiness lags the threat. Only 1 in 5 firms feels “very well prepared” for AI-powered bot attacks, according to Arkose Labs’ 2024 survey of U.S. security teams.
Zero-trust is gaining ground, yet depth is shallow. Okta finds 61% of organisations now run a zero-trust programme [20], but Gartner projects that by 2026 only 10% will have a mature and measurable implementation, and warns that more than 50% of attacks will still target areas current zero-trust controls don’t cover [21].
Autonomous security agents are scaling fast. Markets & Markets forecasts the security-robot segment to reach USD 71.8 billion (≈ EUR 66 billion) by 2027, an 18% CAGR, driven by personnel shortages, zero-trust integration, and AI guardrails [22].
AI can amplify vigilance, cut through noise and tighten response windows, but only when speed, safety and span-of-responsibility advance together in a zero-trust, cyber-physical architecture.
One perimeter. One chain of custody.
As AI becomes the intelligence layer of our physical spaces, the challenge is no longer whether systems can be tricked, but whether they’re designed and governed to absorb the impact of increasingly sophisticated attacks.
The data underscores the urgency: cloud-class threats are now hitting physical systems, and legacy security models, built on air gaps and manual overrides, are no longer sufficient. Yet the same AI that introduces complexity also delivers capability: faster detection, fewer false alarms, more consistent enforcement. When paired with human oversight and clear lines of responsibility, these systems not only scale security, they raise the bar for compliance.
Moving forward requires a deliberate architectural shift:
Treat assets as services
Bind actions to accountability
Build zero-trust into the fabric of access itself
By operationalizing spans of responsibility and embedding escalation logic into every decision point, organisations can meet the demands of this new, converged environment, where the perimeter is dynamic, distributed, and intelligent.
The path to resilient, AI-native physical security starts by securing that single perimeter with a unified chain of custody.