Agentic AI... The Newest Attack Surface, and the Least Secured.

Jim Leone

11/19/20253 min read

Artificial intelligence has evolved faster in the past 24 months than most platforms have evolved in 20 years. But the biggest shift isn’t the models, it’s the agents. Agentic AI systems can already browse the web, write and execute code, deploy infrastructure, read and respond to emails, analyze logs, perform triage, interact with APIs, and take actions without waiting for a human.

In other words... We are entering a world where AI doesn’t just “assist”, it operates.

And that creates a completely new cybersecurity problem... AI agents now have privileges, identities, access, and decision-making power traditionally reserved for administrators.

I believe the question every SOC leader should be asking is--> Who is securing the machines that act like humans? Because right now, it's clear to me that the answer is... almost no one.

The Emerging Risks of Agentic AI That I Believe Security Teams May Be Underestimating...

Prompt Injection Becomes RCE for AI

When an agent reads emails, documents, logs, or webpages, a hidden instruction can hijack it...

  • Exfiltrate data

  • Alter firewall rules

  • Modify SOAR runbooks

  • Create new agents

  • Disable monitoring

We spent decades fighting code injection. Now we must fight instruction injection.

Uh Oh! AI-to-AI Lateral Movement

Organizations are chaining multiple agents together...

  • Agent A queries data →

  • Agent B triages →

  • Agent C remediates

Compromise one agent, and you compromise the entire chain. This would be machine-speed lateral movement.

Where's The Audit Trail For Accountability?

When an agent closes a ticket, blocks an IP, rotates a key, or provisions cloud resources, did it...

  • Log the action?

  • Explain why?

  • Use proper approvals?

Today, from what I've seen, most agent systems have zero auditability.

Data Poisoning That Leads to Operational Misfires

Feed an agent manipulated...

  • Threat intel

  • API responses

  • Logs

  • Metrics

  • Email content

…and it will take incorrect actions with absolute confidence.

This may be the first time in history where data integrity failures can cause automated operational actions.

Over-Privileged Agent Credentials

To “make the demo work,” early adopters grant agents...

  • Administrator API keys

  • Cross-account access

  • Broad IAM roles

  • Write privileges to production

Let's not forget that just one compromised agent = full compromise.

Multi-Agent Decision Loops

Agents now...

  • Write tasks

  • Assign tasks

  • Approve tasks

  • Execute tasks

These loops multiply risk. One manipulated decision spirals across the entire system.

This is the AI version of a supply-chain attack.

Today's Agents Lack Human Judgment

Humans hesitate before dangerous actions. Agents don’t. If the logic is wrong, an AI can...

  • Terminate production workloads

  • Delete data

  • Overwrite configurations

  • Trigger auto-remediation loops

And it can do it in milliseconds!

No (Accepted) Frameworks or Controls Exist Yet

Zero Trust was built for...

  • users

  • devices

  • networks

  • workloads

But not autonomous AI. We don’t have...

  • Agent identity

  • Agent least privilege

  • Agent accountability

  • Agent kill switches

  • Agent provenance

  • Agent certificates

  • Agent drift detection

I feel this may be the largest security gap of the early AI era.

Emerging Frameworks & Tools (A New Industry Is Forming)

The good news... security minds have started to address this gap. Here are the most notable early frameworks, research efforts, and toolsets.

BeyondTrust - Agentic AI Security Platform

One of the first commercial solutions explicitly targeting...

  • AI agent identity

  • Least privilege for AI

  • Zero standing access

  • Just-in-time permissions for autonomous systems

OWASP - State of Agentic AI Security & Governance (1.0)

A major step forward. Defines...

  • Threats

  • Governance models

  • Lifecycle controls

  • Agent oversight standards

This should become required reading for SOC and security architects.

AAGATE - Agentic AI Governance Assurance & Trust Engine (Research)

Defines a runtime governance layer for autonomous agents...

  • Policy engine

  • Behavior validation

  • Agent certificates

  • Identity delegation

  • “Trust transmission” between agents

I understand that this is academic now, but it foreshadows what enterprise controls will look like.

SAGA - Security Architecture for Governing Agentic Systems (Research)

Addresses...

  • Agent identity

  • Permissioning

  • Cryptographic signing

  • Delegation chains

  • Verifiable action records

We can think of this as “Zero Trust for Agentic AI.”

Microsoft, NIST, and ISO Early Work

All three are beginning to define...

  • AI safety baselines

  • Autonomous system standards

  • Inference integrity requirements

  • System-of-agents risk definitions

Nothing enterprise-ready yet, but coming fast.

What I Believe Agentic AI Security Must Look Like (The Future State)

Based on everything above, here’s the emerging consensus of what “secure agentic AI” will require...

Agent Identity (AID)

Agents must authenticate like services, certificates, tokens, rotation.

Least-Privilege Agents

Not “read/write all.” Scoped roles. Narrow tasks. Guardrails.

"Human-in-the-Loop" for High-Risk Actions

Agents should recommend... humans authorize.

Explainability Before Execution

Agents must justify actions... “Here is what I intend to do and why.”

Immutable Audit Logs

Every decision. Every action. Every input. Logged, timestamped, verifiable.

Agent Kill Switches

If an agent behaves unexpectedly --> immediate shutdown.

Continuous Behavior Monitoring

Anomaly detection for agents, not just networks.

This Matters! And Why I Feel We Must Act Now...

Agentic AI will soon -->

  • write code

  • deploy infrastructure

  • tune firewalls

  • respond to alerts

  • interact with customers

  • handle billing

  • manage accounts

This is the next digital workforce. But unlike humans, agents don’t sleep, don’t hesitate, don’t second-guess, and don’t question bad instructions. We must secure them before they scale, not after. Organizations that adopt agentic AI without governance are building the next generation of breaches into their foundations.