Skip to main content

Agentic AI security: What business leaders can’t afford to ignore

Agentic AI security

Agentic AI is here to stay. It doesn’t matter whether you’re just experimenting with simple AI assistants and chatbots or already have autonomous agents with privileged access running in production. The time to start securing them is now.”

With those words, CyberArk CEO Matt Cohen set the tone for “Securing the New Frontier of Agentic AI: The Identity Security Imperative for AI Agents.” Thousands of industry professionals joined the virtual event to explore the rise of autonomous AI agents, one of today’s most significant shifts in enterprise technology.

While these AI systems are already reshaping how work gets done—streamlining workflows, accelerating decisions, and amplifying efficiencies—they’re also architecting an unprecedented attack surface within enterprises.

And across every session, one message was crystal clear: AI agents are a new class of identity, and securing them demands a new approach.

Agentic AI security

Agentic AI moves from concept to practice

This vision is rapidly becoming a reality. As Cohen went on to note, “We’re at the cusp of an agentic AI revolution.”

Organizations across industries are now embedding AI agents into their daily workflows, and as a result, accelerating transformation and decision-making at scale.

In fact, recent survey data from more than 100 financial and software security leaders indicates that nearly 40% of enterprises have already deployed AI agents, with this number expected to almost double in three years.

That’s no surprise, considering how tangible the returns already are:

  • A global bank cut its legacy-system modernization time by 50%.
  • A grocery retailer saw a 10% revenue lift through smarter recommendations.
  • A retail bank boosted analyst productivity by up to 60% after automating credit-risk memos.

But even as agentic AI continues to deliver measurable value, CISOs are faced with harder questions. As Cohen noted, “Security leaders want visibility into what agents exist, how they’re accessing data, and the ability to shut them down if something goes wrong.”

Risk levels are unlike anything we’ve seen before

While innovation races on, new risk classes are entering the mix, and teams can’t afford to stay stagnant.

Venu Shastri, CyberArk’s Senior Director of Product Marketing for Platform and AI Solutions, framed it like this: Agentic AI is a new identity class that operates with reason and autonomy. They’re non-deterministic systems, so traditional safeguards like static permissions and manual reviews simply can’t keep pace.

And security teams are taking notice. Two-thirds of CISOs rank agentic AI among their top three cyber risks, ahead of ransomware and insider threats. However, while teams recognize the dangers that autonomous systems bring, fewer than 10% of those surveyed have implemented dynamic authorization or risk registries at scale.

“Agentic AI affects identity, sensitive data, and automated actions at the same time,” one CISO told researchers. “Any compromise can spread faster and have broader impact than other threats.”

A new threat landscape emerges

In another session, Lavi Lazarovitz, VP of Cyber Research at CyberArk Labs, illustrated how quickly exposure scales when autonomy is introduced into the enterprise.

In CyberArk Labs testing, a prompt injection hidden in a database record manipulated a financial services agent into exposing sensitive data and issuing unauthorized invoices. It was accomplished through poisoning a Model Context Protocol (MCP) connection. MCP, a novel framework introduced by Anthropic with the aim of standardizing how agents connect to tools and data, is quickly emerging as the API equivalent for AI agents.

But the framework isn’t all it’s cracked up to be, as Lazarovitz noted. “[MCP makes] connection easier,” but as the number of agents grows, teams are also “opening a world of opportunities for threat actors to take advantage of this access.” It also expands the potential blast radius of compromise, uncovering a clear need for identity-centric controls for AI agents.

Why identity security is the new foundation for securing AI agents

By treating agents as privileged identities, organizations can apply proven guardrails used for humans and machines, but at the scale, speed, and level of influence AI agents operate within. These controls are a core part of identity security—they aren’t meant to slow agents down, but they do help define boundaries, so innovation stays inside the lines.

“AI agents are privileged identities by definition,” said Shay Saffer, CyberArk VP of Machine Identity Solutions. “Agents have access to sensitive resources with privileged and excessive permissions…and controls need to be applied on agents before they interact.”

The current state of readiness in AI agent security

While many enterprises are piloting or deploying agents across multiple functions, the implementation of dynamic, context-aware controls remains rare.

To what do we owe this gap?

Many organizations are still figuring out how to treat agents from a security perspective. Even though agents touch sensitive resources, these autonomous systems are still being delegated through the same access and privileges as the humans invoking them. But every actor in the enterprise needs a unique, verifiable identity—including AI agents.

Other teams are struggling with the complexity of building adaptive authorization models that can interpret intent in real time, without granting AI agents standing privileges that exponentially increase the attack surface.

However, as autonomy grows, these capabilities become non-negotiable.

What steps to take next to secure your AI agents

At the event, CyberArk previewed its industry-first Secure AI Agents solution, built to prioritize control without constraining innovation.

In a nutshell, here’s how it works:

1. Start with discovery and visibility

Map every agent operating in your environment. Ask what it does, what it accesses, who owns it, and what associated risks it poses. Integrate this inventory with your existing identity platforms, as this will help eliminate shadow AI.

2. Treat agents as privileged machine identities

Apply the same rigor you use for human and machine identities, including onboarding, monitoring, and decommissioning, through defined, end-to-end lifecycle processes.

3. Expand existing identity programs

Extend zero standing privileges (ZSP), just-in-time (JIT) access, and continuous governance to this new, autonomous, and digital workforce.

Explore more insights and real-world strategies

To learn more about securing AI agents in your enterprise, catch the on-demand replay of our virtual event, “Securing the New Frontier of Agentic AI.”

You’ll dive deeper into what we’ve recapped here, with CyberArk experts, industry researchers, and security leaders unpacking the data, demonstrating real attack scenarios, and what it takes to manage AI autonomy at enterprise scale.

Kaitlin Harvey is a digital content manager at CyberArk.