96 machines per human: The financial sector’s agentic AI identity crisis
What if you hired about 100 new employees for every one you already had, and then, on a whim, gave them all admin rights? Sure, these fresh hires would likely be brilliant and hungry to make an impression. But they wouldn’t always know the rules. Some would make mistakes. Others might take liberties. Before long, it’d be bedlam.
That’s what’s happening right now inside financial services institutions. For every human employee, there are 96 machine identities that teams must manage, a much higher ratio than the 82:1 average across other sectors. And that’s before we even consider agentic AI.
These autonomous workers are quickly becoming what I’ve called first-class citizens across critical systems. Yet at many financial firms, agentic AI innovation and machine identity sprawl are outpacing security controls.
The rise of AI and machine identity sprawl in finance
Financial services leaders know identity counts are exploding, and over half (51%) of firms expect the number of identities they manage to double in the next 12 months, with machines and AI systems as their primary growth drivers.
However, only 10% of financial organizations view machine identities as “privileged” users.
Why is this gap so troubling? Agentic AI adoption is skyrocketing. As Citi’s GPS Report describes it, agentic AI is fueling the “Do It For Me” economy in finance, already automating compliance checks, analyzing trades, and even making credit decisions. While these business innovations are powerful, the lack of identity-first guardrails for agentic AI technology is cause for concern.
From my own time in the CISO chair, I can tell you that discovery is the first battle. You can’t secure what you can’t see. If you don’t know where every identity is, the ones operating in the dark will likely blindside you.
It’s helpful to think of AI agents like digital employees. But would you ever let thousands of new workers onto your systems without thorough background checks, access policies, or ongoing oversight?
Because that’s effectively what happens when machine identity sprawl, and the AI agents driving it, grows unchecked.
Rising risks of shadow AI in financial services
The old headache of shadow IT has evolved into something far more dangerous: shadow AI. Nearly half (45%) of financial services organizations admit unsanctioned AI agents are already creating identity silos outside formal governance programs.
Picture a settlement-processing AI agent that tweaks a script to speed up end-of-day settlements. Throughput improves by 30%, but in the process the unmonitored agent skips a key data-filtering rule, accidentally pulling from non-production datasets and risking a data leak. If the agent isn’t tracked, mitigating the leak, or any subsequent incident, becomes much harder.
While the efficiency gains of AI agents are clear, shadow AI presents real, unmanaged risk—not to mention greater potential for costly compliance violations.
Three agentic AI attack surfaces financial firms must secure
Every AI agent exposes three distinct attack surfaces, each with its own identity security challenges. Together, they form a practical framework for understanding where the vulnerabilities lie and how to address them:
1. Infrastructure credentials
API keys, TLS certificates, and other machine credentials are the lifeblood of AI agent operations. If stolen, they can act like a master keycard left lying on a desk, granting unauthorized access to critical systems.
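One widely used mitigation is to replace static secrets with short-lived, automatically expiring credentials, so a stolen token goes stale before it can do much damage. The Python sketch below is a minimal illustration of that idea; the `AgentCredential` type and `issue_short_lived_credential` helper are hypothetical, not any vendor’s API.

```python
import secrets
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentCredential:
    agent_id: str
    token: str
    expires_at: datetime

def issue_short_lived_credential(agent_id: str, ttl_minutes: int = 15) -> AgentCredential:
    """Mint a random, time-boxed token instead of handing out a static API key."""
    return AgentCredential(
        agent_id=agent_id,
        token=secrets.token_urlsafe(32),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )

def is_valid(cred: AgentCredential) -> bool:
    """A stolen token that has already expired is worthless to an attacker."""
    return datetime.now(timezone.utc) < cred.expires_at

cred = issue_short_lived_credential("settlement-agent")
print(is_valid(cred))  # True now; False 15 minutes after issuance
```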
2. Entitlement creep
Over time, agents accumulate privileges the way human employees hoard files. A bit of access here, a temporary entitlement there, and soon, an AI agent has sweeping control—often far beyond what’s necessary for its job.
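One way to catch this drift is a periodic least-privilege review that compares what an agent holds against what it actually uses. A minimal sketch, assuming you can export each agent’s granted entitlements and recent usage logs; the entitlement names are illustrative:

```python
def find_excess_entitlements(granted: set[str], used_recently: set[str]) -> set[str]:
    """Privileges an agent holds but has not exercised: candidates for revocation."""
    return granted - used_recently

# Illustrative data: a settlement agent that quietly accumulated access it never uses.
granted = {"read:trades", "write:settlements", "write:prod-db", "admin:reports"}
used_last_90_days = {"read:trades", "write:settlements"}

print(find_excess_entitlements(granted, used_last_90_days))
# {'write:prod-db', 'admin:reports'}  (set order may vary)
```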
3. The model layer
AI agents themselves can be manipulated. Techniques like prompt injection, recursive loops, or poisoned training data can trick agents into behaving against policy, accessing systems they never should, or inheriting risks from the other agents they work with.
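Because the model itself can be manipulated, the guardrail has to live outside it. A common pattern is to check every tool or system call against the agent’s identity before execution, so a prompt-injected instruction can’t expand the agent’s reach. A minimal sketch, assuming a hypothetical per-agent allowlist:

```python
# Hypothetical per-agent policy: which tools each agent identity may invoke.
AGENT_POLICY: dict[str, set[str]] = {
    "settlement-agent": {"settlement_api", "trade_ledger_read"},
    "compliance-agent": {"sanctions_screening", "kyc_lookup"},
}

def authorize_tool_call(agent_id: str, tool: str) -> None:
    """Enforce the allowlist outside the model, so an injected prompt
    cannot talk the agent into an out-of-scope action."""
    if tool not in AGENT_POLICY.get(agent_id, set()):
        raise PermissionError(f"{agent_id} is not entitled to call {tool}")

authorize_tool_call("settlement-agent", "settlement_api")  # within policy
try:
    authorize_tool_call("settlement-agent", "kyc_lookup")  # out of scope
except PermissionError as err:
    print(f"Blocked and logged: {err}")
```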
Why manual processes fail to secure AI agents
While in charge of core banking infrastructure, I saw firsthand how manual processes collapse at this scale. Human oversight alone won’t cut it. You need controls that work at machine speed, with governance frameworks that treat AI agents like privileged machine identities.
In my military days, we called it “train hard, fight easy.” The same applies here. If you don’t rigorously test, discipline, and optimize your systems, they will likely fail when the pressure’s really on.
Fraud-related risks and rewards: how agentic AI is reshaping finance
Nowhere is the dual nature of AI agents more apparent than in fraud.
On the one hand, they can be fraud multipliers. U.K. banks lost £1.1 billion to fraud in 2024, up 14% year over year, and 60% of financial firms already say their top AI concern is agent misconfiguration or manipulation. A rogue agent, or worse, one designed for fraud detection but tricked into enabling it, becomes the ultimate insider threat.
On the other hand, they can be fraud fighters. Mastercard’s AI-driven fraud detection system now monitors more than 160 billion transactions annually, proving that AI agents can outpace fraudsters when correctly governed.
The paradox is clear: agents can be both hero and villain. Identity-first controls play a critical role in determining which.
Why identity security is critical for financial sector innovation
Financial institutions are often criticized as slow adopters, weighed down by regulatory complexity. But identity-first security flips the narrative. It lets firms innovate at speed, while proving to regulators that they’re doing so safely.
European banks may even have an edge. The EU AI Act and DORA explicitly call out machine identities and model governance. Firms that get ahead here don’t just reduce risk—they create a compliance and operational advantage.
For example, one bank enforces zero standing privileges (ZSP) and just-in-time (JIT) access for every compliance-checking agent. While peers wrestle with shadow bots and entitlement creep, this bank can demonstrate to regulators that actions are logged, constrained, and revocable in real time. Audit prep that once took weeks now takes hours.
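Conceptually, ZSP plus JIT means an agent holds no access by default and receives a narrowly scoped, expiring grant per task, with every grant written to an audit trail. A minimal sketch of that control model, with hypothetical grant_jit and check_access helpers (not the bank’s actual implementation):

```python
from datetime import datetime, timedelta, timezone

ACTIVE_GRANTS: dict[tuple[str, str], datetime] = {}  # (agent_id, entitlement) -> expiry
AUDIT_LOG: list[str] = []

def grant_jit(agent_id: str, entitlement: str, ttl_minutes: int = 10) -> None:
    """Zero standing privileges: access exists only for the task at hand."""
    expiry = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
    ACTIVE_GRANTS[(agent_id, entitlement)] = expiry
    AUDIT_LOG.append(f"GRANT {agent_id} {entitlement} until {expiry.isoformat()}")

def check_access(agent_id: str, entitlement: str) -> bool:
    """Expired or never-granted access simply does not exist."""
    expiry = ACTIVE_GRANTS.get((agent_id, entitlement))
    return expiry is not None and datetime.now(timezone.utc) < expiry

grant_jit("compliance-agent", "sanctions_screening", ttl_minutes=5)
assert check_access("compliance-agent", "sanctions_screening")
assert not check_access("compliance-agent", "prod_db_write")  # never granted
```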
That’s not a laggard’s story, but leadership.
Securing the future of finance with identity-first AI governance
With financial services teams struggling to manage 96 machines for every human, and with each AI agent effectively a privileged machine identity, securing agentic AI is both the defining challenge and the defining opportunity of financial cybersecurity today.
As I often tell clients, you can’t secure what you can’t see. Discovery, ownership, and governance must extend to machine identities with the same discipline we apply to human ones. These agents are our new digital coworkers, and they demand first-class security to match their first-class access.
Andy Parsons is the director of EMEA Financial Services and Insurance at CyberArk.
🎧 Hear more from the author: listen to Andy Parsons on the Security Matters podcast for a deeper dive into agentic AI in financial services, including the crucial role of identity security.