
The new AI access problem: Why machine identities now drive trust in banking

[Image: Digital representation of a bank connected to AI systems, illustrating machine identity security in financial services.]

In my experience working inside banks, identity security can be like plumbing: when it’s working, no one wants to talk about it. When there’s an incident, an audit, or a regulator—suddenly everyone wants to understand how it works.

Artificial intelligence (AI) brings the same “no one cares until everyone does” energy, but with face-melting velocity. AI has been part of financial services for more than 25 years and is now embedded across large parts of the industry. The debate over whether that’s good or bad moved on while we were still forming our opinions.

Banks use AI systems to drive fraud detection, inform trading strategies, support underwriting decisions, and shape customer interactions at scale. However, the gap between rapid adoption and lagging oversight keeps widening, and it spells trouble.

While many institutions invest in AI ethics, explainability, and model governance, they underestimate the risk posed by the identities and authority that those models are given.

That risk doesn’t show up in model performance metrics.

The question we should be asking is: How do we trust decision-making systems that operate beyond human scale?

Why authority, not intelligence, now determines AI trust in banking

In a traditional bank, you could count the keys to the vault. Access was tangible, limited, and owned. In a modern financial institution, every employee, contractor, application, and automated process holds dozens of digital keys, each of which opens something worth protecting.

In this vast digital web, who carries out the work behind each system-to-system operation? Machine identities—service accounts, tokens, application identities, and automated principals. How many of these identities does the average bank require? Industry research suggests the ratio is approaching 96 machine identities for every human.

As proliferating AI systems are granted the authority to trigger workflows, move data, and influence outcomes, the nature of this identity risk becomes clear (and sobering): What’s scaled isn’t the intelligence of our tech; it’s how many decisions we’re letting machine identities make for us.

If we can’t inventory, understand access, or assign accountability at this scale, then trust becomes inferred when it should be enforced.

[Pull quote: “Trust becomes inferred when it should be enforced.”]

How shadow AI creates unaccountable authority in financial institutions

Just as we were beginning to get a handle on the risks of shadow IT, we’re seeing a similar pattern emerge with AI, where automation is introduced faster than governance frameworks can mature. It typically shows up in three areas of automated environments:

1. Authentication paths through which AI systems and agents access platforms and services.

2. Privilege drift, where permissions expand over time as systems are reused and integrated.

3. Decision influence, where models shape outcomes without clear ownership or oversight.

What makes this different from earlier automation waves is that access paths are created implicitly inside AI-driven workflows. Authority is delegated through integrations and tokens that feel temporary, yet in some cases access is granted without a clear view of what data use was actually consented to. Decision-making power expands faster than ownership is reassigned.

These gaps won’t trigger immediate alarms, but they will surface later when teams are asked to explain how access to data and resources was granted, why it remained in place, and who was accountable at the time.

The first questions institutions face when AI identity risk surfaces

When AI-driven identity risk materializes, the same failures tend to appear early.

Discovery tools don’t see everything. By the time anyone notices an orphaned machine identity or an unreviewed entitlement, the environment has already moved on. At that point, teams are forced to reconstruct history rather than demonstrate control.

AI-driven environments introduce a whole new failure mode: post-event explainability.

Boards aren’t debating model behavior; they’re questioning whether the institution can stand behind the authority it delegated, and whether that delegation has weakened its resilience posture. In the audit room, the conversation will be about authority: who acted, what permissions were in place, and whether anyone can account for them. Unclear authority is what turns an incident into a crisis.

How to make digital coworkers (AI systems) governable

How do we address this in a way that keeps pace with AI adoption, rather than fighting it? A more effective approach is to treat these systems as digital coworkers, operating alongside humans and influencing outcomes.

Like their human counterparts, digital coworkers operate continuously, influence outcomes, and require governance and, I’d argue, some form of performance management. If an automated system can affect customers, capital, liquidity, or operational risk, someone must be accountable for its access and behavior. Security leaders already apply this discipline to human roles. Extending it to machine identities is a necessary evolution.

The bigger shift is cultural: we must treat delegated authority as always having an owner, a scope, and an exit plan.

Why regulation is now a forcing function for AI trust in finance

Regulatory scrutiny is increasingly focused on how automated systems are governed, not just how they perform.

Across regions, regulators are asking financial institutions similar questions. Can institutions explain how AI-driven decisions are controlled? Can they evidence accountability? Can they show that controls still work as environments change?


While regulatory frameworks differ by region, the underlying expectation is the same: Can financial institutions stand behind the authority their AI systems were given? Accountability can’t be delegated to technology. It stays with the institution.

Demonstrating trust when machine identities outnumber humans

Trust in technology has always been part of financial services, but the scale and speed of modern systems change the equation. Teams design, build, and rely on systems to behave predictably, day in and day out.

What changes at the machine scale is the tolerance for uncertainty. When machine identities outnumber humans by such a wide margin, trust has to be demonstrated, not assumed. It depends on clear visibility into who can act, defined boundaries on what they’re allowed to do, and the ability to step in quickly when conditions shift.

The institutions that address this early will be better positioned as AI governance continues to mature globally.

Andy Parsons is the director of EMEA Financial Services and Insurance at CyberArk.