Predicting the Future of AI in Identity and Access Management
In the rapidly changing cybersecurity landscape, Identity and Access Management (IAM) is a critical pillar, safeguarding organizational data and access across different enterprise systems and platforms. As the head of CyberArk’s Artificial Intelligence Center of Excellence (AI CoE), I’m witnessing firsthand the transformative impact of artificial intelligence (AI) in this domain. AI is not just reshaping how we manage digital identities and access controls but also how we balance productivity and security.
After announcing the launch of CyberArk’s AI CoE last September, my team and I dedicated ourselves to understanding the needs of our customers and the industry. We conducted interviews, analyzed market trends and made predictions. At the same time, we began working on several high-priority AI initiatives. Just over six months later, we have even more AI initiatives underway and many more in the planning stages.
This week, I’m participating in CyberArk’s annual IMPACT conference, where our customers and partners will preview CyberArk CORA™ AI*, the first wave of AI-powered capabilities that will be embedded across our Identity Security Platform and that we plan to release this year. In this post, I will share insights into our current work and the direction we envision for the future. While this post includes predictions, it’s important to note that they do not reflect our product roadmaps.
AI and the Security-Productivity Balancing Act
When developing AI-based capabilities for IAM systems, we often must choose between improving user productivity and enhancing security measures. Some AI capabilities streamline user operations and improve efficiency, while others aim to tighten security.
From conversations with customers, I understand they primarily see AI-based features as productivity boosters. These features help them work more efficiently and shorten the learning curve. Some AI-based features, like chatbots, focus on productivity, while others, like policy recommendation engines, combine productivity with security. These features can streamline users’ work and provide collective or heuristic-based knowledge, guiding them to make better decisions and enhance security.
Other capabilities, such as discovering and alerting on suspicious activities, lean even further toward the security end of the spectrum. But even in this case, there’s an argument that these capabilities also increase user productivity and effectiveness.
The Three Pillars of AI in IAM and What’s on the Horizon
As charted in the illustration above, we’ve categorized AI in IAM into three main pillars for this blog post, each blending productivity enhancements with security improvements. While securing GenAI is an exciting topic, it only relates tangentially to IAM, so I’m not discussing it in this blog. Looking into the future, we can expect a wide range of technologies to emerge in each category, from the near and achievable to the more distant and complex.
So, with my (cyber) crystal ball in hand, here are my predictions (and my predictions only) for what will happen in each area of AI in IAM over the next few years.
1) Chatbots and AI Assistants
Imagine a world where intelligent AI assistants guide every interaction with your IAM system. Answers to information requests, context-specific recommendations and even system configuration or debugging are delivered instantly and accurately. AI-driven chatbots and assistants in IAM will soon go beyond text-based Q&A to deliver context-aware recommendations that include integrations with third-party systems.
These AI functionalities are built to understand users’ individual needs and the unique circumstances of different customers, making operations more intuitive, efficient and tailored to the specific situation. Whether answering queries, suggesting next steps or executing commands on the user’s behalf, AI assistants are set to become indispensable in the IAM toolkit.
Here’s what’s likely to roll out in the next few years in this area of AI in IAM:
Predicted Release: 2024
- Documentation chatbots will continue to evolve and improve their ability to answer generic questions based on a body of documentation, knowledge base articles and other sources of information.
- Assistant chatbots will understand the user’s natural language and run commands for them, but at a much deeper and more complex level than what we’ve seen thus far. For example, some user queries may involve executing a chain of API calls, which requires correctly understanding the necessary parameters for each subsequent call and how to provide the appropriate response to the user (see the sketch after this list). These assistants will start out simple, but as time goes by, they will add more capabilities and support ever more complex use cases.
Predicted Release: 2025-2026
- Context-aware chatbots and assistants will be more knowledgeable about individual user circumstances than their current AI predecessors. Rather than providing the same outputs to different users, these next-gen chatbots will “know” things about you and tailor their responses accordingly. For example, they will be able to identify if the user is new to the system, which operations they typically perform and which services they subscribe to. Additionally, these assistants will consider broader context, such as progress in onboarding or completing a to-do list.
Predicted Release: 2027-2030
- Automatic issue detection and guided remediation will enable AI chatbots and assistants to become more proactive, suggesting actions and next steps. Whether it’s the next item on your to-do list or resolving an error on your screen, these assistants will increasingly offer solutions to problems and tasks – asking only for your confirmation.
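To make the assistant prediction above more concrete, here’s a minimal, illustrative sketch of how a chain of API calls might be wired together behind a natural-language request. The endpoint names, the stubbed functions and the hard-coded parameter extraction are assumptions made for the example, not a real CyberArk or vendor API:

```python
# Minimal sketch: an assistant resolving a request like
# "rotate the credentials for the billing-db safe" by chaining calls.
# find_safe, list_accounts and rotate_credential are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    safe_id: str

# --- stubbed "platform" calls; a real assistant would hit REST APIs ---
def find_safe(name: str) -> str:
    return f"safe-{name}"                       # resolve a human name to an ID

def list_accounts(safe_id: str) -> list[Account]:
    return [Account("acct-1", safe_id), Account("acct-2", safe_id)]

def rotate_credential(account_id: str) -> str:
    return f"rotation started for {account_id}"

def handle_request(user_text: str) -> list[str]:
    """Chain the calls: each step's output becomes the next step's parameter."""
    # In practice an LLM would extract the safe name from user_text;
    # here we hard-code it to keep the sketch self-contained.
    safe_name = "billing-db"
    safe_id = find_safe(safe_name)              # step 1: resolve the safe
    accounts = list_accounts(safe_id)           # step 2: enumerate its accounts
    return [rotate_credential(a.account_id) for a in accounts]  # step 3: act

if __name__ == "__main__":
    for status in handle_request("rotate the credentials for the billing-db safe"):
        print(status)
```

In a real assistant, an LLM would extract the parameters from the user’s request and decide which call to issue next; the point of the sketch is simply that each call’s output feeds the next call’s input.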
2) Access Policies
The integration of AI is poised to significantly transform how access policies are defined. Algorithms will generate dynamic least privilege access policies, ensuring users have only the access necessary for their roles. These policies will be based on natural language and intent rather than the technical language that currently represents that intent.
This shift means that IAM administrators will transition from hands-on operators to strategic supervisors who set high-level guidelines, accept or update suggested policies and handle anomalies.
This change should accelerate task completion and reduce the technical knowledge required of policy managers within organizations. Yet it raises an interesting question: Will this efficiency come at the expense of precision (for instance, will you get used to unquestioningly accepting the suggestions), or will it enhance security? The trend points toward the latter, considering that simpler, AI-generated policies tend to be less prone to human error and misconfiguration.
Here’s what to look for in the next few years:
Predicted Release: 2024
- Policy recommendations based on best practices or the collective knowledge of other customers. You can expect to see lots of these soon.
Predicted Release: 2025-2026
- Intent as policy is one of my most anticipated and revolutionary AI promises to the world of IAM (and other sister industries that handle multiple access policies). We’ll see natural language used to define new policies and explain existing ones. The intent will become the policy. For example, an access policy rule could look like this: “Give John SRE-level access to his team’s AWS production account for the upcoming two hours.” Or, “Don’t allow non-admin users to see payment-related fields in the CRM app.” (A sketch of how such a sentence might map to a structured rule follows this list.)
Predicted Release: 2027-2030
- Automatic policy creation is the logical next step after policy recommendation. Such policies can rely on history or heuristics for their creation.
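As a rough illustration of the intent-as-policy idea above, here’s a minimal sketch that turns the example sentence into a structured, time-bound access rule. The schema and the hard-coded translation are illustrative assumptions; a production system would rely on an LLM plus validation to produce and enforce the structured form:

```python
# Minimal sketch of "intent as policy": a natural-language statement is
# translated into a structured, time-bound access rule. The AccessPolicy
# schema and the parsing step are illustrative assumptions only.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AccessPolicy:
    principal: str        # who gets access
    role: str             # level of access granted
    resource: str         # what is being accessed
    expires_at: datetime  # when the grant ends (least privilege in time)

def intent_to_policy(intent: str) -> AccessPolicy:
    """Toy translation of the example intent into a policy object."""
    # A real system would parse arbitrary intents; we hard-code this one.
    return AccessPolicy(
        principal="John",
        role="SRE",
        resource="team-aws-production-account",
        expires_at=datetime.now(timezone.utc) + timedelta(hours=2),
    )

policy = intent_to_policy(
    "Give John SRE-level access to his team's AWS production account "
    "for the upcoming two hours."
)
print(policy)
```

The administrator’s job in this model is to review and approve the structured rule, not to author it by hand.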
3) Risk-based Access
The third pillar focuses on the nature of access itself. As AI makes access to systems more personalized and contextualized, access becomes more dynamic and transparent. This means fewer repetitive logins and multi-factor authentication (MFA) prompts during normal operations, leading to smoother workflows and less user frustration.
Here’s what I predict we’ll see in this area in the near and not-as-near future:
Predicted Release: 2024
- Activity summaries and security insights will be generated from the user’s interactions with systems, which produce a digital trace (like a video recording, a log or an audit record). Generative AI (GenAI) will transcribe and summarize this trace into human-readable text. Additionally, GenAI will alert you if you perform a risky operation during the session.
Predicted Release: 2025-2026
- Behavioral profiling and threat detection will work together to create and continually update risk profiles for workloads and users. These profiles will be built from each workload’s and user’s activity within the systems, making granular, precise risk-level management and threat detection achievable (a simple scoring sketch follows this list).
Predicted Release: 2027-2030
- Automated threat prevention will be the next natural step following the arrival of threat detection mechanisms. It will likely take many forms, such as stopping a suspicious session, suspending a questionable user or requiring additional login measures from them.
- Automatic policy creation (extension): With the ability to maintain user-specific profiles, systems will use this data to generate user-specific and context-specific policies, resulting in more personalized and dynamic access.
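To ground the behavioral profiling and risk-based access predictions above, here’s a minimal sketch of how a per-user profile might drive an access decision. The features, weights and thresholds are purely illustrative assumptions, not a description of any specific product’s risk engine:

```python
# Minimal sketch of risk-based access: a per-user behavioral profile is
# compared against a new session, and the resulting score drives the
# access decision. Features, weights and thresholds are illustrative only.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    usual_hours: set[int] = field(default_factory=lambda: set(range(8, 19)))
    usual_countries: set[str] = field(default_factory=lambda: {"US"})
    usual_operations: set[str] = field(default_factory=lambda: {"read", "list"})

def risk_score(profile: UserProfile, hour: int, country: str, operation: str) -> float:
    """Sum simple anomaly signals into a score between 0 and 1."""
    score = 0.0
    if hour not in profile.usual_hours:
        score += 0.3                   # activity outside normal working hours
    if country not in profile.usual_countries:
        score += 0.4                   # sign-in from an unfamiliar location
    if operation not in profile.usual_operations:
        score += 0.3                   # operation the user rarely performs
    return score

def access_decision(score: float) -> str:
    if score >= 0.7:
        return "block and alert"
    if score >= 0.3:
        return "require step-up MFA"
    return "allow silently"            # fewer prompts during normal activity

profile = UserProfile()
score = risk_score(profile, hour=3, country="FR", operation="delete-policy")
print(score, "->", access_decision(score))
```

The same score could also feed automated threat prevention: high scores stop the session, mid-range scores trigger step-up MFA and low scores let routine work proceed without extra prompts.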
The Impact of AI on IAM: A Look Into the Future
Integrating AI in IAM is an ongoing journey toward creating more secure, efficient and user-friendly systems. As we look to the future, the focus will be on how AI can seamlessly integrate into the core areas of IAM to provide increased security and productivity.
At CyberArk’s AI CoE, our mission is to drive state-of-the-art technological innovation in our products and create value for our customers by meeting today’s challenges and future-proofing against tomorrow’s cyberthreats. We strive to weave AI into the core areas of IAM, enhancing both security and productivity. As we continue exploring these exciting possibilities, we are grateful to our customers and partners on this journey toward a more secure and efficient future.
Together, we can achieve great things.
Daniel Schwartzer is CyberArk’s Chief Product Technologist and the leader of CyberArk’s Artificial Intelligence Center of Excellence.
*Learn more about CyberArk CORA AI.
Editor’s note: For more insights from Daniel Schwartzer on this subject and beyond, check out his appearance on CyberArk’s Trust Issues podcast episode, “AI Insights: Shaping the Future of IAM.” The episode is available in the player below and on most major podcast platforms.