Beneath the AI iceberg: The forces reshaping work and security

[Image: Iceberg above and below the waterline, its digital circuit-patterned underside symbolizing hidden AI systems and identity security risks.]

In conversations about AI, there’s a tendency to treat the future like a horizon we’re walking toward, always somewhere ahead, always a question of when. But if we look closely, the forces reshaping work, identity, and security beneath the surface are far more consequential than most people realize.

More importantly, that reshaping is already happening.

Project Iceberg, a new research framework from MIT and Oak Ridge National Laboratory, exposes this hidden layer by simulating how today’s AI can perform multifaceted tasks across the U.S. labor market. The Iceberg Index indicates that current AI systems are capable of performing work equivalent to 11.7% of the U.S. labor wage value, translating to approximately $1.2 trillion across 923 occupations.

The impact noted by the Iceberg Index doesn’t mean these jobs are simply gone. It does, however, indicate that powerful augmentation capabilities exist today, and, alarmingly, that they arrived before most organizations figured out how to effectively govern AI, and AI agents in particular. This widening gap also signals major shifts in the very meaning of work, both within the technology industry and beyond.

Why agentic AI, not superintelligence, is the real inflection point

One of the most common misunderstandings about AI is that its impact will dramatically expand only when it achieves something like human-level intelligence, commonly referred to as artificial general intelligence (AGI).

But “intelligence” is the wrong starting place.

Remember the old joke about a man telling a horse not to worry about the Model T? The horse doesn’t really grasp what’s happening, but the real lesson lies in the subtext: cars didn’t replace horses simply because they were faster; the automobile changed the foundational logic of transportation itself.

In AI advancement, we’re seeing similar structural shifts extending beyond simple assistance and speed to autonomous systems that act on our behalf, augmenting human task performance at enterprise scale. This is agentic AI: systems that can plan, adapt, and execute multi-step tasks with minimal human prompting.

And adoption is accelerating. According to recent research, 40% of financial and software companies have already deployed agentic AI systems, with deployments expected to double by 2028.

Put another way, we’re no longer talking about AI getting you from Point A to Point B faster. We’re referring to technology that can drive autonomously across changing terrain and evolving business operations, with or without you in the driver’s seat.

AI 2027 and the signals of an emerging paradigm shift

Many current public debates focus on whether and when AGI will arrive. But that’s a distraction from where the real impacts are unfolding: at the intersection of agency and adoption.

To illustrate, let’s look at one vivid, data-driven set of forecasts: AI 2027.

This uncomfortably plausible set of scenarios is grounded in current trends and expert input, and it sketches a world where autonomous agents play an active role in mainstream economic, technological, and social systems, long before a singular AGI threshold is reached.

These scenarios envision AI agents influencing R&D, scaling decision workflows, and participating in organizational operations in ways that have systemic implications. They offer a grounded, strategic lens to help leaders anticipate what’s next.

Key milestones of AI 2027 include:

  • Mid-2025: Agents begin performing complete tasks across business, coding, and research.
  • Late-2025: AI accelerates AI development, with models optimizing architectures, training processes, and alignment techniques.
  • Early-2026: Superhuman coding systems emerge, outpacing top human engineers.
  • Mid-2026: A global “AI race” intensifies, and compute, chip supplies, and data access become strategic levers of national power.
  • Late-2026 into 2027: New models self-improve, straining governance frameworks, and alignment gaps scale more quickly than ever before.
  • 2027: The world splits between two possible outcomes: a runaway race of escalating capabilities (and loss of control), or a collective slowdown driven by global regulation and cautious innovation.

As you can see from these milestones, especially when held up against the Iceberg Index, agents are poised to transform the workforce and society. For organizations seeking to unlock the potential of AI agents, the most significant challenge will be governing agency responsibly, even as the ground continues to shift beneath their feet.

What work and purpose look like in an AI agent-driven world

Many publications talk about AI-related job changes and layoffs abstractly, as if roles simply shift as technology evolves. But that, again, is only the surface story.

To look at this another way, the Iceberg Index shows that the most visible aspects of AI exposure, concentrated in tech-heavy U.S. coastal areas, account for only 2.2% of wage value. Dig deeper, however, and the full figure of 11.7% reveals the bigger picture: exposure across administrative, financial, and other cognitive work that traditionally forms the backbone of white-collar employment but is often overlooked in similar surveys.

While these numbers themselves are revealing, they don’t paint the whole picture, as the very idea of work is about more than just being employed. Much of who we are as people, including our identities, contributions, and communities, is deeply intertwined with our work. When we discuss potential displacement due to AI, we need to consider more than just tasks being automated; instead, we should examine what that work will look and feel like when machines can, and do, perform significant subsets of what humans once did.

Another piece of industry research on the future of workplace models shifts the question from the replacement or elimination of work to its reconfiguration as an ongoing partnership among people, agents, and robots. In fact, in many plausible mid-term scenarios, AI and human teams can unlock greater economic value when humans focus on higher-order operations, like judgment, strategy, coaching, and orchestration, while agents handle granular execution.

At this convergence, the meaning of work must evolve, just as transportation has. If machines can generate reports, detect patterns, and initiate workflows, then our own work as people will advance from task execution to curation, governance, and stewardship.

As it stands, the real exposure risks won’t come from AI totally replacing humans, but from organizations failing to design systems and frameworks around coexistence and shared purposes.

Security as the first frontier of agentic AI

One of the domains where AI agency already has real operational impact is cybersecurity.

For decades, security operations centers (SOCs) operated on a human-centric model, with alerts filtered through analysts who investigated, responded to, and documented incidents. But agentic AI is breaking that rhythm.

Today’s autonomous agents can simultaneously ingest signals from identity systems, cloud platforms, endpoint telemetry, and threat intelligence, allowing them to correlate, prioritize, and even execute predefined containment steps when risk signals breach policy thresholds. Agents can also compose narratives, link evidence, and surface strategic interpretation without waiting for manual orchestration.
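
To make that concrete, here is a minimal Python sketch of what policy-gated autonomous triage might look like. The signal sources, scoring weights, threshold, and containment action are illustrative assumptions, not any specific vendor’s API:

```python
from dataclasses import dataclass

# Illustrative only: signal names, weights, and the containment action
# are assumptions for this sketch, not a real product's interface.
@dataclass
class Signal:
    source: str      # e.g. "identity", "cloud", "endpoint", "threat_intel"
    entity: str      # the user, host, or service the signal concerns
    severity: float  # normalized 0.0-1.0 risk contribution

CONTAINMENT_THRESHOLD = 0.8  # policy threshold set by humans, not by the agent

def correlate(signals: list[Signal]) -> dict[str, float]:
    """Aggregate per-entity risk across identity, cloud, and endpoint signals."""
    risk: dict[str, float] = {}
    for s in signals:
        risk[s.entity] = min(1.0, risk.get(s.entity, 0.0) + s.severity)
    return risk

def triage(signals: list[Signal]) -> list[str]:
    """Return entities whose correlated risk breaches the containment policy."""
    risk = correlate(signals)
    to_contain = [e for e, r in risk.items() if r >= CONTAINMENT_THRESHOLD]
    for entity in to_contain:
        # Predefined, policy-approved step; anything novel escalates to a human.
        print(f"[agent] isolating {entity} (risk={risk[entity]:.2f})")
    return to_contain

if __name__ == "__main__":
    triage([
        Signal("identity", "host-42", 0.5),  # e.g. impossible-travel login
        Signal("endpoint", "host-42", 0.4),  # e.g. suspicious process tree
        Signal("cloud", "svc-api", 0.3),     # anomalous but below threshold
    ])
```

The key design point is that the agent acts autonomously only inside a policy envelope humans defined in advance; anything outside it escalates rather than executes.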

Early SOC adopters are already integrating agentic AI workflows to reduce noise, accelerate detection, and offload repetitive triage tasks, allowing human analysts to focus on judgment and escalation. And as recent agentic AI security guidance emphasizes, this shift comes with new kinds of risk management requirements like updated governance frameworks, oversight mechanisms, and identity-centric control models designed for autonomous systems rather than static ones.

Strengthening human-AI collaboration as agents take on more work

The biggest mistake organizations are making is treating AI as something that gets adopted once and checked off a list. Agentic AI operates differently, and it requires a few strategic shifts to ensure trust and responsibility for how these systems act:

  • Purpose-first thinking: AI agents cannot define what matters. Humans must set goalposts and intent. What gets automated, why, and to what end?
  • Governance of agentic AI systems: Technologies that act need identity, privileges, oversight, and ongoing accountability, just as humans do (see the sketch after this list).
  • Shared contribution design: Work needs to be intentionally restructured so humans and agents complement each other.
  • Meaning over tasks: If machines assume procedural duties, humans are free to focus on judgment, interpretation, ethics, and community.

Leaders who adopt these tenets will be best equipped to adapt, shape, and lead in the era of agentic AI.

Looking beneath today’s AI iceberg to understand the future of security

Rather than speculating on AGI arrival timelines, organizations should instead focus on securing agentic systems that are already acting at scale.

These structural shifts may not make headlines the same way AI sentience does, but what’s below the surface today will play a significant role in shaping the economics and sociology of tomorrow.

And as we design the work models of the future, we must consider one strategic security imperative: Are we building AI systems that we can govern, understand, and align with human purpose before autonomous actions become irreversible?

Omer Grossman is chief trust officer (CTrO) and head of CYBR Unit at CyberArk.