CIO POV: Navigating the Deepfake Pandemic with Proactive Measures

We’re in the throes of another pandemic, but this time, it’s not transmitted through the air – it spreads with just a click.

Welcome to the world of deepfakes.

While COVID-19 significantly impacted our physical and mental well-being, deepfakes affect our minds differently. Their influence is causing confusion, mistrust and a distorted perception of reality, both personally and globally. In this crucial election year – with over 4 billion people across 60 countries gearing up to choose their leaders – deepfake technology is being weaponized to spread misinformation, influence global events and shape the course of history.

Voter influence campaigns fueled by deepfake videos will spread across social media platforms with a mere click. India, now in the midst of elections, is already grappling with an onslaught of deepfakes. Moreover, deepfakes are infiltrating B2B environments, exemplified by a recent fraud case in Hong Kong that cost a multinational company millions.

Further chaos will ensue as GenAI races toward super-fakes.

Mitigating the Risks of GenAI

Ongoing innovation in GenAI will likely render the average person incapable of discerning authentic content from deepfakes. Few tools today can reliably identify and mitigate this threat. This gap puts added pressure on cybersecurity teams already stretched by limited resources and budgets as they defend against the existing threat landscape. Here are three measures we must consider now to address this modern risk:

   1. Establishing Regulations Quickly

Regulations, though not foolproof, serve as guardrails against unchecked technological innovation. The absence of regulations, as witnessed in the case of social media, allows for rampant issues like misinformation and polarization. Governments worldwide, mindful of not making the same mistakes they made with social media, are swiftly enacting regulations to hold both vendors and users of GenAI accountable. Examples from 2023 include the EU AI Act and the U.S. Executive Order (EO) on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence.

The current regulations are, to some extent, slowing the pace at which vendors release GenAI-powered tools to the market. For example, OpenAI is delaying the release of Sora to ensure content provenance and to help users distinguish authentic videos from increasingly realistic fakes.
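
Content provenance here means attaching a cryptographically signed manifest to each media file (the approach standardized by C2PA) so a verifier can check both that the bytes are unmodified and who published them. The following is a minimal sketch of that verification step in Python; the manifest layout is a hypothetical simplification of what real standards define, and it assumes the open-source cryptography package:

    import hashlib
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_provenance(media: bytes, manifest: dict,
                          publisher_key: Ed25519PublicKey) -> bool:
        """Return True only if the manifest matches the file and is validly signed."""
        # Integrity: the manifest must reference the exact bytes we received.
        if manifest["sha256"] != hashlib.sha256(media).hexdigest():
            return False
        # Origin: the manifest must carry a valid signature from the claimed publisher.
        signed_fields = {k: manifest[k] for k in ("sha256", "publisher", "created_at")}
        payload = json.dumps(signed_fields, sort_keys=True).encode()
        try:
            publisher_key.verify(bytes.fromhex(manifest["signature"]), payload)
            return True
        except InvalidSignature:
            return False

Note the limits of the approach: a valid signature proves who published a file and that it hasn't changed since signing, not that its content is truthful, and everything hinges on trustworthy distribution of publishers' keys.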

   2. Addressing Misplaced Confidence and the Need for Self-Regulation

AI-driven phishing and deepfake scams are already working. In February, a Hong Kong-based multinational company lost HK$200 million (U.S. $25.6 million) to a deepfake scam that fooled a clerk into executing a financial transaction discussed during a virtual meeting where every attendee – even the chief financial officer (CFO) – was fake. Unfortunately, this first-of-its-kind AI heist (and worst day ever for the clerk) will not be the last. AI-powered phishing attacks will soon target and potentially breach nearly all organizations. Beyond this dire forecast, our Identity Security Threat Landscape Report predicts a steep rise in GenAI-powered phishing that will be harder to detect because of the sophistication and scale of the attacks.

Yet despite the growing threat of AI-driven phishing and deepfake scams, there’s a widespread misconception among employees regarding their ability to identify deepfakes. Our recent survey of 4,000 U.S. office workers finds that over 70% of employees (yes, you read that correctly) are largely confident in their ability to identify a deepfake video or audio of the leaders in their organization.

It’s a bad bet. This misplaced confidence underscores the need for rigorous fact-checking and self-regulation. Individuals must verify information from multiple trusted sources and exercise caution, particularly in high-stakes scenarios. And if you can’t fact-check it with numerous trusted sources, don’t believe it.
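
In practice, that advice boils down to a corroboration threshold: treat a claim as credible only when at least k independent, trusted sources confirm it. A toy sketch of such a policy in Python (the source list and threshold are purely illustrative):

    # Purely illustrative source list and threshold, not a recommendation.
    TRUSTED_SOURCES = {"reuters.com", "apnews.com", "bbc.com"}
    REQUIRED_CONFIRMATIONS = 2

    def is_credible(confirming_sources: set[str]) -> bool:
        """Accept a claim only when enough independent trusted sources confirm it."""
        return len(confirming_sources & TRUSTED_SOURCES) >= REQUIRED_CONFIRMATIONS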

   3. Tackling the Debilitating Lack of Tools

Among other notable executives making bullish statements about AI, JPMorgan Chase CEO Jamie Dimon recently said that AI could be as impactful as electricity. And who can fault them? After all, GenAI amassed over a billion users in just a matter of months; no other technology has been adopted so quickly.

I’m excited about AI, too.

However, the unchecked proliferation of GenAI poses significant challenges for organizations, including the need for more effective training and oversight. GenAI tools learn from vast datasets, raising concerns about inadvertently sharing sensitive data. Moreover, some GenAI models inherently lack adequate cybersecurity safeguards, or offer controls too complicated to configure, leaving organizations vulnerable to exploitation by malicious actors. It's particularly concerning that the number of GenAI tools that can generate deepfakes keeps growing while tools that can detect and prevent them remain scarce to nonexistent.
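
One practical guardrail against the data-leakage half of this problem is a redaction layer that scrubs obvious sensitive patterns from prompts before they ever leave the organization. A minimal sketch, assuming illustrative regex patterns (real data-loss-prevention tooling is far more thorough):

    import re

    # Illustrative patterns only; production DLP uses much broader detection.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(prompt: str) -> str:
        """Replace sensitive tokens with labeled placeholders before the prompt
        is sent to any external GenAI service."""
        for label, pattern in REDACTION_PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt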

As a leader, I sympathize with peers facing this same ordeal while still having to find ways to maintain a robust security posture.

What Your Organization Can Do to Protect Against Deepfakes

The good news is that there are some things organizations can do now to protect against deepfakes. These actions include:

  • Identify and train external-facing employees who interact with customers and may have access to sensitive information. For example, support and services staff should be trained to ask additional questions to verify whether an external caller is a human or a deepfake (see the sketch after this list).
  • Educate all employees on the risks of engaging with unverified content and discourage altering or amplifying such content.
  • Prioritize investment in responsible and ethical AI practices, particularly during times of budget cuts.
  • Hold AI vendors accountable by embedding language in contracts to review capabilities periodically and ensure alignment with expectations.
  • Foster collaboration between employees and leadership to address gaps in perception and enhance awareness of deepfake threats. You can start by socializing and discussing our Identity Security Threat Landscape Report findings with them (my gift to you!).
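
On the first point, the essential control is refusing to treat the requesting channel itself as proof of identity: any high-value request arriving over a channel a deepfake can spoof should trigger confirmation through an independent, pre-registered channel. Here is a minimal sketch of that gate in Python; the threshold, channel names and fields are hypothetical placeholders for an organization's own procedures:

    from dataclasses import dataclass

    HIGH_RISK_CHANNELS = {"video_call", "phone", "email"}  # channels a deepfake can spoof
    APPROVAL_THRESHOLD_USD = 10_000  # illustrative cutoff

    @dataclass
    class PaymentRequest:
        requester: str    # identity claimed on the call or message
        amount_usd: float
        channel: str      # where the request arrived

    def requires_out_of_band_check(req: PaymentRequest) -> bool:
        """Large requests over spoofable channels must be re-confirmed via an
        independent, pre-registered channel (e.g., a callback to a number on
        file) before execution."""
        return (req.amount_usd >= APPROVAL_THRESHOLD_USD
                and req.channel in HIGH_RISK_CHANNELS)

A gate like this would have forced the Hong Kong clerk's workflow to include a callback to the real CFO before the transfer executed.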

As we navigate the uncharted territory of GenAI, collaboration, vigilance and proactive measures are essential to combat the threat of deepfakes. Let’s work together to ensure that GenAI shapes a future where technology is a force for good rather than a pervasive pandemic of misinformation.

Don’t wait – take action now to protect your organization from the dangers of deepfakes.

Omer Grossman is the global chief information officer at CyberArk. You can check out more content from Omer on CyberArk’s Security Matters | CIO Connections page.