The Human Factor in a Tech-Driven World: Insights from the CrowdStrike Outage
AI and Deepfake Technology vs. the Human Element
The idea that people are the weakest link has been a constant refrain in cybersecurity for years, and it may well have held true for the attack landscape of the past. But we now live in a world where artificial intelligence (AI), large language models (LLMs) and deepfake technology are evolving every day.
The recent CrowdStrike global outage showed the world what can happen when critical systems fail – whether the cause is an attack or an error, the results are the same. By most accounts, nearly every major business sector was affected, including airports, retail and hospitals. With roughly 8.5 million Windows hosts affected, the incident has been widely described as the largest IT outage in history. Analysts have thoroughly dissected the root cause, and detailed write-ups are easy to find.
Lessons from the CrowdStrike Outage
The CrowdStrike incident came down to a null pointer exception. In coding, a null pointer (or null reference) is a pointer that holds no valid memory address – a placeholder that is supposed to be populated with real data before the code that uses it runs. For example, a program might declare a pointer to a list of files whose count isn't known until runtime; once the files are enumerated, the pointer is assigned a valid address and the program can continue.
A null pointer exception occurs when code tries to use the pointer before it holds valid data – the reference points to nothing, and the application cannot continue. In the CrowdStrike incident, a read through an invalid programmatic value like this, inside privileged code, caused the widespread “blue screen of death” (BSOD) issue for many Windows users.
Today, a month after the incident, I still find myself questioning how such an issue could make it into production and cause such a significant impact on the industry and the world.
As an industry practitioner and a civilian, I have heard claims for years about how superior computers and technology are compared to their human counterparts. I have observed the trends over the last two to three years as more and more requests are made to AI and LLMs to review, inspect and validate code and, in some cases, provide the final say in what is released to customers and clients. I have also witnessed ever-increasing confidence from executives in what is essentially new technology as the trust in individual human resources dwindles.
Clearly, the CrowdStrike incident has reinforced the critical importance of the human element in an increasingly automated tech world.
The Importance of Manual Verification
CrowdStrike’s postmortem on the outage revealed that the company relied on the checks performed by the Content Validator. I have been in this industry for over 30 years, and I will never forget what I learned as a noob (many times over): “Computers and technology are only as smart as the people who program and configure them.” I don’t remember ever hearing about a perfect, infallible AI/next-gen cyber solution – as such, we are still at the ultimate mercy of the people running the computers. We are all looking for answers.
As CISOs globally struggle to find a new path after seeing the consequences of an outage of this scale, why are we looking to another piece of software when the answer is already on the payroll? We require multi-factor authentication (MFA) for privileged access and step-up authentication for mission-critical access – so why is there no manual verification step when deploying automation tools, AI and LLMs? Why has basic cyber hygiene been ignored after implementing supposedly next-gen technology? How many entities have rules allowing specific vendors to push updates without scrutiny? Why is anyone allowing software to update without independent review or change control?
Companies need to remember that their greatest assets are their employees. Regardless of software choice, your trusted team members are capable of reacting outside the scope of code.
The concept of Zero Trust – “never trust, always verify” – is the current battle cry for the security conscious. Yet so many organizations leave the verification portion to applications they’ve purchased or developed in-house.
I’m not suggesting we need to forgo these modern tools, but we should be realistic, look at them for what they actually are, and recognize their limitations. They are applications that use existing data models and the resources available at the time of execution to reduce the potential for human error.
We must remember there is a significant divide between best practices and reality – every entity must choose its own acceptable risk. LLMs and automation are trained on these best practices, but there is a lot to be said for years of hands-on practical experience.
Balancing Trust and Technology
The road to recovery from the CrowdStrike outage will be long and tedious, and the ripple effects will be discussed for years. As we move forward, we must remember the maturity of the technology and that it’s not perfect. We should lean into new technologies that help us to be more efficient – not replace ourselves with them. Someday, we may be able to trust computers blindly – unfortunately, we’re just not there yet.
Revisiting the idea that opened this post: people may be the weakest link, but in this new technological future, they may also be the most critical.
Len Noe is CyberArk’s resident Technical Evangelist, White Hat Hacker and Transhuman. His book, “Human Hacked: My Life and Lessons as the World’s First Augmented Ethical Hacker,” releases on Oct. 29, 2024.