Chris Scott — AI in Cybersecurity
“AI’s Impact in Cybersecurity” is a blog series based on interviews with experts at Palo Alto Networks and Unit 42 whose roles span AI research, product management, consulting, engineering and more. Our objective is to present different viewpoints and predictions on how artificial intelligence is affecting the current threat landscape, how Palo Alto Networks protects itself and its customers, and what it all means for the future of cybersecurity. In this interview, Chris Scott, a managing partner at Unit 42, explores the impact of AI in cybersecurity.
In the ever-evolving landscape of cybersecurity, organizations are searching for innovative solutions to combat increasingly sophisticated cyberthreats. The integration of artificial intelligence (AI) has emerged as a game-changer in the field, providing a powerful tool to safeguard data and organizations. Our conversation with Chris focuses on the transformative potential of AI in cybersecurity, covering both near-term predictions and long-term impacts.
Near-Term Predictions — Co-Pilot AI and Sophisticated Spear Phishing
In the near term, Chris highlights, AI is set to play a co-pilot role, assisting cybersecurity professionals during attacks. This means AI rides alongside analysts, offering critical insights and context to enhance decision-making. With the ability to process vast amounts of data rapidly, AI provides essential guidance to security practitioners, enabling them to stay one step ahead of attackers.
Unfortunately, AI is a double-edged sword. Cybercriminals are also capitalizing on AI's capabilities to enhance their malicious activities. Spear phishing attacks, for instance, have become more sophisticated with the advent of large language models (LLMs). These models enable bad actors to create highly authentic, localized and contextually relevant spear phishing messages. By analyzing multiple emails from specific companies, cybercriminals can leverage AI to craft personalized and convincing messages, leading to an increased risk of individuals falling victim to such attacks.
Medium-Term Impacts — Automated Response and Human Collaboration
Looking ahead, the medium-term impacts of AI in cybersecurity will revolve around automated response systems. As organizations face an ever-increasing number of cyberthreats, automated response mechanisms become crucial for rapid incident containment. Chris shared an intriguing example: password theft. In this scenario, an automated system can isolate compromised hosts promptly, minimizing the potential fallout. However, caution is necessary, as AI systems are not immune to false positives or other issues. Striking a balance between automation and human involvement is vital to ensure effective decision-making and prevent unnecessary disruptions. Chris explains this concept a bit further:
"When I think about the medium-term impacts of AI in the cybersecurity realm, I think you're going to see a lot more of the automated response side, that ability to understand what has happened. And, I think you're going to see a mix of where do we put this with what's human involvement, what is automated response, and how do we work together.
So, as we get these common responses, one of my favorite examples is password theft, or credential theft is another way to think of that. When a credential is stolen from an environment, what should our automated response be? One of those might be to isolate that host. Well, after we're very sure about that isolation process, we can hand that off in an automated fashion to an AI to say: When you see credential theft within these environments, go ahead and automate the isolation of these endpoints.
Now we want to be careful, though, because LLMs and just AI in general, sometimes they have a false positive or they have an issue. So, in those initial phases, we want to limit those capabilities down and specify how many automated isolations that you want to allow. Maybe it's 20 per day? Or, maybe it's 30 per day for your entire environment? And anything above that, we want to push that to a human to be able to help make the decision. Make sure that we're not having a false positive, or in the LLM world, what we call a hallucination, when it really believes that something else is happening that is not."
This approach ensures that critical incidents receive human attention, preventing false positives or AI-driven mistakes (such as hallucinations) from causing unnecessary chaos. Collaborating with AI in this way enables cybersecurity professionals to harness the speed and efficiency of automated response while maintaining oversight and avoiding a barrage of low-fidelity alerts.
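To make the guardrail Chris describes more concrete, here is a minimal Python sketch of an automated-isolation policy with a daily cap and human escalation. The names (Alert, isolate_endpoint, escalate_to_analyst) and the thresholds are illustrative assumptions, not an actual Palo Alto Networks or Cortex API.

```python
# Illustrative sketch only: auto-isolate on high-confidence credential theft,
# but cap daily automated isolations and escalate anything beyond that to a human.

from dataclasses import dataclass
from datetime import date

MAX_AUTO_ISOLATIONS_PER_DAY = 20  # e.g., the "20 per day" limit Chris mentions


@dataclass
class Alert:
    host: str
    category: str       # e.g., "credential_theft"
    confidence: float   # detection confidence, 0.0 to 1.0


def isolate_endpoint(host: str) -> str:
    # Placeholder for an EDR/XDR isolation call.
    return f"isolated {host}"


def escalate_to_analyst(alert: Alert, reason: str) -> str:
    # Placeholder for opening a ticket or paging the SOC.
    return f"escalated {alert.host}: {reason}"


class IsolationPolicy:
    def __init__(self, daily_limit: int = MAX_AUTO_ISOLATIONS_PER_DAY):
        self.daily_limit = daily_limit
        self.count_today = 0
        self.day = date.today()

    def handle(self, alert: Alert) -> str:
        # Reset the counter at the start of each new day.
        if date.today() != self.day:
            self.day, self.count_today = date.today(), 0

        # Only high-confidence credential-theft alerts qualify for automation.
        if alert.category != "credential_theft" or alert.confidence < 0.9:
            return escalate_to_analyst(alert, reason="out of scope or low confidence")

        # Above the daily cap, a human makes the call, guarding against
        # false positives or model hallucinations.
        if self.count_today >= self.daily_limit:
            return escalate_to_analyst(alert, reason="daily automation limit reached")

        self.count_today += 1
        return isolate_endpoint(alert.host)
```

The key design choice mirrors the quote: automation handles the well-understood, high-confidence cases quickly, while anything unusual or above the agreed volume is routed to an analyst for review.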
Long-Term Predictions — Proactive Security and Real-Time Analysis
In the long term, AI is poised to revolutionize cybersecurity by becoming proactive. Chris paints a vision in which AI systems configure and secure environments from their inception. He predicts:
"Long term, I think that AIs will be used to proactively configure environments as they're stood up. And even as data is flowing, let's say that you have data flowing within the environment that shows early signs of an attacker. I think AIs will proactively secure resources based upon the concern.
What are the riskiest assets? If I understand who the attacker might be, I may then be able to secure assets related to where that attacker might go. In essence, I think we'll see a lot more predictive analysis going on in real-time with real-time security applied. It'll be an interesting field to see how we get there, but I think long term that is, that's where AI is going to end up."
By preemptively securing resources and deploying real-time security measures, organizations can mitigate risks associated with specific threat actors. Predictive analysis will empower AI systems to prioritize safeguarding the most critical and vulnerable assets. This proactive and real-time approach will transform the way organizations defend against cyberthreats.
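As a purely illustrative sketch of the kind of predictive prioritization described above, the snippet below ranks assets for proactive hardening. The asset attributes, weights and scoring formula are hypothetical assumptions for demonstration, not a Palo Alto Networks algorithm.

```python
# Illustrative sketch: rank assets by a simple risk score so the riskiest
# (and those on a suspected attacker path) are secured first.

from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    criticality: float    # business impact if compromised, 0.0 to 1.0
    exposure: float       # how reachable/vulnerable the asset is, 0.0 to 1.0
    on_attack_path: bool  # does intelligence suggest the attacker is headed here?


def risk_score(asset: Asset) -> float:
    # Weight assets on the suspected attack path more heavily.
    path_weight = 1.5 if asset.on_attack_path else 1.0
    return asset.criticality * asset.exposure * path_weight


def prioritize(assets: list[Asset]) -> list[Asset]:
    # Highest-risk assets first: these get hardened or monitored before others.
    return sorted(assets, key=risk_score, reverse=True)


if __name__ == "__main__":
    inventory = [
        Asset("domain-controller", criticality=1.0, exposure=0.4, on_attack_path=True),
        Asset("hr-file-share", criticality=0.6, exposure=0.7, on_attack_path=False),
        Asset("test-vm", criticality=0.2, exposure=0.9, on_attack_path=False),
    ]
    for asset in prioritize(inventory):
        print(f"{asset.name}: {risk_score(asset):.2f}")
```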
The integration of AI in cybersecurity offers immense potential to enhance our defense strategies. From serving as a co-pilot, providing critical insights during attacks, to automating incident response and proactive security measures, AI is shaping the future of cybersecurity. However, it is crucial to remember that AI is not a panacea. Human expertise and collaboration remain essential to ensure the accuracy, reliability and ethical implementation of AI-driven cybersecurity measures.
As we continue to navigate an increasingly complex digital landscape, harnessing the power of AI in cybersecurity will be instrumental in safeguarding organizations from potential harm. By embracing AI's predictive capabilities, organizations can better prepare for attacks, analyze configurations, and create robust solutions to protect their critical data. Together, human ingenuity and AI innovation will pave the way for a more secure digital future.
Want to learn more about the impact of AI in security? Chris shares his thoughts on The Role of AI in Reshaping Cybersecurity Careers.
Never miss our ongoing “AI’s Impact in Cybersecurity” blog series. Subscribe to the Cortex SecOps blog today and receive a weekly digest every Friday, direct to your inbox.