Smarter Security
Mike Spisak – Building a Trusted AI System
“AI’s Impact in Cybersecurity” is a blog series based on interviews with a variety of experts at Palo Alto Networks and Unit 42, with roles in AI research, product management, consulting, engineering and more. Our objective is to present different viewpoints and predictions on how artificial intelligence is impacting the current threat landscape, how Palo Alto Networks protects itself and its customers, and what the implications are for the future of cybersecurity.
We recently interviewed Mike Spisak, technical managing director with the Proactive Services Creation Team at Unit 42. He discussed his predictions around AI in cybersecurity, and the importance of fostering a cyber-aware culture.
One short-term prediction from Spisak is the emergence of AI-powered cybersecurity “assistants,” which he envisions will serve as co-pilots to defenders, boosting their efficiency in responding to threats.
Imagine having a virtual cybersecurity assistant by your side, like a trusted co-pilot, enhancing your security operations with unparalleled speed and efficiency. Spisak foresees the emergence of such assistants in various forms, aimed at accelerating the pace of threat detection, response and analysis. Reaching that pace means leveraging AI to automate mundane, low-level tasks and expedite critical processes. Spisak emphasizes that nearly 40 percent of daily security operations can be automated, highlighting the potential for AI-driven assistants to revolutionize cybersecurity workflows.
In the short term, these assistants are poised to become indispensable companions for security analysts, streamlining operations and bolstering defense capabilities. Spisak predicts an even broader impact in the medium term, envisioning a future where such assistants are ubiquitous across all sectors, with everyone having their own cybersecurity assistant, tailored to their individual needs and vigilant against potential threats. These virtual assistants, powered by AI, would integrate seamlessly into daily routines, providing timely guidance and warnings to mitigate risks. Spisak elaborates:
“Imagine a place where a CISO can ask an artificially driven intelligent assistant, ‘Where are there vulnerabilities in my codebase? Where am I affected by some new threat intelligence that just came out? Am I affected by this new piece of intel that just dropped on my desk? I just heard about a new zero-day attack from my friends at Unit 42. Does this exist in my environment?’ I need to be aware of these types of threats.
This is an area where visibility and situational awareness are key. The CISO needs to explain to nontechnical people what's happening in the environment and what risk it poses. If a CISO goes to a board and says, ‘I need help in the form of compute power, or resources, or skills to battle a buffer overflow,’ he's gonna get shown the door, because those issues don't translate directly to business outcomes. But if a CISO can, with the help of generative AI, receive text summarizations or another mode where complex technical topics are converted into consumable human business speak, that translates to something the board can relate to and take action on if needed.”
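The translation layer Spisak describes maps naturally onto today's generative AI APIs. Below is a minimal sketch using the OpenAI Python client as one concrete stand-in; the model choice, prompt and finding are illustrative assumptions, not a description of any Palo Alto Networks tooling:

```python
# Hypothetical sketch: turn a technical finding into board-ready language.
# The OpenAI Python client is used here as a stand-in; any LLM API would do.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

finding = (
    "Pre-auth buffer overflow in the payment service; 14 internet-facing "
    "hosts remain unpatched and exploit code is circulating publicly."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": "Summarize security findings as two sentences of "
                       "business risk for a board audience. Avoid jargon.",
        },
        {"role": "user", "content": finding},
    ],
)
print(response.choices[0].message.content)
```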
From avoiding phishing emails to steering clear of suspicious websites, these assistants would offer invaluable support in navigating threats. Spisak anticipates that as technology evolves and becomes more accessible, the widespread adoption of such assistants will empower individuals to make smarter security decisions, ultimately fortifying our digital defenses on a global scale.
Long-Term Predictions
Spisak goes on to predict that defensive AI systems will engage in autonomous "battles" with offensive AI, each side learning from the other and perpetually iterating strategies of attack and defense in a cyclical pattern.
While such dynamics are not yet extensively documented in real-world scenarios, theories abound on the potential for AI-driven attackers and defenders to engage in such a co-evolutionary learning process. This prediction is based on observations from simulations and theoretical frameworks, indicating a probable trajectory for cybersecurity in the long term.
In this envisioned future, AI-powered attackers will glean insights into defensive tactics, while defensive AI systems will reciprocate by studying offensive strategies. This perpetual cycle of learning and adaptation underscores the importance of staying ahead of the curve in understanding and mitigating emerging threats.
As this future-forward notion unfolds, AI-driven cyberwarfare will demand innovative approaches to uphold security in digital ecosystems, underscored by the critical necessity of human inputs and ethical oversight.
AI's Proficiency in Detecting and Preventing Security Threats and Attacks
When asked what types of security threats or attacks AI-powered systems are going to be particularly effective at detecting and preventing, Spisak gets right to the point:
"This may seem somewhat obvious, but the first one I'll say, and we haven't talked about it yet, is denial-of-service attacks or DDoS attacks as a popular, or distributed denial-of-service attacks. I think that AI is very good at pattern detection, and I also think it's very good at generating synthetic data and then recognizing that one-off by one percent or by 1000 percent and the ability to slide the throttle, throttle the threshold they're in."
He highlights AI's proficiency in pattern detection and its ability to generate synthetic data, enabling it to discern anomalies with precision. "The ability to distinguish legitimate traffic from malicious traffic, and then to automatically divert it or take autonomous action to maintain service availability, will become increasingly refined," Spisak explained.
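To make the "sliding throttle" idea concrete, here is a minimal sketch of threshold-based anomaly detection over request rates. The rolling-baseline approach and every number in it are illustrative assumptions, not a description of any particular product:

```python
# Minimal sketch of threshold-based traffic anomaly detection.
# The window size and sigma multiplier (the adjustable "throttle")
# are illustrative values only.
from collections import deque
from statistics import mean, stdev

class TrafficAnomalyDetector:
    def __init__(self, window: int = 60, threshold_sigmas: float = 3.0):
        self.baseline = deque(maxlen=window)      # rolling window of request rates
        self.threshold_sigmas = threshold_sigmas  # slide this to loosen/tighten detection

    def observe(self, requests_per_second: float) -> bool:
        """Return True if the new sample looks anomalous against the baseline."""
        anomalous = False
        if len(self.baseline) >= 10:  # need enough history to judge
            mu, sigma = mean(self.baseline), stdev(self.baseline)
            anomalous = requests_per_second > mu + self.threshold_sigmas * max(sigma, 1e-9)
        if not anomalous:
            self.baseline.append(requests_per_second)  # only learn from normal traffic
        return anomalous

detector = TrafficAnomalyDetector()
for rate in [100, 105, 98, 102, 101, 99, 103, 100, 97, 104, 5000]:
    if detector.observe(rate):
        print(f"Possible DDoS: {rate} req/s deviates sharply from baseline")
```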
Continuing his analysis, Spisak turns his attention to phishing and social engineering attacks. "AI-driven email security solutions are poised to excel in identifying phishing emails," he asserts. Drawing on their capability to analyze email content and behavioral patterns, these systems are evolving to anticipate adversaries' tactics and recipients' responses. Spisak emphasizes the need for continuous improvement in AI defenses to keep pace with evolving threats. "It's a cyclical attack-defend pattern," he remarks. "We're each striving to outpace the other in a perpetual game of cat and mouse."
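As a stripped-down illustration of the content-analysis half of that idea, the toy classifier below is trained on a handful of made-up emails. Real email security products draw on far richer signals (sender reputation, link analysis, behavioral baselines) than this sketch suggests:

```python
# Toy sketch of content-based phishing detection with scikit-learn.
# The inline dataset is fabricated for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password here immediately",
    "Urgent: wire transfer required, click this link now",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = ["Please verify your password at this link urgently"]
print(model.predict_proba(suspect))  # columns: [P(legitimate), P(phishing)]
```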
Another question posed during the interview: “What proactive steps can be taken to protect AI models from adversarial attacks and evasion techniques?” Spisak starts off by noting that many models are bootstrapped from open-source models or models from other sources. He goes on to offer a discerning reply:
“Building a trusted AI system starts with making sure you evaluate policies, practices and the lineage (or the pedigree) of where you're getting your base foundational model from, and then growing from there. And then, of course, doing what I'll call classic cybersecurity hygiene – first making sure we have a cybersecurity foundation in place for cloud, data, identity and application security to address common risks. Then we can progress to applying AI-specific measures for emerging threats.
Shifting left, doing the testing earlier in the AI lifecycle, will prove, I think, super valuable. And continuously assessing and training. AI and its associated security are rapidly evolving and require an ongoing commitment to learning and research that results in regularly updating security controls to meet continuously changing threats.”
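One concrete instance of the "pedigree" and shift-left controls Spisak describes is pinning and verifying a checksum for the base model artifact before it is ever fine-tuned or deployed. In the sketch below, the file name and pinned hash are hypothetical placeholders:

```python
# Hypothetical sketch of one provenance control: verify a pinned checksum
# for a downloaded base model before loading it. Run this in CI as a
# shift-left gate, ahead of fine-tuning, evaluation or deployment.
import hashlib
from pathlib import Path

PINNED_SHA256 = "<sha256 published by the model provider>"  # placeholder

def verify_model_artifact(path: Path, expected_sha256: str) -> None:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(
            f"Model artifact {path} failed provenance check: "
            f"expected {expected_sha256}, got {digest}"
        )

verify_model_artifact(Path("base_model.safetensors"), PINNED_SHA256)
```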
Key Performance Metrics in Evaluating AI
The interview shifts gears a bit when Spisak is asked what key performance metrics he would look at, or advise someone else to look at, to evaluate the effectiveness of AI-powered solutions, and how those metrics should be tracked over time.
Spisak responded with an overview of essential metrics, emphasizing the importance of fundamental statistics, such as false positive and false negative rates, alongside the successful detection rate. These metrics, he explained, offer crucial insights into the accuracy and performance of AI systems, particularly in the cybersecurity domain.
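These rates fall out directly from a confusion matrix. A quick sketch with made-up alert counts shows how they are computed:

```python
# Basic detection metrics from raw confusion counts (numbers are made up).
tp, fp, tn, fn = 180, 20, 760, 40  # alerts vs. ground truth over some period

detection_rate = tp / (tp + fn)       # true positive rate, a.k.a. recall
false_positive_rate = fp / (fp + tn)  # benign events wrongly flagged
false_negative_rate = fn / (fn + tp)  # real threats missed

print(f"Detection rate:      {detection_rate:.1%}")       # 81.8%
print(f"False positive rate: {false_positive_rate:.1%}")  # 2.6%
print(f"False negative rate: {false_negative_rate:.1%}")  # 18.2%
```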
In addition to these metrics, Spisak highlighted the significance of understanding threat attribution, which contextualizes detected threats within specific threat actor groups or tactics. He also stressed the importance of considering the cost-to-value ratio, ensuring a balance between the implementation cost of AI solutions and the value they deliver to users.
When it comes to tracking these metrics over time, Spisak advocates for a proactive approach. He suggested monitoring for model drift, where the behavior of AI models deviates from their intended function. This, he explained, can be achieved through robust MLOps practices, involving standard operating procedures and protocols throughout the model lifecycle.
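As a rough illustration of what such monitoring can look like in practice, one common check compares the distribution of production model scores against a reference window. The Kolmogorov-Smirnov test and the thresholds below are assumptions for the sketch, not a prescribed MLOps standard:

```python
# Illustrative drift check: compare this week's model output scores
# against a validation-time reference using a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference_scores = rng.beta(2.0, 5.0, size=5000)  # scores captured at validation time
current_scores = rng.beta(2.6, 5.0, size=5000)    # scores observed in production

statistic, p_value = ks_2samp(reference_scores, current_scores)
if p_value < 0.01:  # illustrative significance threshold
    print(f"Drift suspected (KS={statistic:.3f}, p={p_value:.2e}): investigate or retrain")
```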
Moreover, Spisak introduced the concept of adversarial simulations, akin to red teaming, where AI models are subjected to simulated attacks to identify vulnerabilities and enhance defenses. This approach, he noted, is gaining traction, particularly in the startup space, as organizations seek to fortify their AI systems against evolving threats.
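In the same spirit, a lightweight adversarial probe can be as simple as perturbing inputs the way an evasive attacker might and counting how often the model's verdict flips. Everything in the sketch below, including the keyword rule standing in for a real classifier, is illustrative:

```python
# Illustrative adversarial probe for a text classifier: apply simple
# evasion-style perturbations and count how often the verdict flips.
import random

def classify(text: str) -> int:
    # Placeholder keyword rule standing in for the model under test.
    return 1 if "password" in text.lower() or "urgent" in text.lower() else 0

def perturb(text: str, rng: random.Random) -> str:
    # Obfuscation tricks attackers use to dodge naive filters.
    tricks = [
        lambda s: s.replace("o", "0"),  # homoglyph substitution
        lambda s: s.replace("a", "@"),
        lambda s: " ".join(s),          # letter spacing
        lambda s: s.upper(),
    ]
    return rng.choice(tricks)(text)

rng = random.Random(42)
sample = "Urgent: confirm your password now"
flips = sum(classify(perturb(sample, rng)) != classify(sample) for _ in range(100))
print(f"Verdict flipped on {flips}/100 perturbed variants")
```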
As the cybersecurity landscape continues to evolve, the role of AI will become increasingly prominent. Spisak emphasized the need for organizations to embrace AI-driven solutions while exercising caution and maintaining human oversight. By staying ahead of emerging threats, leveraging AI for proactive defense, and continuously evolving security strategies, organizations can navigate the AI landscape and safeguard their digital assets effectively.