I recently had the opportunity to chat with Palo Alto Networks Senior Director of Worldwide Public Cloud Security SEs, Allan Kristensen, who brings 15+ years of experience building highly effective solutions engineering (SE) teams. The Palo Alto Networks SE team has firsthand knowledge of the unique and diverse cloud security challenges that prospective customers are looking to solve.
Based on my conversation with Allan, here are seven essential principles to guide you as you evaluate and select the right cloud security offering for your multi-cloud environments, spanning AWS, Azure, and Google Cloud Platform.
Principle One: Multi-cloud support – AWS, Azure, and GCP at a minimum
In our experience, more than three-quarters of our customers have a multi-cloud strategy – maybe not initially, but definitely down the road. With that in mind, it’s important to select a solution that can span clouds and deliver truly integrated multi-cloud support – with a centralized approach that seamlessly unifies visibility across each of your cloud environments today and in the future.
Principle Two: 100% SaaS-based and API driven – no agents or proxies
A 100% API-based SaaS solution is the only way to effectively manage the dynamic, distributed nature of cloud environments. Our experience shows that customers who try to rely on agent- or proxy-based point products introduce considerable friction into their deployments and end up with security blind spots. Far too much overhead, risk, and manual work is required to deploy and maintain non-API-based products.
Principle Three: Continuous resource discovery
You can’t protect what you can’t see. It’s important to select a solution that continuously monitors and dynamically discovers your cloud resources, such as virtual machines, database instances, storage buckets, users, access keys, security groups, networks, gateways, snapshots, and more. A centralized and auto-updating inventory that displays the security and compliance status of every deployed resource is foundational for a truly effective cloud security strategy.
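To make this concrete, here is a minimal sketch of what API-driven resource discovery can look like against a single AWS account using boto3. The region handling, pagination, and single-account scope are simplifying assumptions; a real inventory would fan out across accounts, regions, and providers (Azure and GCP via their own SDKs) and refresh continuously.

```python
# Minimal sketch: API-driven inventory of one AWS account with boto3.
# Pagination, multi-region, and multi-cloud fan-out are omitted for brevity.
import boto3

def discover_aws_resources(region="us-east-1"):
    inventory = {"instances": [], "security_groups": [], "buckets": []}

    ec2 = boto3.client("ec2", region_name=region)
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            inventory["instances"].append(instance["InstanceId"])

    for group in ec2.describe_security_groups()["SecurityGroups"]:
        inventory["security_groups"].append(group["GroupId"])

    s3 = boto3.client("s3")  # S3 bucket listing is account-wide
    for bucket in s3.list_buckets()["Buckets"]:
        inventory["buckets"].append(bucket["Name"])

    return inventory

if __name__ == "__main__":
    print(discover_aws_resources())
```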
Principle Four: Automated resource monitoring
Equally important is your solution’s ability to automatically apply robust security policies and swiftly remediate misconfigurations so that your corporate-defined standards are continuously enforced. These capabilities must cover all the key risk vectors in your cloud environments, including:
- Configuration checks: Recent research from Unit 42 highlights that 32% of organizations publicly exposed at least one cloud storage service. Configuration checks help ensure that every deployed cloud resource is properly configured and within defined guardrails, and that there is no configuration drift across your AWS, Azure, and GCP public cloud environments.
- Network activities: The same Unit 42 research also shows that 11% of organizations currently have cryptojacking activities in their environments. To ensure you have complete visibility into suspicious network traffic and activities, your chosen solution must be able to continuously monitor your cloud environments. It’s not enough to just have configuration and compliance checks in place, because these will only tell you what can go wrong, not what is going wrong. Here’s an example:
Configuration checks can detect and alert on loosely configured Security Groups that allow inbound traffic on all ports from all IP addresses, which could be a mission-critical issue. However, without network monitoring you cannot determine whether the exposure has actually been exploited, or whether malicious traffic has penetrated beyond the Security Group. A minimal sketch of this kind of configuration check follows this list.
- User and access key monitoring: Unit 42 data also indicates 29% of organizations experienced potential account compromises, which can lead not only to data loss but also to loss of control and, ultimately, loss of confidence in your cloud environments. User behavior analytics (UBA) and other machine learning (ML)-based capabilities can help detect stealthy activities, such as the use of hijacked credentials, and alert on anomalous behavior. Without UBA, it’s nearly impossible to detect sophisticated attacks in time.
- Host vulnerability and threat detection monitoring: It’s important to select a cloud security offering that can correlate and contextualize threat and vulnerability data from third parties.
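Below is a minimal sketch of the configuration check described in the Security Group example above: it flags AWS security groups with inbound rules open to the entire internet (0.0.0.0/0). Port-range and IPv6 handling are simplified, and a real policy engine would evaluate many more checks across all three providers.

```python
# Minimal sketch: flag AWS security groups with inbound rules open to the
# internet (0.0.0.0/0). IPv6 ranges and protocol edge cases are omitted.
import boto3

def find_open_security_groups(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append({
                        "group_id": group["GroupId"],
                        "from_port": rule.get("FromPort", "all"),
                        "to_port": rule.get("ToPort", "all"),
                    })
    return findings

if __name__ == "__main__":
    for finding in find_open_security_groups():
        print("Open to the internet:", finding)
```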
Principle Five: Correlate lots of data
Continuous contextualization of multiple, disparate data sets is critical for building a deep understanding of your security posture. Only once you have a complete understanding of your security profile and risks can you quickly remediate issues. Here are just a couple of common examples:
- Workloads that combine overly permissive security group configurations, known host vulnerabilities, and traffic from suspicious IP addresses (a minimal correlation sketch follows these examples).
- Privileged user activities performed across cloud environments from unusual (not previously seen) locations.
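Here is a minimal sketch of the first correlation example, under stated assumptions: three hypothetical data sets (configuration findings, host vulnerabilities, and suspicious flow records), each keyed by resource ID, are joined so that only workloads exhibiting all three signals are escalated. The field names and data shapes are illustrative, not any particular product's schema.

```python
# Minimal sketch: correlate three illustrative data sets by resource ID so
# only workloads showing all three risk signals are escalated first.
# Field names and sample values are hypothetical placeholders.
def correlate_findings(open_groups, host_vulns, suspicious_flows):
    risky_config = {f["resource_id"] for f in open_groups}
    vulnerable = {v["resource_id"] for v in host_vulns}
    suspicious = {f["resource_id"] for f in suspicious_flows}
    return risky_config & vulnerable & suspicious

open_groups = [{"resource_id": "i-0abc", "check": "sg-open-to-world"}]
host_vulns = [{"resource_id": "i-0abc", "vuln": "example-critical-cve"}]
suspicious_flows = [{"resource_id": "i-0abc", "src_ip": "203.0.113.7"}]

print(correlate_findings(open_groups, host_vulns, suspicious_flows))
# -> {'i-0abc'}: the highest-priority workload to remediate first
```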
Principle Six: Remediation is good. Auto-remediation is better.
Having multiple remediation options (both guided and automated) is important for reducing your window of exposure. For example, if the system identifies a publicly accessible Network Security Group associated with a sensitive workload, the ability to automatically restrict access is paramount. The ability to write custom remediation rules tailored to your specific needs is also key. A "self-healing" capability ensures that your ‘gold standard’ security and compliance policies are always enforced.
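As a minimal sketch of automated remediation for the public-ingress example above, the snippet below revokes an internet-open rule from an AWS security group via the EC2 API. Error handling, change approval, and audit logging are intentionally omitted; in practice this would be triggered by an alert from the policy engine rather than run ad hoc.

```python
# Minimal sketch: revoke an internet-open ingress rule from an AWS security
# group. A real "self-healing" workflow would wrap this with approvals,
# audit logging, and alert-driven triggering.
import boto3

def revoke_public_ingress(group_id, port, protocol="tcp", region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    ec2.revoke_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": protocol,
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )

# Example: close SSH exposed to the internet on a flagged group.
# revoke_public_ingress("sg-0123456789abcdef0", port=22)
```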
Principle Seven: Integrate
Finally, it's important to leverage an open platform that enables you to send cloud security alerts to your existing tools and workflows, such as your SIEM, SOAR platform, ticketing systems, and collaboration tools.
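The sketch below shows one simple form this integration can take: forwarding an alert as JSON to a downstream system over a generic webhook. The URL and payload fields are hypothetical placeholders; a real integration would use the target system's own API (for example, a SIEM HTTP collector or a ticketing API).

```python
# Minimal sketch: forward a cloud security alert to a downstream tool via a
# generic webhook. The URL and payload fields are hypothetical placeholders.
import json
import urllib.request

def send_alert(webhook_url, alert):
    request = urllib.request.Request(
        webhook_url,
        data=json.dumps(alert).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status

alert = {
    "severity": "high",
    "resource_id": "i-0abc",
    "finding": "Security group open to 0.0.0.0/0 on port 22",
}
# send_alert("https://siem.example.com/hooks/cloud-alerts", alert)
```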
Prisma Public Cloud is the most complete cloud security offering on the market, incorporating all seven principles discussed above. See the Prisma difference for yourself.