This post is part of an ongoing blog series examining predictions and recommendations for cybersecurity in 2018.
For 2017, I predicted that a large-scale, public cloud-specific (e.g., IaaS, PaaS) breach would finally center industry attention on cloud security. This year, we did in fact experience some high-profile security incidents, including the following:
- Compromised servers monetized on the dark web: Hundreds of compromised Windows servers were available for sale on the dark web, some with asking prices as high as $15,000 because they included user data and administrative access. In these cases, customers’ public cloud accounts were quietly compromised and their resources monetized by attackers. It’s worth noting that this is no different from past examples of a desktop or server in a corporate data center being compromised and used to steal user information or execute a large-scale DoS attack from a physical network. It just happens that these servers are deployed in the public cloud instead.
- Misconfigured applications and services: There were numerous examples in 2017 of misconfigured applications and services that resulted in exposed data, ransomware and malware distribution.
- Thousands of instances of a public cloud search service were found to be distributing point-of-sale (POS) malware dating back to 2012. Infected servers became part of a larger POS botnet with command-and-control functionality for POS malware clients that were collecting, encrypting and transferring credit card information stolen from POS terminals, RAM or infected Windows machines.
- A popular open source database with permissive security settings was the target of a ransomware campaign. The database was found to be widely deployed using either an early version with no security settings or a more current version whose security settings are permissive by default and require configuration. The result was more than 25,000 exposed instances, placing the data within them at risk.
These two misconfiguration cases are no different from the configuration errors made, and exploited, in years past in applications and servers on physical networks.
Many factors behind these incidents remain unknown, but two points are consistent across all of them: account owners must configure native security features to achieve a functional deployment, and the applications and services within are deployed with permissive security by default, requiring further configuration to improve security.
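To make the configuration point concrete, here is a minimal audit sketch, assuming an AWS environment and the boto3 SDK; the choice of provider, region and rule criteria are illustrative assumptions, not details from the incidents above. It flags security group rules that accept inbound traffic from the entire internet – the kind of permissive setting account owners are responsible for tightening.

```python
# A minimal audit sketch, assuming an AWS account and the boto3 SDK.
# It flags security group rules left open to the whole internet.
import boto3


def find_open_ingress_rules(region="us-east-1"):
    """Return (group id, protocol, from port, to port) for rules open to 0.0.0.0/0."""
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    # Pagination is omitted for brevity in this sketch.
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            open_to_world = any(
                r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
            )
            if open_to_world:
                findings.append((
                    group["GroupId"],
                    rule.get("IpProtocol", "-1"),
                    rule.get("FromPort", "all"),
                    rule.get("ToPort", "all"),
                ))
    return findings


if __name__ == "__main__":
    for group_id, protocol, from_port, to_port in find_open_ingress_rules():
        print(f"{group_id}: {protocol} ports {from_port}-{to_port} open to the internet")
```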
As organizations migrate to the public cloud, they sometimes tell us native security features, like security groups and web application firewalls, are “good enough” – a puzzling position, to be sure. It is well documented by security organizations like MITRE that many of the same ports required to enable common public cloud applications (e.g., TCP/80 (HTTP), TCP/443 (HTTPS), TCP/25 (SMTP), TCP/53 (DNS), TCP/3389 (RDP), TCP/22 (SSH), TCP/135 (RPC)) are the very same ports attackers commonly use for evasion and data exfiltration.
The question then becomes: if you were deploying new applications and data in a new data center today, would you fall back on “good enough” port-based controls for security? No. Then why do we see it happening in new public cloud deployments? History does indeed repeat itself – we have learned that “good enough” is not good at all.
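As a toy illustration of the limits of port-based controls, consider the sketch below; the port numbers and payloads are invented for this example. A rule that simply allows TCP/443 returns the same verdict for an ordinary web request and for bulk data leaving the network, because it never looks past the port number.

```python
# A toy illustration of why port-based filtering says little about intent.
ALLOWED_PORTS = {80, 443}  # a typical "good enough" port-based rule


def port_based_decision(dst_port: int) -> str:
    """A port-based control sees nothing but the destination port."""
    return "allow" if dst_port in ALLOWED_PORTS else "deny"


# Two flows to the same destination port: one ordinary web request,
# one bulk export of sensitive data. Payloads are invented.
legitimate_flow = {"dst_port": 443, "payload": b"GET /index.html HTTP/1.1\r\n"}
exfiltration_flow = {"dst_port": 443, "payload": b"BEGIN-DUMP customer_records.csv"}

for flow in (legitimate_flow, exfiltration_flow):
    # Both flows get the same verdict; the rule never inspects the payload.
    print(flow["payload"][:20], "->", port_based_decision(flow["dst_port"]))
```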
My 2018 Prediction
Driven by these public cloud security incidents, and likely more to come, customers will reject the “native security is good enough” approach to protecting their public cloud deployments. There is a common misconception that the public cloud is more secure and therefore that basic security features are good enough. It’s well known that users themselves often create the entry point for an attacker, sometimes inadvertently via a drive-by download or more directly via a phishing email, for example. Either way, once inside the network, the attacker gains a foothold and can move laterally to any resource, be it in the data center or in the public cloud.
We founded Palo Alto Networks on the premise that port-based access control was no longer good enough to protect your network. Applications no longer adhered to a strict port-and-protocol mapping, allowing tech-savvy applications and users to bypass port-based controls with ease by hopping ports, using SSL, sneaking across port 80 or using non-standard ports. Our approach uses application identity as the basis for access control and threat prevention policy to protect the network. Customers and the market agreed. Now we are applying the same premise to the cloud: “good enough” doesn’t work there, either.
My 2018 Recommendation
The shared responsibility model dictates that the cloud provider protects the infrastructure while the customer protects the applications and data. However, many organizations still do not fully grasp their role in this model. Despite cloud providers’ efforts to educate their customers, we continue to see customers move to the cloud and apply “good enough” security to protect their applications and data.
Security best practices dictate that protecting your applications and data in the public cloud should follow a prevention-based approach: understand your threat exposure through application visibility, use policy to reduce the attack surface, then prevent threats and data exfiltration within the allowed traffic.
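As a rough sketch of that three-step flow – visibility, attack surface reduction, then threat prevention within allowed traffic – consider the toy policy below. The application names, signatures and classification logic are hypothetical placeholders, not how any particular product works.

```python
# A toy sketch of a prevention-oriented policy: identify the application,
# apply an allow-list, then inspect only the allowed traffic for threats.
ALLOWED_APPS = {"web-browsing", "ssl"}   # illustrative allow-list
THREAT_SIGNATURES = (b"BEGIN-DUMP",)     # placeholder threat pattern


def classify_application(payload: bytes) -> str:
    """Hypothetical application identification; real engines use far richer signatures."""
    if payload.startswith((b"GET ", b"POST ")):
        return "web-browsing"
    if payload.startswith(b"\x16\x03"):  # TLS handshake record header
        return "ssl"
    return "unknown"


def policy_decision(payload: bytes) -> str:
    app = classify_application(payload)            # step 1: visibility
    if app not in ALLOWED_APPS:                    # step 2: reduce the attack surface
        return f"deny ({app})"
    if any(sig in payload for sig in THREAT_SIGNATURES):
        return f"block: threat inside allowed app ({app})"  # step 3: prevent within allowed traffic
    return f"allow ({app})"


if __name__ == "__main__":
    print(policy_decision(b"GET /index.html HTTP/1.1\r\nHost: example.com"))
    print(policy_decision(b"POST /upload HTTP/1.1\r\nBEGIN-DUMP customer_records.csv"))
    print(policy_decision(b"\x00\x01 some-custom-protocol"))
```

The point of the ordering is that content inspection only ever runs on traffic an explicit allow-list has already admitted.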
I recommend organizations take a more aggressive stance in embracing their role in the public cloud shared responsibility model and implement security as strong as that which protects their on-premises data centers. Not only can it be done – it’s the best way to ensure a secure cloud experience.