The past decade of cybersecurity has been relentlessly focused on stopping threats at the network edge. The implicit assumption behind this approach is that the interior of your network is a trusted zone, and everything outside is untrusted. With this in mind, vendors offered more and more ways to scan traffic at this logical boundary, attempting to detect known threats and, ideally, block them.
For the better part of the past ten years, this approach was the only one available, and it did a reasonably good job of keeping organizations safe. Traditional IPS/IDS, stateful firewalls, web security – all of it relied on scanning traffic and making binary “yes/no” decisions as it passed through. These decisions were typically based on known-bad content, so they could only stop what security vendors already knew to be malicious.
Then, adversaries and threats changed. They had been watching, learning, and understood that this “hard outer shell, squishy center” represented a golden opportunity to carry out their objectives. To the adversary, getting past the edge meant free rein to move laterally within organizations, finding valuable intellectual property wherever it resided and exfiltrating it using hard-to-detect protocols. The edge, and the legacy technologies that protected it, had become an easily evaded – and expected – barrier.
This all raises one simple question: why would you only detect malware at the network edge?
Let’s take a step back and examine how a typical advanced attack works:
• Make an initial compromise via a spear-phishing email, which leads to an infected site with a drive-by download or a malicious attachment.
• This drive-by download exploits a zero-day vulnerability in a browser, or the malicious attachment exploits one in client-side reader software.
• In either case, the attacker has masked the traffic by compromising a benign site or using an exploit that has never been seen before, making it invisible to traditional solutions.
• The attacker has now established a foothold through this exploited client, a base of operations for future activity.
• From here, the attacker will deliver the actual malicious payload – so-called second-stage malware. This is often done over protocols such as FTP, using encryption, on non-standard ports.
• Once the malicious payload has been delivered, the attacker has free rein to pivot laterally within the organization, moving from the initial client toward their final target.
• Often, they will hop multiple times and then steal data using evasive means.
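One of the evasive techniques in the steps above – delivering second-stage malware over familiar protocols on non-standard ports – can be illustrated with a toy protocol/port-mismatch heuristic. This is a minimal sketch, not any vendor's detection logic; the `EXPECTED_PORTS` table and `is_suspicious` function are hypothetical names chosen for illustration:

```python
# Hypothetical sketch: flag traffic where the identified application-layer
# protocol does not match the ports it conventionally uses.
EXPECTED_PORTS = {
    "ftp": {20, 21},
    "http": {80, 8080},
    "tls": {443},
    "ssh": {22},
}

def is_suspicious(protocol: str, port: int) -> bool:
    """Return True when a known protocol appears on an unexpected port."""
    expected = EXPECTED_PORTS.get(protocol.lower())
    if expected is None:
        return False  # unknown protocol: no baseline to compare against
    return port not in expected

# FTP on its standard control port is unremarkable...
assert is_suspicious("ftp", 21) is False
# ...but FTP tunneled over an arbitrary high port warrants inspection.
assert is_suspicious("ftp", 50021) is True
```

Real traffic classification is far harder than a port lookup, of course – which is exactly why delivery over encrypted, non-standard channels so often slips past port-based controls.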
In this example, the perimeter has become a trivial “wall” for the adversary to overcome. The combination of unknown threats and persistent action within the organization itself is a very common method for truly advanced attackers.
Now, going back to the initial question: what if your entire organization’s network was able to detect and prevent this attack in multiple places? Not only this, but what if your security devices automatically augmented your security posture by discovering new threats and creating new protections?
Now your infrastructure has become an adaptive security framework that is tailored toward how advanced threats operate today. In order to gain this pervasive functionality, there are a few typical places where security devices can be deployed:
• Internet Edge
• Data Center Edge
• Between Virtual Machines in the Data Center
• On Mobile Devices and Endpoints
With this type of architecture, new threats are discovered at each location in the network, and protections are created for them. This intelligence is then automatically fed into every security device, wherever it is deployed. This gives the advantage to you rather than the adversary, because you are now increasing the probability of stopping an attack at each location and at each stage of the attack kill chain.
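The compounding effect of layered enforcement can be made concrete with a little arithmetic. Assuming – purely for illustration – that each enforcement point independently blocks a given attack with probability p, the chance the attack evades all n points is (1 − p)^n. The function below is a hypothetical sketch of that back-of-the-envelope model, not a measurement of any real product:

```python
def evasion_probability(p_block: float, n_points: int) -> float:
    """Probability an attack evades all n enforcement points, assuming
    each point independently blocks it with probability p_block."""
    return (1 - p_block) ** n_points

# A single edge device that blocks 60% of attacks still misses 40%...
print(round(evasion_probability(0.6, 1), 4))  # 0.4
# ...but four independent inspection points cut that to ~2.6%.
print(round(evasion_probability(0.6, 4), 4))  # 0.0256
```

The independence assumption is generous – real enforcement points that share the same signatures will miss the same attacks – which is why the shared-intelligence feedback loop described above matters as much as the number of inspection points.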
The network edge is the ideal location for quickly preventing the vast majority of attacks, but looking forward, you should consider how pervasive deployments can stop the new breed of advanced attack.
Scott Simkin is a Senior Manager in the Cybersecurity group at Palo Alto Networks. He has broad experience across threat research, cloud-based security solutions, and advanced anti-malware products. He is a seasoned speaker on an extensive range of topics, including Advanced Persistent Threats (APTs), presenting at the RSA conference, among others. Prior to joining Palo Alto Networks, Scott spent 5 years at Cisco where he led the creation of the 2013 Annual Security Report amongst other activities in network security and enterprise mobility. Scott is a graduate of the Leavey School of Business at Santa Clara University.