Palo Alto Networks News of the Week – May 16

Interested in the top Palo Alto Networks news from this past week? It’s all right here.

Palo Alto Networks researchers identified a new Trojan, Funtasy, that targets Spanish Android users with sneaky SMS charges.

For the Record: We recently asked several Palo Alto Networks customers to describe the benefits of WildFire, and why adding a WildFire subscription to their Palo Alto Networks deployment is a better option than buying a standalone detection product or service.

Sharat Sinha, Palo Alto Networks VP, detailed 3 security priorities for the Asia Pacific region.

Kevin Magee, Palo Alto Networks Regional Sales Manager for Ontario, Public Sector, shared his perspective on the success of the Palo Alto Networks Expert Forums held recently in Ontario’s unique public sector community.

We hosted our third annual EMEA Expert Tour under the sun this week in Marbella, Spain with NextWave partner sales engineers and technicians across the EMEA region.

We talked at our Federal Expert Forum about tackling the government’s toughest cybersecurity challenges.

As a continuing part of our government and public sector activities, we are featured on Federal News Radio/WTOP in the United States over the next few months. Check it out to hear Rick and Steve Hoffman, VP, U.S. Federal, talk about what advanced government security teams are doing today.

Danelle Au discussed the massive challenge of securing the Internet of Things.

Our own James Sherlow commented on whether it is time to kill OpenSSL post-Heartbleed.

Join fellow IT Managers & Security Experts at the Palo Alto Networks Customer Forum on May 21 in The Netherlands. If you attend, you could win a great prize.

Here are more upcoming events you should know about:

[Source: Palo Alto Networks]

IT Security: It’s Time to Change the Game – And Here’s How

Summary: After several major security breaches, is there another way to do things?

We do IT differently these days, with users bringing their own devices into our networks, with our apps in the cloud, and with our users wirelessly connected — from anywhere, at any time. But we still do security the same old way, with firewalls as the mediaeval fortresses guarding the gates of our walled-city datacentres.

So how can we rethink the ways we protect our changing IT world? We’ve already started to understand that what’s most important is the data and information we use, not the software, nor even our PCs and smartphones. We’ve started to encrypt data, at rest and in motion, and we’re also ensuring our users and apps work with the least possible set of privileges.

But, as the news headlines show, it’s not enough. With millions of us having to replace credit cards and deal with the fallout from recent major data losses, the failings of current security practices have been put in sharp relief. It’s time to do something different, to move from detecting attacks and clearing up after them, to preventing those attacks in the first place.

In the shadow of those high-profile intrusions, I spent some time with Palo Alto Networks, to try to understand how the security company is going beyond the traditional firewall, and coming up with an alternate way of looking at security.

Detecting malware is a complex piece of the puzzle. It’s no longer a matter of looking for malware signatures — for one thing, malware authors have long been able to create software that changes from download to download, and the targeted malware used by state actors and sophisticated cyber criminals is often designed to penetrate a specific network.

New malware that’s never been analysed won’t be blocked by conventional tools: someone must have been infected and lost data for that malware to be found, analysed and its signature added to the daily download of signature files. And while in many cases that someone is a honeypot system on some vendor’s network, there’s still a chance that that someone is you, and that it’s your data that’s been lost.

The risk may be small, but it’s still a risk: and the higher profile you are, the higher the risk. Home PCs might well be safe with a traditional signature-based approach, but that’s an approach that’s risky for businesses running cloud services, or hosting APIs for their apps.

What’s really important is understanding just how malware works. It turns out that while individual pieces of malware differ, the attack paths and methods they use are identical. To monitoring software, a buffer overflow or a SQL injection looks the same; so instead of protecting the operating systems of modern network endpoints, we need to monitor the applications and services they’re using, looking for the signatures of attacks, and blocking those attack paths rather than the malware. That’s the approach taken by Israeli security company Cyvera, recently acquired by Palo Alto Networks.
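To make the idea concrete, here is a minimal, hypothetical sketch of monitoring for attack techniques rather than malware signatures. The patterns and function names are purely illustrative, not Cyvera’s actual implementation; the point is that a small number of technique signatures covers many different malicious payloads.

```python
import re

# Illustrative signatures of attack *techniques* (not specific malware):
# many distinct payloads all rely on the same small set of techniques.
ATTACK_TECHNIQUES = {
    "sql_injection": re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./|\.\.\\"),
}

def inspect_request(params: dict) -> list:
    """Return (technique, parameter) pairs detected in a request's parameters."""
    detected = []
    for name, value in params.items():
        for technique, pattern in ATTACK_TECHNIQUES.items():
            if pattern.search(str(value)):
                detected.append((technique, name))
    return detected

# Two different payloads, two underlying techniques:
print(inspect_request({"user": "admin' OR 1=1"}))     # SQL injection attempt
print(inspect_request({"file": "../../etc/passwd"}))  # path traversal attempt
```

Blocking at this layer works no matter how the malware mutates from download to download, because the technique itself cannot change shape as easily as the payload can.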

By analysing the attack patterns of thousands of pieces of malware, Cyvera has been able to identify fewer than thirty distinct attack techniques. It’s then able to sit between your applications and those attacks, monitor for suspicious activity, and block and report the code that’s trying to penetrate your network.

If malware can’t attack, no matter what its underlying code might be, then we’ve shifted the focus to prevention rather than detection. That’s an important distinction: it’s an approach that, if implemented at the OS level, would have meant Microsoft didn’t need to issue a patch for IE on Windows XP, as the browser would have been protected automatically.

Changing the way we think about protecting our networks from malware changes the game. It lets us focus on understanding the software engineering implications of malware, and allows us to harden the areas of our OSes and software that need hardening by using those common attack patterns as part of our software test procedures. However, we shouldn’t become complacent.

Just because malware uses a set of common attack patterns doesn’t mean that they’re the only possible attack patterns: it’s just that they’re the easiest or most effective routes into someone’s network. There will always be other ways in, just harder and more expensive ones. Even so, by continuing to analyse attack signatures, it will remain easier to prevent attacks than to detect malware and then remediate its effects.

These are tools that can be used alongside next-generation firewalls, monitoring for unusual network traffic and unknown applications. Bringing the two together turns security into a proactive, rather than reactive, technology, one that’s much more in tune with modern IT and the rapid changes in how we work. They’re also techniques that don’t need to be associated with physical hardware, and can be implemented as part of the software control plane of a software-defined network, or even as virtual machines in a virtualised infrastructure — as Palo Alto Networks is doing in conjunction with VMware.

It’s a brave new world out there, and it’s good to see that the security industry is thinking about how it needs to react, taking advantage of the same new tools and techniques we’re using in our private, hybrid, and public clouds. Now it’s up to us to think about how we can move to preventing attacks on our infrastructure, and keeping that vital data right where it belongs.

Simon Bisson is a freelance technology journalist. He specialises in architecture and enterprise IT. He ran one of the UK’s first national ISPs and moved to writing around the time of the collapse of the first dotcom boom. He still writes code.

[Source: ZDNet]

Securing an Evolving Cloud Environment

The chief information officer (CIO) of a large utility provider had decided to move email, file shares, video sharing and the company’s internal web site to the cloud and needed to know the security requirements for this project within two weeks. The organization already had security requirements in place for traditional third-party vendors; however, these requirements were not a good fit for the cloud services the company was looking to adopt.

The director of security at the utility provider approached SecureState, a management consulting firm specializing in information security, with the problem.

Unlike traditional third-party solutions, where the vendor is responsible for all, or most, of the security controls, in the cloud there are often cases where the client is responsible for managing and maintaining key security controls. For example, if a company were hosting a homegrown application at a Platform as a Service (PaaS) provider, the client would generally be responsible for the security of the application itself (figure 1), while the PaaS provider would be responsible for securing the platform and infrastructure supporting the application. However, if a company selected a Software as a Service (SaaS) application, the cloud provider would generally be responsible for all layers of the stack, and the client would have very little responsibility for, or control over, the security of the application (figure 2).
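The shared-responsibility split described above can be sketched as a simple lookup table. The layer names and assignments below are illustrative generalizations of the common pattern, not any specific provider’s contract:

```python
# Who typically secures each layer of the stack, by service model.
# These assignments are generalizations; actual contracts vary by provider.
RESPONSIBILITY = {
    "IaaS": {"physical": "provider", "infrastructure": "provider",
             "platform": "client", "application": "client", "data": "client"},
    "PaaS": {"physical": "provider", "infrastructure": "provider",
             "platform": "provider", "application": "client", "data": "client"},
    "SaaS": {"physical": "provider", "infrastructure": "provider",
             "platform": "provider", "application": "provider", "data": "client"},
}

def client_responsibilities(model: str) -> list:
    """Layers the client must secure under a given service model."""
    return [layer for layer, owner in RESPONSIBILITY[model].items()
            if owner == "client"]

print(client_responsibilities("PaaS"))  # client still owns the app and its data
print(client_responsibilities("SaaS"))  # client owns little beyond the data itself
```

Making this table explicit per vendor is exactly what keeps responsibility gaps from opening up during a migration.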


With that in mind, when moving to the cloud it is critical to clearly outline who is responsible for each component and to have requirements that give the organization its desired level of security while remaining flexible enough to fit the different service models available from cloud providers.

For this utility provider, the move of these initial four services was part of a larger effort to eventually migrate all corporate IT services to the cloud, so in addition to quickly developing requirements for the applications listed previously, the director of security also needed a way to rapidly assess and categorize future cloud service providers to determine what minimum set of controls should be applied. This system also needed to be flexible enough to support new technology developments as cloud solutions mature. Further, a system would need to be put in place to track and monitor compliance of these key business partners to the required controls.

Building a Framework

To assist with this, SecureState created a program to review, approve and manage these cloud providers. The program was built around a custom cloud security framework (CSF) that the team developed. The framework comprised numerous components, including:

  • Data classification and cloud service provider categorization guidelines
  • A control set
  • Vendor questionnaires mapped back to the control set
  • Federated identity management standards

To create this framework, the team met with stakeholders to gather business, technical and security requirements. The framework leveraged the utility company’s existing security policies, procedures and standards while adding requirements specific to cloud computing environments.

The controls in the framework were broken down by the classification of the data processed and/or stored by the provider (public, internal, confidential and regulatory). Each level added another layer of controls that needed to be present in the environment. To ensure that the controls were properly applied to various cloud models and use cases, a lookup table was created to show who is commonly responsible for managing each of the controls in the framework, depending on what type of cloud service model (e.g., SaaS, IaaS, PaaS) is being used.
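The layered control scheme might be sketched as follows. The four classification tiers come from the article; the specific control names are illustrative placeholders, since the utility’s actual control set is not published:

```python
# Controls accumulate as data classification rises. Tiers are from the
# article; the control names here are illustrative placeholders.
CONTROL_LAYERS = {
    "public":       ["basic access control"],
    "internal":     ["authentication", "logging"],
    "confidential": ["encryption at rest", "encryption in transit"],
    "regulatory":   ["audit trail", "regulator-specific controls"],
}
TIERS = ["public", "internal", "confidential", "regulatory"]

def required_controls(classification: str) -> list:
    """Each classification level inherits every lower tier's controls."""
    controls = []
    for tier in TIERS[: TIERS.index(classification) + 1]:
        controls.extend(CONTROL_LAYERS[tier])
    return controls

print(required_controls("public"))        # only the baseline layer
print(required_controls("confidential"))  # baseline + internal + confidential layers
```

The lookup table mentioned in the article would then map each resulting control to its responsible party (client or provider) per service model.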

Special attention was given to the regulatory requirements related to the data that would be stored and processed by the cloud providers, as the utility company needed to comply with several different regulations:

  • North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) standards because the utility provides power generation and transmission
  • Payment Card Industry Data Security Standard (PCI DSS) because the utility processes credit cards
  • US Health Insurance Portability and Accountability Act (HIPAA) because the utility self-insures its employees for health benefits

Requiring all cloud service providers to meet these regulatory requirements would be onerous, if not impossible. Therefore, appropriate regulatory controls would be applied only in environments that required them.

For example, portions of the utility’s employee health insurance process, specifically those related to the corporate file share, would migrate to the cloud. Because of this, additional steps needed to be taken to ensure that the provider of the file-sharing service could meet the related HIPAA requirements.

Once the framework was completed, the team met with executives at the organization to review the CSF. During this meeting, SecureState conveyed the importance of the framework to the business and outlined how the organization should align to it. Once executive management buy-in was obtained, the framework was adopted for use by all lines of business moving services to the cloud, not just IT. This provided the company with a unified approach to managing the security of cloud services, thus ensuring all corporate data moved to the cloud were appropriately secured.

Managing the Security of Cloud Services

The director of security also needed to develop processes to prioritize, review and track which cloud services were approved for use, as well as a program to manage and track what data were being stored and/or processed by these cloud services. Without a robust program in place, the security department would quickly lose control of where sensitive data were stored and which vendor had been approved or denied.

The SecureState team created an online portal where lines of business inside the utility can enter requests to have potential cloud service providers (CSPs) reviewed. Once a provider is entered for review, a questionnaire is generated based on the type of cloud service used and the data stored and/or processed by that provider. This questionnaire is then sent to the point of contact at the cloud service provider to gather information on what security controls are present in their environment. Once the questionnaire is complete, SecureState works with the CSP and client to snap the cloud service into the CSF. To ensure the lines of responsibility are clearly defined, each requirement in the CSF is assigned to either the CSP or client. Depending on the categorization of the data being stored or processed by the provider, additional testing or interviews outside of the questionnaire may be required to determine which controls are present and to verify that they are properly implemented. A similar process is also followed to ensure that the controls the organization is responsible for implementing internally are present and properly implemented for each new cloud service entering into the environment.
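The questionnaire-generation step described above could be sketched like this. The question text and control IDs are hypothetical, standing in for the utility’s actual control set:

```python
# Hypothetical control set, each control mapped to a questionnaire question
# and the data classifications for which it applies.
CONTROLS = [
    {"id": "AC-1", "question": "How is access to customer data restricted?",
     "classifications": {"internal", "confidential", "regulatory"}},
    {"id": "EN-1", "question": "Is data encrypted at rest?",
     "classifications": {"confidential", "regulatory"}},
    {"id": "RG-1", "question": "Which regulatory audits do you undergo?",
     "classifications": {"regulatory"}},
]

def generate_questionnaire(data_classification: str) -> list:
    """Select only the questions relevant to the data the CSP will handle."""
    return [c["question"] for c in CONTROLS
            if data_classification in c["classifications"]]

print(generate_questionnaire("confidential"))
# Asks about access control and encryption, but skips regulatory audits.
```

Tailoring the questionnaire this way is what keeps low-risk reviews quick while still forcing depth where regulated data is involved.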

During this review process, the risk posed by the proposed solution is enumerated, and areas where the solution does not meet the CSF are outlined. Using this information, the utility’s security group can determine whether the new solution poses an acceptable level of risk, whether it should be rejected or whether it requires additional controls.

This portal also provides an inventory of which approved cloud applications/providers are currently being used in the environment and any exceptions associated with each provider. Additional reminders are set up to reassess each CSP annually, at a minimum. The depth of the reassessment is determined by the type of data processed or stored by the provider and any control exceptions granted.

Lessons Learned

Since implementing the CSF, the utility has applied it to four initial cloud services and a handful of subsequent providers. While applying the framework, a number of lessons were learned:

  1. Getting in front of the providers before the contract is signed to gain their full support. The utility faced a significant challenge applying the CSF to the initial set of vendors, as the contracts with these vendors had already been signed by the time security was brought in to review them. Because of this, the team had little leverage to get the vendors to make changes to their environments to meet the utility’s security requirements.
  2. Ensuring the use and completion of the utility’s questionnaire. Many of the providers preferred to provide third-party audit reports such as Service Organization Control 2 (SOC2) reports or self-assessments such as the Cloud Security Alliance (CSA) Consensus Assessments Initiative Questionnaire (CAIQ) instead of completing the utility company’s questionnaire. In these cases, the team would map the results back to the framework manually. Unfortunately, in most cases the information provided in the SOC2 report or CAIQ did not contain enough detail and further interviews and assessments were required to fill in the gaps. These processes ended up taking longer than initially planned. As a result, it was determined that this process would go more smoothly if the questionnaire was completed first. Thus, the team focused on streamlining the questionnaire and warned the project team that if the vendor did not complete it, the time required to review the vendor would lengthen, possibly impacting the project timeline. With this concern in mind, often the line of business could pressure the prospective providers to complete the questionnaire.
  3. Prioritizing provider assessments based on services provided. Follow-up interviews and assessments took longer than initially planned, and a method to prioritize service providers had to be developed to ensure high-priority service providers were assessed first. In some cases, lower-priority providers that housed only public data received minimal follow-up interviews and assessments. This was done to ensure that providers could be reviewed and approved quickly with the resources available.
  4. Educating the line of business on the cloud provider review process and following that process. Large projects that went through the company’s central procurement, or project management, office were easily flagged for provider review. However, many smaller projects that were initiated by the lines of business were small enough that they did not require involvement from these groups. Therefore, the security team did not hear about some smaller projects until they were fully implemented and, in some cases, had been in operation for a few months. To address this, the security department now makes a concerted effort to reach out to all lines of business to educate them on the process while working to quickly review new providers so this review is not a bottleneck in the process.

Conclusion

By pulling together the right team, the utility was able not only to address its initial problem of providing security requirements for the first group of vendors, but also to develop a solution to manage future cloud vendors. This solution allowed the utility to quickly and easily review future providers and also provide a program to manage them, thus ensuring corporate information stored in-house or in the cloud is protected equally.

The best way to start this process in any organization is to inventory the existing cloud services already in use. Many organizations have already started to leverage cloud services, often without audit, IT or security’s knowledge. By generating an inventory of which service providers are currently being used and what data are being stored or processed there, the organization can get a handle on what corporate data may be underprotected in these environments and use this information as leverage to start its own internal project to create a CSF for the environment.

Matthew Neely is the director of strategic initiatives at SecureState (www.securestate.com). Neely uses his technical knowledge to lead the Research & Innovation team to develop threat intelligence tools and methodologies for the challenging problems of the information security industry. Previously, he served as SecureState’s vice president of consulting and manager of the Attack & Defense Team. With more than 10 years of experience in the area of penetration testing and incident response, Neely brings the ability to think like an attacker to every engagement. He uses this skill to find creative ways to bypass controls and gain access to sensitive information. Prior to working at SecureState, Neely worked in the security department at a top-10 bank where he focused on penetration testing, assessing new technology and incident response.

[Source: ISACA]

The Cybersecurity Canon: Secrets and Lies

For the past decade, I have held the notion that the security industry needs a Cybersecurity Canon: a list of must-read books where the content is timeless, genuinely represents an aspect of the community that is true and precise and that, if not read, leaves a hole in a cybersecurity professional’s education.

If you’d like to hear more about my Cybersecurity Canon idea, take a look at the presentations I made at this year’s RSA Conference and at Ignite 2014. As always, I love a good argument, so feel free to let me know what you think.

The Cybersecurity Canon: Secrets and Lies: Digital Security in a Networked World (2000) by Bruce Schneier

Secrets and Lies: Digital Security in a Networked World is the perfect book to hand to new bosses or new employees coming in the door who have not been exposed to cybersecurity in their past lives*. It is also the perfect book for seasoned security practitioners who want an overview of the key issues facing our community today. Schneier wrote it more than a decade ago, but he talks about a variety of ideas so ahead of their time that they are still relevant today. Concepts he touches on include:

  • The idea that “security is a process, not a product.” With that one line, Schneier captures the essence of what our cybersecurity community should be about.
  • No matter how advanced security technology becomes, people are still the weakest link in the security chain.
  • The cyber-adversary as something more than just a hacker.
  • Making the Internet more secure by strengthening confidentiality, integrity, and availability (CIA), as well as improving Internet privacy and anonymity.
  • Challenging the idea that security practitioners must choose between security and privacy.
  • Holding software vendors accountable for security risks in their code.
  • The need for a Bitcoin-like capability long before Bitcoin became popular.

The content within Secrets and Lies is a good introduction to the cybersecurity community, and Schneier tells the story well.

The Story

Secrets and Lies demonstrates Schneier’s evolution as an early thought leader in the cybersecurity community and outlines some key concepts that are still valid today.

Security Is a Process

In the preface, Schneier freely admits to thinking in his earlier life that cryptology would solve all of our Internet security problems. In Secrets and Lies, however, he is forced to acknowledge upfront that technology by itself does not even come close to solving these problems. You do not get security out of a box. You get security by applying people, process, and technology to a problem set, and the more complex we make things, the more likely it is that we are going to screw up the process.

People Are the Weakest Link

The weak link in all of this is the people. You can have the best tools on the planet configured to defend your enterprise, but if you do not have the qualified people to maintain them and to understand what the tools are telling you, you have probably wasted your money. This goes hand in hand with the user community, too. It doesn’t matter that you spent a gazillion dollars on Internet security this year if the least-security-savvy people on your staff take their laptops home and unwittingly install malcode on their machines.

Risk

When it comes to business risk, cybersecurity isn’t its own category separate from more traditional risks. What I have noticed in my career is that many security practitioners and senior-level company leaders treat “cyber risk” as a thing unto itself and throw the responsibility for it over to the “IT guys” or the “security dorks.” In my mind, this is one of our community’s great failures. It is up to all of us to convey that essential idea to senior leadership in our organizations.

Software Liability

Every new piece of software deployed has the potential to expose the enterprise to additional threats in the form of new vulnerabilities, and vendors have no liability for this. In other industries, if a vendor produced a defective product that caused monetary damage to a company, that company would most likely sue the vendor with a high probability of success in court. It is not like that in the commercial software business, or even in the open-source movement. Vendors will patch their systems, for sure, but they accept no responsibility for, let’s say, hackers stealing 400 million credit cards from a major retail chain. Schneier is aghast that the user community has let vendors get away with this stance.

Adversary Motivations

Secrets and Lies was the first time that I had seen an author characterize the adversary as a person or a group with motives and aspirations.

“Adversaries have varying objectives: raw damage, financial gain, information, and so on. This is important. The objectives of an industrial spy are different from the objectives of an organized-crime syndicate, and the countermeasures that stop the former might not even faze the latter. Understanding the objectives of likely attackers is the first step toward figuring out what countermeasures are going to be effective.”

This was a revelation to me. At this point in my career, I just thought “hackers” were trying to steal my stuff. This is Schneier’s first cut of a complete adversary list:

  • Hackers
  • Lone Criminals
  • Malicious Insiders
  • Industrial Espionage Actors
  • Press
  • Organized Criminals
  • Police
  • Terrorists
  • National Intelligence Organizations
  • Info warriors

In my work, I have found it useful to refine Schneier’s list of people into the following adversary motivations:

  • Cyber Crime
  • Cyber Espionage
  • Cyber Warfare
  • Cyber Hacktivism
  • Cyber Terrorism
  • Cyber Mischief

The bottom line is that these adversaries have a purpose, and it helps network defenders if they understand what kind of adversaries are likely to attack the defender’s assets.

Things Stay the Same

Sadly, even though Schneier published Secrets and Lies in 2000, all of these things are still true, and there is no real solution in sight. Many organizations still think that installing the latest shiny security toy to hit the market will make their networks more secure. They don’t stop to think that they might be better off if they just made sure that the toys they already have installed on their network worked correctly.

People are still the weak link both in the security operations center (SOC) and in the general user community. As I have written elsewhere, talented SOC people are hard to come by, and many organizations still spend resources on robust employee-training programs, but the results are mixed at best.

CISOs are still struggling to convey the security risk message to the C-Suite. Most of us came up through the technical ranks and think colorful bar charts about the numbers of systems that have been patched are pretty cool. The CEO couldn’t care less about those charts and instead wants to know what the charts mean in terms of material risk to the business.

Finally, software vendors still have no liability when it comes to deploying faulty software that results in monetary loss to a customer. This just seems to be something we have all accepted: that it is much better to build a working piece of code first and then worry about how to secure it later. I know entrepreneurs prefer this method because the alternative slows the economic engine down if developers spend time adding security features to a new product that drives no immediate revenue opportunities. But this is the great embarrassment of the computer science field: we have not eradicated bugs like buffer overflows in modern code. How is it possible that we can send people to the moon but cannot eliminate buffer overflows in code development? Don’t get me wrong; the industry has made great strides in developing tools and techniques in these areas—just look at the Building Security in Maturity Model (BSIMM) project to see for yourself. But the fact that, as a cybersecurity community, we have not made it mandatory to use these techniques is one of the reasons we are still often considered a “field of study.”

What We Need

In the end, Schneier makes the case for things that the cybersecurity community needs in order to make the Internet more secure. Long before the acronym became a staple on Certified Information Systems Security Professional (CISSP) exams, he advocated the need to strengthen confidentiality, integrity, and availability (CIA). He does not call it CIA in the book, but he talks at length about the concepts. He was prescient in his emphasis on the need for Internet privacy and Internet anonymity and was one of the first thought leaders to start asking the question about security versus privacy in terms of government surveillance. He also anticipated the need for a Bitcoin-like capability long before Bitcoin became popular.

The Tech

Unfortunately, when you begin to write a technology book about the current state of the art surrounding cybersecurity, much of what you write about is already outdated as you go to press. As I was rereading Schneier’s book, I chuckled to myself when he referenced his blindingly fast Pentium III machines running Windows NT. The world has indeed changed since 2000.

Schneier wrote Secrets and Lies at the time when the industry had just accepted that a stateful inspection firewall was not sufficient to secure the enterprise.

“Today’s firewalls have to deal with multimedia traffic, downloadable programs, Java Applets, and all sorts of weird things. A Firewall has to make decisions with only partial information: It might have to decide whether or not to let a packet through before seeing all the packets in transmission.”

Besides firewalls, he describes other controls that the cybersecurity community has decided are necessary to secure the perimeter, such as demilitarized zones (DMZs), virtual private networks (VPNs), application gateways, intrusion detection systems, honeypots, vulnerability scanners, and email security. Since the book’s publication, security vendors have added even more tools to this conga line, tools like URL filters, Domain Name System (DNS) monitoring, sandboxing technology, security information and event management (SIEM) systems, and protocol capture and analysis tools.

As of May 2014, the cybersecurity community is mounting a bit of a backlash against the vendor community’s conga line strategy. Practitioners simply can’t manage it all. The best and most recent example of this is the Target data breach. Like many of us, the Target security team installed the conga line of security products and even had a dedicated SOC to monitor them. According to published reports, the controls dutifully alerted the SOC that a breach was in progress, but there was apparently so much noise in the system (and perhaps Target’s process was not as efficient as it could have been) that nobody in the organization reacted to the breach until it was too late. It’s a perfect example of why many organizations are looking for simpler solutions rather than continuing to add new tools to the security stack.

Cryptology

According to Schneier, underlying everything is cryptology. As you would expect from a cryptologist, Schneier believes that his field of study is the linchpin of the entire idea of Internet security.

“Cryptography is pretty amazing. On one level, it’s a bunch of complicated mathematics. On another level, cryptography is a core technology of cyberspace. In order to understand security in cyberspace, you need to understand cryptography. You don’t have to understand the math, but you have to understand its ramifications. You need to know what cryptography can do, and more importantly, what cryptography cannot do.”

I agree. (Note: The difference between the terms cryptography, cryptanalysis, cryptology, and cryptologist is left as an exercise for the reader.) I would say that the cybersecurity community has failed in this regard. While it is true that cryptography is the underlying technology that makes it possible to secure the Internet, it is still too complicated for the general user to leverage. In light of the Edward Snowden revelations, which showed that we have to worry not only about foreign governments spying on our electronic transmissions but also about our own government doing so, the fact that most people do not know how to encrypt their own email messages as a matter of course is a testament to our industry’s failure.
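Schneier’s point about knowing what cryptography can and cannot do is worth making concrete. The sketch below (my own minimal example, not from the book) uses a one-time pad, the simplest information-theoretically secure cipher: it perfectly hides the content of a message, yet still leaks metadata such as the message length — exactly the kind of limitation Schneier is talking about.

```python
import os

def otp(data: bytes, key: bytes) -> bytes:
    """XOR one-time pad: secure only if the key is truly random,
    as long as the message, and never reused."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"meet me at noon"
key = os.urandom(len(msg))      # fresh random pad
ct = otp(msg, key)

assert otp(ct, key) == msg      # what crypto CAN do: hide the content
assert len(ct) == len(msg)      # what it CANNOT do: hide the length
```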

Kill Chain

Schneier makes a distinction between computer security and network security: the conga line of security tools that makes up the security stack at the network perimeter is not the same as the set of tools you need to secure the endpoint. While this is still true today, the cybersecurity community has merged these two ideas since Schneier’s book was published.

The thought is that it does not make sense to consider network and endpoint security separately; it makes more sense to think of everything as a system, as we do at Palo Alto Networks. As organizations develop indicators of compromise at both the network and endpoint layers, essentially the Kill Chain model, the cybersecurity community can develop advanced adversary profiles about the attacker’s campaign plan.

In conclusion, the ideas Schneier examines in Secrets and Lies were years ahead of their time.  They show the cybersecurity industry just how far we have come and how far we still have to go. Because of this, Secrets and Lies is a candidate for the cybersecurity canon, and you should have read it by now.

*Full disclosure: The first civilian job I took after I retired from the US Army was with the company that Bruce Schneier founded called Counterpane, so I may be a little biased. 

[Source: ]

CVE-2014-1776: How Easy It Is To Attack These Days

This post originally appeared on Cyvera.com.

Just about a week ago, everyone was alarmed by a new zero-day vulnerability affecting Internet Explorer 6 through 11. The vulnerability was used in attacks in the wild that targeted IE 8 through IE 11. The impact was so severe that Microsoft hurried to issue an out-of-band patch. Today, I would like to show how relatively easy it is to mount an attack these days, when you can simply reuse code.

We will compare the attack that used CVE-2014-0322 (then, an IE zero-day) to the current attacks utilizing CVE-2014-1776. We will show an almost exact match between the two templates for the attacks, indicating that either the same group was behind the two campaigns, or that the ease of acquiring used exploit code (even from public sources) allows different groups to quickly reuse and adapt the same code to the next vulnerability.

Overview

Both attacks utilize use-after-free vulnerabilities in IE, and leverage Flash Player in order to easily bypass DEP and ASLR. In both cases the scheme is the same:

  • Load a Flash SWF file.
  • Spray the heap with ActionScript uint vector objects of 0x3FE elements each, for a total of 0x1000 bytes (i.e., one page) of memory per vector object (including the vector’s management information, which should be inaccessible directly from ActionScript code).
  • Spray the heap with references to a Sound object, to be used later as the first trigger to the shellcode.
  • Call a JavaScript function in the HTML page and set a timer to invoke another AS function.
  • Trigger a UAF vulnerability using the JavaScript code, while spraying the heap in order to ensure that the used block is controlled.
  • Use the bug to change an AS vector’s size (which is inaccessible directly from AS).
  • Back in the timed function in the SWF, use the modified vector to change an adjacent vector’s size to encompass all virtual memory, effectively achieving full memory read and write capability (wherever current page permissions allow it).
  • Find a module in memory and reach NTDLL by hopping through modules’ import address tables.
  • Find a stack pivoting gadget in NTDLL as well as the address of ZwProtectVirtualMemory.
  • Overwrite the virtual function table pointer of the Sound object to initiate code execution by calling the Sound object’s (replaced) toString function. Then, use a few ROP gadgets to pivot the stack, change the permissions on the shellcode to RWX, and execute the shellcode.
  • Restore normal operation.
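The page-sized spray in the second step above is simple arithmetic. Assuming 4-byte uint elements and 8 bytes of per-vector management data (a plausible layout; the exact header size depends on the Flash build), each 0x3FE-element vector occupies exactly one 0x1000-byte page:

```python
ELEM_SIZE = 4        # an ActionScript uint is 32 bits
N_ELEMS = 0x3FE      # element count used by both exploits
HEADER = 8           # assumed size of the vector's management data

page = N_ELEMS * ELEM_SIZE + HEADER
print(hex(page))     # 0x1000 -> one 4 KB page per sprayed vector
```

Filling each page with one vector means a corrupted length field always lands at a predictable page-aligned offset, which is what makes the later size-overwrite reliable.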

There are, however, some improvements in this CVE-2014-1776 attack over the CVE-2014-0322 attack. For example, while all the hard work is crammed into one big function in the CVE-2014-0322 attack, the authors of the CVE-2014-1776 attack strove for cleaner code and broke the huge pile of code into many smaller functions, which constitute basic primitives for the larger goal. In fact, now it is even easier to reuse this code for the next exploit…

The Flash Spray

In both cases vectors of uints are sprayed (with 0x3FE elements in each vector), as well as references to a Sound object. The values of the uints in the sprayed vectors are constructed so as to fit the address the attacker had chosen and the vulnerability (and the browser, if applicable).

[Code screenshots: the CVE-2014-1776 and CVE-2014-0322 spray routines]

The UAF Triggering

In both attacks, the JS code that triggers the vulnerability is called from the ActionScript code through the external interface. The AS code then registers a function to be invoked at a later time, which searches for the artifacts of the triggered vulnerability. Although the code performing the actual UAF is almost the same, there are some differences in behavior here:

  • In the current attack, the JS function gets a parameter, which holds JavaScript code that is crucial for the vulnerability to arise. In contrast, in the previous attack, the entire JavaScript code was present in the HTML.
  • In the current attack, the JS code sent to the external function lies encrypted (using RC4) in the SWF file, and is decrypted only prior to sending it to the external interface. Other parts in the SWF (relating to the shellcode) are also encrypted. Consequently, if you only have the HTML file, you cannot reproduce the zero-day (and vice versa). In contrast, the previous attack had no encrypted elements at all.
  • In yet another effort to make sure the zero-day is not compromised even if one file falls into the wrong hands, in the current attack the HTML file was split into two files, the second one containing the JS code used for the heap-spray.
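The RC4 layer described above is a natural fit for an SWF that must decrypt its payload only at runtime: RC4 is a symmetric stream cipher and is its own inverse, so the same routine encrypts and decrypts. A textbook RC4 in Python (the key and payload here are placeholders, not the attack’s actual values):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation (PRGA); XOR makes it self-inverse
    out, i, j = [], 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

payload = b"document.write('trigger');"   # stand-in for the real JS
ct = rc4(b"swf-key", payload)             # "encrypted in the SWF"
assert rc4(b"swf-key", ct) == payload     # decrypted just before use
```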

The heap-sprays, though, are very much alike.

[Code screenshots: the CVE-2014-1776 and CVE-2014-0322 heap-spray code]

Memory Ownage

In both cases this is pretty easy – look for the modified length of the vector (that is what the IE vulnerability was used for), use the modified vector to modify the adjacent vector’s length to span all memory, and use the second modified vector for memory read and write operations.
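The two-stage length corruption can be modeled with a toy “heap” (plain Python; offsets and field layout are illustrative assumptions, not Flash’s real layout): corrupting the first vector’s length field lets it write past its own data into its neighbor’s length field, and the neighbor then spans everything.

```python
import struct

heap = bytearray(0x40)
# Two adjacent "vectors": [length:4][data...] at offsets 0x00 and 0x20
struct.pack_into("<I", heap, 0x00, 0x1C)   # vector A, honest length
struct.pack_into("<I", heap, 0x20, 0x1C)   # vector B, honest length

# Step 1: the IE bug bumps A's length so A can index past its own data
struct.pack_into("<I", heap, 0x00, 0x100)

# Step 2: use A's out-of-bounds write to make B's length span all memory
struct.pack_into("<I", heap, 0x20, 0xFFFFFFFF)

(b_len,) = struct.unpack_from("<I", heap, 0x20)
assert b_len == 0xFFFFFFFF  # B now reads/writes anywhere permissions allow
```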

[Code screenshots: the CVE-2014-1776 and CVE-2014-0322 memory read/write code]

Looking for Modules and Functions

This is pretty straightforward – find a module in memory, go backwards to find its base, parse the PE header and look for a function imported from KERNEL32.DLL, repeat the same process to go from KERNEL32.DLL to NTDLL.DLL, and then parse its import table looking for the needed functions. However, the code for the recent attack has one improvement over the older code: The new code uses the Sound object’s vftable to get a function pointer which points inside the Flash OCX, while the older code scans the memory and tries to find an executable image by brute-forcing.
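The first step of that walk — going backwards from a leaked pointer to a module base — relies on two PE facts: a module starts with an “MZ” DOS header, and the 32-bit field at offset 0x3C (e_lfanew) points to the “PE\0\0” signature. A sketch against a hand-built header blob (the blob and scan function are mine, for illustration):

```python
import struct

# Minimal DOS/PE header stub: just enough to walk MZ -> e_lfanew -> "PE\0\0"
blob = bytearray(0x100)
blob[0:2] = b"MZ"
struct.pack_into("<I", blob, 0x3C, 0x80)   # e_lfanew: offset of PE header
blob[0x80:0x84] = b"PE\x00\x00"

def find_pe_base(mem: bytes, addr: int) -> int:
    """Scan backwards page by page from a leaked pointer until a valid
    MZ/PE header is found, as the exploit does to locate a module base."""
    addr &= ~0xFFF                          # align down to a page boundary
    while addr >= 0:
        if mem[addr:addr + 2] == b"MZ":
            (e_lfanew,) = struct.unpack_from("<I", mem, addr + 0x3C)
            if mem[addr + e_lfanew:addr + e_lfanew + 4] == b"PE\x00\x00":
                return addr
        addr -= 0x1000
    raise ValueError("no module found")

assert find_pe_base(blob, 0x90) == 0
```

From the base, parsing the import table to hop from module to module (Flash OCX to KERNEL32.DLL to NTDLL.DLL) is the same kind of offset-following.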

[Code screenshots: the CVE-2014-1776 and CVE-2014-0322 module-lookup code]

Running the Shellcode

In both cases, the Sound object’s vftable pointer is overwritten to point to a pre-crafted area in memory. Then, the Sound object’s toString method is called (entry #28 in the table), which runs a stack-pivoting gadget chained to a gadget that calls ZwProtectVirtualMemory on the shellcode, which immediately follows. The shellcode begins by saving information and restoring the overwritten values.
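The reason overwriting the vftable pointer yields code execution is that virtual calls are indirect: the object dispatches toString through a table of function pointers, so replacing the table redirects the call. A toy model (Python stand-ins; the slot index is the one named in the post, everything else is illustrative):

```python
def legit_to_string(obj):
    return "[object Sound]"

def pivot_stub(obj):
    # stands in for the stack-pivot gadget the real exploit jumps to
    return "stack pivoted -> ROP chain -> shellcode"

class Sound:
    def __init__(self):
        self.vftable = [None] * 29
        self.vftable[28] = legit_to_string   # toString is entry #28

    def toString(self):
        return self.vftable[28](self)        # indirect call through the table

snd = Sound()
assert snd.toString() == "[object Sound]"

snd.vftable[28] = pivot_stub                 # the overwrite
assert snd.toString() == "stack pivoted -> ROP chain -> shellcode"
```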

[Code screenshots: the CVE-2014-1776 and CVE-2014-0322 shellcode-launch code]

Summary

We have shown a very high correlation between the exploit code used in the CVE-2014-1776 attack, and the exploit code used in the CVE-2014-0322 attack. Clearly, the same code base was used. Whether this is indicative of the same actor or not, we cannot tell, since all code was freely available on the net when the recent attack commenced.

Looking at the entire SWF file in both cases, it can be seen that some mistakes were made, and some code was copied without actually utilizing it or understanding why it is there. Nevertheless, the high correlation between the two exploits shows how easy it has become to reuse proven code from past exploits when preparing the next attack. This only means that organizations need to stay protected, as sophisticated attacks can be easily copied by teams who don’t possess the knowledge to construct such an attack on their own.

All endpoints on which Cyvera TRAPS was installed were (and are) protected from the CVE-2014-1776 attack: TRAPS stops this in-the-wild exploitation attempt at several different points. Since TRAPS does not rely on signatures or behaviors but on breaking the attacker’s core techniques, TRAPS stops even zero-day attacks (including this one) without any need for updates. Of course, TRAPS users were also protected from the CVE-2014-0322 attack.

[Source: Palo Alto Networks Research Center]
