Global IT Audit Study Says Emerging Tech is Top Challenge

The ever-changing nature of complex emerging technologies and infrastructure, including transformation, innovation and disruption, is the top challenge faced by IT audit executives and professionals around the world, according to a new survey from global consulting firm Protiviti and ISACA.

The fifth annual IT Audit Benchmarking Survey, titled A Global Look at IT Audit Best Practices, examines where IT audit functions stand in their ability to address complex challenges. More than 1,200 respondents shared their perceptions of the top technology challenges currently facing their organizations.

Top 10 Challenges
According to the survey, the top 10 global technology challenges facing IT audit professionals are:

  1. Emerging technology and infrastructure changes: transformation, innovation, disruption
  2. IT security and privacy/cybersecurity
  3. Resource/staffing/skills challenges
  4. Infrastructure management
  5. Cloud computing/virtualization
  6. Bridging IT and the business
  7. Big data and analytics
  8. Project management and change management
  9. Regulatory compliance
  10. Budgets and controlling costs

Interestingly, regulatory compliance and budgets/controlling costs have moved down significantly on the list compared to last year, suggesting that IT departments are getting better at managing compliance costs.

Notable Takeaways
This year’s study indicated that audit professionals have significant concerns about finding qualified resources and skills. Not only was this noted by respondents as a top-three IT challenge, but numerous results suggest that finding the right people with the right knowledge/skills for the right job remains a significant challenge.

The study also serves as a reminder that IT audit risk assessments are an absolute must. A small but meaningful number of companies are not conducting any type of IT audit risk assessment. For these organizations, this is a significant risk given the cybersecurity threat environment. Other organizations are adhering to best practices by conducting these risk assessments more frequently.

IT Audit Reporting Structures Still Off the Mark
According to the survey, 60 percent of the largest public companies have a designated IT audit director or equivalent position within their organizations, and yet, in half of all companies, these individuals do not attend audit committee meetings. Furthermore, many companies still have reporting structures that are less than optimal. Having the IT audit director report to the chief audit executive (CAE) or equivalent is a best practice, yet 28 percent of companies in North America and Asia use another, less ideal reporting line. This number is as high as 33 percent in Latin America and 41 percent in Europe.

Organizations need to address effective IT audit management through a number of controls, including treating IT and cybersecurity risks as strategic-level risks, operating as a truly independent and impartial function, and allotting the necessary resources and expertise, whether internal or external, to help the organization identify and manage its IT risks effectively.

COBIT Is the Go-to Framework
Respondents cited COBIT as the most accepted industry framework on which the IT audit risk assessment is based, followed by COSO, ISO and ITIL. Organizations may use a combination of frameworks to complete risk assessments.

Looking Ahead
ISACA is committed to helping you face the challenges identified in this survey. From recent reports on emerging technology, to more cybersecurity guidance, to audit and assurance career tools coming in 2016, we aim to help you face these issues head-on and succeed.

Christos Dimitriadis, Ph.D., CISA, CISM, CRISC
ISACA International President

[ISACA Now Blog]

Exploitation Demystified, Part 2: Overwrite and Redirect

In Part 1 of this series, we laid the foundation of memory corruption exploitation and presented the basic exploitation framework.

This post will cover the implementation of Overwrite and Redirect in the context of stack-based buffer overflow vulnerabilities.

Memory Address Space Revisited

In its simplest form, the memory space is divided into the executable code region and the data region. The executable region contains both the program’s unique code and the DLLs the operating system provides to all processes. The data region, as its name implies, contains the data on which the code operates. It is comprised of the stack and the heap, which we will describe in detail below.

The attacker is interested in the data region because the shellcode, by definition, is embedded in content that will be loaded into the data region. This means that once the file carrying the shellcode runs, the shellcode resides either in the stack or in the heap.
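
To make this layout concrete, here is a minimal C sketch (illustrative only) that prints an address from each region. The exact values vary by platform and are randomized by ASLR on modern systems:

```c
#include <stdio.h>
#include <stdlib.h>

void code_region_example(void) { }    /* lives in the executable code region */

int main(void) {
    int on_stack = 42;                /* local variable: stack (data region) */
    char *on_heap = malloc(64);       /* dynamic allocation: heap (data region) */

    printf("code  region: %p\n", (void *)code_region_example);
    printf("stack region: %p\n", (void *)&on_stack);
    printf("heap  region: %p\n", (void *)on_heap);

    free(on_heap);
    return 0;
}
```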

The Attacker’s Challenge

From the attacker’s perspective, inserting the shellcode into the data region is still far from satisfactory because the shellcode is executable code. Remember: the shellcode’s role is to be executed and to open a connection between the attacker and the targeted machine. This is the fundamental exploitation challenge:

  1. The shellcode is by default loaded into the data region.
  2. The shellcode needs to be executed.
  3. Residing in the data region means that the memory addresses populated by the shellcode will never be fetched by the CPU for execution.

The Attacker’s Solution

The attacker’s main objective is to manipulate the CPU into executing the contents of memory addresses which, under normal circumstances, do not get executed. This is where vulnerabilities come into play.

To recap: when we say that an application has a vulnerability, we mean that a crafted input file will cause the execution flow to deviate from its predesignated course. In other words, the CPU fetches an address it was never meant to receive.

Let’s tie it all together now. The attacker has managed to insert a shellcode into the data region of the process memory, and what the attacker seeks now is a way to get that shellcode executed. To achieve that, the attacker will craft the file in such a way that the deviated address will contain instructions to jump to the shellcode address. Now the CPU receives an address containing executable code; it will follow the instructions, jump to the shellcode address and execute it.

(Very) Brief Vulnerabilities Overview

Vulnerabilities are tightly related to the overwrite part in the exploitation flow. Different vulnerabilities enable the attacker to overwrite addresses in different parts of the process address space.

The first type of vulnerability we will cover is the stack-based buffer overflow. This class of vulnerability can be considered a classic exploitation pattern. It is also one of the oldest patterns to be exploited in the wild and is still a prominent part of the current threat landscape.

The Stack

A typical computer program is comprised of a main program and functions or subroutines. When a subroutine is called, it performs its task and returns control to the main program. From the memory address space perspective, the addresses of the main program reside in the code region. When a subroutine is called, a stack is set up to store its local variables (which roughly correlate to what we refer to as the data). The subroutine then performs its designated task and, when it is done, hands control back to the main program.

From the attacker’s perspective, there are three interesting features (illustrated in the C sketch after this list):

  • Fixed size: The size of the stack is fixed and determined at the time of the call. For example, let’s assume that the subroutine declared an array of 10 characters. This will be the size of the stack regardless of the arguments we pass to it.
  • Return address: The return control mechanism works like this: the stack is invoked with a fixed memory size. Let’s say our 10-character stack is assigned to address 100. This means that addresses 91 to 100 are assigned to this stack. In addition, address 90 contains the address in the main program to which the CPU should return after the subroutine has fulfilled its task. This memory location is known as the return address.
  • The stack grows downward: When we provide the actual arguments to the stack, the first goes to the highest address and is then pushed downward by those that follow. So if we provide a 3-character input to our simplified stack, the first character will go to address 100. The second will then populate 100, pushing the first to 99. The third will go to 100, pushing the second to 99 and the first to 98. Since our input ends here and there are no more arguments, the return address will be fetched by the CPU, which will follow its instructions and jump back to the main program.
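
Here is a minimal C sketch of this simplified stack, using a hypothetical subroutine copy_input with a 10-byte local buffer. The layout (buffer, then saved return address) follows our simplified model; real frame layouts vary by compiler and platform:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical subroutine mirroring the 10-character example: its stack
 * frame holds buf, and just past buf sits the saved return address
 * pointing back into main(). */
void copy_input(const char *input) {
    char buf[10];              /* fixed size, set when the frame is created */
    strcpy(buf, input);        /* fills buf with caller-supplied data       */
    printf("copied: %s\n", buf);
}                              /* here the return address is fetched and
                                  control goes back to the main program    */

int main(void) {
    copy_input("abc");         /* 3 characters + terminator: fits in 10 bytes */
    return 0;
}
```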

Stack Based Buffer Overflow

So far we have described the stack architecture with no malicious context. Now we will explain how this architecture can be maliciously leveraged.

The inherent security flaw in the stack architecture is that it implicitly assumes that the input will match the predesignated size. It works well when the input is either smaller than or identical to this size. The problem arises when the input is larger than the predesignated stack boundaries.

Let’s go back to our simplified stack. Suppose we give the subroutine an input larger than 10 characters. Remember that addresses 91 to 100 are assigned for the input and that address 90 is already populated with the return address. If our input is 11 characters, the first character will go to address 100 and will be pushed downward until it reaches 90, where it overwrites the return address. The CPU will then fetch whatever now sits in address 90 as if it were the return address; because the original return address no longer exists, the execution flow breaks and the process crashes. This is known as a stack overflow.
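
Sketched in C, the same hypothetical copy_input reproduces the crash. Note that modern compilers insert stack canaries (for example, GCC’s -fstack-protector), so in practice the run typically aborts with a "stack smashing detected" error rather than jumping through a corrupted return address:

```c
#include <string.h>

/* Same hypothetical subroutine: strcpy() performs no bounds check. */
void copy_input(const char *input) {
    char buf[10];
    strcpy(buf, input);    /* anything past 10 bytes overruns the frame */
}

int main(void) {
    /* 32 'A's: far past buf, clobbering the saved return address.
     * When copy_input returns, the CPU tries to jump to 0x41414141
     * ("AAAA") and the process crashes. */
    copy_input("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA");
    return 0;
}
```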

Let’s also remember that the shellcode resides in the stack and the attacker attempts to cause it to be executed.

In order to leverage a stack overflow for its purposes, the attacker will craft the subroutine input in such a way that the return address is overwritten with a new address that redirects the CPU to the shellcode location.

In our simplified stack example, the attacker will craft an 11-character input. The shellcode resides in characters 10 and 11, and character 1 contains the shellcode’s address, 100. When the subroutine is called, character 1 is pushed down until it overwrites the return address, redirecting the CPU to address 100 where the shellcode is. The CPU will blindly follow the instructions and execute the shellcode.
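
In code, crafting such an input amounts to laying out the shellcode and the overwriting address inside a single buffer. Below is a deliberately simplified C sketch; the offsets, the 32-bit address width, and the shellcode address 0x0BADC0DE are all hypothetical, and in real frames a saved frame pointer typically sits between the buffer and the return address:

```c
#include <string.h>
#include <stdint.h>

#define BUF_SIZE   10            /* the subroutine's buffer size in our example */
#define RET_OFFSET BUF_SIZE      /* simplified: return address right past buf   */

/* Builds: [ shellcode | NOP padding | address of the shellcode ].
 * The trailing address lands on the saved return address, so the
 * subroutine "returns" into the shellcode instead of the main program. */
size_t build_payload(unsigned char *out,
                     const unsigned char *shellcode, size_t len,
                     uint32_t shellcode_addr) {
    memset(out, 0x90, RET_OFFSET);               /* 0x90 = x86 NOP padding */
    memcpy(out, shellcode, len);                 /* shellcode at the start */
    memcpy(out + RET_OFFSET, &shellcode_addr,    /* overwrite return addr  */
           sizeof shellcode_addr);
    return RET_OFFSET + sizeof shellcode_addr;   /* total payload length   */
}

int main(void) {
    unsigned char payload[BUF_SIZE + sizeof(uint32_t)];
    unsigned char fake_shellcode[] = { 0xCC, 0xCC };   /* placeholder bytes */
    build_payload(payload, fake_shellcode, sizeof fake_shellcode,
                  0x0BADC0DE);                   /* hypothetical address    */
    return 0;
}
```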

Zoom Out on Exploitation Architecture

As you can see in our example above, the exploitation parts are not connected to each other: embedding a shellcode is totally decoupled from crafting the input file to trigger a certain vulnerability. The triggered vulnerability enables the return address to be overwritten. The instructions that overwrite the return address redirect the CPU to the shellcode, but otherwise do not relate to the shellcode’s functionality in any way.

The art of exploitation is to orchestrate these independent parts to work together. In a similar way, the art of protection against exploits is to obstruct either the independent parts directly or the orchestration among them.

Conclusion

We have learned how the basic exploitation framework is implemented on stack-based buffer overflow vulnerabilities. Despite its age (the earliest documented attack was the Morris Worm in 1988), this class is still a prominent part of the threat landscape. We encounter exploitations of these vulnerabilities in various readers, players and Microsoft Office documents, but also in industrial protocols and services.

In the next Exploitation Demystified post, we’ll cover implementation of the exploitation framework on heap-based vulnerabilities.

[Palo Alto Networks Blog]

The Cybersecurity Canon: Metasploit: The Penetration Tester’s Guide

We modeled the Cybersecurity Canon after the Baseball or Rock & Roll Hall of Fame, except for cybersecurity books. We have more than 25 books on the initial candidate list, but we are soliciting help from the cybersecurity community to increase the number to be much more than that. Please write a review and nominate your favorite.

The Cybersecurity Canon is a real thing for our community. We have designed it so that you can directly participate in the process. Please do so!

Book Review by Canon Committee Member, Brian Kelly: Metasploit: The Penetration Tester’s Guide (2011) by David Kennedy, Jim O’Gorman, Devon Kearns, and Mati Aharoni

Executive Summary

Learning to think like a criminal, or in this case a cybercriminal, is a requirement for all penetration testers. Fundamentally, penetration testing is about probing an organization’s systems for weakness.
While the goal of Metasploit: The Penetration Tester’s Guide is to provide a useful tutorial for beginners, it also serves as a reference for practitioners.

The authors write in the Preface that, “This book is designed to teach you the ins and outs of Metasploit and how to use the Framework to its fullest.” While the book is focused on using the Metasploit Framework, it begins by building a foundation for penetration testing and establishing a fundamental methodology.

Using the Metasploit Framework makes discovering, exploiting, and sharing vulnerabilities quick and relatively painless. While Metasploit has been used by security professionals for several years now, the tool can be hard to grasp for first-time users. This book fills the gap by teaching readers how to harness the Framework and interact with the active community of Metasploit contributors.

While the Metasploit Framework is frequently updated with new features and exploits, the long-term value of this book is its emphasis on Metasploit fundamentals, which, when understood and practiced, allow the user to be comfortable with both the frequent updates of the tool and also the changing penetration testing landscape.

Review

Metasploit: The Penetration Tester’s Guide is laid out in two sections: Chapters 1 through 5 introduce the basics of penetration testing and the Metasploit Framework, while the remaining 11 chapters outline specific areas of the framework, building on the fundamental concepts introduced in the first section. The bulk of the book takes the penetration tester through using the framework, with examples of both use cases and the syntax required. The examples begin with the basic techniques of the craft and move through carrying out exploits and gaining value from the post-exploitation capabilities of Meterpreter.

The authors give a short overview of each topic before jumping right into the hands-on material, showing readers the commands to use and then dissecting the output, explaining step by step what is happening and what was accomplished. The book allows readers to move quickly from the basics of penetration testing through using the platform to perform the different phases of intelligence gathering and exploitation.

The exploitation sections cover a wide range of techniques, including attacking MS SQL, dumping password hashes, pass the hash and token impersonation, killing antivirus, and gathering intelligence from the system to pivot deeper into the target network.

Conclusion

Metasploit: The Penetration Tester’s Guide is written in a hands-on, tutorial-like style that is great for beginners, as well as folks who prefer to learn by doing. This is an excellent book for anyone interested in a hands-on learning approach to cybersecurity and the fundamentals of penetration testing. It is also a great reference book for the seasoned Metasploit user and those new to Metasploit who want a step-by-step instruction manual.

The craft of penetration testing is covered deeply and broadly. However, the book’s greatest source of value is how the concepts being applied are explained and demonstrated with well-annotated examples. The authors’ experiences in formal instruction and practice are evident. This book achieves a good balance between concept and practicality.

The goal of the Cybersecurity Canon is to identify a list of must-read books for all cybersecurity practitioners — be they from industry, government or academia — where the content is timeless, genuinely represents an aspect of the community that is true and precise, reflects the highest quality and, if not read, will leave a hole in the cybersecurity professional’s education that will make the practitioner incomplete. Finally, the books must provide timeless technical know-how. Metasploit: The Penetration Tester’s Guide achieves these goals, and I believe it is worthy of inclusion in the Cybersecurity Canon candidate list. It is a valuable resource for all cybersecurity professionals’ libraries, whether they be novices or experienced practitioners.

[Palo Alto Networks Blog]

2016 Predictions #4: Growth in Exploit-Based Attacks Will Require Increased Emphasis on Prevention

This is the fourth in our series of cybersecurity predictions for 2016. Stay tuned for more through the end of the year.

In 2015, the cybersecurity market witnessed the introduction of a slew of new and improved products that promised to enhance the detection and response capabilities of organizations against malware. The prevailing rationale was that an improvement in these tools would help organizations to reduce the impact of malware by becoming better at spotting suspicious activity. Unfortunately, the threat agents also witnessed this trend. Their attacks became more targeted, oftentimes uniquely designed to compromise a given organization’s defenses.

The shift from executable malware to exploits will continue

In 2016, we can expect that well-funded, highly skilled, and patient threat agents will shift their focus toward deploying the types of attacks that are virtually undetectable by current antivirus solutions and much harder to counter by current “detect and respond” tools. These attacks will exploit vulnerabilities in legacy and commonly used applications that are often whitelisted or play a major role in the organization’s business processes; hence, these applications cannot be eliminated without having a negative impact on the organization’s ability to conduct business.

As threat actors become more effective in the reconnaissance of their targets, the exploits will become more highly customized to the specific applications in use by a target organization, and even to the targeted individuals within that organization.

In 2016, software developers will undoubtedly continue to improve the overall security of their applications and operating systems, while threat actors will escalate the perpetual “cops and robbers” game by deploying exploits that are more sophisticated – and often created by professional exploit developers.

Organizations will realize the futility of fighting machines with people

Cyberattacks in 2015 exhibited a massive increase in volume, velocity and variation. The fundamentally asymmetrical nature of cyberattacks, in the sense that small groups of highly skilled individuals have the potential to inflict disproportionately large amounts of damage on an organization, took a turn for the worse as attackers gained access to more scalable options, such as Malware-as-a-Service and Exploits-as-a-Service.

While attackers unleashed an army of machines on their targets with a click of a mouse, many organizations continued to commit their scarce resources to the perpetual loop of “detect and respond,” which is to identify, investigate, remediate, recover, and then repeat.

In 2016, we can expect that organizations will finally realize this people-intensive approach is no longer scalable or sustainable. Organizations will recognize that automation and scalability are the keys to matching the asymmetric nature of cyberattacks. And they will come to rely on new tools that can effectively prevent the army of machines from using sophisticated and previously unknown threats, malware, and exploits to compromise the organization’s defenses.

The pendulum will start to swing back from detection and response toward prevention

2015 witnessed the continuing market sentiment that security breaches are inevitable, that organizations should assume a breach has already happened, and that the best course of action is to focus scarce resources on rapid detection and response in order to minimize the impact.

Despite the proliferation of new services and products that focused on helping organizations to improve their ability to detect and respond to malicious activities, organizations will realize that these advancements cannot change the economics of their chosen approach.

The fact remains that the further along the breach continuum one detects and intercepts an attack, the higher the negative impact, and the costlier it will be to recover and remediate.

In 2016, organizations will begin to realize that breach prevention is not only possible but also more viable and sustainable. Although detection and response capabilities will remain necessary for a balanced security posture, the old adage “an ounce of prevention is worth a pound of cure” will resonate with more and more organizations.

Want to explore more of our top 2016 cybersecurity predictions? Register now for Ignite 2016.

[Palo Alto Networks Blog]

Best Practices for Your Swiss Army Knife

If you’ve been to any recent Palo Alto Networks Ignite conferences, you’ve likely attended sessions led by our Product Management team on best practices for various Palo Alto Networks technologies and security initiatives.

Actually, those best practices sessions are, by far, our most requested and well-attended sessions. Customers have been very interested in how technologies on our platform can be combined to improve their security posture and make their lives easier. As one of my customers once put it, “Your platform is like a Swiss Army knife. There are all these cool tools and features, and you just have to figure out how to combine them to solve the problem at hand.”

For instance:

  • Combine SSL decryption and URL Filtering to easily identify URL categories for decryption and inspection.
  • Combine URL Filtering and file blocking to disallow .exe downloads from high-risk URL categories, such as dynamic-DNS or unknown URLs.
  • Combine App-ID, User-ID, and Content-ID technologies to identify known versus unknown users, restrict their access to applications housing sensitive data, and enforce strict decryption and threat inspection policies. This combination helps ensure that unknown users are not doing anything malicious on your network.
  • Combine User-ID and file blocking to help prevent the delivery of malware via watering hole or a spear phishing attack to groups of users who don’t have a business reason for downloading Portable Executable (PE) files types, such as .exe, .dll, and .scr.

Over the years, through tens of thousands of customer engagements, we have accumulated tons of tips and tricks that we actively recommend to our customer base. We are still discovering new ways our customers combine and use features on our platform to solve their problems.

Here are just a few of these recommendations:

  • Enable file blocking profiles within your application-based policies and allow only certain file types to be downloaded or uploaded to prevent malware downloads and data exfiltration.
  • Utilize the dynamic block list feature on the NGFW to prevent traffic to and from known malicious IPs. Or, better yet, copy the IP addresses that have triggered a number of IPS signatures in a certain amount of time, and paste them into a dynamic block list to help prevent attacks from actively targeting your organization.
  • Enable DNS sinkhole functionality on the NGFW to provide your security and IR teams with a list of users and endpoints actively attempting to connect to command-and-control domains, as they’ve very likely been compromised. The sinkhole will block the communication and provide a high fidelity list of users for whom you should probably re-image devices.
  • Alert on or disallow SSL traffic over unexpected ports, especially if it’s traffic you aren’t able to decrypt and fully inspect for threats.
  • Activate strict threat profiles for Threat Prevention signature sets (IPS, AV, anti-CnC) and leverage WildFire to configure signature updates every 15 minutes within your data center to help prevent lateral movement on east-west traffic and data exfiltration.

We use tips like these to help our customers better secure their organizations and more fully leverage technology and features within the Palo Alto Networks Next-Generation Security Platform. For us, it’s all about enabling business and preventing breaches.

That’s why we’re collecting these tips, tricks, and tactics and publishing them in a series of chapters – our recommended best practices. The first chapter, on leveraging application-based policies to provide complete visibility (the first step in reducing the attack surface), is available now within our Fuel community and will be followed by chapters on decryption and user-based policies in the next few weeks.

Be sure to download our best practices to find out how you can better secure your organization or confirm that you’re already ahead of the game.

[Palo Alto Networks Blog]
