2018 Predictions & Recommendations: The Cloud Will Accelerate Channel Partner Migration to Next-Generation Security Innovators

This post is part of an ongoing blog series examining predictions and recommendations for cybersecurity in 2018.

It wasn’t long ago that success in the security channel was based on a partner’s ability to integrate point products. The more point products you offered, the better, because the complexity – and, therefore, your margin – lay in providing product integration services. The stalwart performers during this era were called “solution providers.”

But as the volume and sophistication of cyberattacks have increased, point product approaches that rely on detection and remediation have proven ineffective. Customers want to prevent successful attacks at the endpoint, across the network and within any cloud.

What we have built is the industry’s first prevention-based, highly integrated and automated Next-Generation Security Platform. Our best partners have been part of a fundamental shift in the channel to think beyond the point – that is, beyond point products.

To help our channel partners build a successful platform practice, we evolved our channel mission to build a partner ecosystem of next-generation security innovators, experts at enhancing the platform to prevent successful cyberattacks.

Early adopters of the next-generation security innovator success factors have benefited greatly. In fact, in FY17 more than 100 partners doubled their business with us and ended the year doing more than $1 million in sales for Palo Alto Networks. These channel partners have helped us define the next-generation security innovator blueprint for success.

This is only the beginning of a massive channel migration, which we predict will accelerate in 2018 because of the cloud, but not for the reasons you might think.

The common belief is the cloud changes everything. Ironically, it is what the cloud doesn’t change that will help accelerate this migration in 2018:

  1. Security will remain a strategic priority for every company and organization in the world, regardless of where the data resides.
  2. Security risks will stay the same – different data centers, same risks.
  3. A true platform approach will still be key to prevention, and whether on-premises or in the cloud, customers know that legacy point products are incapable of protecting their businesses.

With a clear understanding of the customer’s cloud security priorities, risks and need for a platform, channel partners have realized they must strengthen their value proposition and differentiate themselves by accelerating their migration to becoming next-generation security innovators.

Now that we understand the baseline reasons for channel partners to accelerate their migration, let’s focus on the motivation. In 2018, I recommend our channel partners explore our new licensing model. Optimized for the cloud, our new licensing model addresses two fundamental cloud adoption issues for our channel partners:

  1. How do I build a sustainable cloud business practice when sales are fluid or unpredictable?
  2. How do I compensate my sales team on much smaller pay-per-use deals?

We unveiled our new licensing model in April 2017. Since its launch, the average deal size has been more than $300,000. We have created a licensing model that is perfect for channel partners. Our ability to bundle pay-per-use into a license that is both predictable and easy to compensate for will be the motivating factor that drives thousands of partners to accelerate their migration to becoming next-generation security innovators.

What are your thoughts and predictions for cybersecurity in 2018?

Learn more by exploring these helpful next-generation security innovator resources:

[Palo Alto Networks Research Center]

2018 Predictions & Recommendations: Automated Threat Response Technology in OT Grows Up

This post is part of an ongoing blog series examining predictions and recommendations for cybersecurity in 2018.

Automated Threat Response and Relevance to IT and OT

Automated threat response, which we’ll simply refer to as ATR, is the process of automating the action taken on detected cyber incidents, particularly those deemed malicious or anomalous. For each type of incident there is a predefined containment or prevention action, and newer technologies, such as behavioral analytics and artificial intelligence, are used to surface the incidents of interest. The goal is to automate detection and pair it with an equally automated, closed-loop prevention process. This not only reduces the burden on SecOps teams but also shortens response time. In recent years, IT organizations have adopted ATR technologies, such as our WildFire and behavioral analytics offerings, to better combat advanced attacks that have grown in frequency and capability.
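
To make the closed-loop pattern concrete, here is a minimal sketch of an ATR dispatch table in Python. The incident types, actions and device names are hypothetical illustrations, not any product’s actual schema.

```python
# Minimal sketch of closed-loop ATR: each detected incident type maps to a
# predefined containment or prevention action. All incident types, actions
# and device names are hypothetical.

from typing import Callable

def block_device(incident: dict) -> None:
    print(f"Blocking device {incident['source']}")

def set_read_only(incident: dict) -> None:
    print(f"Restricting {incident['source']} to read-only access")

def quarantine_host(incident: dict) -> None:
    print(f"Quarantining host {incident['source']}")

# Predefined response playbook: incident type -> automated action.
PLAYBOOK: dict[str, Callable[[dict], None]] = {
    "rogue_device": block_device,
    "anomalous_commands": set_read_only,
    "ransomware_detected": quarantine_host,
}

def respond(incident: dict) -> None:
    """Close the loop: detection feeds directly into a predefined action."""
    action = PLAYBOOK.get(incident["type"])
    if action:
        action(incident)
    else:
        print(f"No predefined action for {incident['type']}; escalate to SecOps")

respond({"type": "anomalous_commands", "source": "hmi-03"})
```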

So how applicable is this technology in protecting Industrial Control Systems (ICS) and Operational Technology (OT) environments from advanced threats? It is clearly relevant for the corporate and business networks adjacent to the OT environment, which are often internet-connected and used by threat actors as a pivot point for an attack. But what I’m more interested in is the relevance to the core areas of ICS – Levels 3, 2 and 1 of the Purdue model, and the DMZs between them. ATR is very relevant there, in fact.

Consider the scenario where an HMI station in an electric utility Energy Management System suddenly issues an unusually high number of DNP3 operate commands (to open breakers), far above its baseline. This could constitute a malicious event or, at minimum, an anomalous one. ATR systems could detect such events and automatically respond, whether by blocking the rogue device or limiting its connection; for example, by giving the device of interest read-only access for the DNP3 protocol.
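
As a hedged illustration of that scenario, the sketch below flags a device whose DNP3 operate-command rate far exceeds its learned baseline and proposes a read-only restriction. The baseline figures, threshold multiplier and device name are assumptions for illustration.

```python
# Illustrative sketch: flag an HMI issuing DNP3 operate commands at a rate
# far above its learned baseline, then propose a read-only restriction.
# The baseline figures, threshold multiplier and device name are hypothetical.

from typing import Optional

BASELINE_OPERATES_PER_MIN = {"hmi-ems-01": 2.0}  # learned normal command rates
ANOMALY_MULTIPLIER = 5.0  # how far above baseline counts as anomalous

def check_dnp3_rate(device: str, observed_per_min: float) -> Optional[str]:
    """Return a proposed ATR action if the observed rate is anomalous."""
    baseline = BASELINE_OPERATES_PER_MIN.get(device)
    if baseline is not None and observed_per_min > baseline * ANOMALY_MULTIPLIER:
        # Proposed response: keep the device connected, but DNP3 read-only.
        return (f"restrict {device} to DNP3 read-only "
                f"(observed {observed_per_min}/min vs. baseline {baseline}/min)")
    return None

print(check_dnp3_rate("hmi-ems-01", 40.0))
```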

So why has this technology not been adopted yet? There are several reasons. First, most OT organizations’ current cybersecurity initiatives focus on visibility and access control; advanced threat prevention is a longer-term initiative. Second, the newer AI/machine learning technologies used to baseline ICS-specific traffic and detect anomalies have mostly been confined to R&D or proof-of-concept (PoC) environments. Third, ICS/OT asset owners and operators tend to be very conservative: the idea of allowing a system to respond automatically to detected threat incidents is unsettling for most OT teams, who fear accidentally blocking legitimate traffic or devices and causing downtime. Finally, the use cases and response actions for incidents detected in OT have not been well-defined.

2018 Is the Year of ATR in OT

My prediction for 2018 is that ATR in OT will reach production-level maturity and be deployed in a meaningful way. “Meaningful” means that we will start seeing large-scale deployments by leading operators of ICS in critical infrastructure and manufacturing environments.

There are several reasons I believe this will be the case. Some leading organizations have matured beyond visibility and segmentation, and have completed their PoCs of the technology. In addition, a strong ecosystem of OT-specific profiling, behavioral analytics and anomaly detection has emerged. Some of these solutions exist as dedicated sensors or as modules that supplement SIEM devices. Initially deployed as stand-alone detection tools, these ICS network-monitoring solutions are starting to be integrated with enforcement devices, such as our next-generation firewalls, which then carry out the appropriate threat response.

Further driving adoption are recent high-profile cyber-physical attacks, such as those on the Ukraine grid in 2015 and 2016, which many believe might have been mitigated or even prevented with ICS-specific ATR technologies. The scope of ATR in OT also extends to threats typically associated with IT, like ransomware, which can still impact OT; the downtime WannaCry caused in some manufacturing plants in 2017 is an example. I also see the development of OT incident response playbooks and semi-automated approaches, which make adoption more palatable for resource-constrained and risk-averse OT teams.

To be sure, ATR in OT will initially be limited to cases deemed less risky in terms of accidentally causing process downtime or safety issues. What counts as “less risky” is certainly debatable, is still being worked out, and will differ between organizations. However, some seemingly amenable ATR-in-OT scenarios come up often in my discussions with OT security teams. These include limiting the access of a pre-existing host that suddenly issues unusual commands; for example, restricting an HMI or engineering workstation to read-only access to the PLCs. They may also include blocking new devices that were not part of any installation plan of record.

One other such scenario would be quarantining a non-critical host, such as a redundant HMI, found to be infected with ransomware. Another aspect of gradual adoption is that OT users will want the option to manually accept or reject a proposed threat response. This isn’t a fully automated approach, but it is likely a necessary intermediate step toward proving out full automation. Integrators developing these systems would be wise to build user interfaces and workflows that support this semi-automated approach.
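
A minimal sketch of that semi-automated step might look like the following, where proposed responses queue up for an operator’s accept/reject decision before anything is enforced; the incidents and actions shown are hypothetical.

```python
# Sketch of the semi-automated intermediate step: proposed responses queue
# up for a human operator to accept or reject before anything is enforced.
# The incidents, actions and approval interface are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ProposedResponse:
    incident: str
    action: str
    approved: Optional[bool] = None  # None = awaiting operator decision

queue = [
    ProposedResponse("ransomware on redundant HMI hmi-07", "quarantine hmi-07"),
    ProposedResponse("unusual writes from eng-ws-02", "set eng-ws-02 read-only"),
]

for proposal in queue:
    answer = input(f"{proposal.incident} -> {proposal.action}? [y/n] ")
    proposal.approved = answer.strip().lower() == "y"
    if proposal.approved:
        print(f"Enforcing: {proposal.action}")
    else:
        print(f"Rejected; logging for later review: {proposal.action}")
```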

Palo Alto Networks Enables Automated Response

In anticipation of this growing use case, Palo Alto Networks has engaged closely with the ATR ecosystem, customers and industry organizations to put in place the integration required to facilitate adoption. A key enabler for the integration and automation is our application programming interface, which makes interacting with sensors, SIEMs and other system elements straightforward. Furthermore, the controls that can be implemented automatically are flexible and granular. Specifically, for OT environments, users can apply our App-IDs for ICS to implement protocol-level responses down to individual functional commands; for example, the DNP3 operate command mentioned earlier. Couple that with User-ID for role-based access, Content-ID for control over payloads and threats and, of course, more basic controls based on IP and port, and you have a very flexible ATR platform that can accommodate a range of response templates tied to an organization’s risk tolerance. Organizations may conduct a hazard and operability study (HAZOP) to determine the appropriate ATR for a given scenario, reserving more conservative responses, such as relying on redundant systems, for sensitive processes and operations.
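
As a rough illustration (not a supported procedure), the sketch below uses the PAN-OS XML API to stage a security rule restricting a suspect HMI to a read-style DNP3 App-ID. The firewall host, API key, zone and rule names, source address and the App-ID value “dnp3-read” are all assumptions; consult the PAN-OS documentation and your version’s App-ID catalog for actual names.

```python
# Rough sketch (not a supported procedure): stage a PAN-OS security rule
# via the XML API that limits a suspect HMI to a read-style DNP3 App-ID.
# The host, API key, zones, rule name, source address and the App-ID value
# "dnp3-read" are illustrative assumptions; check your App-ID catalog.

import requests

FIREWALL = "https://firewall.example.local"
API_KEY = "REDACTED"  # generated via the PAN-OS key generation API

xpath = (
    "/config/devices/entry[@name='localhost.localdomain']"
    "/vsys/entry[@name='vsys1']/rulebase/security/rules"
    "/entry[@name='atr-hmi-read-only']"
)
element = (
    "<from><member>ot-hmi-zone</member></from>"
    "<to><member>ot-plc-zone</member></to>"
    "<source><member>10.1.1.23</member></source>"
    "<destination><member>any</member></destination>"
    "<application><member>dnp3-read</member></application>"
    "<service><member>application-default</member></service>"
    "<action>allow</action>"
)

resp = requests.get(
    f"{FIREWALL}/api/",
    params={"type": "config", "action": "set",
            "xpath": xpath, "element": element, "key": API_KEY},
    verify=False,  # lab sketch only; validate certificates in practice
)
print(resp.text)  # a <response status="success"> element means the rule was staged
```

A separate commit call (type=commit) would then be needed for the staged rule to take effect.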

Whether it is in 2018 or later that our users decide to implement ATR in OT, they will be happy to know we have the capabilities in place, and the ecosystem to support their initiatives.

Learn more

[Palo Alto Networks Research Center]

2018 Predictions & Recommendations: Advances in Machine Learning Will Improve Both Patient Care and Cybersecurity

This post is part of an ongoing blog series examining predictions and recommendations for cybersecurity in 2018.

Machine learning is a buzz topic of conversation in many industries, but is it over-hyped or a real game changer? In the healthcare and cybersecurity industries, at least, I’m leaning toward game changer, and here’s why.

There are endless applications of machine learning within healthcare that can improve patient outcomes and patient care. The most obvious is using machine learning algorithms to improve diagnoses and care plans – potentially far more accurately, and with better results, than a human doctor alone. We’re already beginning to see headlines: Stanford has developed a deep learning algorithm to identify skin cancer; Google used machine learning to create a tool that detects breast cancer better than human pathologists; and a JAMA article described the success of using machine learning to detect diabetic retinopathy in retinal photographs. We’re witnessing only the beginning of a long line of breakthroughs that will shift machine learning from interesting research to a new standard of care for patients.

Cybersecurity, like healthcare, has some very compelling applications for machine learning as well, many of them equally game-changing. Ten years ago, organizations could protect themselves from cyberattacks with signature-based security products at the endpoint, on the network and in the cloud. But it didn’t take long for cyberattackers to catch on to the fact that they could beat signature-based security by automating the creation of unique malware, and that shift marked the end of pure signature-based malware detection.

Is machine learning the silver bullet for cybersecurity? Maybe that’s a little dramatic, but machine learning definitely will have a growing impact on the effectiveness of cyberattack prevention. Machine learning is one of the methods used by our Traps advanced endpoint protection to identify malicious files with a very high degree of accuracy. On the network, LightCyber behavioral analytics uses machine learning to “learn” the expected behavior of users and devices and then detect behavioral anomalies indicative of attack.
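
For a feel of the underlying technique – emphatically not the actual model inside Traps or LightCyber – here is a toy classifier trained on synthetic static file features; everything in it is a stand-in for illustration.

```python
# Toy illustration of the technique only (not the actual Traps model):
# train a classifier on synthetic static file features to separate
# "malicious" from "benign" samples.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-file features: size, entropy, imported-API count, etc.
X = rng.random((1000, 8))
# Synthetic labels standing in for analyst-verified verdicts.
y = (X[:, 1] + 0.3 * X[:, 4] > 0.9).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```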

Healthcare and cybersecurity both generate a massive amount of data, and machine learning offers a standardized and proven approach to drawing meaningful conclusions from huge and seemingly unrelated data sets. The healthcare industry, in particular, has been impacted greatly by targeted and non-targeted cyberattacks in the past year and, hence, is very well-positioned to benefit from machine learning advances on both the patient-facing and cybersecurity fronts.

As we head into 2018, CISOs of healthcare organizations should start planning to adopt machine learning in their cybersecurity programs. Applications for machine learning will expand over time, but it has already proven effective at identifying advanced malware in healthcare IT environments.

[Palo Alto Networks Research Center]

2018 Predictions & Recommendations: Cyber Hygiene for Financial Institutions Found Non-Compliant with SWIFT Mandatory Security Controls

This post is part of an ongoing blog series examining predictions and recommendations for cybersecurity in 2018.

After a series of high-profile attacks against its members in 2016, the Society for Worldwide Interbank Financial Telecommunication (SWIFT) established a Customer Security Controls Framework that includes a set of 16 mandatory controls. SWIFT requires self-attestations to be completed by the end of 2017. These will be made available to SWIFT counterparties in support of the transparent exchange of security status information. Without going out on a limb, my prediction is that some SWIFT members will not be able to comply with all mandatory controls by that deadline.

That being said, my recommendation for financial institutions is to incorporate the best practices for cyber hygiene found in the SWIFT mandatory controls into your overarching security program. Avoid the temptation to treat the SWIFT controls as “one-offs” to be addressed separately. Integrating them into your cybersecurity program will provide a more holistic approach and enable you to ensure ongoing compliance.

The SWIFT mandatory security controls can be viewed as measures of good cyber hygiene for its members. I won’t cover all 16 here, but I will highlight a few to give a flavor of the controls.

  • SWIFT Environment Protection (1.1): Network segmentation of the local SWIFT infrastructure from the rest of the IT environment would be a major first step. This would limit access to/from the local SWIFT elements from attackers on potentially compromised endpoints and even malicious insiders.
  • Operating System Privileged Account Control (1.2) and Multi-Factor Authentication (4.2): In addition to the principle of least privilege, administrator-level accounts should be protected with multi-factor authentication (MFA). Of course, MFA should also be in place for access to critical systems, such as SWIFT. This limits the value of any credentials stolen by an attacker.
  • Internal Data Flow Security (2.1) and Logical Access Control (5.1): To ensure the integrity of communications between SWIFT-related components, obtain visibility into and control the traffic flow based on applications, users, and content. Security policies may then be defined with the context of actual application and user identity to safely enable authorized access to the data.
  • Security Updates (2.2), Malware Protection (6.1), and Software Integrity (6.2): Patching software for security vulnerabilities in a timely fashion is clearly a necessity. However, where timely patching is not possible – because software is past end-of-support, or due to other extenuating circumstances – advanced endpoint protection against both malware and exploits is an alternative way to maintain the integrity of the production environment. In general, advanced endpoint protection is superior to legacy antivirus and anti-malware solutions.
  • Logging and Monitoring (6.4): With the local SWIFT infrastructure protected by network segmentation, the segmenting firewalls will hold significant information on both normal and unexpected data flows into and out of the environment. Review those firewall logs for anomalies in traffic patterns, as these may signal undesired activity (a minimal sketch of such a review follows this list).
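
To make the logging review concrete, here is a minimal sketch that scans exported firewall logs from the SWIFT segment for destinations outside a known-good baseline. The file name, CSV columns and baseline addresses are hypothetical; real firewall traffic logs have a richer schema.

```python
# Minimal sketch of the logging-and-monitoring review: scan exported
# firewall logs from the SWIFT segment for destinations never seen in a
# known-good baseline. File name, columns and baseline are hypothetical.

import csv

BASELINE_DESTINATIONS = {"10.20.0.5", "10.20.0.6"}  # expected SWIFT peers

def unexpected_flows(log_path: str) -> list:
    """Return log rows whose destination is outside the known baseline."""
    anomalies = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # assumes columns: src, dst, app
            if row["dst"] not in BASELINE_DESTINATIONS:
                anomalies.append(row)
    return anomalies

for flow in unexpected_flows("swift_segment_traffic.csv"):
    print(f"Review: {flow['src']} -> {flow['dst']} ({flow['app']})")
```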

The two most recently publicized attacks on SWIFT members occurred in October 2017 (Taiwan and Nepal). Prior to these, there was an attack in December 2016 (Turkey). Although one could say the pace of attacks against SWIFT members has slowed from the peak seen in mid-2016, it would not be prudent to ignore the recommended security controls. Whether or not you are a SWIFT customer, ensuring that basic cyber hygiene is part of your overall security program is well worth the time and effort.

[Palo Alto Networks Research Center]

Five Areas to Consider When Testing Cyber Threat Intelligence Effectiveness

According to the ISACA State of Cyber Security 2017 research, 80% of respondents believe it is either “likely” or “very likely” that they will be attacked in 2017. In 2018 and beyond, based on current risk trends to organizations from their infrastructure, employees, supply chain and external threat actors, this figure is unlikely to drop.

Cyber threat intelligence (CTI) plays an important role in an organization’s defense-in-depth strategy and is often leveraged by other cyber security functions, such as security event monitoring, incident response and forensic investigations.

To derive value from CTI, raw or processed data feeds must be analyzed and applied within the context of the organization to improve, among other capabilities, the ability to detect threats and respond to incidents.

Visibility into the design and operating effectiveness of CTI processes can provide some assurance to management and potentially support funding requests for further investment in this area. Based on that premise, below are five areas to consider when conducting a review of your organization’s CTI capabilities.

Alignment with your organization’s threat model
Commonalities exist in the threats to organizations operating in the same industry sector. However, because no two businesses are exactly alike, there is a high likelihood that each one will have a slightly different threat model.

Threat modeling is a necessary risk management step to ensure that resources are directed at controls that address the real threats to the organization. Therefore, to ensure that CTI sourced by an organization is effective, it must support an existing threat model.

A key initial part of your review should involve checking whether your organization maintains a threat model, whether the CTI sourcing strategy adds more visibility to that model and whether the combination of both supports effective decision-making when managing risk.

Quality of threat intelligence
Threat and vulnerability information originates from a variety of internal and external sources and is often ingested manually or through automation by the user organization.

Externally, sources include commercial CTI vendors, industry/community collaboration forums, and security product/vendor intelligence feeds. Internal sources include proactive vulnerability scanning, network monitoring and behavioral analysis tools.

Whether derived internally or externally, the quality of CTI is critical for it to effectively contribute toward improving an organization’s cyber security posture.

According to leading threat intelligence expert Sergio Caltagirone, the quality of threat intelligence is determined by four factors: completeness, accuracy, relevance and timeliness. Each of these factors is described briefly below:

  • Completeness – Visibility of the organization’s threat model could provide a view on the completeness of CTI. Threat models will help the organization to ask the right questions of CTI data.
  • Accuracy – A high number of false positives in an intelligence report indicates poor-quality CTI. A consistent trend of false positives may warrant further investigation.
  • Relevance – The more organizational and industry context available within CTI, the more useful it is. More weight should be given to internally sourced CTI, which reflects the nuances of an organization, than to externally sourced CTI, which may be generic and lack context.
  • Timeliness – CTI is only effective if it can be applied in an operational context to address current threats facing an organization.

Start by obtaining a list of your organization’s internal and external sources and reviewing them against each of these factors.
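
One lightweight way to structure that review is to score each source against the four factors, for instance on a 1-5 scale, and rank the totals; the sources and scores below are hypothetical.

```python
# Sketch of a simple review aid: score each CTI source against the four
# quality factors on a 1-5 scale and rank the totals. All sources and
# scores below are hypothetical.

FACTORS = ("completeness", "accuracy", "relevance", "timeliness")

sources = {
    "commercial-feed-A": {"completeness": 4, "accuracy": 3, "relevance": 2, "timeliness": 4},
    "industry-isac":     {"completeness": 3, "accuracy": 4, "relevance": 4, "timeliness": 3},
    "internal-sensors":  {"completeness": 2, "accuracy": 4, "relevance": 5, "timeliness": 5},
}

ranked = sorted(sources.items(), key=lambda kv: sum(kv[1].values()), reverse=True)
for name, scores in ranked:
    detail = " ".join(f"{factor}={scores[factor]}" for factor in FACTORS)
    print(f"{name}: total {sum(scores.values())}/20 ({detail})")
```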

Integration with security monitoring
There are many use cases for CTI. According to the 2017 SANS Institute Cyber Threat Intelligence report, the top use case is security operations: 72% of respondents say they use CTI when detecting potential cyber security events and when locating the sources of, or blocking, malicious activities or threats.

An effective security monitoring strategy is one which correlates and analyzes data from multiple sources to detect threats before they can cause harm to the organization. Leveraging available CTI is one way to ensure the optimal use of security operations resources by focusing monitoring efforts on indicators of compromise that pose the highest risk.

Conduct a review of security monitoring procedures to determine how much CTI influences monitoring strategies.
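
As a hedged sketch of the risk-weighted matching described above, the following compares observed events against a small indicator set and surfaces the riskiest hits first; the indicators, risk weights and events are invented for illustration.

```python
# Hedged sketch of risk-weighted IOC matching: compare observed events
# against a small indicator set and surface the riskiest hits first.
# Indicators, risk weights and events are invented for illustration.

IOC_RISK = {
    "203.0.113.7": 9,   # C2 address from a high-confidence feed
    "198.51.100.2": 4,  # scanner noted in community feeds
}

events = [
    {"host": "ws-114", "remote_ip": "203.0.113.7"},
    {"host": "ws-020", "remote_ip": "192.0.2.1"},   # no match: ignored
    {"host": "srv-07", "remote_ip": "198.51.100.2"},
]

matches = [(IOC_RISK[e["remote_ip"]], e) for e in events if e["remote_ip"] in IOC_RISK]
for risk, event in sorted(matches, key=lambda m: m[0], reverse=True):
    print(f"risk={risk}: {event['host']} contacted {event['remote_ip']}")
```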

Integration with incident response
Improving visibility into threats and attack methodologies is vital to an organization’s ability to respond to incidents. Effective CTI provides insight into the intent, opportunity and capability of a cyber-attacker. It is this insight which gives an organization some assurance that it can deploy appropriate defense mechanisms to prevent a successful attack.

As part of your review, assess the degree to which CTI is integrated with the steps in your organization’s incident response approach, including preparation, detection, analysis, containment, eradication and recovery.

Measuring the impact of incidents
A post-mortem review of security incidents could give an organization insight into what worked well (and what did not) during incident detection and response and help to identify improvement opportunities.

It is worth reviewing security incidents to determine whether the use of CTI in security monitoring and incident response played a significant role in areas such as detecting unknown threats, reducing time to identify and respond to threats, and preventing significant damage to systems and data.

An assessment of the relevance of CTI to reducing the impact of security incidents could provide a view on which intelligence sources provide the best value to the organization and deserve continued investment.

Summary
The value of CTI to any organization lies in its ability to support timely decision-making by stakeholders, including executive management, corporate security, security operations and risk teams.

Regardless of which cyber security functions it is applied to, this is the key consideration to remember when conducting a review of the design and operating effectiveness of CTI processes.

Editor’s note: For more insights on threat intelligence, download ISACA’s threat intelligence tech brief.

Omo Osagiede, Director, Borderless-I Consulting

[ISACA Now Blog]
