The University of the Cumberlands Knows No Boundaries

For Donnie Grimes, (ISC)² Global Academic Program (GAP) instructor, vice president of information systems, and creator of the Master’s program in cybersecurity at the University of the Cumberlands in Williamsburg, Kentucky, breaches know no boundaries – and neither should cybersecurity education.

A GAP member since 2014, the University has historically served the Appalachian region and, until 2014, had no cybersecurity offering. Over the past 10-15 years, however, its reach has grown, with thriving graduate programs and students representing 58 countries and most U.S. states. After 40 years as a two-year school, Cumberlands is now a four-year college with 5,500 students and one of the largest online schools in Kentucky, with an online population of approximately 4,000.

In 2012, Cumberlands tasked Grimes with developing the University’s graduate cybersecurity curriculum. As part of this process, he researched hundreds of programs but found few in cybersecurity, let alone any that adequately prepared students to enter the field. Many schools offered Master’s programs, but he believed they were really just glorified computer science programs: they covered data structures and programming, tacked one or two cybersecurity classes onto the end, and called the result a “Master’s” in cybersecurity.

His vision for the Cumberlands was to create a Master’s program more in line with certification programs such as the CISSP®: one that exposes students to real-world concepts and prepares them for the continual learning that is essential for success in the field. Grimes designed the curriculum around the CISSP CBK®, with each course based upon a different CISSP CBK domain. He believes this approach provides a strong foundation for students and ensures well-rounded graduates.

He worked to get the University and the information security program accredited through the Commission on Colleges of the Southern Association of Colleges and Schools (SACS). While there have been no graduates yet, 120 students are enrolled in the program, including CIOs from a wide variety of industries. Their feedback has been very positive, and Grimes considers this praise from professionals working in the field the best compliment the program could receive.

Grimes implemented a process to review modifications to the CBK domains so they can keep up with industry fluctuations. Says Grimes, “We are not afraid of change. Our goal is to keep the program flexible enough to accommodate the realities of a dynamic industry.”

In discussing why the Cumberlands became a GAP school, Grimes comments that it was a “…value-add for our program and a natural fit because our curriculum was already aligned with the CISSP. We were already encouraging our graduates to sit for the CISSP exam because it validates their core knowledge. Becoming a GAP school streamlines the process and helps us keep our curriculum aligned with real-world concepts they can apply not only to their education process, but that will contribute to their success in the field.”

So what’s next for this rapidly growing school? Grimes would like to create courses that train future cybersecurity leaders and to see the University reach students in more parts of the world. The University also plans to launch a PhD program in information security this year. The University is currently working with the NSA and DHS to become a National Center of Academic Excellence. He reflected, “Our extensive online program means that students’ educational opportunities are not limited by their physical location. Breaches know no boundaries, and as an educational institution, we shouldn’t either. Regional colleges have an important role in stemming the cybersecurity skills shortage, and we should take advantage of virtual learning systems to improve the cybersecurity situation globally.”

For more information on the GAP, please visit https://www.isc2.org/global-academic-program/default.aspx.

(ISC)² Management

[(ISC)² Blog]

Malware Threats to Industrial Control Systems

Managers keen to avoid business interruption are delaying crucial software updates to industrial control systems. But with viruses like Stuxnet at large, this leaves organisations vulnerable, says Del Rodillas.

One major reason why many industrial control systems (ICS) are highly susceptible to cyberattacks is that their software patching and anti-malware update cycles are infrequent – if they’re even happening at all.

Adding to this weakness is the growing presence of widely used Commercial Off-the-Shelf (COTS) systems whose universe of vulnerabilities and malware is constantly and rapidly expanding.  As seen in examples such as the Stuxnet and Energetic Bear attacks, these payloads can be leveraged in sophisticated cyberattacks that, if successful, could severely impact not only process availability but also safety. Let’s examine some of the ways to stay secure even in this difficult environment.

In my experience, it’s not that ICS security professionals don’t understand that patching is necessary and that systems are at risk of being compromised. Rather, it’s how the cumbersome process of ICS patching affects their main priority, which is high uptime.

Keeping the system available and running properly is critical whether the organisation is producing oil, transporting electricity or running some other intensive process.

Patching in ICS to install software updates that fix vulnerabilities or to install the latest exploit/malware signatures usually requires stopping that process. With so much pressure on administrators to keep system uptime high, they often delay patching for months, or longer, to maximize production.


“It’s not that ICS security professionals don’t understand that patching is necessary and that systems are at risk of being compromised. Rather, it’s how the cumbersome process of ICS patching affects their main priority, which is high uptime.”


In some cases, the nature of the physical process dictates the patching cycles, some of which can span years. There is also a risk that the patches may cause a system to behave in undesired ways, adding even more hesitancy to patch.  It’s for these reasons that ICS patching must be done methodically. But during this window of being unpatched, the systems are highly vulnerable to known threats as well as zero-day threats that have not yet been discovered in the wild.

While security vendors do their best to ensure that new software updates do not cause any issues to systems, they may not have tested all scenarios – some of which may cause performance issues or system crashes once deployed in production.

These disruptions cause big problems in industrial automation environments where even temporary loss of visibility and control at the Human Machine Interface or automation server level could lead to substantial production losses and even compromise worker or consumer safety.

The quality assurance process is made more difficult by the fact that personnel don’t always see exploitable software vulnerabilities or new software features as compelling enough reasons to “mess” with a system that is working just fine. The old adage of “if it ain’t broke don’t fix it” often reigns supreme in this environment. Too often, operational technology personnel believe they are sufficiently isolated that these vulnerabilities cannot be exploited. But Stuxnet, which attacked an air-gapped ICS environment, is just one example of this fallacy.

There are still other challenges. Variants of older malware such as Conficker or Slammer could be accidentally released into the ICS, causing varying degrees of lost visibility and/or control of the process, ranging from account lockouts and HMI software non-responsiveness to the debilitating “blue screen of death,” in which machines are rendered useless.

It’s important to note that in some cases, the ICS software may not be patchable at all. For example, some ICSes in the middle of their lifecycle run operating systems such as Windows XP and similarly dated Windows Server releases that are no longer actively supported. Given that the average lifecycle of an ICS is more than a decade, it could be years before asset owners can deploy newer, supported operating systems. An older system is therefore susceptible to both known and unknown threats – and the known threats won’t be patched.

A good cybersecurity strategy for ICS must include both a systematic approach to patch management and compensating cybersecurity controls for when patching is not an option. Patch management increases cybersecurity through the installation of patches that resolve bugs and address operability, reliability, and cybersecurity vulnerabilities. The ISA-TR62443-2-3 technical report, developed by ISA99 Working Group 6 in collaboration with the IEC 62443 standards body, addresses the patch management aspect of ICS cybersecurity.

Here are five factors to consider when choosing ICS security:

  • Reduce the attack surface – Make sure that the technology you select gives you granular controls at the application, user, and content levels. Also ensure these controls are contextually tied together rather than residing on separate, disjointed network security devices. This leads not only to better administrative efficiency but also to more accurate policies.
  • Stop the propagation of known threats – Select a segmentation gateway with native threat prevention capabilities to stop known malware and exploits from propagating in your network. This serves as your first level of defense for protecting unpatched systems from threats, whether specific to ICS products or aimed at more general business software and operating systems. Having this capability natively in the gateway, instead of implemented as a separate add-on device, is important to ensure once again that there is shared context with the application/protocol and user information collected by the gateway.
  • Deploy sandboxing technology to stop zero-day threats – Advanced attackers will use zero-day malware to compromise your network. Network sandboxing technologies that isolate suspicious payloads in a cloud-based environment, analyze them to determine their nature (malicious or benign), and send protections back to the user are invaluable for preventing zero-day threats from propagating into networks. Make sure this capability is native to the access control device so that there is a closed loop for protection, rather than a detection-only device.
  • Prevent zero-day attacks on the endpoint – If a threat manages to bypass network security or is introduced locally at the endpoint, it is important that any attempt to compromise the system, whether using exploits or malware, is stopped. Detection-only technologies are not enough; the risks in critical infrastructure applications are too high to allow threats to execute successfully. Newer technologies are available which, rather than trying to stop exploits and malware using known threat signatures (hashes, strings, behaviors), block the underlying techniques employed by these threats, halting even zero-day attacks on unpatched systems.
  • Select a platform vs. point solutions – Integrating point solutions for network security, sandboxing and endpoint security leads to information silos, slow forensics, high administrative overhead and security gaps. When selecting a security architecture, make sure the components you pick work together as a platform. Application/content firewall, IPS, and URL filtering functionality should be integrated into the access control device. Furthermore, the access control device should make use of the output of the cloud sandboxing technology to ensure a closed loop for stopping zero-day threats. The endpoint security should also take advantage of the threat intelligence provided by the cloud to ensure an even stronger security posture than if it were working in isolation.

Author: Del Rodillas, Palo Alto Networks

How COBIT 5 Can Help Internal Audit Be “The New Pillar of Senior Management”

Internal audit has recently been called “the new pillar of senior management” because it is a key element in the structure of the company, contributing to the strength of internal control, risk management and corporate governance. COBIT 5, ISACA’s latest framework for the governance and management of enterprise IT, can help the internal audit function become this pillar in many ways.

COBIT 5 is based on the assumption that companies exist to create value for their stakeholders. If companies exist for this purpose, auditors have to assess and report to the board of directors on whether benefits are delivered and risk and resources are optimized.

Internal auditors can use COBIT 5 to set and prioritise specific enterprise goals and IT-related goals.

To be the pillar of senior management, auditors have to consider:

  • Stakeholder value of business investments: Auditors should assess the alignment of IT with business strategy; executive management commitment regarding IT-related decisions; the optimization of IT assets, resources and capabilities; and the realization of benefits from IT.
  • Management of business risk to protect assets: Auditors should assess how well IT-related business risk is managed and how well information, processing infrastructure and applications are secured.
  • Compliance with external laws and regulations and internal policies: Auditors should assess IT compliance with legal and internal requirements and IT support for business compliance with these requirements.
  • Optimization of business process functionality: One of the objectives of internal control is improving business process functionality. Internal auditors should assess how well applications and technology are integrated into business processes to enable and support them.

If internal auditors address these goals for the enterprise, senior management will have to consider internal audit an important resource, as “a new pillar.”

Graciela Braga, CGEIT, COBIT 5 (F), CPA
Argentina

[ISACA]

Using Technology to Achieve Organizational Goals

In a recent interview with CIO Asia, Rene Bonvanie, Palo Alto Networks CMO, discusses the important relationship between the CIO and CMO roles and using technology to foster collaboration and growth.

“The role of the CMO has effectively moved to that of being the ‘chief digital officer,’” Rene notes. “Targeted and informed engagement with thousands or millions of customers would usually require multiple technologies and applications. It is therefore an imperative that the CMO masters these technologies and has control over them. Tech savvy or not, CIOs should work with the CMOs to help fill in the gaps in terms of technological proficiency.”

Read Rene’s interview here.

[Palo Alto Networks Blog]

Design Correlation Rules to Get the Most Out of Your SIEM

Every networked environment generates thousands of logs from disparate systems. Individually, many of these events may seem worthless, but when looking for a specific needle in the haystack, these logs can be very valuable. To gain this level of visibility, many organizations deploy a Security Information and Event Management (SIEM) solution.

A SIEM performs several tasks that, combined, make it a great analytics tool. SIEM is big data analytics for security events. The functionality generally includes the following:

  • Centralize logs (and in some cases more). The logs from all of your systems can be forwarded to the SIEM, so that you only need to go to one place to get a consolidated view of what your systems are doing. Typically any of your network security appliances, routers, switches, servers, and network management solutions will generate logs that are supported by the SIEM.
  • Normalize logs. Many logs contain similar types of information; however, each formats it differently. For instance, a firewall log and a router security log will both have a timestamp and a source IP address, but if you were to put them into a spreadsheet, they might end up in different columns. Normalizing the events is the process of manipulating the log data so that they all land in the same columns, allowing you to search for something whose results could be found in many different types of logs.
  • Correlate logs. Correlation allows you to apply intelligence to the logs, typically with if/then logic statements (a sketch of both steps follows this list).
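To make the normalization and correlation steps concrete, here is a minimal Python sketch. The two log formats, field names, and deny threshold are hypothetical illustrations, not any particular SIEM’s syntax; the sketch maps a firewall log and a router log onto one schema, then applies a simple if/then rule across both.

```python
import re
from collections import Counter

# Two hypothetical raw formats carrying the same fields in different layouts.
FIREWALL_RE = re.compile(r"^(?P<ts>\S+) FW DENY src=(?P<src>\S+) dst=(?P<dst>\S+)$")
ROUTER_RE   = re.compile(r"^(?P<ts>\S+) RTR denied (?P<src>\S+) -> (?P<dst>\S+)$")

def normalize(line):
    """Map either format onto one schema so both land in the same 'columns'."""
    for device, pattern in (("firewall", FIREWALL_RE), ("router", ROUTER_RE)):
        m = pattern.match(line)
        if m:
            return {"device": device, "ts": m.group("ts"),
                    "src_ip": m.group("src"), "dst_ip": m.group("dst")}
    return None  # a real SIEM would keep unparsed lines in raw form

def correlate(events, threshold=3):
    """If/then rule: IF one source IP is denied `threshold`+ times, THEN alert."""
    denies = Counter(e["src_ip"] for e in events if e)
    return [ip for ip, count in denies.items() if count >= threshold]

raw = [
    "10:00:01 FW DENY src=10.0.0.5 dst=192.0.2.1",
    "10:00:02 RTR denied 10.0.0.5 -> 192.0.2.2",
    "10:00:03 FW DENY src=10.0.0.5 dst=192.0.2.3",
]
print(correlate([normalize(line) for line in raw]))  # ['10.0.0.5']
```

The schema is the point: once both device types populate the same src_ip column, a single rule can count denies across them without caring which device produced the log.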

There are many indicators of compromise (IOCs) that can be identified by forwarding logs to a SIEM. These are a few that may be helpful to get started with a SIEM and get you thinking about how to develop more.

Many of the examples below will require some tinkering to fit your environment. The good news is that, as you develop these rules, there is nothing that will break. Each tuning configuration will lead you toward a better-monitored environment designed to reduce the amount of time it takes to resolve issues when they occur.

Attack Sequence

System compromises today unfold in stages, which we describe as the cyber attack lifecycle. Preventing any one of the stages can thwart the attack. The stages include reconnaissance, delivery of malware, exploitation of a vulnerability, installation of command-and-control software, command and control, and action on the objective or exfiltration of data.

Once a target is identified, an initial malicious payload is delivered to exploit a vulnerability on the target. If successful, a more robust piece of software, the command-and-control agent, is installed. Once installed, the command-and-control software talks to its control or management server. At this point, any action is possible: logging keystrokes, looking for passwords, exfiltration of data.

DNS is an important part of the attack infrastructure. Resolving domain names can be important to keep stability in the malware and allow for quick changes of IP addresses, if the management server gets taken down. For further stability, malware authors will often use their own DNS servers and configure compromised systems to resolve domain names from them. In addition to command and control, using a malicious DNS server allows attackers to return any IP address they want for any site requested. This allows attackers to set up phishing websites for popular banking or email services, steer that traffic to their website, and collect credentials.

Vendor-Provided Correlation Rules

My general methodology with a SIEM (and any Intrusion Prevention System, for that matter) is to enable everything, see what happens, and then tune out what I am not interested in. The process is to enable the correlation rules once your events are being forwarded and see how the SIEM reacts.

For example, you may have a network monitoring system sending UDP packets to port 161 to poll system information via SNMP, generating lots of firewall events. These firewall events may trigger a port scanning rule on the SIEM. The port scanning correlation rule is still valuable, just not for this use case. The best practice is to keep the rule enabled but ignore logs that have your network management servers as the source, your internal subnets as the destination, and UDP port 161/SNMP as the service/application, as sketched below.
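A sketch of that tuning exception, with hypothetical server addresses, subnet prefixes, and field names (a real SIEM expresses this in its own rule language):

```python
# Known-good SNMP polling that the port-scan rule should ignore.
NMS_SERVERS = {"10.1.1.10", "10.1.1.11"}   # hypothetical network management servers
INTERNAL_NETS = ("10.", "192.168.")        # hypothetical internal subnet prefixes

def is_expected_snmp_poll(event):
    """True for management-server polling: UDP 161 is SNMP polling (traps use 162)."""
    return (event["src_ip"] in NMS_SERVERS
            and event["dst_ip"].startswith(INTERNAL_NETS)
            and event["proto"] == "udp"
            and event["dst_port"] == 161)

def port_scan_rule(event, raise_alert):
    if is_expected_snmp_poll(event):
        return  # filtered: expected management traffic; the rule itself stays enabled
    # ...the SIEM's actual port-scan detection logic would run here...
    raise_alert(event)
```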

Here are several potential correlation rules that detect compromised hosts using only firewall logs (the first two are sketched in code after the list):

  • Rogue Name Servers. User devices should be configured to use the internal corporate DNS servers. The local DNS servers should be able to get out to the Internet to find domains they don’t have information on. This correlation rule should monitor for any source address accessing UDP/TCP port 53 (or even better the DNS application), where the destination is NOT the internal DNS servers.
  • Rogue Proxy Servers. From your perimeter firewalls, the only LAN traffic you should see going anywhere on TCP port 80/443 (or the web-browsing and SSL applications) is traffic coming from the proxy server. Anything else could be an attempt to subvert this control, whether by an employee or contractor or by malware configured to do so. If the source IP is NOT your proxy, and the destination application is web browsing or SSL, trigger an alert.
  • BOTNET Traffic. Older command-and-control software would leverage Internet Relay Chat (IRC) for management. While IRC is not necessarily riskier than any other instant messaging protocol, it is bad more often than not when seen in a corporate network. Just be prepared to create filters for some of your network admin computers. If the source and destination are any host, and the application in use is IRC, trigger an alert.
  • SPAM bot. All corporate email will be delivered through the corporate SMTP relays. If SMTP traffic is seen from a system using a different SMTP server, there is a good chance that the host is infected with software designed to send SPAM. If the source is any host on your LAN, the destination is any host other than your SMTP servers, and the application is SMTP, trigger an alert.
  • Server Compromise. The basics of client-server networking: clients connect to servers on well-known TCP ports below 1024, from a source port in the 1024-65535 range. When someone connects to a web server, they will be using a source port between 1024 and 65535 and connecting to TCP 80 or 443, and that is what should show up in the firewall logs. If the source is your web server(s) and the source port is not TCP 80 or 443, trigger an alert: it means your web server is initiating the connection, which it should not be doing. Some tuning will be needed to allow for software installation and updates. The same logic can be applied to any type of server.
  • Misuse of Administration Account. While not firewall-log related, this one is great. All systems have some form of administrator or root account. Best practice is for administrators to have the same level of privilege on their own named accounts, which provides accountability for the changes they make rather than showing a generic account name in the logs. Once this is in place, if there is a successful login and the username is “administrator,” “root,” or any other generic device administrator, trigger an alert: someone is likely trying to make unauthorized changes.
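Here is a minimal sketch of the first two rules over normalized firewall events; the addresses and field names are hypothetical placeholders rather than a real SIEM’s rule language:

```python
INTERNAL_DNS = {"10.1.1.53", "10.1.2.53"}  # hypothetical corporate DNS servers
PROXY_SERVERS = {"10.1.1.80"}              # hypothetical corporate proxy

def rogue_name_server(event):
    """IF any source talks DNS (port 53) to a host that is NOT internal DNS, THEN alert."""
    return event["dst_port"] == 53 and event["dst_ip"] not in INTERNAL_DNS

def rogue_proxy(event):
    """IF web traffic (80/443) does not originate from the proxy, THEN alert."""
    return event["dst_port"] in (80, 443) and event["src_ip"] not in PROXY_SERVERS

def evaluate(event):
    for name, rule in (("Rogue Name Server", rogue_name_server),
                       ("Rogue Proxy", rogue_proxy)):
        if rule(event):
            print(f"ALERT [{name}]: {event}")

# A workstation resolving names against an outside DNS server trips the first rule.
evaluate({"src_ip": "10.0.0.7", "dst_ip": "198.51.100.9", "dst_port": 53})
```

Each of the remaining rules in the list reduces to the same shape: a predicate over a handful of normalized fields, plus an exception list that grows as you tune out known-good traffic.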

These are some ideas to get you started with developing correlation rules. Be creative. One way to develop new content is to peruse the SIEM events looking for the ones that are NOT getting correlated. There could be a lot of things happening that you don’t want to happen, but you just don’t have a correlation rule for them yet. When building these rules, you are going to get a lot of false positives in the beginning. Do not get discouraged. Create your rule, then either replay several weeks’ worth of data through it or let it run and keep an eye on it.

For more information on logging capabilities and SIEM partners please visit this page. Palo Alto Networks has also added some of this functionality to the appliance and management platforms.

[Palo Alto Networks Blog]
