Every networked environment generates thousands of logs from disparate systems. Individually, many of these events may seem worthless, but when you are looking for a specific needle in the haystack, these logs can be very valuable. To gain this level of visibility, many organizations deploy a Security Information and Event Management (SIEM) solution.
A SIEM performs several tasks that, combined, make it a great analytics tool. SIEM is big data analytics for security events. The functionality generally includes the following:
- Centralize logs (and in some cases more). The logs from all of your systems can be forwarded to the SIEM, so that you only need to go to one place to get a consolidated view of what your systems are doing. Typically any of your network security appliances, routers, switches, servers, and network management solutions will generate logs that are supported by the SIEM.
- Normalize logs. Many logs contain similar types of information; however, they format it differently. For instance, a firewall log and a router security log will both have a timestamp and a source IP address, but if you were to put them into a spreadsheet, those values might end up in different columns. Normalizing is the process of mapping log data into a common set of fields, so that a single search can return results from many different types of logs.
- Correlate logs. Correlation allows you to apply intelligence to the logs, typically with if/then logic statements.
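To make normalization concrete, here is a minimal sketch in Python. The two raw log lines and their formats are invented for illustration; real vendor formats vary, and a SIEM does this parsing at scale with vendor-supplied parsers.

```python
import re

# Hypothetical raw log lines in two different vendor formats.
firewall_log = "2024-05-01T10:15:32Z deny src=10.1.2.3 dst=8.8.8.8 dport=53"
router_log = "May  1 10:15:33 router1 ACL-DROP: 10.1.2.4 -> 203.0.113.9:443"

def normalize_firewall(line):
    # key=value style: "<timestamp> <action> src=<ip> dst=<ip> dport=<port>"
    m = re.search(r"(\S+) (\w+) src=(\S+) dst=(\S+) dport=(\d+)", line)
    return {"timestamp": m.group(1), "action": m.group(2),
            "src_ip": m.group(3), "dst_ip": m.group(4),
            "dst_port": int(m.group(5)), "log_source": "firewall"}

def normalize_router(line):
    # syslog style: "<timestamp> <host> <msg>: <src> -> <dst>:<port>"
    m = re.search(r"^(\w+ +\d+ [\d:]+) \S+ \S+: (\S+) -> (\S+):(\d+)", line)
    return {"timestamp": m.group(1), "action": "deny",
            "src_ip": m.group(2), "dst_ip": m.group(3),
            "dst_port": int(m.group(4)), "log_source": "router"}

events = [normalize_firewall(firewall_log), normalize_router(router_log)]

# Once normalized, one query spans both log types, e.g. all DNS traffic:
hits = [e for e in events if e["dst_port"] == 53]
```

After normalization, the firewall's `src=10.1.2.3` and the router's `10.1.2.4 ->` both land in the same `src_ip` field, which is what makes cross-source searching and correlation possible.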
There are many indicators of compromise (IOCs) that can be identified by forwarding logs to a SIEM. These are a few that may be helpful to get started with a SIEM and get you thinking about how to develop more.
Many of the examples below will require some tinkering to fit your environment. The good news is that, as you develop these rules, there is nothing that will break. Each tuning configuration will lead you toward a better-monitored environment designed to reduce the amount of time it takes to resolve issues when they occur.
System compromises today progress through several stages, which together are described as the cyber attack lifecycle. Preventing any one of the stages can thwart the attack. The stages include reconnaissance, delivery of malware, exploitation of a vulnerability, installation of command-and-control software, command and control, and finally action or exfiltration of data.
Once a target is identified, an initial, malicious payload will be delivered to exploit a vulnerability on the target. If successful, more robust software, command and control, is installed. Once installed, the command and control software will talk to the control or management server. At this point, any action is possible — logging keystrokes, looking for passwords, exfiltration of data.
DNS is an important part of the attack infrastructure. Resolving domain names can be important to keep stability in the malware and allow for quick changes of IP addresses, if the management server gets taken down. For further stability, malware authors will often use their own DNS servers and configure compromised systems to resolve domain names from them. In addition to command and control, using a malicious DNS server allows attackers to return any IP address they want for any site requested. This allows attackers to set up phishing websites for popular banking or email services, steer that traffic to their website, and collect credentials.
Vendor-Provided Correlation Rules
My general methodology with a SIEM (and any intrusion prevention system, for that matter) is to enable everything, see what happens, and then tune out what I am not interested in. Once your events are being forwarded, enable the correlation rules and watch how the SIEM reacts.
For example, you may have a network monitoring system sending UDP packets on port 161 to poll system information via SNMP, generating lots of firewall events. These firewall events may trigger a port scanning rule on the SIEM. The port scanning correlation rule is still valuable, just not for this use case. The best practice would be to keep the rule enabled and ignore logs that contain your network management servers as the source, your internal subnets as the destination, and UDP port 161/SNMP as the service/application.
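A suppression filter like the one just described can be sketched as a simple predicate. The management server addresses, internal subnet, and event field names below are assumptions for illustration; in practice you would express this in your SIEM's own filter syntax.

```python
import ipaddress

# Assumed values for this sketch; substitute your own environment's.
NMS_SERVERS = {"10.0.0.10", "10.0.0.11"}          # network management hosts
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8")]

def suppress_snmp_polling(event):
    """Return True if this event is routine SNMP polling and should be
    excluded from the port-scan correlation rule."""
    return (event["src_ip"] in NMS_SERVERS
            and any(ipaddress.ip_address(event["dst_ip"]) in net
                    for net in INTERNAL_NETS)
            and event["protocol"] == "udp"
            and event["dst_port"] == 161)

# Routine polling from a management server: suppressed.
polling = {"src_ip": "10.0.0.10", "dst_ip": "10.3.4.5",
           "protocol": "udp", "dst_port": 161}

# Same traffic from an arbitrary workstation: still feeds the scan rule.
suspect = {"src_ip": "10.2.3.4", "dst_ip": "10.3.4.5",
           "protocol": "udp", "dst_port": 161}
```

The key point is that the filter matches on all three dimensions at once (source, destination, service), so the port scanning rule still fires for SNMP probes coming from anywhere else.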
Here are several potential correlation rules that detect compromised hosts using only firewall logs:
- Rogue Name Servers. User devices should be configured to use the internal corporate DNS servers. The local DNS servers should be able to get out to the Internet to find domains they don’t have information on. This correlation rule should monitor for any source address accessing UDP/TCP port 53 (or even better the DNS application), where the destination is NOT the internal DNS servers.
- Rogue Proxy Servers. From your perimeter firewalls, you should only see traffic from the LAN subnet coming from the proxy server to anywhere on TCP port 80/443 or application web browsing and SSL. Anything else could be an attempt to subvert this control by an employee or contractor or malware configured to do so. If the source IP is NOT your proxy, and the destination application is web browsing or SSL, trigger an alert.
- BOTNET Traffic. Older command and control software would leverage Internet Relay Chat (IRC) for management. While IRC is not necessarily more risky than any other instant messaging protocol, it is bad more often than not when seen in a corporate network. Just be prepared to create filters from some of your network admin computers. If the source and destination are any host, and the application in use is IRC, trigger an alert.
- SPAM bot. All corporate email will be delivered through the corporate SMTP relays. If SMTP traffic is seen from a system using a different SMTP server, there is a good chance that the host is infected with software designed to send SPAM. If the source is any host on your LAN, the destination is any host other than your SMTP servers, and the application is SMTP, trigger an alert.
- Server Compromise. In basic client-server networking, clients connect to servers listening on well-known TCP ports (below 1024), using an ephemeral source port in the 1024-65535 range. When someone connects to a web server, they will use a source port between 1024 and 65535 and connect to TCP 80 or 443; that is what should show up in the firewall logs. If the source is your web server(s) and the source port is not TCP 80 or 443, trigger an alert: your web server is initiating the connection, which it should not be doing. Some tuning will be needed to allow for software installation and updates. The same logic can be applied to any type of server.
- Misuse of Administration Account. While not firewall-log related, this one is great. All systems have some form of administrator or root account. Best practice is for each administrator to use a named personal account with the same level of privilege, which provides accountability for the changes administrators make rather than showing a generic account name in the logs. Once this is in place, if there is a successful login with the username "administrator," "root," or any other generic device administrator account, trigger an alert. Someone is likely trying to make unauthorized changes.
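The rules above can be sketched as simple predicates over normalized events. This is a minimal illustration, not vendor rule syntax: the server addresses, account names, and event field names are all assumptions you would replace with your own environment's values.

```python
# Assumed infrastructure for this sketch; substitute your own.
DNS_SERVERS = {"10.0.0.53"}       # internal resolvers
PROXY_SERVERS = {"10.0.0.80"}     # outbound web proxy
SMTP_RELAYS = {"10.0.0.25"}       # corporate mail relays
WEB_SERVERS = {"10.0.1.10"}       # public-facing web server(s)
ADMIN_ACCOUNTS = {"administrator", "root"}

def rogue_dns(e):          # DNS not destined for the internal resolvers
    return e.get("dst_port") == 53 and e.get("dst_ip") not in DNS_SERVERS

def rogue_proxy(e):        # web traffic that bypasses the proxy
    return (e.get("dst_port") in (80, 443)
            and e.get("src_ip") not in PROXY_SERVERS)

def spam_bot(e):           # SMTP to anything but the corporate relays
    return e.get("dst_port") == 25 and e.get("dst_ip") not in SMTP_RELAYS

def server_compromise(e):  # a web server initiating its own connections
    return (e.get("src_ip") in WEB_SERVERS
            and e.get("src_port") not in (80, 443))

def admin_misuse(e):       # successful login with a generic admin account
    return (e.get("event") == "login_success"
            and e.get("user") in ADMIN_ACCOUNTS)

RULES = [rogue_dns, rogue_proxy, spam_bot, server_compromise, admin_misuse]

def alerts(event):
    """Return the names of every correlation rule the event trips."""
    return [rule.__name__ for rule in RULES if rule(event)]

# A workstation querying an external DNS server trips the rogue DNS rule:
dns_event = {"src_ip": "10.2.3.4", "dst_ip": "8.8.8.8",
             "src_port": 50000, "dst_port": 53}

# A generic root login trips the admin misuse rule:
login_event = {"event": "login_success", "user": "root"}
```

In a real deployment each predicate would also carry the exceptions discussed above (update servers, IRC-using admins, and so on), which is exactly the tuning work the next paragraph describes.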
These are some ideas to get you started developing correlation rules. Be creative. One way to develop new content is to peruse the SIEM events looking for the ones that are NOT getting correlated. A lot could be happening that you don't want to have happen, but that you simply don't have a correlation rule for yet. When building these rules, you will get a lot of false positives in the beginning. Do not get discouraged. Create your rule, then either replay several weeks' worth of data through it or let it run and keep an eye on it.
[Palo Alto Networks Blog]