Hackers vs. Hacked: The Game’s Not Over

The New York Times recently published an article, “Hacked vs. Hackers: Game On,” discussing the current state of network security, and in it made a couple of interesting points about the prevalence of breaches, the need for federal regulation, and how current network defense technology is failing.

While I agree that better defenses are needed because traditional detect-and-prevent solutions aren’t doing enough, I don’t agree that a better solution isn’t out there. But as long as there is money to be earned, attackers will not stop conjuring new methods of attack, making security a constant battle. The winners in this sphere will be the ones who are innovating security measures at an equally rapid rate.

If there is no silver bullet, traditional firewall and antivirus technology are like rubber bullets. To be fair, many companies realized this years ago and have already switched to next-generation enterprise security technology, deployed sandboxes, and turned up the dial a couple of notches on their IT security and remediation teams. Customers of web applications, particularly those with enterprise contracts, are demanding safer products, third-party penetration testing, and brisk vulnerability remediation times.

However, while much progress has been made in the past few years, there is still a gaping hole where network security is concerned, because even most next-generation security technology isn’t doing enough to keep up with determined attackers. Detection-focused technologies are great at detection, but costly and slow when it comes to prevention. Stand-alone point solutions are great at preventing attacks whose delivery methods are primitive or well known, but are no match for advanced threats. Stateful, next-generation solutions are great at identifying traffic with certain protocols on certain ports, but are blind to the evasive maneuvers of clever attacks.

To echo the New York Times, “patch and pray” is not a good security strategy — for some it isn’t a strategy, period. Upgrading to the latest, patched version of an application is a luxury not available to enterprises that can’t afford even a few seconds of system downtime. Even when upgrades are done diligently, organizations are still at the mercy of their vendors. If the vendor doesn’t deem a vulnerability a priority, it’s not getting fixed. Likewise with deployed security devices: too many alerts or false positives thwart any kind of timely preventive or remedial reaction.

Securing a network — really securing it, not just checking off boxes on someone else’s to-do list — requires a deep understanding of how attacks are delivered, and security components deployed at each step within that delivery chain. Those components must be closely integrated with each other so the data they supply gives a complete picture of who, what, when, where, why, and how an attack was launched. Only then can security professionals begin to think about and plan their network defenses more strategically.

Palo Alto Networks is and has been thinking this way for years, and it’s in this aspect that our story diverges from the rest of the “next-generation security” pack. We built a platform that extends its protections against advanced threats to data centers, public and private clouds, and endpoints — both in-office and mobile. Customers have our exhaustive threat intelligence to rely on, and are backstopped by one of the industry’s best support organizations. We live and breathe “prevention” because we know how much network, data, and cybersecurity matter, even if the rest of the world is stuck on “detect and remediate.”

There is definitely some truth to the premise that a catastrophic event with severe physical destruction or loss of life must occur before cybersecurity gets the attention it needs. There aren’t any recorded deaths as a result of a cyber attack — and, gawd, I hope it never comes to that — but the potential for death and destruction is staring us in the face. The other piece necessary for garnering that deserved attention is making sure the public really understands what a cyber attack is — the how, what, who, and why that are essential to grasping any complex concept.

The public is somewhat shielded from the gritty reality of cyber crime, primarily because it requires some technical knowledge, but also because what business in its right mind would want to admit the details of its failings after a particularly dangerous breach in a way that the masses would understand? Breaches mean headlines, and as we’ve seen this past year, headlines for breaches mean C-level executives lose their jobs and businesses their reputations.

When the federal government finally does step in and start regulating data security and breach handling — which is where I’m certain we’re headed — the sense of urgency associated with security will naturally increase, and liability will be very clear-cut.

Until then, it’s up to us, the security vendors, to admit that traditional technologies aren’t doing enough, that there are gaps. But it’s one thing to be realistic about the state of security, or lack thereof, and another to admit defeat and say it can’t be done.

I’ve seen the havoc a breach can wreak not just on the business but on the lives and livelihoods of the everyday people who are at the ultimate receiving end of any attack. This is why I think working in security is important. At the end of the day, it’s about preventing attackers from targeting my family, my friends, my country, and the technology and ideas that will ultimately make the world better.

I know this problem can be solved, and I know that Palo Alto Networks is solving it. Head here to read about our Enterprise Security Platform, which fills the gaps, intelligently fortifies network defenses, and takes a preventative approach to protecting businesses and governments from advanced threats.

[Palo Alto Networks Blog]

Palo Alto Networks 2015 Predictions: Tailored Threat Intelligence

As 2014 comes to a close, our subject matter experts check in on what they see as major topics and trends for the new year. (You can read all of our 2015 predictions content here.)

Reading the collective tea leaves for adversaries 12 months from now is almost always a losing proposition. You are essentially trying to predict the tools, tactics and techniques that are going to be employed by incredibly skilled and intelligent attackers. Yes, we know more data breaches will occur, more records will be stolen, new technologies will be exploited, and more malware will be created than has ever been seen before. These are all givens in today’s threat landscape — the bad guys are out there, getting more efficient at their jobs, and constantly evolving.

The question becomes, what can we do about this in 2015? Here’s how I see it:

1. The year big data security analytics goes mainstream.

For advanced threats, the problem has always been attempting to find the small indicators that could reveal an attack. Many have tried to bring together enough intelligence, horsepower and analysis to find these “needles in a haystack,” but it hasn’t been enough. While there have been hints of success, applying big data analytics techniques to security will come into its own in 2015. We have hit the inflection point where computing power, availability of data, analytic models and, most importantly, the willingness and drive to see them through are here. We will see massive advances this coming year in our ability to collect, analyze, search, correlate, visualize and turn data into actionable security intelligence.
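To make the “needles in a haystack” framing concrete, here is a toy sketch of the kind of statistical baselining these analytics pipelines build on: flag hosts whose daily event counts deviate sharply from the population. The host names and counts are made up for illustration; real systems use far richer models.

```python
import statistics

def flag_outliers(event_counts, threshold=2.0):
    """Return hosts whose event count sits more than `threshold`
    standard deviations above the population mean -- a toy stand-in
    for the anomaly models real security analytics pipelines use."""
    counts = list(event_counts.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [host for host, n in event_counts.items()
            if (n - mean) / stdev > threshold]

# Hypothetical per-host counts of outbound connections in one day.
daily = {"ws-01": 102, "ws-02": 98, "ws-03": 110, "ws-04": 95,
         "ws-05": 101, "ws-06": 99, "db-01": 5400}
print(flag_outliers(daily))  # ['db-01']
```

The point is not the arithmetic but the workflow: turn raw event volume into a short, ranked list an analyst can actually act on.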

2. Tailored threat intelligence.

Increasingly, sophisticated organizations are realizing that certain types of attacks, or certain groups of attackers, come after them. For example, there are certain steps an adversary will take to compromise a retail company’s point-of-sale (POS) systems versus an entertainment organization’s databases, or the customer records at a major hospital. The motivations are different, the exploits and malware unique, and the methods change in each case. 2015 will be a banner year for profiling how attacks differ by industry, determining which vectors are higher risk for individual organizations, and tailoring custom protections in each case.

3. Sharing security intelligence.

Many major enterprises have learned the critical importance of sharing intelligence about the current state of the threat landscape – such as those in organizations like the FS-ISAC. Everyone benefits from information shared by one member, and collective immunity can be developed, stopping advanced attacks before they can compromise multiple organizations. This coming year will see widespread adoption and acceptance of information sharing. The days of “holding it close” are over – the volume and sophistication of attacks require a joint response.

A common theme runs through my thoughts on 2015: making better use of the data we have. Whether it is better algorithms to predict the next attack, understanding your risk posture, or sharing what you know with others – intelligence is key. Turning the massive churn of data enterprise organizations see each day into actionable intelligence, automatically, will be a major theme for 2015.

 

Threat intelligence is among many industry-specific topics planned for Ignite 2015, where you will tackle your toughest security challenges, get your hands dirty in one of our workshops, and expand your threat IQ. Register now to join us March 30-April 1, 2015 in Las Vegas — the best security conference you’ll attend all year.

 [Palo Alto Networks Blog]

 

Palo Alto Networks 2015 Predictions: Datacenter

As 2014 comes to a close, our subject matter experts check in on what they see as major topics and trends for the new year. (You can read all of our 2015 predictions content here.)

 

1. Cloud security will become less cloudy

It’s amazing how fast things change. It was not that long ago that cloud computing skeptics said that no one would use the cloud for business applications because of the security issues. Now we hear from customers that they are moving entire datacenters – not just select applications – to the cloud. Why? Ubiquity is one reason. Reduced costs are another. Finally, they are realizing that security — specifically next-generation security — can be used to protect their applications and data from advanced cyber attacks. But traditional, port-based security technologies cannot exert the same levels of control.

With the recent release of our VM-Series for both Amazon Web Services and KVM joining Citrix SDX and VMware ESXi and NSX support, 2015 will be the year that customers can protect their public, private or hybrid cloud-based applications using the next-generation firewall and advanced threat prevention features found in our enterprise security platform. Further clarifying cloud security will be the elimination of the time-lag between virtual machine provisioning and security deployment through the use of native automation features such as VM-monitoring, dynamic address groups and the XML API.

2. The benefits of network segmentation based on Zero Trust will be realized

During a recent customer visit, a tenured networking professional challenged our discussion around network segmentation based on Zero Trust principles, stating he had been segmenting the network for security for years. “So what’s new here?” he asked. Conceptually there is nothing new here; rudimentary network segmentation can be done by routers, switches and even firewalls. The key difference is in the level of granularity by which we can segment the network.

The rash of recent high profile breaches — where attackers hide in plain sight on the network — points to the need for segmentation principles that are more advanced than mere port, protocol or subnet. As the conversation with this networking professional continued, I pointed out that with application identity, a view into the content, and knowledge of who the user is, we can segment business-critical data and applications in a far more granular fashion than rudimentary segmentation would allow.

Specifically, we can verify the identity of specific business applications, forcing their use over standard ports and validating the user identity. We can find and block rogue or misconfigured applications — all the while inspecting the application flow for file types, and blocking both known and unknown threats. In 2015, I expect to see many organizations continue to re-think how they are segmenting their networks, applying the Zero Trust principle of Never Trust – Always Verify using the application, the respective content and the user as the basis for policy enforcement. The benefits our customers will begin to realize include an improved security posture with less administrative effort.
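As a sketch of the idea — hypothetical rule set and field names, not Palo Alto Networks configuration syntax — a Zero Trust policy keys on application identity, user group, and content, with a default-deny stance:

```python
# Toy Zero Trust policy: match on application identity, user group,
# and content verdict rather than port alone. Default deny ("never trust").
RULES = [
    {"app": "oracle-db",  "users": {"dba-group"},      "block_file_types": {"exe"}},
    {"app": "sharepoint", "users": {"hr", "finance"},  "block_file_types": set()},
]

def evaluate(app, user_group, file_type=None):
    """Allow only if an explicit rule matches app AND user, and the
    content is not blocked; everything else is denied by default."""
    for rule in RULES:
        if rule["app"] == app and user_group in rule["users"]:
            if file_type in rule["block_file_types"]:
                return False  # right app and user, but blocked content
            return True
    return False  # no matching rule: default deny

print(evaluate("oracle-db", "dba-group"))          # True  (allowed)
print(evaluate("oracle-db", "marketing"))          # False (wrong user)
print(evaluate("oracle-db", "dba-group", "exe"))   # False (blocked content)
```

Contrast this with a port-based rule, which would allow any traffic on TCP/1521 regardless of who sent it or what it carried.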

3. 2015: The year of focus

According to IDTheftCenter.org, 2014 had, as of December 2, 708 data breaches resulting in the loss of more than 81 million records. That represents data from roughly 25 percent of the U.S. population, and the year isn’t even over. So in the spirit of Christmas, my last forward-looking 2015 entry isn’t a prediction but a wish. While I don’t believe we will ever know the details behind the 700+ breaches, it’s safe to say that there were multiple steps along the way where someone could have said, “We could have been more focused here.” My 2015 wish is that users, netsec professionals and executives all become more focused on their respective network security responsibilities.

  • Users: Focus on the fact that you are integral to network security – even though you may not see yourself as an attack target, you can easily be an attack entry point. So here are some simple steps to lessen that risk. Count to five and think about the link you are clicking on. Look closely at it, and if you have doubts, don’t click. Say yes to your software (e.g., IE, Adobe, Firefox) updates, as they often include patches for exploitable vulnerabilities — aka attack vectors. Lastly, think about what you do on your company network this way: it’s your benefits, payroll, and other personal data that are at risk, not just the company’s data.
  • Netsec professionals: I wish you had more time, but I’m a realist. My wish for you all is that you be more focused (than you already are) on things that appear out of the norm: strange traffic patterns or application usage in the datacenter, odd outbound behavior around the use of RDP, SSH or TeamViewer, odd data or application access requests. What we do know about many of these attacks is that the activity was hiding in plain sight using common applications – focus and vigilance may help us stop the progress of these attackers.
  • Executives: 2014 showed that not only your company’s reputation, but also your career, is on the line. In 2015 you should focus on becoming more knowledgeable about your data. Where is it stored? Where is it going on the network? Is encryption in use? What SLAs are in place if it is stored externally? With that information in hand, ask your brightest netsec minds what else you can do to protect the data.

 

Datacenter security is among many industry-specific topics planned for Ignite 2015, where you will tackle your toughest security challenges, get your hands dirty in one of our workshops, and expand your threat IQ. Register now to join us March 30-April 1, 2015 in Las Vegas — the best security conference you’ll attend all year.

 [Palo Alto Networks Blog]

 

Keeping Up with Emerging Technologies Amidst Legislative Lag

Government organizations, such as the US Congress, can be a bit slow on the uptake, taking decades to recognize new technology and adjust laws accordingly. For industries that deal with sensitive data, however, relying on legislative lag can lead to a false sense of security: governments around the world have grown wise to the rapid pace of technological development, and the law is increasingly prepared to incorporate new technology as it is developed.

Some of the biggest challenges faced by businesses that handle sensitive personal data are best practices laws. Best practices laws demand a constant awareness of current and new technology and its potential impact on a client’s business practices. Depending on your field, privacy laws and regulations are often so vague that “best practices” just means the most conservative practices you can design, including a good insurance policy.

Contractual obligations are another challenging part of maintaining sensitive data. Businesses and governments frequently mandate data protection via contracts. The European Union (EU) recommends contractual clauses designed to export its privacy regulations to foreign businesses dealing with companies from the EU. Banks, insurers and other large corporations often maximize their protection by demanding “all reasonable protections,” “the utmost care” and other vague statements that seem more concerned with shifting liability to their contractual partner than actually protecting sensitive data.

Employing best practices and fulfilling vague contractual obligations requires an understanding of technology that is still in development. Early adopters of encryption no longer rely on DES and other outdated standards, depending instead on expert consultants who apprise them of improvements and new standards. These experts now regularly advise on more modern technologies and practices, such as AES in encryption, FPE in tokenization, multi-layered privacy design and the merging of identity and access management practices with encryption and de-identification policies. Although not all of these technologies are relevant to all businesses, some are mandated or recommended, and others become relevant due to vague regulations or contractual obligations. The need for technological mobility, flexibility and increased performance requires more points of access and greater protection, in turn leading to bottlenecks and runaway costs. Thus, both profit and compliance demand fast adoption of emerging technologies.
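As a rough illustration of the tokenization idea mentioned above — deliberately not real format-preserving encryption, which uses keyed ciphers such as NIST’s FF1 rather than a stored mapping — a toy vault can swap a card number for a random token that preserves its length and last four digits:

```python
import secrets

class ToyTokenVault:
    """Toy tokenization vault: replaces the leading digits of a card
    number with random digits, preserving length and the last four.
    Illustrative only -- real FPE (e.g., NIST FF1) derives the token
    from a secret key instead of keeping a lookup table."""
    def __init__(self):
        self._vault = {}  # token -> original value

    def tokenize(self, pan: str) -> str:
        head = "".join(str(secrets.randbelow(10)) for _ in pan[:-4])
        token = head + pan[-4:]
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = ToyTokenVault()
tok = vault.tokenize("4111111111111111")
print(len(tok), tok[-4:])                            # 16 1111
print(vault.detokenize(tok) == "4111111111111111")   # True
```

Because the token keeps the original format, downstream systems that validate lengths or display the last four digits keep working without ever seeing the real number.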

Your client’s cybersecurity obligations must also be balanced with its duties under transparency laws and regulations. Transparency and privacy have always been in conflict. Today, we see evidence of the privacy/transparency conflict in arguments over making health data available to researchers, censoring internet search results in the name of privacy and, of course, the ongoing public debate about mass surveillance. You know your client’s current transparency obligations, but how can you prepare them for the future without further sacrificing data security? Developments in the EU offer a good insight into a difficult new reality, one where the privacy concerns of the past are swept under the rug every time a new technology promises to minimize the privacy impact of new transparency rules. The European Medicines Agency (EMA) recently mandated increased transparency of clinical research data — requiring researchers and companies to share sensitive data among themselves while still mitigating the risks of a data breach. Even businesses have joined the fight, with Google and the BBC both planning to undermine the EU’s “Right to be Forgotten” ruling via new transparency reports.

The US has already begun debating the merits of copying the EU’s rules, and American corporations are preparing themselves for the changes around the corner—and confronting the ones that are already here. We all balance priorities in constant conflict: compliance, maintaining consumer confidence and generating a profit. Governments know that new technology is the primary force shaping this balance, and the onus is on businesses to make sure they keep up.

Harris Buller, Attorney, HushHush

Virginia Mushkatblat, Founder of HushHush

[ISACA]

Follow-On to VBA-Initiated Infostealer Campaign: Exploring Related Malware and Actors

In late October, we began examining a VBA-initiated infostealer campaign. This blog post follows up with additional information we gathered on related malware and associated actors.

Pivot On Initial Predator Pain Sample C2

In our previous post, we identified two Command and Control (C2) fully qualified domain names (FQDNs) for the initial Predator Pain sample analyzed: mail.rivardxteriaspte.co[.]uk and ftp.rivardxteriaspte.co[.]uk. We were interested in seeing whether any other malware samples had been observed communicating with these FQDNs and, if so, to which malware family they belonged.

Leveraging the Palo Alto Networks WildFire platform, we found an additional 14 samples that communicated with one or both of these C2 FQDNs between December 27, 2013, and August 1, 2014 (Table 1).

While anti-virus (AV) detections varied widely, all of these samples belong to the Predator Pain keylogger malware family. Additionally, a number of samples were also packaged with the Limitless keylogger, most likely for its exfiltration capabilities. Although Limitless is easily modified, one clear indication that it is employed is a default POST request over TCP/80 to the following URL:

http://www.limitlessproducts[.]org/Limitless/Login/submit_log.php
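That static URL makes a convenient network indicator. A minimal sketch of how one might flag the beacon in web proxy logs — the "METHOD host path" log format here is hypothetical, and since Limitless is easily modified, absence of a match proves nothing:

```python
# Flag the default Limitless exfiltration beacon in proxy logs.
# The URL is defanged with [.] in prose; use the real host in detection.
LIMITLESS_HOST = "www.limitlessproducts.org"
LIMITLESS_PATH = "/Limitless/Login/submit_log.php"

def is_limitless_beacon(log_line: str) -> bool:
    """True if a (hypothetical) 'METHOD host path' proxy log line
    matches the default Limitless POST over TCP/80."""
    fields = log_line.split()
    if len(fields) < 3:
        return False
    method, host, path = fields[:3]
    return method == "POST" and host == LIMITLESS_HOST and path == LIMITLESS_PATH

logs = [
    "POST www.limitlessproducts.org /Limitless/Login/submit_log.php",
    "GET www.example.com /index.html",
]
print([is_limitless_beacon(line) for line in logs])  # [True, False]
```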

Both of these keylogger packages are available in the cybercrime underground for less than $40 USD, with cracked versions available for free (albeit with potentially unwanted “features”). The samples observed had the following capabilities (ordered by prevalence):

  • Collection of system information
  • Web browser password extraction
  • E-mail password extraction
  • Screenshot capture
  • Logging of web browser activity
  • Logging of e-mail activity
  • Logging of chat activity
  • Internet Download Manager password extraction

Figure 1 presents a malware-centric view of identified samples, categorized under the dominant malware family of Predator Pain.

The newly identified samples were almost exclusively downloaded from one domain, nova.co[.]in, which resolved for some time to the same IP as the download domain for the initially analyzed Predator Pain sample, 209.160.24.197. Sometime between mid-March and the first of August, the nova.co[.]in IP resolution shifted to 209.160.26.174. The download domain view of those samples for which data was available can be found in Figure 2.

The broader set of malware also revealed five samples that reached out to Pastebin with an additional C2-oriented request. The associated Pastebin pages were no longer active when checked in November 2014. Figure 3 depicts the C2 communications for these samples.
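The pivot itself is mechanical: given sample-to-C2 observations, collect every sample that contacted a given FQDN. The mapping below is illustrative stand-in data, not the actual WildFire dataset:

```python
from collections import defaultdict

# Hypothetical sample -> observed C2 FQDNs mapping (stand-in data).
observations = {
    "sample_a": ["mail.rivardxteriaspte.co.uk", "pastebin.com"],
    "sample_b": ["ftp.rivardxteriaspte.co.uk"],
    "sample_c": ["unrelated.example.net"],
}

def pivot(c2_fqdns):
    """Return every sample observed talking to any of the given C2 FQDNs."""
    index = defaultdict(set)
    for sample, fqdns in observations.items():
        for fqdn in fqdns:
            index[fqdn].add(sample)
    hits = set()
    for fqdn in c2_fqdns:
        hits |= index.get(fqdn, set())
    return sorted(hits)

print(pivot(["mail.rivardxteriaspte.co.uk",
             "ftp.rivardxteriaspte.co.uk"]))  # ['sample_a', 'sample_b']
```

Running the same index in the other direction (sample to shared FQDNs) is how overlapping infrastructure like the Pastebin pages surfaces.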

Additional Actor Analysis

In our last post for this campaign, we attributed the focal Predator Pain sample to an actor that goes by the handle “Skozzy”. The profile for the related malware enumerated above further supports this attribution, given the shared C2 infrastructure and dominance of two malware packages favored by this actor.

In an attempt to gain further insight into this actor, we also performed a pivot on WHOIS registrant information for the initial Predator Pain sample’s C2 domain. This revealed a “Josh Frank” (sometimes “Josh Franks”, “Franks Josh” or “Josh Frank Kelvin”) persona, which in turn was confirmed as associated with both 419 and dating scams, under at least the following e-mail addresses:

  • frankjosh61[at]yahoo.com
  • frankjosh60[at]yahoo.com
  • joshfrank615[at]yahoo.com (potential)

Additionally, this persona is known to register domains under two organizations, “Xteria pte” and “Amorex”, and has been observed using registrant contact information and/or social engineering references from Malaysia or the United Kingdom. Correlated domains lean towards financial (e.g., banking, brokerage) and dating themes, with registrar activity observed for associated domains as late as October 2014. A sampling of domains linked to this persona follows:

  • maybnk2u-malaysia[.]net
  • lexusmalaysia[.]com
  • attaccq[.]com
  • ahaldarazi[.]com
  • tegbet[.]com
  • acemovement[.]com

While it cannot be said with certainty that “Skozzy” and “Josh Frank” refer to the same individual, it is clear that there is a tie between the two in terms of motivations and objectives: financial gain through personal and/or business fraud.

Expanding on Actor Motivations and Objectives

As noted in the previous blog post on this topic, “roles across nation state, cybercrime, hacktivist and ankle-biter/script kiddies are not mutually exclusive and – in fact – continue to become fuzzier over time.” Actors using tools such as Predator Pain and Limitless have a myriad of options at their disposal for information collection. This extends into an equally broad range of potential malicious uses for that information. It also further blurs the lines between malicious actor categories, translating into increased challenges in characterization/qualification and attribution for cyber attacks.

Opportunism further extends within each of these malicious actor categories – especially with greater availability and a lower cost of entry for increasingly sophisticated and effective tools. One example is the shift by some cybercrime actors away from information theft from individuals and instead scaling up towards higher-yield attacks against companies and organizations. Clever application of insider, sensitive information gleaned from such tools can serve as a multiplier to the perceived legitimacy and potential impact of more precise second-stage social engineering and/or malware attacks.

With the demonstrated success of such tools and techniques to date, we anticipate continued growth in the number of these types of attacks in the future. The Palo Alto Networks Enterprise Security Platform can prevent, address and minimize the risk of these and other associated threats. Learn more about the platform here.

[Palo Alto Networks Blog]
