I understand the stress of information security management. The stakes are high, our methodologies are continuously questioned and evolving—and rightly so. And yet our customers/stakeholders/employees/executives/families wonder why we haven’t solved that whole cyber security thing yet.
My goal in this post is to highlight an area of vulnerability management that, for some, is still around the corner. Think of this as a heads-up. I’ll be speaking about this topic, and releasing brand-new data from ISACA, at the upcoming CSX conferences in Las Vegas, London and Singapore, in the hope of relieving some of that stress and surprise when what the researchers are doing now starts significantly impacting your security program.
A lot of what arrives on the desk of security and compliance managers starts in the labs of security researchers. You know, “hackers.” For those not familiar with the security research community—these are the reverse engineers, bug hunters, exploit developers and creators of penetration testing tools that raise the security bar for vendors. Finding problems is their day job.
Over the past couple of years, the security research community has been shifting gears and setting their radar on a new target: firmware. Once obscure, firmware and embedded device research is now becoming mainstream. This year at Black Hat Las Vegas, at least 20 percent of the presentations were in some way related to Internet of Things (IoT) and/or firmware security, and trainings relating to device compromise via firmware are getting more popular.
Why Focus on Firmware?
These researchers are responding to the growing number of systems and embedded devices powered by insecure firmware. These devices can be lucrative targets, and the cost of compromise is relatively low.
Meanwhile, security and technology managers are already overworked just handling the basics: firewalls, endpoint security, intrusion prevention systems, access management, OS security; the list goes on. Solutions around firmware integrity monitoring are emerging, but many are not aware of the need.
Firmware: Easy to Pwn?
The security industry has made strides in making attacks on computers and servers more difficult, driving up the cost of attack by requiring advanced techniques to circumvent modern OS security mechanisms. Strong OS and hypervisor-level protections make systems less attractive targets, but not so much if the underlying firmware is left undefended.
There are a few fundamental reasons why firmware can make a realistic target:
No upgrade path for firmware: In contrast to software, firmware can be more difficult to update. Update policies may not exist; indeed, the ability to update may not even exist. Add to this the resiliency of these systems—literally devices that may sit around for decades. Changes in security requirements (e.g., updated encryption algorithms) may not be reflected in updated firmware. Even unsophisticated attack techniques are highly likely to work across outdated security mechanisms.
Traditional methods don’t apply or can be side-stepped: No matter how many layers of security are built into the OS, ultimately a system relies on the underlying firmware to boot and interact with hardware. Once firmware integrity is compromised, the other layers of protection may as well not exist. Attackers can bypass sophisticated security measures by directly targeting the firmware, which gets unfettered access to device functionality.
Breaches are hard to detect: Traditional protection systems do not monitor firmware integrity.
The new Advanced Persistent Threat (APT): Once a breach is detected, it is difficult to remediate. Malware can be cleaned up with antivirus or sandboxed on most systems, but a firmware compromise can persist and hide malicious behavior for months and years. Compromised firmware can also allow OS-level attacks to recur even after normal remediation actions are implemented.
The Internet of Firmware
Traditionally, firmware is associated with the BIOS on a PC, but embedded devices (a.k.a. IoT) rely on firmware in several of their components. We are not used to thinking of these new types of devices as miniature computers that need the same care in deployment, management and protection as our servers, computers and mobile phones. And they are out there by the billions: Not just in newfangled “smart” kickstarter projects for the home, but in mission- and life-critical devices used in factories, power plants, medical equipment and point-of-sale systems.
What to Do?
The role of firmware across servers, network devices, mobile devices, storage systems and the IoT creates an abundance of targets, coupled with surprisingly low barriers to entry for attackers. If an attacker owns the IoT, they own the future fabric of our existence.
So, this new area of focus for researchers is not a trend that will be changing any time soon. As firmware-based vulnerabilities move from theory to reality, how is this scenario affecting what plays out in the enterprise? How is it addressed by compliance frameworks? How do we address these risks?
Editor’s note: Justine Bone will be a keynote speaker at all three CSX 2016 conferences, presenting Mind the Gap: Analyzing Cyber Security Controls that Few Organizations are Implementing, and Why. An information technology and security executive with technical background in software security, risk management, information security governance and identity management, Bone spent more than 15 years working in the private sector for financial, news and information security companies, plus several years serving the intelligence community.
While there are more opportunities for security professionals than ever before, it is important to understand why this is occurring. From a big picture perspective, there are three major reasons one can point to.
Three Key Reasons for Rising Demand for Security Pros
The first is an increasingly complex and demanding regulatory landscape that companies must comply with. This includes well-established requirements such as SOx, PCI-DSS, GLBA, HIPAA and FISMA, as well as new and evolving standards companies will be expected to comply with in the future.
The second, which ties back to number one, relates to the increasing demands that organizations put on each other. Your organization may have a great security program but you are only as good as the weakest link, which may be a vendor or service provider. With the increased scrutiny on partners and providers, the bar has been raised for security programs everywhere.
And third is an increased awareness of security risks from the boardroom to the individual consumer. The old journalistic maxim “if it bleeds, it leads” now applies to cyber breaches and security events, which have become mainstream news. While security challenges are nothing new to practitioners, all things cyber have become a hot topic in the media, especially when it impacts individuals and consumers.
Certainly, there are other factors, but these three issues are some of the biggest drivers that have been contributing to the demand for security and IT risk professionals.
What does this mean to the job seeker? I’ll put it this way: just because everybody wants to dance, it doesn’t mean they know how… or can even snap their fingers to the beat. More simply, just because organizations are interested in building security programs, it does not mean they actually know what they are doing, especially when it comes to talent.
Over the last two years I have had hundreds of conversations with security professionals who are back on the market a year or less after joining a new company. Why? In most cases, the opportunity was not what it was represented to be and they have been put in a situation where they are either unsupported, professionally regressing, or set up to fail.
This is frustrating because while security professionals are often described as paranoid, skeptical or even jaded – mindsets critical to the job – most are idealists at heart. I don’t know many dedicated practitioners who don’t love the work they do or believe they are fighting for a worthy cause. The truth is, we are in a global technological arms race and it takes special people willing to take on these kinds of challenges. As a result, many go into the interview process with an overly optimistic mindset and don’t ask the hard questions.
Asking the Hard Questions
What are the hard questions? They are the ones that every security and IT risk professional needs to ask while interviewing and before accepting a new position. They are the questions that can uncover future obstacles and allow you to make a more informed, objective decision about an opportunity. So for simplicity’s sake, I’ll break them down into some basic categories: motive, history, leadership, resources and path.
Here are examples of the questions one needs to ask for each category:
Motive – Why is security important to this organization? What’s driving the program? What are the assets that need protection and how vital are they to the company’s success? How much is driven by compliance? Is security seen as a business enabler or a check box? What are the major initiatives in the coming 1-5 years? Why is the position open? How long has it been open? Is it open due to attrition, a particular security event or part of a new initiative?
History – What do you know about the company’s business? How do they stack up against their competitors? What are the security-specific challenges that organizations in their industry face? What has the company’s past position been towards security? Have they experienced any recent breaches or incidents? Do they have an established program? If so, how large and what kind of attrition have they had? Is this their first effort to build a program? Why? (See motive) How has the preceding security organization succeeded or failed?
Leadership – From the executive leadership team down, what can you learn about the culture of the organization? What is the CEO’s or board of directors’ public position on security? Has there been high turnover at the CIO or CISO level? What about the supporting security organization? What can you learn about the current approach to security? What can you learn about the manager your future role will report to? What has his/her career progression been?
Resources – This is critical for any level, but especially leadership roles. What is the annual budget for security? How is this determined? If more resources are needed, what’s the process for attaining them? To what extent does the security organization rely on external service providers? What is the current and projected headcount of the team? How is the team currently structured? What is the attrition level? If high, why? What kind of internal or external recruiting support can you expect? Does the company pay competitive salaries? Do you have a dedicated HR or recruiting partner? What are your impressions of the interview experience? Is it positive, competitive, effective?
Path – How is success determined? What are the internal growth opportunities? What is the average tenure of previous employees in your role? Where did your predecessor end up? Does the company offer any support for certifications, training or continued education? Does the company allow employees to attend or present at industry events? How does a role with this company align with or support your long-term career goals? How strong is the regional market for security professionals should you decide to move on?
Understandably, it’s not always possible to ask or get answers to every question, but it’s worth the effort. We’re talking about a serious commitment on your part and since there are more jobs than qualified applicants, you have an advantage. Companies that are unwilling or unresponsive to your questions may not be the best choice. In fact, if they are unwilling or unable to answer these kinds of questions, it’s a red flag and the buyer should beware.
Remember, the goal is to avoid landing in a new position that looked great on the surface but turned out to be a mistake. Making a career move is a big investment in time, energy and your future. The more a person understands about a career opportunity before they accept, the better they will feel about their situation. With the significant challenges facing all of us, it is more important than ever to get it right. So don’t be afraid to ask the challenging questions. As the saying goes, “Trust, but verify.”
Gliimpse described itself as “healthcare’s platform for patient data. By unlocking patient data silos, we aggregate fragmented data into a patient-owned, longitudinal health profile. Gliimpse is your personal health history, in the palm of your hands.”
Why is this announcement relevant? I think this move could shake up the way we view and manage healthcare data in the long term, and in the short term, influence active conversations being had around patient data, not just in the U.S., but in the U.K. and around the world.
The one goal everyone agreed upon during the conversation at our roundtable was the need for transparency, and the need for patients to know how their data is shared, and why. Gliimpse did a very good job of communicating the benefit to the user in handing over sensitive medical data to the company, “Gliimpse began with a simple idea – everyone should be able to manage their health records, and share them securely with those they trust.”
What is less clear is who (if any entity) will be able to access this type of data in the future, beyond the patient and their healthcare providers. Fortune wrote that “it’s unclear at this point what Apple might have planned for Gliimpse, and that it might soon merge Gliimpse into one of its divisions to work on its healthcare efforts at some point in the future.”
Whatever happens, if Apple does roll out a service similar to Gliimpse’s proposition, I expect there will be a lengthy privacy statement detailing how the data will be used, which the consumer will need to sign before using the service. I imagine that many people will sign up, given the benefits the technology could bring them.
The proposition is certainly enticing. Earlier this year Gliimpse tweeted figures from a HealthMine survey, which revealed that 53 percent of consumers can’t access their electronic health data. The data also showed that 74 percent of consumers say easy electronic access to health data would improve their knowledge of their health and improve communication with their physicians.
The company’s website (which is no longer available, but the text is quoted here) stated “We all leave a bread-crumb trail of our medical ‘stuff’ – our health data and records that we can’t take with us when we leave a doctor’s office or clinic. Providers can’t easily share our records because they’re under HIPAA, the federal regulations regulating how they share patient data. A lack of interoperability makes sharing data nearly impossible. There are no common formats across a myriad of siloed clinical systems.”
“Thankfully, patients and individuals — like you and me — will help solve these two problems U.S. healthcare faces: data-access and data-sharing. How? There are no HIPAA violations when data flows from our health portals to patients. When patients control the data, the problems disappear,” the company states.
The concept of the patients controlling their own data is certainly an interesting one. This would not only circumvent the laws that govern the use of medical records by institutions in the U.S., but could potentially apply to other territories, such as the U.K.
I’m sure we’ve all shared the frustration of seeing our over-worked doctor flick through our paper medical records in a hurry to find our latest test results, or the inability for one hospital to share our data with another. It’s easy to see the benefit of having all personal medical records available at the touch of the button, but only if the system is highly secure and private.
It will be interesting to monitor the uptake of Gliimpse (or whatever the finalised Apple product is called) with U.S. consumers, and how the privacy and security concerns that will undoubtedly accompany the adoption, are addressed. There are many unanswered questions. For example, how will conflicting data be handled? What if a Gliimpse record and an official record differ? How can integrity across records be achieved? With the rise of consumer health apps and devices, will these be integrated as well?
I look forward to seeing the conversation develop.
Welcome back to our blog series where we reveal the solutions to LabyREnth, the Unit 42 Capture the Flag (CTF) challenge. We’ll be revealing the solutions to one challenge track per week. Next up, the Windows track challenges 1 through 6, followed by 7 through 9 next week.
It looks like we will have to unpack this ourselves before we can solve it. UPX uses the pushad instruction at the beginning to push the registers on to the stack so that it can retrieve them after unpacking and jumping to the original entry point. We can script IDA’s debugger to set a hardware read breakpoint at the location of the pushed registers on the stack to get us close to the OEP.
import idc
import idaapi
import idautils

# Break at the current cursor position (the packed entry point).
idc.AddBpt(ScreenEA())
idc.LoadDebugger("win32", 0)
idc.StartDebugger("", "", "")
idc.ResumeProcess()
idc.GetDebuggerEvent(WFNE_SUSP, -1)

# After pushad, ESP-1 sits just above the saved registers; a 1-byte
# hardware read/write breakpoint there fires when the registers are
# restored near the original entry point.
address = idc.GetRegValue("ESP") - 1
idc.AddBptEx(address, 1, BPT_RDWR)
idc.ResumeProcess()
After we hit our breakpoint, we can remove the breakpoint and run until the tail jump that gets us to the original entry point.
popa Instruction Followed by the Tail Jump
We can take the jump to the unpacked code and then use Scylla with our newfound OEP to dump the process.
We can open our unpacked executable in Binary Ninja and can see there is a path that prints the good boy message and one that prints the bad boy message. There is a function called right before the branch that checks the key and determines what path we will take.
Main Showing the Good Boy and Bad Boy paths
If we look at the function we renamed to check_key, we can see that it moves bytes on to the stack and then checks to see if the input is 16 bytes long.
The program then enters a series of anti-debugging checks that will cause the function to return 0 (FALSE) if they are triggered. Before each check, there is also a string encoding operation performed against our string.
The first anti-debugging check is a call to CheckRemoteDebuggerPresent, which checks to see if the process is being debugged.
The second anti-debugging check is a call to FindWindowW checking for a Window named OLLYDBG, which is a popular debugger used by analysts.
The third anti-debugging check is a call to IsDebuggerPresent, which checks to see if the process is being debugged.
The fourth and final anti-debugging check uses the assembly instruction rdtsc twice as a timing check to see if the process is executing slowly and probably being debugged.
If we pass all the anti-debugging checks, we end up getting the final string operation, which checks the result of all the operations against an offset in the initial buffer of bytes. If they are not equal, the function returns 0 (FALSE). But if they are equal, the result is added, which is used as the XOR key in the final operation.
We can copy off the initial buffer and rewrite the operations in python, so that we can obtain the key.
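The buffer and the per-byte operations themselves are not reproduced here, so as a minimal sketch, assume the observed encoding reduces to a per-byte XOR against the derived key value (buf and the key 0x42 below are illustrative placeholders, not the actual challenge data):

```python
def recover(buf, xor_key):
    # Undo the final XOR step applied to each byte of the copied buffer.
    return bytes(b ^ xor_key for b in buf)

# Placeholder values; substitute the buffer copied from the binary and
# the XOR key derived from the final comparison.
buf = bytes([0x12, 0x03, 0x0C, 0x39])
print(recover(buf, 0x42))  # -> b'PAN{' with these illustrative values
```

The same inversion idea applies to each encoding stage: replay the operations in reverse order against the copied bytes until the expected key falls out.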
We can open it in dnSpy to decompile and debug. We can see the key_click function looks interesting because it is tracking a state if keys are pressed in a certain order.
The keys are numbered from left to right starting at 0 for the white keys, and the same for the black keys. If we press the keys in the correct order, do_a_thing() is called.
This function plays a funny David Bowie video while the key scrolls in ascii art behind.
PAN{B4BY_Y3LL5_5O_LOUD!}
Windows 3 Challenge: Gotta keep your Squirtle happy
We are given an executable for the Squirtle challenge. When we run the binary we see some ascii art of Squirtle and a check for a password. If we get the password wrong, we sadly find out that we just killed a Squirtle and the program exits.
Dead Squirtle from an Incorrect Password
We can open the binary in Binary Ninja and take a look at the main function. If we look at the first branch instruction, there is a function call right before at 401070 that checks the password. We can see that it is just a string compare with the string “incorrect”.
Password Check Function
If we type the password correctly we happily find out that we didn’t kill a Squirtle and we get some more output. We have to pass some anti-debugging and anti-vm checks and then we are told that the answer is written in an answer.jpg file.
Correct Password Output
There is an answer.jpg file written after we ran the program, but it is corrupted so we need to figure out how to make the program write it correctly to disk. We can see at the end of the main function there is a loop with a multi-byte XOR key.
XOR Loop Writing the answer.jpg file
We can assume that if we pass each step we will get the correct key that will output the correct image. At each stage there is a check and then some fake rand() == rand() checks with some funny messages to obfuscate the code. Thankfully there are also helpful hints at each stage if we get stuck or are unsure of the correct path.
Sleep/GetTickCount Check along with fake rand checks
The first check is to see if there is a common debugger window class found.
The second check reads the BeingDebugged flag at offset 2 of the Process Environment Block (pointed to by fs:[30h]) to see if the process is being debugged.
BOOL fs_chk(VOID)
{
    char IsDbgPresent = 0;
    __asm {
        mov eax, fs:[30h]
        mov al, [eax+2h]
        mov IsDbgPresent, al
    }
    if (IsDbgPresent)
    {
        return TRUE;
    }
    return FALSE;
}
The third check uses the Windows API GetTickCount() to make sure the system hasn’t been freshly booted.
DWORD Counter = GetTickCount();
if (Counter < 0xFFFFF) {
The fourth check used Sleep along with GetTickCount() and wanted you to bypass the sleep call.
Sleep(1000);
DWORD Counter2 = GetTickCount();
Counter2 -= Counter;
if (Counter2 > 0xFF)
The fifth check just used the Windows API IsDebuggerPresent to find out if the process is being debugged. The sixth check similarly called the Windows API CheckRemoteDebuggerPresent to find out if the process was being debugged.
The seventh stage checked that the system has at least 2 CPUs.
BOOL cpu_num()
{
    SYSTEM_INFO siSysInfo;
    GetSystemInfo(&siSysInfo);
    if (siSysInfo.dwNumberOfProcessors < 2)
    {
        return TRUE;
    }
    return FALSE;
}
The eighth stage checked that the system has at least 1 GB of RAM (1048576 KB).
MEMORYSTATUSEX statex;
statex.dwLength = sizeof(statex);
GlobalMemoryStatusEx(&statex);
if ((statex.ullTotalPhys / 1024) < 1048576)
The final check looked to see if the CPU hypervisor bit was set.
BOOL hv_bit(VOID)
{
    int CPUInfo[4] = {-1};
    __cpuid(CPUInfo, 1);
    if ((CPUInfo[2] >> 31) & 1)
    {
        return TRUE;
    }
    return FALSE;
}
We can step through the program in a debugger and make sure the correct path is taken. Then we will get the correct image.
Graph Trace of the Correct Path
We could also get the correct key at each stage and grab the buffer of the XOR’d image and decrypt it with python.
We can finally decode the binary and obtain the key (Sorry :)).
PAN{Th3_$quirtL3_$qu@d_w@z_bLuffiNg}
Windows 4 Challenge: 99 bottles of beer on the wall, 99 bottles of beer. Take one down and pass it around, 98 bottles of beer on the wall. Nah, you need to just pass the jugs of beer around.
For this particular challenge, participants were given an x64 binary asking for a valid serial number.
If the serial number is wrong, they should see the image below.
This seems like one of those traditional crackmes. Let’s try to find the function that is checking the user’s input. In order to find it, we should always look for suspicious strings if applicable.
Using Hopper, we found the following strings as indicated in the image below.
So let’s pick one of the “strings” and see where it was referenced.
Let’s try and see whether we can find any GetDlgItemText/GetDlgItemTextW API calls. It seems we have one at 140001464.
If we step debug it and follow the flow, we will come across a string length check at 1400014b3. Now we know that the input string must be 32 characters in length.
Stepping through again, at 140001500 we will encounter a check to ensure that the input characters must be 1, 2 or 3. This makes sure that the serial for this challenge is only comprised of 1, 2 and 3.
If we were to analyze the application at 140001750, we can see that there is an array with an initial capacity of [0, 13, 7]. However the maximum capacity of the “jugs” is [19, 13, 7] and the expected end state is having the array be [10, 10, 0]. You basically have three “jugs” with a size of 7, 13 and 19. The 7 and 13 size “jugs” are filled and the container with size 19 is empty. What you need is 10 in two of the containers (13 and 19).
Let’s re-write it in pseudocode. Let’s assume that the 3 “jugs” are held in the array M, and the 3 jugs are a, b and c.
At first it may seem to be very confusing what is actually going on here. But let’s take a closer look.
This is the limit of the jugs.
Limit: 19 13 7
-----------------------------------
       A    B    C
Jug:   1  | 2  | 3
Water: 0  | 13 | 7

If [B] + [A] > Limit-of-[A]
    Fill [B] and put the remaining in [A]
Else
    Fill [B] and clear [A]
Now in the serial, if we were to start with “31”, it simply means pour jug 3 (C) into jug 1 (A).
So the aim of this is to move around the “beer” so jug A == 10 and jug B == 10.
We would have realized that this is the classic “Liquid Pouring Puzzle” that some, if not most of us, have seen while we were in school.
You can write your own tool based on these findings, but if we were to do it with pen and paper, we should get something like the one below.
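A short breadth-first search can also solve the pouring puzzle programmatically. The sketch below uses the jug numbering from the table above (A=1, B=2, C=3) and assumes each move pours one jug into another until the source is empty or the destination is full:

```python
from collections import deque

CAPS = (19, 13, 7)    # jug A, B, C capacities
START = (0, 13, 7)    # A empty, B and C full

def solve():
    """BFS over pouring moves; returns a list of (src, dst) jug numbers
    that leaves 10 units in jug A and 10 in jug B."""
    seen = {START}
    queue = deque([(START, [])])
    while queue:
        state, path = queue.popleft()
        if state[0] == 10 and state[1] == 10:
            return path
        for src in range(3):
            for dst in range(3):
                if src == dst or state[src] == 0:
                    continue
                amount = min(state[src], CAPS[dst] - state[dst])
                if amount == 0:
                    continue
                nxt = list(state)
                nxt[src] -= amount
                nxt[dst] += amount
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [(src + 1, dst + 1)]))
    return None
```

Each (src, dst) pair then maps to two digits of the serial, so a move like (3, 1) becomes the “31” described above. Note BFS returns a shortest path, which may not match the exact 32-character serial the challenge expects.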
Upon running RGB.exe, we’re presented with three sliders, presumably corresponding to the RGB colors, and once you’ve set their values you can check them. This indicates that we’ll need to figure out the correct three values to access the key.
Wrong values
A quick look at the PE file with Exeinfo shows that it’s a .NET program, which can be unpacked with de4dot.
Checking binary type
Running de4dot against the executable creates a new file, “RGB-cleaned.exe” that we can then decompile with dnSpy to look at the underlying source code.
Deobfuscating binary
When looking at the source code, we come to the challenge that we’ll need to solve.
Algorithm
Simply put, three conditions need to be met to get the MessageBox we want to display. At this point, I started poking around to see if I can just modify the code so it always prints the answer, but when you start diving into the functions being called, you can see there is a bit more going on and requires the actual numbers.
XORing numbers against array of numbers
So I decided to tackle the math aspect instead.
The three conditions that need to be met are that one equation result must equal another equation result and one of the specific values needs to be over 60. I opt to brute force it by iterating through every possible combination of numbers, knowing that each slider will be in a range of 1-255, with one being in the range of 60-255. This gives us roughly 12.7 million possibilities, 255*255*(255-60), which shouldn’t take long at all.
After a few minutes of thinking through the logic, I use the below script to find the value.
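The script is reproduced here only as a sketch, since the decompiled equations aren’t shown above: the check callable below is a placeholder that must be replaced with the real conditions lifted from dnSpy.

```python
def brute_force(check):
    """Iterate every slider combination; `check` is a stand-in for the
    decompiled conditions (two equation results equal, one slider > 60)."""
    for r in range(1, 256):
        for g in range(1, 256):
            for b in range(61, 256):  # this slider must be over 60
                if check(r, g, b):
                    return r, g, b
    return None
```

With the real conditions plugged in, the loop walks the roughly 12.7 million combinations and returns the first triple that satisfies them.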
Opening up this Windows executable quickly reveals that we’re working with some shellcode. If this wasn’t apparent, the clue provided gave some hints as to what you’d be dealing with.
Discover the key in the sh>E11C0DE to rescue the Princess!
@jgrunzweig
To make things easier for challengers, I went ahead and compiled the shellcode into a working executable, versus simply giving you the raw shellcode bytes. Once opened we see that there are no imports and only seven functions.
Figure 1 Functions in shellcode and import table
Without even debugging this shellcode we can quickly scan the seven functions provided to see if anything jumps out. Sure enough, we quickly identify a function that is almost certainly RC4 at 0x40106C. The two loops iterating 256 times gives us a big hint. Working through this function, we can confirm it is in fact RC4.
Figure 2 RC4 function
We also identify a very small function which starts by loading fs:0x30, which should get a reverser’s attention fairly quickly. For those unaware, fs:0x30 points to the Process Environment Block (PEB), which holds a wealth of information. This function in question is specifically looking at the PEB’s LoaderData offset, which holds information about the loaded modules in the process. We then get the third loaded module, which is kernel32.dll, and grab this DLL’s base address (offset 0x10). This function is essentially grabbing the base address of kernel32.dll, which is most likely going to be used to load further functions.
Figure 3 Function getting kernel32 base address
We continue to identify yet another function that appears to be hashing data, as evident by the ROR13 call.
Figure 4 Possibly hashing function
At this point, let’s start stepping through our shellcode in a debugger. We quickly see multiple calls to our function that got kernel32’s base address, followed by another function that takes this base address and a DWORD as arguments. Looking through this function we see it walking through all of kernel32’s exported functions, hashing the name, and comparing it against the provided DWORD. This is a simple shellcode trick that will allow attackers to obfuscate what functions are being loaded by the malware when viewed statically. There are a few ways we can approach this. We can debug the code and rename as we encounter them. Alternatively, we can simply search for the hashes on Google. Since the ROR13 technique is so common, there are many places online that have documented these hashes, like this one.
After getting over this minor hurdle we can start to see what the code is doing to understand what it’s looking for. Looking at the code in detail, we can see that it’s building a buffer of 54 bytes and attempting to decode it against a key that is generated using RC4. In the event the key starts with ‘PAN{‘, it will display it in a messagebox dialog window.
The key is generated using a number of variables that are pulled from the machine it is running on. The first four bytes of the key are a static value of ‘b00!’. Following this, the code looks for the following data:
Current month plus 0x2D
Current day plus 0x5E
Current hour plus 0x42
The operating system major version plus 0x3C
The operating system minor version plus 0x3F
The isDebugged flag, which is pulled from the PEB, plus 0x69
The language version plus 0x5E
These values together give us a key that is eleven bytes long. With only that information, it would be very difficult to brute force. However, since we know how each byte in the key is generated, we can limit our key space for the brute force and hopefully determine what the malware is looking for.
Knowing that there are only 12 months in a year, we can assume the first generated byte is in the range between 1 and 12. Similarly, there are a maximum of 31 days in a month, giving our second byte a range of 1 to 31. We continue this pattern on the rest of the bytes in the RC4 key. Most people seemed to have the most trouble limiting the key space on the operating system versions and the language version. Fortunately, there are very few legitimate operating system (OS) versions overall. The major OS version will have a value of either 5, 6, or 10. The minor OS version will have a value of 0, 1, 2, or 3.
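Multiplying out these ranges (together with the debug flag and the language values derived from the binary) shows why the reduced search is tractable:

```python
# Candidate values per key byte, as derived from the binary.
months, days, hours = 12, 31, 24
os_majors, os_minors = 3, 4      # {5, 6, 10} and {0, 1, 2, 3}
dbg_flags, langs = 2, 6          # isDebugged bit; language identifiers
keyspace = months * days * hours * os_majors * os_minors * dbg_flags * langs
print(keyspace)  # 1285632 candidate keys
```

Just under 1.3 million candidate keys is trivial to brute force, even with a Python RC4 implementation.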
For the language version, there is a check early on in the execution flow where the result of GetUserDefaultUILanguage has its primary language identifier verified to be 0x0, or LANG_NEUTRAL. Knowing this, we can limit the possibilities to the values 0x0, 0x04, 0x08, 0x0c, 0x10, or 0x14.
Using all of this information, we can write a brute-forcing script.
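A minimal brute-forcer along these lines can be sketched in Python. The `rc4` and `brute_force` names are our own, and the `ciphertext` argument stands in for the 54-byte buffer extracted from the sample; counting both possible values of the debugged flag, the key space comes to roughly 1.3 million candidates, small enough to exhaust in minutes:

```python
import itertools

def rc4(key, data):
    # Standard RC4: key-scheduling algorithm (KSA) followed by the
    # pseudo-random generation algorithm (PRGA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def brute_force(ciphertext):
    # Candidate ranges for each generated value, before the constants
    # described above are added
    months = range(1, 13)
    days = range(1, 32)
    hours = range(24)
    os_major = (5, 6, 10)
    os_minor = (0, 1, 2, 3)
    is_debugged = (0, 1)
    lang = (0x0, 0x04, 0x08, 0x0C, 0x10, 0x14)
    for m, d, h, maj, mnr, dbg, lng in itertools.product(
            months, days, hours, os_major, os_minor, is_debugged, lang):
        # Static 'b00!' prefix plus the seven derived bytes
        key = b'b00!' + bytes(((m + 0x2D) & 0xFF, (d + 0x5E) & 0xFF,
                               (h + 0x42) & 0xFF, (maj + 0x3C) & 0xFF,
                               (mnr + 0x3F) & 0xFF, (dbg + 0x69) & 0xFF,
                               (lng + 0x5E) & 0xFF))
        plaintext = rc4(key, ciphertext)
        if plaintext.startswith(b'PAN{'):
            return key, plaintext
    return None
```

Since only the first four decrypted bytes need to match, decrypting just the head of the buffer for each candidate would speed this up further.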
Since then, we have continued tracking this threat using Palo Alto Networks AutoFocus and discovered more details of the attacks, including target information. We've seen examples of this attack campaign, which we've named “MILE TEA” (MIcrass Logedrut Elirks TEA), appearing as early as 2011, and it has since expanded its scope of targets. It involves multiple malware families and often tricks targets by sending purported flight e-tickets as email attachments. The identified targets include three separate Japanese trading companies, a Japanese petroleum company, a mobile phone organization based in Japan, the Beijing office of a public organization of Japan, and a government agency in Taiwan.
Attack Overview
Figure 1 shows the number of attacks considered part of the MILE TEA campaign since 2011. As we can see, the overall volume of threats is small.
Figure 1 Number of threats used in the attack campaign
In the first three years, most of the reported attacks were from Taiwan. We saw infections in a few other countries in Asia, but the numbers were minuscule. In mid-2013, the target base shifted to Japan, and since 2015 most of the reported attacks have come from Japan.
Figure 2 Reports by countries
The primary infection vector is a spear phishing email with a malicious attachment. Although we collected several document-based exploit files (RTF, XLS, and PDF) in this attack campaign, most of the attachments were executable files; interestingly, these are custom malware installers. Attackers often use self-extracting executable files or existing installer packages to reduce development costs when they need to drop multiple files. In this campaign, however, the attacker group created its own installer program with the following features:
Windows executable with folder icon
Creates a directory with a pre-determined name in the same path as the installer
Copies decoy files into the created directory
Installs a batch file and malware in the Temp directory
Executes a batch script to delete the installer
Figure 3 shows examples of the custom installer and its different folder icons.
Figure 3 Custom installers with the folder icon
The use of flight e-tickets as phishing lures has been seen repeatedly over a number of years. The following is a list of malicious attachment samples that use this technique; it is the most prevalent lure this threat actor uses to entice targets in this campaign.
Table 1 Samples of malicious attachments masquerading as E-Ticket
Malware
In the MILE TEA campaign, the actor uses the following three malware families for the initial infection via the custom installer. The primary purpose of these families is to establish a bridgehead: collecting system information and downloading additional malware from a remote server.
Malware | Executable Type | Cipher | C2 address from Blog
Elirks | PE, PE64, DLL | TEA, AES | Yes
Micrass | PE | TEA | No
Logedrut | PE, MSIL | DES | Yes
Table 2 Malware characteristics
While many security vendors classify these samples under different malware family names, they share functionality, code, and infrastructure, leading us to conclude that they in fact belong to the three families described above.
Functionality – Blog Access
As described in the previous blog post, one of the unique features of Elirks is that it retrieves a command and control (C2) address from a public-facing blog service. When so configured, the malware accesses a predetermined blog page, discovers a specific string, decodes it with Base64, and decrypts it using the Tiny Encryption Algorithm (TEA) cipher. The same functionality is found in Logedrut; however, instead of the TEA cipher, it uses DES.
A sample of Logedrut (afe57a51c5b0e37df32282c41da1fdfa416bbd9f32fa94b8229d6f2cc2216486) accesses a free blog service hosted in Japan and reads the following article posted by the threat actor.
Figure 4 Encoded C2 address posted by attacker
The routine called GetAddressByBlog() in Logedrut looks for text between two pre-defined strings. In this particular case, the malware sample looks for text between “doctor fish” and “sech yamatala”. The malware determines the encoded text is “pKuBzxxnCEeN2CWLAu8tj3r9WJKqblE+” and proceeds to handle it using the following function.
Figure 5 Code finding encoded C2 address from blog
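The marker search itself is straightforward; a Python equivalent of this extraction step (our own sketch, not the malware's actual code) could look like this:

```python
def get_address_by_blog(page, start_marker='doctor fish',
                        end_marker='sech yamatala'):
    """Extract the encoded C2 string between two pre-defined markers,
    mirroring Logedrut's GetAddressByBlog() routine."""
    start = page.find(start_marker)
    if start == -1:
        return None
    start += len(start_marker)
    end = page.find(end_marker, start)
    if end == -1:
        return None
    # Strip surrounding whitespace from the blog text
    return page[start:end].strip()
```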
This code deciphers the string with Base64 and DES. So far, all Logedrut samples use exactly the same key, 1q2w3e4r, for decryption. The following Python code can be used to decode the C2 address.
Elirks and Micrass employ exactly the same TEA cipher. TEA is a block cipher that operates on 64 bits (8 bytes) of data at a time to encrypt and decrypt. The author of the code added an extra cipher operation, XORing the data when the final block is less than 64 bits. For example, if the encrypted data is 248 bits (31 bytes) long, the code in both malware samples decrypts the first three blocks (64 x 3 = 192 bits) with TEA. The final block is only 56 bits (248 – 192 = 56), so the code uses a simple XOR operation against the remaining data. This supplement to TEA has not been widely used, and all Elirks and Micrass samples use the same static key (2D 4E 51 67 D5 52 3B 75) for the XOR operation. Given these similarities, we can conclude that the author of both families may be the same, or may have access to the same source code.
Figure 6 TEA with XOR Cipher in Elirks and Micrass
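To make the scheme concrete, here is a sketch of the combined routine in Python. The 32-round schedule is standard TEA; the little-endian word order is our assumption based on typical x86 implementations, and `tea_encrypt_block` is included only to make the sketch self-checking:

```python
import struct

DELTA = 0x9E3779B9  # standard TEA magic constant
# Static XOR key shared by all Elirks and Micrass samples
XOR_KEY = bytes([0x2D, 0x4E, 0x51, 0x67, 0xD5, 0x52, 0x3B, 0x75])

def tea_encrypt_block(block, key):
    """Standard TEA encryption of one 64-bit block with a 128-bit key."""
    v0, v1 = struct.unpack('<2I', block)
    k = struct.unpack('<4I', key)
    total = 0
    for _ in range(32):
        total = (total + DELTA) & 0xFFFFFFFF
        v0 = (v0 + (((v1 << 4) + k[0]) ^ (v1 + total)
                    ^ ((v1 >> 5) + k[1]))) & 0xFFFFFFFF
        v1 = (v1 + (((v0 << 4) + k[2]) ^ (v0 + total)
                    ^ ((v0 >> 5) + k[3]))) & 0xFFFFFFFF
    return struct.pack('<2I', v0, v1)

def tea_decrypt_block(block, key):
    """Standard TEA decryption of one 64-bit block with a 128-bit key."""
    v0, v1 = struct.unpack('<2I', block)
    k = struct.unpack('<4I', key)
    total = (DELTA * 32) & 0xFFFFFFFF
    for _ in range(32):
        v1 = (v1 - (((v0 << 4) + k[2]) ^ (v0 + total)
                    ^ ((v0 >> 5) + k[3]))) & 0xFFFFFFFF
        v0 = (v0 - (((v1 << 4) + k[0]) ^ (v1 + total)
                    ^ ((v1 >> 5) + k[1]))) & 0xFFFFFFFF
        total = (total - DELTA) & 0xFFFFFFFF
    return struct.pack('<2I', v0, v1)

def decrypt(data, key):
    """TEA-decrypt full 8-byte blocks, then XOR any trailing partial
    block with the static key, as seen in Elirks and Micrass."""
    out = bytearray()
    full = len(data) // 8 * 8
    for i in range(0, full, 8):
        out += tea_decrypt_block(data[i:i + 8], key)
    tail = data[full:]
    out += bytes(b ^ k for b, k in zip(tail, XOR_KEY))
    return bytes(out)
```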
Infrastructure – C2 Servers
Based on our analysis, only a handful of samples share the same infrastructure directly. The threat actors carefully minimize reuse of C2 domains and IP addresses among their malware samples, and yet they prefer servers located in Hong Kong no matter where the target resides.
Figure 7 Location of C2 servers
Target Analysis
Identifying targets from spear-phishing emails
We found a spear phishing email sent to a government agency in Taiwan in March 2015. The sender masquerades as an airline company, and the RAR archive attachment contains a custom installer named Ticket.exe that drops Ticket.doc and the Micrass malware.
Figure 8 Spear-phishing email sent to an agency in Taiwan
During the analysis of the email, we came across an article in a Taiwanese newspaper from February 2014 that alerted the public to a similar, widely distributed email message containing a malicious attachment. The only difference between the email message in Figure 8 and the one in the news article was the date: the adversary reused an email message from more than a year earlier.
Identifying targets from decoy files
The most interesting part of this attack campaign is that, since early 2015, the threat actor has been using documents stolen from previously compromised organizations to perform additional attacks. These documents are not publicly available, nor do they appear to have been created from scratch by the attacker. Because they contain sensitive data tied to specific businesses, it is unlikely that a third party could have crafted them.
The following figure shows the decoy file installed by a sample identified in early April 2015. The file is a weekly report created at the end of March 2015 by a salesperson at a Japanese trading company, and it includes sensitive information specific to that business.
Figure 9 Weekly report from a Japanese trading company
The properties identified within the document indicate that the company name matches the content, and the person who last modified it is the same individual named in the document itself. The file therefore appears legitimate, and it is very unlikely that this document would ever be made publicly available. The threat actor almost certainly stole this document soon after it was created, and reused it as the decoy for the next target within a week of the theft.
Figure 10 Property of the decoy document
Another installer, found in Japan in May 2015, also contained sensitive information. The decoy appears to be a draft version of a legitimate contract addendum between the Australia-based subsidiary of a Japanese petroleum company and a China-based company. The document provides details of the deal, including price, and contains numerous tracked changes by what appear to be two Japanese-speaking individuals. Based on an official personnel-change announcement from 2013, we have confirmed that one of those individuals was a manager of an overseas project at the parent company in Japan. This file, too, was likely stolen from a target organization and used as a decoy for the next attack.
Figure 11 Contract addendum decoy file
In addition to those examples, we found the following decoy files that were likely stolen from previously compromised organizations.
Organization | Type of document
Beijing office of a public organization of Japan | Budget report
Another trading company in Japan | Internal investigation document
Mobile phone organization in Japan | Inventory of new smartphones
Table 3 Potential sources of other decoy files
We cannot confirm whether those files were stolen as part of the MILE TEA campaign. Either way, it is difficult to imagine that the threat actor would send these internal documents to entirely different organizations or industries. One plausible explanation is that the threat actors target different people or departments within the same organization or industry.
Identifying targets from malware
So far, we have described two trading companies in Japan that were possibly targeted. In addition to these two companies, a third Japanese company may be involved in the attack campaign as well. We identified a sample of Logedrut capable of communicating with its C2 through an internal proxy server in the compromised organization. The sample contains an internal proxy address belonging to a trading company in Japan, as seen in String7 in the image below; the sample was therefore specially crafted for that specific enterprise.
Figure 12 Internal proxy address in Logedrut
Conclusion
MILE TEA is a five-year-long targeted attack campaign focused on businesses and government agencies in Asia Pacific and Japan. The threat actor behind it maintains and uses multiple malware families, along with a custom installer. The actor is interested in organizations that conduct business in multiple countries: the targeted trading companies cover an immensely broad range of business, from commodity products to aviation, around the world, and another possible target is a Japanese petroleum company with multiple overseas offices and subsidiaries. A public organization in Japan and a government agency in Taiwan were also targeted.
Palo Alto Networks customers are protected from this threat in the following ways:
WildFire accurately identifies all malware samples related to this operation as malicious.
Domains used by this operation have been flagged as malicious in Threat Prevention.
AutoFocus users can view malware related to this attack using the “Micrass”, “Elirks”, and “Logedrut” tags.
Indicators of Compromise
Note: We omitted some hashes containing potentially stolen documents from the compromised organization.