The Injected Code
We will start our journey with a presentation of the injected code we will focus on; bear in mind that there are thousands of different variations of code injected into legitimate Web sites out there. We are going to take a look at one specific injection -- a variation that targeted tens of Web sites. ThreatSeeker has found this injected piece of code on a selection of legitimate Web sites, ranging from government and financial sites to sports and shopping sites.
The next snapshot shows some of the varied site categories that were compromised, and the code injected into them:
The injected code is very lightly obfuscated; it uses the document.write function on a sequence of Unicode escape sequences. Usually, the output of injected obfuscated code is an IFrame to a payload site. We can easily de-obfuscate it with SpiderMonkey by replacing the document.write function with the print function. It is also much easier if you have FireBug installed in your Mozilla browser, as FireBug presents the output automatically for these kinds of low-level obfuscations.
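The de-obfuscation step can be sketched in a few lines. This assumes, as described above, that the injected script passes a string of "\uXXXX" escape sequences to document.write; decoding those escapes (the equivalent of swapping document.write for print under SpiderMonkey) reveals the hidden HTML. The sample string below is hypothetical -- the real injections decode to an IFrame pointing at the payload site.

```python
def deobfuscate(js_escaped: str) -> str:
    # JavaScript "\uXXXX" escapes decode the same way Python's
    # unicode_escape codec handles them.
    return js_escaped.encode("ascii").decode("unicode_escape")

# Hypothetical obfuscated snippet, for illustration only:
obfuscated = r"\u003c\u0069frame src=hxxp://malicious.example/ \u003e"
print(deobfuscate(obfuscated))  # reveals an <iframe ...> tag
```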
Let’s see what the injected code actually does:
The obfuscated code is, as suspected, an IFrame. The URL the IFrame leads to is the payload site. The site has the keyword "Google" in its hostname, trying to look legitimate. The next step is to take a look at this payload site.
The Payload Site
Let’s have a look at the site’s source code:
This looks like another very light obfuscation, using the same method we saw with the injected code earlier. Let's de-obfuscate it and see what we get:
It appears the site runs a Java Applet, which is essentially code loaded from a class file. This file is part of the JAR archive you can see in the code; we can fetch the JAR archive from the same directory, and the class file will be inside it. Class files contain compiled Java byte-code, and it is very easy to decompile byte-code back to readable source code.
A range of Java decompiling programs exist out there. We will use JAD to decompile the class file:
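Before feeding a file to JAD, it is worth confirming that what came out of the JAR really is compiled Java byte-code. Every class file starts with the magic number 0xCAFEBABE followed by minor and major version fields; the sketch below checks those. The header bytes shown are hypothetical (major version 48 corresponds to Java 1.4).

```python
import struct

def inspect_class_file(data: bytes):
    # Class file header: u4 magic, u2 minor_version, u2 major_version,
    # all big-endian per the JVM specification.
    magic, minor, major = struct.unpack(">IHH", data[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a Java class file")
    return major, minor

# Hypothetical header bytes for illustration:
header = bytes.fromhex("CAFEBABE00000030")
print(inspect_class_file(header))  # (48, 0)
```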
Going over the decompiled code (the snapshot shows only part of it) leaves no room for doubt; this code downloads a file from a different Web site to the system and executes it. In order for that to work, first you need to have Java installed on your system. Second, you have to approve the Java Applet.
Java is designed with security in mind: it doesn't allow applets to download and run files arbitrarily, which is exploit-like behavior. So, for a Java Applet's code to be trusted, it needs a digital signature. If there is none, as in our case, a dialog box pops up asking whether we trust this Java Applet's code:
This is the point where the "user education demon" comes a-knocking, asking if there is anybody home. Most users would say "I would never click on that!", but recent studies show that users do click on spontaneous pop-ups.
We reported on that study in our TMTW blog for September this year (check the Security-Trends section). You can also see how the bad guys set the applet's publisher name to "Google Inc" and its name to "GoogleTrax", aiming to boost the attack's success rate by trading on Google's reputation and large software user base.
We see a LOT of social-engineering attacks that depend on user interaction to succeed and lure users into running malicious binaries. Just this week, we alerted on such an attack with the "U.S. Presidential Malware".
Also, these kinds of Java-based attacks become far more dangerous if your browser has a design flaw, like the one reported in Google's Chrome browser, which allowed Java Applets to download and run files without any warning.
Going back to our dialog box: if the user confirms, a malicious file named GoogleTrax.exe is downloaded and run on the user's system. So, another door opens and we enter the next domain in our journey. It is time for some malcode analysis!
Malcode Analysis: Under The Hood
Before reading further, we invite readers who haven't already done so to read a previous blog Nicolas wrote about two years ago. As he put it: "Indeed, as I was analyzing this sample, I recognized the coding style, and the features match perfectly. Even the keylogger logs are as ugly as before ;-)"
The sample GoogleTrax.exe looks like a regular Visual Basic application, compiled to P-Code instead of native code. This makes the code a little harder to read, as it does not use Intel assembly but Visual Basic pseudo-code: an attempt to obfuscate what the sample does. Using a free tool such as P32DASM, one can read the Visual Basic P-Code; although you don't get fully decompiled code, you can read the actual VB assembly. If you are familiar with normal assembly and stack machines, it makes quite a lot of sense at first glance.
First, it tries to detect sandboxes, including, but not limited to: Anubis, CW Sandbox, and SandboxIE. Here are a few screenshots of the P-Code with the detection code:
This one uses a blacklisted user name, and a quick online search found one reference to it, which seems to be linked to CW Sandbox.
This detection uses a blacklisted Windows Product ID, probably taken from a sandbox machine running Windows. If the Product ID matches, the sample simply exits.
This one looks for a specific DLL; if it is loaded, the sample exits. Judging by the DLL's name, it is probably detecting SandboxIE. The malware uses a few other tricks before it moves on to its "unpacking" routine.
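The three evasion checks above can be summarized in one routine. The blacklist values below are placeholders: the real sample hard-codes specific user names, a Windows Product ID, and a DLL name (SbieDll.dll belongs to SandboxIE; the other entries are hypothetical).

```python
BLACKLISTED_USERS = {"currentuser", "sandbox"}         # hypothetical values
BLACKLISTED_PRODUCT_IDS = {"00000-000-0000000-00000"}  # hypothetical value
SANDBOX_DLLS = {"sbiedll.dll"}                         # SandboxIE's DLL

def should_exit(user: str, product_id: str, loaded_dlls: set) -> bool:
    """Return True if any sandbox indicator matches, mirroring the
    sample's behavior of simply exiting when one is detected."""
    if user.lower() in BLACKLISTED_USERS:
        return True
    if product_id in BLACKLISTED_PRODUCT_IDS:
        return True
    if {d.lower() for d in loaded_dlls} & SANDBOX_DLLS:
        return True
    return False
```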
Similar to what was presented at Ottawa, the malware creates a child process, unmaps its memory using ZwUnmapViewOfSection, and then reallocates memory to hold the actual malicious code that takes over. This technique isn't new and has been used in malicious software for a long time, but it seems to be very widely used nowadays:
Anyway, the newly mapped code takes over, and it is no longer Visual Basic: it seems to be hand-written in assembly, at least in large part. Eventually, it creates a mutex and tries to open the "explorer" process. If it succeeds, it injects chunks of code there. In total, 12 blocks of code are injected into explorer.exe, the last one being the in-memory IAT (Import Address Table).
In the table below, you can see all the injected stubs. The first address is the allocated page in Explorer, and the second is the corresponding address in our malicious application.
Note: all the remote process addresses may differ, depending on the computer, the process that gets injected, etc. We almost always use a goat (sacrificial) process for code injection, because once you attach a debugger to explorer.exe, everything becomes unstable and slow.
CreateRemoteThread executes the remote thread at address 0x8E0000 (again, this depends on the injected process), which maps to 0x401AB8 in the injector. Using this information, you can do static analysis with IDA Pro. We always combine static analysis with dynamic analysis: we analyze as much as possible in IDA Pro until it becomes too time-consuming, and then mix the dynamic and static information.
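Converting between a remote (injected) address and the matching local address in the injector file is a simple rebase once one pair is known. Using the pair stated above (remote thread entry 0x8E0000 maps to 0x401AB8 in the injector):

```python
def remote_to_local(remote_addr: int,
                    remote_base: int = 0x8E0000,
                    local_base: int = 0x401AB8) -> int:
    # Same offset within the page, different base address.
    return local_base + (remote_addr - remote_base)

# An instruction at remote 0x8E0010 lives at 0x401AC8 in the injector,
# where it can be examined statically in IDA Pro.
print(hex(remote_to_local(0x8E0010)))  # 0x401ac8
```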
By attaching another debugger instance to your goat process in memory (or to Explorer, if you feel lucky), you can put a breakpoint at 0x8F0000 and resume execution of the injector. It will then execute the remote thread and close. Your injected process will stop in the second debugger instance, and you can continue the analysis from there. The analysis gets a bit confusing at this point, as our target uses threads and performs many different tasks, so we will try to summarize what the malware actually does.
API Functions Resolution
Analyzing this sample takes quite some time because there are no imported functions besides ExitProcess, and a large amount of code is injected into different processes. Every time the Trojan needs to call a function, it resolves it dynamically using shellcode-like routines:
Every time it needs to call or load a function, a 32-bit hash is pushed onto the stack (e.g., push 0593AE7CEh). The ESI register holds the address of important data, such as the IAT that was injected earlier, as well as various function pointers. For instance, in the screenshot above, [ESI+ABBh] points to the kernel32 ImageBase, which the HomeGetProcAddress function needs in order to resolve the API function, while [ESI+0DDh] is a pointer to the HomeGetProcAddress routine itself.
The hashing technique is very common in shellcode, where it is used mainly to keep the code small. In this file, there is no size limitation; the main purpose is obfuscation. To identify the function being called or resolved, you need to find the function name corresponding to the hash passed as a parameter: there are no plain-text function names in the injected stubs.
All resolved functions are then called via CALL EAX. In the example above, the code resolves GetSystemDirectoryA and then executes it.
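The resolution scheme can be sketched as follows: the resolver walks the export names of the target module and hashes each one until it matches the 32-bit value the caller pushed. The sample's actual hash algorithm isn't shown in the screenshots, so the classic shellcode ROR-13 additive hash stands in here purely as an illustration.

```python
def ror(value: int, count: int, bits: int = 32) -> int:
    # Rotate a 32-bit value right by `count` bits.
    count %= bits
    mask = (1 << bits) - 1
    return ((value >> count) | (value << (bits - count))) & mask

def api_hash(name: str) -> int:
    # Illustrative ROR-13 hash, NOT necessarily the one this sample uses.
    h = 0
    for ch in name:
        h = ror(h, 13)
        h = (h + ord(ch)) & 0xFFFFFFFF
    return h

def resolve(target_hash: int, exports: dict):
    # exports: function name -> address, as read from the PE export table.
    for name, addr in exports.items():
        if api_hash(name) == target_hash:
            return addr
    return None
```

During analysis, the same idea works in reverse: pre-compute the hashes of all exported names in kernel32 and friends, then label each `push <hash>` in the disassembly with the function it resolves.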
NTFS Alternate Data Stream
After this, our sample opens its own file on disk, gets its size, and allocates that amount of memory in order to make a copy of itself in memory. It then tries to delete itself from disk. Right after that, it tries to open "C:\Windows\system32:msupdate.exe".
You probably noticed the ":" between the folder name and the executable. This is an NTFS ADS (Alternate Data Stream), used here to hide the executable from the average computer user: the file won't appear in Windows Explorer, yet it is present on disk. Once the stream is created, the sample compares the path it was executed from with the ADS path. If they don't match, it deletes its current file, making sure it remains on disk only as an ADS. Using a tool like the free anti-rootkit "Gmer", it's possible to see the ADS:
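The relocation logic boils down to a path comparison. The sketch below models the decision only (the real sample does the copying and deletion with Win32 file APIs); the desktop path in the usage line is hypothetical.

```python
ADS_PATH = r"C:\Windows\system32:msupdate.exe"

def plan_actions(current_path: str):
    # If we are already running from the alternate data stream, nothing
    # to do; otherwise copy into the stream and delete the original file.
    if current_path.lower() == ADS_PATH.lower():
        return []
    return ["write_stream", "delete_original"]

print(plan_actions(r"C:\Users\victim\Desktop\GoogleTrax.exe"))
# ['write_stream', 'delete_original']
```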
Inside the Explorer context, the injected code opens the registry key "HKEY_LOCAL_MACHINE\Software\Microsoft\Active Setup\Installed Components\" and enumerates the existing keys. It opens every CLSID it can find and compares the path returned with the path of its ADS. If nothing matches, it creates its own CLSID whose StubPath is the path of the ADS, setting the value to "C:\WINDOWS\system32:msupdate.exe" so that the injector runs and injects explorer.exe again when the machine reboots:
It also adds another registry entry in order to survive a reboot. This one is obvious, probably meant to deceive anyone trying to disinfect the machine: it adds an entry in the Windows\Run key, using the stream as the full path of the application to execute:
This is why it created a mutex earlier: to avoid injecting explorer.exe multiple times, which could eventually lead to a crash or a huge slowdown of the machine. Only one of the auto-starts will actually execute our malicious software.
Then two threads are created. The thread functions, in order, are at 0x930000 and 0x960000, respectively 0x402013 and 0x40255D in the injector (for static analysis). Right after the thread creation, it calls the sub-function at 0x920000, which is 0x401DCC in the injector. Knowing which address in the injector matches each remote page makes things much easier to analyze.
Default Browser Injection
In order to bypass some personal firewalls, another stub is injected into the default browser. The injection is made from explorer.exe, which reads the registry to get the default browser path (the injection is made at 0x920000).
It then looks for the browser process in memory and, if found, injects it. Only one stub is injected into Internet Explorer. The code is copied from 0x8B0000 (in Explorer or the goat process) and injected; the local address in the injector is 0x400437, for static analysis. It also injects the IAT buffer into the browser.
Basically, that stub connects to g[removed]v.s[removed]s.net on a given port, possibly to retrieve raw code to execute. At the time of this writing, the connection fails and nothing is returned. In a previous variant, from my blog post about two years ago, it connected to a remote Web site, fetched raw assembly code, and executed it on the heap. The sample keeps looping until the connection succeeds.
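That retry behavior amounts to a blocking loop around the connection attempt. Sketched below with a pluggable connect function (the real stub uses Winsock against the hard-coded host and port; the delay and return convention here are assumptions):

```python
import time

def fetch_payload(connect, delay: float = 0.0, max_tries=None):
    """Loop until `connect` returns data (the raw code the stub would
    then execute), mimicking the stub's retry-until-success behavior."""
    tries = 0
    while max_tries is None or tries < max_tries:
        tries += 1
        data = connect()
        if data is not None:
            return data
        time.sleep(delay)  # back off before retrying
    return None
```

A goat `connect` that fails twice and then succeeds demonstrates the loop without any real network traffic.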
The first thread eventually calls the SetWindowsHookExA function and sets a "WH_JOURNALRECORD" hook; its hook procedure is at 0x940000, which is 0x4020FD in the injector. This hook can be used to log keyboard and mouse events. The previous variant used a different hook type, along with the GetKeyboardState function.
The second thread checks the registry every 5 seconds. If the auto-start key (Active Setup, Installed Components) is deleted, the thread restores it in order to survive a reboot and prevent regular users and disinfection tools from cleaning the computer. As long as explorer.exe is running in memory, you "cannot" delete the registry key easily. Obviously, hot-patching the malicious code, or pausing that thread, disables the protection: one could simply patch the Sleep() parameter, changing the 5-second delay to infinite, so the malicious code never checks the registry again.
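The watchdog logic is simple enough to model with a dict standing in for the registry. The key name below is a placeholder ({CLSID} is not the real value); the interval defaults to zero here only so the sketch runs instantly, whereas the real sample sleeps 5 seconds per cycle.

```python
import time

AUTOSTART_KEY = r"Software\Microsoft\Active Setup\Installed Components\{CLSID}"
ADS_PATH = r"C:\WINDOWS\system32:msupdate.exe"

def watchdog(registry: dict, cycles: int, interval: float = 0.0):
    # Each cycle, re-create the auto-start entry if a cleanup tool
    # (or a user) removed it.
    for _ in range(cycles):
        if AUTOSTART_KEY not in registry:
            registry[AUTOSTART_KEY] = ADS_PATH  # restore persistence
        time.sleep(interval)  # 5 seconds in the real sample
```

This is also why pausing the thread or patching the Sleep() parameter neutralizes the protection: the loop body never runs again, and the key stays deleted.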
In this blog, we took a look at the nature of a complete attack. The specific injection we focused on had a limited reach, but the obfuscation method presented, although very basic, is massively and widely used by injected code in the wild: ThreatSeeker has been finding thousands of legitimate Web sites injected with code that uses this light obfuscation method.
We also observed the use of a Java Applet to execute the attack from the Web side. This suggests that attackers aren't at all afraid to rely on user interaction to execute their malicious attacks. It also implies that attackers adapt to the latest hot trends that mass audiences are likely to follow, as this attack took timely advantage of a vulnerability in the Google Chrome browser.
From the malcode analysis perspective, we clearly saw that malcode authors don't bother changing their code much. This is very logical in this day and age, as custom packers allow them to avoid AV detection and preserve old code easily.
The critical point we would like to highlight is the importance of user education. This is the kind of vulnerability that can't be patched, and it appears to be one of the weakest links in the security chain these days. We hope awareness of security issues will grow, and toward that end, we shall keep working to increase it.
Security Researchers: Elad Sharf, Nicolas Brulez