This blog post is the first in a two-part series describing how VMRay Analyzer’s Intelligent Monitoring capabilities remove the noise from malware analysis.
In dealing with potentially malicious files, incident responders and IT security teams are swamped with information in the form of log files, reports, alerts, and notifications. As a result, it is critical for security products such as network sandboxes (also known as Automated Malware Analysis, or AMA, solutions) to provide focused, high-signal log files and reports.
Most commercial AMA solutions (or sandboxes) generate a massive amount of 'noise', i.e. irrelevant information, in their reports. To illustrate this point, we submitted a benign sample (a blank Word document) to three sandboxes in addition to VMRay Analyzer. The results are shown in Table 1 below.
Focused malware analysis reports and logs not only save Incident Responders and Malware Analysts time and effort but can be the difference between accurate, efficient security operations and information overload. In this blog post, we will provide insights into VMRay Analyzer’s intelligent monitoring process and explain how it helps DFIR (Digital Forensics and Incident Response) teams to:
- Generate concise and focused reports and logs for efficient manual analysis
- Apply Machine Learning algorithms with ease
- Reduce false alerts and improve automated detection efficacy
- Apply pattern matching algorithms efficiently to log files
- Reduce storage requirements
- Improve malware detection performance and scalability
The Hook-Based Monitoring Approach
Applications use OS APIs to access system resources such as files, processes, network information, the registry, and other areas. Most commercial sandboxes monitor the behavior of a sample using a technique called hooking, in which they intercept function calls to these APIs. When an application calls a hooked function, execution is detoured to a different location where customized code (the hook function) resides. The hook performs its own operations, such as logging the call and its parameters, and then transfers control back to the original API function. Hooks can be placed in different API layers, and it is also possible to place them inside the OS kernel.
However, there are many challenges associated with this approach to monitoring.
Challenge 1: Where to Place Hooks?
As mentioned earlier, the execution flow can be intercepted by hooks either inside the user process, in one or more layers of the OS API, or inside the OS kernel. However, a single operation can usually be performed in so many different ways that intercepting all the associated APIs or system calls becomes extremely difficult. Figure 3 highlights this challenge by showing the various function calls associated with creating a new process.
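The layering problem can be illustrated with a toy model of the process-creation call chain. The function names below mirror real Windows APIs, but the bodies are stand-ins: a hook placed on only one public entry point misses every other route down to the same system call.

```python
# Toy model of layered process-creation APIs (names mirror Windows
# functions; bodies are illustrative stand-ins).
events = []

def NtCreateUserProcess(image):      # lowest level: the system call
    events.append(("syscall", image))

def CreateProcessInternalW(image):   # internal routine shared by callers
    NtCreateUserProcess(image)

def CreateProcessW(image):           # public wide-char API
    CreateProcessInternalW(image)

def CreateProcessA(image):           # public ANSI API
    CreateProcessW(image)

def ShellExecuteW(image):            # a completely different entry point
    CreateProcessInternalW(image)

# A hook placed only on CreateProcessA misses every other route:
hooked_calls = []
_orig = CreateProcessA
def CreateProcessA(image):
    hooked_calls.append(image)       # log the call...
    _orig(image)                     # ...and delegate to the original

CreateProcessA("a.exe")   # seen by the hook
ShellExecuteW("b.exe")    # reaches the syscall but bypasses the hook
```

Both processes are created (`events` records two syscalls), yet the hook only saw one of them, which is why sandboxes are pushed toward lower-level hooks.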
We’ve also published an analysis report in which malware attempts to ‘blind the monitor’, i.e. to evade analysis through illegitimate API usage. This explains why most commercial sandboxes rely on lower-level hooks, which in turn generate more noise, as we will see later in this blog post.
Challenge 2: Handling Noise
Another significant challenge is dealing with the noise generated by irrelevant calls to the hooked functions. During execution of the sample, the hooked functions may be invoked many times, not just by the sample under analysis but also by OS internal threads, as shown in Figure 4, and by host applications such as the browser or MS Office. Since we are only interested in the operations performed directly by the sample’s code, and not in OS runtime operations or the activity of the browser or Office applications, the result is a log file containing large sections of irrelevant entries.
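To make the filtering burden concrete, here is a sketch of post-processing such a noisy hook log. The field names and log format are invented for illustration; the point is that hook-based sandboxes must filter after the fact, whereas the entries from other threads and host applications were never worth recording in the first place.

```python
# Sketch: a hook log contains entries from many callers; only calls
# originating in the sample's own code are relevant. The log format
# and names are illustrative, not a real sandbox schema.
raw_log = [
    {"caller": "sample.exe",       "api": "CreateFileW"},
    {"caller": "os_worker_thread", "api": "NtQuerySystemTime"},
    {"caller": "winword.exe",      "api": "RegQueryValueW"},
    {"caller": "sample.exe",       "api": "WriteFile"},
]

def relevant(entries, sample="sample.exe"):
    """Keep only calls made directly by the sample under analysis."""
    return [e for e in entries if e["caller"] == sample]

focused = relevant(raw_log)
# Half of this log was noise from the OS and the Office host process.
```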
Challenge 3: Limited Visibility
Often, calls may not reach the hook at all, leaving the sandbox with limited visibility into the behavior of the sample under analysis. Conversely, even when the hook is invoked, the sandbox may record far more information about the original function call than it needs to; this is sometimes called the avalanche effect. For example, a ‘Download File’ operation can trigger dozens of low-level API calls, and capturing all of them results in unnecessary information overload. Both scenarios are illustrated in Figure 4. Either way, the result is a suboptimal log file containing either too little information or a large quantity of unnecessary detail.
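The avalanche effect can be shown with a toy trace: one high-level operation fans out into many low-level calls, and collapsing the burst back into a single record is what keeps the log readable. The API names and begin/end markers below are invented for illustration.

```python
# Sketch of the "avalanche effect": one high-level 'Download File'
# operation fans out into many low-level API calls. Collapsing the
# burst into a single high-level record avoids the overload.
low_level_trace = [
    ("DownloadFile",     "begin"),
    ("InternetOpenW",     None),
    ("InternetConnectW",  None),
    ("HttpSendRequestW",  None),
    ("InternetReadFile",  None),
    ("InternetReadFile",  None),
    ("CreateFileW",       None),
    ("WriteFile",         None),
    ("DownloadFile",     "end"),
]

def collapse(trace):
    """Replace everything between begin/end markers with one record."""
    out, inside = [], None
    for api, marker in trace:
        if marker == "begin":
            inside = api
        elif marker == "end" and api == inside:
            out.append(api)          # one summary entry for the burst
            inside = None
        elif inside is None:
            out.append(api)          # calls outside any burst pass through
    return out

summary = collapse(low_level_trace)
```

Nine raw entries collapse into one, and no semantic information about the sample's intent is lost; a kernel-level hook would have recorded all nine.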
The Full System Emulation Monitoring Approach
Sandboxes using the full system monitoring approach can see every machine instruction executed. However, the downside is that this can often lead to an information overload.
In addition, sandboxes that use this monitoring technique tend to be very slow since there is a significant overhead associated with CPU emulation. The benefit of having full control over the environment is negated by the challenge of having to deal with too much information (since every single machine instruction is monitored). While emulators, in theory, provide fine-grained monitoring, this is not usable in practice for the reasons mentioned above.
Intelligent Monitoring using an Agentless Hypervisor-based Approach
VMRay Analyzer brings an agentless approach to dynamic malware analysis. Embedded in the hypervisor, VMRay Analyzer monitors and analyzes malware behavior from that vantage point. There are no agents or hooks built into the system.
By leveraging the hardware virtualization extensions of modern CPUs, VMRay Analyzer partitions the guest VM’s memory into two sections: one containing trusted code (code belonging to the OS or to applications such as Internet Explorer and MS Office) and one containing untrusted code (the malware sample, downloaded code, injected shellcode). The analyzer can switch between these two views at any time, allowing it to manage transitions efficiently. With this split in place, we can efficiently detect control flow transitions between specific parts of memory, i.e. between particular user or kernel modules. By selecting which sections of memory to watch, we can monitor the behavior of the system with different scope and variable granularity.
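The effect of the trusted/untrusted split can be modeled as a classifier over control-flow transitions. The address ranges and branch list below are invented, and the real mechanism is hardware-assisted (transitions trap to the hypervisor rather than being looked up in a table), but the selection logic is the same: only flow leaving untrusted code is of interest.

```python
# Sketch: classify control-flow transitions between trusted and
# untrusted memory regions. Addresses and ranges are invented; the
# real mechanism uses hardware virtualization, not address lookups.
TRUSTED   = [(0x70000000, 0x7fffffff)]  # OS / Office / browser code
UNTRUSTED = [(0x00400000, 0x004fffff)]  # sample, shellcode, downloads

def region(addr):
    for lo, hi in TRUSTED:
        if lo <= addr <= hi:
            return "trusted"
    for lo, hi in UNTRUSTED:
        if lo <= addr <= hi:
            return "untrusted"
    return "unknown"

def monitored_transitions(branches):
    """Report only transitions that leave untrusted code, i.e. the
    sample calling into the OS; trusted-to-trusted flow is invisible."""
    return [(src, dst) for src, dst in branches
            if region(src) == "untrusted" and region(dst) == "trusted"]

branches = [
    (0x00401000, 0x70002000),  # sample calls an OS API: monitored
    (0x70002000, 0x70003000),  # OS-internal flow: never observed
    (0x70003000, 0x00401050),  # return into the sample: not logged here
]
hits = monitored_transitions(branches)
```

Because trusted-to-trusted flow is never observed in the first place, the noise that hook-based monitors must filter out simply never enters the log.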
VMRay Analyzer’s hypervisor-based monitoring approach provides total visibility into the behavior of the sample under analysis and enables monitoring of only those parts of the system relevant to the analysis. This makes filtering of the analysis output unnecessary, as the side effects of benign applications are never monitored in the first place.
Unlike other approaches, VMRay automatically adjusts to the optimal monitoring granularity. Regardless of whether the malware makes an API call, uses special CPU instructions to jump directly into the kernel, or uses higher-level constructs such as COM objects, VMRay always intercepts at the highest semantic level possible, so no semantic information is lost. Just as importantly, no unnecessary data is logged: no sub-function calls and no recursive function calls. A single high-level API call can result in dozens of kernel transitions, all of which would be intercepted when monitoring at the kernel level. This gives VMRay its superior performance and produces reports with the highest density of relevant information.
For a deeper look into VMRay Analyzer’s hypervisor-based approach to monitoring, read our Technology Whitepaper.