Spectre and Meltdown are attack methodologies enabled by fundamental processor design principles. In particular, they exploit unwanted side effects of caching, speculative/out-of-order execution, and branch target prediction. These features are part of most modern CPUs (Intel, AMD, ARM) and were widely introduced into production in the 1990s to enhance performance. As a result, the performance of modern microprocessors depends heavily on them, and disabling them would significantly degrade CPU performance. Multiple media outlets have reported potential performance penalties of up to 30% from a combination of software patches and partially disabled out-of-order execution. If these features were deactivated completely, the impact would be far more severe.
The cybersecurity industry often needs to strike a balance between maximum security and operational efficiency. Hence, one must understand how fundamentally important these capabilities are for modern computers. As performance often conflicts with security objectives, researchers have been grappling with these implications for quite some time. In 2013, my co-founder Dr. Ralf Hund, Dr. Thorsten Holz, and I published a paper on our findings called Practical Timing Side Channel Attacks Against Kernel Space ASLR. Back then, it was already apparent that this paradigm within modern CPUs increased the attack surface for threat actors. The paper covered an attack method that leaks kernel space memory layout information. Today, speculative execution has opened many systems to information leakage.
As mentioned above, Spectre and Meltdown are not simply two vulnerabilities or exploits; they are general attack methodologies that can be implemented in many different ways. Hence, there can be no simple signature to detect them universally. Some security vendors recently published detection methods for the different proofs of concept (POCs) that are currently available on the Internet, which may be misleading: these demonstrations are in no way representative of the attack surface created by Spectre and Meltdown.
We understand that creating signatures for POCs is driven by marketing realities. While we believe the real-world value of these signatures is negligible, we will add the relevant YARA rules by default within VMRay Analyzer. What these detections are looking for is either the particular instruction sequence that is part of a given POC or a more generalized version of it.
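To make concrete what such a POC signature amounts to, here is a minimal sketch, our own illustration rather than an actual VMRay rule: a YARA-style match is essentially a byte-pattern search over the sample. The pattern below is a stand-in for a POC's instruction sequence; a real rule would target the exact code emitted by a specific POC binary.

```python
# Illustrative stand-in for a POC's instruction sequence:
# clflush [rcx] (0F AE 39) immediately followed by rdtsc (0F 31).
# This particular pattern is hypothetical, chosen only for the example.
POC_PATTERN = bytes.fromhex("0fae390f31")

def matches_poc(sample: bytes) -> bool:
    """Return True if the hypothetical POC byte sequence occurs in the sample."""
    return POC_PATTERN in sample
```

A sample containing that exact byte sequence matches, but a recompiled or slightly reordered variant of the same attack does not, which is why such signatures say little about the overall attack surface.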
Possible exploits can come in various forms and there is no generic detection for this attack class yet. Current approaches to generic detection of Spectre and Meltdown exploits come with various shortcomings.
Identifying explicit cache flushes (e.g., CLFLUSH).
Downside: CLFLUSH is used for benign purposes as well, and cache lines can also be evicted implicitly by manually accessing the right memory locations, without any explicit flush instruction.
Identifying the training loops for branch target misprediction/poisoning.
Downside: high false-positive rate, because such loops are very hard to differentiate from benign loops.
Detecting timing attacks by identifying sequences of RDTSC and memory reads.
Downside: the same construct is used for many other (benign) purposes as well. In addition, obfuscation can make this very expensive to detect.
Using performance counters to detect abnormal memory usage and/or branch target misprediction (like with ROP and other related attack patterns).
Downside: the required counters are tied to the particular CPU model and not available in many relevant use cases. This is also only a heuristic, not a deterministic indicator.
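To illustrate why the timing-based approach above is so brittle, here is a minimal sketch of such a heuristic, again our own illustration rather than a production detector: flag code in which two RDTSC instructions (opcode 0F 31) occur within a small byte window, suggesting that something in between is being timed at cycle granularity.

```python
RDTSC = b"\x0f\x31"  # encoding of the x86 RDTSC instruction

def looks_like_timing_probe(code: bytes, window: int = 16) -> bool:
    """Heuristic: two RDTSC reads within `window` bytes of each other,
    i.e. a cycle-granularity timing measurement of the code in between."""
    pos = code.find(RDTSC)
    while pos != -1:
        nxt = code.find(RDTSC, pos + len(RDTSC))
        if nxt != -1 and nxt - pos <= window:
            return True
        pos = nxt
    return False

# rdtsc; mov eax, [rsi]; rdtsc -- the classic timed-load construct
probe = b"\x0f\x31" + b"\x8b\x06" + b"\x0f\x31"

# The same construct with 32 padding NOPs inserted between the reads:
# trivial obfuscation that this heuristic already misses.
obfuscated = b"\x0f\x31" + b"\x90" * 32 + b"\x0f\x31"
```

Conversely, any benign micro-benchmark that brackets a loop with two timestamp reads triggers the same heuristic, so both the false-negative and false-positive problems noted above show up even in this toy version.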
At VMRay, our underlying technology detects malicious behavior universally. The current methods used to identify Spectre and Meltdown exploits have significant drawbacks. While we will ship a default YARA rule to detect the public POCs, we will continue to detect the typical malicious behavior required to infect an environment with our agentless sandbox and associated VTI Engine.