Microarchitectural attacks have been all the rage for the past two years, with Meltdown, Spectre, Foreshadow/L1TF, ZombieLoad, and their variants all describing different ways to probe or leak data from a victim. A new attack, published on March 10th by the same research teams that found the previous exploits, turns this principle on its head and allows an attacker to inject their own values into the victim's execution. The injected data can be either instructions or memory addresses, ultimately allowing the attacker to obtain data from the victim. This injection bypasses even stringent secure enclave environments, such as Intel's Software Guard Extensions (SGX), and the researchers state that successful mitigation may result in a slowdown of 2x to 19x for SGX code.

The High Level Overview

The attack is formally known as LVI, short for ‘Load Value Injection’, and has the MITRE reference CVE-2020-0551. The official website for the attack is https://lviattack.eu/. The attack was discovered and reported to Intel on April 4th 2019, and disclosed publicly on March 10th 2020. A second group independently discovered and produced a proof-of-concept for one LVI attack variant in February 2020.

Currently Intel plans to provide mitigations for SGX-class systems; however, non-SGX environments (such as VMs or containers that don’t use SGX) will remain vulnerable. The researchers state that ‘in principle any processor that is vulnerable to Meltdown-type data leakage would also be vulnerable to LVI-style data injection’. The researchers’ focus was primarily on breaking Intel’s SGX protections, and proof-of-concept code is available. Additional funding for the project was provided by ‘generous gifts from Intel, as well as gifts from ARM and AMD’ – one of the researchers involved has stated on social media that some of his research students are at least part-funded by Intel.

Intel was involved in the disclosure and has a security advisory available, listing the issue as a 5.6 MEDIUM on the CVSS severity scale. Intel also lists all the processors affected, spanning Atom, Core, and Xeon parts going as far back as Silvermont and Sandy Bridge, and even including the newest processors, such as Ice Lake (10th Gen)* and the upcoming Tremont Atom core, which isn’t on the market yet.

*The LVI website says that Ice Lake isn’t vulnerable; however, Intel’s guidelines say it is.

*Update: Intel has now updated its documents to say both Ice Lake and Tremont are not affected.

All told, LVI's moderate CVE score is the same as the scores assigned to Meltdown and Spectre back in 2018. This reflects the fact that LVI has a similar risk scope as those earlier exploits, which is to say data disclosure. Though in practice, LVI is perhaps even more niche. Whereas Meltdown and Spectre were moderately complex attacks that could be used against any and all "secure" programs, Intel and the researchers behind LVI are largely painting it as a theoretical attack, primarily useful against SGX in particular.

The practical security aspects are a mixed bag, then. For consumer systems, at least, SGX is rarely used outside of DRM purposes (e.g. 4K Netflix), so the attack isn't likely to upend too much there. Nonetheless, the researchers behind LVI have told ZDNet that the attack could theoretically be delivered via JavaScript, so it could potentially arrive in a drive-by fashion rather than requiring some kind of local code execution. The upshot, at least, is that LVI is already thought to be very hard to pull off, and JavaScript certainly wouldn't make that any easier.

As for enterprise and business users, the potential risk is greater due to both the more widespread use of SGX there, and the use of shared systems (virtualization). Ultimately such concerns are going to be on a per-application/per-environment basis, but in the case of shared systems in particular, the biggest risk is leaking information from another VM, or from a higher privileged user. Enterprises, in turn, are perhaps the best equipped to deal with the threat of LVI, but it comes after Meltdown and Spectre already upended things and hurt system performance.

The Attack

Load Value Injection is a four-stage process (a simplified code sketch follows the list):

  1. The attacker fills a microarchitectural buffer with a chosen value
  2. The attacker induces a fault or assisted load within the victim’s software, redirecting the dataflow so the buffered value is used
  3. The attacker’s value invokes a ‘code gadget’, allowing attacker-controlled instructions to be run
  4. The attacker hides the traces of the attack to stop the processor from detecting it
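
To make these stages a bit more concrete, below is a heavily simplified, hypothetical C sketch of the kind of victim ‘gadget’ an LVI attacker looks for. None of the names come from the paper or from real SGX code; the sketch only shows where the injected value enters (stage 2) and how dependent code then transiently acts on it (stage 3).

```c
#include <stdint.h>
#include <stddef.h>

// Hypothetical victim (e.g. enclave) code; all names are invented.
static const uint8_t secret_table[4096] = {42};   // placeholder 'secrets'
static volatile uint8_t probe_array[256 * 64];    // memory the attacker can later time

void victim_gadget(const size_t *trusted_ptr) {
    // Stage 2: the attacker arranges for this load to fault or need an assist,
    // so the CPU transiently forwards the value planted in stage 1 from a
    // poisoned microarchitectural buffer instead of the real *trusted_ptr.
    size_t index = *trusted_ptr;

    // Stage 3: dependent operations run transiently on the injected value,
    // here encoding a secret byte into the cache as a covert channel.
    uint8_t secret = secret_table[index & 0xFFF];
    (void)probe_array[(size_t)secret * 64];

    // The transient results are discarded once the fault resolves, but the
    // cache footprint above has already revealed the secret to the attacker.
}

int main(void) {
    size_t benign_index = 1;
    victim_gadget(&benign_index);   // runs harmlessly outside of an attack
    return 0;
}
```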

The other recent microarchitectural exploits, such as Spectre, Meltdown, L1TF, and ZombieLoad, are all related to data leaks: they rely on data being leaked or extracted from various buffers that are effectively ‘all-access’ at the microarchitectural level. LVI is different in that it is more of a direct ‘attack’ on the system in order to extract that data. While it means the attacker has to clean up after themselves, the nature of what the attack can do means it can be more dangerous than previous exploits. The difference also means that current mitigations don’t work here, and the research essentially states that Intel’s secure enclave architecture requires significant changes in order to be useful again.

The focus of the attack has been on Intel’s secure enclave technology, known as SGX, due to the nature of the technology. As reported by The Register, it is in fact the nature of SGX that assists the attack – because the enclave’s page tables remain under the control of the untrusted OS, an attacker can alter them to provoke page faults on the victim’s memory loads (point 2 above).

Intel’s Own Analysis

Intel’s own deep dive into the problem explains that:

‘If an adversary can cause a specified victim load to fault, assist, or abort, the adversary may be able to select the data to have forwarded to dependent operations by the faulting/assisting/aborting load.

For certain code sequences, those dependent operations may create a covert channel with data of interest to the adversary. The adversary may then be able to infer the data's value through analyzing the covert channel.’
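
The ‘covert channel’ here is typically the CPU cache: the dependent operation touches a cache line whose address depends on the secret, and the adversary later times accesses to see which line became cached. A generic flush-and-reload style probe, sketched below purely for illustration (this is not Intel’s or the researchers’ code, and the timing threshold is a made-up placeholder), shows how that inference step works:

```c
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   // _mm_clflush, _mm_lfence, __rdtscp

#define SLOTS  256       // one slot per possible byte value
#define STRIDE 64        // one cache line per slot

static volatile uint8_t probe_array[SLOTS * STRIDE];

// Time a single load; a short time means the line was already cached,
// i.e. the victim's dependent operation touched it.
static uint64_t time_access(volatile const uint8_t *addr) {
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;
    _mm_lfence();
    return __rdtscp(&aux) - start;
}

int main(void) {
    // 1. Flush every slot so nothing starts out cached.
    for (int i = 0; i < SLOTS; i++)
        _mm_clflush((const void *)&probe_array[i * STRIDE]);

    // 2. In a real attack the victim would run now and touch one slot;
    //    this benign access stands in for that step.
    (void)probe_array[42 * STRIDE];

    // 3. Reload and time each slot; the fast one reveals the leaked byte.
    for (int i = 0; i < SLOTS; i++)
        if (time_access(&probe_array[i * STRIDE]) < 100)   // rough threshold
            printf("slot %d looks cached\n", i);

    return 0;
}
```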

Intel goes on to say that in a fully trusted environment, this shouldn’t be an issue:

‘Due to the numerous, complex requirements that must be satisfied to implement the LVI method successfully, LVI is not a practical exploit in real-world environments where the OS and VMM are trusted.’

But Intel then states that, because its own SGX solution is the vector for the attack, these requirements aren’t as strict:

‘Because of Intel SGX's strong adversary model, attacks on Intel SGX enclaves loosen some of these requirements. Notably, the strong adversary model of Intel SGX assumes that the OS or VMM may be malicious, and therefore the adversary may manipulate the victim enclave's page tables to cause arbitrary enclave loads to fault or assist.’

Then, to state the obvious, Intel has a line for the ‘if you’re not doing anything wrong, it’s not a problem’ defense:

‘Where the OS and VMM are not malicious, LVI attacks are significantly more difficult to perform, even against Intel SGX enclaves.’

As a poignant ending, Intel’s official line is that this issue is not much of a concern for non-SGX environments where the OS and VMM are trusted. The researchers agree: while LVI is particularly severe for SGX, they believe it is more difficult to mount the attack in a non-SGX setting. That means processors from other companies are less vulnerable to this style of attack; however, those that are susceptible to Meltdown might still be able to be compromised.

The Fix, and the Cost

Both Intel and the researchers have proposed the same solution to the LVI class of attacks. The fix isn’t being planned at the microcode level, but at the software level, with compiler and SDK updates. The way to get around this issue is essentially to serialize instructions through the processor, ensuring a very specific order of execution.

Now remember that a lot of modern processor performance relies on several techniques, such as the ability to rearrange micro-ops inside a core (out-of-order execution) and to run multiple micro-ops in a single cycle (instructions per cycle). What these fixes do is essentially eliminate both of these whenever potentially attackable instructions are in flight.

For those that aren’t programmers, there exists a concept in programming called a ‘fence’. A broad definition of a fence is a point at which a program (typically one running across several cores) has to stop and make sure everything up to that point has completed correctly before moving on.

So, for example, imagine you have one core doing an addition and another core doing a division at the same time. Addition is a lot quicker than division, so if there are a lot of parallel calculations to do, you might be able to fire off 4-10 additions in the time it takes to do a single division. However, if the additions and divisions could touch the same place in memory, you might need a fence after each addition+division pair to make sure there’s no conflict.

In a personal capacity, when I wrote compute programs for GPUs, I had to use fences when moving from a parallel portion of my code to a serial portion, and the fence made sure that everything needed for the serial portion, computed in the parallel portion, had completed before moving on.
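
For readers who prefer code to analogies, here is a minimal sketch of that idea using C11 memory fences, with a made-up ‘parallel part’ producing a value that the ‘serial part’ must not read until it is ready. (This is the general memory-ordering notion of a fence; the LVI mitigation uses a more specific x86 instruction, lfence, sketched in the next code block.)

```c
// Build with: cc -std=c11 -pthread fence_demo.c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static int partial_sum;            // produced by the "parallel" part
static atomic_bool ready;          // signals that the parallel work is done

static void *parallel_part(void *arg) {
    (void)arg;
    partial_sum = 2 + 3 + 5 + 7;   // stand-in for the parallel computation
    // Release fence: everything written above must be visible to another
    // thread before that thread observes 'ready' as true.
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&ready, true, memory_order_relaxed);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, parallel_part, NULL);

    // Serial part: wait at the 'fence' until the parallel work has finished.
    while (!atomic_load_explicit(&ready, memory_order_relaxed))
        ;                          // spin (fine for a toy example)
    atomic_thread_fence(memory_order_acquire);

    printf("serial part sees partial_sum = %d\n", partial_sum);
    pthread_join(t, NULL);
    return 0;
}
```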

So the solution to LVI is to add these fences into the code – specifically after every memory load. This means that the program has to wait until every memory load is complete, essentially stalling the core for 100 nanoseconds or more. There is a knock-on effect in that when a function returns a value, there are various ways for the ‘return’ to be made, and some of those are no longer viable with the new LVI attacks.
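
As a concrete illustration of what ‘a fence after every memory load’ means at the source level, the hypothetical sketch below inserts the x86 _mm_lfence() intrinsic by hand after a load; in practice a hardened compiler performs this insertion automatically across the whole program, and the function and variable names here are invented for the example.

```c
#include <emmintrin.h>   // _mm_lfence(), the x86 load-fence intrinsic
#include <stddef.h>
#include <stdio.h>

// Hypothetical enclave-style lookup. Every load from memory is immediately
// followed by an lfence, so no later instruction can execute (even
// transiently) until the load has completed with its real value, closing
// the window that LVI relies on.
int hardened_lookup(const int *table, size_t index) {
    int value = table[index];   // the memory load that LVI could hijack
    _mm_lfence();               // serialize: wait for the load to truly finish
    return value + 1;           // dependent work only ever sees the real value
}

int main(void) {
    int table[4] = {10, 20, 30, 40};
    printf("%d\n", hardened_lookup(table, 2));   // prints 31
    return 0;
}
```

Compilers expose this transformation as an option (to the best of our knowledge, Clang via -mlvi-hardening and GCC via -mlfence-after-load=yes), alongside changes to how indirect branches and function returns are emitted, which is where the heavy performance cost discussed below comes from.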

The researchers are quite clear about how this fix is expected to hurt performance – depending on the application and the optimizations applied, we’re likely to see slowdowns of anywhere from 2x to 19x. The researchers examined compiler mitigation variants on an i7-6700K for OpenSSL and an i9-9900K for SPEC2017.

Intel has not commented on potential performance reductions.

For those that could be affected, Intel gives the following advice for SGX system users:

  • Ensure the latest Intel SGX PSW 2.7.100.2 or above for Windows and 2.9.100.2 or above for Linux is installed

And for SGX Application Providers:

  • Review the technical details.
  • Intel is releasing an SGX SDK update to assist the SGX application provider in updating their enclave code. To apply the mitigation, SDK version 2.7.100.2 or above for Windows and 2.9.100.2 or above for Linux should be used.
  • Increase the Security Version Number (ISVSVN) of the enclave application to reflect that these modifications are in place.
  • For solutions that utilize Remote Attestation, refer to the Intel SGX Attestation Technical Details to determine if you need to implement changes to your SGX application for the purpose of SGX attestation.

Final Words

The researchers told The Register that:

"We believe that none of the ingredients for LVI are exclusive to Intel processors. However, LVI turns out to be most practically exploitable on Intel processors … certain design decisions that are specific to the Intel SGX architecture (i.e. untrusted page tables). We consider non-SGX LVI attacks [such as those on AMD, Arm and others] of mainly academic interest and we agree with Intel's current assessment to not deploy extra mitigations for non-SGX environments, but we encourage future research to further investigate LVI in non-SGX environments," 

In the same light, all major chip architecture companies seem to have been told of the findings in advance, as has Microsoft, in case parts of the Windows kernel need adjustment.

Technically, there are several variants of LVI, depending on the types of data and attack; all of them can be found on the LVI website.

Overall, Intel has had a rough ride with its SGX platform. It had a complicated launch with Skylake, not being enabled on the first batches of processors and only being enabled in later batches, and since then SGX has been the focus of a number of these recent attacks on processors. The need for a modern core, especially one involved in everything from IoT all the way up to the cloud and enterprise, to have an equivalent of a secure enclave architecture is paramount, and up until this point it has been added to certain processors rather than necessarily being built in from the ground up; we can see that with Ice Lake and Tremont initially being listed as affected. The attack surface of Intel’s SGX solution, compared to those from AMD or Apple, has grown in recent months due to these new microarchitectural attacks, and the only way around them is to invoke performance-limiting restrictions on code development. Some paradigm has to change.

Comments

  • Unashamed_unoriginal_username_x86 - Wednesday, March 11, 2020 - link

    If I can say something incredibly naïve and stupid, why can't they just, like, not tell everyone about these vulnerabilities? It doesn't seem they found most of them being applied in malware, and they require potentially costly mitigation efforts. What's wrong with antiviruses and not letting strangers get near your laptop? Sorry for any lost brain cells
  • teohhanhui - Wednesday, March 11, 2020 - link

    That's "security by obscurity" which isn't security at all.
  • darkswordsman17 - Wednesday, March 11, 2020 - link

    There's a few reasons. First, just not telling anyone won't accomplish what you think it would, as the issue is still there, open to be exploited. Often they don't even know if it's being exploited (although they could possibly check various common malware to see, but that doesn't mean it's not being exploited, just that they aren't aware of it). Typically they will alert those that have control over this some time (I think some waited for over a year, although generally more like 3-6 months seems to be standard procedure) before more openly divulging them, to give them lead time to come up with a solution for the issue (patch, etc), giving them the opportunity to quietly mitigate it before it's "in the wild" (publicly shared). They are doing this as research though and it's literally their job. And if they don't, someone else likely will (or others will find and potentially exploit it, meaning not reporting leaves a lot of people open to potential attack). Lastly, it's important for them to share this so that these issues can hopefully be fixed (notice how they speculate that others might be vulnerable as well) and/or taken into account (thereby providing better security for everyone). In simplest terms, not releasing their info leaves more people vulnerable with likely little hope of a fix (if companies can get away with vulnerabilities not being known, they very likely won't bother fixing them - there's some evidence that Intel knew about the potential for some of these vulnerabilities for years, possibly over a decade even, and didn't do anything to prevent them, choosing to value things like performance over security).
  • FreckledTrout - Wednesday, March 11, 2020 - link

    This used to be how it was 20+ years ago. So hackers and governments had easy access once they found a vulnerability. They kept these things in inner circles while stuff was getting hacked left and right. This open approach on security is far better.
  • JoeyJoJo123 - Wednesday, March 11, 2020 - link

    Antivirus isn't always a solution for a data breach. Antivirus can usually only try to respond to known threats. If the threat isn't known, then what can an antivirus do to protect against that? Also, dumb analogy incoming:

    John (Intel) is some dude that lives in a house in a neighborhood, alone with his dog (Antivirus). John's house has a backdoor into his backyard. It doesn't have a lock on it (exploit).

    Next door neighbor Jim (Security Expert) one day spots that John's house doesn't have a lock on his backdoor.

    Now, Jim can just say nothing, and one day, eventually, John will get robbed (hacked), and maybe or maybe not his dog (antivirus) might be enough to deter the robbery. Jim has a conscience and doesn't want to just wait and let that happen one day because it'll just cost a real person their livelihood.

    Or Jim can say something to John (private disclosure of vulnerability). Problem is, people like John will go up and down for a year and say "Nawwww, it's not a problem, don't worry about it, it's fine, besides someone would need to physically be in my backyard to begin with to rob me, and what are the odds of that happening?"

    Eventually a year passes and Jim just makes a public Facebook post the entire neighborhood can see stating "John's backdoor has no lock on it. I'm not saying go rob him, but I'm saying he needs to put a lock on his backdoor already".

    The next day John goes to the hardware store and buys a door lock and installs it. Problem averted.
  • rahvin - Wednesday, March 11, 2020 - link

    Information security works best in layered defenses. From the firewall and email scanning, to the virus and anti-malware software right on down to infosec computer training. You need all the layers because each layer provides protection the other layers don't.
  • Spunjji - Thursday, March 12, 2020 - link

    This is actually a pretty damn good analogy. In my headcanon "Intel John" is played by John McAfee.
  • Drkrieger01 - Wednesday, March 11, 2020 - link

    This sounds like another exploit that can be mitigated by educating your working staff (like 'knowbe4.com'), and using a decent email filtration system (ex.: Fortimail). Education on what is/is not malware is paramount these days - you would not believe the crap that comes through email. So many javascript attacks, vbs scripts, macro embedded office files, etc., that users need to know how to identify... these are the things that will help protect your systems. Most of these exploits coming to light need direct access - educate the users to remove the 'direct access' from the equation.
    Yes, microcode updates are good as well, but let's be honest... are we ever going to see a completely secure CPU in the next 5 years? Probably not. Can it be 95% secure with proper care of use of the equipment, and education on identifying threats? Very likely.
  • JoeyJoJo123 - Wednesday, March 11, 2020 - link

    Yes, great solution. Let's just hire humans that never get overworked and never hastily click a link to try to mow through e-mail overload and never make mistakes. I've never fallen for phishing attempts, but that's not the issue here, it's a HARDWARE VULNERABILITY, let's not move the goalposts and say it's OK to have hardware vulnerabilities and that the real issue is security training.

    It's not feasible for every employee your company ever hires to never make mistakes. Every human is a human. And besides, this is a vulnerability that affects the hardware, where no matter how good your IT staff is, they can't just reformat the vulnerability away.
  • Spunjji - Thursday, March 12, 2020 - link

    Can back this up. Anyone who's worked for an educational institution in particular will know how utterly un-possible it is to prevent some minority of users from clicking random crap in emails. You can give training seminars, send out emails, post bulletins, whatever - and then 3 weeks later someone calls in and says "well, I clicked this thing..."
