add some notes about JIT compilation issues #99

Open — wants to merge 2 commits into `master`
**JIT_compilation_issues.md** (43 additions, 0 deletions)
### The background

* two ways to execute wasm code: interpreted, or compiled (aka native execution).
  - note that "two ways" is an over-simplification: there's a spectrum in between, with slow interpreters at one end, interpreter speedup techniques along the middle, and fully compiled JIT (Just-in-Time) or AOT (Ahead-of-Time) execution at the opposite end.
  - because of this spectrum between interpretation and compilation, some people ask why bother trying to do compiled execution, since there is "literally no difference" between interpreted execution and compiled execution. Others point out that this is technically true, "except for two orders of magnitude in execution speed."
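
To make the interpretation end of that spectrum concrete, here is a toy stack-machine interpreter in Python (purely illustrative, not real wasm semantics): every instruction pays a fetch/decode/dispatch cost on every execution, which a JIT or AOT compiler pays only once at compile time.

```python
def interpret(code):
    """Execute a list of (opcode, operand) pairs on a tiny stack machine.
    Every instruction pays decode-and-dispatch overhead -- the reason
    interpreters sit at the slow end of the spectrum."""
    stack = []
    for op, arg in code:  # fetch/decode/dispatch on every single instruction
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack

# (2 + 3) * 4 -- a compiler would emit this as a few machine instructions
# once, instead of re-dispatching on every execution.
program = [("push", 2), ("push", 3), ("add", None), ("push", 4), ("mul", None)]
assert interpret(program) == [20]
```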

* an example benchmark: the ECPAIRING precompile (i.e. the zk-SNARK precompile). We took the Rust implementation of ECPAIRING (which is used in Parity, and in ethereumJS by compiling the Rust code to asm.js), and compiled it to wasm. Then we deployed it to our ewasm prototype testnet client, which uses Binaryen (a wasm interpreter) as the wasm engine. With interpreted wasm execution, three CALLs to the ECPAIRING contract took 21 seconds (we can't say exactly how much gas it would cost because we haven't yet finished the metering injection "sentinel contract"). Then we executed the same thing in node.js (which uses v8 as its wasm engine), and it took ~100 milliseconds. (Somebody out there might want to try this same benchmark by deploying it on Kovan and running it in Parity, to see how much gas it costs and how long it takes in Parity's wasm interpreter.)

### The problem with interpreters: gas costs

* In theory, it is possible to just use wasm interpreters as the baseline (or maybe "reference") wasm engine. But the issue is: how will we calibrate ewasm gas costs? If gas costs are calibrated to interpreter execution speeds, then running a contract such as the ECPAIRING contract would be cost-prohibitive (in terms of the block gas limit). This means there would still be demand from users to add new precompiles to the EVM/ewasm protocol (recall that the advantage and motivation of "precompiled contracts", aka "builtins", is that they have custom gas costs calibrated to native execution speed).
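
A back-of-envelope sketch of the calibration problem, using the benchmark numbers above (~21 s interpreted vs ~0.1 s native for the same three calls). The gas-per-second constant here is made up purely for illustration; it is not a real part of any gas schedule.

```python
# Benchmark numbers from the ECPAIRING example above.
INTERPRETED_SECS = 21.0   # three ECPAIRING calls, Binaryen interpreter
NATIVE_SECS = 0.1         # same calls under v8's JIT

slowdown = INTERPRETED_SECS / NATIVE_SECS   # ~210x

# Hypothetical calibration constant: gas charged per second of engine time.
GAS_PER_SECOND = 1_000_000

# If the schedule is calibrated to interpreter speed, the same work is
# priced ~210x higher than under native-speed calibration -- the same
# pressure that motivates adding precompiles with custom pricing.
gas_interpreter_calibrated = round(INTERPRETED_SECS * GAS_PER_SECOND)
gas_native_calibrated = round(NATIVE_SECS * GAS_PER_SECOND)
```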

### The problem with JITs: compiler bombs

* the easiest way to run wasm code at native speeds is to plug in to existing JIT wasm engines: the browsers (Chrome/node.js/v8 and Firefox/SpiderMonkey), or non-browsers (WAVM - WebAssembly Virtual Machine - which is based on LLVM, i.e. translating wasm bytecode to LLVM bitcode and then to machine code). Each client (geth, parity, trinity/pyethereum, ethereumJS) could choose whatever JIT engine it wants.

* adopting any of these JIT engines would be easy -- just pull one off-the-shelf and it will work out of the box. But they're super complex machines, with massive codebases, so unless you are an expert in how JIT compilers work, they're essentially black boxes.

* the problem is, what if wasm JIT engines are vulnerable to DoS attacks? Running most contracts is fine because the JIT compilation happens very fast (e.g. a few milliseconds), then execution happens fast (say 100 milliseconds). But some contracts could be exploits which take the JIT engine a very long time to compile: "compiler bombs" or "JIT bombs".
> **Collaborator:** When designing the gas cost, I think I would ask "what is the gas cost we should use for the most efficient (theoretically) way to execute the code?" I think this disincentivizes less-performant ways of implementing. Surely there will be some error in figuring out the most theoretically efficient way, though.

> **@ehildenb** (May 31, 2018): Not sure the solution is a "bomb sniffing contract", as much as a "general way of specifying protocol-level details as system contracts". Either way, it would be something that needs consensus on it (as all the software/hardware evolves around it), so it would be good to be able to modularly insert such contracts.


* what we couldn't answer before: how do the standard wasm JIT engines work and are they vulnerable to JIT bombs? If they do only one linear pass over the wasm bytecode, then compilation time should be linearly proportional to code size, and JIT bomb attacks would be a non-issue. (or in fancier words, "does wasm JIT compilation have a linear-time upper bound?"). But if standard wasm JIT engines have compilation times that are quadratic (or worse) for certain inputs, then they could be vulnerable to JIT bombs.
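
The "linear-time upper bound" question can be phrased as a simple measurement: fit compile time against module size and flag superlinear growth. A sketch with synthetic timings (a real harness would time the engine's actual compile call on real modules):

```python
import math

def growth_exponent(sizes, times):
    """Estimate k in time ~ size**k from the log-log slope between the
    first and last samples."""
    return (math.log(times[-1]) - math.log(times[0])) / (
        math.log(sizes[-1]) - math.log(sizes[0])
    )

# Synthetic timings: a linear-time engine vs a bomb-prone quadratic path.
sizes = [1_000, 10_000, 100_000]        # module size in bytes
linear_times = [0.001, 0.010, 0.100]    # seconds: 10x size -> 10x time
quadratic_times = [0.001, 0.100, 10.0]  # seconds: 10x size -> 100x time

assert growth_exponent(sizes, linear_times) < 1.5     # ~1: linear, safe
assert growth_exponent(sizes, quadratic_times) > 1.5  # ~2: JIT-bomb risk
```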

* what we've just learned: the standard wasm JIT engines are not linear-time-bounded. We learned this by fuzz testing v8 (and WAVM) to find slow inputs. We found several bombs, which (for example) are 20kb pieces of wasm code that take two seconds to compile in v8. You can see them here [INSERT_LINKS_HERE]

* different wasm engines are vulnerable to different JIT bombs. Even different versions of the same engine, or the same engine run with different option flags, are affected differently by different bombs. Some bombs work across multiple versions (and maybe across multiple engines). We are still exploring and analyzing the features of bombs and how they exploit JIT engines; the studies so far are very preliminary (and we're not JIT compiler experts, or at least some of us aren't).
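
The shape of the fuzzing loop used to find slow inputs can be sketched like this; `fake_compile` and `mutate` below are stand-ins invented for illustration, not the real fuzzer or any real engine (the real work times actual engines such as v8 and WAVM on mutated wasm modules).

```python
import random
import time

def fuzz_for_bombs(compile_fn, seed_module, mutate_fn, trials=20, budget=0.01):
    """Hypothetical harness: mutate a seed module, time each compilation,
    and record any input whose compile time exceeds the budget (seconds)."""
    bombs = []
    for _ in range(trials):
        candidate = mutate_fn(seed_module)
        start = time.perf_counter()
        compile_fn(candidate)
        elapsed = time.perf_counter() - start
        if elapsed > budget:
            bombs.append((candidate, elapsed))
    return bombs

# Stand-in "engine" whose compile time blows up on deeply nested inputs.
def fake_compile(module):
    if module.count(b"nest") > 3:
        time.sleep(0.02)  # simulated pathological compiler pass

def mutate(seed):
    return seed + b"nest" * random.randint(0, 5)

bombs = fuzz_for_bombs(fake_compile, b"\x00asm", mutate)
# every recorded bomb exceeded the time budget by construction
```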

### Potential solutions

* Restated, the problem is that when we pull off-the-shelf wasm engines and use them to JIT-execute wasm contracts, the execution stage is metered, but the JIT compilation stage is not, and there does not appear to be any easy way to add metering to the compilation stage.

* One solution idea is to do metered AOT instead of (not metered) JIT. To imagine a system where contracts are AOT compiled at deployment time, picture every ethereum client maintaining a cache directory of binaries: 0x666cryptokitties.exe, 0xdeadMultiSigContract.exe, and so on.
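
A minimal sketch of that cache-of-binaries picture, with made-up names and a stand-in compiler (a real client would store actual machine code, keyed by contract address):

```python
import hashlib
import pathlib
import tempfile

# Cache directory of compiled binaries, one per deployed contract.
CACHE_DIR = pathlib.Path(tempfile.mkdtemp()) / "aot-cache"
CACHE_DIR.mkdir(parents=True)

def aot_compile(wasm_bytes: bytes) -> bytes:
    """Stand-in for a metered AOT compiler; returns fake 'native code'."""
    return b"NATIVE:" + hashlib.sha256(wasm_bytes).digest()

def deploy(address: str, wasm_bytes: bytes) -> None:
    # Compilation happens once, at deployment, where it can be metered.
    (CACHE_DIR / f"{address}.bin").write_bytes(aot_compile(wasm_bytes))

def execute(address: str) -> bytes:
    # Call time: just load the cached binary -- no JIT stage to attack.
    return (CACHE_DIR / f"{address}.bin").read_bytes()

deploy("0x666cryptokitties", b"\x00asm...")
assert execute("0x666cryptokitties").startswith(b"NATIVE:")
```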

* Some people's opinion is that a system based on metered AOT would be a big PITA. It would require implementing a new wasm AOT compiler, or adapting an existing wasm engine, adding metering, and then requiring clients to maintain a directory of compiled binaries. From the point of view of a compiler expert, this may not sound like such a big deal. But from the point of view of an average Ethereum client developer, it sounds like a lot of development effort.

* one way to explain to an average Ethereum client developer how metered AOT might work is to say: take something like WAVM, and adapt it to do AOT compilation rather than JIT (we'll still just call this WAVM). Then take WAVM and compile it to wasm (call it WAVM.wasm), and inject metering into that wasm. Then the Ethereum client runs this "WAVM-AOT-metered-compiler" whenever a user deploys a new wasm contract.
> **Comment:** Duplicating my comment here since it might get lost with the previous commit:
>
> I'm not sure I'm following... What output will this compiler provide? I guess it should provide machine code for the native platform. But that means at least one part of the compiler will be executed differently on different platforms. Thus the gas usage will be platform-dependent, right?

> **Comment:** Just to clarify:
>
> 1. This AOT compiler should emit native code (otherwise it wouldn't be useful).
> 2. I'm assuming that the Ethereum node can be run on different platforms (e.g. Aarch64 and x86-64).
> 3. I'm assuming that the AOT compiler will execute a different code path in order to produce native code for Aarch64 and x86-64.
> 4. So it will spend a different amount of gas to produce native code depending on the platform, thus it's not deterministic.

> **@axic** (Jun 1, 2018): I think one of the discussion points was that the AOT compiler would generate bytecode for all supported architectures at the same time, and as such it would be deterministic. It is super inefficient though.

> **Comment:** Ah, I see now. Thanks.

> **Contributor (author):** Or just meter the generation of machine code for one target "reference architecture" (e.g. x86-64). Then if a client is running on a different architecture (e.g. Aarch64) it would be 2x less efficient, but that's fine. The point isn't to have actual execution costs perfectly match the calibrated gas costs, just to be "good enough" to prevent DoS attacks.

> **Comment:** Wouldn't it be possible to devise architecture-specific bombs? i.e. a contract that passes the test for x86-64, but would explode on Aarch64?


* a twist on this idea is to create a wasm "JIT bomb sniffer". Take this WAVM.wasm, or take say v8.wasm (i.e. v8 compiled down to wasm) and inject metering into it. Call them WAVM-sniffer.wasm and v8-sniffer.wasm. Then when a user sends a contract deployment tx, pass it through the bomb sniffer and check if the sniffer's gas usage exceeds a threshold. If the bomb sniffer's gas threshold is exceeded, then deployment fails. This would hopefully ensure that any deployed contracts do not contain JIT bombs, and could be safely executed using off-the-shelf wasm engines.
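
A sketch of the sniffer check at deployment time; `metered_compile` below is a stand-in for running WAVM-sniffer.wasm or v8-sniffer.wasm under injected metering, with a fake superlinear cost model and a made-up gas threshold:

```python
# Hypothetical gas budget for the deployment-time sniffer pass.
SNIFFER_GAS_LIMIT = 1_000_000

def metered_compile(wasm_bytes: bytes) -> int:
    """Stand-in for the metered compiler-in-wasm: returns gas consumed
    compiling the module. Fakes superlinear cost for modules with many
    'block' markers, mimicking a bomb-prone compiler pass."""
    blocks = wasm_bytes.count(b"block")
    return 100 * len(wasm_bytes) + 50 * blocks * blocks

def try_deploy(wasm_bytes: bytes) -> str:
    # Run the candidate contract through the sniffer before accepting it.
    gas_used = metered_compile(wasm_bytes)
    if gas_used > SNIFFER_GAS_LIMIT:
        return "rejected: possible JIT bomb"
    return "deployed"

assert try_deploy(b"\x00asm" + b"code" * 100) == "deployed"
assert try_deploy(b"\x00asm" + b"block" * 500) == "rejected: possible JIT bomb"
```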

* the downside of the bomb sniffer idea is that the sniffer would only be protection for particular versions of particular wasm engines. If a client upgrades the wasm engine to a newer version, there might be bombs that have already been deployed which could be used to DoS attack any clients using the new wasm engine. Also, to ensure that deployed contracts could be safely JITted by a variety of wasm engines, multiple sniffers would be needed, with each sniffer tailored to a particular wasm engine. Another concern is that there might be some way to "mask the bomb smell" and deploy JIT bombs by somehow sneaking them past the sniffer. Also, Ethereum client developers wouldn't be able to safely discuss wasm engine DoS vulnerabilities inside or around airports (or at least, they would have to whisper if they do).

### Next steps

* to experiment with the bomb sniffer idea, we plan to compile WAVM (and/or v8) to wasm, to see how big the wasm binary is and how long it takes to execute a bomb sniffer in a wasm interpreter. And we will also check whether WAVM.wasm (and/or v8.wasm) are JIT bombs when executed in themselves (i.e. executing WAVM.wasm/v8.wasm in WAVM.exe/v8.exe).