
Upgrade to Wasmer Reborn (part 1) #504

Merged

webmaster128 merged 128 commits into master from upgrade-to-wasmer-reborn on Dec 10, 2020

Conversation

@webmaster128 webmaster128 (Member) commented Aug 5, 2020

Closes #503
Closes #462
Closes #501
Closes #375
Closes #495
Closes #555

@webmaster128 webmaster128 added the WIP work in progress label Aug 5, 2020
@webmaster128 webmaster128 changed the title from "Upgrade to wasmer reborn" to "Upgrade to Wasmer Reborn" on Aug 5, 2020
@webmaster128 webmaster128 force-pushed the upgrade-to-wasmer-reborn branch from 7eb1351 to 8685586 on September 8, 2020 08:04
@webmaster128 webmaster128 force-pushed the upgrade-to-wasmer-reborn branch from 3274902 to cc94242 on September 14, 2020 12:01
@ethanfrey ethanfrey (Member) left a comment

Impressive PR.

I wrote a lot of questions that came up trying to understand the new APIs and your usage of them better. However, the code looks solid. Happy for another pair of eyes on this, but from my end, I see nothing blocking merging this.

Also, I'm very curious how the benchmarks change once this is merged. It would be great to have a before/after diagram (or some tracking of it).

@@ -22,12 +22,13 @@ incremental = false
overflow-checks = true

[features]
# Change this to [] if you don't need Windows support and want faster integration tests.
Member

This only affects the tests, right?
Meaning, cargo test works out of the box on all platforms, but Linux/macOS can opt into cargo test --no-default-features for speed?

The alternative would be default [], cargo test does the right thing on osx/linux, and you would need cargo test --features cranelift to run on windows at all.

I am leaning towards the second as a pattern. But it would be cool to see if that could be done automatically for both.

Member

Hmm... I wasted some time digging through Cargo files.

There is, e.g.:

[target.x86_64-pc-windows-gnu.dependencies]
byte = "0.2.4"

But I cannot find the equivalent for enabling target-specific feature flags.

Minor point, but it would be good to set a proper template for all the contracts.

Member Author

This is only relevant for integration tests. Unit tests are not affected.

The alternative would be default [], cargo test does the right thing on osx/linux, and you would need cargo test --features cranelift to run on windows at all.

Yep, but it also requires different documentation. I went for the less optimized but more beginner-friendly way. But I'm open to change.

Hmmm.. wasted some time reflecting on Cargo files.

See also #649, which is probably what you are looking for

packages/vm/src/cache.rs (outdated conversation, resolved)
@@ -125,7 +126,7 @@ where
}

// Get module from file system cache
if let Some(module) = self.fs_cache.load(checksum)? {
if let Some(module) = self.fs_cache.load(checksum, options.memory_limit)? {
Member

This takes an optional memory limit from the config (very nice); why not use the config above as well?

Member Author

This limit is not optional but required for every instantiation.
You can have a single cache and get instances with different memory limits out of it. Not that we need it, but I ran into design trouble when I had a fixed limit in the cache.
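
As a toy illustration of that shape (this is not the cosmwasm-vm API; the names are made up): the cache stores compiled artifacts keyed by checksum and knows nothing about limits, while the memory limit is a required argument of every instantiation.

use std::collections::HashMap;

type Checksum = [u8; 32];

/// A cache of serialized modules; deliberately unaware of any memory limit.
struct ModuleCache {
    modules: HashMap<Checksum, Vec<u8>>,
}

/// Stand-in for a running instance; only the per-call limit matters here.
struct InstanceStub {
    memory_limit_pages: u32,
}

impl ModuleCache {
    /// The limit is required on every call, so one cache can hand out
    /// instances with different memory limits.
    fn get_instance(&self, checksum: &Checksum, memory_limit_pages: u32) -> Option<InstanceStub> {
        self.modules
            .get(checksum)
            .map(|_module_bytes| InstanceStub { memory_limit_pages })
    }
}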

@@ -476,7 +479,7 @@ mod test {
}

#[test]
#[cfg(feature = "default-singlepass")]
#[cfg(feature = "metering")]
Member

Okay, so this code is just left as unmigrated placeholders and fails if the metering flag is enabled now?
(Which is good to push into a separate PR)

Member Author

Yep, at some point I realized that adding a simple metering flag makes almost everything compile and pass.

packages/vm/src/calls.rs (resolved conversation)
@@ -0,0 +1,170 @@
use crate::backend::{Querier, Storage};
Member

This whole file is a TODO/placeholder, right? (I know you said metering was still pending upstream)

Member Author

Right, lots of no-op and outdated code here.

///
/// Delegated to base.
fn memory_style(&self, memory: &MemoryType) -> MemoryStyle {
let adjusted = self.adjust_memory(memory);
Member

no need to validate here?

Member Author

We validate everything we need to when storing the code the first time. Then self.adjust_memory just sets the maximum for the memory.

Member Author

Ah, now I know what you mean.

The result type is fixed by the Tunables trait in Wasmer and I can't return errors. But everything that goes through here also calls one of the other functions.
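
For illustration, a minimal sketch of that clamping step (not necessarily this PR's exact code), assuming wasmer's MemoryType and Pages types: the requested maximum is capped at the configured limit, and a maximum is imposed if the guest requested none.

use std::cmp::min;
use wasmer::{MemoryType, Pages};

/// Caps the memory's maximum at `limit`; imposes a maximum if the guest requested none.
/// Validation of the minimum happens elsewhere (when the code is first stored).
fn adjust_memory(requested: &MemoryType, limit: Pages) -> MemoryType {
    let maximum = requested.maximum.unwrap_or(limit);
    MemoryType::new(requested.minimum, Some(min(maximum, limit)), requested.shared)
}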

Self { limit, base }
}

/// Takes in input memory type as requested by the guest and sets
Member

Great comments and clear logic here

}
}

impl<T: Tunables> Tunables for LimitingTunables<T> {
Member

It's unclear to me from this code what gets called when a new page is allocated in an existing dynamic memory.

Or no need to intercept that, as we ensure valid minimum/maximum when they create the memory table in the first place, so the internal logic can handle dynamic page allocation?

Also, no intercept on pages means we cannot charge gas per memory usage, right? (No need to do so, I just remember the idea was floating around and curious if possible with the current API)

Member Author

Right, we set a maximum once and Wasmer checks that the number of pages does not exceed the maximum internally.

Also, no intercept on pages means we cannot charge gas per memory usage, right?

Basically yes.

In the new metering we could implement special treatment for the memory.grow opcode. But then we also need to look at the minimum. So if we really, really want, it can be done. But I'd say memory usage is free within the allowed range.
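
Just to make that idea concrete, here is a rough sketch of how a memory.grow surcharge could look with wasmer's metering middleware (wasmer 2.x-style API; the costs and the make_metered_store helper are made up for illustration and are not part of this PR):

use std::sync::Arc;
use wasmer::wasmparser::Operator;
use wasmer::{CompilerConfig, Store};
use wasmer_compiler_cranelift::Cranelift;
use wasmer_engine_universal::Universal;
use wasmer_middlewares::Metering;

fn make_metered_store(gas_limit: u64) -> Store {
    // Hypothetical per-operator costs: a flat 1 for everything, with an
    // extra-expensive memory.grow to put a price on growing memory.
    let cost_function = |operator: &Operator| -> u64 {
        match operator {
            Operator::MemoryGrow { .. } => 1_000,
            _ => 1,
        }
    };
    let metering = Arc::new(Metering::new(gas_limit, cost_function));
    let mut compiler = Cranelift::default();
    compiler.push_middleware(metering);
    Store::new(&Universal::new(compiler).engine())
}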

}

/// Creates a store with no compiler and the given memory limit (in pages)
pub fn make_store_headless(memory_limit: Size) -> Store {
Member

Ah, headless can run pre-compiled code, but not compile itself, which makes it lighter to start up when running many instances, right?

Member Author

Yes. This is also useful for cross-compilation, e.g. for IoT stuff. You compile on one machine and run in a much more limited environment.
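
A rough sketch of that compile-once, run-headless flow (wasmer 2.x-style API; the function names are illustrative and not from this PR):

use wasmer::{Module, Store};
use wasmer_compiler_cranelift::Cranelift;
use wasmer_engine_universal::Universal;

fn compile_and_serialize(wasm_bytes: &[u8]) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
    // On the "big" machine: a store with a compiler linked in.
    let store = Store::new(&Universal::new(Cranelift::default()).engine());
    let module = Module::new(&store, wasm_bytes)?;
    Ok(module.serialize()?)
}

fn run_headless(serialized: &[u8]) -> Result<Module, Box<dyn std::error::Error>> {
    // On the constrained machine: a headless engine, no compiler, just the runtime.
    let store = Store::new(&Universal::headless().engine());
    // unsafe because we must trust the artifact; fine here since we produced it ourselves.
    let module = unsafe { Module::deserialize(&store, serialized)? };
    Ok(module)
}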

@webmaster128 webmaster128 merged commit c44ff51 into master Dec 10, 2020
@webmaster128 webmaster128 deleted the upgrade-to-wasmer-reborn branch December 10, 2020 16:20
@webmaster128 webmaster128 changed the title from "Upgrade to Wasmer Reborn" to "Upgrade to Wasmer Reborn (part 1)" on Dec 17, 2020