Memory-Leak when generating Instance #3485

Closed
ubamrein opened this issue Jan 17, 2023 · 12 comments
Labels: priority-high (High priority issue)
Milestone: v3.2

@ubamrein
Contributor

Describe the bug

I'm currently using wasmer to experiment with WASM edge computing. For every request, I deserialize a module from its serialized bytes and execute the corresponding wasm file (I also tried the non-serialized approach before, but the memory leak was even worse).

The following is the minimal amount of code I need to reproduce the memory leak:

// Grab the serialized module bytes prepared earlier.
let module_bytes = if let Some(wasm_module) = self.wasm_module.as_ref() {
    wasm_module // &Vec<u8>
} else {
    return Err("No wasm".into());
};

// Fresh store per request: deserialize the pre-compiled module and
// instantiate it with an empty import object.
let mut store = Store::default();
let module = unsafe { Module::deserialize(&store, module_bytes.as_slice())? };
let instance = Instance::new(&mut store, &module, &Imports::default())?;

Steps to reproduce

Execute the above code within a loop, for example like this (note: I use compressed wasm files, but that should not matter here since the decompression is only done once):

use std::{thread::sleep, time::Duration};

use flate2::read::GzDecoder;
use wasmer::{Imports, Instance, Module, Store};

fn main() {
    env_logger::init();
    let m: Vec<u8> = {
        let wasm_file = include_bytes!("../example.gz");
        let store = Store::default();

        let wasm_file = decompress(wasm_file);

        let module = if let Ok(module) = Module::from_binary(&store, &wasm_file) {
            module
        } else {
            return;
        };
        module.serialize().unwrap().into()
    };

    // Re-deserialize and instantiate the same module on every iteration;
    // resident memory keeps growing even though store, module and instance
    // are all dropped at the end of each pass.
    loop {
        let mut store = Store::default();
        let module = unsafe { Module::deserialize(&store, m.as_slice()).unwrap() };
        let instance = Instance::new(&mut store, &module, &Imports::default()).unwrap();
        sleep(Duration::from_millis(1));
    }
}

fn decompress(archive: &[u8]) -> Vec<u8> {
    use std::io::prelude::*;
    let mut e = GzDecoder::new(archive);
    let mut buf = vec![];
    e.read_to_end(&mut buf).unwrap();
    buf
}

Expected behavior

There should be no memory leak: no wasm code is ever executed (which could leak on its own), and nothing is loaded except the wasm file itself.

Actual behavior

You can verify that memory usage grows and grows until the process is OOM-killed (which, with the example above, can take some time :)).
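
One way to watch the growth without an external profiler (my own Linux-only sketch, not part of the original report) is to print the process RSS from /proc/self/status every so often inside the loop:

// Linux-only sketch: print the resident set size every `every` iterations so
// the steady growth is visible directly in the program's output.
fn print_rss(iteration: usize, every: usize) {
    if iteration % every != 0 {
        return;
    }
    if let Ok(status) = std::fs::read_to_string("/proc/self/status") {
        if let Some(line) = status.lines().find(|l| l.starts_with("VmRSS")) {
            println!("iteration {iteration}: {}", line.trim());
        }
    }
}

Counting the iterations in the reproduction loop and calling print_rss(i, 1000) there makes the climbing VmRSS visible long before the OOM kill.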

@Leo-Besancon

I have the same issue (taking the example given here: https://docs.rs/wasmer/latest/wasmer/#usage).

Valgrind shows the following output:

==4143== 17,720 (6,400 direct, 11,320 indirect) bytes in 10 blocks are definitely lost in loss record 35 of 39
==4143==    at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==4143==    by 0xC843D0: alloc::alloc::alloc (alloc.rs:95)
==4143==    by 0xC74F8C: wasmer_vm::instance::allocator::InstanceAllocator::new (allocator.rs:81)
==4143==    by 0xB2F231: wasmer_compiler::engine::artifact::Artifact::instantiate (artifact.rs:350)
==4143==    by 0x1F02E4: wasmer::sys::module::Module::instantiate (module.rs:369)
==4143==    by 0x1F7FD5: wasmer::sys::instance::Instance::new (instance.rs:121)
==4143==    by 0x1F4751: wasmer_hello_world::run_func (main.rs:19)
==4143==    by 0x1F4D69: wasmer_hello_world::main (main.rs:34)
==4143==    by 0x1F5CCA: core::ops::function::FnOnce::call_once (function.rs:250)
==4143==    by 0x1FDFED: std::sys_common::backtrace::__rust_begin_short_backtrace (backtrace.rs:121)
==4143==    by 0x1EE5E0: std::rt::lang_start::{{closure}} (rt.rs:166)
==4143==    by 0xD3942B: std::rt::lang_start_internal (function.rs:287)
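
For reference, a minimal sketch of what that reproduction might look like. This is my reconstruction from the backtrace and the linked docs.rs example, not the exact program; run_func is simply the name that appears in the trace.

use wasmer::{imports, Instance, Module, Store, Value};

// Roughly the docs.rs "usage" example, re-run on every call.
fn run_func(wat: &str) -> Result<(), Box<dyn std::error::Error>> {
    let mut store = Store::default();
    let module = Module::new(&store, wat)?;
    let import_object = imports! {};
    let instance = Instance::new(&mut store, &module, &import_object)?;

    let add_one = instance.exports.get_function("add_one")?;
    let result = add_one.call(&mut store, &[Value::I32(42)])?;
    assert_eq!(result[0], Value::I32(43));
    Ok(())
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let wat = r#"
    (module
      (type $t0 (func (param i32) (result i32)))
      (func $add_one (export "add_one") (type $t0) (param $p0 i32) (result i32)
        local.get $p0
        i32.const 1
        i32.add))
    "#;
    // A handful of iterations is already enough for Valgrind to report
    // "definitely lost" blocks like the ones above.
    for _ in 0..10 {
        run_func(wat)?;
    }
    Ok(())
}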

@ubamrein
Contributor Author

Shouldn't this

self.consumed = true;

be done after the instance has been written to its new location? Or am I missing how this works?

@Leo-Besancon

Leo-Besancon commented Jan 17, 2023

Maybe, though I'm not sure it matters: drop should not be called while we are still inside write_instance on the InstanceAllocator.

I would think that once write_instance has been called on the InstanceAllocator, it becomes the job of the InstanceHandle to drop the instance when needed. But I don't see a drop implementation on the handle.

@ubamrein
Contributor Author

Maybe, though I'm not sure it matters: drop should not be called while we are still inside write_instance on the InstanceAllocator.

I would think that once write_instance has been called on the InstanceAllocator, it becomes the job of the InstanceHandle to drop the instance when needed. But I don't see a drop implementation on the handle.

Yeah I read the code wrong, you're right!
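
For context, the pattern discussed above looks roughly like this. It is only an illustrative sketch, not Wasmer's actual InstanceAllocator, and hand_off is a hypothetical stand-in for write_instance: the allocation is freed in Drop only if ownership was never handed off, so the exact point where consumed is flipped during the hand-off doesn't matter, as long as whoever receives the pointer eventually frees it.

use std::alloc::{alloc, dealloc, Layout};

// Illustrative only: an allocator that owns a raw allocation until ownership
// is handed off, and otherwise frees it in Drop.
struct RawInstanceAllocator {
    ptr: *mut u8,
    layout: Layout,
    consumed: bool,
}

impl RawInstanceAllocator {
    fn new(layout: Layout) -> Self {
        // Assumes a non-zero-sized layout for the sake of the sketch.
        let ptr = unsafe { alloc(layout) };
        Self { ptr, layout, consumed: false }
    }

    // Hypothetical stand-in for write_instance: ownership of the allocation
    // moves to the caller, so the allocator marks itself consumed and must
    // never free the pointer afterwards.
    fn hand_off(mut self) -> (*mut u8, Layout) {
        self.consumed = true;
        (self.ptr, self.layout)
    }
}

impl Drop for RawInstanceAllocator {
    fn drop(&mut self) {
        if !self.consumed {
            // Only reached if ownership was never handed off.
            unsafe { dealloc(self.ptr, self.layout) };
        }
    }
}

If the receiving side (the handle, in this thread) never deallocates the pointer in its own Drop, every instantiation leaks the block, which matches the Valgrind report above.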

@ubamrein
Contributor Author

Hmm, indeed. I can find a Drop/dealloc implementation for InstanceInner (and hence InstanceRef), but there seems to be none for InstanceHandle.

I'll need to check out the code and see whether a Drop implementation fixes it... I hope I find some time later.

@ptitSeb
Contributor

ptitSeb commented Jan 18, 2023

So, using Valgrind, I do see some memory leaking with a minimal program. Something seems off with the Instance; I'll dig further.

@ptitSeb
Contributor

ptitSeb commented Jan 18, 2023

Created a branch with a potential fix.
I think the leak of the InstanceHandle itself should be fixed, but running Valgrind still shows more leaks. I need to investigate a bit more.

@ubamrein
Contributor Author

In my case I can still see memory leaking, though it does indeed look like less than before :)

@ptitSeb
Contributor

ptitSeb commented Jan 23, 2023

This issue is also referenced in https://oss-fuzz.com/testcase-detail/5021130350264320

@syrusakbary syrusakbary added the priority-high High priority issue label Jan 24, 2023
@syrusakbary syrusakbary added this to the v3.2 milestone Jan 24, 2023
@ptitSeb
Contributor

ptitSeb commented Jan 25, 2023

@ubamrein I have updated the branch; instance memory should not be leaking anymore. Can you test again to confirm?

@ubamrein
Contributor Author

@ubamrein I have updated the branch; instance memory should not be leaking anymore. Can you test again to confirm?

@ptitSeb Yes I can confirm that I can't reproduce the memory leak anymore! Thanks for fixing it!

@ptitSeb
Contributor

ptitSeb commented Jan 25, 2023

The branch has been merged to master. I guess this ticket can be closed now, right?
