Memory leak with actix-web 2.0 and actix-rt 1.1.1 for hello world #1500

Closed
peshwar9 opened this issue May 11, 2020 · 13 comments

peshwar9 commented May 11, 2020

Hi,

I'm running the basic hello world program from the actix guide, and I see a memory leak with each request. Details below:

rustc --version:
rustc 1.43.0 (4fb7144ed 2020-04-20)

cargo --version:
cargo 1.43.0 (3532cf738 2020-03-17)

Cargo.toml:

[dependencies]
actix-web = "2.0"
actix-rt = "=1.1.1"

src/main.rs:

use actix_web::{web, App,  HttpResponse, HttpServer, Responder};

async fn index() -> impl Responder {
    HttpResponse::Ok().body("Hello world!")
}

async fn index2() -> impl Responder {
    HttpResponse::Ok().body("Hello world again!")
}
#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(index))
            .route("/again", web::get().to(index2))
    })
    .keep_alive(75)
    .bind("127.0.0.1:8088")?
    .run()
    .await
}

Used cargo run to do a debug build and run the server.

Initial process memory consumption (rss) was 6.9 MB (6984 KB) on startup.

I ran the following load tests:

wrk -t12 -c400 -d10s http://localhost:8088

Result:
End of first run, memory of process (rss): 16468 KB
End of second run, memory of process (rss): 19108 KB
End of third run, memory of process (rss): 20512 KB

Expected behaviour: memory consumption may increase under load, but should return to its original level.
Actual behaviour: memory consumption increases with each request and does not drop even after the server has been running for over an hour.

Host platform: macOS Mojave 10.14.5 on a MacBook Pro; the server and client are run from separate terminals.

I looked through the list of issues, and it was stated that there was a memory leak in actix-rt 1.1.0 that was fixed in 1.1.1, but I'm using that version and still see the leak. Please advise.

Lesiuk (Contributor) commented May 11, 2020

If everything is fine with actix-rt = "=1.0", I would guess actix/actix-net@1b4a117 does not fix the leak.

robjtede (Member) commented:

actix/actix-net@1b4a117 fixed a leak introduced in 1.1.

This would be a different leak that needs tracking down.

Lesiuk (Contributor) commented May 11, 2020

Yeah, I can reproduce it. I'll check where all of those allocations are coming from.

cdbattags (Member) commented:

@Lesiuk yeah, and I think as we move into further breaking changes we could try to regression-test memory allocation with some sort of e2e suite?

@robjtede any thoughts? These memory leaks are super scary and are decent reasons to stay away from any affected release until squashed, no?

robjtede (Member) commented:

I'm currently monitoring a leak-like issue in production myself that is likely this issue; it's manageable for now and I wouldn't say anything needs yanking like rt v1.1.0 did.

It's certainly worth immediate attention, profiling, and tracking down. Eager to see the results of your allocation investigation, @Lesiuk. Do you have time to do this today? If not, I can do so later.

Lesiuk (Contributor) commented May 11, 2020

@robjtede I'm pretty convinced it's because of memory fragmentation, and it only happens with a high number of concurrent connections (the -c parameter). When running for 1 hour in a loop, memory usage stopped increasing at 49.2 MB on my MacBook Pro and 32 MB on my Windows machine.

For example, when testing with 35 concurrent connections, memory stopped increasing at 9.4 MB.

I will confirm whether it's really memory fragmentation using Valgrind on a Linux VM tomorrow afternoon (since Valgrind doesn't work well on macOS and Windows).

@cdbattags it could be tested using a custom global allocator that increments a bytes-allocated counter on every execution of unsafe fn alloc(&self, l: Layout) -> *mut u8 and decrements it on every execution of unsafe fn dealloc(&self, ptr: *mut u8, l: Layout) (the GlobalAlloc trait), or by using jemalloc statistics. A sketch of that idea is below.
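
A minimal sketch of that counting-allocator idea, assuming only the standard library's GlobalAlloc trait and the system allocator; CountingAllocator and currently_allocated are illustrative names, not anything from actix:

use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Tracks the number of bytes currently handed out by the allocator.
static ALLOCATED: AtomicUsize = AtomicUsize::new(0);

struct CountingAllocator;

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Add the requested size before delegating to the system allocator.
        ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        // Subtract the same size when the memory is returned.
        ALLOCATED.fetch_sub(layout.size(), Ordering::Relaxed);
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: CountingAllocator = CountingAllocator;

// Read the live-byte counter, e.g. from a /metrics handler or between wrk runs.
fn currently_allocated() -> usize {
    ALLOCATED.load(Ordering::Relaxed)
}

fn main() {
    let v = vec![0u8; 1024];
    println!("live bytes: {}", currently_allocated());
    drop(v);
}

If the counter returns to roughly its pre-load value after a wrk run while RSS stays high, the growth is fragmentation rather than leaked allocations.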

robjtede mentioned this issue Jun 6, 2020
Lesiuk (Contributor) commented Jun 7, 2020

Valgrind shows a small memory leak inside:

==1951==    at 0x483E0F0: memalign (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==1951==    by 0x483E212: posix_memalign (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==1951==    by 0x26A7839: aligned_malloc (alloc.rs:95)
==1951==    by 0x26A7839: alloc (alloc.rs:22)
==1951==    by 0x26A7839: __rdl_alloc (alloc.rs:304)
==1951==    by 0x223B56B: alloc::alloc::alloc (alloc.rs:80)
==1951==    by 0x223D1D3: <alloc::alloc::Global as core::alloc::AllocRef>::alloc (alloc.rs:174)
==1951==    by 0x223B4C4: alloc::alloc::exchange_malloc (alloc.rs:268)
==1951==    by 0x220A925: alloc::sync::Arc<T>::new (sync.rs:323)
==1951==    by 0x224D652: tokio::time::driver::entry::Entry::new (entry.rs:124)
==1951==    by 0x21E79AF: tokio::time::driver::registration::Registration::new (registration.rs:21)
==1951==    by 0x2238A04: tokio::time::delay::delay_until (delay.rs:19)
==1951==    by 0x8F37FF: actix_http::config::ServiceConfig::keep_alive_timer (config.rs:159)
==1951==    by 0x6B533C: actix_http::h1::dispatcher::Dispatcher<T,S,B,X,U>::with_timeout (dispatcher.rs:226)
==1951==    by 0x5C87BD: actix_http::h1::dispatcher::Dispatcher<T,S,B,X,U>::new (in /home/lesiuk/repos/backend/target/release/freedom)
==1951==    by 0x38C0AD: <actix_http::service::HttpServiceHandler<T,S,B,X,U> as actix_service::Service>::call (in /home/lesiuk/repos/backend/target/release/freedom)
==1951==    by 0x74B0C4: <actix_service::and_then::AndThenServiceResponse<A,B> as core::future::future::Future>::poll (in /home/lesiuk/repos/backend/target/release/freedom)
==1951==    by 0x64FEFE: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll (in /home/lesiuk/repos/backend/target/release/freedom)
==1951==    by 0x3C4523: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (in /home/lesiuk/repos/backend/target/release/freedom)
==1951==    by 0x6C1A5E: tokio::runtime::task::harness::Harness<T,S>::poll (in /home/lesiuk/repos/backend/target/release/freedom)
==1951==    by 0xE9F838: std::thread::local::LocalKey<T>::with (in /home/lesiuk/repos/backend/target/release/freedom)

But it's pretty minor (50 KB after 100,000 requests). Most of the 62 MB usage I had was memory fragmentation (really allocated was only 12 MB).
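
For reference, one way to tell live allocations apart from resident memory (and so confirm fragmentation) is to read the allocator's own statistics. A minimal sketch, assuming the jemallocator and jemalloc-ctl crates, which are not used in this issue:

use jemalloc_ctl::{epoch, stats};

// Route all allocations through jemalloc instead of the system allocator.
#[global_allocator]
static ALLOC: jemallocator::Jemalloc = jemallocator::Jemalloc;

fn main() {
    // jemalloc caches its statistics; advancing the epoch refreshes them.
    epoch::advance().unwrap();

    // "allocated" counts bytes in live allocations; "resident" is close to RSS.
    // A large gap between the two points at fragmentation rather than a leak.
    let allocated = stats::allocated::read().unwrap();
    let resident = stats::resident::read().unwrap();
    println!("{} bytes allocated / {} bytes resident", allocated, resident);
}

Swapping the global allocator like this is also a quick way to check whether the growth pattern is specific to the default allocator.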

robjtede (Member) commented:

I'm not seeing evidence of a memory leak on the latest v3 beta code. As @Lesiuk said, this is probably just memory fragmentation. If further evidence is brought for the latest code, I'll be happy to reopen.

LastLightSith commented:

I tested the above code, only updating the dependencies, and it still shows a memory leak on Arch Linux:

src/main.rs:

use actix_web::{web, App, HttpResponse, HttpServer, Responder};

async fn index() -> impl Responder {
    HttpResponse::Ok().body("Hello world!")
}

async fn index2() -> impl Responder {
    HttpResponse::Ok().body("Hello world again!")
}
#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(index))
            .route("/again", web::get().to(index2))
    })
    .keep_alive(75)
    .bind("127.0.0.1:8088")?
    .run()
    .await
}

Cargo.toml:

[package]
name = "testapp"
version = "0.1.0"
edition = "2018"

[dependencies]
actix-web = "3.0.0-beta.1"
actix-rt = "1.1.1"
❯ wrk -t12 -c100 -d100s http://localhost:8088
Running 2m test @ http://localhost:8088
  12 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.59ms    2.21ms  47.28ms   90.24%
    Req/Sec     3.59k   599.18     6.46k    76.36%
  4292721 requests in 1.67m, 360.26MB read
Requests/sec:  42897.00
Transfer/sec:      3.60MB

I'm attaching KDE System Monitor screenshots

Before testing: [screenshot]

After testing: [screenshot]

omid (Contributor) commented Aug 2, 2020

I can also still see the leak after upgrading to the latest beta version.
It adds another ~70 MB of memory usage after each execution of wrk -t12 -c100 -d100s http://localhost:8088 (~8.5 million requests).

robjtede (Member) commented:

App data memory leak fixed in beta 2.

peshwar9 (Author) commented Aug 17, 2020 via email

omid (Contributor) commented Aug 17, 2020

LGTM
