
Add ServerAssociation::requestor/acceptor_max_pdu_length() #685

Merged
Enet4 merged 3 commits into Enet4:master from pgimeno4d:add-max-remote-pdu-length
Oct 3, 2025

Conversation

@pgimeno4d
Contributor

Fixes #684 by adding a reader method.

@pgimeno4d pgimeno4d force-pushed the add-max-remote-pdu-length branch from 7786349 to 132703b on September 16, 2025 11:20
@Enet4
Owner

Enet4 commented Sep 16, 2025

Sounds good, but test coverage for the exchange of maximum PDU lengths is a bit thin right now. Could you also extend association_echo so that it sets maximum PDU lengths explicitly and checks that these numbers are correctly shared on the association establishment? Much appreciated!

@Enet4 Enet4 added the enhancement, A-lib (Area: library), and C-ul (Crate: dicom-ul) labels Sep 16, 2025
@pgimeno4d pgimeno4d changed the title Add ServerAssociation::max_remote_pdu_length() [WIP] Add ServerAssociation::max_remote_pdu_length() Sep 16, 2025
@pgimeno4d
Contributor Author

pgimeno4d commented Sep 16, 2025

I'm working on something like that. The above patch is incomplete; the client will also need a way to read its peer's maximum, so I've marked it as WIP in the title.

The enhanced echo test I'm working on has already revealed a pair of related bugs. Unfortunately, exposing both requires two separate associations, one where the server max exceeds the client max and one vice versa, which goes a bit farther than I had planned. I found the second bug by recompiling with the maximums swapped.

@pgimeno4d
Contributor Author

I hadn't checked ClientAssociation, but when I did, I noticed that it already provides requestor_max_pdu_length() and acceptor_max_pdu_length() methods, so for consistency I've renamed the server method to match the client's and added the acceptor counterpart too.

Added unit tests. Note that the tests will fail unless #686 is merged first.

The conundrum about testing both directions was resolved by choosing at random which side gets the larger maximum.

@Enet4 Enet4 added this to the DICOM-rs 0.9 milestone Sep 17, 2025
@pgimeno4d pgimeno4d changed the title [WIP] Add ServerAssociation::max_remote_pdu_length() Add ServerAssociation::max_remote_pdu_length() Sep 17, 2025
@pgimeno4d pgimeno4d changed the title Add ServerAssociation::max_remote_pdu_length() Add ServerAssociation::requestor/acceptor_max_pdu_length() Sep 18, 2025
@pgimeno4d
Contributor Author

pgimeno4d commented Sep 18, 2025

This will be obsoleted by #679. If the plan is to merge 679 for 0.9, this one can be closed. Same with #686.

@Enet4
Owner

Enet4 commented Oct 1, 2025

Given that v0.9.0 is overdue and #679 does not introduce breaking changes, it would make sense to contribute these methods separately and leave TLS support for v0.9.1. It would be great if you could rebase this pull request on top of #689, so that these can be merged in order.

@pgimeno4d pgimeno4d force-pushed the add-max-remote-pdu-length branch from cafa5a6 to 47d27f8 on October 2, 2025 17:28
Owner

@Enet4 Enet4 left a comment


Thanks! Can you also look into the check failures?

Modify association_echo.rs to verify that the functions' results match expectations, and to make both peers send and receive a packet of the maximum size each side admits, with different sizes for the client and the server. Which side gets the larger maximum is chosen at random.
@pgimeno4d pgimeno4d force-pushed the add-max-remote-pdu-length branch from 47d27f8 to 70099e0 on October 2, 2025 18:32
@pgimeno4d
Contributor Author

The previous force-push was premature; the latest one is good. Sorry about that.

Here's the situation. The new test in this PR adds a transmission and a reception on each end with a PDU of exactly the maximum length that each side accepts, and checks that it gets through. Then each side sends a PDU one byte over the maximum, and the test checks that it fails as expected. The maximum PDU sizes for the client and the server are different, which is how #686 was found.

I've rebased it on master. The server-side bug was fixed in 6b4351f, but the client-side bug remains in master, which is why CI fails: if you look at https://github.com/Enet4/dicom-rs/actions/runs/18202186950/job/51823609412?pr=685#step:4:634 you'll see the reason for the failure, which is ... PduTooLarge { pdu_length: 7884, max_pdu_length: 5678..., because the client is checking against the server's maximum PDU length instead of its own.

I've tried to rebase it on #689, but that made a mess of the commits. Since it applies cleanly on top of #689, it's best if #689 is merged first; then I can rebase again if necessary to verify that CI passes.

There's one more thing. Currently, which side gets the larger maximum PDU size is chosen at random in the test; since the server-side bug is already fixed, CI now fails only by chance. I'd like to remove that random choice and always test both cases.

@pgimeno4d
Contributor Author

I'm not sure why it's failing now. It appears not to be compiling the test in async mode, or something like that.

Owner

@Enet4 Enet4 left a comment


The inline comment fixes the test compilation error.

As for the other failing tests, I did the quick experiment of splitting the two scenarios (max_is_server/max_is_client) for each sync/async variant, and only the cases where max_is_client=true failed.

---- scu_scp_association_test_async_max_is_client stdout ----

thread 'scu_scp_association_test_async_max_is_client' panicked at ul\tests\association_echo.rs:308:43:
can't receive response (async): ReceivePdu { source: PduTooLarge { pdu_length: 7884, max_pdu_length: 5678, backtrace: Backtrace [...] } }
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    scu_scp_association_test_async_max_is_client
    scu_scp_association_test_max_is_client

@pgimeno4d
Contributor Author

pgimeno4d commented Oct 3, 2025

As for the other failing tests, I did the quick experiment of splitting the two scenarios (max_is_server/max_is_client) for each sync/async variant, and only the cases where max_is_client=true failed.

Yes, as predicted. It is testing something that is currently broken in master. Once #689, which fixes the problem, is merged and this one is rebased on top of it, it will pass. As I said, I could not rebase it on #689 because doing so made a mess of the commits.

Edit: It appears that #689 is already merged, so now it indeed succeeds.

Thanks to @Enet4 for the fix.

Co-authored-by: Eduardo Pinho <enet4mikeenet@gmail.com>
@pgimeno4d pgimeno4d force-pushed the add-max-remote-pdu-length branch from df50246 to 5f35349 on October 3, 2025 11:19
@pgimeno4d
Contributor Author

This is ready to be merged. I was lucky to make the last commit after #689 was merged, so CI already included it and the tests passed without needing a rebase.

Owner

@Enet4 Enet4 left a comment


Indeed, can confirm that the tests pass after the merge. Thank you for the continued support on this @pgimeno4d. 👍

@Enet4 Enet4 merged commit 54010af into Enet4:master Oct 3, 2025
5 checks passed
Development

Successfully merging this pull request may close these issues.

Make the remote PDU max length available to applications