WolfPACS

Raison d'être

With the advent of powerful AI solutions in radiology, there is a growing need to split the workload across multiple workers. WolfPACS acts as a load balancer, sending DICOM series to the correct worker.

Mission statement

Enable a pool of heterogeneous workers (hardware, software) to serve multiple clients in a flexible way.

Status

WolfPACS is currently in the Alpha phase of development. Some critical bugs may still remain in the software.

WolfPACS needs more black-box testing. If you have a use case, please write to [email protected].

Next milestone

  • Support a secondary node that can take over from, or run in parallel with, the primary node.

Bird's-eye view

Imagine a Medical AI company called Stroke Insight. They have developed a cutting-edge algorithm that analyzes MRIs to detect strokes. To analyze the images, they need computers with a lot of GPU power. We call these machines workers.

These expensive computers are critical resources that need to serve many clients concurrently. Moreover, once in a while, Stroke Insight needs to take one or more workers offline in order to upgrade the software and/or the hardware. To avoid announcing a service window, they want to be able to move load between workers in a safe way.

For this flexibility, they use WolfPACS, which enables a pool of heterogeneous workers (hardware, software) to serve multiple clients in a flexible way.

Image flow (primary and derived images)

Two clients, a hospital in Stockholm and one in Berlin, are using Stroke Insight's software. Naturally, both hospitals have their own central PACS systems on premises, whereas Stroke Insight keeps its computers in a data center.

Incoming

flowchart LR

subgraph Stockholm
S_CLIENT[Radiologist] -->|Trigger| S_PACS[Local PACS]
end

S_PACS -->|Primary series| WP

subgraph Berlin
B_CLIENT[Radiologist] -->|Trigger| B_PACS[Local PACS]
end

B_PACS -->WP

subgraph WolfPACS
WP[Primary node]
SD[Secondary node]
end

subgraph Worker pool
WP-->|Series with the same StudyUID\nwill always end up on the same worker|WA[Worker A]
WB[Worker B]
end

subgraph Worker pool
WP-->WC[Worker C]
WD[Worker D]
end
  1. A Radiologist sends the primary series to Stroke Insight (which runs WolfPACS as a load balancer).
  2. WolfPACS receives the series and routes the images to an appropriate worker with the right software.

Returning

flowchart LR

subgraph Worker pool
WA[Worker A]
WB[Worker B]
end

subgraph Worker pool
WC[Worker C]
WD[Worker D]
end

WA-->|Derived series|WP
WC-->WP

subgraph WolfPACS
WP[Primary node]
SD[Secondary node]
end

WP-->|Route the derived series\nto the original sender|S_PACS
WP-->B_PACS

subgraph Berlin
B_PACS[Local PACS]-->B_CLIENT[Radiologist]
end

subgraph Stockholm
S_PACS[Local PACS]-->S_CLIENT[Radiologist]
end
  1. The worker sends the new derived series back to WolfPACS.
  2. Finally, WolfPACS sends the new series back to the correct destination.

Mental model for WolfPACS administration

Any router or load balancer has two sides: one facing the outside world, and one facing the inside world (the workers).

We expose port 11112 for outside clients of WolfPACS. Workers, on the inside, should contact WolfPACS on port 11113.

Therefore, if you deploy WolfPACS, you need to expose port 11112 to the outside world, while keeping port 11113 reachable only from inside the firewall (the trusted side).

flowchart LR

subgraph WolfPACS
WP[Primary node]
SD[Secondary node]
end

WP-.->WA

CA[Clients]-->|Port 11112|WolfPACS
WA[Workers]-->|Port 11113|WP
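
If you want to enforce this split at the firewall, a minimal sketch with iptables could look like the rules below. The internal subnet 10.0.0.0/8 is an assumption; adjust it to the network your workers actually live on.

# Accept DICOM associations from anywhere on the outside port
iptables -A INPUT -p tcp --dport 11112 -j ACCEPT
# Only accept traffic on the inside port from the trusted worker network
iptables -A INPUT -p tcp --dport 11113 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 11113 -j DROP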

Client vs Destination

A client is anyone with the correct Application Entity (AE) title. This acts as a shared secret / password.

A destination is a server that can receive DICOM data. WolfPACS needs a hostname, an IP address, and a called AE title.

So the client will send data to WolfPACS and the destination will receive data from WolfPACS.
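
For illustration, a client could push a study to WolfPACS with DCMTK's storescu. DCMTK is not required by WolfPACS; the AE titles and hostname below are placeholders you would replace with your own values.

storescu -aet STROKE_CLIENT -aec WOLFPACS wolfpacs.example.com 11112 study/*.dcm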

Quick Start

Start WolfPACS in the background.

docker run -d -p 11112:11112 -p 11113:11113 wolfpacs/wolfpacs

Debug WolfPACS instance

docker run -it -p 11112:11112 -p 11113:11113 wolfpacs/wolfpacs console
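
To verify that the instance accepts associations, you can send a DICOM echo (C-ECHO), for example with DCMTK's echoscu (any DICOM toolkit works; the called AE title is a placeholder):

echoscu -aec WOLFPACS localhost 11112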

Environment variables

It is possible to configure some parts of WolfPACS using environment variables.

Variable               Description                        Default
WOLFPACS_INSIDE_PORT   The port towards the workers       11113
WOLFPACS_OUTSIDE_PORT  The port facing the outside world  11112
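
For example, to start the Docker image with non-default ports (the port numbers below are arbitrary and only meant to show the mechanism):

docker run -d \
  -e WOLFPACS_OUTSIDE_PORT=10112 -p 10112:10112 \
  -e WOLFPACS_INSIDE_PORT=10113 -p 10113:10113 \
  wolfpacs/wolfpacs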

DICOM Conformance Statement

The following transfer syntaxes are supported:

Transfer Syntax            UID                  Supported
Implicit VR Little Endian  1.2.840.10008.1.2    Yes
Explicit VR Little Endian  1.2.840.10008.1.2.1  Yes
Explicit VR Big Endian     1.2.840.10008.1.2.2  Yes
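
If a client needs to stick to one of these transfer syntaxes, it can propose it explicitly during association negotiation. As a sketch with DCMTK's storescu (again an assumption, not a WolfPACS requirement):

storescu --propose-implicit -aec WOLFPACS wolfpacs.example.com 11112 image.dcm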

Test plan

A PACS is classified as a medical device and needs to be painstakingly tested.

We use four different kinds of tests in WolfPACS, and we aim to test the software thoroughly.

Test                    Target                 Method
Unit tests              One module             Erlang EUnit
Integration tests       Many modules           Erlang Common Test
Validation testing      User requirements      Python Robot Framework
Property-based testing  Hidden bugs / fuzzing  Erlang PropEr
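
As an Erlang project, the unit, integration and property-based suites can typically be run with rebar3; the exact targets are an assumption and may differ from the project's CI setup.

rebar3 eunit    # unit tests
rebar3 ct       # Common Test (integration) suites
rebar3 proper   # property-based tests (needs the rebar3_proper plugin)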