Conversation

@ericspod
Member

Description

This is the document setting out the specification of the MONAI Archive format for portable self-descriptive models. This is currently very minimal compared to what has been prototyped already by @Nic-Ma and discussed in #3482 and elsewhere. This PR is for discussion of what the format specification should be, what should be added or changed, how this does or does not meet the needs of the other subprojects, how code will use and interact with archive models, and how code can be generated based off the metadata included with the models.

Status

Work in progress

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

@ericspod
Member Author

This metadata specification varies slightly from the definition in the schema given by @Nic-Ma here. The example JSON metadata file has "channel_def" values mapping channel indices to descriptions of what the channels mean, uses "is_patch_data" to state that the data is not patch-wise but the whole image, and treats a number of the provided tags as optional or user-defined.

@ericspod ericspod mentioned this pull request Feb 21, 2022
@Nic-Ma
Contributor

Nic-Ma commented Feb 22, 2022

The metadata change looks good to me.
Coincidentally, my next PR will be metadata verification and network input / output verification.

Thanks.

Contributor

@CPBridge CPBridge left a comment


Overall looks good but I have some minor suggestions


These files mostly are required to be present with the given names for the directory to define a valid MAR:

* **metadata.json**: netadata information in JSON format relating to the type of model, definition of input and output tensors, versions of the model and used software, and other information described below.
Contributor

Suggested change
* **metadata.json**: netadata information in JSON format relating to the type of model, definition of input and output tensors, versions of the model and used software, and other information described below.
* **metadata.json**: metadata information in JSON format relating to the type of model, definition of input and output tensors, versions of the model and used software, and other information described below.

* **type**: what sort of data the tensor represents: "image", "label", etc.
* **format**: what format of information is stored: "magnitude", "hounsfield", "kspace", "segmentation", "multiclass", etc.
* **num_channels**: number of channels the tensor has, assumed channel dimension first.
* **spatial_shape**: shape of the spatial dimensions of the form "[H]", "[H, W]", or "[H, W, D]"
Contributor

I am concerned that in some cases this will be too restrictive. I have a few models that do not have a fixed input shape, but may have restrictions on the input shape, e.g. [32n, 32n, 2n] for something that must be a multiple of 32 down the first two axes and a multiple of 2 down the last.

Member Author

@ericspod ericspod Feb 24, 2022

Yes, this was something we discussed earlier in one meeting or another, and I didn't include the details here. There's a range of specifications we could define, such as using "*" for any size in one dimension, or using "n" as a multiplier or as a power, so we'd probably need to specify the shape as a string. For example, permitting any number of channels (somehow) and a spatial shape whose dimensions are multiples of 2 would be "[*, 2n, 2n]". We would need a parser to recognize these formats and validate that they are correct, and then generate code that validates the network itself.

Contributor

How about allowing "any size"? That might be the case when dealing with digital pathology slides...

Contributor

Hi @GreySeaWolf , may I know what "any size" means here? Can we use "*" to represent it, as @ericspod described?

Thanks.

Contributor

Hi @ericspod ,

I have a question here: what's the expected data type of spatial_shape in the metadata? It seems you are using str in this example. We need to finalize the data type and then check it against the schema; otherwise, it's hard to check the network input / output shape with scripts.
What do you think?

Thanks.

Member Author

Shapes are going to have to be defined as lists of either strings, to accommodate "2n" or "*" as specifiers, or integers for exactly known sizes. For example, a 2D image that's 256x256 in the spatial dimensions with any number of channels would be ["*", 256, 256], but if the spatial dimensions had to be multiples n of a power p of 2 we would have ["*", "2**p*n", "2**p*n"]. The expression syntax we allow here may get a bit complicated and will require validation as a separate step.
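To make the validation step concrete, here is a hypothetical sketch (not part of the specification) of a matcher for the simpler specifiers: the function name `matches_spec` and the "&lt;k&gt;n" multiple-of syntax are assumptions, and the richer "2**p*n" power syntax is deliberately not handled.

```python
import re

def matches_spec(shape, spec):
    """Check a concrete tensor shape against a shape specifier list.

    Each entry of `spec` is an int (exact size), "*" (any size), or
    "<k>n" (any positive multiple of k, e.g. "2n").
    """
    if len(shape) != len(spec):
        return False
    for dim, entry in zip(shape, spec):
        if isinstance(entry, int):
            if dim != entry:
                return False
        elif entry == "*":
            continue  # any size accepted in this dimension
        else:
            m = re.fullmatch(r"(\d+)n", entry)
            if m is None:
                raise ValueError(f"unrecognized specifier: {entry!r}")
            if dim == 0 or dim % int(m.group(1)) != 0:
                return False
    return True

print(matches_spec((3, 64, 64), ["*", "2n", "2n"]))    # True
print(matches_spec((3, 64, 63), ["*", "2n", "2n"]))    # False
print(matches_spec((1, 256, 256), ["*", 256, 256]))    # True
```

A real parser would also need the power syntax and channel constraints, but mixing ints and strings in the list as shown keeps exact sizes machine-checkable without string parsing.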

Tensor format specifiers are used to define input and output tensors and their meanings, and must be a dictionary containing at least these keys:

* **type**: what sort of data the tensor represents: "image", "label", etc.
* **format**: what format of information is stored: "magnitude", "hounsfield", "kspace", "segmentation", "multiclass", etc.
Contributor

What is multiclass? I'm guessing this is an array of integers where each integer represents a class index. In that case, I would personally refer to this as a "label map".

Member Author

The idea with that was to allow multiple categories for each voxel: multi-labeling assigns a label to each voxel as a single number, whereas multi-class requires keeping N channels for N classes to handle voxels that would be referenced in multiple channels.
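To make the distinction concrete, here is a small illustration in plain Python (hypothetical, not MONAI code) contrasting a single-channel label map with a per-class channel encoding that permits overlap:

```python
# A label map stores one class index per pixel, so classes cannot
# overlap; a per-class channel stack stores one binary mask per class,
# so a pixel may carry several classes at once.

def to_channels(label_map, num_classes):
    """Expand a single-channel label map into num_classes binary masks."""
    height, width = len(label_map), len(label_map[0])
    return [
        [[1 if label_map[y][x] == c else 0 for x in range(width)]
         for y in range(height)]
        for c in range(num_classes)
    ]

label_map = [
    [0, 1],
    [2, 1],
]
channels = to_channels(label_map, 3)
print(channels[1])  # mask for class 1: [[0, 1], [0, 1]]

# Overlap is only expressible in the channel encoding: mark pixel (0, 1)
# as belonging to class 2 as well as class 1.
channels[2][0][1] = 1
```

The "num_channels" and "format" fields together would tell a consumer which of these two encodings a tensor uses.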

* **monai_version**: version of MONAI the MAR was generated on, later versions expected to work.
* **pytorch_version**: version of Pytorch the MAR was generated on, later versions expected to work.
* **numpy_version**: version of Numpy the MAR was generated on, later versions expected to work.
* **optional_packages_version**: dictionary relating optional package names to their versions, these packages are not needed but are recommended to be isntalled with this stated minimum version.
Contributor

Suggested change
* **optional_packages_version**: dictionary relating optional package names to their versions, these packages are not needed but are recommended to be isntalled with this stated minimum version.
* **optional_packages_version**: dictionary relating optional package names to their versions, these packages are not needed but are recommended to be installed with this stated minimum version.

Contributor

Also what about required dependencies (apart from torch, monai, numpy)?

Member Author

Those would go into optional_packages_version, so maybe call this other_package_versions instead.

Contributor

Hi @ericspod , I would suggest optional_packages_version or optional_dependencies_version, to be consistent with our installation guide doc:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies

Thanks.

@Nic-Ma
Copy link
Contributor

Nic-Ma commented Feb 28, 2022

Hi @ericspod @wyli ,

I found another problem from NVIDIA Clara users: actually, the developer of an MMAR may not be very clear about which optional packages are used in the MMAR, especially when using the MONAI docker image.
I think we may need to provide some utility API to easily get all the optional packages used in the related MONAI python code of an MMAR.
What do you think?

Thanks in advance.

@vikashg

vikashg commented Feb 28, 2022

@Nic-Ma and @ericspod. If we are thinking about backwards compatibility with CLARA and tensorrtserver (especially TensorRTServer), we need to include the names of the input and output nodes of the network. The names are important when specifying the config.pbtxt file used in TensorRTServer. Please find below an example config.pbtxt:

```
name: "segmentation_mri_brain_tumors_br16_t1c2tc_v1"
platform: "tensorflow_graphdef"
max_batch_size: 32
input [
  {
    name: "NV_MODEL_INPUT"
    data_type: TYPE_FP32
    dims: [ 1, 224, 224, 128]
  }
]
output [
  {
    name: "NV_MODEL_OUTPUT"
    data_type: TYPE_FP32
    dims: [ 2, 224, 224, 128]
  }
]
instance_group [
  {
    kind: KIND_GPU
    count: 2
  }
]
```

Please let me know if you have questions.

@Nic-Ma
Contributor

Nic-Ma commented Feb 28, 2022

Hi @vikashg ,

Thanks for your sharing.
I think we don't need to consider backwards compatibility with CLARA and tensorrtserver for now.
Maybe we can raise this discussion for future versions.
@ericspod Please feel free to correct me if I misunderstood anything.

Thanks.

@Nic-Ma
Contributor

Nic-Ma commented Feb 28, 2022

Hi @ericspod ,

I developed the full schema for the metadata example in this PR, and put the schema on our repo:
https://github.com/Project-MONAI/MONAI-extra-test-data/releases/tag/0.8.1
PR #3865 shows the draft API and usage.

Thanks.

@vikashg

vikashg commented Feb 28, 2022

@Nic-Ma I am not aware of anything that has changed in TRT server, but they do need the node names. Yes, let's discuss it in the meeting. I think the node names should be included so we can also deploy the models.


A MAR package is defined primarily as a directory with a set of specifically named subdirectories containing the model and metadata files. The root directory should be named for the model, given as "ModelName", and should contain the following structure:
::
ModelName
@MMelQin MMelQin Mar 4, 2022

Should we call out the version of the model? I have seen use cases requiring multiple versions of the same model, and Triton inference supports multiple versions of the same named model. The version info is embedded in the metadata, so it is accessible; I guess the consumer will just need to parse the metadata.

Contributor

Hi @MMelQin ,

I think we already included the version and changelog of the model package:
https://github.com/Project-MONAI/MONAI/pull/3834/files#diff-139c21b7be482dc057c2b0b9d131f86500621c01b7132cb15060c54b2fb22775R84
@ericspod Maybe you can update the description to make it clearer?

Thanks in advance.

@Nic-Ma
Copy link
Contributor

Nic-Ma commented Mar 7, 2022

> Hi @ericspod @wyli ,
>
> I found another problem from NVIDIA Clara users: actually, the developer of an MMAR may not be very clear about which optional packages are used in the MMAR, especially when using the MONAI docker image. I think we may need to provide some utility API to easily get all the optional packages used in the related MONAI python code of an MMAR. What do you think?
>
> Thanks in advance.

Hi @ericspod @wyli ,

What do you guys think about this problem?

Thanks in advance.

@ericspod
Member Author

ericspod commented Mar 9, 2022

> Hi @ericspod @wyli ,
>
> I found another problem from NVIDIA Clara users: actually, the developer of an MMAR may not be very clear about which optional packages are used in the MMAR, especially when using the MONAI docker image. I think we may need to provide some utility API to easily get all the optional packages used in the related MONAI python code of an MMAR. What do you think?
>
> Thanks in advance.

Something like pipreqs, but with the project or network being bundled treated as the project.

@ericspod
Member Author

ericspod commented Mar 9, 2022

There's also pipdeptree, which will show a user what packages are installed as a tree.

Contributor

@wyli wyli left a comment

Thanks, looks good to me as a starting point for the MONAI bundle concepts.

@ericspod ericspod enabled auto-merge (squash) March 11, 2022 13:20
@wyli
Contributor

wyli commented Mar 11, 2022

/build

@ericspod ericspod merged commit 0f43080 into Project-MONAI:dev Mar 11, 2022
"num_channels": 2,
"spatial_shape": [160, 160, 160],
"dtype": "float32",
"value_range": [0, 1],
Contributor

Hi @ericspod ,

Some internal researchers raised a question about this "outputs" section: is it exactly the output of the network, or the output after postprocessing? If it is just the output of the network, the value range is not [0, 1].

Thanks.

Member Author

Yes, that's true, this should be the output of the network itself. A network could have a sigmoid final layer and would output values in this range, but this network in particular does not. For the sake of example it's fine, but we should change it for correctness; the question, however, is to what. For a segmentation network it's not the values themselves that we care about, it's the index of the maximal value in the channel dimension that determines the class for a pixel. It would be more correct to somehow specify the output as being "channel-dimension probabilities" rather than specific values. For other networks, such as reconstructing autoencoders, we would specify the range as being "like" that of the input.
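As a hypothetical illustration of "channel-dimension probabilities" (plain Python, not MONAI code): raw per-channel network outputs need not lie in [0, 1], but a softmax over the channel dimension does, and the argmax gives the class index a segmentation actually cares about.

```python
import math

def softmax(logits):
    """Map raw per-channel logits to probabilities summing to 1."""
    shift = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - shift) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.3, -0.7, 0.9]           # one voxel, 3 classes, outside [0, 1]
probs = softmax(logits)              # each value in [0, 1], summing to 1
predicted_class = probs.index(max(probs))
print(predicted_class)  # 0
```

Under this framing, a metadata "value_range" of [0, 1] describes the post-softmax tensor, while the class label itself only emerges after the argmax step.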

Contributor

I see, so do you want any change in this metadata config?

Thanks.

Member Author

We should first discuss what this should be, and whether we need to go deeper into specifying things as I described; then we'll have an idea of what to put here. I'm afraid we're encountering more details that need to be addressed as we go.

@ericspod ericspod deleted the mar_spec branch January 25, 2023 12:41