diff --git a/docs/source/en/training/distributed_inference.md b/docs/source/en/training/distributed_inference.md
index cd642d6aca07..0e1eb7962bf7 100644
--- a/docs/source/en/training/distributed_inference.md
+++ b/docs/source/en/training/distributed_inference.md
@@ -177,7 +177,7 @@ transformer = FluxTransformer2DModel.from_pretrained(
 ```
 
 > [!TIP]
-> At any point, you can try `print(pipeline.hf_device_map)` to see how the various models are distributed across devices. This is useful for tracking the device placement of the models.
+> At any point, you can try `print(pipeline.hf_device_map)` to see how the various models are distributed across devices. This is useful for tracking the device placement of the models. You can also try `print(transformer.hf_device_map)` to see how the transformer model is sharded across devices.
 
 Add the transformer model to the pipeline for denoising, but set the other model-level components like the text encoders and VAE to `None` because you don't need them yet.
 
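
For reference, a minimal sketch of the workflow the tip and the following paragraph describe, assuming the `black-forest-labs/FLUX.1-dev` checkpoint and a two-GPU `max_memory` budget like the one used elsewhere in this guide:

```py
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Shard the transformer across the available GPUs (assumes two 16GB cards).
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    device_map="auto",
    max_memory={0: "16GB", 1: "16GB"},
    torch_dtype=torch.bfloat16,
)
# Inspect how the transformer's modules were placed across devices.
print(transformer.hf_device_map)

# Reuse the sharded transformer for denoising; the text encoders, tokenizers,
# and VAE are set to None because they aren't needed at this stage.
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    text_encoder=None,
    text_encoder_2=None,
    tokenizer=None,
    tokenizer_2=None,
    vae=None,
    torch_dtype=torch.bfloat16,
)
```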