From 7b8ef40ecc918e64a0b718cd105a97dcb38646e0 Mon Sep 17 00:00:00 2001
From: Sayak Paul
Date: Tue, 8 Oct 2024 08:03:51 +0530
Subject: [PATCH] Update distributed_inference.md to include `transformer.device_map` (#9553)

* Update distributed_inference.md to include `transformer.device_map`

* Update docs/source/en/training/distributed_inference.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---
 docs/source/en/training/distributed_inference.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/training/distributed_inference.md b/docs/source/en/training/distributed_inference.md
index cd642d6aca074..0e1eb7962bf70 100644
--- a/docs/source/en/training/distributed_inference.md
+++ b/docs/source/en/training/distributed_inference.md
@@ -177,7 +177,7 @@ transformer = FluxTransformer2DModel.from_pretrained(
 ```

 > [!TIP]
-> At any point, you can try `print(pipeline.hf_device_map)` to see how the various models are distributed across devices. This is useful for tracking the device placement of the models.
+> At any point, you can try `print(pipeline.hf_device_map)` to see how the various models are distributed across devices. This is useful for tracking the device placement of the models. You can also try `print(transformer.hf_device_map)` to see how the transformer model is sharded across devices.

 Add the transformer model to the pipeline for denoising, but set the other model-level components like the text encoders and VAE to `None` because you don't need them yet.
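For context, a minimal sketch of what the updated tip refers to: loading the Flux transformer with `device_map="auto"` (as the surrounding docs section does, per the hunk header above) and then printing its `hf_device_map`. The checkpoint name and the exact `from_pretrained` arguments are assumptions taken from that docs page, not part of this patch; treat this as an illustration rather than the verbatim docs snippet.

```python
# A minimal sketch based on the docs section this patch edits.
# Assumes a multi-GPU machine and access to the
# "black-forest-labs/FLUX.1-dev" checkpoint (an assumption from the docs page).
import torch
from diffusers import FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    device_map="auto",  # let accelerate place/shard the model across available devices
    torch_dtype=torch.bfloat16,
)

# The tip added by this patch: inspect how the transformer is sharded.
# Prints a dict mapping module names to devices, e.g. {"": 0} when the
# whole model fits on one GPU, or per-module entries when it is split.
print(transformer.hf_device_map)
```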