torchtitan/experiments/simple_fsdp/backend.py (8 changes: 6 additions & 2 deletions)
@@ -52,7 +52,7 @@ def get_compile_backend_with_passes(
 def aot_eager_autobucketing_reordering_pass(
     gm: torch.fx.GraphModule, example_inputs: Any
 ) -> torch.fx.GraphModule:
-    schedule_overlap_bucketing(gm)
+    schedule_overlap_bucketing(gm, collective_bucketing=True)
Contributor:
collective_bucketing and insert_overlap_deps configs are turned on in this PR: #1965. Could you confirm which is the correct way to enable this pass?

Contributor:
And probably remove the unused configs

Sorry, it's a bit confusing because we had some internal usage that didn't want the pass to depend on inductor configs. Today, those inductor configs are only used in the inductor post-grad application.

See: https://github.com/pytorch/pytorch/blob/a36e1d39ebbf60976fec5a0d8a96763e6adfbea3/torch/_inductor/fx_passes/post_grad.py#L292-L316

Potentially we can have a schedule_overlap_bucketing and a schedule_overlap_bucketing_from_configs, where the latter reads in inductor configs. I'm not sure; open to ideas here.
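
As a rough illustration of that split, the config-reading variant could be a thin wrapper over the explicit-argument one. This is only a sketch of the idea discussed above: the wrapper name comes from this comment, and the import path and the inductor config group it reads are assumptions, not an existing PyTorch API.

```python
# Hypothetical sketch of the proposed split; names below are assumptions.
import torch
from torch._inductor.fx_passes.overlap_scheduling import (  # assumed import path
    schedule_overlap_bucketing,
)


def schedule_overlap_bucketing_from_configs(
    gm: torch.fx.GraphModule,
) -> torch.fx.GraphModule:
    # Read the knobs from inductor config here, so callers that must not
    # depend on inductor keep calling schedule_overlap_bucketing directly
    # with explicit keyword arguments instead.
    dist_opts = torch._inductor.config.aten_distributed_optimizations  # assumed config group
    return schedule_overlap_bucketing(
        gm,
        collective_bucketing=dist_opts.collective_bucketing,
        insert_overlap_deps=dist_opts.insert_overlap_deps,
    )
```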

Contributor (@ruisizhang123, Dec 4, 2025):
Oh I see, then probably we can use this PR's config to enable the aten-level aot_eager_autobucketing_reordering_pass, and the inductor config to enable the inductor post-grad passes in inductor_autobucketing_reordering_pass. 🤔

Contributor:
Sorry, didn't fully get it. Does it mean we can remove some code for the aot_eager / inductor option in this PR? Do we have to use multiple toggles for one thing? E.g., I see the following for aot_eager:

dist_opts.collective_bucketing = True

But I didn't see any special inductor configs for bucketing.

Contributor:
Yes, I mean @IvanKobzarev needs to update the code so that dist_opts are only passed to the inductor scheduling pass entry before he merges the PR.

Contributor:
Could we add some comments on what each step is doing, for better readability?

I will add a function in pytorch that schedules this from inductor configs. I think that will be clearest.

pytorch/pytorch#169693

We can now just call schedule_overlap_bucketing_from_inductor_configs and use the configs.
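
A minimal sketch of what the two passes in this file could then look like; the import path of schedule_overlap_bucketing_from_inductor_configs is an assumption based on the linked PR, and the exact signature may differ.

```python
# Sketch only: assumes schedule_overlap_bucketing_from_inductor_configs is
# importable from the same module as schedule_overlap_bucketing and reads the
# dist_opts flags (collective_bucketing, insert_overlap_deps) set in backend.py.
from typing import Any

import torch
from torch._inductor.fx_passes.overlap_scheduling import (  # assumed location
    schedule_overlap_bucketing_from_inductor_configs,
)


def aot_eager_autobucketing_reordering_pass(
    gm: torch.fx.GraphModule, example_inputs: Any
) -> torch.fx.GraphModule:
    # Flags now come from the inductor config group the backend already sets,
    # so nothing needs to be threaded through here.
    schedule_overlap_bucketing_from_inductor_configs(gm)
    gm.recompile()
    return gm


def inductor_autobucketing_reordering_pass(
    gm: torch.fx.Graph,
) -> torch.fx.GraphModule:
    return schedule_overlap_bucketing_from_inductor_configs(gm.owning_module)
```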

gm.recompile()
return gm

@@ -67,7 +67,11 @@ def aot_eager_autobucketing_reordering_pass(
 def inductor_autobucketing_reordering_pass(
     gm: torch.fx.Graph,
 ) -> torch.fx.GraphModule:
-    return schedule_overlap_bucketing(gm.owning_module)
+    return schedule_overlap_bucketing(
+        gm.owning_module,
+        collective_bucketing=True,
+        insert_overlap_deps=True,
+    )

dist_opts.insert_overlap_deps = True
torch._inductor.config.reorder_for_peak_memory = False