
Conversation

@IvanKobzarev (Contributor) commented Dec 3, 2025

IvanKobzarev added a commit that referenced this pull request Dec 3, 2025
ghstack-source-id: ab5fdb2
Pull Request resolved: #2103
@meta-cla bot added the "CLA Signed" label Dec 3, 2025
IvanKobzarev added a commit that referenced this pull request Dec 3, 2025
ghstack-source-id: 02c9bdb
Pull Request resolved: #2103
@tianyu-l (Contributor) left a comment

what's the behavior before vs. after? I thought the code you are modifying already does bucketing.

@IvanKobzarev (Contributor, Author) replied:

> what's the behavior before vs. after? I thought the code you are modifying already does bucketing.

Before this change, bucketing was not enabled here: the collective bucketing config was not applied when schedule_overlap is called manually.

```diff
     gm: torch.fx.GraphModule, example_inputs: Any
 ) -> torch.fx.GraphModule:
-    schedule_overlap_bucketing(gm)
+    schedule_overlap_bucketing(gm, collective_bucketing=True)
```
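
For illustration, a minimal sketch of the surrounding custom pass after this change; the pass name, import path, and return convention here are assumptions, not the exact torchtitan code:

```python
# Hedged sketch, not the exact torchtitan implementation: the pass name and
# the import path of schedule_overlap_bucketing are assumptions.
from typing import Any

import torch
from torch._inductor.fx_passes.overlap_scheduling import schedule_overlap_bucketing

def autobucketing_reordering_pass(
    gm: torch.fx.GraphModule, example_inputs: Any
) -> torch.fx.GraphModule:
    # Before this PR this was schedule_overlap_bucketing(gm), which left
    # collective bucketing off whenever the pass was invoked manually.
    schedule_overlap_bucketing(gm, collective_bucketing=True)
    return gm
```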
Contributor:

collective_bucketing and insert_overlap_deps configs are turned on in this PR: #1965. Could you confirm which is the correct way to enable this pass?

Contributor:

And probably remove the unused configs

Reply:

Sorry, it's a bit confusing because we had some internal usage that didn't want the pass to depend on inductor configs. Today, those inductor configs are only used in the inductor post-grad application.

See: https://github.com/pytorch/pytorch/blob/a36e1d39ebbf60976fec5a0d8a96763e6adfbea3/torch/_inductor/fx_passes/post_grad.py#L292-L316

Potentially we can have a schedule_overlap_bucketing and a schedule_overlap_bucketing_from_configs, where the latter reads in the inductor configs. I'm not sure, open to ideas here.
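
For illustration, a rough sketch of what that split could look like; the config namespace, attribute names, and the insert_overlap_deps kwarg below are assumptions (the dist_opts snippet later in this thread suggests something similar), not a confirmed inductor API:

```python
# Hedged sketch of the proposed split; the config namespace, attribute names,
# and the insert_overlap_deps kwarg are assumptions, not a confirmed API.
import torch
from torch._inductor import config as inductor_config
from torch._inductor.fx_passes.overlap_scheduling import schedule_overlap_bucketing

def schedule_overlap_bucketing_from_configs(gm: torch.fx.GraphModule) -> None:
    # Read the same knobs the inductor post-grad application uses, so manual
    # callers and the config-driven path cannot drift apart.
    dist_opts = inductor_config.aten_distributed_optimizations
    schedule_overlap_bucketing(
        gm,
        collective_bucketing=dist_opts.collective_bucketing,
        insert_overlap_deps=dist_opts.insert_overlap_deps,
    )
```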

@ruisizhang123 (Contributor) commented Dec 4, 2025

Oh I see, then probably we can use this PR's config to enable the aten-level aot_eager_autobucketing_reordering_pass, and the inductor config to enable the inductor post-grad passes in inductor_autobucketing_reordering_pass. 🤔

Contributor:

Sorry, didn't fully get it. Does it mean we can remove some code for the aot_eager / inductor option in this PR? Do we have to use multiple toggles for one thing? E.g. I see the following for aot_eager:

```python
dist_opts.collective_bucketing = True
```

But I didn't see any special inductor configs for bucketing.

Contributor:

Yes, I mean @IvanKobzarev needs to update the code such that dist_opts are only passed to the inductor scheduling pass entry before he merges the PR...

Contributor:

Could we add some comments on what each step is doing, for better readability?

Reply:

I will add a function in PyTorch that schedules this from inductor configs. I think that will be clearest.

Reply:

pytorch/pytorch#169693

We can now just call schedule_overlap_bucketing_from_inductor_configs and use the configs.
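
With that helper, the custom pass above presumably reduces to something like the following sketch; the import path is an assumption, while the helper name comes from the linked PR:

```python
# Hedged usage sketch after pytorch/pytorch#169693; the import path is an
# assumption, the helper name comes from the discussion above.
from typing import Any

import torch
from torch._inductor.fx_passes.overlap_scheduling import (
    schedule_overlap_bucketing_from_inductor_configs,
)

def autobucketing_reordering_pass(
    gm: torch.fx.GraphModule, example_inputs: Any
) -> torch.fx.GraphModule:
    # Bucketing/overlap knobs (collective_bucketing, insert_overlap_deps, ...)
    # are now read from torch._inductor.config inside the helper.
    schedule_overlap_bucketing_from_inductor_configs(gm)
    return gm
```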
