
Update fp8_meta amax when copying into Float8Tensor #567

Merged: 4 commits into NVIDIA:main on Dec 16, 2023

Conversation

timmoon10 (Collaborator)

While debugging a convergence issue with LLaMa SFT (NVIDIA/NeMo#7909), I've identified a subtle bug when a model with FP8 params loads a checkpoint. When the model is first initialized, it generates random weights and stores their amax value in fp8_meta. However, this amax becomes meaningless once we load the checkpoint and overwrite the weight values. Since the amax histories are used in the forward pass to update the scaling factors, the result is bogus scaling factors that clip many FP8 values to the maxval or underflow them to zero.
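To make the failure mode concrete, here is a minimal sketch in plain PyTorch (not the Transformer Engine API; the `FP8_E4M3_MAX` constant and the scale formula are simplified assumptions):

```python
import torch

FP8_E4M3_MAX = 448.0  # maxval representable in the e4m3 FP8 format

def fp8_scale(amax: torch.Tensor) -> torch.Tensor:
    # Choose the scale so that `amax` maps exactly to the FP8 maxval.
    return FP8_E4M3_MAX / amax

# amax recorded in fp8_meta at init time, from the random weights
init_weight = 0.02 * torch.randn(1024)
stale_amax = init_weight.abs().max()

# Checkpoint weights with a much wider dynamic range
ckpt_weight = torch.randn(1024)

# Casting the checkpoint weights with a scale derived from the stale
# amax saturates every value whose magnitude exceeds the old amax.
scale = fp8_scale(stale_amax)
frac_clipped = (ckpt_weight.abs() * scale > FP8_E4M3_MAX).float().mean()
print(f"fraction of checkpoint values clipped: {frac_clipped:.1%}")
```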

This PR changes the behavior of copying into a Float8Tensor: if the destination tensor has an fp8_meta, the copy also updates the latest entry of its amax history, similar to how the FP8 cast kernel updates it. This should fix the issue I'm seeing with LLaMa SFT, but it's not an entirely clean solution. In particular, it can't protect the user from abusing fp8_meta (e.g. loading a checkpoint in the middle of training will result in a bogus amax history, with or without Float8Tensor). It might also result in unexpected amax updates when multiple tensors share the same fp8_meta, e.g. copying into a tensor subview will affect the full tensor's amax history.
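As a rough illustration of the new copy behavior, here is a toy sketch (the class, field layout, and `index` bookkeeping are assumptions, not the real Float8Tensor internals; the real copy performs an FP8 cast rather than a plain copy):

```python
import torch

class ToyFloat8Tensor:
    """Toy stand-in for Float8Tensor; names and layout are assumptions."""

    def __init__(self, data: torch.Tensor, amax_history: torch.Tensor, index: int):
        self.data = data                  # stands in for the FP8 payload
        self.amax_history = amax_history  # shared [history_len, num_tensors] buffer
        self.index = index                # this tensor's column in the history

    def copy_(self, src: torch.Tensor) -> None:
        self.data.copy_(src)  # the real copy casts to FP8 here
        # Also refresh the most recent amax entry, so the next scale
        # update sees the new values' dynamic range rather than the
        # random init's.
        amax = src.abs().max().float()
        self.amax_history[0, self.index] = torch.maximum(
            self.amax_history[0, self.index], amax
        )

# Usage: loading checkpoint values now also records their amax.
hist = torch.zeros(16, 4)  # 16-step history for 4 tensors
w = ToyFloat8Tensor(torch.empty(1024), hist, index=0)
w.copy_(torch.randn(1024))  # hist[0, 0] now holds the checkpoint amax
```

Taking the running maximum rather than overwriting mirrors how amaxes accumulate within a step, though the exact update rule in the real kernel is an assumption here.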

timmoon10 added the "bug" (Something isn't working) label on Dec 14, 2023
timmoon10 (Collaborator, Author)

/te-ci pytorch

ptrendx (Member) commented Dec 14, 2023

This view issue is why views are not really supported for Float8Tensors.
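For example, reusing the ToyFloat8Tensor sketch above (how a real Float8Tensor view would share fp8_meta is an assumption):

```python
import torch

hist = torch.zeros(16, 4)
full = ToyFloat8Tensor(torch.empty(1024), hist, index=0)
# A view shares both the data buffer and the amax-history slot:
view = ToyFloat8Tensor(full.data[:16], hist, index=0)

view.copy_(100.0 * torch.randn(16))  # small slice, large outliers
# hist[0, 0] now reflects the 16-element slice, so the next scale
# update for the full 1024-element tensor is driven by the view.
```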

timmoon10 (Collaborator, Author)

/te-ci pytorch

timmoon10 (Collaborator, Author)

/te-ci pytorch

ptrendx merged commit 4a147e0 into NVIDIA:main on Dec 16, 2023
18 of 20 checks passed