
Set the env variable USE_DEEPSPEED_HPU to "true" in deepspeed.py#166

Merged
regisss merged 1 commit into main from use_deepspeed_hpu
Feb 13, 2023
Conversation

@regisss
Collaborator

@regisss regisss commented Feb 13, 2023

What does this PR do?

The env variable USE_DEEPSPEED_HPU is set to "true" by Habana's DeepSpeed launcher, but this does not work for multi-node runs. This PR fixes that behaviour.
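A minimal sketch of the underlying issue (hypothetical code, not the actual optimum-habana implementation): setting an environment variable in the launcher process only affects workers spawned locally, so for multi-node runs it must also be exported explicitly to the remote hosts, for example by prefixing the remote command with the variable assignment.

```python
import os

# Setting the variable in the current process covers single-node workers,
# which inherit the launcher's environment.
os.environ["USE_DEEPSPEED_HPU"] = "true"

# For multi-node runs, remote processes do not inherit this environment,
# so the variable must be propagated explicitly, e.g. by building a
# prefix for the command executed on each remote host.
exported_env = {"USE_DEEPSPEED_HPU": os.environ["USE_DEEPSPEED_HPU"]}
launcher_prefix = " ".join(f"{k}={v}" for k, v in exported_env.items())
print(launcher_prefix)
```

The exact mechanism used by the PR (where in deepspeed.py the export happens) is not shown in this conversation; the snippet only illustrates why a local `os.environ` assignment is insufficient across nodes.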

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

@HuggingFaceDocBuilderDev

HuggingFaceDocBuilderDev commented Feb 13, 2023

The documentation is not available anymore as the PR was closed or merged.

@regisss regisss merged commit c808953 into main Feb 13, 2023
@regisss regisss deleted the use_deepspeed_hpu branch February 13, 2023 13:30
mounikamandava pushed a commit to emascarenhas/optimum-habana that referenced this pull request Aug 2, 2024
* Supporting llama int4 quantization using AutoGPTQ

* cleanups in int4

* Blocking running hqt with int4

* Rename int4 param to gptq

* Added call to preprocessing in gptq

* Added call to preprocessing in gptq fix

* Added call to preprocessing in gptq fix2

* Removed call to preprocessing (found a better solution on AutoGPTQ)

* Fixed deprecated message for exllama
mounikamandava added a commit to emascarenhas/optimum-habana that referenced this pull request Aug 2, 2024
hsubramony added a commit that referenced this pull request Oct 1, 2024