
[hf_hub] Token login#3739

Merged
danielhanchen merged 12 commits into unslothai:nightly from Datta0:token_login on Dec 18, 2025

Conversation

@Datta0 (Collaborator) commented Dec 18, 2025

Use the token provided to the from_pretrained method to log in, so that private models can be accessed.

@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @Datta0, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request streamlines the process of authenticating with the Hugging Face Hub by introducing a dedicated utility function. This change centralizes the login logic, making it more efficient and reliable for from_pretrained methods to access private models, thereby improving the overall user experience when working with restricted resources.

Highlights

  • New Utility Function: Introduced a new hf_login utility function in unsloth/models/_utils.py to encapsulate the logic for logging into the Hugging Face Hub.
  • Centralized Hugging Face Login: Refactored the from_pretrained methods in unsloth/models/llama.py and unsloth/models/loader.py to utilize the new hf_login function, centralizing the authentication process for accessing private models.
  • Improved Private Model Access: The new hf_login function handles token retrieval and login attempts, providing a more robust and consistent way to access private models from Hugging Face.
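
A minimal sketch of the flow those highlights describe. This is not the actual unsloth implementation: the `login` and `get_token` parameters are injectable here purely so the control flow can be demonstrated without contacting the Hub; the real helper would call `huggingface_hub.login` and `huggingface_hub.get_token` directly.

```python
def hf_login(token=None, *, login=None, get_token=None):
    """Best-effort Hugging Face Hub login; returns the token used, or None."""
    # Default to the real huggingface_hub helpers when none are injected.
    if get_token is None:
        from huggingface_hub import get_token
    if login is None:
        from huggingface_hub import login
    if token is None:
        token = get_token()  # fall back to any cached credentials
    if token is None:
        return None  # nothing to log in with
    try:
        login(token=token)
    except Exception:
        pass  # best effort: a login hiccup should not break from_pretrained
    return token
```

Returning the token (rather than nothing) lets callers such as from_pretrained reuse whatever credential was actually resolved.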



@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a centralized hf_login utility function to handle Hugging Face Hub authentication, which is a good step towards reducing code duplication. The changes in unsloth/models/llama.py and unsloth/models/loader.py correctly adopt this new function.

My feedback focuses on improving the implementation of the new hf_login function for better robustness and type safety, and on removing some redundant code that was introduced when refactoring to use this new utility.


@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +2350 to +2354
```python
def hf_login(token: str):
    if token is None:
        from huggingface_hub import get_token

        token = get_token()
```


P1: Handle get_token import on older huggingface_hub versions

The new hf_login helper unconditionally imports get_token from the top-level huggingface_hub before entering the try/except. On older hub releases where get_token only lives under huggingface_hub.utils (which the surrounding code previously supported), this import raises ImportError, so calls like FastLlamaModel.from_pretrained(token=None) will now crash instead of falling back to the legacy import paths. Please guard the import or reuse the existing fallbacks to preserve compatibility with those versions.

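A guarded import along the lines the reviewer suggests might look like this. This is a sketch, not the PR's code, and it assumes the only two historical locations of get_token are the top-level package and huggingface_hub.utils; on a version predating both, it simply behaves as if no cached token exists.

```python
def get_cached_hub_token():
    """Return the cached Hugging Face token, tolerating old huggingface_hub layouts."""
    try:
        # Newer releases export get_token at the top level.
        from huggingface_hub import get_token
    except ImportError:
        try:
            # Older releases kept it under huggingface_hub.utils.
            from huggingface_hub.utils import get_token
        except ImportError:
            return None  # neither location exists: treat as "no cached token"
    return get_token()
```

Because ModuleNotFoundError subclasses ImportError, the same guards also cover the case where huggingface_hub is not installed at all.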


```python
    token = get_token()
    if token is None:
        return
```


Suggested change:

```diff
-return
+return None
```

Comment on lines 155 to 156
```python
    if token is None:
        token = get_token()
```


Move this to login, and return token


@danielhanchen left a comment


Great work

danielhanchen merged commit 5d0286d into unslothai:nightly on Dec 18, 2025
1 check passed
danielhanchen added a commit that referenced this pull request Dec 20, 2025
* Update _utils.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [FIX] [Transformers] VLM input embeds fix for gradients (#3715)

* Fix get_input_embeds call for VLMs

* patch input_require_grads instead

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* cleanup old patch

* cleanup old patch

* cleanup

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Apply suggestion from @danielhanchen

* use logger instead of prints

* Move unsloth present set

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* Update rope_embedding.py

* Fixes

* Update _utils.py

* Update import_fixes.py

* Update rl_replacements.py

* fix_openenv_no_vllm

* Fix

* Update __init__.py

* Update __init__.py

* Update __init__.py

* Update import_fixes.py

* Update import_fixes.py

* Update import_fixes.py

* logger

* Update __init__.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update __init__.py

* Update import_fixes.py

* Update __init__.py

* Update import_fixes.py

* Update import_fixes.py

* Update import_fixes.py

* Update import_fixes.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update import_fixes.py

* Update unsloth/import_fixes.py

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update save.py

* [fbgemm] Silence tma fbgemm (#3735)

* Silence fbgemm TMA print

Also safer .push_to_hub

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Update loader.py

* Update save.py

* Update save.py

* Update _utils.py

* Update _utils.py

* Diffusers warnings

* Update pyproject.toml

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* [hf_hub] Token login (#3739)

* login on token

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* cleanup old code

* safer imports

* cleanup

* Return token after login

* correct return types

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Apply suggestion from @danielhanchen

* add back imports

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* finish return token

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* Do not overwrite slots (#3752)

* Do not overwrite slots

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Datta Nimmaturi <venkatadattasainimmaturi@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Datta0 deleted the token_login branch on December 22, 2025
danielhanchen added a commit that referenced this pull request Dec 23, 2025

Labels: None yet

Projects: None yet

2 participants