Avoid overwrite existing local implementation when loading remote custom model#38474

Merged
Isotr0py merged 2 commits into huggingface:main from Isotr0py:fix-auto-register
Jun 5, 2025
Conversation

@Isotr0py
Contributor

@Isotr0py Isotr0py commented May 29, 2025

What does this PR do?

Fixes vllm-project/vllm#18720 (comment)

  • After loading Alibaba-NLP/gte-Qwen2-1.5B-instruct with trust_remote_code=True, the local Qwen2 implementation in HF is overwritten by that model's custom implementation. Initializing any original Qwen2 model afterwards then uses the custom module, which causes unexpected results even when trust_remote_code=False is set.
  • This PR adds protection to avoid overwriting any existing local model implementation in Transformers.
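The protection can be sketched with a toy registry (function and variable names here are illustrative, not the actual Transformers internals): register the remote class only when no library implementation already exists for that model type.

```python
# Hypothetical sketch of the guard this PR adds: skip registration when a
# library implementation already exists for the same model type, so later
# loads without trust_remote_code still resolve to the library class.
def register_remote_class(model_type, remote_cls, mapping):
    if model_type in mapping:
        # A local Transformers implementation exists; do not overwrite it.
        # The remote class is still used for the current load.
        return remote_cls
    mapping[model_type] = remote_cls
    return remote_cls

# Usage sketch:
library_mapping = {"qwen2": "Qwen2Model (library)"}
result = register_remote_class("qwen2", "Qwen2Model (remote)", library_mapping)
```

After the call, `library_mapping["qwen2"]` still points at the library implementation, while the remote class is returned for the load that requested it.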

Reproduction code:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qwen/Qwen2.5-0.5B-Instruct", trust_remote_code=False, device="cuda")
print(model[0].auto_model.__class__)

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=True)
print(model[0].auto_model.__class__)

model = SentenceTransformer("Qwen/Qwen2.5-0.5B-Instruct", trust_remote_code=False, device="cuda")
print(model[0].auto_model.__class__)

Output without this PR

<class 'transformers.models.qwen2.modeling_qwen2.Qwen2Model'>
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 23.80it/s]
<class 'transformers_modules.Alibaba-NLP.gte-Qwen2-1.5B-instruct.a9af15a6372d7d6b25e9fb07c2ccb9e1fe645644.modeling_qwen.Qwen2Model'>
<class 'transformers_modules.Alibaba-NLP.gte-Qwen2-1.5B-instruct.a9af15a6372d7d6b25e9fb07c2ccb9e1fe645644.modeling_qwen.Qwen2Model'>

Output with this PR

<class 'transformers.models.qwen2.modeling_qwen2.Qwen2Model'>
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 23.64it/s]
<class 'transformers_modules.Alibaba-NLP.gte-Qwen2-1.5B-instruct.a9af15a6372d7d6b25e9fb07c2ccb9e1fe645644.modeling_qwen.Qwen2Model'>
<class 'transformers.models.qwen2.modeling_qwen2.Qwen2Model'>

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@Isotr0py Isotr0py requested review from ArthurZucker and Rocketknight1 and removed request for Rocketknight1 May 29, 2025 16:03
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@Rocketknight1
Member

Hi @Isotr0py, it's a little weird but this is expected behaviour! When the user sets trust_remote_code=True, this actually favours remote code even when library code exists. If we didn't do this, then it would be impossible to load remote code when there was a config name match to a library model.

The solution is for users to remove trust_remote_code=True when it's no longer needed, although maybe this is a sign that we should add an if_needed value or something that loads the code without a warning if required, but prefers library code.
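The suggested if_needed resolution order could look like this (purely illustrative; this is not an existing Transformers option, and all names are hypothetical): prefer library code, and fall back to remote code only when no library class matches.

```python
# Illustrative resolution order for a hypothetical
# trust_remote_code="if_needed" mode.
def resolve_class(model_type, library_mapping, remote_cls, trust_remote_code):
    if trust_remote_code == "if_needed":
        # Prefer the library class; use remote code only as a fallback.
        return library_mapping.get(model_type, remote_cls)
    if trust_remote_code:
        # Current behaviour: trust_remote_code=True favours remote code.
        return remote_cls
    return library_mapping[model_type]

mapping = {"qwen2": "library.Qwen2Model"}
known = resolve_class("qwen2", mapping, "remote.Qwen2Model", "if_needed")
unknown = resolve_class("new_arch", mapping, "remote.NewModel", "if_needed")
```

Here `known` resolves to the library class and `unknown` falls back to the remote class, which is the "prefers library code" behaviour described above.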

@Isotr0py
Contributor Author

Isotr0py commented May 30, 2025

The solution is for users to remove trust_remote_code=True when it's no longer needed

In fact, we found this issue during an investigation: we tried removing trust_remote_code=True and found it had no effect, because loading the remote model with the same config had already overwritten the library model registration.

A more comprehensive reproduction using pure Transformers:

from transformers import AutoModel

# initialize a library model
model = AutoModel.from_pretrained('Qwen/Qwen2-0.5B-Instruct')
print(model.__class__)
# <class 'transformers.models.qwen2.modeling_qwen2.Qwen2Model'>

# initialize a remote model with `trust_remote_code=True`
model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen2-1.5B-instruct', trust_remote_code=True)
print(model.__class__)
# <class 'transformers_modules.Alibaba-NLP.gte-Qwen2-1.5B-instruct.a9af15a6372d7d6b25e9fb07c2ccb9e1fe645644.modeling_qwen.Qwen2Model'>

# re-initialize a library model without `trust_remote_code=True`
model = AutoModel.from_pretrained('Qwen/Qwen2-0.5B-Instruct')
print(model.__class__)
# <class 'transformers_modules.Alibaba-NLP.gte-Qwen2-1.5B-instruct.a9af15a6372d7d6b25e9fb07c2ccb9e1fe645644.modeling_qwen.Qwen2Model'>

The library Qwen2 model can no longer be loaded properly (even without trust_remote_code=True) after loading a remote Qwen2 model that shares its config with the library model, because its registration has been overwritten by the remote one.
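The failure mode can be illustrated with a toy registry, independent of Transformers (all names here are illustrative): a shared config-to-class mapping where remote registration blindly overwrites the library entry poisons every subsequent load.

```python
# Toy model of the bug: remote registration overwrites the library entry
# in a shared registry, so later loads without trust_remote_code
# silently receive the remote class.
registry = {"qwen2": "library.Qwen2Model"}

def load(model_type, remote_cls=None):
    if remote_cls is not None:
        registry[model_type] = remote_cls  # the problematic overwrite
        return remote_cls
    return registry[model_type]

first = load("qwen2")                                   # library class
remote = load("qwen2", remote_cls="remote.Qwen2Model")  # remote overwrites
second = load("qwen2")                                  # now remote, not library
```

After the overwrite, `second` is the remote class even though the caller never opted in to remote code, which is exactly the behaviour shown in the AutoModel reproduction above.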

@Isotr0py
Contributor Author

favours remote code even when library code exists. If we didn't do this, then it would be impossible to load remote code when there was a config name match to a library model.

<class 'transformers.models.qwen2.modeling_qwen2.Qwen2Model'>
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 23.64it/s]
<class 'transformers_modules.Alibaba-NLP.gte-Qwen2-1.5B-instruct.a9af15a6372d7d6b25e9fb07c2ccb9e1fe645644.modeling_qwen.Qwen2Model'>
<class 'transformers.models.qwen2.modeling_qwen2.Qwen2Model'>

BTW, given the reproduction outputs with this PR, remote code sharing a config name with a library model can still be loaded even with this PR applied, though that is quite weird to me too...

Collaborator

@ArthurZucker ArthurZucker left a comment


cc @Rocketknight1 It makes sense to try and protect, no? Or to prevent someone from updating a model on the Hub in a way that would overwrite our code. Either way, the use case makes complete sense!

@Rocketknight1
Member

Yes, this is a good point, and I misunderstood part of the issue. You're absolutely correct that we should avoid overwriting the local implementation when loading a remote code model with the same class. I'll review this again and see if we can achieve that without breaking other parts of remote code loading.

Member

@Rocketknight1 Rocketknight1 left a comment


Reviewed again and this change looks good, now that I understand what it's for! I made one suggestion to update the comment to be very clear about what that block is for.

@Isotr0py Isotr0py enabled auto-merge (squash) June 4, 2025 15:16
Isotr0py added 2 commits June 5, 2025 13:40
…ote model

Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
@Isotr0py Isotr0py merged commit 0f83352 into huggingface:main Jun 5, 2025
20 checks passed
@Isotr0py Isotr0py deleted the fix-auto-register branch June 5, 2025 13:50
bvantuan pushed a commit to bvantuan/transformers that referenced this pull request Jun 12, 2025
…tom model (huggingface#38474)

* avoid overwrite existing local implementation when loading custom remote model

Signed-off-by: Isotr0py <2037008807@qq.com>

* update comments

Signed-off-by: Isotr0py <2037008807@qq.com>

---------

Signed-off-by: Isotr0py <2037008807@qq.com>