
BUG: AttributeError: module 'torch.library' has no attribute 'custom_op' #222

Open
mruhlmannGit opened this issue Sep 20, 2024 · 0 comments
Labels
bug Something isn't working

Comments


Python -VV

Python -vv

Pip Freeze

Pip freeze

Reproduction Steps

docker build -t llm-mistral7b .

using the Dockerfile provided in this repository.

Expected Behavior

In the Dockerfile, the PyTorch version is pinned to 2.1.1, but torch.library.custom_op requires PyTorch >= 2.4.0. The Dockerfile appears to be outdated.
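To make the version requirement concrete, here is a small illustrative helper (not part of the original report; the function name and parsing are assumptions for the demo) that checks whether a given PyTorch version string is new enough to provide torch.library.custom_op:

```python
def supports_custom_op(torch_version: str) -> bool:
    """Return True if the given PyTorch version string is >= 2.4.0,
    the first release that ships torch.library.custom_op."""
    # Drop any local-build suffix (e.g. "2.1.1+cu121"), then compare (major, minor).
    major, minor, *_ = (int(part) for part in torch_version.split("+")[0].split("."))
    return (major, minor) >= (2, 4)

print(supports_custom_op("2.1.1"))  # version in the Dockerfile -> False
print(supports_custom_op("2.4.0"))  # minimum required version -> True
```

So any image built from the current Dockerfile fails at import time, before vLLM can run at all.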

Additional Context

docker run -it llm-mistral7b /bin/bash
The HF_TOKEN environment variable is not set or empty, not logging to Hugging Face.
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/usr/lib/python3.10/runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "/usr/local/lib/python3.10/dist-packages/vllm/__init__.py", line 3, in <module>
    from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/arg_utils.py", line 11, in <module>
    from vllm.config import (CacheConfig, ConfigFormat, DecodingConfig,
  File "/usr/local/lib/python3.10/dist-packages/vllm/config.py", line 12, in <module>
    from vllm.model_executor.layers.quantization import QUANTIZATION_METHODS
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/__init__.py", line 1, in <module>
    from vllm.model_executor.parameter import (BasevLLMParameter,
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/parameter.py", line 7, in <module>
    from vllm.distributed import get_tensor_model_parallel_rank
  File "/usr/local/lib/python3.10/dist-packages/vllm/distributed/__init__.py", line 1, in <module>
    from .communication_op import *
  File "/usr/local/lib/python3.10/dist-packages/vllm/distributed/communication_op.py", line 6, in <module>
    from .parallel_state import get_tp_group
  File "/usr/local/lib/python3.10/dist-packages/vllm/distributed/parallel_state.py", line 98, in <module>
    @torch.library.custom_op("vllm::inplace_all_reduce", mutates_args=["tensor"])
AttributeError: module 'torch.library' has no attribute 'custom_op'
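Note that the failing line is a module-level decorator, so the error fires at import time on any PyTorch build older than 2.4. A hedged sketch of a defensive pattern (a hypothetical illustration, not vLLM's actual code; the stand-in modules below simulate torch.library so the snippet runs without torch installed) that turns the bare AttributeError into an actionable message:

```python
import types

# Stand-ins simulating torch.library on an old vs. new PyTorch build
# (assumption for the demo; real code would pass the actual torch.library).
old_library = types.SimpleNamespace()                                   # no custom_op
new_library = types.SimpleNamespace(custom_op=lambda *a, **k: (lambda f: f))

def require_custom_op(library, torch_version: str):
    """Return library.custom_op, or raise a clear ImportError when it is missing."""
    if not hasattr(library, "custom_op"):
        raise ImportError(
            f"torch.library.custom_op requires PyTorch >= 2.4.0; found {torch_version}"
        )
    return library.custom_op

try:
    require_custom_op(old_library, "2.1.1")
except ImportError as exc:
    print(exc)  # actionable message instead of a bare AttributeError

decorator = require_custom_op(new_library, "2.4.0")  # succeeds on 2.4+
```

Either way, the simplest fix on the user's side is to bump the PyTorch version pinned in the Dockerfile.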

Suggested Solutions

No response

@mruhlmannGit mruhlmannGit added the bug Something isn't working label Sep 20, 2024