Running a few torchbench benchmarks using the dynamo+`openxla` backend ends up in a runtime error:
```
Traceback (most recent call last):
  File "xla/benchmarks/experiment_runner.py", line 601, in <module>
    main()
  File "xla/benchmarks/experiment_runner.py", line 597, in main
    runner.run()
  File "xla/benchmarks/experiment_runner.py", line 65, in run
    self.run_single_experiment(experiment_config, model_config)
  File "xla/benchmarks/experiment_runner.py", line 161, in run_single_experiment
    run_metrics, output = self.timed_run(benchmark_experiment,
  File "xla/benchmarks/experiment_runner.py", line 328, in timed_run
    output = loop()
  File "xla/benchmarks/experiment_runner.py", line 310, in loop
    output = benchmark_model.model_iter_fn(
  File "torch/_dynamo/eval_frame.py", line 489, in _fn
    return fn(*args, **kwargs)
  File "xla/benchmarks/benchmark_model.py", line 154, in eval
    pred = self.module(*inputs)
  File "torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/lib/python3.10/site-packages/doctr/models/detection/differentiable_binarization/pytorch.py", line 183, in forward
    def forward(
  File "torch/_dynamo/eval_frame.py", line 489, in _fn
    return fn(*args, **kwargs)
  File "torch/_dynamo/external_utils.py", line 17, in inner
    return fn(*args, **kwargs)
  File "torch/_functorch/aot_autograd.py", line 4939, in forward
    return compiled_fn(full_args)
  File "torch/_functorch/aot_autograd.py", line 1992, in g
    return f(*args)
  File "torch/_functorch/aot_autograd.py", line 3139, in runtime_wrapper
    all_outs = call_func_with_args(
  File "torch/_functorch/aot_autograd.py", line 2016, in call_func_with_args
    out = normalize_as_list(f(args))
  File "torch/_functorch/aot_autograd.py", line 2120, in rng_functionalization_wrapper
    return compiled_fw(args)
  File "torch/_functorch/aot_autograd.py", line 1992, in g
    return f(*args)
  File "torch/_dynamo/backends/torchxla.py", line 49, in fwd
    compiled_graph = bridge.extract_compiled_graph(model, args)
  File "xla/torch_xla/core/dynamo_bridge.py", line 566, in extract_compiled_graph
    extract_internal(fused_module), node.args, None)
  File "xla/torch_xla/core/dynamo_bridge.py", line 341, in extract_internal
    dumb_return_handler, xla_args_need_update) = extract_graph_helper(xla_model)
  File "xla/torch_xla/core/dynamo_bridge.py", line 288, in extract_graph_helper
    graph_hash = torch_xla._XLAC._get_graph_hash(args_and_out)
RuntimeError: torch_xla/csrc/aten_xla_bridge.cpp:95 : Check failed: xtensor
```
To Reproduce

```
python xla/benchmarks/experiment_runner.py --suite-name torchbench --dynamo openxla --xla PJRT --accelerator cuda --test <test> --no-resume -k <model>
```
Can't reproduce this issue anymore. It was probably resolved when other dynamo-related errors were fixed.