10 changes: 4 additions & 6 deletions docs/Coding_Conventions_and_Standards.md
@@ -188,35 +188,33 @@ new adapter following examples in https://github.com/justinchuby/lintrunner-adap

## Python Code Style

Follow the [Black formatter](https://black.readthedocs.io)'s coding style when possible. A maximum line length of 120 characters is allowed for consistency with the C++ code.
Follow the [Ruff formatter](https://docs.astral.sh/ruff/formatter/)'s coding style when possible. A maximum line length of 120 characters is allowed for consistency with the C++ code.

Please adhere to the [PEP8 Style Guide](https://www.python.org/dev/peps/pep-0008/). We use [Google's python style guide](https://google.github.io/styleguide/pyguide.html) as the style guide which is an extension to PEP8.

Use `pyright`, which is provided as a component of the `pylance` extension in VS Code for static type checking.

Auto-formatting is done with `black` and `isort`. The tools are configured in `pyproject.toml`. From the root of the repository, you can run
Auto-formatting and linting are done with [Ruff](https://docs.astral.sh/ruff/), which handles both code formatting and import sorting. The tools are configured in `pyproject.toml` and `.lintrunner.toml`. From the root of the repository, you can run

```sh
lintrunner f --all-files
```

to format Python files.

Use `pydocstyle` to lint documentation styles. `pydocstyle` is enabled in VS Code.

## IDEs

### VS Code

VS Code is automatically configured with workspace configurations.

For Python development is VS Code, read
For Python development in VS Code, read
[this tutorial](https://code.visualstudio.com/docs/python/python-tutorial) for
more information.

### PyCharm

Follow [black's documentation](https://black.readthedocs.io/en/stable/integrations/editors.html#pycharm-intellij-idea) to set up the black formatter for PyCharm.
Follow [Ruff's documentation](https://docs.astral.sh/ruff/editors/setup/#pycharm) to set up the Ruff formatter for PyCharm.

## Testing

1 change: 0 additions & 1 deletion include/onnxruntime/core/graph/constants.h
@@ -46,7 +46,6 @@ constexpr const char* kAclExecutionProvider = "ACLExecutionProvider";
constexpr const char* kCoreMLExecutionProvider = "CoreMLExecutionProvider";
constexpr const char* kJsExecutionProvider = "JsExecutionProvider";
constexpr const char* kSnpeExecutionProvider = "SNPEExecutionProvider";
constexpr const char* kTvmExecutionProvider = "TvmExecutionProvider";
constexpr const char* kXnnpackExecutionProvider = "XnnpackExecutionProvider";
constexpr const char* kWebNNExecutionProvider = "WebNNExecutionProvider";
constexpr const char* kWebGpuExecutionProvider = "WebGpuExecutionProvider";
7 changes: 0 additions & 7 deletions onnxruntime/test/python/onnxruntime_test_python.py
@@ -98,13 +98,6 @@ def cuda_device_count(self, cuda_lib):
            return -1
        return num_device.value

    def test_tvm_imported(self):
        if "TvmExecutionProvider" not in onnxrt.get_available_providers():
            return
        import tvm  # noqa: PLC0415

        self.assertTrue(tvm is not None)

    def test_get_version_string(self):
        self.assertIsNot(onnxrt.get_version_string(), None)

1 change: 0 additions & 1 deletion onnxruntime/test/util/include/default_providers.h
@@ -41,7 +41,6 @@ std::unique_ptr<IExecutionProvider> DefaultCudaNHWCExecutionProvider();
std::unique_ptr<IExecutionProvider> CudaExecutionProviderWithOptions(const OrtCUDAProviderOptionsV2* provider_options);
std::unique_ptr<IExecutionProvider> DefaultDnnlExecutionProvider();
std::unique_ptr<IExecutionProvider> DnnlExecutionProviderWithOptions(const OrtDnnlProviderOptions* provider_options);
// std::unique_ptr<IExecutionProvider> DefaultTvmExecutionProvider();
std::unique_ptr<IExecutionProvider> DefaultTensorrtExecutionProvider();
std::unique_ptr<IExecutionProvider> DefaultNvTensorRTRTXExecutionProvider();
std::unique_ptr<IExecutionProvider> TensorrtExecutionProviderWithOptions(const OrtTensorRTProviderOptions* params);