+ +PyO3 exposes much of Python's C API through the ffi
module.
The C API is naturally unsafe and requires you to manage reference counts, errors and specific invariants yourself. Please refer to the C API Reference Manual and The Rustonomicon before using any function from that API.
+ +async
and await
This feature is still in active development. See the related issue.
+#[pyfunction]
and #[pymethods]
attributes also support async fn
.
#![allow(dead_code)]
+#[cfg(feature = "experimental-async")] {
+use std::{thread, time::Duration};
+use futures::channel::oneshot;
+use pyo3::prelude::*;
+
+#[pyfunction]
+#[pyo3(signature=(seconds, result=None))]
+async fn sleep(seconds: f64, result: Option<PyObject>) -> Option<PyObject> {
+ let (tx, rx) = oneshot::channel();
+ thread::spawn(move || {
+ thread::sleep(Duration::from_secs_f64(seconds));
+ tx.send(()).unwrap();
+ });
+ rx.await.unwrap();
+ result
+}
+}
Python awaitables instantiated with this method can only be awaited in an asyncio context. Other Python async runtimes may be supported in the future.
+Send + 'static
constraint
The resulting future of an async fn
decorated by #[pyfunction]
must be Send + 'static
to be embedded in a Python object.
As a consequence, async fn
parameters and return types must also be Send + 'static
, so it is not possible to have a signature like async fn does_not_compile<'py>(arg: Bound<'py, PyAny>) -> Bound<'py, PyAny>
.
However, there is an exception for method receivers, so async methods can accept `&self`/`&mut self`. Note that this means the class instance is borrowed for as long as the returned future has not completed, even across yield points and while waiting for I/O operations to finish. Hence, other methods cannot obtain exclusive borrows while the future is still being polled. This is the same as how async methods in Rust generally work, but it is more problematic for Rust code interfacing with Python code because of pervasive shared mutability. For this reason, prefer shared borrows `&self` over exclusive ones `&mut self` to avoid racy borrow check failures at runtime.
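Since `#[pyclass]` instances are guarded by runtime borrow checking, the failure mode can be pictured with plain `RefCell` semantics. This is a std-only analogy, not PyO3 API:

```rust
use std::cell::RefCell;

fn main() {
    // A stand-in for a #[pyclass] instance guarded by runtime borrow checking.
    let instance = RefCell::new(0u32);

    // Like an async method taking `&mut self`: the exclusive borrow lives
    // until the returned future completes.
    let exclusive = instance.borrow_mut();
    // While it is held, every other borrow fails at runtime:
    assert!(instance.try_borrow().is_err());
    drop(exclusive); // the future completes, releasing the borrow

    // Shared borrows (like `&self`) can coexist:
    let a = instance.borrow();
    let b = instance.borrow();
    assert_eq!(*a + *b, 0);
}
```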
Even though it is not possible to pass a `py: Python<'py>` parameter to an `async fn`, the GIL is still held during the execution of the future, just as it is for a regular `fn` without a `Python<'py>`/`Bound<'py, PyAny>` parameter.
It is still possible to get a Python
marker using Python::with_gil
; because with_gil
is reentrant and optimized, the cost will be negligible.
.await
There is currently no simple way to release the GIL when awaiting a future, but solutions are in development.
+Here is the advised workaround for now:
+use std::{
+ future::Future,
+ pin::{Pin, pin},
+ task::{Context, Poll},
+};
+use pyo3::prelude::*;
+
+struct AllowThreads<F>(F);
+
+impl<F> Future for AllowThreads<F>
+where
+ F: Future + Unpin + Send,
+ F::Output: Send,
+{
+ type Output = F::Output;
+
+ fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
+ let waker = cx.waker();
+ Python::with_gil(|gil| {
+ gil.allow_threads(|| pin!(&mut self.0).poll(&mut Context::from_waker(waker)))
+ })
+ }
+}
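The wrapper simply forwards each poll to the inner future. The same delegation pattern can be exercised without PyO3 at all; in this std-only sketch, `Passthrough` and `block_on` are illustrative names and the `with_gil`/`allow_threads` calls are omitted:

```rust
use std::{
    future::Future,
    pin::{Pin, pin},
    sync::Arc,
    task::{Context, Poll, Wake, Waker},
    thread,
};

struct Passthrough<F>(F);

impl<F> Future for Passthrough<F>
where
    F: Future + Unpin,
{
    type Output = F::Output;

    // Forward `poll` to the inner future, as `AllowThreads::poll` does
    // (minus acquiring the GIL and releasing it around the inner poll).
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        pin!(&mut self.0).poll(cx)
    }
}

// A minimal executor: poll, then park the thread until the waker fires.
struct ThreadWaker(thread::Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(output) => return output,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    let answer = block_on(Passthrough(std::future::ready(42)));
    assert_eq!(answer, 42);
}
```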
+Cancellation on the Python side can be caught using CancelHandle
type, by annotating a function parameter with #[pyo3(cancel_handle)]
.
#![allow(dead_code)]
+#[cfg(feature = "experimental-async")] {
+use futures::FutureExt;
+use pyo3::prelude::*;
+use pyo3::coroutine::CancelHandle;
+
+#[pyfunction]
+async fn cancellable(#[pyo3(cancel_handle)] mut cancel: CancelHandle) {
+ futures::select! {
+ /* _ = ... => println!("done"), */
+ _ = cancel.cancelled().fuse() => println!("cancelled"),
+ }
+}
+}
+Coroutine
type
To make a Rust future awaitable in Python, PyO3 defines a Coroutine
type, which implements the Python coroutine protocol.
Each coroutine.send
call is translated to a Future::poll
call. If a CancelHandle
parameter is declared, the exception passed to a `coroutine.throw` call is stored in it and can be retrieved with `CancelHandle::cancelled`; otherwise, the call cancels the Rust future and the exception is reraised.
The type does not yet have a public constructor; one will be added once the design is finalized.
+ +This chapter of the guide goes into detail on how to build and distribute projects using PyO3. The way to achieve this is very different depending on whether the project is a Python module implemented in Rust, or a Rust binary embedding Python. For both types of project there are also common problems such as the Python version to build for and the linker arguments to use.
+The material in this chapter is intended for users who have already read the PyO3 README. It covers in turn the choices that can be made for Python modules and for Rust binaries. There is also a section at the end about cross-compiling projects using PyO3.
+There is an additional sub-chapter dedicated to supporting multiple Python versions.
+PyO3 uses a build script (backed by the pyo3-build-config
crate) to determine the Python version and set the correct linker arguments. By default it will attempt to use the following in order:
- The `python` executable (if it's a Python 3 interpreter).
- The `python3` executable.

You can override the Python interpreter by setting the PYO3_PYTHON
environment variable, e.g. PYO3_PYTHON=python3.7
, PYO3_PYTHON=/usr/bin/python3.9
, or even a PyPy interpreter PYO3_PYTHON=pypy3
.
Once the Python interpreter is located, pyo3-build-config
executes it to query the information in the sysconfig
module which is needed to configure the rest of the compilation.
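The resolution order described above can be pictured with a small sketch; `candidate_interpreters` is a hypothetical helper, not part of pyo3-build-config:

```rust
// Hypothetical sketch of the interpreter-resolution order described above;
// the real logic lives in the pyo3-build-config crate.
fn candidate_interpreters(pyo3_python: Option<&str>) -> Vec<String> {
    match pyo3_python {
        // An explicit PYO3_PYTHON override always wins.
        Some(explicit) => vec![explicit.to_string()],
        // Otherwise fall back to `python`, then `python3`, on the PATH.
        None => vec!["python".to_string(), "python3".to_string()],
    }
}

fn main() {
    // In a real build script this value would come from
    // std::env::var("PYO3_PYTHON").
    assert_eq!(
        candidate_interpreters(Some("/usr/bin/python3.9")),
        vec!["/usr/bin/python3.9"]
    );
    assert_eq!(candidate_interpreters(None), vec!["python", "python3"]);
}
```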
To validate the configuration which PyO3 will use, you can run a compilation with the environment variable PYO3_PRINT_CONFIG=1
set. An example output of doing this is shown below:
$ PYO3_PRINT_CONFIG=1 cargo build
+ Compiling pyo3 v0.14.1 (/home/david/dev/pyo3)
+error: failed to run custom build command for `pyo3 v0.14.1 (/home/david/dev/pyo3)`
+
+Caused by:
+ process didn't exit successfully: `/home/david/dev/pyo3/target/debug/build/pyo3-7a8cf4fe22e959b7/build-script-build` (exit status: 101)
+ --- stdout
+ cargo:rerun-if-env-changed=PYO3_CROSS
+ cargo:rerun-if-env-changed=PYO3_CROSS_LIB_DIR
+ cargo:rerun-if-env-changed=PYO3_CROSS_PYTHON_VERSION
+ cargo:rerun-if-env-changed=PYO3_PRINT_CONFIG
+
+ -- PYO3_PRINT_CONFIG=1 is set, printing configuration and halting compile --
+ implementation=CPython
+ version=3.8
+ shared=true
+ abi3=false
+ lib_name=python3.8
+ lib_dir=/usr/lib
+ executable=/usr/bin/python
+ pointer_width=64
+ build_flags=
+ suppress_build_script_link_lines=false
+
+The PYO3_ENVIRONMENT_SIGNATURE
environment variable can be used to trigger rebuilds when its value changes; it has no other effect.
If you save the above output config from PYO3_PRINT_CONFIG
to a file, it is possible to manually override the contents and feed it back into PyO3 using the PYO3_CONFIG_FILE
env var.
If your build environment is unusual enough that PyO3's regular configuration detection doesn't work, using a config file like this will give you the flexibility to make PyO3 work for you. To see the full set of options supported, see the documentation for the InterpreterConfig
struct.
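For illustration, a file fed back via PYO3_CONFIG_FILE uses the same key=value lines as the PYO3_PRINT_CONFIG output above; the path and values below are an example, not a recommendation:

```text
# pyo3-config.txt (hypothetical path), used as:
#   PYO3_CONFIG_FILE=/path/to/pyo3-config.txt cargo build
implementation=CPython
version=3.8
shared=true
abi3=false
lib_name=python3.8
lib_dir=/usr/lib
executable=/usr/bin/python
pointer_width=64
build_flags=
suppress_build_script_link_lines=false
```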
Python extension modules need to be compiled differently depending on the OS (and architecture) that they are being compiled for. As well as multiple OSes (and architectures), there are also many different Python versions which are actively supported. Packages uploaded to PyPI usually want to upload prebuilt "wheels" covering many OS/arch/version combinations so that users on all these different platforms don't have to compile the package themselves. Package vendors can opt-in to the "abi3" limited Python API which allows their wheels to be used on multiple Python versions, reducing the number of wheels they need to compile, but restricts the functionality they can use.
+There are many ways to go about this: it is possible to use cargo
to build the extension module (along with some manual work, which varies with OS). The PyO3 ecosystem has two packaging tools, maturin
and setuptools-rust
, which abstract over the OS difference and also support building wheels for PyPI upload.
PyO3 has some Cargo features to configure projects for building Python extension modules:
+extension-module
feature, which must be enabled when building Python extension modules.abi3
feature and its version-specific abi3-pyXY
companions, which are used to opt-in to the limited Python API in order to support multiple Python versions in a single wheel.This section describes each of these packaging tools before describing how to build manually without them. It then proceeds with an explanation of the extension-module
feature. Finally, there is a section describing PyO3's abi3
features.
The PyO3 ecosystem has two main choices to abstract the process of developing Python extension modules:
+maturin
is a command-line tool to build, package and upload Python modules. It makes opinionated choices about project layout meaning it needs very little configuration. This makes it a great choice for users who are building a Python extension from scratch and don't need flexibility.setuptools-rust
is an add-on for setuptools
which adds extra keyword arguments to the setup.py
configuration file. It requires more configuration than maturin
, however this gives additional flexibility for users adding Rust to an existing Python package that can't satisfy maturin
's constraints.Consult each project's documentation for full details on how to get started using them and how to upload wheels to PyPI. It should be noted that while maturin
is able to build manylinux-compliant wheels out-of-the-box, setuptools-rust
requires a bit more effort, relying on Docker for this purpose.
There are also maturin-starter
and setuptools-rust-starter
examples in the PyO3 repository.
To build a PyO3-based Python extension manually, start by running cargo build
as normal in a library project which uses PyO3's extension-module
feature and has the cdylib
crate type.
Once built, symlink (or copy) and rename the shared library from Cargo's target/
directory to your desired output directory:
libyour_module.dylib
to your_module.so
.libyour_module.dll
to your_module.pyd
.libyour_module.so
to your_module.so
.You can then open a Python shell in the output directory and you'll be able to run import your_module
.
If you're packaging your library for redistribution, you should indicate the Python interpreter your library is compiled for by including the platform tag in its name. This prevents incompatible interpreters from trying to import your library. If you're compiling for PyPy you must include the platform tag, or PyPy will ignore the module.
To use PyO3 with Bazel, you need to manually configure PyO3, PyO3-ffi and PyO3-macros. In particular, make sure they are compiled with the right Python flags for the version you intend to use. For example, see:
+Rather than using just the .so
or .pyd
extension suggested above (depending on OS), you can prefix the shared library extension with a platform tag to indicate the interpreter it is compatible with. You can query your interpreter's platform tag from the sysconfig
module. Some example outputs of this are seen below:
# CPython 3.10 on macOS
+.cpython-310-darwin.so
+
+# PyPy 7.3 (Python 3.8) on Linux
+$ python -c 'import sysconfig; print(sysconfig.get_config_var("EXT_SUFFIX"))'
+.pypy38-pp73-x86_64-linux-gnu.so
+
+So, for example, a valid module library name on CPython 3.10 for macOS is your_module.cpython-310-darwin.so
, and its equivalent when compiled for PyPy 7.3 on Linux would be your_module.pypy38-pp73-x86_64-linux-gnu.so
.
See PEP 3149 for more background on platform tags.
+On macOS, because the extension-module
feature disables linking to libpython
(see the next section), some additional linker arguments need to be set. maturin
and setuptools-rust
both pass these arguments for PyO3 automatically, but projects using manual builds will need to set these directly in order to support macOS.
The easiest way to set the correct linker arguments is to add a build.rs
with the following content:
fn main() {
+ pyo3_build_config::add_extension_module_link_args();
+}
+Remember to also add pyo3-build-config
to the build-dependencies
section in Cargo.toml
.
An alternative to using pyo3-build-config
is to add the following to a cargo configuration file (e.g. .cargo/config.toml
):
[target.x86_64-apple-darwin]
+rustflags = [
+ "-C", "link-arg=-undefined",
+ "-C", "link-arg=dynamic_lookup",
+]
+
+[target.aarch64-apple-darwin]
+rustflags = [
+ "-C", "link-arg=-undefined",
+ "-C", "link-arg=dynamic_lookup",
+]
+
Using the macOS system python3 (/usr/bin/python3, as opposed to a Python installed via Homebrew, pyenv, Nix, etc.) may result in runtime errors such as Library not loaded: @rpath/Python3.framework/Versions/3.8/Python3
. These can be resolved with another addition to .cargo/config.toml
:
[build]
+rustflags = [
+ "-C", "link-args=-Wl,-rpath,/Library/Developer/CommandLineTools/Library/Frameworks",
+]
+
+Alternatively, one can include in build.rs
:
fn main() {
+ println!(
+ "cargo:rustc-link-arg=-Wl,-rpath,/Library/Developer/CommandLineTools/Library/Frameworks"
+ );
+}
For more discussion of, and workarounds for, macOS linking problems, see this issue.
Finally, don't forget that on macOS the extension-module
feature will cause cargo test
to fail without the --no-default-features
flag (see the FAQ).
extension-module
feature
PyO3's extension-module
feature is used to disable linking to libpython
on Unix targets.
This is necessary because by default PyO3 links to libpython
. This makes binaries, tests, and examples "just work". However, Python extensions on Unix must not link to libpython for manylinux compliance.
The downside of not linking to libpython
is that binaries, tests, and examples (which usually embed Python) will fail to build. If you have an extension module as well as other outputs in a single project, you need to use optional Cargo features to disable the extension-module
when you're not building the extension module. See the FAQ for an example workaround.
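The FAQ's workaround boils down to making the feature optional in Cargo.toml; a sketch:

```toml
[dependencies]
pyo3 = "0.22.1"

[features]
# Enable only when actually building the extension module
# (maturin and setuptools-rust do this for you); leave it off
# so that `cargo test` and binaries can still link libpython.
extension-module = ["pyo3/extension-module"]
```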
Py_LIMITED_API
/abi3
By default, Python extension modules can only be used with the same Python version they were compiled against. For example, an extension module built for Python 3.5 can't be imported in Python 3.8. PEP 384 introduced the idea of the limited Python API, which would have a stable ABI enabling extension modules built with it to be used against multiple Python versions. This is also known as abi3
.
The advantage of building extension modules using the limited Python API is that package vendors only need to build and distribute a single copy (for each OS / architecture), and users can install it on all Python versions from the minimum version and up. The downside of this is that PyO3 can't use optimizations which rely on being compiled against a known exact Python version. It's up to you to decide whether this matters for your extension module. It's also possible to design your extension module such that you can distribute abi3
wheels but allow users compiling from source to benefit from additional optimizations - see the support for multiple python versions section of this guide, in particular the #[cfg(Py_LIMITED_API)]
flag.
There are three steps involved in making use of abi3
when building Python packages as wheels:
abi3
feature in pyo3
. This ensures pyo3
only calls Python C-API functions which are part of the stable API, and on Windows also ensures that the project links against the correct shared object (no special behavior is required on other platforms):[dependencies]
+pyo3 = { version = "0.22.1", features = ["abi3"] }
+
+Ensure that the built shared objects are correctly marked as abi3
. This is accomplished by telling your build system that you're using the limited API. maturin
>= 0.9.0 and setuptools-rust
>= 0.11.4 support abi3
wheels.
+See the corresponding PRs for more.
Ensure that the .whl
is correctly marked as abi3
. For projects using setuptools
, this is accomplished by passing --py-limited-api=cp3x
(where x
is the minimum Python version supported by the wheel, e.g. --py-limited-api=cp35
for Python 3.5) to setup.py bdist_wheel
.
abi3
Because a single abi3
wheel can be used with many different Python versions, PyO3 has feature flags abi3-py37
, abi3-py38
, abi3-py39
etc. to set the minimum required Python version for your abi3
wheel.
+For example, if you set the abi3-py37
feature, your extension wheel can be used on all Python 3 versions from Python 3.7 and up. maturin
and setuptools-rust
will give the wheel a name like my-extension-1.0-cp37-abi3-manylinux2020_x86_64.whl
.
As your extension module may be run with multiple different Python versions you may occasionally find you need to check the Python version at runtime to customize behavior. See the relevant section of this guide on supporting multiple Python versions at runtime.
PyO3 is only able to link your extension module to an abi3 version up to and including your host Python version. E.g., if you set abi3-py38 and try to compile the crate with a host Python of 3.7, the build will fail.
Note: If you set more than one of these abi3 version feature flags, the lowest version always wins. For example, with both abi3-py37 and abi3-py38 set, PyO3 would build a wheel which supports Python 3.7 and up.
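For example, a crate targeting a Python 3.8 floor would enable just the version-specific feature (which implies abi3); a minimal Cargo.toml sketch:

```toml
[dependencies]
pyo3 = { version = "0.22.1", features = ["abi3-py38"] }
```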
abi3
extensions without a Python interpreter
As an advanced feature, you can build a PyO3 wheel without calling a Python interpreter, with the environment variable PYO3_NO_PYTHON set.
+Also, if the build host Python interpreter is not found or is too old or otherwise unusable,
+PyO3 will still attempt to compile abi3
extension modules after displaying a warning message.
+On Unix-like systems this works unconditionally; on Windows you must also set the RUSTFLAGS
environment variable
+to contain -L native=/path/to/python/libs
so that the linker can find python3.lib
.
If the python3.dll
import library is not available, an experimental generate-import-lib
crate
+feature may be enabled, and the required library will be created and used by PyO3 automatically.
Note: MSVC targets require LLVM binutils (llvm-dlltool
) to be available in PATH
for
+the automatic import library generation feature to work.
Due to limitations in the Python API, there are a few pyo3
features that do
+not work when compiling for abi3
. These are:
#[pyo3(text_signature = "...")]
does not work on classes until Python 3.10 or greater.dict
and weakref
options on classes are not supported until Python 3.9 or greater.
If you want to embed the Python interpreter inside a Rust program, there are two modes in which this can be done: dynamically and statically. We'll cover each of these modes in the following sections. Each of them affects how you must distribute your program. Instead of learning how to do this yourself, you might want to consider using a project like PyOxidizer to ship your application and all of its dependencies in a single file.
+PyO3 automatically switches between the two linking modes depending on whether the Python distribution you have configured PyO3 to use (see above) contains a shared library or a static library. The static library is most often seen in Python distributions compiled from source without the --enable-shared
configuration option.
Embedding the Python interpreter dynamically is much easier than doing so statically. This is done by linking your program against a Python shared library (such as libpython.3.9.so
on UNIX, or python39.dll
on Windows). The implementation of the Python interpreter resides inside the shared library. This means that when the OS runs your Rust program it also needs to be able to find the Python shared library.
This mode of embedding works well for Rust tests which need access to the Python interpreter. It is also great for Rust software which is installed inside a Python virtualenv, because the virtualenv sets up appropriate environment variables to locate the correct Python shared library.
+For distributing your program to non-technical users, you will have to consider including the Python shared library in your distribution as well as setting up wrapper scripts to set the right environment variables (such as LD_LIBRARY_PATH
on UNIX, or PATH
on Windows).
Note that PyPy cannot be embedded in Rust (or any other software). Support for this is tracked on the PyPy issue tracker.
+Embedding the Python interpreter statically means including the contents of a Python static library directly inside your Rust binary. This means that to distribute your program you only need to ship your binary file: it contains the Python interpreter inside the binary!
+On Windows static linking is almost never done, so Python distributions don't usually include a static library. The information below applies only to UNIX.
+The Python static library is usually called libpython.a
.
Static linking has a lot of complications, listed below. For these reasons PyO3 does not yet have first-class support for this embedding mode. See issue 416 on PyO3's GitHub for more information and to discuss any issues you encounter.
+The auto-initialize
feature is deliberately disabled when embedding the interpreter statically, because new users trying out PyO3 with test programs often do this unintentionally. Trying out PyO3 is much easier using dynamic embedding.
The known complications are:
+To import compiled extension modules (such as other Rust extension modules, or those written in C), your binary must have the correct linker flags set during compilation to export the original contents of libpython.a
so that extensions can use them (e.g. -Wl,--export-dynamic
).
The C compiler and flags which were used to create libpython.a
must be compatible with your Rust compiler and flags, else you will experience compilation failures.
Significantly different compiler versions may see errors like this:
+lto1: fatal error: bytecode stream in file 'rust-numpy/target/release/deps/libpyo3-6a7fb2ed970dbf26.rlib' generated with LTO version 6.0 instead of the expected 6.2
+
+Mismatching flags may lead to errors like this:
+/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libpython3.9.a(zlibmodule.o): relocation R_X86_64_32 against `.data' can not be used when making a PIE object; recompile with -fPIE
+
+If you encounter these or other complications when linking the interpreter statically, discuss them on issue 416 on PyO3's GitHub. It is hoped that eventually that discussion will contain enough information and solutions that PyO3 can offer first-class support for static embedding.
+When you run your Rust binary with an embedded interpreter, any #[pymodule]
created modules won't be accessible to import unless added to a table called PyImport_Inittab
before the embedded interpreter is initialized. This will cause Python statements in your embedded interpreter such as import your_new_module
to fail. You can call the macro append_to_inittab
with your module before initializing the Python interpreter to add the module function into that table. (The Python interpreter will be initialized by calling prepare_freethreaded_python
, with_embedded_python_interpreter
, or Python::with_gil
with the auto-initialize
feature enabled.)
Thanks to Rust's great cross-compilation support, cross-compiling using PyO3 is relatively straightforward. To get started, you'll need a few pieces of software:
+.config
for the platform you're targeting and the toolchain you are using.PATH
or setting the PYO3_PYTHON
variable (optional when building "abi3" extension modules).After you've obtained the above, you can build a cross-compiled PyO3 module by using Cargo's --target
flag. PyO3's build script will detect that you are attempting a cross-compile based on your host machine and the desired target.
When cross-compiling, PyO3's build script cannot execute the target Python interpreter to query the configuration, so there are a few additional environment variables you may need to set:
+PYO3_CROSS
: If present this variable forces PyO3 to configure as a cross-compilation.PYO3_CROSS_LIB_DIR
: This variable can be set to the directory containing the target's libpython DSO and the associated _sysconfigdata*.py
file for Unix-like targets, or the Python DLL import libraries for the Windows target. This variable is only needed when the output binary must link to libpython explicitly (e.g. when targeting Windows and Android or embedding a Python interpreter), or when it is absolutely required to get the interpreter configuration from _sysconfigdata*.py
.PYO3_CROSS_PYTHON_VERSION
: Major and minor version (e.g. 3.9) of the target Python installation. This variable is only needed if PyO3 cannot determine the version to target from abi3-py3*
features, or if PYO3_CROSS_LIB_DIR
is not set, or if there are multiple versions of Python present in PYO3_CROSS_LIB_DIR
.PYO3_CROSS_PYTHON_IMPLEMENTATION
: Python implementation name ("CPython" or "PyPy") of the target Python installation. CPython is assumed by default when this variable is not set, unless PYO3_CROSS_LIB_DIR
is set for a Unix-like target and PyO3 can get the interpreter configuration from _sysconfigdata*.py
.An experimental pyo3
crate feature generate-import-lib
enables the user to cross-compile
+extension modules for Windows targets without setting the PYO3_CROSS_LIB_DIR
environment
+variable or providing any Windows Python library files. It uses an external python3-dll-a
crate
+to generate import libraries for the Python DLL for MinGW-w64 and MSVC compile targets.
+python3-dll-a
uses the binutils dlltool
program to generate DLL import libraries for MinGW-w64 targets.
+It is possible to override the default dlltool
command name for the cross target
+by setting PYO3_MINGW_DLLTOOL
environment variable.
+Note: MSVC targets require LLVM binutils or MSVC build tools to be available on the host system.
+More specifically, python3-dll-a
requires llvm-dlltool
or lib.exe
executable to be present in PATH
when
+targeting *-pc-windows-msvc
. The Zig compiler executable can be used in place of llvm-dlltool
when the ZIG_COMMAND
+environment variable is set to the installed Zig program name ("zig"
or "python -m ziglang"
).
An example might look like the following (assuming your target's sysroot is at /home/pyo3/cross/sysroot
and that your target is armv7
):
export PYO3_CROSS_LIB_DIR="/home/pyo3/cross/sysroot/usr/lib"
+
+cargo build --target armv7-unknown-linux-gnueabihf
+
+If there are multiple python versions at the cross lib directory and you cannot set a more precise location to include both the libpython
DSO and _sysconfigdata*.py
files, you can set the required version:
export PYO3_CROSS_PYTHON_VERSION=3.8
+export PYO3_CROSS_LIB_DIR="/home/pyo3/cross/sysroot/usr/lib"
+
+cargo build --target armv7-unknown-linux-gnueabihf
+
+Or another example with the same sys root but building for Windows:
+export PYO3_CROSS_PYTHON_VERSION=3.9
+export PYO3_CROSS_LIB_DIR="/home/pyo3/cross/sysroot/usr/lib"
+
+cargo build --target x86_64-pc-windows-gnu
+
+Any of the abi3-py3*
features can be enabled instead of setting PYO3_CROSS_PYTHON_VERSION
in the above examples.
PYO3_CROSS_LIB_DIR
can often be omitted when cross compiling extension modules for Unix and macOS targets,
+or when cross compiling extension modules for Windows and the experimental generate-import-lib
+crate feature is enabled.
The following resources may also be useful for cross-compiling:
PyO3 supports all actively-supported Python 3 and PyPy versions. As much as possible, this is done internally to PyO3 so that your crate's code does not need to adapt to the differences between each version. However, as Python features grow and change between versions, PyO3 cannot provide a completely identical API for every Python version. This may require you to add conditional compilation to your crate or runtime checks for the Python version.
+This section of the guide first introduces the pyo3-build-config
crate, which you can use as a build-dependency
to add additional #[cfg]
flags which allow you to support multiple Python versions at compile-time.
Second, we'll show how to check the Python version at runtime. This can be useful when building for multiple versions with the abi3
feature, where the Python API compiled against is not always the same as the one in use.
The pyo3-build-config
exposes multiple #[cfg]
flags which can be used to conditionally compile code for a given Python version. PyO3 itself depends on this crate, so by using it you can be sure that you are configured correctly for the Python version PyO3 is building against.
This allows us to write code like the following
+#[cfg(Py_3_7)]
+fn function_only_supported_on_python_3_7_and_up() {}
+
+#[cfg(not(Py_3_8))]
+fn function_only_supported_before_python_3_8() {}
+
+#[cfg(not(Py_LIMITED_API))]
+fn function_incompatible_with_abi3_feature() {}
+The following sections first show how to add these #[cfg]
flags to your build process, and then cover some common patterns in a little more detail.
To see a full reference of all the #[cfg]
flags provided, see the pyo3-build-cfg
docs.
pyo3-build-config
You can use the #[cfg]
flags in just two steps:
Add pyo3-build-config
with the resolve-config
feature enabled to your crate's build dependencies in Cargo.toml
:
[build-dependencies]
+pyo3-build-config = { version = "0.22.1", features = ["resolve-config"] }
+
+Add a build.rs
file to your crate with the following contents:
fn main() {
+ // If you have an existing build.rs file, just add this line to it.
+ pyo3_build_config::use_pyo3_cfgs();
+}
+After these steps you are ready to annotate your code!
+pyo3-build-cfg
flags
The #[cfg]
flags added by pyo3-build-cfg
can be combined with all of Rust's logic in the #[cfg]
attribute to create very precise conditional code generation. The following are some common patterns implemented using these flags:
#[cfg(Py_3_7)]
+
+This #[cfg]
marks code that will only be present on Python 3.7 and upwards. There are similar options Py_3_8
, Py_3_9
, Py_3_10
and so on for each minor version.
#[cfg(not(Py_3_7))]
+
+This #[cfg]
marks code that will only be present on Python versions before (but not including) Python 3.7.
#[cfg(not(Py_LIMITED_API))]
+
+This #[cfg]
marks code that is only available when building for the unlimited Python API (i.e. PyO3's abi3
feature is not enabled). This might be useful if you want to ship your extension module as an abi3
wheel and also allow users to compile it from source to make use of optimizations only possible with the unlimited API.
#[cfg(any(Py_3_9, not(Py_LIMITED_API)))]
+
+This #[cfg]
marks code which is available when running Python 3.9 or newer, or when using the unlimited API with an older Python version. Patterns like this are commonly seen on Python APIs which were added to the limited Python API in a specific minor version.
#[cfg(PyPy)]
+
+This #[cfg]
marks code which is running on PyPy.
When building with PyO3's abi3 feature, your extension module will be compiled against a specific minimum version of Python, but may be running on newer Python versions.

For example with PyO3's abi3-py38 feature, your extension will be compiled as if it were for Python 3.8. If you were using pyo3-build-config, #[cfg(Py_3_8)] would be present. Your user could freely install and run your abi3 extension on Python 3.9.

There's no way to detect your user doing that at compile time, so instead you need to fall back to runtime checks.

PyO3 provides the APIs Python::version() and Python::version_info() to query the running Python version. This allows you to do the following, for example:
use pyo3::Python;

Python::with_gil(|py| {
    // PyO3 supports Python 3.7 and up.
    assert!(py.version_info() >= (3, 7));
    assert!(py.version_info() >= (3, 7, 0));
});
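Putting the two mechanisms together: in an abi3-py38 build, #[cfg(Py_3_9)] is always false even when the running interpreter is 3.9 or newer, so the runtime version check is the right guard. A hedged sketch (the function and error message are illustrative, not part of PyO3's API):

```rust
use pyo3::exceptions::PyRuntimeError;
use pyo3::prelude::*;

// Hypothetical guard for functionality that needs a 3.9+ interpreter
// at runtime, even though the extension was compiled for the 3.8 ABI.
fn ensure_python_39(py: Python<'_>) -> PyResult<()> {
    if py.version_info() >= (3, 9) {
        Ok(())
    } else {
        Err(PyRuntimeError::new_err(
            "this feature requires Python 3.9 or newer",
        ))
    }
}
```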
All notable changes to this project will be documented in this file. For help with updating to new PyO3 versions, please see the migration guide.
The format is based on Keep a Changelog and this project adheres to Semantic Versioning.
To see unreleased changes, please see the CHANGELOG on the main branch guide.
#[pyo3(submodule)]
option for declarative #[pymodule]
s. #4301PartialEq<bool>
for Bound<'py, PyBool>
. #4305NotImplemented
instead of raising TypeError
from generated equality method when comparing different types. #4287#[pyo3::prelude::pymodule]
and similar for #[pyclass]
and #[pyfunction]
in declarative modules.#4288#[setter]
function. #4304heck
dependency to 0.5. #3966chrono-tz
optional dependency to include version 0.10. #4061num-rational
feature to add conversions with Python's fractions.Fraction
. #4148PyWeakref
, PyWeakrefReference
and PyWeakrefProxy
. #3835#[pyclass]
on enums that have tuple variants. #4072Decimal
conversion. #4079pyo3_disable_reference_pool
conditional compilation flag to avoid the overhead of the global reference pool at the cost of known limitations as explained in the performance section of the guide. #4095#[pyo3(constructor = (...))]
to customize the generated constructors for complex enum variants. #4158PyType::module
, which always matches Python __module__
. #4196PyType::fully_qualified_name
which matches the "fully qualified name" defined in PEP 737. #4196PyTypeMethods::mro
and PyTypeMethods::bases
. #4197#[pyclass(ord)]
to implement ordering based on PartialOrd
. #4202ToPyObject
and IntoPy<PyObject>
for PyBackedStr
and PyBackedBytes
. #4205#[pyclass(hash)]
option to implement __hash__
in terms of the Hash
implementation #4206#[pyclass(eq)]
option to generate __eq__
based on PartialEq
, and #[pyclass(eq_int)]
for simple enums to implement equality based on their discriminants. #4210From<Bound<'py, T>>
for PyClassInitializer<T>
. #4214as_super
methods to PyRef
and PyRefMut
for accessing the base class by reference. #4219PartialEq<str>
for Bound<'py, PyString>
. #4245PyModuleMethods::filename
on PyPy. #4249PartialEq<[u8]>
for Bound<'py, PyBytes>
. #4250pyo3_ffi::c_str
macro to create &'static CStr
on Rust versions which don't have 1.77's c""
literals. #4255bool
conversion with numpy
2.0's numpy.bool
type #4258PyAnyMethods::{bitnot, matmul, floor_div, rem, divmod}
. #4264PySliceIndices::slicelength
and the length
parameter of PySlice::indices()
. #3761Clone
ing pointers into the Python heap has been moved behind the py-clone
feature, as it must panic without the GIL being held as a soundness fix. #4095#[track_caller]
to all Py<T>
, Bound<'py, T>
and Borrowed<'a, 'py, T>
methods which can panic. #4098PyAnyMethods::dir
to be fallible and return PyResult<Bound<'py, PyList>>
(and similar for PyAny::dir
). #4100weakref
or dict
when compiling for abi3
for Python older than 3.9. #4194PyType::name
to always match Python __name__
. #4196#[pyclass(eq_int)]
. #4210module=
attribute of declarative modules' child #[pymodule]
s and #[pyclass]
es. #4213module
option for complex enum variants from the value set on the complex enum module
. #4228abi3
feature on PyPy or GraalPy. #4237#[pyo3(get)]
on #[pyclass]
fields. #4254PyCFunction::new
, PyCFunction::new_with_keywords
and PyCFunction::new_closure
now take &'static CStr
name and doc arguments (previously was &'static str
). #4255experimental-declarative-modules
feature is now stabilized and available by default. #4257PYO3_CROSS_LIB_DIR
is set to a missing path. #4043create_exception!
living in a different Rust module using the declarative-module
feature. #4086PY_VECTORCALL_ARGUMENTS_OFFSET
and PyVectorcall_NARGS
to fix a false-positive assertion. #4104PyUnicode_DATA
on PyPy: not exposed by PyPy. #4116#[pyo3(from_py_with = ...)]
attribute on dunder (__magic__
) method arguments instead of silently ignoring it. #4117mod
node. #4236__dict__
attribute missing for #[pyclass(dict)]
instances when building for abi3
on Python 3.9. #4251PySet::empty()
gil-ref constructor. #4082async fn
in #[pymethods]
with a &self
receiver and more than one additional argument. #4035__traverse__
. #4045#[pyclass]
living in a different Rust module using the experimental-declarative-modules
feature. #4054missing_docs
lint triggering on documented #[pymodule]
functions. #4067libpython
). #4073Send
and Sync
for PyBackedStr
and PyBackedBytes
. #4007Clone
, Debug
, PartialEq
, Eq
, PartialOrd
, Ord
and Hash
implementation for PyBackedBytes
and PyBackedStr
, and Display
for PyBackedStr
. #4020import_exception_bound!
macro to import exception types without generating GIL Ref functionality for them. #4027#[setter]
function arguments. #3998#[inline]
hints on many Bound
and Borrowed
methods. #4024#[pyo3(from_py_with = "")]
in #[setter]
methods #3995&Bound
in #[setter]
methods. #3998#[pymodule]
, #[pyfunction]
and #[pyclass]
macros. #4009pyo3::import_exception!
does not exist. #4012#[pymethod]
with a receiver and additional arguments. #4015PyMemoryView
type. #3514async fn
in for #[pyfunction]
and #[pymethods]
, with the experimental-async
feature. #3540 #3588 #3599 #3931PyTypeInfo
for PyEllipsis
, PyNone
and PyNotImplemented
. #3577#[pyclass]
on enums that have non-unit variants. #3582chrono
feature with abi3
feature. #3664FromPyObject
, IntoPy<PyObject>
and ToPyObject
are implemented on std::duration::Duration
#3670PyString::to_cow
. Add Py<PyString>::to_str
, Py<PyString>::to_cow
, and Py<PyString>::to_string_lossy
, as ways to access Python string data safely beyond the GIL lifetime. #3677Bound<T>
and Borrowed<T>
smart pointers as a new API for accessing Python objects. #3686PyNativeType::as_borrowed
to convert "GIL refs" to the new Bound
smart pointer. #3692FromPyObject::extract_bound
method, to migrate FromPyObject
implementations to the Bound API. #3706gil-refs
feature to allow continued use of the deprecated GIL Refs APIs. #3707PyAnyMethods
for binary operators (add
, sub
, etc.) #3712chrono-tz
feature allowing conversion between chrono_tz::Tz
and zoneinfo.ZoneInfo
#3730PyType_GetModuleByDef
. #3734std::time::SystemTime
and datetime.datetime
#3736Py::as_any
and Py::into_any
. #3785PyStringMethods::encode_utf8
. #3801PyBackedStr
and PyBackedBytes
, as alternatives to &str
and &bytes
where a Python object owns the data. #3802 #3991#[pymodule]
macro on Rust mod
blocks, with the experimental-declarative-modules
feature. #3815ExactSizeIterator
for set
and frozenset
iterators on abi3
feature. #3849Py::drop_ref
to explicitly drop a Py and immediately decrease the Python reference count if the GIL is already held. #3871#[pymodule]
macro on single argument functions that take &Bound<'_, PyModule>
. #3905FromPyObject
for Cow<str>
. #3928Default
for GILOnceCell
. #3971PyDictMethods::into_mapping
, PyListMethods::into_sequence
and PyTupleMethods::into_sequence
. #3982PyDict::from_sequence
now takes a single argument of type &PyAny
(previously took two arguments Python
and PyObject
). #3532Py::is_ellipsis
and PyAny::is_ellipsis
in favour of any.is(py.Ellipsis())
. #3577PyTypeInfo
functionality into new traits HasPyGilRef
and PyTypeCheck
. #3600PyTryFrom
and PyTryInto
traits in favor of any.downcast()
via the PyTypeCheck
and PyTypeInfo
traits. #3601&self
/&mut self
#3609FromPyObject
for set types now also accept frozenset
objects as input. #3632FromPyObject
for bool
now also accepts NumPy's bool_
as input. #3638AsRefSource
associated type to PyNativeType
. #3653.is_true
to .is_truthy
on PyAny
and Py<PyAny>
to clarify that the test is not based on identity with or equality to the True singleton. #3657PyType::name
is now PyType::qualname
whereas PyType::name
efficiently accesses the full name which includes the module name. #3660Iter(A)NextOutput
types are now deprecated and __(a)next__
can directly return anything which can be converted into Python objects, i.e. awaitables do not need to be wrapped into IterANextOutput
or Option
any more. Option
can still be used as well and returning None
will trigger the fast path for __next__
, stopping iteration without having to raise a StopIteration
exception. #3661FromPyObject
on chrono::DateTime<Tz>
for all Tz
, not just FixedOffset
and Utc
. #3663PyTzInfoAccess
trait. For the deprecated gil-ref API, the trait is now implemented for &'py PyTime
and &'py PyDateTime
instead of PyTime
and PyDate
. #3679__traverse__
become no-ops for unsendable pyclasses if on the wrong thread, thereby avoiding hard aborts at the cost of potential leakage. #3689PyNativeType
in pyo3::prelude
. #3692extract::<i64>
(and other integer types) by avoiding call to __index__()
converting the value to an integer for 3.10+. Gives performance improvement of around 30% for successful extraction. #3742FromPyObject
for Py<T>
to just T: PyTypeCheck
. #3776PySet
and PyFrozenSet
iterators now always iterate the equivalent of iter(set)
. (A "fast path" with no noticeable performance benefit was removed.) #3849FromPyObject
for &str
, Cow<str>
, &[u8]
and Cow<[u8]>
onto a temporary trait FromPyObjectBound
when gil-refs
feature is deactivated. #3928GILPool
, Python::with_pool
, and Python::new_pool
. #3947Py_MAX_NDIMS
in favour of PyBUF_MAX_NDIM
. #3757datetime
types when an invalid datetime
module is on sys.path. #3818non_local_definitions
lint warning triggered by many PyO3 macros. #3901PyCode
and PyCode_Type
on PyPy: PyCode_Type
is not exposed by PyPy. #3934Prerelease of PyO3 0.21. See the GitHub diff for what changed between 0.21.0-beta.0 and the final release.
portable-atomic
dependency. #3619abi3
stable ABI by the environment variable PYO3_USE_ABI3_FORWARD_COMPATIBILITY=1
. #3821portable-atomic
to support platforms without 64-bit atomics. #3619either
feature enabled without experimental-inspect
enabled. #3834pyo3
and pyo3-ffi
dependencies on pyo3-build-config
to require the same patch version, i.e. pyo3
0.20.2 requires exactly pyo3-build-config
0.20.2. #3721pyo3
0.20.0 with latest pyo3-build-config
0.20.X. #3724either
feature to add conversions for either::Either<L, R>
sum type. #3456smallvec
feature to add conversions for smallvec::SmallVec
. #3507take
and into_inner
methods to GILOnceCell
#3556#[classmethod]
methods can now also receive Py<PyType>
as their first argument. #3587#[pyfunction(pass_module)]
can now also receive Py<PyModule>
as their first argument. #3587traverse
method to GILProtected
. #3616abi3-py312
feature #3687chrono
dependency. #3512clippy::unnecessary_fallible_conversions
warning when using a Py<Self>
self
receiver. #3564indoc
dependency to 2.0 and unindent
dependency to 0.2. #3237syn
dependency to 2.0. #3239chrono
optional dependency to require 0.4.25 or newer. #3427__lt__
, __le__
, __eq__
, __ne__
, __gt__
and __ge__
in #[pymethods]
. #3203Py_GETENV
. #3336as_ptr
and into_ptr
inherent methods for Py
, PyAny
, PyRef
, and PyRefMut
. #3359DoubleEndedIterator
for PyTupleIterator
and PyListIterator
. #3366#[pyclass(rename_all = "...")]
option: this allows renaming all getters and setters of a struct, or all variants of an enum. Available renaming rules are: "camelCase"
, "kebab-case"
, "lowercase"
, "PascalCase"
, "SCREAMING-KEBAB-CASE"
, "SCREAMING_SNAKE_CASE"
, "snake_case"
, "UPPERCASE"
. #3384PyObject_GC_IsTracked
and PyObject_GC_IsFinalized
on Python 3.9 and up (PyPy 3.10 and up). #3403None
, Ellipsis
, and NotImplemented
. #3408Py_mod_multiple_interpreters
constant and its possible values. #3494PyInterpreterConfig
struct, its constants and Py_NewInterpreterFromConfig
. #3502PySet::discard
to return PyResult<bool>
(previously returned nothing). #3281IntoPy
for Rust tuples to Python tuples. #3321PyDict::get_item
to no longer suppress arbitrary exceptions (the return type is now PyResult<Option<&PyAny>>
instead of Option<&PyAny>
), and deprecate PyDict::get_item_with_error
. #3330AsPyPointer
is now an unsafe trait
. #3358os.PathLike
values in implementation of FromPyObject
for PathBuf
. #3374__builtins__
to globals in py.run()
and py.eval()
if they're missing. #3378FromPyObject
for BigInt
and BigUint
. #3379PyIterator::from_object
and PyByteArray::from
now take a single argument of type &PyAny
(previously took two arguments Python
and AsPyPointer
). #3389AsPyPointer
with AsRef<PyAny>
as a bound in the blanket implementation of From<&T> for PyObject
. #3391impl IntoPy<PyObject> for &T where T: AsPyPointer
with implementations of impl IntoPy<PyObject>
for &PyAny
, &T where T: AsRef<PyAny>
, and &Py<T>
. #3393std::io::Error
kind in implementation of From<std::io::IntoInnerError>
for PyErr
#3396ErrorKind
in implementation of From<PyErr>
for OSError
subclass. #3397PyErr
in implementation of From<std::io::Error>
for PyErr
if the std::io::Error
has been built using a Python exception (previously would create a new exception wrapping the std::io::Error
). #3402#[pymodule]
will now return the same module object on repeated import by the same Python interpreter, on Python 3.9 and up. #3446chrono
types to Python datetime
types (datetime
cannot represent leap-seconds). #3458Err
returned from #[pyfunction]
will now have a non-None __context__
if called from inside a catch
block. #3455#[__new__]
form of #[new]
attribute. #3505#[args]
attribute for #[pymethods]
. #3232IntoPyPointer
trait in favour of into_ptr
inherent methods. #3385PySet::discard
. #3281PyTupleIterator
type returned by PyTuple::iter
is now public and hence can be named by downstream crates. #3366PyOS_FSPath
on PyPy. #3374PyTypeBuilder::build
. #3401_Py_GetAllocatedBlocks
, _PyObject_GC_Malloc
, and _PyObject_GC_Calloc
on Python 3.11 and up. #3403ResourceWarning
and crashes related to GC when running with debug builds of CPython. #3404Option<T>
default arguments will no longer re-wrap Some(T)
or expressions evaluating to None
. #3461IterNextOutput::Return
not returning a value on PyPy. #3471#[pymethods]
blocks. #3491#[new]
, #[classmethod]
, #[staticmethod]
, and #[classattr]
. #3484PyMarshal_WriteObjectToString
from PyMarshal_ReadObjectFromString
with the abi3
feature. #3490_PyFrameEvalFunction
on Python 3.11 and up (it now receives a _PyInterpreterFrame
opaque struct). #3500PyState_AddModule
, PyState_RemoveModule
and PyState_FindModule
for PyPy 3.9 and up. #3295_PyObject_CallFunction_SizeT
and _PyObject_CallMethod_SizeT
. #3297PyErr::Display
for all Python versions, and FFI symbol PyErr_DisplayException
for Python 3.12. #3334PyType_GetDict()
for Python 3.12. #3339PyAny::downcast_exact
. #3346PySlice::full()
to construct a full slice (::
). #3353PyErr
for 3.12 betas to avoid deprecated ffi methods. #3306object.h
for Python 3.12.0b4. #3335pyo3::ffi
struct definitions to be compatible with 3.12.0b4. #3342float
to f64
(and PyFloat::value
) on non-abi3 builds. #3345SystemError
raised in PyUnicodeDecodeError_Create
on PyPy 3.10. #3297Py_EnterRecursiveCall
to return c_int
(was incorrectly returning ()
). #3300PyErr::matches
and PyErr::is_instance
returned results inconsistent with PyErr::get_type
. #3313PanicException
when unwinding after the exception was "normalized". #3326PyErr::from_value
and PyErr::into_value
losing traceback on conversion. #3328hashbrown
optional dependency to include version 0.14 #3258indexmap
optional dependency to include version 2. #3277pyo3::types::PyFrozenSetBuilder
to allow building a PyFrozenSet
item by item. #3156ipaddress.IPv4Address
/ipaddress.IPv6Address
and std::net::IpAddr
. #3197num-bigint
feature in combination with abi3
. #3198PyErr_GetRaisedException()
, PyErr_SetRaisedException()
to FFI definitions for Python 3.12 and later. #3248Python::with_pool
which is a safer but more limited alternative to Python::new_pool
. #3263PyDict::get_item_with_error
on PyPy. #3270#[new]
methods may now return Py<Self>
in order to return existing instances. #3287__complex__
to Complex
when using abi3
or PyPy. #3185PyAny::hasattr
. #3271PySet
or PyFrozenSet
or returning types converted into these internally, e.g. HashSet
or BTreeSet
. #3286text_signature
option (and automatically generate signature) for #[new]
in #[pymethods]
. #2980decimal.Decimal
and rust_decimal::Decimal
. #3016#[pyo3(from_item_all)]
when deriving FromPyObject
to specify get_item
as getter for all fields. #3120pyo3::exceptions::PyBaseExceptionGroup
for Python 3.11, and corresponding FFI definition PyExc_BaseExceptionGroup
. #3141#[new]
with #[classmethod]
to create a constructor which receives a (subtype's) class/PyType
as its first argument. #3157PyClass::get
and Py::get
for GIL-independent access to classes with #[pyclass(frozen)]
. #3158PyAny::is_exact_instance
and PyAny::is_exact_instance_of
. #3161PyAny::is_instance_of::<T>(obj)
is now equivalent to T::is_type_of(obj)
, and now returns bool
instead of PyResult<bool>
. #2881text_signature
option on #[pyclass]
structs. #2980anyhow::Error
/eyre::Report
containing a basic PyErr
without a chain in a PyRuntimeError
. #3004#[getter]
and #[setter]
to use a common call "trampoline" to slightly reduce generated code size and compile times. #3029text_signature
. #3050None
in automatically-generated text_signature
. #3066PySequence::list
and PySequence::tuple
to PySequence::to_list
and PySequence::to_tuple
. (The old names continue to exist as deprecated forms.) #3111PyRef::py
and PyRefMut::py
to match the underlying borrow. #3131Python::with_gil
, is now locked inside of implementations of the __traverse__
slot. #3168Python::acquire_gil
is replaced by Python::with_gil
. #2981PyGetSetDef
, PyMemberDef
, PyStructSequence_Field
and PyStructSequence_Desc
to have *const c_char
members for name
and doc
(not *mut c_char
). #3036fmt::Display
, instead return "<unprintable object>"
string and report error via sys.unraisablehook()
#3062#[pyfunction]
s take references into #[pyclass]
es #3142__traverse__
implementation. #3168Drop
implementations of unsendable classes on other threads. #3176#[pymethods]
items come from somewhere else (for example, as a macro argument) and a custom receiver like Py<Self>
is used. #3178GILProtected<T>
to mediate concurrent access to a value using Python's global interpreter lock (GIL). #2975PyASCIIObject
/ PyUnicode
and associated methods on big-endian architectures. #3015_PyDict_Contains_KnownHash()
for CPython 3.10 and up. #3088#[pymethods]
and #[pyfunction]
called "output". #3022#[staticmethod]
. #3055is_instance
for PyDateTime
(would incorrectly check for a PyDate
). #3071PyUnicode_InternImmortal
since Python 3.10. #3071chrono
to avoid depending on time
v0.1.x. #2939IntoPy<PyObject>
, ToPyObject
and FromPyObject
for Cow<[u8]>
to efficiently handle both bytes
and bytearray
objects. #2899IntoPy<PyObject>
, ToPyObject
and FromPyObject
for Cell<T>
. #3014PyList::to_tuple()
, as a convenient and efficient conversion from lists to tuples. #3042PyTuple::to_list()
, as a convenient and efficient conversion from tuples to lists. #3044PySequence
conversion for list
and tuple
inputs. #2944#[pyclass]
type object fails during module import. #2947PyMapping
conversion for dict
inputs. #2954create_exception!
to take a dotted.module
to place the exception in a submodule. #2979PyObject
s cloned in allow_threads
blocks. #2952clippy::redundant_closure
lint on default arguments in #[pyo3(signature = (...))]
annotations. #2990non_snake_case
lint on generated code in #[pyfunction]
macro. #2993PyErr::write_unraisable()
. #2889Python::Ellipsis()
and PyAny::is_ellipsis()
methods. #2911PyDict::update()
and PyDict::update_if_missing()
methods. #2912PyIter_Check
on CPython 3.7 is now implemented as hasattr(type(obj), "__next__")
, which works correctly on all platforms and adds support for abi3
. #2914PYO3_CONFIG_FILE
instead of denying. #2926__releasebuffer__
to sys.unraisablehook
rather than causing SystemError
. #2886PyIterator
succeeding for Python classes which did not implement __next__
. #2914__traverse__
when visiting None
fields of Option<T: AsPyPointer>
. #2921#[pymethods(crate = "...")]
option being ignored. #2923pythonXY_d.dll
for debug Python builds on Windows. #2937indexmap
optional dependency to allow >= 1.6, < 2
. #2849hashbrown
optional dependency to allow >= 0.9, < 0.14
. #2875memoffset
dependency to 0.8. #2875GILOnceCell::get_or_try_init
for fallible GILOnceCell
initialization. #2398experimental-inspect
with type_input()
and type_output()
helpers to get the Python type of any Python-compatible object. #2490 #2882#[pyclass]
macro can now take get_all
and set_all
to create getters and setters for every field. #2692#[pyo3(signature = (...))]
option for #[pyfunction]
and #[pymethods]
. #2702pyo3-build-config
: rebuild when PYO3_ENVIRONMENT_SIGNATURE
environment variable value changes. #2727std::num
and Python int
. #2730Py::downcast()
as a companion to PyAny::downcast()
, as well as downcast_unchecked()
for both types. #2734Warning
classes as well as PyErr::warn_explicit
. #2742abi3-py311
feature. #2776_PyErr_ChainExceptions()
for CPython. #2788PyVectorcall_NARGS
and PY_VECTORCALL_ARGUMENTS_OFFSET
for PyPy 3.8 and up. #2811PyList::get_item_unchecked
for PyPy. #2827PyCFunction::new_closure
to take name
and doc
arguments. #2686PyType::is_subclass
, PyErr::is_instance
and PyAny::is_instance
now take &PyAny
instead of &PyType
arguments, so that they work with objects that pretend to be types using __subclasscheck__
and __instancecheck__
. #2695#[args]
attribute and passing "args" specification directly to #[pyfunction]
in favor of the new #[pyo3(signature = (...))]
option. #2702Option<T>
arguments to #[pyfunction]
and #[pymethods]
without also using #[pyo3(signature)]
to specify whether the arguments should be required or have defaults. #2703#[pyfunction]
and #[pymethods]
to use a common call "trampoline" to slightly reduce generated code size and compile times. #2705PyAny::cast_as()
and Py::cast_as()
are now deprecated in favor of PyAny::downcast()
and the new Py::downcast()
. #2734PyAny::downcast()
. #2734__text_signature__
for all Python functions created using #[pyfunction]
and #[pymethods]
. #2784PySet::new
and PyFrozenSet::new
. #2795#[cfg(...)]
and #[pyo3(...)]
attributes on #[pyclass]
struct fields will now work. #2796PyFunction
when building for abi3 or PyPy. #2838derive(FromPyObject)
to use intern!
when applicable for #[pyo3(item)]
. #2879pyproto
feature, #[pyproto]
macro, and all accompanying APIs. #2587PyModule::filename
on PyPy. #2715PyCodeObject
is now once again defined with fields on Python 3.7. #2726TypeError
if #[new]
pymethods with no arguments receive arguments when called from Python. #2749NOARGS
argument calling convention for methods that have a single py: Python
argument (as a performance optimization). #2760isize
values to c_long
in PySlice::new
. #2769PyUnicodeDecodeError_Create
on PyPy leading to indeterminate behavior (typically a TypeError
). #2772**kwargs
to accept keyword arguments which share a name with a positional-only argument (as permitted by PEP 570). #2800PyObject_Vectorcall
on PyPy 3.9 and up. #2811PyCFunction::new_closure
. #2842ExactSizeIterator
for PyListIterator
, PyDictIterator
, PySetIterator
and PyFrozenSetIterator
. #2676impl FromPyObject for [T; N]
no longer accepting types passing PySequence_Check
, e.g. NumPy arrays, since version 0.17.0. This is the same fix that was applied to impl FromPyObject for Vec<T>
in version 0.17.1 extended to fixed-size arrays. #2675FunctionDescription::extract_arguments_fastcall
due to creating slices from a null pointer. #2687chrono
feature to convert chrono
types into types in the datetime
module. #2612num-bigint
feature on PyPy
. #2626__richcmp__
for enums, fixing __ne__
always returning True
. #2622Option<&SomePyClass>
argument with a default. #2630impl FromPyObject for Vec<T>
no longer accepting types passing PySequence_Check
, e.g. NumPy arrays, since 0.17.0. #2631PyDictItems
, PyDictKeys
, and PyDictValues
types added in PyO3 0.17.0.#[pyo3(from_py_with = "...")]
attribute on an argument of type Option<T>
. #2592redundant-closure
lint on **kwargs
arguments for #[pyfunction]
and #[pymethods]
. #25950.3
(the multiple-pymethods
feature now requires Rust 1.62 for correctness). #2492timezone_utc
. #1588ToPyObject
for [T; N]
. #2313PyDictKeys
, PyDictValues
and PyDictItems
Rust types. #2358append_to_inittab
. #2377PyFrame_GetCode
. #2406PyCode
and PyFrame
high level objects. #2408Py_fstring_input
, sendfunc
, and _PyErr_StackItem
. #2423PyDateTime::new_with_fold
, PyTime::new_with_fold
, PyTime::get_fold
, and PyDateTime::get_fold
for PyPy. #2428#[pyclass(frozen)]
. #2448#[pyo3(name)]
on enum variants. #2457CompareOp::matches
to implement __richcmp__
as the result of a Rust std::cmp::Ordering
comparison. #2460PySuper
type. #2486generate-import-lib
feature. #2506Py_EnterRecursiveCall
and Py_LeaveRecursiveCall
. #2511PyDict::get_item_with_error
. #2536#[pyclass(sequence)]
option. #2567tzinfo
to take Option<&PyTzInfo>
instead of Option<&PyObject>
: PyDateTime::new
, PyDateTime::new_with_fold
, PyTime::new
, and PyTime::new_with_fold
. #1588PyTypeObject::type_object
method to the PyTypeInfo
trait, and deprecate the PyTypeObject
trait. #2287Py
and PyAny
now accept impl IntoPy<Py<PyString>>
rather than just &str
to allow use of the intern!
macro. #2312pyproto
feature to be opt-in instead of opt-out. #2322#[pyfunction]
return types do not implement IntoPy
. #2326T: IntoPy
for impl<T, const N: usize> IntoPy<PyObject> for [T; N]
instead of T: ToPyObject
. #2326ToBorrowedObject
trait. #2333PySet
and PyDict
will now panic if the underlying collection is mutated during the iteration. #2380#[classattr]
methods to be fallible. #2385#[pymethods]
with the same name for a single #[pyclass]
. #2399lib_name
when using PYO3_CONFIG_FILE
. #2404ValueError
raised by the #[derive(FromPyObject)]
implementation for a tuple struct. #2414#[classattr]
methods to take Python
argument. #2456PyCapsule
type to resolve soundness issues: #2485
PyCapsule::new
and PyCapsule::new_with_destructor
now take name: Option<CString>
instead of &CStr
.F
in PyCapsule::new_with_destructor
must now be Send
.PyCapsule::get_context
deprecated in favor of PyCapsule::context
which doesn't take a py: Python<'_>
argument.PyCapsule::set_context
no longer takes a py: Python<'_>
argument.PyCapsule::name
now returns PyResult<Option<&CStr>>
instead of &CStr
.FromPyObject::extract
for Vec<T>
no longer accepts Python str
inputs. #2500#[pymodule]
is only initialized once. #2523pyo3_build_config::add_extension_module_link_args
now also emits linker arguments for wasm32-unknown-emscripten
. #2538PySequence
and PyMapping
now require inputs to inherit from (or register with) collections.abc.Sequence
and collections.abc.Mapping
respectively. #2477PyFunction
when building for abi3 or PyPy. #2542Python::acquire_gil
. #2549Dict
, WeakRef
and BaseNativeType
members of the PyClass
private implementation details. #2572PyThreadState_DeleteCurrent
. #2357wrap_pymodule
interactions with name resolution rules: it no longer "sees through" glob imports of use submodule::*
when submodule::submodule
is a #[pymodule]
. #2363PyEval_EvalCodeEx
to take *const *mut PyObject
array arguments instead of *mut *mut PyObject
. #2368#[pyclass] struct r#RawName
) incorrectly having r#
at the start of the class name created in Python. #2395Py_tracefunc
to be unsafe extern "C" fn
(was previously safe). #2407#[pyo3(from_py_with = "...")]
annotations on a field in a #[derive(FromPyObject)]
struct. #2414_PyDateTime_BaseTime
and _PyDateTime_BaseDateTime
lacking leading underscores in their names. #2421PyArena
on Python 3.10 and up. #2421PyCompilerFlags
missing member cf_feature_version
on Python 3.8 and up. #2423PyAsyncMethods
missing member am_send
on Python 3.10 and up. #2423PyGenObject
having multiple incorrect members on various Python versions. #2423PySyntaxErrorObject
missing members end_lineno
and end_offset
on Python 3.10 and up. #2423PyHeapTypeObject
missing member ht_module
on Python 3.9 and up. #2423PyFrameObject
having multiple incorrect members on various Python versions. #2424 #2434PyTypeObject
missing deprecated field tp_print
on Python 3.8. #2428PyDateTime_CAPI
. PyDateTime_Date
, PyASCIIObject
, PyBaseExceptionObject
, PyListObject
, and PyTypeObject
on PyPy. #2428_inittab
field initfunc
typo'd as initfun
. #2431_PyDateTime_BaseTime
and _PyDateTime_BaseDateTime
incorrectly having fold
member. #2432PyTypeObject
. PyHeapTypeObject
, and PyCFunctionObject
having incorrect members on PyPy 3.9. #2433PyGetSetDef
to have *const c_char
for doc
member (not *mut c_char
). #2439#[pyo3(from_py_with = "...")]
being ignored for 1-element tuple structs and transparent structs. #2440memoffset
to avoid UB when computing PyCell
layout. #2450repr
for enums renamed by #[pyclass(name = "...")]
#2457PyObject_CallNoArgs
incorrectly being available when building for abi3 on Python 3.9. #2476#[pyfunction]
arguments. #2503PyCapsule
type with select workarounds. Users are encouraged to upgrade to PyO3 0.17 at their earliest convenience, which contains API breakages which fix the issues in a long-term fashion. #2522
PyCapsule::new
and PyCapsule::new_with_destructor
now take ownership of a copy of the name
to resolve a possible use-after-free.PyCapsule::name
now returns an empty CStr
instead of dereferencing a null pointer if the capsule has no name.F
in PyCapsule::new_with_destructor
will never be called if the capsule is deleted from a thread other than the one which the capsule was created in (a warning will be emitted).generate-import-lib
feature to support auto-generating non-abi3 python import libraries for Windows targets. #2364Py_ExitStatusException
. #2374generate-abi3-import-lib
feature in favor of the new generate-import-lib
feature. #2364warn_default_encoding
field to PyConfig
on 3.10+. The previously missing field could result in incorrect behavior or crashes. #2370pathconfig_warnings
and program_name
fields of PyConfig
on 3.10+. Previously, the order of the fields was swapped and this could lead to incorrect behavior or crashes. #2370PyTzInfoAccess
trait for safe access to time zone information. #2263generate-abi3-import-lib
feature to auto-generate python3.dll
import libraries for Windows. #2282PyDateTime_BaseTime
and PyDateTime_BaseDateTime
. #2294FromPyObject::extract
which is common when functions accept multiple distinct types. #2279libpython
link name for CPython 3.7 on Unix. #2288PyDateTime_DATE_GET_TZINFO
or PyDateTime_TIME_GET_TZINFO
on datetime
or time
without a tzinfo. #2289n
breaking serialization of the interpreter configuration on Windows since PyO3 0.16.3. #2299parking_lot
dependency supported versions to include 0.12. #2239pyo3_build_config::InterpreterConfig
to run Python scripts using the configured executable. #2092as_bytes
method to Py<PyBytes>
. #2235PyType_FromModuleAndSpec
, PyType_GetModule
, PyType_GetModuleState
and PyModule_AddType
. #2250pyo3_build_config::cross_compiling_from_to
as a helper to detect when PyO3 is cross-compiling. #2253#[pyclass(mapping)]
option to leave sequence slots empty in container implementations. #2265PyString::intern
to enable usage of the Python's built-in string interning. #2268intern!
macro which can be used to amortize the cost of creating Python strings by storing them inside a GILOnceCell
. #2269PYO3_CROSS_PYTHON_IMPLEMENTATION
environment variable for selecting the default cross Python implementation. #2272#[pyo3(crate = "...", text_signature = "...")]
options to be used directly in #[pyclass(crate = "...", text_signature = "...")]
. #2234PYO3_CROSS_LIB_DIR
environment variable optional when cross compiling. #2241METH_FASTCALL
calling convention as limited API on Python 3.10. #2250pyo3_build_config::cross_compiling
in favor of pyo3_build_config::cross_compiling_from_to
. #2253abi3-py310
feature: use Python 3.10 ABI when available instead of silently falling back to the 3.9 ABI. #2242PYO3_CROSS_LIB_DIR
is set for some host/target combinations. #2232syn
to require minimal patch version 1.0.56. #2240pyo3-ffi
runs before that of pyo3
to fix cross compilation. #2224hashbrown
optional dependency supported versions to include 0.12. #2197pyo3-build-config
. #2198indoc
optional dependency to 1.0. #2004abi3-py36
feature. #2006pyo3-build-config
no longer enables the resolve-config
feature by default. #2008inventory
optional dependency to 0.2. #2019paste
dependency. #2081pyo3::ffi
are now a re-export of a separate pyo3-ffi
crate. #2126PyCapsule
type exposing the Capsule API. #1980pyo3_build_config::Sysconfigdata
and supporting APIs. #1996Py::setattr
method. #2009#[pyo3(crate = "some::path")]
option to all attribute macros (except the deprecated #[pyproto]
). #2022create_exception!
macro to take an optional docstring. #2027#[pyclass]
for fieldless (aka C-like) enums. #2034__getbuffer__
and __releasebuffer__
to #[pymethods]
. #2067wrap_pyfunction
and wrap_pymodule
. #2081wrap_pyfunction!
to wrap a #[pyfunction]
implemented in a different Rust module or crate. #2091PyAny::contains
method (in
operator for PyAny
). #2115PyMapping::contains
method (in
operator for PyMapping
). #2133__traverse__
and __clear__
to #[pymethods]
. #2159from_py_with
on struct tuples and enums to override the default from-Python conversion. #2181eq
, ne
, lt
, le
, gt
, ge
methods to PyAny
that wrap rich_compare
. #2175Py::is
and PyAny::is
methods to check for object identity. #2183__getattribute__
magic method. #2187PyType::is_subclass
, PyErr::is_instance
and PyAny::is_instance
now operate on a run-time type object instead of a type known at compile-time. The old behavior is still available as PyType::is_subclass_of
, PyErr::is_instance_of
and PyAny::is_instance_of
. #1985PyErr
(the old names are just marked deprecated for now): #2026
- pytype -> get_type
- pvalue -> value (and deprecate equivalent instance)
- ptraceback -> traceback
- from_instance -> from_value
- into_instance -> into_value
PyErr::new_type
now takes an optional docstring and now returns PyResult<Py<PyType>>
rather than a ffi::PyTypeObject
pointer. #2027PyType::is_instance
; it is inconsistent with other is_instance
methods in PyO3. Instead of typ.is_instance(obj)
, use obj.is_instance(typ)
. #2031__getitem__
, __setitem__
and __delitem__
in #[pymethods]
now implement both a Python mapping and sequence by default. #2065#[derive(FromPyObject)]
for enums. #2068wrap_pyfunction
and wrap_pymodule
. #2081__ipow__
magic method. #2083_PyCFunctionFast
. #2126PyDateTimeAPI
and PyDateTime_TimeZone_UTC
are now unsafe functions instead of statics. #2126PyDateTimeAPI
does not implicitly call PyDateTime_IMPORT anymore, to reflect the original Python API more closely. Before the first call to PyDateTime_IMPORT, a null pointer is returned; therefore PyDateTime_IMPORT must be called before any of the following FFI functions to avoid undefined behavior: #2126
- PyDateTime_TimeZone_UTC
- PyDate_Check / PyDate_CheckExact
- PyDateTime_Check / PyDateTime_CheckExact
- PyTime_Check / PyTime_CheckExact
- PyDelta_Check / PyDelta_CheckExact
- PyTZInfo_Check / PyTZInfo_CheckExact
- PyDateTime_FromTimestamp
- PyDate_FromTimestamp
gc
option for pyclass
(e.g. #[pyclass(gc)]
). Just implement a __traverse__
#[pymethod]
. #2159ml_meth
field of PyMethodDef
is now represented by the PyMethodDefPointer
union. 2166#[pyproto]
traits. #2173Default
impl for PyMethodDef
. #2166PartialEq
impl for Py
and PyAny
(use the new is
instead). #2183PyObject_HasAttr
on PyPy. #2025PyErr::into_value
. #2026needless-option-as-deref
in code generated by #[pyfunction]
and #[pymethods]
. #2040PySlice::indices
. #2061wrap_pymodule!
macro using the wrong name for a #[pymodule]
with a #[pyo3(name = "..")]
attribute. #2081#[pymethods]
accepting implementations with the wrong number of arguments. #2083#[pyfunction]
generated code when a required argument following an Option
was not provided. #2093ExactSizeIterator
implementations. #2124PyCMethod_New
on Python 3.9 and up. #2143_PyLong_NumBits
and _PyLong_AsByteArray
on PyPy. #2146AsPyPointer
for Option<T>
. #2160_PyLong_NumBits
to return size_t
instead of c_int
. #2161TypeError
thrown when argument parsing failed missing the originating causes. #2177
Py::as_ref
and Py::into_ref
for Py<PySequence>
, Py<PyIterator>
and Py<PyMapping>
. #1682PyTraceback
type to represent and format Python tracebacks. #1977#[classattr]
constants with a known magic method name (which is lowercase) no longer trigger lint warnings expecting constants to be uppercase. #1969#[classattr]
by functions with the name of a known magic method. #1969catch_unwind
in allow_threads
which can cause fatal crashes. #1989__get__
implementation when accessing descriptor on type object. #1997pyo3
's Cargo.toml
now advertises links = "python"
to inform Cargo that it links against libpython. #1819anyhow
feature to convert anyhow::Error
into PyErr
. #1822eyre
feature to convert eyre::Report
into PyErr
. #1893PyList::get_item_unchecked
and PyTuple::get_item_unchecked
to get items without bounds checks. #1733#[doc = include_str!(...)]
attributes on Rust 1.54 and up. #1746PyAny::py
as a convenience for PyNativeType::py
. #1751std::ops::Index<usize>
for PyList
, PyTuple
and PySequence
. #1825std::ops::Index
for PyList
, PyTuple
and PySequence
. #1829PyMapping
type to represent the Python mapping protocol. #1844PyList
and PyTuple
. #1849as_sequence
methods to PyList
and PyTuple
. #1860#[pymethods]
, intended as a replacement for #[pyproto]
. #1864abi3-py310
feature. #1889PyCFunction::new_closure
to create a Python function from a Rust closure. #1901#[pyfunction]
. #1925PyErr::take
to attempt to fetch a Python exception if present. #1957PyList
, PyTuple
and PySequence
's APIs now accept only usize
indices instead of isize
.
#1733, #1802, #1803
PyList::get_item
and PyTuple::get_item
now return PyResult<&PyAny>
instead of panicking. #1733PySequence::in_place_repeat
and PySequence::in_place_concat
now return PyResult<&PySequence>
instead of PyResult<()>
, which is needed in case of immutable sequences such as tuples. #1803PySequence::get_slice
now returns PyResult<&PySequence>
instead of PyResult<&PyAny>
. #1829PyTuple::split_from
. #1804PyTuple::slice
, new method PyTuple::get_slice
added with usize
indices. #1828PyParser_SimpleParseStringFlags
, PyParser_SimpleParseStringFlagsFilename
, PyParser_SimpleParseFileFlags
when building for Python 3.9. #1830PyParser_ASTFromString
, PyParser_ASTFromStringObject
, PyParser_ASTFromFile
, PyParser_ASTFromFileObject
, PyParser_SimpleParseStringFlags
, PyParser_SimpleParseStringFlagsFilename
, PyParser_SimpleParseFileFlags
, PyParser_SimpleParseString
, PyParser_SimpleParseFile
, Py_SymtableString
, and Py_SymtableStringObject
. #1830#[pymethods]
now handles magic methods similarly to #[pyproto]
. In the future, #[pyproto]
may be deprecated. #1864PySys_AddWarnOption
, PySys_AddWarnOptionUnicode
and PySys_HasWarnOptions
. #1887#[call]
attribute in favor of using fn __call__
. #1929_PyImport_FindExtensionObject
on Python 3.10. #1942PyErr::fetch
to panic in debug mode if no exception is present. #1957Python::with_gil
with Python initialized but threading not initialized. #1874python3.dll
when cross-compiling to Windows with abi3
. #1880PyTuple_ClearFreeList
incorrectly being present for Python 3.9 and up. #1887#[derive(FromPyObject)]
for enums. #1888__mod__
magic method fallback to __rmod__
. #1934._PyImport_FindExtensionObject
on Python 3.10. #1942pyo3_build_config::InterpreterConfig
and subfields public. #1848resolve-config
feature to the pyo3-build-config
to control whether its build script does anything. #1856s390x-unknown-linux-gnu
target. #1850PyString::data
as unsafe
and disable it and some supporting PyUnicode FFI APIs (which depend on a C bitfield) on big-endian targets. #1834PyString::data
to access the raw bytes stored in a Python string. #1794AttributeError
to avoid panic when calling del
on a #[setter]
defined class property. #1779PyGILState_Check
and Py_tracefunc
to the unlimited API. #1787_type
field to PyStatus
struct definition. #1791num-complex
optional dependency to support interop with rust-numpy
and ndarray
when building with the MSRV of 1.41 #1799Python::run_code
. #1806PyModule::from_code
. #1810pyo3::
in pyo3::types::datetime
which broke builds using -Z avoid-dev-deps
#1811indexmap
feature to add ToPyObject
, IntoPy
and FromPyObject
implementations for indexmap::IndexMap
. #1728pyo3_build_config::add_extension_module_link_args
to use in build scripts to set linker arguments (for macOS). #1755Python::with_gil_unchecked
unsafe variation of Python::with_gil
to allow obtaining a Python
in scenarios where Python::with_gil
would fail. #1769PyErr::new
no longer acquires the Python GIL internally. #1724cargo:rustc-cdylib-link-arg
in its build script, as Cargo unintentionally allowed crates to pass linker args to downstream crates in this way. Projects supporting macOS may need to restore .cargo/config.toml
files. #1755#[doc(hidden)]
on structs and functions annotated with PyO3 macros. #1722#[pyfunction]
s. #1726Py_Buffer
on PyPy. #1737dictoffset
on 32-bit Windows. #1475Py_DecodeLocale
. The 2nd argument is now *mut Py_ssize_t
instead of Py_ssize_t
. #1766IntoPy<PyObject>
for &PathBuf
and &OsString
. #1712PyList_SET_ITEM
. #1713num-bigint
optional dependency to 0.4. #1481num-complex
optional dependency to 0.4. #1482hashbrown
optional dependency supported versions to include 0.11. #1496[T; N]
to all N
using const generics (on Rust 1.51 and up). #1128OsStr
/ OsString
and Python strings. #1379Path
/ PathBuf
and Python strings (and pathlib.Path
objects). #1379 #1654#[pyo3(...)]
attributes to control various PyO3 macro functionality:
PyCFunction_CheckExact
for Python 3.9 and later. #1425Py_IS_TYPE
. #1429_Py_InitializeMain
. #1473cpython/import.h
.#1475#[pyclass]
macro. #1504PyDateTime_TimeZone_UTC
. #1572#[pyclass(extends=Exception)]
. #1591PyErr::cause
and PyErr::set_cause
. #1679cpython/pystate.h
. #1687wrap_pyfunction!
macro to pyo3::prelude
. #1695#[pymethods]
block per #[pyclass]
by default, to remove the dependency on inventory
. Add a multiple-pymethods
feature to opt-in the original behavior and dependency on inventory
. #1457PyTimeAccess::get_fold
to return a bool
instead of a u8
. #1397PyCFunction_Call
for Python 3.9 and up. #1425PyModule_GetFilename
. #1425auto-initialize
feature is no longer enabled by default. #1443PyCFunction::new
and PyCFunction::new_with_keywords
to take &'static str
arguments rather than implicitly copying (and leaking) them. #1450PyModule::call
, PyModule::call0
, PyModule::call1
and PyModule::get
. #1492PyBufferError
s raised from PyBuffer::copy_to_slice
and PyBuffer::copy_from_slice
. #1534-undefined
and dynamic_lookup
linker arguments on macOS with the extension-module
feature. #1539#[pyproto]
methods which are easier to implement as #[pymethods]
: #1560
PyBasicProtocol::__bytes__
and PyBasicProtocol::__format__
PyContextProtocol::__enter__
and PyContextProtocol::__exit__
PyDescrProtocol::__delete__
and PyDescrProtocol::__set_name__
PyMappingProtocol::__reversed__
PyNumberProtocol::__complex__
and PyNumberProtocol::__round__
PyAsyncProtocol::__aenter__
and PyAsyncProtocol::__aexit__
#[pyo3(...)]
options:
PyEval_InitThreads
in #[pymodule]
init code. #1630METH_FASTCALL
argument passing convention, when possible, to improve #[pyfunction]
and method performance.
+#1619, #1660BaseException
etc. #1426Python::is_instance
, Python::is_subclass
, Python::release
, Python::xdecref
, and Py::from_owned_ptr_or_panic
. #1426PyTypeInfo
:
PYO3_CROSS_INCLUDE_DIR
environment variable and the associated C header parsing functionality. #1521raw_pycfunction!
macro. #1619PyClassAlloc
trait. #1657PyList::get_parked_item
. #1664PyCFunction_ClearFreeList
for Python 3.9 and later. #1425PYO3_CROSS_LIB_DIR
environment variable no long required when compiling for x86-64 Python from macOS arm64 and reverse. #1428_PyEval_RequestCodeExtraIndex
, which took an argument of the wrong type. #1429PyIndex_Check
missing with the abi3
feature. #1436TypeError
raised when keyword-only argument passed along with a positional argument in *args
. #1440&PyTuple
of *args
in #[pyfunction]
. #1440#[pymethods]
inside macro expansions. #1505__doc__
in __all__
generated for #[pymodule]
. #1509PYO3_CROSS
family of environment variables are set. #1514EnvironmentError
, IOError
, and WindowsError
on PyPy. #1533cargo check
and cargo clippy
in a Python virtualenv. #1557ffi::PyDateTimeAPI
without the GIL. #1563FromPyObject
implementations for u128
and i128
. #1638#[pyclass(extends=PyDict)]
leaking the dict contents on drop. #1657PyList::get_item
with negative indices. #1668PyEval_SetProfile
/PyEval_SetTrace
to take Option<Py_tracefunc>
parameters. #1692ToPyObject
impl for HashSet
to accept non-default hashers. #1702with_embedded_python_interpreter
to initialize a Python interpreter, execute a closure, and finalize the interpreter. #1355serde
feature which provides implementations of Serialize
and Deserialize
for Py<T>
. #1366_PyCFunctionFastWithKeywords
on Python 3.7 and up. #1384PyDateTime::new_with_fold
method. #1398size_hint
impls for {PyDict,PyList,PySet,PyTuple}Iterator
s. #1699prepare_freethreaded_python
will no longer register an atexit
handler to call Py_Finalize
. This resolves a number of issues with incompatible C extensions causing crashes at finalization. #1355PyLayout::py_init
, PyClassDict::clear_dict
, and opt_to_pyobj
safe, as they do not perform any unsafe operations. #1404r#raw_idents
as argument names in pyfunctions. #1383PyFunction_GetCode
(was incorrectly PyFunction_Code
). #1387PyMarshal_WriteObjectToString
and PyMarshal_ReadObjectFromString
as available in limited API. #1387PyListObject
and those from funcobject.h
as requiring non-limited API. #1387Result
usage in pyobject_native_type_base
. #1402#[pyclass(dict)]
and #[pyclass(weakref)]
with the abi3
feature on Python 3.9 and up. #1342PyOS_BeforeFork
, PyOS_AfterFork_Parent
, PyOS_AfterFork_Child
for Python 3.7 and up. #1348auto-initialize
feature to control whether PyO3 should automatically initialize an embedded Python interpreter. For compatibility this feature is enabled by default in PyO3 0.13.1, but is planned to become opt-in from PyO3 0.14.0. #1347PYO3_CROSS_INCLUDE_DIR
. #1350PyEval_CallObjectWithKeywords
, PyEval_CallObject
, PyEval_CallFunction
, PyEval_CallMethod
when building for Python 3.9. #1338PyGetSetDef_DICT
and PyGetSetDef_INIT
which have never been in the Python API. #1341PyGen_NeedsFinalizing
, PyImport_Cleanup
(removed in 3.9), and PyOS_InitInterrupts
(3.10). #1348PyOS_AfterFork
for Python 3.7 and up. #1348PyCoro_Check
, PyAsyncGen_Check
, and PyCoroWrapper_Check
, which have never been in the Python API (for the first two, it is possible to use PyCoro_CheckExact
and PyAsyncGen_CheckExact
instead; these are the actual functions provided by the Python API). #1348PyUnicode_FromUnicode
, PyUnicode_AsUnicode
and PyUnicode_AsUnicodeAndSize
, which will be removed from 3.12 and up due to PEP 623. #1370PyFrame_ClearFreeList
when building for Python 3.9. #1341_PyDict_Contains
when building for Python 3.10. #1341PyGen_NeedsFinalizing
and PyImport_Cleanup
(for 3.9 and up), and PyOS_InitInterrupts
(3.10). #1348Py_TRACE_REFS
config setting automatically if Py_DEBUG
is set on Python 3.8 and up. #1334#[deny(warnings)]
attribute (and instead refuse warnings only in CI). #1340__module__
with #[pyclass]
. #1343PyFrozenSet::empty
to &PyFrozenSet
(was incorrectly &PySet
). #1351Py_INCREF
on heap type objects on Python versions before 3.8. #1365pyo3cls
and pyo3-derive-backend
to pyo3-macros
and pyo3-macros-backend
respectively. #1317#[pyclass]
types. See the migration guide for full details. #1152
abi3-py36
, abi3-py37
, abi3-py38
etc. to set the minimum Python version when using the limited API. #1263TypeError
messages generated by pymethod wrappers. #1212PyEval_SetProfile
and PyEval_SetTrace
. #1255PyContext_New
, etc). #1259PyAny::is_instance
method. #1276char
and PyString
. #1282PyBuffer_SizeFromFormat
, PyObject_LengthHint
, PyObject_CallNoArgs
, PyObject_CallOneArg
, PyObject_CallMethodNoArgs
, PyObject_CallMethodOneArg
, PyObject_VectorcallDict
, and PyObject_VectorcallMethod
. #1287u128
/i128
and PyLong
for PyPy. #1310Python::version
and Python::version_info
to get the running interpreter version. #1322PyType::name
from Cow<str>
to PyResult<&str>
. #1152#[pyclass(subclass)]
is now required for subclassing from Rust (was previously just required for subclassing from Python). #1152PyIterator
to be consistent with other native types: it is now used as &PyIterator
instead of PyIterator<'a>
. #1176PyDowncastError
messages to be closer to Python's builtin error messages. #1212Debug
and Display
impls for PyException
to be consistent with PyAny
. #1275Debug
impl of PyErr
to output more helpful information (acquiring the GIL if necessary). #1275PyTypeInfo::is_instance
and PyTypeInfo::is_exact_instance
to PyTypeInfo::is_type_of
and PyTypeInfo::is_exact_type_of
. #1278PyAny::call0
, Py::call0
and PyAny::call_method0
and Py::call_method0
on Python 3.9 and up. #1287#[pyclass(name = "MyClass")]
. #1303Python::is_instance
, Python::is_subclass
, Python::release
, and Python::xdecref
. #1292PyUnicode_AsUnicodeCopy
, PyUnicode_GetMax
, _Py_CheckRecursionLimit
, PyObject_AsCharBuffer
, PyObject_AsReadBuffer
, PyObject_CheckReadBuffer
and PyObject_AsWriteBuffer
, which will be removed in Python 3.10. #1217python3
feature. #1235PyCodeObject
struct (co_posonlyargcount
) - caused invalid access to other fields in Python >3.7. #1260x86_64-unknown-linux-musl
target from x86_64-unknown-linux-gnu
host. #1267#[text_signature]
interacting badly with rust r#raw_identifiers
. #1286PyObject_Vectorcall
and PyVectorcall_Call
. #1287#[new]
methods. #1319From<Py<T>>
for PyObject
, a regression introduced in PyO3 0.12. #1297#[pyfunction]
. #1209Python::check_signals
as a safe a wrapper for PyErr_CheckSignals
. #1214pyo3::class::methods
. #1169c_char
is u8
. #1182Py_FinalizeEx
, PyOS_getsig
, and PyOS_setsig
. #1021PyString::to_str
for accessing PyString
as &str
. #1023Python::with_gil
for executing a closure with the Python GIL. #1037PyAny::downcast
. #1050Debug
for PyIterator
. #1051PyBytes::new_with
and PyByteArray::new_with
for initialising bytes
and bytearray
objects using a closure. #1074#[derive(FromPyObject)]
macro for enums and structs. #1065Py::as_ref
and Py::into_ref
for converting Py<T>
to &T
. #1098Result
types other than PyResult
from #[pyfunction]
, #[pymethod]
and #[pyproto]
functions. #1106.ToPyObject
, IntoPy
, and FromPyObject
for hashbrown's HashMap
and HashSet
types (requires the hashbrown
feature). #1114#[pyfunction(pass_module)]
and #[pyfn(pass_module)]
to pass the module object as the first function argument. #1143PyModule::add_function
and PyModule::add_submodule
as typed alternatives to PyModule::add_wrapped
. #1143PyCFunction
and PyFunction
types. #1163RuntimeError
to PyRuntimeError
. The old names continue to exist but are deprecated.&T
or Py<T>
, just like other Python-native types.PyException::py_err
to PyException::new_err
.PyUnicodeDecodeErr::new_err
to PyUnicodeDecodeErr::new
.PyStopIteration::stop_iteration
.T: Send
for the return value T
of Python::allow_threads
. #1036PYTHON_SYS_EXECUTABLE
to PYO3_PYTHON
. The old name will continue to work (undocumented) but will be removed in a future release. #1039unsafe
from signature of PyType::as_type_ptr
. #1047PyIterator::from_object
to PyResult<PyIterator>
(was Result<PyIterator, PyDowncastError>
). #1051IntoPy
is no longer implied by FromPy
. #1063PyObject
to be a type alias for Py<PyAny>
. #1063PyErr
to be compatible with the std::error::Error
trait: #1067 #1115
Display
, Error
, Send
and Sync
for PyErr
and PyErrArguments
.PyErr::instance
for accessing PyErr
as &PyBaseException
.PyErr
's fields are now an implementation detail. The equivalent values can be accessed with PyErr::ptype
, PyErr::pvalue
and PyErr::ptraceback
.PyErr::print
and PyErr::print_and_set_sys_last_vars
to &self
(was self
).PyErrValue
, PyErr::from_value
, PyErr::into_normalized
, and PyErr::normalize
.PyException::into
.Into<PyResult<T>>
for PyErr
and PyException
.#[pyproto]
to return NotImplemented
if Python should try a reversed operation. #1072PyModule::add
to impl IntoPy<PyObject>
(was impl ToPyObject
). #1124PyErr
APIs; see the "changed" section above. #1024 #1067 #1115PyString::to_string
(use new PyString::to_str
). #1023PyString::as_bytes
. #1023Python::register_any
. #1023GILGuard::acquire
from the public API. Use Python::acquire_gil
or Python::with_gil
. #1036FromPy
trait. #1063AsPyRef
trait. #1098Py_SetProgramName
and Py_SetPythonHome
to take *const
arguments (was *mut
). #1021FromPyObject
for num_bigint::BigInt
for Python objects with an __index__
method. #1027_PyLong_AsByteArray
to take *mut c_uchar
argument (was *const c_uchar
). #1029#[pyclass(dict, unsendable)]
. #1058 #1059&Self
as an argument type for functions in a #[pymethods]
block. #1071#[pyproto]
implementations. #1093extension-module
feature. #1095+
operator not trying __radd__
when both __add__
and __radd__
are defined in PyNumberProtocol
(and similar for all other reversible operators). #1107#[pyclass(unsendable)]
. #1009parking_lot
dependency to 0.11
. #1010PyObject_AsFileDescriptor
. #938PyByteArray::data
, PyByteArray::as_bytes
, and PyByteArray::as_bytes_mut
. #967GILOnceCell
to use in situations where lazy_static
or once_cell
can deadlock. #975Py::borrow
, Py::borrow_mut
, Py::try_borrow
, and Py::try_borrow_mut
for accessing #[pyclass]
values. #976IterNextOutput
and IterANextOutput
for returning from __next__
/ __anext__
. #997#[pyo3(get)]
attribute. (Remove the hidden API GetPropertyValue
.) #934Py_Finalize
at exit to flush buffers, etc. #943Send
bound for #[pyclass]
. #966Python
argument to most methods on PyObject
and Py<T>
to ensure GIL safety. #970PyTypeObject::type_object
- now takes Python
argument and returns &PyType
. #970PyTuple::slice
and PyTuple::split_from
from Py<PyTuple>
to &PyTuple
. #970PyTuple::as_slice
to &[&PyAny]
. #971PyTypeInfo::type_object
to type_object_raw
, and add Python
argument. #975num-complex
optional dependendency from 0.2
to 0.3
. #977num-bigint
optional dependendency from 0.2
to 0.3
. #978#[pyproto]
is re-implemented without specialization. #961PyClassAlloc::alloc
is renamed to PyClassAlloc::new
. #990#[pyproto]
methods can now have return value T
or PyResult<T>
(previously only PyResult<T>
was supported). #996#[pyproto]
methods can now skip annotating the return type if it is ()
. #998ManagedPyRef
(unused, and needs specialization) #930None
to Option<T>
argument of a #[pyfunction]
with a default value. #936PyClass.__new__
's not respecting subclasses when inherited by a Python class. #990Option<T>
from #[pyproto]
methods. #996PyRef<Self>
and PyRefMut<Self>
to #[getter]
and #[setter]
methods. #999Python::acquire_gil
after dropping a PyObject
or Py<T>
. #924_PyDict_NewPresized
. #849IntoPy<PyObject>
for HashSet
and BTreeSet
. #864PyAny::dir
method. #886macros
feature (enabled by default). #897#[classattr]
on functions in #[pymethods]
. #905Clone
for PyObject
and Py<T>
. #908Deref<Target = PyAny>
for all builtin types. (PyList
, PyTuple
, PyDict
etc.) #911Deref<Target = PyAny>
for PyCell<T>
. #911#[classattr]
support for associated constants in #[pymethods]
. #914PanicException
. #797PyObject
and Py<T>
reference counts to decrement immediately upon drop when the GIL is held. #851PyIterProtocol
methods to use either PyRef
or PyRefMut
as the receiver type. #856FromPyObject
for Py<T>
to apply to a wider range of T
, including all T: PyClass
. #880ObjectProtocol
trait to the PyAny
struct. #911#![feature(specialization)]
in crates depending on PyO3. #917PyMethodsProtocol
trait. #889num-traits
dependency. #895ObjectProtocol
trait. #911PyAny::None
. Users should use Python::None
instead. #911*ProtocolImpl
traits. #917__radd__
and other __r*__
methods as implementations for Python mathematical operators. #839&'static
references to Python objects as arguments to #[pyfunction]
and #[pymethods]
. #869AsPyRef::as_ref
. #876#[pyo3(get)]
attribute on Py<T>
fields. #880PyList::get_item
returning borrowed objects when it was not safe to do so. #890Python::acquire_gil
calls creating dangling references. #893Python::allow_threads
. #912FromPyObject
implementations for HashSet
and BTreeSet
. #842PyCell
, which has RefCell-like features. #770PyClass
, PyLayout
, PyClassInitializer
. #683IntoIterator
for PySet
and PyFrozenSet
. #716FromPyObject
is now automatically implemented for T: Clone
pyclasses. #730#[pyo3(get)]
and #[pyo3(set)]
will now use the Rust doc-comment from the field for the Python property. #755#[setter]
functions may now take an argument of Pyo3::Python
. #760PyTypeInfo::BaseLayout
and PyClass::BaseNativeType
. #770PyDowncastImpl
. #770FromPyObject
and IntoPy<PyObject>
traits for arrays (up to 32). #778migration.md
and types.md
in the guide. #795, #802ffi::{_PyBytes_Resize, _PyDict_Next, _PyDict_Contains, _PyDict_GetDictPtr}
. #820#[new]
does not take PyRawObject
and can return Self
. #683FromPyObject
for &T
and &mut T
are no longer specializable. Implement PyTryFrom
for your type to control the behavior of FromPyObject::extract
for your types. #713IntoPy<U> for T
where U: FromPy<T>
is no longer specializable. Control the behavior of this via the implementation of FromPy
. #713parking_lot::Mutex
instead of spin::Mutex
. #7341.42.0-nightly 2020-01-21
. #761PyRef
and PyRefMut
are renewed for PyCell
. #770PyAny
is now on the top level module and prelude. #816PyRawObject
. #683PyNoArgsFunction
. #741initialize_type
. To set the module name for a #[pyclass]
, use the module
argument to the macro. #751AsPyRef::as_mut/with/with_mut/into_py/into_mut_py
. #770PyTryFrom::try_from_mut/try_from_mut_exact/try_from_mut_unchecked
. #770Python::mut_from_owned_ptr/mut_from_borrowed_ptr
. #770ObjectProtocol::get_base/get_mut_base
. #770#[pyo3(set)]
. #745PyObject
with #[pyo3(get)]
. #760#[pymethods]
used in conjunction with #[cfg]
. #769"*"
in a #[pyfunction()]
argument list incorrectly accepting any number of positional arguments (use args = "*"
when this behavior is desired). #792PyModule::dict
. #809DESCRIPTION
is not null-terminated. #822FromPyObject
for HashMap
and BTreeMap
#[name = "foo"]
attribute for #[pyfunction]
and in #[pymethods]
. #692#[text_signature]
attribute. #675#[init]
is removed. #658&Py~
types have !Send
bound. #655!
type. #672.PyString::as_bytes
. #639
and PyString::to_string_lossy
#642.__contains__
and __iter__
from PyMappingProtocol. #644module
argument to pyclass
macro. #499py_run!
macro #512PyBytes
can now be indexed just like Vec<u8>
IntoPy<PyObject>
for PyRef
and PyRefMut
.gc
parameter for pyclass
(e.g. #[pyclass(gc)]
) without implementing the class::PyGCProtocol
trait is now a compile-time error. Failing to implement this trait could lead to segfaults. #532PyByteArray::data
has been replaced with PyDataArray::to_vec
because returning a &[u8]
is unsound. (See this comment for a great write-up for why that was unsound)mashup
with paste
.GILPool
gained a Python
marker to prevent it from being misused to release Python objects without the GIL held.IntoPyObject
was replaced with IntoPy<PyObject>
#[pyclass(subclass)]
is hidden behind an unsound-subclass feature because it was causing segmentation faults.
generate an index of its members (__all__
list).slf: PyRef<T>
for pyclass(#419)pymethods
marshal
module. #460Python::run
returns PyResult<()>
instead of PyResult<&PyAny>
.#[getter]
and #[setter]
can now omit wrapping the
result type in PyResult
if they don't raise exceptions.type_object::PyTypeObject
has been marked unsafe because breaking the contract type_object::PyTypeObject::init_type
can lead to UB.PySequenceProtocol
implementation in #423.#[getter]
method.pymethods
crashing on doc comments containing double quotes.PySet::new
and PyFrozenSet::new
now return PyResult<&Py[Frozen]Set>
; exceptions are raised if
the items are not hashable.
on Windows.PyTuple::new
now returns &PyTuple
instead of Py<PyTuple>
.*args
and **kwargs
tuple/dict now doesn't contain arguments that are otherwise assigned to parameters.
cargo test
to fail with weird linking errors when the extension-module
feature is activated. For now you can work around this by making the extension-module
feature optional and running the tests with cargo test --no-default-features
:

[dependencies.pyo3]
version = "0.6.0"

[features]
extension-module = ["pyo3/extension-module"]
default = ["extension-module"]
wrap_pymodule!
macro similar to the existing wrap_pyfunction!
macro. Only available on Python 3.
PyRef
and PyRefMut
types, which allow to differentiate between an instance of a rust struct on the rust heap and an instance that is embedded inside a python object. By kngwyu in #335FromPy<T>
and IntoPy<T>
which are equivalent to From<T>
and Into<T>
except that they require a gil token.ManagedPyRef
, which should eventually replace ToBorrowedObject
.PyObjectRef
to PyAny
in #388add_function
to add_wrapped
as it now also supports modules.#[pymodinit]
to #[pymodule]
py.init(|| value)
becomes Py::new(value)
py.init_ref(|| value)
becomes PyRef::new(value)
py.init_mut(|| value)
becomes PyRefMut::new(value)
.PyRawObject::init
is now infallible, e.g. it returns ()
instead of PyResult<()>
.py_exception!
to create_exception!
and refactored the error macros.wrap_function!
to wrap_pyfunction!
#[prop(get, set)]
to #[pyo3(get, set)]
#[pyfunction]
now supports the same arguments as #[pyfn()]
crate::types::exceptions
moved to crate::exceptions
IntoPyTuple
with IntoPy<Py<PyTuple>>
.IntoPyPointer
and ToPyPointer
moved into the crate root.class::CompareOp
moved into class::basic::CompareOp
PyList::{sort, reverse}
by chr1sj0nes in #357 and #358typeob
module to type_object
PyToken
was removed due to unsoundness (See #94).PyObjectAlloc
NoArgs
. Just use an empty tuplePyObjectWithGIL
. PyNativeType
is sufficient now that PyToken is removed.#[pyclass]
struct was considered to be part of a python object, even though you can create instances that are not part of the python heap. This was fixed through PyRef
and PyRefMut
.__dict__
in #403.Yanked
#[pyclass]
objects can now be returned from rust functionsPyComplex
by kngwyu in #226PyDict::from_sequence
, equivalent to dict([(key, val), ...])
datetime
standard library types: PyDate
, PyTime
, PyDateTime
, PyTzInfo
, PyDelta
with associated ffi
types, by pganssle #200.PyString
, PyUnicode
, and PyBytes
now have an as_bytes
method that returns &[u8]
.PyObjectProtocol::get_type_ptr
by ijl in #242pyo3::types
instead.py_err
instead of new
, as they return PyErr
and not Self
.as_mut
and friends take and &mut self
instead of &self
ObjectProtocol::call
now takes an Option<&PyDict>
for the kwargs instead of an IntoPyDictPointer
.IntoPyDictPointer
was replace by IntoPyDict
which doesn't convert PyDict
itself anymore and returns a PyDict
instead of *mut PyObject
.PyTuple::new
now takes an IntoIterator
instead of a slicePyTypeObject
into PyTypeObject
without the create method and PyTypeCreate
with requires PyObjectAlloc<Self> + PyTypeInfo + Sized
.cargo edition --fix
which prefixed path with crate::
for rust 2018async
to pyasync
as async will be a keyword in the 2018 edition.NonNull<*mut PyObject>
for Py and PyObject by ijl #260PyString
, PyUnicode
, and PyBytes
no longer have a data
method
+(replaced by as_bytes
) and PyStringData
has been removed.__class__
by kngwyu #263PyDowncastError
use_extern_macros
was stabilizedconcat_idents
with mashupproc_macro
has been stabilized on nightly (rust-lang/rust#52081). This means that we can remove the proc_macro
feature, but now we need the use_extern_macros
from the 2018 edition instead.py
and live in the prelude. This means you can use #[pyclass]
, #[pymethods]
, #[pyproto]
, #[pyfunction]
and #[pymodinit]
directly, at least after a use pyo3::prelude::*
. They were also moved into a module called proc_macro
. You shouldn't use #[pyo3::proc_macro::pyclass]
or other longer paths in attributes because proc_macro_path_invoc
isn't going to be stabilized soon.base
option in the pyclass
macro to extends
.#[pymodinit]
uses the function name as module name, unless the name is overrriden with #[pymodinit(name)]
RefFromPyObject
traitFromPyObject
for Py<T>
c_char
usage #93self.__dict__
supoort #68pyo3::prelude
module #70Iterator
support for PyTuple, PyList, PyDict #75PyErr
implementation. Drop py
parameter from constructor.PyO3 exposes a group of attributes powered by Rust's proc macro system for defining Python classes as Rust structs.
+The main attribute is #[pyclass]
, which is placed upon a Rust struct
or enum
to generate a Python type for it. They will usually also have one #[pymethods]
-annotated impl
block for the struct, which is used to define Python methods and constants for the generated Python type. (If the multiple-pymethods
feature is enabled, each #[pyclass]
is allowed to have multiple #[pymethods]
blocks.) #[pymethods]
may also have implementations for Python magic methods such as __str__
.
This chapter will discuss the functionality and configuration these attributes offer.
+ +To define a custom Python class, add the #[pyclass]
attribute to a Rust struct or enum.
#![allow(dead_code)]
+use pyo3::prelude::*;
+
+#[pyclass]
+struct MyClass {
+ inner: i32,
+}
+
+// A "tuple" struct
+#[pyclass]
+struct Number(i32);
+
+// PyO3 supports unit-only enums (which contain only unit variants)
+// These simple enums behave similarly to Python's enumerations (enum.Enum)
+#[pyclass(eq, eq_int)]
+#[derive(PartialEq)]
+enum MyEnum {
+ Variant,
+ OtherVariant = 30, // PyO3 supports custom discriminants.
+}
+
+// PyO3 supports custom discriminants in unit-only enums
+#[pyclass(eq, eq_int)]
+#[derive(PartialEq)]
+enum HttpResponse {
+ Ok = 200,
+ NotFound = 404,
+ Teapot = 418,
+ // ...
+}
+
+// PyO3 also supports enums with Struct and Tuple variants
+// These complex enums have slightly different behavior from the simple enums above
+// They are meant to work with instance checks and match statement patterns
+// The variants can be mixed and matched
+// Struct variants have named fields while tuple enums generate generic names for fields in order _0, _1, _2, ...
+// Apart from this both types are functionally identical
+#[pyclass]
+enum Shape {
+ Circle { radius: f64 },
+ Rectangle { width: f64, height: f64 },
+ RegularPolygon(u32, f64),
+ Nothing(),
+}
+The above example generates implementations for PyTypeInfo
and PyClass
for MyClass
, Number
, MyEnum
, HttpResponse
, and Shape
. To see these generated implementations, refer to the implementation details at the end of this chapter.
To integrate Rust types with Python, PyO3 needs to place some restrictions on the types which can be annotated with #[pyclass]
. In particular, they must have no lifetime parameters, no generic parameters, and must implement Send
. The reason for each of these is explained below.
Rust lifetimes are used by the Rust compiler to reason about a program's memory safety. They are a compile-time only concept; there is no way to access Rust lifetimes at runtime from a dynamic language like Python.
+As soon as Rust data is exposed to Python, there is no guarantee the Rust compiler can make about how long the data will live. Python is a reference-counted language and those references can be held for an arbitrarily long time, which is untraceable by the Rust compiler. The only possible way to express this correctly is to require that any #[pyclass]
does not borrow data for any lifetime shorter than the 'static
lifetime, i.e. the #[pyclass]
cannot have any lifetime parameters.
When you need to share ownership of data between Python and Rust, instead of using borrowed references with lifetimes, consider using reference-counted smart pointers such as Arc
or Py
.
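For instance, instead of a struct borrowing `&'a str` data (which could not be a `#[pyclass]`), the data can be shared via `Arc`. Below is a std-only sketch; the `#[pyclass]` attribute itself is omitted so the snippet stands alone, and `SharedText` is a hypothetical name:

```rust
use std::sync::Arc;

// A `'static` type: it owns a share of its data rather than borrowing it
// for some lifetime `'a`, so it would be eligible for `#[pyclass]`.
struct SharedText {
    data: Arc<String>,
}

fn main() {
    let original = Arc::new(String::from("hello"));

    // Cloning the `Arc` only bumps a reference count; no lifetime ties
    // `SharedText` to the scope of `original`.
    let wrapper = SharedText {
        data: Arc::clone(&original),
    };

    // Dropping the first handle is fine: the wrapper keeps the data alive.
    drop(original);
    assert_eq!(wrapper.data.as_str(), "hello");
}
```

Shared ownership means neither side needs to know which handle is dropped last — the same property Python's reference counting requires.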
A Rust struct Foo<T>
with a generic parameter T
generates new compiled implementations each time it is used with a different concrete type for T
. These new implementations are generated by the compiler at each usage site. This is incompatible with wrapping Foo
in Python, where there needs to be a single compiled implementation of Foo
which is integrated with the Python interpreter.
Currently, the best alternative is to write a macro which expands to a new #[pyclass]
for each instantiation you want:
#![allow(dead_code)]
+use pyo3::prelude::*;
+
+struct GenericClass<T> {
+ data: T,
+}
+
+macro_rules! create_interface {
+ ($name: ident, $type: ident) => {
+ #[pyclass]
+ pub struct $name {
+ inner: GenericClass<$type>,
+ }
+ #[pymethods]
+ impl $name {
+ #[new]
+ pub fn new(data: $type) -> Self {
+ Self {
+ inner: GenericClass { data },
+ }
+ }
+ }
+ };
+}
+
+create_interface!(IntClass, i64);
+create_interface!(FloatClass, f64);
+Because Python objects are freely shared between threads by the Python interpreter, there is no guarantee which thread will eventually drop the object. Therefore all types annotated with #[pyclass]
must implement Send
(unless annotated with #[pyclass(unsendable)]
).
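The same requirement shows up in plain Rust whenever a value might be dropped on a different thread: `thread::spawn` insists that everything moved into its closure is `Send`. A std-only sketch (`Payload` is a hypothetical name):

```rust
use std::thread;

// This type is `Send` automatically because all of its fields are.
// A field like `Rc<T>` would make it non-`Send`, and the `spawn` call
// below would fail to compile - mirroring how PyO3 rejects non-`Send`
// `#[pyclass]` types unless `unsendable` is used.
struct Payload {
    name: String,
}

fn main() {
    let value = Payload {
        name: String::from("owned by main"),
    };

    // Moving `value` into the closure means the spawned thread drops it.
    let handle = thread::spawn(move || {
        assert_eq!(value.name, "owned by main");
        // `value` is dropped here, on the spawned thread.
    });

    handle.join().unwrap();
}
```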
By default, it is not possible to create an instance of a custom class from Python code.
+To declare a constructor, you need to define a method and annotate it with the #[new]
+attribute. Only Python's __new__
method can be specified, __init__
is not available.
#![allow(dead_code)]
+use pyo3::prelude::*;
+#[pyclass]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ #[new]
+ fn new(value: i32) -> Self {
+ Number(value)
+ }
+}
+Alternatively, if your new
method may fail, you can return PyResult<Self>
.
#![allow(dead_code)]
+use pyo3::prelude::*;
+use pyo3::exceptions::PyValueError;
+#[pyclass]
+struct Nonzero(i32);
+
+#[pymethods]
+impl Nonzero {
+ #[new]
+ fn py_new(value: i32) -> PyResult<Self> {
+ if value == 0 {
+ Err(PyValueError::new_err("cannot be zero"))
+ } else {
+ Ok(Nonzero(value))
+ }
+ }
+}
+If you want to return an existing object (for example, because your new
+method caches the values it returns), new
can return pyo3::Py<Self>
.
As you can see, the Rust method name is not important here; this way you can
+still use new()
for a Rust-level constructor.
If no method marked with #[new]
is declared, object instances can only be
+created from Rust, but not from Python.
For arguments, see the Method arguments
section below.
The next step is to create the module initializer and add our class to it:
+#![allow(dead_code)]
+use pyo3::prelude::*;
+#[pyclass]
+struct Number(i32);
+
+#[pymodule]
+fn my_module(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ m.add_class::<Number>()?;
+ Ok(())
+}
+It is often useful to turn a #[pyclass]
type T
into a Python object and access it from Rust code. The Py<T>
and Bound<'py, T>
smart pointers are the ways to represent a Python object in PyO3's API. More detail can be found about them in the Python objects section of the guide.
Most Python objects do not offer exclusive (&mut
) access (see the section on Python's memory model). However, Rust structs wrapped as Python objects (called pyclass
types) often do need &mut
access. Due to the GIL, PyO3 can guarantee exclusive access to them.
The Rust borrow checker cannot reason about &mut
references once an object's ownership has been passed to the Python interpreter. This means that borrow checking is done at runtime using a scheme very similar to std::cell::RefCell<T>
. This is known as interior mutability.
Users who are familiar with RefCell<T>
can use Py<T>
and Bound<'py, T>
just like RefCell<T>
.
For users who are not very familiar with RefCell<T>
, here is a reminder of Rust's rules of borrowing:
- At any given time, you can have either one mutable reference or any number of immutable references (but not both at the same time).
- References must always be valid.
Py<T>
and Bound<'py, T>
, like RefCell<T>
, ensure these borrowing rules by tracking references at runtime.
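The same runtime tracking can be reproduced with plain `RefCell<T>` from the standard library, which uses an identical scheme (a std-only sketch):

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(3);

    {
        let shared = cell.borrow(); // a shared (immutable) borrow
        assert_eq!(*shared, 3);
        // An exclusive borrow fails while any shared borrow is alive.
        assert!(cell.try_borrow_mut().is_err());
    } // `shared` is dropped here

    {
        let mut exclusive = cell.borrow_mut(); // an exclusive borrow
        *exclusive = 5;
        // No other borrow of any kind is allowed until it is dropped.
        assert!(cell.try_borrow().is_err());
    }

    assert_eq!(*cell.borrow(), 5);
}
```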
use pyo3::prelude::*;
+#[pyclass]
+struct MyClass {
+ #[pyo3(get)]
+ num: i32,
+}
+Python::with_gil(|py| {
+ let obj = Bound::new(py, MyClass { num: 3 }).unwrap();
+ {
+ let obj_ref = obj.borrow(); // Get PyRef
+ assert_eq!(obj_ref.num, 3);
+ // You cannot get PyRefMut unless all PyRefs are dropped
+ assert!(obj.try_borrow_mut().is_err());
+ }
+ {
+ let mut obj_mut = obj.borrow_mut(); // Get PyRefMut
+ obj_mut.num = 5;
+ // You cannot get any other refs until the PyRefMut is dropped
+ assert!(obj.try_borrow().is_err());
+ assert!(obj.try_borrow_mut().is_err());
+ }
+
+ // You can convert `Bound` to a Python object
+ pyo3::py_run!(py, obj, "assert obj.num == 5");
+});
+A Bound<'py, T>
is restricted to the GIL lifetime 'py
. To make the object longer lived (for example, to store it in a struct on the
+Rust side), use Py<T>
. Py<T>
needs a Python<'_>
token to allow access:
use pyo3::prelude::*;
+#[pyclass]
+struct MyClass {
+ num: i32,
+}
+
+fn return_myclass() -> Py<MyClass> {
+ Python::with_gil(|py| Py::new(py, MyClass { num: 1 }).unwrap())
+}
+
+let obj = return_myclass();
+
+Python::with_gil(move |py| {
+ let bound = obj.bind(py); // Py<MyClass>::bind returns &Bound<'py, MyClass>
+ let obj_ref = bound.borrow(); // Get PyRef<T>
+ assert_eq!(obj_ref.num, 1);
+});
+As detailed above, runtime borrow checking is currently enabled by default. But a class can opt out of it by declaring itself frozen
. It can still use interior mutability via standard Rust types like RefCell
or Mutex
, but it is not bound to the implementation provided by PyO3 and can choose the most appropriate strategy on a field-by-field basis.
Classes which are frozen
and also Sync
, e.g. they do use Mutex
but not RefCell
, can be accessed without needing the Python GIL via the Bound::get
and Py::get
methods:
use std::sync::atomic::{AtomicUsize, Ordering};
+use pyo3::prelude::*;
+
+#[pyclass(frozen)]
+struct FrozenCounter {
+ value: AtomicUsize,
+}
+
+let py_counter: Py<FrozenCounter> = Python::with_gil(|py| {
+ let counter = FrozenCounter {
+ value: AtomicUsize::new(0),
+ };
+
+ Py::new(py, counter).unwrap()
+});
+
+py_counter.get().value.fetch_add(1, Ordering::Relaxed);
+
+Python::with_gil(move |_py| drop(py_counter));
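Atomics are not the only option for a frozen class; any `Sync` interior-mutability primitive such as `Mutex` works too. A std-only sketch — the `#[pyclass(frozen)]` attribute is omitted so the snippet stands alone, and `FrozenLogger` is a hypothetical name:

```rust
use std::sync::Mutex;

// With `#[pyclass(frozen)]`, PyO3 would only ever hand out shared
// `&FrozenLogger` references, so mutation goes through a field-level lock.
struct FrozenLogger {
    messages: Mutex<Vec<String>>,
}

impl FrozenLogger {
    // Note the `&self` receiver: no `&mut` access is ever needed.
    fn log(&self, message: &str) {
        self.messages.lock().unwrap().push(message.to_owned());
    }
}

fn main() {
    let logger = FrozenLogger {
        messages: Mutex::new(Vec::new()),
    };
    logger.log("first");
    logger.log("second");
    assert_eq!(logger.messages.lock().unwrap().len(), 2);
}
```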
+Frozen classes are likely to become the default, thereby guiding the PyO3 ecosystem towards a more deliberate application of interior mutability. Eventually, this should enable further optimizations of PyO3's internals and avoid downstream code paying the cost of interior mutability when it is not actually required.
+#[pyclass]
can be used with the following parameters:
Parameter | Description |
---|---|
constructor | This is currently only allowed on variants of complex enums. It allows customization of the generated class constructor for each variant. It uses the same syntax and supports the same options as the signature attribute of functions and methods. |
crate = "some::path" | Path to import the pyo3 crate, if it's not accessible at ::pyo3 . |
dict | Gives instances of this class an empty __dict__ to store custom attributes. |
eq | Implements __eq__ using the PartialEq implementation of the underlying Rust datatype. |
eq_int | Implements __eq__ using __int__ for simple enums. |
extends = BaseType | Use a custom baseclass. Defaults to PyAny |
freelist = N | Implements a free list of size N. This can improve performance for types that are often created and deleted in quick succession. Profile your code to see whether freelist is right for you. |
frozen | Declares that your pyclass is immutable. It removes the borrow checker overhead when retrieving a shared reference to the Rust struct, but disables the ability to get a mutable reference. |
get_all | Generates getters for all fields of the pyclass. |
hash | Implements __hash__ using the Hash implementation of the underlying Rust datatype. |
mapping | Inform PyO3 that this class is a Mapping , and so leave its implementation of sequence C-API slots empty. |
module = "module_name" | Python code will see the class as being defined in this module. Defaults to builtins . |
name = "python_name" | Sets the name that Python sees this class as. Defaults to the name of the Rust struct. |
ord | Implements __lt__ , __gt__ , __le__ , & __ge__ using the PartialOrd implementation of the underlying Rust datatype. Requires eq |
rename_all = "renaming_rule" | Applies renaming rules to every getter and setter of a struct, or every variant of an enum. Possible values are: "camelCase", "kebab-case", "lowercase", "PascalCase", "SCREAMING-KEBAB-CASE", "SCREAMING_SNAKE_CASE", "snake_case", "UPPERCASE". |
sequence | Inform PyO3 that this class is a Sequence , and so leave its C-API mapping length slot empty. |
set_all | Generates setters for all fields of the pyclass. |
subclass | Allows other Python classes and #[pyclass] to inherit from this class. Enums cannot be subclassed. |
text_signature = "(arg1, arg2, ...)" | Sets the text signature for the Python class' __new__ method. |
unsendable | Required if your struct is not Send. Rather than using unsendable, consider implementing your struct in a threadsafe way by e.g. substituting Rc with Arc. By using unsendable, your class will panic when accessed by another thread. Also note that Python's GC is multi-threaded and while unsendable classes will not be traversed on foreign threads to avoid UB, this can lead to memory leaks. |
weakref | Allows this class to be weakly referenceable. |
All of these parameters can either be passed directly on the #[pyclass(...)]
annotation, or as one or
+more accompanying #[pyo3(...)]
annotations, e.g.:
// Argument supplied directly to the `#[pyclass]` annotation.
+#[pyclass(name = "SomeName", subclass)]
+struct MyClass {}
+
+// Argument supplied as a separate annotation.
+#[pyclass]
+#[pyo3(name = "SomeName", subclass)]
+struct MyClass {}
+These parameters are covered in various sections of this guide.
+Generally, #[new]
methods have to return T: Into<PyClassInitializer<Self>>
or
+PyResult<T> where T: Into<PyClassInitializer<Self>>
.
For constructors that may fail, you should wrap the return type in a PyResult as well. +Consult the table below to determine which type your constructor should return:
 | Cannot fail | May fail |
---|---|---|
No inheritance | T | PyResult<T> |
Inheritance(T Inherits U) | (T, U) | PyResult<(T, U)> |
Inheritance(General Case) | PyClassInitializer<T> | PyResult<PyClassInitializer<T>> |
By default, object
, i.e. PyAny
is used as the base class. To override this default,
+use the extends
parameter for pyclass
with the full path to the base class.
+Currently, only classes defined in Rust and builtins provided by PyO3 can be inherited
+from; inheriting from other classes defined in Python is not yet supported
+(#991).
For convenience, (T, U)
implements Into<PyClassInitializer<T>>
where U
is the
+base class of T
.
+But for a more deeply nested inheritance, you have to return PyClassInitializer<T>
+explicitly.
To get a parent class from a child, use PyRef
instead of &self
for methods,
+or PyRefMut
instead of &mut self
.
+Then you can access a parent class by self_.as_super()
as &PyRef<Self::BaseClass>
,
+or by self_.into_super()
as PyRef<Self::BaseClass>
(and similar for the PyRefMut
+case). For convenience, self_.as_ref()
can also be used to get &Self::BaseClass
+directly; however, this approach does not let you access base classes higher in the
+inheritance hierarchy, for which you would need to chain multiple as_super
or
+into_super
calls.
use pyo3::prelude::*;
+
+#[pyclass(subclass)]
+struct BaseClass {
+ val1: usize,
+}
+
+#[pymethods]
+impl BaseClass {
+ #[new]
+ fn new() -> Self {
+ BaseClass { val1: 10 }
+ }
+
+ pub fn method1(&self) -> PyResult<usize> {
+ Ok(self.val1)
+ }
+}
+
+#[pyclass(extends=BaseClass, subclass)]
+struct SubClass {
+ val2: usize,
+}
+
+#[pymethods]
+impl SubClass {
+ #[new]
+ fn new() -> (Self, BaseClass) {
+ (SubClass { val2: 15 }, BaseClass::new())
+ }
+
+ fn method2(self_: PyRef<'_, Self>) -> PyResult<usize> {
+ let super_ = self_.as_super(); // Get &PyRef<BaseClass>
+ super_.method1().map(|x| x * self_.val2)
+ }
+}
+
+#[pyclass(extends=SubClass)]
+struct SubSubClass {
+ val3: usize,
+}
+
+#[pymethods]
+impl SubSubClass {
+ #[new]
+ fn new() -> PyClassInitializer<Self> {
+ PyClassInitializer::from(SubClass::new()).add_subclass(SubSubClass { val3: 20 })
+ }
+
+ fn method3(self_: PyRef<'_, Self>) -> PyResult<usize> {
+ let base = self_.as_super().as_super(); // Get &PyRef<'_, BaseClass>
+ base.method1().map(|x| x * self_.val3)
+ }
+
+ fn method4(self_: PyRef<'_, Self>) -> PyResult<usize> {
+ let v = self_.val3;
+ let super_ = self_.into_super(); // Get PyRef<'_, SubClass>
+ SubClass::method2(super_).map(|x| x * v)
+ }
+
+ fn get_values(self_: PyRef<'_, Self>) -> (usize, usize, usize) {
+ let val1 = self_.as_super().as_super().val1;
+ let val2 = self_.as_super().val2;
+ (val1, val2, self_.val3)
+ }
+
+ fn double_values(mut self_: PyRefMut<'_, Self>) {
+ self_.as_super().as_super().val1 *= 2;
+ self_.as_super().val2 *= 2;
+ self_.val3 *= 2;
+ }
+
+ #[staticmethod]
+ fn factory_method(py: Python<'_>, val: usize) -> PyResult<PyObject> {
+ let base = PyClassInitializer::from(BaseClass::new());
+ let sub = base.add_subclass(SubClass { val2: val });
+ if val % 2 == 0 {
+ Ok(Py::new(py, sub)?.to_object(py))
+ } else {
+ let sub_sub = sub.add_subclass(SubSubClass { val3: val });
+ Ok(Py::new(py, sub_sub)?.to_object(py))
+ }
+ }
+}
+Python::with_gil(|py| {
+ let subsub = pyo3::Py::new(py, SubSubClass::new()).unwrap();
+ pyo3::py_run!(py, subsub, "assert subsub.method1() == 10");
+ pyo3::py_run!(py, subsub, "assert subsub.method2() == 150");
+ pyo3::py_run!(py, subsub, "assert subsub.method3() == 200");
+ pyo3::py_run!(py, subsub, "assert subsub.method4() == 3000");
+ pyo3::py_run!(py, subsub, "assert subsub.get_values() == (10, 15, 20)");
+ pyo3::py_run!(py, subsub, "assert subsub.double_values() == None");
+ pyo3::py_run!(py, subsub, "assert subsub.get_values() == (20, 30, 40)");
+ let subsub = SubSubClass::factory_method(py, 2).unwrap();
+ let subsubsub = SubSubClass::factory_method(py, 3).unwrap();
+ let cls = py.get_type_bound::<SubSubClass>();
+ pyo3::py_run!(py, subsub cls, "assert not isinstance(subsub, cls)");
+ pyo3::py_run!(py, subsubsub cls, "assert isinstance(subsubsub, cls)");
+});
+You can inherit native types such as PyDict
, if they implement
+PySizedLayout
.
+This is not supported when building for the Python limited API (aka the abi3
feature of PyO3).
To convert between the Rust type and its native base class, you can take
+slf
as a Python object. To access the Rust fields use slf.borrow()
or
+slf.borrow_mut()
, and to access the base class use slf.downcast::<BaseClass>()
.
#[cfg(not(Py_LIMITED_API))] {
+use pyo3::prelude::*;
+use pyo3::types::PyDict;
+use std::collections::HashMap;
+
+#[pyclass(extends=PyDict)]
+#[derive(Default)]
+struct DictWithCounter {
+ counter: HashMap<String, usize>,
+}
+
+#[pymethods]
+impl DictWithCounter {
+ #[new]
+ fn new() -> Self {
+ Self::default()
+ }
+
+ fn set(slf: &Bound<'_, Self>, key: String, value: Bound<'_, PyAny>) -> PyResult<()> {
+ slf.borrow_mut().counter.entry(key.clone()).or_insert(0);
+ let dict = slf.downcast::<PyDict>()?;
+ dict.set_item(key, value)
+ }
+}
+Python::with_gil(|py| {
+ let cnt = pyo3::Py::new(py, DictWithCounter::new()).unwrap();
+ pyo3::py_run!(py, cnt, "cnt.set('abc', 10); assert cnt['abc'] == 10")
+});
+}
+If SubClass
does not provide a base class initialization, the compilation fails.
use pyo3::prelude::*;
+
+#[pyclass]
+struct BaseClass {
+ val1: usize,
+}
+
+#[pyclass(extends=BaseClass)]
+struct SubClass {
+ val2: usize,
+}
+
+#[pymethods]
+impl SubClass {
+ #[new]
+ fn new() -> Self {
+ SubClass { val2: 15 }
+ }
+}
+The __new__
constructor of a native base class is called implicitly when
+creating a new instance from Python. Be sure to accept arguments in the
+#[new]
method that you want the base class to get, even if they are not used
+in that fn
:
#[allow(dead_code)]
+#[cfg(not(Py_LIMITED_API))] {
+use pyo3::prelude::*;
+use pyo3::types::PyDict;
+
+#[pyclass(extends=PyDict)]
+struct MyDict {
+ private: i32,
+}
+
+#[pymethods]
+impl MyDict {
+ #[new]
+ #[pyo3(signature = (*args, **kwargs))]
+ fn new(args: &Bound<'_, PyAny>, kwargs: Option<&Bound<'_, PyAny>>) -> Self {
+ Self { private: 0 }
+ }
+
+ // some custom methods that use `private` here...
+}
+Python::with_gil(|py| {
+ let cls = py.get_type_bound::<MyDict>();
+ pyo3::py_run!(py, cls, "cls(a=1, b=2)")
+});
+}
+Here, the args
and kwargs
allow creating instances of the subclass passing
+initial items, such as MyDict(item_sequence)
or MyDict(a=1, b=2)
.
PyO3 supports two ways to add properties to your #[pyclass]
:
#[pyo3(get, set)]
attribute can be added directly to the field definition in the #[pyclass]
.#[getter]
and #[setter]
functions in the #[pymethods]
block.We'll cover each of these in the following sections.
+#[pyo3(get, set)]
For simple cases where a member variable is just read and written with no side effects, you can declare getters and setters in your #[pyclass]
field definition using the pyo3
attribute, like in the example below:
use pyo3::prelude::*;
+#[pyclass]
+struct MyClass {
+ #[pyo3(get, set)]
+ num: i32,
+}
+The above would make the num
field available for reading and writing as a self.num
Python property. To expose the property with a different name to the field, specify this alongside the rest of the options, e.g. #[pyo3(get, set, name = "custom_name")]
.
Properties can be readonly or writeonly by using just #[pyo3(get)]
or #[pyo3(set)]
respectively.
To use these annotations, your field type must implement some conversion traits:
+get
the field type must implement both IntoPy<PyObject>
and Clone
.set
the field type must implement FromPyObject
.For example, implementations of those traits are provided for the Cell
type, if the inner type also implements the trait. This means you can use #[pyo3(get, set)]
on fields wrapped in a Cell
.
#[getter]
and #[setter]
For cases which don't satisfy the #[pyo3(get, set)]
trait requirements, or need side effects, descriptor methods can be defined in a #[pymethods]
impl
block.
This is done using the #[getter]
and #[setter]
attributes, like in the example below:
use pyo3::prelude::*;
+#[pyclass]
+struct MyClass {
+ num: i32,
+}
+
+#[pymethods]
+impl MyClass {
+ #[getter]
+ fn num(&self) -> PyResult<i32> {
+ Ok(self.num)
+ }
+}
+A getter or setter's function name is used as the property name by default. There are several
+ways to override the name.
+If a function name starts with get_
or set_
for getter or setter respectively,
+the descriptor name becomes the function name with this prefix removed. This is also useful in case of
+Rust keywords like type
+(raw identifiers
+can be used since Rust 2018).
use pyo3::prelude::*;
+#[pyclass]
+struct MyClass {
+ num: i32,
+}
+#[pymethods]
+impl MyClass {
+ #[getter]
+ fn get_num(&self) -> PyResult<i32> {
+ Ok(self.num)
+ }
+
+ #[setter]
+ fn set_num(&mut self, value: i32) -> PyResult<()> {
+ self.num = value;
+ Ok(())
+ }
+}
+In this case, a property num
is defined and available from Python code as self.num
.
Both the #[getter]
and #[setter]
attributes accept one parameter.
+If this parameter is specified, it is used as the property name, i.e.
use pyo3::prelude::*;
+#[pyclass]
+struct MyClass {
+ num: i32,
+}
+#[pymethods]
+impl MyClass {
+ #[getter(number)]
+ fn num(&self) -> PyResult<i32> {
+ Ok(self.num)
+ }
+
+ #[setter(number)]
+ fn set_num(&mut self, value: i32) -> PyResult<()> {
+ self.num = value;
+ Ok(())
+ }
+}
+In this case, the property number
is defined and available from Python code as self.number
.
Attributes defined by #[setter]
or #[pyo3(set)]
will always raise AttributeError
on del
+operations. Support for defining custom del
behavior is tracked in
+#1778.
To define a Python compatible method, an impl
block for your struct has to be annotated with the
+#[pymethods]
attribute. PyO3 generates Python compatible wrappers for all functions in this
+block with some variations, like descriptors, class methods, static methods, etc.
Since Rust allows any number of impl
blocks, you can easily split methods
+between those accessible to Python (and Rust) and those accessible only to Rust. However, to have multiple
+#[pymethods]
-annotated impl
blocks for the same struct you must enable the multiple-pymethods
feature of PyO3.
use pyo3::prelude::*;
+#[pyclass]
+struct MyClass {
+ num: i32,
+}
+#[pymethods]
+impl MyClass {
+ fn method1(&self) -> PyResult<i32> {
+ Ok(10)
+ }
+
+ fn set_method(&mut self, value: i32) -> PyResult<()> {
+ self.num = value;
+ Ok(())
+ }
+}
+Calls to these methods are protected by the GIL, so both &self
and &mut self
can be used.
+The return type must be PyResult<T>
or T
for some T
that implements IntoPy<PyObject>
;
+the latter is allowed if the method cannot raise Python exceptions.
A Python
parameter can be specified as part of method signature, in this case the py
argument
+gets injected by the method wrapper, e.g.
use pyo3::prelude::*;
+#[pyclass]
+struct MyClass {
+#[allow(dead_code)]
+ num: i32,
+}
+#[pymethods]
+impl MyClass {
+ fn method2(&self, py: Python<'_>) -> PyResult<i32> {
+ Ok(10)
+ }
+}
+From the Python perspective, the method2
in this example does not accept any arguments.
To create a class method for a custom class, the method needs to be annotated
+with the #[classmethod]
attribute.
+This is the equivalent of the Python decorator @classmethod
.
use pyo3::prelude::*;
+use pyo3::types::PyType;
+#[pyclass]
+struct MyClass {
+ #[allow(dead_code)]
+ num: i32,
+}
+#[pymethods]
+impl MyClass {
+ #[classmethod]
+ fn cls_method(cls: &Bound<'_, PyType>) -> PyResult<i32> {
+ Ok(10)
+ }
+}
+Declares a class method callable from Python.
+&Bound<'_, PyType>
.parameter-list
, see the documentation of Method arguments
section.PyResult<T>
or T
for some T
that implements IntoPy<PyObject>
.To create a constructor which takes a positional class argument, you can combine the #[classmethod]
and #[new]
modifiers:
#![allow(dead_code)]
+use pyo3::prelude::*;
+use pyo3::types::PyType;
+#[pyclass]
+struct BaseClass(PyObject);
+
+#[pymethods]
+impl BaseClass {
+ #[new]
+ #[classmethod]
+ fn py_new(cls: &Bound<'_, PyType>) -> PyResult<Self> {
+ // Get an abstract attribute (presumably) declared on a subclass of this class.
+ let subclass_attr: Bound<'_, PyAny> = cls.getattr("a_class_attr")?;
+ Ok(Self(subclass_attr.unbind()))
+ }
+}
+To create a static method for a custom class, the method needs to be annotated with the
+#[staticmethod]
attribute. The return type must be T
or PyResult<T>
for some T
that implements
+IntoPy<PyObject>
.
use pyo3::prelude::*;
+#[pyclass]
+struct MyClass {
+ #[allow(dead_code)]
+ num: i32,
+}
+#[pymethods]
+impl MyClass {
+ #[staticmethod]
+ fn static_method(param1: i32, param2: &str) -> PyResult<i32> {
+ Ok(10)
+ }
+}
+To create a class attribute (also called class variable), a method without
+any arguments can be annotated with the #[classattr]
attribute.
use pyo3::prelude::*;
+#[pyclass]
+struct MyClass {}
+#[pymethods]
+impl MyClass {
+ #[classattr]
+ fn my_attribute() -> String {
+ "hello".to_string()
+ }
+}
+
+Python::with_gil(|py| {
+ let my_class = py.get_type_bound::<MyClass>();
+ pyo3::py_run!(py, my_class, "assert my_class.my_attribute == 'hello'")
+});
+++Note: if the method has a
+Result
return type and returns anErr
, PyO3 will panic during +class creation.
If the class attribute is defined with const
code only, one can also annotate associated
+constants:
use pyo3::prelude::*;
+#[pyclass]
+struct MyClass {}
+#[pymethods]
+impl MyClass {
+ #[classattr]
+ const MY_CONST_ATTRIBUTE: &'static str = "foobar";
+}
+Free functions defined using #[pyfunction]
interact with classes through the same mechanisms as the self parameters of instance methods, i.e. they can take GIL-bound references, GIL-bound reference wrappers or GIL-indepedent references:
#![allow(dead_code)]
+use pyo3::prelude::*;
+#[pyclass]
+struct MyClass {
+ my_field: i32,
+}
+
+// Take a reference when the underlying `Bound` is irrelevant.
+#[pyfunction]
+fn increment_field(my_class: &mut MyClass) {
+ my_class.my_field += 1;
+}
+
+// Take a reference wrapper when borrowing should be automatic,
+// but interaction with the underlying `Bound` is desired.
+#[pyfunction]
+fn print_field(my_class: PyRef<'_, MyClass>) {
+ println!("{}", my_class.my_field);
+}
+
+// Take a reference to the underlying Bound
+// when borrowing needs to be managed manually.
+#[pyfunction]
+fn increment_then_print_field(my_class: &Bound<'_, MyClass>) {
+ my_class.borrow_mut().my_field += 1;
+
+ println!("{}", my_class.borrow().my_field);
+}
+
+// Take a GIL-independent reference when you want to store the reference elsewhere.
+#[pyfunction]
+fn print_refcnt(my_class: Py<MyClass>, py: Python<'_>) {
+ println!("{}", my_class.get_refcnt(py));
+}
+Classes can also be passed by value if they can be cloned, i.e. they automatically implement FromPyObject
if they implement Clone
, e.g. via #[derive(Clone)]
:
#![allow(dead_code)]
+use pyo3::prelude::*;
+#[pyclass]
+#[derive(Clone)]
+struct MyClass {
+ my_field: Box<i32>,
+}
+
+#[pyfunction]
+fn disassemble_clone(my_class: MyClass) {
+ let MyClass { mut my_field } = my_class;
+ *my_field += 1;
+}
+Note that #[derive(FromPyObject)]
on a class is usually not useful as it tries to construct a new Rust value by filling in the fields by looking up attributes of any given Python value.
Similar to #[pyfunction]
, the #[pyo3(signature = (...))]
attribute can be used to specify the way that #[pymethods]
accept arguments. Consult the documentation for function signatures
to see the parameters this attribute accepts.
The following example defines a class MyClass
with a method method
. This method has a signature that sets default values for num
and name
, and indicates that py_args
should collect all extra positional arguments and py_kwargs
all extra keyword arguments:
use pyo3::prelude::*;
+use pyo3::types::{PyDict, PyTuple};
+
+#[pyclass]
+struct MyClass {
+ num: i32,
+}
+#[pymethods]
+impl MyClass {
+ #[new]
+ #[pyo3(signature = (num=-1))]
+ fn new(num: i32) -> Self {
+ MyClass { num }
+ }
+
+ #[pyo3(signature = (num=10, *py_args, name="Hello", **py_kwargs))]
+ fn method(
+ &mut self,
+ num: i32,
+ py_args: &Bound<'_, PyTuple>,
+ name: &str,
+ py_kwargs: Option<&Bound<'_, PyDict>>,
+ ) -> String {
+ let num_before = self.num;
+ self.num = num;
+ format!(
+ "num={} (was previously={}), py_args={:?}, name={}, py_kwargs={:?} ",
+ num, num_before, py_args, name, py_kwargs,
+ )
+ }
+}
+In Python, this might be used like:
+>>> import mymodule
+>>> mc = mymodule.MyClass()
+>>> print(mc.method(44, False, "World", 666, x=44, y=55))
+py_args=('World', 666), py_kwargs=Some({'x': 44, 'y': 55}), name=Hello, num=44, num_before=-1
+>>> print(mc.method(num=-1, name="World"))
+py_args=(), py_kwargs=None, name=World, num=-1, num_before=44
+
+The #[pyo3(text_signature = "...")
option for #[pyfunction]
also works for #[pymethods]
.
#![allow(dead_code)]
+use pyo3::prelude::*;
+use pyo3::types::PyType;
+
+#[pyclass]
+struct MyClass {}
+
+#[pymethods]
+impl MyClass {
+ #[new]
+ #[pyo3(text_signature = "(c, d)")]
+ fn new(c: i32, d: &str) -> Self {
+ Self {}
+ }
+ // the self argument should be written $self
+ #[pyo3(text_signature = "($self, e, f)")]
+ fn my_method(&self, e: i32, f: i32) -> i32 {
+ e + f
+ }
+ // similarly for classmethod arguments, use $cls
+ #[classmethod]
+ #[pyo3(text_signature = "($cls, e, f)")]
+ fn my_class_method(cls: &Bound<'_, PyType>, e: i32, f: i32) -> i32 {
+ e + f
+ }
+ #[staticmethod]
+ #[pyo3(text_signature = "(e, f)")]
+ fn my_static_method(e: i32, f: i32) -> i32 {
+ e + f
+ }
+}
+
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| {
+ let inspect = PyModule::import_bound(py, "inspect")?.getattr("signature")?;
+ let module = PyModule::new_bound(py, "my_module")?;
+ module.add_class::<MyClass>()?;
+ let class = module.getattr("MyClass")?;
+
+ if cfg!(not(Py_LIMITED_API)) || py.version_info() >= (3, 10) {
+ let doc: String = class.getattr("__doc__")?.extract()?;
+ assert_eq!(doc, "");
+
+ let sig: String = inspect
+ .call1((&class,))?
+ .call_method0("__str__")?
+ .extract()?;
+ assert_eq!(sig, "(c, d)");
+ } else {
+ let doc: String = class.getattr("__doc__")?.extract()?;
+ assert_eq!(doc, "");
+
+ inspect.call1((&class,)).expect_err("`text_signature` on classes is not compatible with compilation in `abi3` mode until Python 3.10 or greater");
+ }
+
+ {
+ let method = class.getattr("my_method")?;
+
+ assert!(method.getattr("__doc__")?.is_none());
+
+ let sig: String = inspect
+ .call1((method,))?
+ .call_method0("__str__")?
+ .extract()?;
+ assert_eq!(sig, "(self, /, e, f)");
+ }
+
+ {
+ let method = class.getattr("my_class_method")?;
+
+ assert!(method.getattr("__doc__")?.is_none());
+
+ let sig: String = inspect
+ .call1((method,))?
+ .call_method0("__str__")?
+ .extract()?;
+ assert_eq!(sig, "(e, f)"); // inspect.signature skips the $cls arg
+ }
+
+ {
+ let method = class.getattr("my_static_method")?;
+
+ assert!(method.getattr("__doc__")?.is_none());
+
+ let sig: String = inspect
+ .call1((method,))?
+ .call_method0("__str__")?
+ .extract()?;
+ assert_eq!(sig, "(e, f)");
+ }
+
+ Ok(())
+ })
+}
+Note that text_signature
on #[new]
is not compatible with compilation in
+abi3
mode until Python 3.10 or greater.
PyO3 supports writing instance methods using the normal method receivers for shared &self
and unique &mut self
 references. This interacts with lifetime elision insofar as the lifetime of such a receiver is assigned to all elided output lifetime parameters.
This is a good default for general Rust code where return values are more likely to borrow from the receiver than from the other arguments, if they contain any lifetimes at all. However, when returning bound references Bound<'py, T>
in PyO3-based code, the GIL lifetime 'py
should usually be derived from a GIL token py: Python<'py>
passed as an argument instead of the receiver.
Specifically, signatures like
+fn frobnicate(&self, py: Python) -> Bound<Foo>;
+will not work as they are inferred as
+fn frobnicate<'a, 'py>(&'a self, py: Python<'py>) -> Bound<'a, Foo>;
+instead of the intended
+fn frobnicate<'a, 'py>(&'a self, py: Python<'py>) -> Bound<'py, Foo>;
+and should usually be written as
+fn frobnicate<'py>(&self, py: Python<'py>) -> Bound<'py, Foo>;
+The same problem does not exist for #[pyfunction]
s as the special case for receiver lifetimes does not apply and indeed a signature like
fn frobnicate(bar: &Bar, py: Python) -> Bound<Foo>;
+will yield compiler error E0106 "missing lifetime specifier".
+#[pyclass]
enumsEnum support in PyO3 comes in two flavors, depending on what kind of variants the enum has: simple and complex.
+A simple enum (a.k.a. C-like enum) has only unit variants.
+PyO3 adds a class attribute for each variant, so you can access them in Python without defining #[new]
. PyO3 also provides default implementations of __richcmp__
and __int__
, so they can be compared using ==
:
use pyo3::prelude::*;
+#[pyclass(eq, eq_int)]
+#[derive(PartialEq)]
+enum MyEnum {
+ Variant,
+ OtherVariant,
+}
+
+Python::with_gil(|py| {
+ let x = Py::new(py, MyEnum::Variant).unwrap();
+ let y = Py::new(py, MyEnum::OtherVariant).unwrap();
+ let cls = py.get_type_bound::<MyEnum>();
+ pyo3::py_run!(py, x y cls, r#"
+ assert x == cls.Variant
+ assert y == cls.OtherVariant
+ assert x != y
+ "#)
+})
+You can also convert your simple enums into int
:
use pyo3::prelude::*;
+#[pyclass(eq, eq_int)]
+#[derive(PartialEq)]
+enum MyEnum {
+ Variant,
+ OtherVariant = 10,
+}
+
+Python::with_gil(|py| {
+ let cls = py.get_type_bound::<MyEnum>();
+ let x = MyEnum::Variant as i32; // The exact value is assigned by the compiler.
+ pyo3::py_run!(py, cls x, r#"
+ assert int(cls.Variant) == x
+ assert int(cls.OtherVariant) == 10
+ "#)
+})
+PyO3 also provides __repr__
for enums:
use pyo3::prelude::*;
+#[pyclass(eq, eq_int)]
+#[derive(PartialEq)]
+enum MyEnum {
+ Variant,
+ OtherVariant,
+}
+
+Python::with_gil(|py| {
+ let cls = py.get_type_bound::<MyEnum>();
+ let x = Py::new(py, MyEnum::Variant).unwrap();
+ pyo3::py_run!(py, cls x, r#"
+ assert repr(x) == 'MyEnum.Variant'
+ assert repr(cls.OtherVariant) == 'MyEnum.OtherVariant'
+ "#)
+})
+All methods defined by PyO3 can be overridden. For example here's how you override __repr__
:
use pyo3::prelude::*;
+#[pyclass(eq, eq_int)]
+#[derive(PartialEq)]
+enum MyEnum {
+ Answer = 42,
+}
+
+#[pymethods]
+impl MyEnum {
+ fn __repr__(&self) -> &'static str {
+ "42"
+ }
+}
+
+Python::with_gil(|py| {
+ let cls = py.get_type_bound::<MyEnum>();
+ pyo3::py_run!(py, cls, "assert repr(cls.Answer) == '42'")
+})
+Enums and their variants can also be renamed using #[pyo3(name)]
.
use pyo3::prelude::*;
+#[pyclass(eq, eq_int, name = "RenamedEnum")]
+#[derive(PartialEq)]
+enum MyEnum {
+ #[pyo3(name = "UPPERCASE")]
+ Variant,
+}
+
+Python::with_gil(|py| {
+ let x = Py::new(py, MyEnum::Variant).unwrap();
+ let cls = py.get_type_bound::<MyEnum>();
+ pyo3::py_run!(py, x cls, r#"
+ assert repr(x) == 'RenamedEnum.UPPERCASE'
+ assert x == cls.UPPERCASE
+ "#)
+})
+Ordering of enum variants is optionally added using #[pyo3(ord)]
.
+Note: Implementation of the PartialOrd
trait is required when passing the ord
 argument. If not implemented, a compile-time error is raised.
use pyo3::prelude::*;
+#[pyclass(eq, ord)]
+#[derive(PartialEq, PartialOrd)]
+enum MyEnum {
+ A,
+ B,
+ C,
+}
+
+Python::with_gil(|py| {
+ let cls = py.get_type_bound::<MyEnum>();
+ let a = Py::new(py, MyEnum::A).unwrap();
+ let b = Py::new(py, MyEnum::B).unwrap();
+ let c = Py::new(py, MyEnum::C).unwrap();
+ pyo3::py_run!(py, cls a b c, r#"
+ assert (a < b) == True
+ assert (c <= b) == False
+ assert (c > a) == True
+ "#)
+})
+You may not use enums as a base class or let enums inherit from other classes.
+use pyo3::prelude::*;
+#[pyclass(subclass)]
+enum BadBase {
+ Var1,
+}
+use pyo3::prelude::*;
+
+#[pyclass(subclass)]
+struct Base;
+
+#[pyclass(extends=Base)]
+enum BadSubclass {
+ Var1,
+}
+#[pyclass]
enums are currently not interoperable with IntEnum
in Python.
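If IntEnum semantics are needed on the Python side, one workaround is to mirror the variants in a Python IntEnum. This is a sketch: MyEnumPy is hypothetical, and in real code its values would come from int(my_module.MyEnum.Variant) etc. of the exported PyO3 enum.

```python
from enum import IntEnum

# Hypothetical mirror of a PyO3 simple enum that supports int() conversion.
class MyEnumPy(IntEnum):
    Variant = 0
    OtherVariant = 10

assert MyEnumPy.OtherVariant == 10        # IntEnum members compare equal to ints
assert isinstance(MyEnumPy.Variant, int)  # and are usable anywhere an int is
```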
An enum is complex if it has any non-unit (struct or tuple) variants.
+PyO3 supports only struct and tuple variants in a complex enum. Unit variants aren't supported at present (the recommendation is to use an empty tuple enum instead).
+PyO3 adds a class attribute for each variant, which may be used to construct values and in match patterns. PyO3 also provides getter methods for all fields of each variant.
+use pyo3::prelude::*;
+#[pyclass]
+enum Shape {
+ Circle { radius: f64 },
+ Rectangle { width: f64, height: f64 },
+ RegularPolygon(u32, f64),
+ Nothing { },
+}
+
+#[cfg(Py_3_10)]
+Python::with_gil(|py| {
+ let circle = Shape::Circle { radius: 10.0 }.into_py(py);
+ let square = Shape::RegularPolygon(4, 10.0).into_py(py);
+ let cls = py.get_type_bound::<Shape>();
+ pyo3::py_run!(py, circle square cls, r#"
+ assert isinstance(circle, cls)
+ assert isinstance(circle, cls.Circle)
+ assert circle.radius == 10.0
+
+ assert isinstance(square, cls)
+ assert isinstance(square, cls.RegularPolygon)
+ assert square[0] == 4 # Gets _0 field
+ assert square[1] == 10.0 # Gets _1 field
+
+ def count_vertices(cls, shape):
+ match shape:
+ case cls.Circle():
+ return 0
+ case cls.Rectangle():
+ return 4
+ case cls.RegularPolygon(n):
+ return n
+ case cls.Nothing():
+ return 0
+
+ assert count_vertices(cls, circle) == 0
+ assert count_vertices(cls, square) == 4
+ "#)
+})
+WARNING: Py::new
and .into_py
are currently inconsistent. Note how the constructed value is not an instance of the specific variant. For this reason, constructing values is only recommended using .into_py
.
use pyo3::prelude::*;
+#[pyclass]
+enum MyEnum {
+ Variant { i: i32 },
+}
+
+Python::with_gil(|py| {
+ let x = Py::new(py, MyEnum::Variant { i: 42 }).unwrap();
+ let cls = py.get_type_bound::<MyEnum>();
+ pyo3::py_run!(py, x cls, r#"
+ assert isinstance(x, cls)
+ assert not isinstance(x, cls.Variant)
+ "#)
+})
+The constructor of each generated class can be customized using the #[pyo3(constructor = (...))]
attribute. This uses the same syntax as the #[pyo3(signature = (...))]
+attribute on function and methods and supports the same options. To apply this attribute simply place it on top of a variant in a #[pyclass]
complex enum as shown below:
use pyo3::prelude::*;
+#[pyclass]
+enum Shape {
+ #[pyo3(constructor = (radius=1.0))]
+ Circle { radius: f64 },
+ #[pyo3(constructor = (*, width, height))]
+ Rectangle { width: f64, height: f64 },
+ #[pyo3(constructor = (side_count, radius=1.0))]
+ RegularPolygon { side_count: u32, radius: f64 },
+ Nothing { },
+}
+
+#[cfg(Py_3_10)]
+Python::with_gil(|py| {
+ let cls = py.get_type_bound::<Shape>();
+ pyo3::py_run!(py, cls, r#"
+ circle = cls.Circle()
+ assert isinstance(circle, cls)
+ assert isinstance(circle, cls.Circle)
+ assert circle.radius == 1.0
+
+ square = cls.Rectangle(width = 1, height = 1)
+ assert isinstance(square, cls)
+ assert isinstance(square, cls.Rectangle)
+ assert square.width == 1
+ assert square.height == 1
+
+ hexagon = cls.RegularPolygon(6)
+ assert isinstance(hexagon, cls)
+ assert isinstance(hexagon, cls.RegularPolygon)
+ assert hexagon.side_count == 6
+ assert hexagon.radius == 1
+ "#)
+})
+The #[pyclass]
macros rely on a lot of conditional code generation: each #[pyclass]
can optionally have a #[pymethods]
block.
To support this flexibility the #[pyclass]
macro expands to a blob of boilerplate code which sets up the structure for "dtolnay specialization". This implementation pattern enables the Rust compiler to use #[pymethods]
implementations when they are present, and fall back to default (empty) definitions when they are not.
This simple technique works for the case when there is zero or one implementations. To support multiple #[pymethods]
for a #[pyclass]
(in the multiple-pymethods
feature), a registry mechanism provided by the inventory
crate is used instead. This collects impl
s at library load time, but isn't supported on all platforms. See inventory: how it works for more details.
The #[pyclass]
macro expands to roughly the code seen below. The PyClassImplCollector
is the type used internally by PyO3 for dtolnay specialization:
#[cfg(not(feature = "multiple-pymethods"))] {
+use pyo3::prelude::*;
+// Note: the implementation differs slightly with the `multiple-pymethods` feature enabled.
+struct MyClass {
+ #[allow(dead_code)]
+ num: i32,
+}
+
+impl pyo3::types::DerefToPyAny for MyClass {}
+
+#[allow(deprecated)]
+#[cfg(feature = "gil-refs")]
+unsafe impl pyo3::type_object::HasPyGilRef for MyClass {
+ type AsRefTarget = pyo3::PyCell<Self>;
+}
+unsafe impl pyo3::type_object::PyTypeInfo for MyClass {
+ const NAME: &'static str = "MyClass";
+ const MODULE: ::std::option::Option<&'static str> = ::std::option::Option::None;
+ #[inline]
+ fn type_object_raw(py: pyo3::Python<'_>) -> *mut pyo3::ffi::PyTypeObject {
+ <Self as pyo3::impl_::pyclass::PyClassImpl>::lazy_type_object()
+ .get_or_init(py)
+ .as_type_ptr()
+ }
+}
+
+impl pyo3::PyClass for MyClass {
+ type Frozen = pyo3::pyclass::boolean_struct::False;
+}
+
+impl<'a, 'py> pyo3::impl_::extract_argument::PyFunctionArgument<'a, 'py> for &'a MyClass
+{
+ type Holder = ::std::option::Option<pyo3::PyRef<'py, MyClass>>;
+
+ #[inline]
+ fn extract(obj: &'a pyo3::Bound<'py, PyAny>, holder: &'a mut Self::Holder) -> pyo3::PyResult<Self> {
+ pyo3::impl_::extract_argument::extract_pyclass_ref(obj, holder)
+ }
+}
+
+impl<'a, 'py> pyo3::impl_::extract_argument::PyFunctionArgument<'a, 'py> for &'a mut MyClass
+{
+ type Holder = ::std::option::Option<pyo3::PyRefMut<'py, MyClass>>;
+
+ #[inline]
+ fn extract(obj: &'a pyo3::Bound<'py, PyAny>, holder: &'a mut Self::Holder) -> pyo3::PyResult<Self> {
+ pyo3::impl_::extract_argument::extract_pyclass_ref_mut(obj, holder)
+ }
+}
+
+impl pyo3::IntoPy<PyObject> for MyClass {
+ fn into_py(self, py: pyo3::Python<'_>) -> pyo3::PyObject {
+ pyo3::IntoPy::into_py(pyo3::Py::new(py, self).unwrap(), py)
+ }
+}
+
+impl pyo3::impl_::pyclass::PyClassImpl for MyClass {
+ const IS_BASETYPE: bool = false;
+ const IS_SUBCLASS: bool = false;
+ const IS_MAPPING: bool = false;
+ const IS_SEQUENCE: bool = false;
+ type BaseType = PyAny;
+ type ThreadChecker = pyo3::impl_::pyclass::SendablePyClass<MyClass>;
+ type PyClassMutability = <<pyo3::PyAny as pyo3::impl_::pyclass::PyClassBaseType>::PyClassMutability as pyo3::impl_::pycell::PyClassMutability>::MutableChild;
+ type Dict = pyo3::impl_::pyclass::PyClassDummySlot;
+ type WeakRef = pyo3::impl_::pyclass::PyClassDummySlot;
+ type BaseNativeType = pyo3::PyAny;
+
+ fn items_iter() -> pyo3::impl_::pyclass::PyClassItemsIter {
+ use pyo3::impl_::pyclass::*;
+ let collector = PyClassImplCollector::<MyClass>::new();
+ static INTRINSIC_ITEMS: PyClassItems = PyClassItems { slots: &[], methods: &[] };
+ PyClassItemsIter::new(&INTRINSIC_ITEMS, collector.py_methods())
+ }
+
+ fn lazy_type_object() -> &'static pyo3::impl_::pyclass::LazyTypeObject<MyClass> {
+ use pyo3::impl_::pyclass::LazyTypeObject;
+ static TYPE_OBJECT: LazyTypeObject<MyClass> = LazyTypeObject::new();
+ &TYPE_OBJECT
+ }
+
+ fn doc(py: Python<'_>) -> pyo3::PyResult<&'static ::std::ffi::CStr> {
+ use pyo3::impl_::pyclass::*;
+ static DOC: pyo3::sync::GILOnceCell<::std::borrow::Cow<'static, ::std::ffi::CStr>> = pyo3::sync::GILOnceCell::new();
+ DOC.get_or_try_init(py, || {
+ let collector = PyClassImplCollector::<Self>::new();
+ build_pyclass_doc(<MyClass as pyo3::PyTypeInfo>::NAME, pyo3::ffi::c_str!(""), collector.new_text_signature())
+ }).map(::std::ops::Deref::deref)
+ }
+}
+
+Python::with_gil(|py| {
+ let cls = py.get_type_bound::<MyClass>();
+ pyo3::py_run!(py, cls, "assert cls.__name__ == 'MyClass'")
+});
+}
+
+ Classes can be callable if they have a #[pymethod]
named __call__
.
+This allows instances of a class to behave similarly to functions.
This method's signature must look like __call__(<self>, ...) -> object
- here,
+any argument list can be defined as for normal pymethods
The following pyclass is a basic decorator - its constructor takes a Python object as argument and calls that object when called. An equivalent Python implementation is shown at the end.
+An example crate containing this pyclass can be found here
+use pyo3::prelude::*;
+use pyo3::types::{PyDict, PyTuple};
+use std::cell::Cell;
+
+/// A function decorator that keeps track how often it is called.
+///
+/// It otherwise doesn't do anything special.
+#[pyclass(name = "Counter")]
+pub struct PyCounter {
+ // Keeps track of how many calls have gone through.
+ //
+ // See the discussion at the end for why `Cell` is used.
+ count: Cell<u64>,
+
+ // This is the actual function being wrapped.
+ wraps: Py<PyAny>,
+}
+
+#[pymethods]
+impl PyCounter {
+ // Note that we don't validate whether `wraps` is actually callable.
+ //
+ // While we could use `PyAny::is_callable` for that, it has some flaws:
+ // 1. It doesn't guarantee the object can actually be called successfully
+ // 2. We still need to handle any exceptions that the function might raise
+ #[new]
+ fn __new__(wraps: Py<PyAny>) -> Self {
+ PyCounter {
+ count: Cell::new(0),
+ wraps,
+ }
+ }
+
+ #[getter]
+ fn count(&self) -> u64 {
+ self.count.get()
+ }
+
+ #[pyo3(signature = (*args, **kwargs))]
+ fn __call__(
+ &self,
+ py: Python<'_>,
+ args: &Bound<'_, PyTuple>,
+ kwargs: Option<&Bound<'_, PyDict>>,
+ ) -> PyResult<Py<PyAny>> {
+ let old_count = self.count.get();
+ let new_count = old_count + 1;
+ self.count.set(new_count);
+ let name = self.wraps.getattr(py, "__name__")?;
+
+ println!("{} has been called {} time(s).", name, new_count);
+
+ // After doing something, we finally forward the call to the wrapped function
+ let ret = self.wraps.call_bound(py, args, kwargs)?;
+
+ // We could do something with the return value of
+ // the function before returning it
+ Ok(ret)
+ }
+}
+
+#[pymodule]
+pub fn decorator(module: &Bound<'_, PyModule>) -> PyResult<()> {
+ module.add_class::<PyCounter>()?;
+ Ok(())
+}
+Python code:
+from decorator import Counter
+
+
+@Counter
+def say_hello():
+ print("hello")
+
+
+say_hello()
+say_hello()
+say_hello()
+say_hello()
+
+assert say_hello.count == 4
+
+Output:
+say_hello has been called 1 time(s).
+hello
+say_hello has been called 2 time(s).
+hello
+say_hello has been called 3 time(s).
+hello
+say_hello has been called 4 time(s).
+hello
+
+A Python implementation of this looks similar to the Rust version:
+class Counter:
+ def __init__(self, wraps):
+ self.count = 0
+ self.wraps = wraps
+
+ def __call__(self, *args, **kwargs):
+ self.count += 1
+ print(f"{self.wraps.__name__} has been called {self.count} time(s)")
+        return self.wraps(*args, **kwargs)
+
+Note that it can also be implemented as a higher order function:
+def Counter(wraps):
+ count = 0
+ def call(*args, **kwargs):
+ nonlocal count
+ count += 1
+ print(f"{wraps.__name__} has been called {count} time(s)")
+ return wraps(*args, **kwargs)
+ return call
+
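As an aside on the higher-order version: idiomatic Python would also apply functools.wraps so the wrapper keeps the decorated function's metadata. A sketch (not part of the PyO3 example):

```python
import functools

def counter(wraps):
    count = 0

    @functools.wraps(wraps)  # preserve __name__, __doc__, etc. on the wrapper
    def call(*args, **kwargs):
        nonlocal count
        count += 1
        print(f"{wraps.__name__} has been called {count} time(s)")
        return wraps(*args, **kwargs)

    return call

@counter
def say_hello():
    return "hello"

assert say_hello() == "hello"
assert say_hello.__name__ == "say_hello"  # metadata preserved by functools.wraps
```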
+Cell
for?
A previous implementation used a normal u64
, which meant it required a &mut self
receiver to update the count:
#[pyo3(signature = (*args, **kwargs))]
+fn __call__(
+ &mut self,
+ py: Python<'_>,
+ args: &Bound<'_, PyTuple>,
+ kwargs: Option<&Bound<'_, PyDict>>,
+) -> PyResult<Py<PyAny>> {
+ self.count += 1;
+ let name = self.wraps.getattr(py, "__name__")?;
+
+ println!("{} has been called {} time(s).", name, self.count);
+
+ // After doing something, we finally forward the call to the wrapped function
+ let ret = self.wraps.call(py, args, kwargs)?;
+
+ // We could do something with the return value of
+ // the function before returning it
+ Ok(ret)
+}
+The problem with this is that the &mut self
receiver means PyO3 has to borrow it exclusively,
+and hold this borrow across the self.wraps.call(py, args, kwargs)
call. This call returns control to the user's Python code
+which is free to call arbitrary things, including the decorated function. If that happens PyO3 is unable to create a second unique borrow and will be forced to raise an exception.
As a result, something innocent like this will raise an exception:
+@Counter
+def say_hello():
+ if say_hello.count < 2:
+ print(f"hello from decorator")
+
+say_hello()
+# RuntimeError: Already borrowed
+
+The implementation in this chapter fixes that by never borrowing exclusively; all the methods take &self
as receivers, of which multiple may exist simultaneously. This requires a shared counter and the easiest way to do that is to use Cell
, so that's what is used here.
This shows the dangers of running arbitrary Python code - note that "running arbitrary Python code" can be far more subtle than the example above: other Python threads may run concurrently, and deallocating Python objects can run arbitrary code (via their __del__ methods).
This is especially important if you are writing unsafe code; Python code must never be able to cause undefined behavior. You must ensure that your Rust code is in a consistent state before doing any of the above things.
+At this point we have a Number
class that we can't actually do any math on!
Before proceeding, we should think about how we want to handle overflows. There are three obvious solutions:
- Have infinite precision like Python's int. However, that would be quite boring - we'd be reinventing the wheel.
- Raise exceptions whenever Number overflows, but that makes the API painful to use.
- Wrap around the boundaries of i32. This is the approach we'll take here. To do that we'll just forward to i32's wrapping_* methods.
Let's address the first overflow, in Number's constructor:
from my_module import Number
+
+n = Number(1 << 1337)
+
+Traceback (most recent call last):
+ File "example.py", line 3, in <module>
+ n = Number(1 << 1337)
+OverflowError: Python int too large to convert to C long
+
+Instead of relying on the default FromPyObject
extraction to parse arguments, we can specify our
+own extraction function, using the #[pyo3(from_py_with = "...")]
attribute. Unfortunately PyO3
+doesn't provide a way to wrap Python integers out of the box, but we can do a Python call to mask it
+and cast it to an i32
.
#![allow(dead_code)]
+use pyo3::prelude::*;
+
+fn wrap(obj: &Bound<'_, PyAny>) -> PyResult<i32> {
+ let val = obj.call_method1("__and__", (0xFFFFFFFF_u32,))?;
+ let val: u32 = val.extract()?;
+ // 👇 This intentionally overflows!
+ Ok(val as i32)
+}
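The intended behavior of wrap can be modeled in pure Python: mask to the low 32 bits, then reinterpret the bit pattern as a signed i32. This is a sketch of the semantics, not the PyO3 code itself:

```python
def wrap_to_i32(value: int) -> int:
    # Mask to the low 32 bits, mirroring the `__and__` call with 0xFFFFFFFF...
    masked = value & 0xFFFFFFFF
    # ...then reinterpret the u32 bit pattern as a signed i32 (`val as i32`).
    return masked - (1 << 32) if masked >= (1 << 31) else masked

assert wrap_to_i32(5) == 5
assert wrap_to_i32(1 << 31) == -(1 << 31)   # sign bit set: wraps negative
assert wrap_to_i32((1 << 1337) + 7) == 7    # huge ints keep only the low 32 bits
```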
+We also add documentation, via ///
comments, which are visible to Python users.
#![allow(dead_code)]
+use pyo3::prelude::*;
+
+fn wrap(obj: &Bound<'_, PyAny>) -> PyResult<i32> {
+ let val = obj.call_method1("__and__", (0xFFFFFFFF_u32,))?;
+ let val: u32 = val.extract()?;
+ Ok(val as i32)
+}
+
+/// Did you ever hear the tragedy of Darth Signed The Overfloweth? I thought not.
+/// It's not a story C would tell you. It's a Rust legend.
+#[pyclass(module = "my_module")]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ #[new]
+ fn new(#[pyo3(from_py_with = "wrap")] value: i32) -> Self {
+ Self(value)
+ }
+}
+With that out of the way, let's implement some operators:
+use pyo3::exceptions::{PyZeroDivisionError, PyValueError};
+
+use pyo3::prelude::*;
+
+#[pyclass]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ fn __add__(&self, other: &Self) -> Self {
+ Self(self.0.wrapping_add(other.0))
+ }
+
+ fn __sub__(&self, other: &Self) -> Self {
+ Self(self.0.wrapping_sub(other.0))
+ }
+
+ fn __mul__(&self, other: &Self) -> Self {
+ Self(self.0.wrapping_mul(other.0))
+ }
+
+ fn __truediv__(&self, other: &Self) -> PyResult<Self> {
+ match self.0.checked_div(other.0) {
+ Some(i) => Ok(Self(i)),
+ None => Err(PyZeroDivisionError::new_err("division by zero")),
+ }
+ }
+
+ fn __floordiv__(&self, other: &Self) -> PyResult<Self> {
+ match self.0.checked_div(other.0) {
+ Some(i) => Ok(Self(i)),
+ None => Err(PyZeroDivisionError::new_err("division by zero")),
+ }
+ }
+
+ fn __rshift__(&self, other: &Self) -> PyResult<Self> {
+ match other.0.try_into() {
+ Ok(rhs) => Ok(Self(self.0.wrapping_shr(rhs))),
+ Err(_) => Err(PyValueError::new_err("negative shift count")),
+ }
+ }
+
+ fn __lshift__(&self, other: &Self) -> PyResult<Self> {
+ match other.0.try_into() {
+ Ok(rhs) => Ok(Self(self.0.wrapping_shl(rhs))),
+ Err(_) => Err(PyValueError::new_err("negative shift count")),
+ }
+ }
+}
+use pyo3::prelude::*;
+
+#[pyclass]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ fn __pos__(slf: PyRef<'_, Self>) -> PyRef<'_, Self> {
+ slf
+ }
+
+ fn __neg__(&self) -> Self {
+ Self(-self.0)
+ }
+
+ fn __abs__(&self) -> Self {
+ Self(self.0.abs())
+ }
+
+ fn __invert__(&self) -> Self {
+ Self(!self.0)
+ }
+}
+complex()
, int()
and float()
built-in functions.use pyo3::prelude::*;
+
+#[pyclass]
+struct Number(i32);
+
+use pyo3::types::PyComplex;
+
+#[pymethods]
+impl Number {
+ fn __int__(&self) -> i32 {
+ self.0
+ }
+
+ fn __float__(&self) -> f64 {
+ self.0 as f64
+ }
+
+ fn __complex__<'py>(&self, py: Python<'py>) -> Bound<'py, PyComplex> {
+ PyComplex::from_doubles_bound(py, self.0 as f64, 0.0)
+ }
+}
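On the Python side, the built-ins simply dispatch to these dunders; a pure-Python model of the same conversions:

```python
class Number:
    def __init__(self, value):
        self.value = value
    def __int__(self):      # int(n) calls this
        return self.value
    def __float__(self):    # float(n) calls this
        return float(self.value)
    def __complex__(self):  # complex(n) calls this
        return complex(self.value, 0.0)

n = Number(3)
assert int(n) == 3
assert float(n) == 3.0
assert complex(n) == 3 + 0j
```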
+We do not implement the in-place operations like __iadd__
because we do not wish to mutate Number
.
+Similarly we're not interested in supporting operations with different types, so we do not implement
+the reflected operations like __radd__
either.
Now Python can use our Number
class:
from my_module import Number
+
+def hash_djb2(s: str):
+ '''
+ A version of Daniel J. Bernstein's djb2 string hashing algorithm
+ Like many hashing algorithms, it relies on integer wrapping.
+ '''
+
+ n = Number(0)
+ five = Number(5)
+
+ for x in s:
+ n = Number(ord(x)) + ((n << five) - n)
+ return n
+
+assert hash_djb2('l50_50') == Number(-1152549421)
+
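The expected hash value can be checked against a pure-Python model of the same wrapping arithmetic (a sketch; the real work happens inside the Rust Number class):

```python
def wrap_to_i32(v: int) -> int:
    # Keep the low 32 bits and reinterpret as a signed 32-bit integer.
    v &= 0xFFFFFFFF
    return v - (1 << 32) if v >= (1 << 31) else v

def hash_djb2(s: str) -> int:
    n = 0
    for x in s:
        # Mirrors Number(ord(x)) + ((n << five) - n), wrapping at each step.
        n = wrap_to_i32(ord(x) + wrap_to_i32(wrap_to_i32(n << 5) - n))
    return n

assert hash_djb2('l50_50') == -1152549421
assert hash_djb2('logo') == 3327403
```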
+use std::collections::hash_map::DefaultHasher;
+use std::hash::{Hash, Hasher};
+
+use pyo3::exceptions::{PyValueError, PyZeroDivisionError};
+use pyo3::prelude::*;
+use pyo3::class::basic::CompareOp;
+use pyo3::types::{PyComplex, PyString};
+
+fn wrap(obj: &Bound<'_, PyAny>) -> PyResult<i32> {
+ let val = obj.call_method1("__and__", (0xFFFFFFFF_u32,))?;
+ let val: u32 = val.extract()?;
+ Ok(val as i32)
+}
+/// Did you ever hear the tragedy of Darth Signed The Overfloweth? I thought not.
+/// It's not a story C would tell you. It's a Rust legend.
+#[pyclass(module = "my_module")]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ #[new]
+ fn new(#[pyo3(from_py_with = "wrap")] value: i32) -> Self {
+ Self(value)
+ }
+
+ fn __repr__(slf: &Bound<'_, Self>) -> PyResult<String> {
+ // Get the class name dynamically in case `Number` is subclassed
+ let class_name: Bound<'_, PyString> = slf.get_type().qualname()?;
+ Ok(format!("{}({})", class_name, slf.borrow().0))
+ }
+
+ fn __str__(&self) -> String {
+ self.0.to_string()
+ }
+
+ fn __hash__(&self) -> u64 {
+ let mut hasher = DefaultHasher::new();
+ self.0.hash(&mut hasher);
+ hasher.finish()
+ }
+
+ fn __richcmp__(&self, other: &Self, op: CompareOp) -> PyResult<bool> {
+ match op {
+ CompareOp::Lt => Ok(self.0 < other.0),
+ CompareOp::Le => Ok(self.0 <= other.0),
+ CompareOp::Eq => Ok(self.0 == other.0),
+ CompareOp::Ne => Ok(self.0 != other.0),
+ CompareOp::Gt => Ok(self.0 > other.0),
+ CompareOp::Ge => Ok(self.0 >= other.0),
+ }
+ }
+
+ fn __bool__(&self) -> bool {
+ self.0 != 0
+ }
+
+ fn __add__(&self, other: &Self) -> Self {
+ Self(self.0.wrapping_add(other.0))
+ }
+
+ fn __sub__(&self, other: &Self) -> Self {
+ Self(self.0.wrapping_sub(other.0))
+ }
+
+ fn __mul__(&self, other: &Self) -> Self {
+ Self(self.0.wrapping_mul(other.0))
+ }
+
+ fn __truediv__(&self, other: &Self) -> PyResult<Self> {
+ match self.0.checked_div(other.0) {
+ Some(i) => Ok(Self(i)),
+ None => Err(PyZeroDivisionError::new_err("division by zero")),
+ }
+ }
+
+ fn __floordiv__(&self, other: &Self) -> PyResult<Self> {
+ match self.0.checked_div(other.0) {
+ Some(i) => Ok(Self(i)),
+ None => Err(PyZeroDivisionError::new_err("division by zero")),
+ }
+ }
+
+ fn __rshift__(&self, other: &Self) -> PyResult<Self> {
+ match other.0.try_into() {
+ Ok(rhs) => Ok(Self(self.0.wrapping_shr(rhs))),
+ Err(_) => Err(PyValueError::new_err("negative shift count")),
+ }
+ }
+
+ fn __lshift__(&self, other: &Self) -> PyResult<Self> {
+ match other.0.try_into() {
+ Ok(rhs) => Ok(Self(self.0.wrapping_shl(rhs))),
+ Err(_) => Err(PyValueError::new_err("negative shift count")),
+ }
+ }
+
+ fn __xor__(&self, other: &Self) -> Self {
+ Self(self.0 ^ other.0)
+ }
+
+ fn __or__(&self, other: &Self) -> Self {
+ Self(self.0 | other.0)
+ }
+
+ fn __and__(&self, other: &Self) -> Self {
+ Self(self.0 & other.0)
+ }
+
+ fn __int__(&self) -> i32 {
+ self.0
+ }
+
+ fn __float__(&self) -> f64 {
+ self.0 as f64
+ }
+
+ fn __complex__<'py>(&self, py: Python<'py>) -> Bound<'py, PyComplex> {
+ PyComplex::from_doubles_bound(py, self.0 as f64, 0.0)
+ }
+}
+
+#[pymodule]
+fn my_module(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ m.add_class::<Number>()?;
+ Ok(())
+}
+const SCRIPT: &'static str = r#"
+def hash_djb2(s: str):
+ n = Number(0)
+ five = Number(5)
+
+ for x in s:
+ n = Number(ord(x)) + ((n << five) - n)
+ return n
+
+assert hash_djb2('l50_50') == Number(-1152549421)
+assert hash_djb2('logo') == Number(3327403)
+assert hash_djb2('horizon') == Number(1097468315)
+
+
+assert Number(2) + Number(2) == Number(4)
+assert Number(2) + Number(2) != Number(5)
+
+assert Number(13) - Number(7) == Number(6)
+assert Number(13) - Number(-7) == Number(20)
+
+assert Number(13) / Number(7) == Number(1)
+assert Number(13) // Number(7) == Number(1)
+
+assert Number(13) * Number(7) == Number(13*7)
+
+assert Number(13) > Number(7)
+assert Number(13) < Number(20)
+assert Number(13) == Number(13)
+assert Number(13) >= Number(7)
+assert Number(13) <= Number(20)
+assert Number(13) == Number(13)
+
+
+assert (True if Number(1) else False)
+assert (False if Number(0) else True)
+
+
+assert int(Number(13)) == 13
+assert float(Number(13)) == 13
+assert Number.__doc__ == "Did you ever hear the tragedy of Darth Signed The Overfloweth? I thought not.\nIt's not a story C would tell you. It's a Rust legend."
+assert Number(12345234523452) == Number(1498514748)
+try:
+ import inspect
+ assert inspect.signature(Number).__str__() == '(value)'
+except ValueError:
+ # Not supported with `abi3` before Python 3.10
+ pass
+assert Number(1337).__str__() == '1337'
+assert Number(1337).__repr__() == 'Number(1337)'
+"#;
+
+
+use pyo3::PyTypeInfo;
+
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| -> PyResult<()> {
+ let globals = PyModule::import_bound(py, "__main__")?.dict();
+ globals.set_item("Number", Number::type_object_bound(py))?;
+
+ py.run_bound(SCRIPT, Some(&globals), None)?;
+ Ok(())
+ })
+}
At the beginning of this chapter we said that PyO3 doesn't provide a way to wrap Python integers out of the box, but that's a half-truth. There's not a PyO3 API for it, but there's a Python C API function that does:
+unsigned long PyLong_AsUnsignedLongMask(PyObject *obj)
+
+We can call this function from Rust by using pyo3::ffi::PyLong_AsUnsignedLongMask
. This is an unsafe
+function, which means we have to use an unsafe block to call it and take responsibility for upholding
+the contracts of this function. Let's review those contracts:
- The GIL must be held when this function is called, since it operates on a Python object.
- The pointer passed in must point to a valid Python object.
Let's create that helper function. The signature has to be fn(&Bound<'_, PyAny>) -> PyResult<T>
.
&Bound<'_, PyAny>
represents a checked borrowed reference, so the pointer derived from it is valid (and not null).Python
token to use in our call to PyErr::take
.#![allow(dead_code)]
+use std::os::raw::c_ulong;
+use pyo3::prelude::*;
+use pyo3::ffi;
+
+fn wrap(obj: &Bound<'_, PyAny>) -> Result<i32, PyErr> {
+ let py: Python<'_> = obj.py();
+
+ unsafe {
+ let ptr = obj.as_ptr();
+
+ let ret: c_ulong = ffi::PyLong_AsUnsignedLongMask(ptr);
+ if ret == c_ulong::MAX {
+ if let Some(err) = PyErr::take(py) {
+ return Err(err);
+ }
+ }
+
+ Ok(ret as i32)
+ }
+}
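Per the CPython documentation, PyLong_AsUnsignedLongMask returns the value reduced modulo ULONG_MAX + 1 and never overflows. In pure Python the reduction is a plain mask (a sketch, assuming a platform where C unsigned long is 64 bits):

```python
ULONG_BITS = 64  # assumption: C `unsigned long` is 64 bits on this platform

def as_unsigned_long_mask(obj: int) -> int:
    # Reduce modulo ULONG_MAX + 1, like PyLong_AsUnsignedLongMask.
    return obj & ((1 << ULONG_BITS) - 1)

assert as_unsigned_long_mask(-1) == (1 << 64) - 1  # -1 wraps to ULONG_MAX
assert as_unsigned_long_mask(1 << 64) == 0         # high bits are discarded
```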
+
+ Recall the Number
class from the previous chapter:
#![allow(dead_code)]
+use pyo3::prelude::*;
+
+#[pyclass]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ #[new]
+ fn new(value: i32) -> Self {
+ Self(value)
+ }
+}
+
+#[pymodule]
+fn my_module(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ m.add_class::<Number>()?;
+ Ok(())
+}
+At this point Python code can import the module, access the class and create class instances - but nothing else.
+from my_module import Number
+
+n = Number(5)
+print(n)
+
+<builtins.Number object at 0x000002B4D185D7D0>
+
+It can't even print a user-readable representation of itself! We can fix that by defining the
+__repr__
and __str__
methods inside a #[pymethods]
block. We do this by accessing the value
+contained inside Number
.
use pyo3::prelude::*;
+
+#[pyclass]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ // For `__repr__` we want to return a string that Python code could use to recreate
+ // the `Number`, like `Number(5)` for example.
+ fn __repr__(&self) -> String {
+ // We use the `format!` macro to create a string. Its first argument is a
+ // format string, followed by any number of parameters which replace the
+ // `{}`'s in the format string.
+ //
+ // 👇 Tuple field access in Rust uses a dot
+ format!("Number({})", self.0)
+ }
+ // `__str__` is generally used to create an "informal" representation, so we
+ // just forward to `i32`'s `ToString` trait implementation to print a bare number.
+ fn __str__(&self) -> String {
+ self.0.to_string()
+ }
+}
+In the __repr__
, we used a hard-coded class name. This is sometimes not ideal,
+because if the class is subclassed in Python, we would like the repr to reflect
+the subclass name. This is typically done in Python code by accessing
+self.__class__.__name__
. In order to be able to access the Python type information
+and the Rust struct, we need to use a Bound
as the self
argument.
use pyo3::prelude::*;
+use pyo3::types::PyString;
+
+#[pyclass]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ fn __repr__(slf: &Bound<'_, Self>) -> PyResult<String> {
+ // This is the equivalent of `self.__class__.__name__` in Python.
+ let class_name: Bound<'_, PyString> = slf.get_type().qualname()?;
+ // To access fields of the Rust struct, we need to borrow the `PyCell`.
+ Ok(format!("{}({})", class_name, slf.borrow().0))
+ }
+}
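A pure-Python equivalent makes the subclass-aware repr easy to check:

```python
class Number:
    def __init__(self, value):
        self.value = value
    def __repr__(self):
        # type(self).__qualname__ is the self.__class__.__name__-style lookup,
        # so subclasses automatically get their own name in the repr.
        return f"{type(self).__qualname__}({self.value})"

class MyNumber(Number):
    pass

assert repr(Number(5)) == "Number(5)"
assert repr(MyNumber(5)) == "MyNumber(5)"
```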
+Let's also implement hashing. We'll just hash the i32
. For that we need a Hasher
. The one
+provided by std
is DefaultHasher
, which uses the SipHash algorithm.
use std::collections::hash_map::DefaultHasher;
+
+// Required to call the `.hash` and `.finish` methods, which are defined on traits.
+use std::hash::{Hash, Hasher};
+
+use pyo3::prelude::*;
+
+#[pyclass]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ fn __hash__(&self) -> u64 {
+ let mut hasher = DefaultHasher::new();
+ self.0.hash(&mut hasher);
+ hasher.finish()
+ }
+}
+To implement __hash__
using the Rust Hash
trait implementation, the hash
option can be used.
+This option is only available for frozen
classes to prevent accidental hash changes from mutating the object. If you need
+an __hash__
implementation for a mutable class, use the manual method from above. This option also requires eq
: According to the
+Python docs "If a class does not define an __eq__()
+method it should not define a __hash__()
operation either".
use pyo3::prelude::*;
+
+#[pyclass(frozen, eq, hash)]
+#[derive(PartialEq, Hash)]
+struct Number(i32);
+++Note: When implementing
+__hash__
and comparisons, it is important that the following property holds:+k1 == k2 -> hash(k1) == hash(k2) +
In other words, if two keys are equal, their hashes must also be equal. In addition, you must take care that your class's hash doesn't change during its lifetime. In this tutorial we do that by not letting Python code change our
+Number
class. In other words, it is immutable.By default, all
+#[pyclass]
types have a default hash implementation from Python. +Types which should not be hashable can override this by setting__hash__
to None. +This is the same mechanism as for a pure-Python class. This is done like so:+use pyo3::prelude::*; +#[pyclass] +struct NotHashable {} + +#[pymethods] +impl NotHashable { + #[classattr] + const __hash__: Option<Py<PyAny>> = None; +}
PyO3 supports the usual magic comparison methods available in Python such as __eq__
, __lt__
+and so on. It is also possible to support all six operations at once with __richcmp__
.
+This method will be called with a value of CompareOp
depending on the operation.
use pyo3::class::basic::CompareOp;
+
+use pyo3::prelude::*;
+
+#[pyclass]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ fn __richcmp__(&self, other: &Self, op: CompareOp) -> PyResult<bool> {
+ match op {
+ CompareOp::Lt => Ok(self.0 < other.0),
+ CompareOp::Le => Ok(self.0 <= other.0),
+ CompareOp::Eq => Ok(self.0 == other.0),
+ CompareOp::Ne => Ok(self.0 != other.0),
+ CompareOp::Gt => Ok(self.0 > other.0),
+ CompareOp::Ge => Ok(self.0 >= other.0),
+ }
+ }
+}
+If you obtain the result by comparing two Rust values, as in this example, you
+can take a shortcut using CompareOp::matches
:
use pyo3::class::basic::CompareOp;
+
+use pyo3::prelude::*;
+
+#[pyclass]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ fn __richcmp__(&self, other: &Self, op: CompareOp) -> bool {
+ op.matches(self.0.cmp(&other.0))
+ }
+}
+It checks that the std::cmp::Ordering
obtained from Rust's Ord
matches
+the given CompareOp
.
Alternatively, you can implement just equality using __eq__
:
use pyo3::prelude::*;
+
+#[pyclass]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ fn __eq__(&self, other: &Self) -> bool {
+ self.0 == other.0
+ }
+}
+
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| {
+ let x = &Bound::new(py, Number(4))?;
+ let y = &Bound::new(py, Number(4))?;
+ assert!(x.eq(y)?);
+ assert!(!x.ne(y)?);
+ Ok(())
+ })
+}
+To implement __eq__
using the Rust PartialEq
trait implementation, the eq
option can be used.
use pyo3::prelude::*;
+
+#[pyclass(eq)]
+#[derive(PartialEq)]
+struct Number(i32);
+To implement __lt__
, __le__
, __gt__
, & __ge__
using the Rust PartialOrd
trait implementation, the ord
option can be used. Note: Requires eq
.
use pyo3::prelude::*;
+
+#[pyclass(eq, ord)]
+#[derive(PartialEq, PartialOrd)]
+struct Number(i32);
+We'll consider Number
to be True
if it is nonzero:
use pyo3::prelude::*;
+
+#[pyclass]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ fn __bool__(&self) -> bool {
+ self.0 != 0
+ }
+}
+use std::collections::hash_map::DefaultHasher;
+use std::hash::{Hash, Hasher};
+
+use pyo3::prelude::*;
+use pyo3::class::basic::CompareOp;
+use pyo3::types::PyString;
+
+#[pyclass]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ #[new]
+ fn new(value: i32) -> Self {
+ Self(value)
+ }
+
+ fn __repr__(slf: &Bound<'_, Self>) -> PyResult<String> {
+ let class_name: Bound<'_, PyString> = slf.get_type().qualname()?;
+ Ok(format!("{}({})", class_name, slf.borrow().0))
+ }
+
+ fn __str__(&self) -> String {
+ self.0.to_string()
+ }
+
+ fn __hash__(&self) -> u64 {
+ let mut hasher = DefaultHasher::new();
+ self.0.hash(&mut hasher);
+ hasher.finish()
+ }
+
+ fn __richcmp__(&self, other: &Self, op: CompareOp) -> PyResult<bool> {
+ match op {
+ CompareOp::Lt => Ok(self.0 < other.0),
+ CompareOp::Le => Ok(self.0 <= other.0),
+ CompareOp::Eq => Ok(self.0 == other.0),
+ CompareOp::Ne => Ok(self.0 != other.0),
+ CompareOp::Gt => Ok(self.0 > other.0),
+ CompareOp::Ge => Ok(self.0 >= other.0),
+ }
+ }
+
+ fn __bool__(&self) -> bool {
+ self.0 != 0
+ }
+}
+
+#[pymodule]
+fn my_module(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ m.add_class::<Number>()?;
+ Ok(())
+}
+
+ Python's object model defines several protocols for different object behavior, such as the sequence, mapping, and number protocols. Python classes support these protocols by implementing "magic" methods, such as __str__
or __repr__
. Because of the double-underscores surrounding their name, these are also known as "dunder" methods.
PyO3 makes it possible for every magic method to be implemented in #[pymethods]
just as they would be done in a regular Python class, with a few notable differences:
__new__
and __init__
are replaced by the #[new]
attribute.__del__
is not yet supported, but may be in the future.__buffer__
and __release_buffer__
are currently not supported and instead PyO3 supports __getbuffer__
and __releasebuffer__
methods (these predate PEP 688), again this may change in the future.__traverse__
and __clear__
methods for controlling garbage collection.If a magic method is not on the list above (for example __init_subclass__
), then it should just work in PyO3. If this is not the case, please file a bug report.
If a function name in #[pymethods]
is a magic method which is known to need special handling, it will be automatically placed into the correct slot in the Python type object. The function name is taken from the usual rules for naming #[pymethods]
: the #[pyo3(name = "...")]
attribute is used if present, otherwise the Rust function name is used.
The magic methods handled by PyO3 are very similar to the standard Python ones on this page - in particular they are the subset which have slots as defined here.
+When PyO3 handles a magic method, a couple of changes apply compared to other #[pymethods]
:
#[pyo3(signature = (...)]
and #[pyo3(text_signature = "...")]
attributes are not allowed.The following sections list all magic methods for which PyO3 implements the necessary special handling. The +given signatures should be interpreted as follows:
+<self>
. It can be
+&self
, &mut self
or a Bound
reference like self_: PyRef<'_, Self>
and
+self_: PyRefMut<'_, Self>
, as described here.Python<'py>
argument is always allowed as the first argument.PyResult
.object
means that any type is allowed that can be extracted from a Python
+object (if argument) or converted to a Python object (if return value).pyo3::basic::CompareOp
for
+__richcmp__
's second argument.NotImplemented
.__str__
needs to return a
+string object. This is indicated by object (Python type)
.__str__(<self>) -> object (str)
__repr__(<self>) -> object (str)
__hash__(<self>) -> isize
Objects that compare equal must have the same hash value. Any type up to 64 bits may be returned instead of isize
, PyO3 will convert to an isize automatically (wrapping unsigned types like u64
and usize
).
use pyo3::prelude::*;
+
+#[pyclass]
+struct NotHashable {}
+
+#[pymethods]
+impl NotHashable {
+ #[classattr]
+ const __hash__: Option<PyObject> = None;
+}
+__lt__(<self>, object) -> object
__le__(<self>, object) -> object
__eq__(<self>, object) -> object
__ne__(<self>, object) -> object
__gt__(<self>, object) -> object
__ge__(<self>, object) -> object
The implementations of Python's "rich comparison" operators <
, <=
, ==
, !=
, >
and >=
respectively.
Note that implementing any of these methods will cause Python not to generate a default __hash__
implementation, so consider also implementing __hash__
.
__richcmp__(<self>, object, pyo3::basic::CompareOp) -> object
Implements Python comparison operations (==
, !=
, <
, <=
, >
, and >=
) in a single method.
+The CompareOp
argument indicates the comparison operation being performed. You can use
+CompareOp::matches
to adapt a Rust std::cmp::Ordering
result to the requested comparison.
This method cannot be implemented in combination with any of __lt__
, __le__
, __eq__
, __ne__
, __gt__
, or __ge__
.
Note that implementing __richcmp__
will cause Python not to generate a default __hash__
implementation, so consider implementing __hash__
when implementing __richcmp__
.
If you want to leave some operations unimplemented, you can return py.NotImplemented()
+for some of the operations:
use pyo3::class::basic::CompareOp;
+
+use pyo3::prelude::*;
+
+#[pyclass]
+struct Number(i32);
+
+#[pymethods]
+impl Number {
+ fn __richcmp__(&self, other: &Self, op: CompareOp, py: Python<'_>) -> PyObject {
+ match op {
+ CompareOp::Eq => (self.0 == other.0).into_py(py),
+ CompareOp::Ne => (self.0 != other.0).into_py(py),
+ _ => py.NotImplemented(),
+ }
+ }
+}
+If the second argument object
is not of the type specified in the
+signature, the generated code will automatically return NotImplemented
.
__getattr__(<self>, object) -> object
__getattribute__(<self>, object) -> object
__setattr__(<self>, value: object) -> ()
__delattr__(<self>, object) -> ()
Overrides attribute access.
+__bool__(<self>) -> bool
Determines the "truthyness" of an object.
+__call__(<self>, ...) -> object
- here, any argument list can be defined
+as for normal pymethods
Iterators can be defined using these methods:
+__iter__(<self>) -> object
__next__(<self>) -> Option<object> or IterNextOutput
(see details)Returning None
from __next__
indicates that that there are no further items.
Example:
+use pyo3::prelude::*;
+
+#[pyclass]
+struct MyIterator {
+ iter: Box<dyn Iterator<Item = PyObject> + Send>,
+}
+
+#[pymethods]
+impl MyIterator {
+ fn __iter__(slf: PyRef<'_, Self>) -> PyRef<'_, Self> {
+ slf
+ }
+ fn __next__(mut slf: PyRefMut<'_, Self>) -> Option<PyObject> {
+ slf.iter.next()
+ }
+}
+In many cases you'll have a distinction between the type being iterated over
+(i.e. the iterable) and the iterator it provides. In this case, the iterable
+only needs to implement __iter__()
while the iterator must implement both
+__iter__()
and __next__()
. For example:
use pyo3::prelude::*;
+
+#[pyclass]
+struct Iter {
+ inner: std::vec::IntoIter<usize>,
+}
+
+#[pymethods]
+impl Iter {
+ fn __iter__(slf: PyRef<'_, Self>) -> PyRef<'_, Self> {
+ slf
+ }
+
+ fn __next__(mut slf: PyRefMut<'_, Self>) -> Option<usize> {
+ slf.inner.next()
+ }
+}
+
+#[pyclass]
+struct Container {
+ iter: Vec<usize>,
+}
+
+#[pymethods]
+impl Container {
+ fn __iter__(slf: PyRef<'_, Self>) -> PyResult<Py<Iter>> {
+ let iter = Iter {
+ inner: slf.iter.clone().into_iter(),
+ };
+ Py::new(slf.py(), iter)
+ }
+}
+
+Python::with_gil(|py| {
+ let container = Container { iter: vec![1, 2, 3, 4] };
+ let inst = pyo3::Py::new(py, container).unwrap();
+ pyo3::py_run!(py, inst, "assert list(inst) == [1, 2, 3, 4]");
+ pyo3::py_run!(py, inst, "assert list(iter(iter(inst))) == [1, 2, 3, 4]");
+});
+For more details on Python's iteration protocols, check out the "Iterator Types" section of the library +documentation.
+This guide has so far shown how to use Option<T>
to implement yielding values
+during iteration. In Python a generator can also return a value. To express
+this in Rust, PyO3 provides the IterNextOutput
enum to both Yield
values
+and Return
a final value - see its docs for further details and an example.
__await__(<self>) -> object
__aiter__(<self>) -> object
__anext__(<self>) -> Option<object> or IterANextOutput
The magic methods in this section can be used to implement Python container types. They are two main categories of container in Python: "mappings" such as dict
, with arbitrary keys, and "sequences" such as list
and tuple
, with integer keys.
The Python C-API which PyO3 is built upon has separate "slots" for sequences and mappings. When writing a class
in pure Python, there is no such distinction in the implementation - a __getitem__
implementation will fill the slots for both the mapping and sequence forms, for example.
By default PyO3 reproduces the Python behaviour of filling both mapping and sequence slots. This makes sense for the "simple" case which matches Python, and also for sequences, where the mapping slot is used anyway to implement slice indexing.
+Mapping types usually will not want the sequence slots filled. Having them filled will lead to outcomes which may be unwanted, such as:
+PySequence
. This may lead to consumers of the type handling it incorrectly.__iter__
for sequences, which calls __getitem__
with consecutive positive integers starting from 0 until an IndexError
is returned. Unless the mapping only contains consecutive positive integer keys, this __iter__
implementation will likely not be the intended behavior.Use the #[pyclass(mapping)]
annotation to instruct PyO3 to only fill the mapping slots, leaving the sequence ones empty. This will apply to __getitem__
, __setitem__
, and __delitem__
.
Use the #[pyclass(sequence)]
annotation to instruct PyO3 to fill the sq_length
slot instead of the mp_length
slot for __len__
. This will help libraries such as numpy
recognise the class as a sequence, however will also cause CPython to automatically add the sequence length to any negative indices before passing them to __getitem__
. (__getitem__
, __setitem__
and __delitem__
mapping slots are still used for sequences, for slice operations.)
__len__(<self>) -> usize
Implements the built-in function len()
.
__contains__(<self>, object) -> bool
Implements membership test operators.
+Should return true if item
is in self
, false otherwise.
+For objects that don’t define __contains__()
, the membership test simply
+traverses the sequence until it finds a match.
By default, all #[pyclass]
types with an __iter__
method support a
+default implementation of the in
operator. Types which do not want this
+can override this by setting __contains__
to None
. This is the same
+mechanism as for a pure-Python class. This is done like so:
use pyo3::prelude::*;
+
+#[pyclass]
+struct NoContains {}
+
+#[pymethods]
+impl NoContains {
+ #[classattr]
+ const __contains__: Option<PyObject> = None;
+}
+__getitem__(<self>, object) -> object
Implements retrieval of the self[a]
element.
Note: Negative integer indexes are not handled specially by PyO3.
+However, for classes with #[pyclass(sequence)]
, when a negative index is
+accessed via PySequence::get_item
, the underlying C API already adjusts
+the index to be positive.
__setitem__(<self>, object, object) -> ()
Implements assignment to the self[a]
element.
+Should only be implemented if elements can be replaced.
Same behavior regarding negative indices as for __getitem__
.
__delitem__(<self>, object) -> ()
Implements deletion of the self[a]
element.
+Should only be implemented if elements can be deleted.
Same behavior regarding negative indices as for __getitem__
.
fn __concat__(&self, other: impl FromPyObject) -> PyResult<impl ToPyObject>
Concatenates two sequences.
+Used by the +
operator, after trying the numeric addition via
+the __add__
and __radd__
methods.
fn __repeat__(&self, count: isize) -> PyResult<impl ToPyObject>
Repeats the sequence count
times.
+Used by the *
operator, after trying the numeric multiplication via
+the __mul__
and __rmul__
methods.
fn __inplace_concat__(&self, other: impl FromPyObject) -> PyResult<impl ToPyObject>
Concatenates two sequences.
+Used by the +=
operator, after trying the numeric addition via
+the __iadd__
method.
fn __inplace_repeat__(&self, count: isize) -> PyResult<impl ToPyObject>
Concatenates two sequences.
+Used by the *=
operator, after trying the numeric multiplication via
+the __imul__
method.
__get__(<self>, object, object) -> object
__set__(<self>, object, object) -> ()
__delete__(<self>, object) -> ()
Binary arithmetic operations (+
, -
, *
, @
, /
, //
, %
, divmod()
,
+pow()
and **
, <<
, >>
, &
, ^
, and |
) and their reflected versions:
(If the object
is not of the type specified in the signature, the generated code
+will automatically return NotImplemented
.)
__add__(<self>, object) -> object
__radd__(<self>, object) -> object
__sub__(<self>, object) -> object
__rsub__(<self>, object) -> object
__mul__(<self>, object) -> object
__rmul__(<self>, object) -> object
__matmul__(<self>, object) -> object
__rmatmul__(<self>, object) -> object
__floordiv__(<self>, object) -> object
__rfloordiv__(<self>, object) -> object
__truediv__(<self>, object) -> object
__rtruediv__(<self>, object) -> object
__divmod__(<self>, object) -> object
__rdivmod__(<self>, object) -> object
__mod__(<self>, object) -> object
__rmod__(<self>, object) -> object
__lshift__(<self>, object) -> object
__rlshift__(<self>, object) -> object
__rshift__(<self>, object) -> object
__rrshift__(<self>, object) -> object
__and__(<self>, object) -> object
__rand__(<self>, object) -> object
__xor__(<self>, object) -> object
__rxor__(<self>, object) -> object
__or__(<self>, object) -> object
__ror__(<self>, object) -> object
__pow__(<self>, object, object) -> object
__rpow__(<self>, object, object) -> object
In-place assignment operations (+=
, -=
, *=
, @=
, /=
, //=
, %=
,
+**=
, <<=
, >>=
, &=
, ^=
, |=
):
__iadd__(<self>, object) -> ()
__isub__(<self>, object) -> ()
__imul__(<self>, object) -> ()
__imatmul__(<self>, object) -> ()
__itruediv__(<self>, object) -> ()
__ifloordiv__(<self>, object) -> ()
__imod__(<self>, object) -> ()
__ipow__(<self>, object, object) -> ()
__ilshift__(<self>, object) -> ()
__irshift__(<self>, object) -> ()
__iand__(<self>, object) -> ()
__ixor__(<self>, object) -> ()
__ior__(<self>, object) -> ()
Unary operations (-
, +
, abs()
and ~
):
__pos__(<self>) -> object
__neg__(<self>) -> object
__abs__(<self>) -> object
__invert__(<self>) -> object
Coercions:
+__index__(<self>) -> object (int)
__int__(<self>) -> object (int)
__float__(<self>) -> object (float)
__getbuffer__(<self>, *mut ffi::Py_buffer, flags) -> ()
__releasebuffer__(<self>, *mut ffi::Py_buffer) -> ()
+Errors returned from __releasebuffer__
will be sent to sys.unraiseablehook
. It is strongly advised to never return an error from __releasebuffer__
, and if it really is necessary, to make best effort to perform any required freeing operations before returning. __releasebuffer__
will not be called a second time; anything not freed will be leaked.If your type owns references to other Python objects, you will need to integrate
+with Python's garbage collector so that the GC is aware of those references. To
+do this, implement the two methods __traverse__
and __clear__
. These
+correspond to the slots tp_traverse
and tp_clear
in the Python C API.
+__traverse__
must call visit.call()
for each reference to another Python
+object. __clear__
must clear out any mutable references to other Python
+objects (thus breaking reference cycles). Immutable references do not have to be
+cleared, as every cycle must contain at least one mutable reference.
__traverse__(<self>, pyo3::class::gc::PyVisit<'_>) -> Result<(), pyo3::class::gc::PyTraverseError>
__clear__(<self>) -> ()
Example:
+use pyo3::prelude::*;
+use pyo3::PyTraverseError;
+use pyo3::gc::PyVisit;
+
+#[pyclass]
+struct ClassWithGCSupport {
+ obj: Option<PyObject>,
+}
+
+#[pymethods]
+impl ClassWithGCSupport {
+ fn __traverse__(&self, visit: PyVisit<'_>) -> Result<(), PyTraverseError> {
+ if let Some(obj) = &self.obj {
+ visit.call(obj)?
+ }
+ Ok(())
+ }
+
+ fn __clear__(&mut self) {
+ // Clear reference, this decrements ref counter.
+ self.obj = None;
+ }
+}
+Usually, an implementation of __traverse__
should do nothing but calls to visit.call
.
+Most importantly, safe access to the GIL is prohibited inside implementations of __traverse__
,
+i.e. Python::with_gil
will panic.
++ +Note: these methods are part of the C API, PyPy does not necessarily honor them. If you are building for PyPy you should measure memory consumption to make sure you do not have runaway memory growth. See this issue on the PyPy bug tracker.
+
Thank you for your interest in contributing to PyO3! All are welcome - please consider reading our Code of Conduct to keep our community positive and inclusive.
+If you are searching for ideas how to contribute, proceed to the "Getting started contributing" section. If you have found a specific issue to contribute to and need information about the development process, you may find the section "Writing pull requests" helpful.
+If you want to become familiar with the codebase, see +Architecture.md.
+Please join in with any part of PyO3 which interests you. We use GitHub issues to record all bugs and ideas. Feel free to request an issue to be assigned to you if you want to work on it.
+You can browse the API of the non-public parts of PyO3 here.
+The following sections also contain specific ideas on where to start contributing to PyO3.
+To work and develop PyO3, you need Python & Rust installed on your system.
+nox
is used to automate many of our CI tasks.The PyO3 Discord server is very active with users who are new to PyO3, and often completely new to Rust. Helping them debug is a great way to get experience with the PyO3 codebase.
+Helping others often reveals bugs, documentation weaknesses, and missing APIs. It's a good idea to open GitHub issues for these immediately so the resolution can be designed and implemented!
+Issues where the solution is clear and work is not in progress use the needs-implementer label.
+Don't be afraid if the solution is not clear to you! The core PyO3 contributors will be happy to mentor you through any questions you have to help you write the solution.
+PyO3 has a user guide (using mdbook) as well as the usual Rust API docs. The aim is for both of these to be detailed, easy to understand, and up-to-date. Pull requests are always welcome to fix typos, change wording, add examples, etc.
+There are some specific areas of focus where help is currently needed for the documentation:
+To build the docs (including all features), install nox
and then run
nox -s docs -- open
+
+We use lots of code blocks in our docs. Run cargo test --doc
when making changes to check that
+the doctests still work, or cargo test
to run all the tests including doctests. See
+https://doc.rust-lang.org/rustdoc/documentation-tests.html for a guide on doctests.
You can preview the user guide by building it locally with mdbook
.
First, install mdbook
and nox
. Then, run
nox -s build-guide -- --open
+
+To check all links in the guide are valid, also install lychee
and use the check-guide
session instead:
nox -s check-guide
+
+Issues which don't yet have a clear solution use the needs-design label.
+If any of these issues interest you, please join in with the conversation on the issue! All opinions are valued, and if you're interested in going further with e.g. draft PRs to experiment with API designs, even better!
+Everybody is welcome to submit comments on open PRs. Please help ensure new PyO3 APIs are safe, performant, tidy, and easy to use!
+Here are a few things to note when you are writing PRs.
+The PyO3 repo uses GitHub Actions. PRs are blocked from merging if CI is not successful. Formatting, linting and tests are checked for all Rust and Python code. In addition, all warnings in Rust code are disallowed (using RUSTFLAGS="-D warnings"
).
Tests run with all supported Python versions with the latest stable Rust compiler, as well as for Python 3.9 with the minimum supported Rust version.
+If you are adding a new feature, you should add it to the full
feature in our Cargo.toml* so that it is tested in CI.
You can run these tests yourself with
+nox
. Use nox -l
to list the full set of subcommands you can run.
nox -s ruff
nox -s rustfmt
cargo semver-checks check-release
nox -s clippy-all
cargo test --features full
nox -s check-feature-powerset
PyO3 uses trybuild
to develop UI tests to capture error messages from the Rust compiler for some of the macro functionality.
Because there are several feature combinations for these UI tests, when updating them all (e.g. for a new Rust compiler version) it may be helpful to use the update-ui-tests
nox session:
nox -s update-ui-tests
+
+We use towncrier to generate a CHANGELOG for each release.
+To include your changes in the release notes, you should create one (or more) news items in the newsfragments
directory. Valid news items should be saved as <PR>.<CATEGORY>.md
where <PR>
is the pull request number and <CATEGORY>
is one of the following:
packaging
- for dependency changes and Python / Rust version compatibility changesadded
- for new featureschanged
- for features which already existed but have been altered or deprecatedremoved
- for features which have been removedfixed
- for "changed" features which were classed as a bugfixDocs-only PRs do not need news items; start your PR title with docs:
to skip the check.
PyO3 has a lot of generic APIs to increase usability. These can come at the cost of generic code bloat. Where reasonable, try to implement a concrete sub-portion of generic functions. There are two forms of this:
+inner
and keep it as a local to the function._foo
and place it directly below foo
in the source code (where foo
is the original generic function).PyO3 makes a lot of FFI calls to Python's C API using raw pointers. Where possible try to avoid using pointers-to-temporaries in expressions:
+// dangerous
+pyo3::ffi::Something(name.to_object(py).as_ptr());
+
+// because the following refactoring is a use-after-free error:
+let name = name.to_object(py).as_ptr();
+pyo3::ffi::Something(name)
+Instead, prefer to bind the safe owned PyObject
wrapper before passing to ffi functions:
let name: PyObject = name.to_object(py);
+pyo3::ffi::Something(name.as_ptr())
+// name will automatically be freed when it falls out of scope
+PyO3 aims to keep sufficient compatibility to make packaging Python extensions built with PyO3 feasible on most common package managers.
+To keep package maintainers' lives simpler, PyO3 will commit, wherever possible, to only adjust minimum supported Rust and Python versions at the same time. This bump will only come in an 0.x
release, roughly once per year, after the oldest supported Python version reaches its end-of-life. (Check https://endoflife.date/python for a clear timetable on these.)
Below are guidelines on what compatibility all PRs are expected to deliver for each language.
+PyO3 supports all officially supported Python versions, as well as the latest PyPy3 release. All of these versions are tested in CI.
+PyO3 aims to make use of up-to-date Rust language features to keep the implementation as efficient as possible.
+The minimum Rust version supported will be decided when the release which bumps Python and Rust versions is made. At the time, the minimum Rust version will be set no higher than the lowest Rust version shipped in the current Debian, RHEL and Alpine Linux distributions.
+CI tests both the most recent stable Rust version and the minimum supported Rust version. Because of Rust's stability guarantees this is sufficient to confirm support for all Rust versions in between.
+PyO3 has two sets of benchmarks for evaluating some aspects of its performance. The benchmark suite is currently very small - please open PRs with new benchmarks if you're interested in helping to expand it!
+First, there are Rust-based benchmarks located in the pyo3-benches
subdirectory. You can run these benchmarks with:
nox -s bench
+
+Second, there is a Python-based benchmark contained in the pytests
subdirectory. You can read more about it here.
You can view what code is and isn't covered by PyO3's tests. We aim to have 100% coverage - please check coverage and add tests if you notice a lack of coverage!
+nox
.cargo install cargo-llvm-cov
+cargo llvm-cov
+
+lcov.info
file withnox -s coverage -- lcov
+
+You can install an IDE plugin to view the coverage. For example, if you use VSCode:
+settings.json
:{
+ "coverage-gutters.coverageFileNames": [
+ "lcov.info",
+ "cov.xml",
+ "coverage.xml",
+ ],
+ "coverage-gutters.showLineCoverage": true
+}
+
+At the moment there is no official organisation that accepts sponsorship on PyO3's behalf. If you're seeking to provide significant funding to the PyO3 ecosystem, please reach out to us on GitHub or Discord and we can discuss.
+In the meanwhile, some of our maintainers have personal GitHub sponsorship pages and would be grateful for your support:
+In this portion of the guide we'll talk about the mapping of Python types to Rust types offered by PyO3, as well as the traits available to perform conversions between them.
+ +When writing functions callable from Python (such as a #[pyfunction]
or in a #[pymethods]
block), the trait FromPyObject
is required for function arguments, and IntoPy<PyObject>
is required for function return values.
Consult the tables in the following section to find the Rust types provided by PyO3 which implement these traits.
+When accepting a function argument, it is possible to either use Rust library types or PyO3's Python-native types. (See the next section for discussion on when to use each.)
+The table below contains the Python type and the corresponding function argument types that will accept them:
+Python | Rust | Rust (Python-native) |
---|---|---|
object | - | PyAny |
str | String , Cow<str> , &str , char , OsString , PathBuf , Path | PyString , PyUnicode |
bytes | Vec<u8> , &[u8] , Cow<[u8]> | PyBytes |
bool | bool | PyBool |
int | i8 , u8 , i16 , u16 , i32 , u32 , i64 , u64 , i128 , u128 , isize , usize , num_bigint::BigInt 1, num_bigint::BigUint 1 | PyLong |
float | f32 , f64 | PyFloat |
complex | num_complex::Complex 2 | PyComplex |
fractions.Fraction | num_rational::Ratio 3 | - |
list[T] | Vec<T> | PyList |
dict[K, V] | HashMap<K, V> , BTreeMap<K, V> , hashbrown::HashMap<K, V> 4, indexmap::IndexMap<K, V> 5 | PyDict |
tuple[T, U] | (T, U) , Vec<T> | PyTuple |
set[T] | HashSet<T> , BTreeSet<T> , hashbrown::HashSet<T> 4 | PySet |
frozenset[T] | HashSet<T> , BTreeSet<T> , hashbrown::HashSet<T> 4 | PyFrozenSet |
bytearray | Vec<u8> , Cow<[u8]> | PyByteArray |
slice | - | PySlice |
type | - | PyType |
module | - | PyModule |
collections.abc.Buffer | - | PyBuffer<T> |
datetime.datetime | SystemTime , chrono::DateTime<Tz> 6, chrono::NaiveDateTime 6 | PyDateTime |
datetime.date | chrono::NaiveDate 6 | PyDate |
datetime.time | chrono::NaiveTime 6 | PyTime |
datetime.tzinfo | chrono::FixedOffset 6, chrono::Utc 6, chrono_tz::TimeZone 7 | PyTzInfo |
datetime.timedelta | Duration , chrono::Duration 6 | PyDelta |
decimal.Decimal | rust_decimal::Decimal 8 | - |
ipaddress.IPv4Address | std::net::IpAddr , std::net::Ipv4Addr | - |
ipaddress.IPv6Address | std::net::IpAddr , std::net::Ipv6Addr | - |
os.PathLike | PathBuf , Path | PyString , PyUnicode |
pathlib.Path | PathBuf , Path | PyString , PyUnicode |
typing.Optional[T] | Option<T> | - |
typing.Sequence[T] | Vec<T> | PySequence |
typing.Mapping[K, V] | HashMap<K, V> , BTreeMap<K, V> , hashbrown::HashMap<K, V> 4, indexmap::IndexMap<K, V> 5 | &PyMapping |
typing.Iterator[Any] | - | PyIterator |
typing.Union[...] | See #[derive(FromPyObject)] | - |
It is also worth remembering the following special types:
+What | Description |
---|---|
Python<'py> | A GIL token, used to pass to PyO3 constructors to prove ownership of the GIL. |
Bound<'py, T> | A Python object connected to the GIL lifetime. This provides access to most of PyO3's APIs. |
Py<T> | A Python object isolated from the GIL lifetime. This can be sent to other threads. |
PyObject | An alias for Py<PyAny> |
PyRef<T> | A #[pyclass] borrowed immutably. |
PyRefMut<T> | A #[pyclass] borrowed mutably. |
For more detail on accepting #[pyclass]
values as function arguments, see the section of this guide on Python Classes.
Using Rust library types as function arguments will incur a conversion cost compared to using the Python-native types. Using the Python-native types is almost zero-cost (they just require a type check similar to the Python builtin function isinstance()
).
However, once that conversion cost has been paid, the Rust standard library types offer a number of benefits:
+Python::allow_threads
to release the Python GIL and let other Python threads make progress while your Rust code is executing.Vec<i32>
, which will only accept a Python list
containing integers. The Python-native equivalent, &PyList
, would accept a Python list
containing Python objects of any type.For most PyO3 usage the conversion cost is worth paying to get these benefits. As always, if you're not sure it's worth it in your case, benchmark it!
+When returning values from functions callable from Python, PyO3's smart pointers (Py<T>
, Bound<'py, T>
, and Borrowed<'a, 'py, T>
) can be used with zero cost.
Because Bound<'py, T>
and Borrowed<'a, 'py, T>
have lifetime parameters, the Rust compiler may ask for lifetime annotations to be added to your function. See the section of the guide dedicated to this.
If your function is fallible, it should return PyResult<T>
or Result<T, E>
where E
implements From<E> for PyErr
. This will raise a Python
exception if the Err
variant is returned.
Finally, the following Rust types are also able to convert to Python as return values:
+Rust type | Resulting Python Type |
---|---|
String | str |
&str | str |
bool | bool |
Any integer type (i32 , u32 , usize , etc) | int |
f32 , f64 | float |
Option<T> | Optional[T] |
(T, U) | Tuple[T, U] |
Vec<T> | List[T] |
Cow<[u8]> | bytes |
HashMap<K, V> | Dict[K, V] |
BTreeMap<K, V> | Dict[K, V] |
HashSet<T> | Set[T] |
BTreeSet<T> | Set[T] |
Py<T> | T |
Bound<T> | T |
PyRef<T: PyClass> | T |
PyRefMut<T: PyClass> | T |
1. Requires the num-bigint optional feature.
2. Requires the num-complex optional feature.
3. Requires the num-rational optional feature.
4. Requires the hashbrown optional feature.
5. Requires the indexmap optional feature.
6. Requires the chrono optional feature.
7. Requires the chrono-tz optional feature.
8. Requires the rust_decimal optional feature.
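These optional features are Cargo features of the pyo3 crate. A manifest enabling two of them might look like this (an illustrative sketch; the version number is an example, not taken from this guide):

```toml
# Illustrative Cargo.toml fragment
[dependencies]
pyo3 = { version = "0.22", features = ["num-bigint", "chrono"] }
```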
PyO3 provides some handy traits to convert between Python types and Rust types.
+.extract()
and the FromPyObject
traitThe easiest way to convert a Python object to a Rust value is using
+.extract()
. It returns a PyResult
with a type error if the conversion
+fails, so usually you will use something like
use pyo3::prelude::*;
+use pyo3::types::PyList;
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| {
+ let list = PyList::new_bound(py, b"foo");
+let v: Vec<i32> = list.extract()?;
+ assert_eq!(&v, &[102, 111, 111]);
+ Ok(())
+ })
+}
+This method is available for many Python object types, and can produce a wide
+variety of Rust types, which you can check out in the implementor list of
+FromPyObject
.
FromPyObject
is also implemented for your own Rust types wrapped as Python
+objects (see the chapter about classes). There, in order to both be
+able to operate on mutable references and satisfy Rust's rules of non-aliasing
+mutable references, you have to extract the PyO3 reference wrappers PyRef
+and PyRefMut
. They work like the reference wrappers of
+std::cell::RefCell
and ensure (at runtime) that Rust borrows are allowed.
FromPyObject
FromPyObject
can be automatically derived for many kinds of structs and enums
+if the member types themselves implement FromPyObject
. This even includes members
+with a generic type T: FromPyObject
. Derivation for empty enums, enum variants and
+structs is not supported.
FromPyObject
for structsThe derivation generates code that will attempt to access the attribute my_string
on
+the Python object, i.e. obj.getattr("my_string")
, and call extract()
on the attribute.
use pyo3::prelude::*;
+
+#[derive(FromPyObject)]
+struct RustyStruct {
+ my_string: String,
+}
+
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| -> PyResult<()> {
+ let module = PyModule::from_code_bound(
+ py,
+ "class Foo:
+ def __init__(self):
+ self.my_string = 'test'",
+ "",
+ "",
+ )?;
+
+ let class = module.getattr("Foo")?;
+ let instance = class.call0()?;
+ let rustystruct: RustyStruct = instance.extract()?;
+ assert_eq!(rustystruct.my_string, "test");
+ Ok(())
+ })
+}
+By setting the #[pyo3(item)]
attribute on the field, PyO3 will attempt to extract the value by calling the get_item
method on the Python object.
use pyo3::prelude::*;
+
+#[derive(FromPyObject)]
+struct RustyStruct {
+ #[pyo3(item)]
+ my_string: String,
+}
+
+use pyo3::types::PyDict;
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| -> PyResult<()> {
+ let dict = PyDict::new_bound(py);
+ dict.set_item("my_string", "test")?;
+
+ let rustystruct: RustyStruct = dict.extract()?;
+ assert_eq!(rustystruct.my_string, "test");
+ Ok(())
+ })
+}
+The argument passed to getattr
and get_item
can also be configured:
use pyo3::prelude::*;
+
+#[derive(FromPyObject)]
+struct RustyStruct {
+ #[pyo3(item("key"))]
+ string_in_mapping: String,
+ #[pyo3(attribute("name"))]
+ string_attr: String,
+}
+
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| -> PyResult<()> {
+ let module = PyModule::from_code_bound(
+ py,
+ "class Foo(dict):
+ def __init__(self):
+ self.name = 'test'
+ self['key'] = 'test2'",
+ "",
+ "",
+ )?;
+
+ let class = module.getattr("Foo")?;
+ let instance = class.call0()?;
+ let rustystruct: RustyStruct = instance.extract()?;
+ assert_eq!(rustystruct.string_attr, "test");
+ assert_eq!(rustystruct.string_in_mapping, "test2");
+
+ Ok(())
+ })
+}
+This tries to extract string_attr
from the attribute name
and string_in_mapping
+from a mapping with the key "key"
. The arguments for attribute
are restricted to
+non-empty string literals while item
can take any valid literal that implements
+ToBorrowedObject
.
You can use #[pyo3(from_item_all)]
on a struct to extract every field with the get_item
method.
+In this case, you can't use #[pyo3(attribute)]
, and a bare #[pyo3(item)] is redundant
on any field.
+However, using #[pyo3(item("key"))]
to specify the key for a field is still allowed.
use pyo3::prelude::*;
+
+#[derive(FromPyObject)]
+#[pyo3(from_item_all)]
+struct RustyStruct {
+ foo: String,
+ bar: String,
+ #[pyo3(item("foobar"))]
+ baz: String,
+}
+
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| -> PyResult<()> {
+ let py_dict = py.eval_bound("{'foo': 'foo', 'bar': 'bar', 'foobar': 'foobar'}", None, None)?;
+ let rustystruct: RustyStruct = py_dict.extract()?;
+ assert_eq!(rustystruct.foo, "foo");
+ assert_eq!(rustystruct.bar, "bar");
+ assert_eq!(rustystruct.baz, "foobar");
+
+ Ok(())
+ })
+}
+FromPyObject
for tuple structsTuple structs are also supported but do not allow customizing the extraction. The input is
+always assumed to be a Python tuple with the same length as the Rust type, the n
th field
+is extracted from the n
th item in the Python tuple.
use pyo3::prelude::*;
+
+#[derive(FromPyObject)]
+struct RustyTuple(String, String);
+
+use pyo3::types::PyTuple;
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| -> PyResult<()> {
+ let tuple = PyTuple::new_bound(py, vec!["test", "test2"]);
+
+ let rustytuple: RustyTuple = tuple.extract()?;
+ assert_eq!(rustytuple.0, "test");
+ assert_eq!(rustytuple.1, "test2");
+
+ Ok(())
+ })
+}
+Tuple structs with a single field are treated as wrapper types which are described in the +following section. To override this behaviour and ensure that the input is in fact a tuple, +specify the struct as
+use pyo3::prelude::*;
+
+#[derive(FromPyObject)]
+struct RustyTuple((String,));
+
+use pyo3::types::PyTuple;
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| -> PyResult<()> {
+ let tuple = PyTuple::new_bound(py, vec!["test"]);
+
+ let rustytuple: RustyTuple = tuple.extract()?;
+ assert_eq!((rustytuple.0).0, "test");
+
+ Ok(())
+ })
+}
+FromPyObject
for wrapper typesThe pyo3(transparent)
attribute can be used on structs with exactly one field. This results
+in extracting directly from the input object, i.e. obj.extract()
, rather than trying to access
+an item or attribute. This behaviour is enabled by default for newtype structs and tuple-variants
+with a single field.
use pyo3::prelude::*;
+
+#[derive(FromPyObject)]
+struct RustyTransparentTupleStruct(String);
+
+#[derive(FromPyObject)]
+#[pyo3(transparent)]
+struct RustyTransparentStruct {
+ inner: String,
+}
+
+use pyo3::types::PyString;
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| -> PyResult<()> {
+ let s = PyString::new_bound(py, "test");
+
+ let tup: RustyTransparentTupleStruct = s.extract()?;
+ assert_eq!(tup.0, "test");
+
+ let stru: RustyTransparentStruct = s.extract()?;
+ assert_eq!(stru.inner, "test");
+
+ Ok(())
+ })
+}
+FromPyObject
for enumsThe FromPyObject
derivation for enums generates code that tries to extract the variants in the
+order of the fields. As soon as a variant can be extracted successfully, that variant is returned.
+This makes it possible to extract Python union types like str | int
.
The same customizations and restrictions described for struct derivations apply to enum variants,
+i.e. a tuple variant assumes that the input is a Python tuple, and a struct variant defaults to
+extracting fields as attributes but can be configured in the same manner. The transparent
+attribute can be applied to single-field-variants.
use pyo3::prelude::*;
+
+#[derive(FromPyObject)]
+#[derive(Debug)]
+enum RustyEnum<'py> {
+ Int(usize), // input is a positive int
+ String(String), // input is a string
+ IntTuple(usize, usize), // input is a 2-tuple with positive ints
+ StringIntTuple(String, usize), // input is a 2-tuple with String and int
+ Coordinates3d {
+ // needs to be in front of 2d
+ x: usize,
+ y: usize,
+ z: usize,
+ },
+ Coordinates2d {
+ // only gets checked if the input did not have `z`
+ #[pyo3(attribute("x"))]
+ a: usize,
+ #[pyo3(attribute("y"))]
+ b: usize,
+ },
+ #[pyo3(transparent)]
+ CatchAll(Bound<'py, PyAny>), // This extraction never fails
+}
+
+use pyo3::types::{PyBytes, PyString};
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| -> PyResult<()> {
+ {
+ let thing = 42_u8.to_object(py);
+ let rust_thing: RustyEnum<'_> = thing.extract(py)?;
+
+ assert_eq!(
+ 42,
+ match rust_thing {
+ RustyEnum::Int(i) => i,
+ other => unreachable!("Error extracting: {:?}", other),
+ }
+ );
+ }
+ {
+ let thing = PyString::new_bound(py, "text");
+ let rust_thing: RustyEnum<'_> = thing.extract()?;
+
+ assert_eq!(
+ "text",
+ match rust_thing {
+ RustyEnum::String(i) => i,
+ other => unreachable!("Error extracting: {:?}", other),
+ }
+ );
+ }
+ {
+ let thing = (32_u8, 73_u8).to_object(py);
+ let rust_thing: RustyEnum<'_> = thing.extract(py)?;
+
+ assert_eq!(
+ (32, 73),
+ match rust_thing {
+ RustyEnum::IntTuple(i, j) => (i, j),
+ other => unreachable!("Error extracting: {:?}", other),
+ }
+ );
+ }
+ {
+ let thing = ("foo", 73_u8).to_object(py);
+ let rust_thing: RustyEnum<'_> = thing.extract(py)?;
+
+ assert_eq!(
+ (String::from("foo"), 73),
+ match rust_thing {
+ RustyEnum::StringIntTuple(i, j) => (i, j),
+ other => unreachable!("Error extracting: {:?}", other),
+ }
+ );
+ }
+ {
+ let module = PyModule::from_code_bound(
+ py,
+ "class Foo(dict):
+ def __init__(self):
+ self.x = 0
+ self.y = 1
+ self.z = 2",
+ "",
+ "",
+ )?;
+
+ let class = module.getattr("Foo")?;
+ let instance = class.call0()?;
+ let rust_thing: RustyEnum<'_> = instance.extract()?;
+
+ assert_eq!(
+ (0, 1, 2),
+ match rust_thing {
+ RustyEnum::Coordinates3d { x, y, z } => (x, y, z),
+ other => unreachable!("Error extracting: {:?}", other),
+ }
+ );
+ }
+
+ {
+ let module = PyModule::from_code_bound(
+ py,
+ "class Foo(dict):
+ def __init__(self):
+ self.x = 3
+ self.y = 4",
+ "",
+ "",
+ )?;
+
+ let class = module.getattr("Foo")?;
+ let instance = class.call0()?;
+ let rust_thing: RustyEnum<'_> = instance.extract()?;
+
+ assert_eq!(
+ (3, 4),
+ match rust_thing {
+ RustyEnum::Coordinates2d { a, b } => (a, b),
+ other => unreachable!("Error extracting: {:?}", other),
+ }
+ );
+ }
+
+ {
+ let thing = PyBytes::new_bound(py, b"text");
+ let rust_thing: RustyEnum<'_> = thing.extract()?;
+
+ assert_eq!(
+ b"text",
+ match rust_thing {
+ RustyEnum::CatchAll(ref i) => i.downcast::<PyBytes>()?.as_bytes(),
+ other => unreachable!("Error extracting: {:?}", other),
+ }
+ );
+ }
+ Ok(())
+ })
+}
+If none of the enum variants match, a PyTypeError
containing the names of the
+tested variants is returned. The names reported in the error message can be customized
+through the #[pyo3(annotation = "name")]
attribute, e.g. to use conventional Python type
+names:
use pyo3::prelude::*;
+
+#[derive(FromPyObject)]
+#[derive(Debug)]
+enum RustyEnum {
+ #[pyo3(transparent, annotation = "str")]
+ String(String),
+ #[pyo3(transparent, annotation = "int")]
+ Int(isize),
+}
+
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| -> PyResult<()> {
+ {
+ let thing = 42_u8.to_object(py);
+ let rust_thing: RustyEnum = thing.extract(py)?;
+
+ assert_eq!(
+ 42,
+ match rust_thing {
+ RustyEnum::Int(i) => i,
+ other => unreachable!("Error extracting: {:?}", other),
+ }
+ );
+ }
+
+ {
+ let thing = "foo".to_object(py);
+ let rust_thing: RustyEnum = thing.extract(py)?;
+
+ assert_eq!(
+ "foo",
+ match rust_thing {
+ RustyEnum::String(i) => i,
+ other => unreachable!("Error extracting: {:?}", other),
+ }
+ );
+ }
+
+ {
+ let thing = b"foo".to_object(py);
+ let error = thing.extract::<RustyEnum>(py).unwrap_err();
+ assert!(error.is_instance_of::<pyo3::exceptions::PyTypeError>(py));
+ }
+
+ Ok(())
+ })
+}
+If the input is neither a string nor an integer, the error message will be:
+"'<INPUT_TYPE>' cannot be converted to 'str | int'"
.
#[derive(FromPyObject)]
Container Attributespyo3(transparent)
+obj.extract()
instead of get_item()
or
+getattr()
pyo3(annotation = "name")
+pyo3("int")
reports the variant's type as int
.#[derive(FromPyObject)]
Field Attributespyo3(attribute)
, pyo3(attribute("name"))
+pyo3(item)
, pyo3(item("key"))
+ToBorrowedObject
pyo3(from_py_with = "...")
+fn(&Bound<PyAny>) -> PyResult<T>
where T
is the Rust type of the argument.IntoPy<T>
This trait defines the to-python conversion for a Rust type. It is usually implemented as
+IntoPy<PyObject>
, which is the trait needed for returning a value from #[pyfunction]
and
+#[pymethods]
.
All types in PyO3 implement this trait, as does a #[pyclass]
which doesn't use extends
.
Occasionally you may choose to implement this for custom types which are mapped to Python types without having a unique Python type.
+use pyo3::prelude::*;
+#[allow(dead_code)]
+struct MyPyObjectWrapper(PyObject);
+
+impl IntoPy<PyObject> for MyPyObjectWrapper {
+ fn into_py(self, py: Python<'_>) -> PyObject {
+ self.0
+ }
+}
+ToPyObject
traitToPyObject
is a conversion trait that allows various objects to be
+converted into PyObject
. IntoPy<PyObject>
serves the
+same purpose, except that it consumes self
.
PyO3's attributes (#[pyclass]
, #[pymodule]
, etc.) are procedural macros, which means that they rewrite the source of the annotated item. You can view the generated source with the following command, which also expands a few other things:
cargo rustc --profile=check -- -Z unstable-options --pretty=expanded > expanded.rs; rustfmt expanded.rs
+
+(You might need to install rustfmt if you don't already have it.)
+You can also debug classic !
-macros by adding -Z trace-macros
:
cargo rustc --profile=check -- -Z unstable-options --pretty=expanded -Z trace-macros > expanded.rs; rustfmt expanded.rs
+
+Note that those commands require using the nightly build of rust and may occasionally have bugs. See cargo expand for a more elaborate and stable version of those commands.
+Valgrind is a tool to detect memory management bugs such as memory leaks.
+You first need to install a debug build of Python, otherwise Valgrind won't produce usable results. In Ubuntu there's e.g. a python3-dbg
package.
Activate an environment with the debug interpreter and recompile. If you're on Linux, use ldd
with the name of your binary and check that you're linking e.g. libpython3.7d.so.1.0
instead of libpython3.7.so.1.0
.
Download the suppressions file for CPython.
+Run Valgrind with valgrind --suppressions=valgrind-python.supp ./my-command --with-options
The best start to investigate a crash such as a segmentation fault is a backtrace. You can set RUST_BACKTRACE=1
as an environment variable to get the stack trace on a panic!
. Alternatively you can use a debugger such as gdb
to explore the issue. Rust provides a wrapper, rust-gdb
, which has pretty-printers for inspecting Rust variables. Since PyO3 uses cdylib
for Python shared objects, it does not receive the pretty-print debug hooks in rust-gdb
(rust-lang/rust#96365). The mentioned issue contains a workaround for enabling pretty-printers in this case.
rust-gdb <my-binary>
b
) on rust_panic
if you are investigating a panic!
r
to runbt
or bt full
to print the stacktraceOften it is helpful to run a small piece of Python code to exercise a section of Rust.
+rust-gdb --args python -c "import my_package; my_package.sum_to_string(1, 2)"
+
+
+ This portion of the guide is dedicated to crates which are external to the main PyO3 project and provide additional functionality you might find useful.
+Because these projects evolve independently of the PyO3 repository the content of these articles may fall out of date over time; please file issues on the PyO3 GitHub to alert maintainers when this is the case.
+ +async
and await
async
/await
support is currently being integrated in PyO3. See the dedicated documentation
If you are working with a Python library that makes use of async functions or wish to provide
+Python bindings for an async Rust library, pyo3-asyncio
+likely has the tools you need. It provides conversions between async functions in both Python and
+Rust and was designed with first-class support for popular Rust runtimes such as
+tokio
and async-std
. In addition, all async Python
+code runs on the default asyncio
event loop, so pyo3-asyncio
should work just fine with existing
+Python libraries.
In the following sections, we'll give a general overview of pyo3-asyncio
explaining how to call
+async Python functions with PyO3, how to call async Rust functions from Python, and how to configure
+your codebase to manage the runtimes of both.
Here are some examples to get you started right away! A more detailed breakdown +of the concepts in these examples can be found in the following sections.
+Here we initialize the runtime, import Python's asyncio
library and run the given future to completion using Python's default EventLoop
and async-std
. Inside the future, we convert asyncio
sleep into a Rust future and await it.
# Cargo.toml dependencies
+[dependencies]
+pyo3 = { version = "0.14" }
+pyo3-asyncio = { version = "0.14", features = ["attributes", "async-std-runtime"] }
+async-std = "1.9"
+
+//! main.rs
+
+use pyo3::prelude::*;
+
+#[pyo3_asyncio::async_std::main]
+async fn main() -> PyResult<()> {
+ let fut = Python::with_gil(|py| {
+ let asyncio = py.import("asyncio")?;
+ // convert asyncio.sleep into a Rust Future
+ pyo3_asyncio::async_std::into_future(asyncio.call_method1("sleep", (1.into_py(py),))?)
+ })?;
+
+ fut.await?;
+
+ Ok(())
+}
+The same application can be written to use tokio
instead using the #[pyo3_asyncio::tokio::main]
+attribute.
# Cargo.toml dependencies
+[dependencies]
+pyo3 = { version = "0.14" }
+pyo3-asyncio = { version = "0.14", features = ["attributes", "tokio-runtime"] }
+tokio = "1.4"
+
+//! main.rs
+
+use pyo3::prelude::*;
+
+#[pyo3_asyncio::tokio::main]
+async fn main() -> PyResult<()> {
+ let fut = Python::with_gil(|py| {
+ let asyncio = py.import("asyncio")?;
+ // convert asyncio.sleep into a Rust Future
+ pyo3_asyncio::tokio::into_future(asyncio.call_method1("sleep", (1.into_py(py),))?)
+ })?;
+
+ fut.await?;
+
+ Ok(())
+}
+More details on the usage of this library can be found in the API docs and the primer below.
+PyO3 Asyncio can also be used to write native modules with async functions.
+Add the [lib]
section to Cargo.toml
to make your library a cdylib
that Python can import.
[lib]
+name = "my_async_module"
+crate-type = ["cdylib"]
+
+Make your project depend on pyo3
with the extension-module
feature enabled and select your
+pyo3-asyncio
runtime:
For async-std
:
[dependencies]
+pyo3 = { version = "0.14", features = ["extension-module"] }
+pyo3-asyncio = { version = "0.14", features = ["async-std-runtime"] }
+async-std = "1.9"
+
+For tokio
:
[dependencies]
+pyo3 = { version = "0.14", features = ["extension-module"] }
+pyo3-asyncio = { version = "0.14", features = ["tokio-runtime"] }
+tokio = "1.4"
+
+Export an async function that makes use of async-std
:
//! lib.rs
+
+use pyo3::{prelude::*, wrap_pyfunction};
+
+#[pyfunction]
+fn rust_sleep(py: Python<'_>) -> PyResult<&Bound<'_, PyAny>> {
+ pyo3_asyncio::async_std::future_into_py(py, async {
+ async_std::task::sleep(std::time::Duration::from_secs(1)).await;
+ Ok(Python::with_gil(|py| py.None()))
+ })
+}
+
+#[pymodule]
+fn my_async_module(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ m.add_function(wrap_pyfunction!(rust_sleep, m)?)?;
+
+ Ok(())
+}
+If you want to use tokio
instead, here's what your module should look like:
//! lib.rs
+
+use pyo3::{prelude::*, wrap_pyfunction};
+
+#[pyfunction]
+fn rust_sleep(py: Python<'_>) -> PyResult<&Bound<'_, PyAny>> {
+ pyo3_asyncio::tokio::future_into_py(py, async {
+ tokio::time::sleep(std::time::Duration::from_secs(1)).await;
+ Ok(Python::with_gil(|py| py.None()))
+ })
+}
+
+#[pymodule]
+fn my_async_module(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ m.add_function(wrap_pyfunction!(rust_sleep, m)?)?;
+ Ok(())
+}
+You can build your module with maturin (see the Using Rust in Python section in the PyO3 guide for setup instructions). After that you should be able to run the Python REPL to try it out.
+maturin develop && python3
+🔗 Found pyo3 bindings
+🐍 Found CPython 3.8 at python3
+ Finished dev [unoptimized + debuginfo] target(s) in 0.04s
+Python 3.8.5 (default, Jan 27 2021, 15:41:15)
+[GCC 9.3.0] on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> import asyncio
+>>>
+>>> from my_async_module import rust_sleep
+>>>
+>>> async def main():
+...     await rust_sleep()
+...
+>>> # should sleep for 1s
+>>> asyncio.run(main())
+>>>
+
+Let's take a look at a dead simple async Python function:
+# Sleep for 1 second
+async def py_sleep():
+ await asyncio.sleep(1)
+
+Async functions in Python are simply functions that return a coroutine
object. For our purposes,
+we really don't need to know much about these coroutine
objects. The key factor here is that calling
+an async
function is just like calling a regular function, the only difference is that we have
+to do something special with the object that it returns.
Normally in Python, that something special is the await
keyword, but in order to await this
+coroutine in Rust, we first need to convert it into Rust's version of a coroutine
: a Future
.
+That's where pyo3-asyncio
comes in.
+pyo3_asyncio::async_std::into_future
+performs this conversion for us.
The following example uses into_future
to call the py_sleep
function shown above and then await the
+coroutine object returned from the call:
use pyo3::prelude::*;
+
+#[pyo3_asyncio::tokio::main]
+async fn main() -> PyResult<()> {
+ let future = Python::with_gil(|py| -> PyResult<_> {
+ // import the module containing the py_sleep function
+ let example = py.import("example")?;
+
+ // calling the py_sleep method like a normal function
+ // returns a coroutine
+ let coroutine = example.call_method0("py_sleep")?;
+
+ // convert the coroutine into a Rust future using the
+ // tokio runtime
+ pyo3_asyncio::tokio::into_future(coroutine)
+ })?;
+
+ // await the future
+ future.await?;
+
+ Ok(())
+}
+Alternatively, the below example shows how to write a #[pyfunction]
which uses into_future
to receive and await
+a coroutine argument:
#[pyfunction]
+fn await_coro(coro: &Bound<'_, PyAny>) -> PyResult<()> {
+ // convert the coroutine into a Rust future using the
+ // async_std runtime
+ let f = pyo3_asyncio::async_std::into_future(coro)?;
+
+ pyo3_asyncio::async_std::run_until_complete(coro.py(), async move {
+ // await the future
+ f.await?;
+ Ok(())
+ })
+}
+This could be called from Python as:
+import asyncio
+
+async def py_sleep():
+    await asyncio.sleep(1)
+
+await_coro(py_sleep())
+
If you wanted to pass a callable function to the #[pyfunction]
instead (i.e. the last line becomes await_coro(py_sleep))
, then the above example needs to be tweaked to first call the callable to get the coroutine:
#[pyfunction]
+fn await_coro(callable: &Bound<'_, PyAny>) -> PyResult<()> {
+ // get the coroutine by calling the callable
+ let coro = callable.call0()?;
+
+ // convert the coroutine into a Rust future using the
+ // async_std runtime
+ let f = pyo3_asyncio::async_std::into_future(coro)?;
+
+ pyo3_asyncio::async_std::run_until_complete(coro.py(), async move {
+ // await the future
+ f.await?;
+ Ok(())
+ })
+}
+This can be particularly helpful where you need to repeatedly create and await a coroutine. Trying to await the same coroutine multiple times will raise an error:
+RuntimeError: cannot reuse already awaited coroutine
+
+++If you're interested in learning more about
+coroutines
andawaitables
in general, check out the +Python 3asyncio
docs for more information.
Here we have the same async function as before written in Rust using the
+async-std
runtime:
/// Sleep for 1 second
+async fn rust_sleep() {
+ async_std::task::sleep(std::time::Duration::from_secs(1)).await;
+}
+Similar to Python, Rust's async functions also return a special object called a
+Future
:
let future = rust_sleep();
+We can convert this Future
object into Python to make it awaitable
. This tells Python that you
+can use the await
keyword with it. In order to do this, we'll call
+pyo3_asyncio::async_std::future_into_py
:
use pyo3::prelude::*;
+
+async fn rust_sleep() {
+ async_std::task::sleep(std::time::Duration::from_secs(1)).await;
+}
+
+#[pyfunction]
+fn call_rust_sleep(py: Python<'_>) -> PyResult<&Bound<'_, PyAny>> {
+ pyo3_asyncio::async_std::future_into_py(py, async move {
+ rust_sleep().await;
+ Ok(Python::with_gil(|py| py.None()))
+ })
+}
+In Python, we can call this pyo3 function just like any other async function:
+from example import call_rust_sleep
+
+async def rust_sleep():
+ await call_rust_sleep()
+
+Python's event loop requires some special treatment, especially regarding the main thread. Some of
+Python's asyncio
features, like proper signal handling, require control over the main thread, which
+doesn't always play well with Rust.
Luckily, Rust's event loops are pretty flexible and don't need control over the main thread, so in
+pyo3-asyncio
, we decided the best way to handle Rust/Python interop was to just surrender the main
+thread to Python and run Rust's event loops in the background. Unfortunately, since most event loop
+implementations prefer control over the main thread, this can still make some things awkward.
Because Python needs to control the main thread, we can't use the convenient proc macros from Rust
+runtimes to handle the main
function or #[test]
functions. Instead, the initialization for PyO3 has to be done from the main
function and the main
+thread must block on pyo3_asyncio::async_std::run_until_complete
.
Because we have to block on one of those functions, we can't use #[async_std::main]
or #[tokio::main]
+since it's not a good idea to make long blocking calls during an async function.
++Internally, these
+#[main]
proc macros are expanded to something like this:+fn main() { + // your async main fn + async fn _main_impl() { /* ... */ } + Runtime::new().block_on(_main_impl()); +}
Making a long blocking call inside the
+Future
that's being driven byblock_on
prevents that +thread from doing anything else and can spell trouble for some runtimes (also this will actually +deadlock a single-threaded runtime!). Many runtimes have some sort ofspawn_blocking
mechanism +that can avoid this problem, but again that's not something we can use here since we need it to +block on the main thread.
For this reason, pyo3-asyncio
provides its own set of proc macros to provide you with this
+initialization. These macros are intended to mirror the initialization of async-std
and tokio
+while also satisfying the Python runtime's needs.
Here's a full example of PyO3 initialization with the async-std
runtime:
use pyo3::prelude::*;
+
+#[pyo3_asyncio::async_std::main]
+async fn main() -> PyResult<()> {
+ // PyO3 is initialized - Ready to go
+
+ let fut = Python::with_gil(|py| -> PyResult<_> {
+ let asyncio = py.import("asyncio")?;
+
+ // convert asyncio.sleep into a Rust Future
+ pyo3_asyncio::async_std::into_future(
+ asyncio.call_method1("sleep", (1.into_py(py),))?
+ )
+ })?;
+
+ fut.await?;
+
+ Ok(())
+}
+asyncio.run
In Python 3.7+, the recommended way to run a top-level coroutine with asyncio
+is with asyncio.run
. In v0.13
we recommended against using this function due to initialization issues, but in v0.14
it's perfectly valid to use this function... with a caveat.
Since our Rust <--> Python conversions require a reference to the Python event loop, this poses a problem. Imagine we have a PyO3 Asyncio module that defines
+a rust_sleep
function like in previous examples. You might rightfully assume that you can call pass this directly into asyncio.run
like this:
import asyncio
+
+from my_async_module import rust_sleep
+
+asyncio.run(rust_sleep())
+
+You might be surprised to find out that this throws an error:
+Traceback (most recent call last):
+ File "example.py", line 5, in <module>
+ asyncio.run(rust_sleep())
+RuntimeError: no running event loop
+
+What's happening here is that we are calling rust_sleep
before the future is
+actually running on the event loop created by asyncio.run
. This is counter-intuitive, but expected behaviour, and unfortunately there doesn't seem to be a good way of solving this problem within PyO3 Asyncio itself.
However, we can make this example work with a simple workaround:
+import asyncio
+
+from my_async_module import rust_sleep
+
+# Calling main will just construct the coroutine that later calls rust_sleep.
+# - This ensures that rust_sleep will be called when the event loop is running,
+# not before.
+async def main():
+ await rust_sleep()
+
+# Run the main() coroutine at the top-level instead
+asyncio.run(main())
+
+Python allows you to use alternatives to the default asyncio
event loop. One
+popular alternative is uvloop
. In v0.13
using non-standard event loops was
+a bit of an ordeal, but in v0.14
it's trivial.
uvloop
in a PyO3 Asyncio Native Extensions# Cargo.toml
+
+[lib]
+name = "my_async_module"
+crate-type = ["cdylib"]
+
+[dependencies]
+pyo3 = { version = "0.14", features = ["extension-module"] }
+pyo3-asyncio = { version = "0.14", features = ["tokio-runtime"] }
+async-std = "1.9"
+tokio = "1.4"
+
+//! lib.rs
+
+use pyo3::{prelude::*, wrap_pyfunction};
+
+#[pyfunction]
+fn rust_sleep(py: Python<'_>) -> PyResult<&Bound<'_, PyAny>> {
+ pyo3_asyncio::tokio::future_into_py(py, async {
+ tokio::time::sleep(std::time::Duration::from_secs(1)).await;
+ Ok(Python::with_gil(|py| py.None()))
+ })
+}
+
+#[pymodule]
+fn my_async_module(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ m.add_function(wrap_pyfunction!(rust_sleep, m)?)?;
+
+ Ok(())
+}
+$ maturin develop && python3
+🔗 Found pyo3 bindings
+🐍 Found CPython 3.8 at python3
+ Finished dev [unoptimized + debuginfo] target(s) in 0.04s
+Python 3.8.8 (default, Apr 13 2021, 19:58:26)
+[GCC 7.3.0] :: Anaconda, Inc. on linux
+Type "help", "copyright", "credits" or "license" for more information.
+>>> import asyncio
+>>> import uvloop
+>>>
+>>> import my_async_module
+>>>
+>>> uvloop.install()
+>>>
+>>> async def main():
+... await my_async_module.rust_sleep()
+...
+>>> asyncio.run(main())
+>>>
+
+uvloop
in Rust ApplicationsUsing uvloop
in Rust applications is a bit trickier, but it's still possible
+with relatively few modifications.
Unfortunately, we can't make use of the #[pyo3_asyncio::<runtime>::main]
attribute with non-standard event loops. This is because the #[pyo3_asyncio::<runtime>::main]
proc macro has to interact with the Python
+event loop before we can install the uvloop
policy.
[dependencies]
+async-std = "1.9"
+pyo3 = "0.14"
+pyo3-asyncio = { version = "0.14", features = ["async-std-runtime"] }
+
+//! main.rs
+
+use pyo3::{prelude::*, types::PyType};
+
+fn main() -> PyResult<()> {
+ pyo3::prepare_freethreaded_python();
+
+ Python::with_gil(|py| {
+ let uvloop = py.import("uvloop")?;
+ uvloop.call_method0("install")?;
+
+ // store a reference for the assertion
+ let uvloop = PyObject::from(uvloop);
+
+ pyo3_asyncio::async_std::run(py, async move {
+ // verify that we are on a uvloop.Loop
+ Python::with_gil(|py| -> PyResult<()> {
+ assert!(pyo3_asyncio::async_std::get_current_loop(py)?.is_instance(
+ uvloop
+ .as_ref(py)
+ .getattr("Loop")?
+ )?);
+ Ok(())
+ })?;
+
+ async_std::task::sleep(std::time::Duration::from_secs(1)).await;
+
+ Ok(())
+ })
+ })
+}
+testing
moduleIt is desirable if both the Python and Rust parts of the application end up +logging using the same configuration into the same place.
+This section of the guide briefly discusses how to connect the two languages'
+logging ecosystems together. The recommended way for Python extension modules is
+to configure Rust's logger to send log messages to Python using the pyo3-log
+crate. For users who want to do the opposite and send Python log messages to
+Rust, see the note at the end of this guide.
pyo3-log
to send Rust log messages to PythonThe pyo3-log crate allows sending the messages from the Rust side to Python's +logging system. This is mostly suitable for writing native extensions for +Python programs.
+Use pyo3_log::init
to install the logger in its default configuration.
+It's also possible to tweak its configuration (mostly to tune its performance).
use log::info;
+use pyo3::prelude::*;
+
+#[pyfunction]
+fn log_something() {
+ // This will use the logger installed in `my_module` to send the `info`
+ // message to the Python logging facilities.
+ info!("Something!");
+}
+
+#[pymodule]
+fn my_module(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ // A good place to install the Rust -> Python logger.
+ pyo3_log::init();
+
+ m.add_function(wrap_pyfunction!(log_something, m)?)?;
+ Ok(())
+}
+Then it is up to the Python side to actually output the messages somewhere.
+import logging
+import my_module
+
+FORMAT = '%(levelname)s %(name)s %(asctime)-15s %(filename)s:%(lineno)d %(message)s'
+logging.basicConfig(format=FORMAT)
+logging.getLogger().setLevel(logging.INFO)
+my_module.log_something()
+
+It is important to initialize the Python loggers first, before calling any Rust +functions that may log. This limitation can be worked around if it is not +possible to satisfy, read the documentation about caching.
+To have python logs be handled by Rust, one need only register a rust function to handle logs emitted from the core python logging module.
+This has been implemented within the pyo3-pylogger crate.
+use log::{info, warn};
+use pyo3::prelude::*;
+
+fn main() -> PyResult<()> {
+ // register the host handler with python logger, providing a logger target
+ // set the name here to something appropriate for your application
+ pyo3_pylogger::register("example_application_py_logger");
+
+ // initialize up a logger
+ env_logger::Builder::from_env(env_logger::Env::default().default_filter_or("trace")).init();
+
+ // Log some messages from Rust.
+ info!("Just some normal information!");
+ warn!("Something spooky happened!");
+
+ // Log some messages from Python
+ Python::with_gil(|py| {
+ py.run(
+ "
+import logging
+logging.error('Something bad happened')
+",
+ None,
+ None,
+ )
+ })
+}
+
+ Use the create_exception!
macro:
use pyo3::create_exception;
+
+create_exception!(module, MyError, pyo3::exceptions::PyException);
+module
is the name of the containing module.MyError
is the name of the new exception type.For example:
+use pyo3::prelude::*;
+use pyo3::create_exception;
+use pyo3::types::IntoPyDict;
+use pyo3::exceptions::PyException;
+
+create_exception!(mymodule, CustomError, PyException);
+
+Python::with_gil(|py| {
+ let ctx = [("CustomError", py.get_type_bound::<CustomError>())].into_py_dict_bound(py);
+ pyo3::py_run!(
+ py,
+ *ctx,
+ "assert str(CustomError) == \"<class 'mymodule.CustomError'>\""
+ );
+ pyo3::py_run!(py, *ctx, "assert CustomError('oops').args == ('oops',)");
+});
+When using PyO3 to create an extension module, you can add the new exception to +the module like this, so that it is importable from Python:
+use pyo3::prelude::*;
+use pyo3::exceptions::PyException;
+
+pyo3::create_exception!(mymodule, CustomError, PyException);
+
+#[pymodule]
+fn mymodule(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
+ // ... other elements added to module ...
+ m.add("CustomError", py.get_type_bound::<CustomError>())?;
+
+ Ok(())
+}
+As described in the function error handling chapter, to raise an exception from a #[pyfunction]
or #[pymethods]
, return an Err(PyErr)
. PyO3 will automatically raise this exception for you when returning the result to Python.
You can also manually write and fetch errors in the Python interpreter's global state:
+use pyo3::{Python, PyErr};
+use pyo3::exceptions::PyTypeError;
+
+Python::with_gil(|py| {
+ PyTypeError::new_err("Error").restore(py);
+ assert!(PyErr::occurred(py));
+ drop(PyErr::fetch(py));
+});
+Python has an isinstance
method to check an object's type.
+In PyO3 every object has the PyAny::is_instance
and PyAny::is_instance_of
methods which do the same thing.
use pyo3::prelude::*;
+use pyo3::types::{PyBool, PyList};
+
+Python::with_gil(|py| {
+ assert!(PyBool::new_bound(py, true).is_instance_of::<PyBool>());
+ let list = PyList::new_bound(py, &[1, 2, 3, 4]);
+ assert!(!list.is_instance_of::<PyBool>());
+ assert!(list.is_instance_of::<PyList>());
+});
+To check the type of an exception, you can similarly do:
+use pyo3::exceptions::PyTypeError;
+use pyo3::prelude::*;
+Python::with_gil(|py| {
+let err = PyTypeError::new_err(());
+err.is_instance_of::<PyTypeError>(py);
+});
+It is possible to use an exception defined in Python code as a native Rust type.
+The import_exception!
macro allows importing a specific exception class and defines a Rust type
+for that exception.
#![allow(dead_code)]
+use pyo3::prelude::*;
+
+mod io {
+ pyo3::import_exception!(io, UnsupportedOperation);
+}
+
+fn tell(file: &Bound<'_, PyAny>) -> PyResult<u64> {
+ match file.call_method0("tell") {
+ Err(_) => Err(io::UnsupportedOperation::new_err("not supported: tell")),
+ Ok(x) => x.extract::<u64>(),
+ }
+}
+pyo3::exceptions
+defines exceptions for several standard library modules.
Sorry that you're having trouble using PyO3. If you can't find the answer to your problem in the list below, you can also reach out for help on GitHub Discussions and on Discord.
+lazy_static
and once_cell::sync
both use locks to ensure that initialization is performed only by a single thread. Because the Python GIL is an additional lock this can lead to deadlocks in the following way:
lazy_static
value.Python::import
.lazy_static
value.lazy_static
's initialization to lock to release.PyO3 provides a struct GILOnceCell
which works equivalently to OnceCell
but relies solely on the Python GIL for thread safety. This means it can be used in place of lazy_static
or once_cell
where you are experiencing the deadlock described above. See the documentation for GILOnceCell
for an example how to use it.
cargo test
; or I can't build in a Cargo workspace: I'm having linker issues like "Symbol not found" or "Undefined reference to _PyExc_SystemError"!Currently, #340 causes cargo test
to fail with linking errors when the extension-module
feature is activated. Linking errors can also happen when building in a cargo workspace where a different crate also uses PyO3 (see #2521). For now, there are three ways we can work around these issues.
extension-module
feature optional. Build with maturin develop --features "extension-module"
[dependencies.pyo3]
+version = "0.22.1"
+
+[features]
+extension-module = ["pyo3/extension-module"]
+
+extension-module
feature optional and default. Run tests with cargo test --no-default-features
:[dependencies.pyo3]
+version = "0.22.1"
+
+[features]
+extension-module = ["pyo3/extension-module"]
+default = ["extension-module"]
+
+pyproject.toml
file to control maturin settings, add the following section:[tool.maturin]
+features = ["pyo3/extension-module"]
+# Or for maturin 0.12:
+# cargo-extra-args = ["--features", "pyo3/extension-module"]
+
+cargo test
: my crate cannot be found for tests in tests/
directory!The Rust book suggests to put integration tests inside a tests/
directory.
For a PyO3 extension-module
project where the crate-type
is set to "cdylib"
in your Cargo.toml
,
+the compiler won't be able to find your crate and will display errors such as E0432
or E0463
:
error[E0432]: unresolved import `my_crate`
+ --> tests/test_my_crate.rs:1:5
+ |
+1 | use my_crate;
+ | ^^^^^^^^^^^^ no external crate `my_crate`
+
+The best solution is to make your crate types include both rlib
and cdylib
:
# Cargo.toml
+[lib]
+crate-type = ["cdylib", "rlib"]
+
+This is because Ctrl-C raises a SIGINT signal, which is handled by the calling Python process by simply setting a flag to action upon later. This flag isn't checked while Rust code called from Python is executing, only once control returns to the Python interpreter.
+You can give the Python interpreter a chance to process the signal properly by calling Python::check_signals
. It's good practice to call this function regularly if you have a long-running Rust function so that your users can cancel it.
#[pyo3(get)]
clones my field!You may have a nested struct similar to this:
+use pyo3::prelude::*;
+#[pyclass]
+#[derive(Clone)]
+struct Inner {/* fields omitted */}
+
+#[pyclass]
+struct Outer {
+ #[pyo3(get)]
+ inner: Inner,
+}
+
+#[pymethods]
+impl Outer {
+ #[new]
+ fn __new__() -> Self {
+ Self { inner: Inner {} }
+ }
+}
+When Python code accesses Outer
's field, PyO3 will return a new object on every access (note that their addresses are different):
outer = Outer()
+
+a = outer.inner
+b = outer.inner
+
+assert a is b, f"a: {a}\nb: {b}"
+
+AssertionError: a: <builtins.Inner object at 0x00000238FFB9C7B0>
+b: <builtins.Inner object at 0x00000238FFB9C830>
+
+This can be especially confusing if the field is mutable, as getting the field and then mutating it won't persist - you'll just get a fresh clone of the original on the next access. Unfortunately Python and Rust don't agree about ownership - if PyO3 gave out references to (possibly) temporary Rust objects to Python code, Python code could then keep that reference alive indefinitely. Therefore returning Rust objects requires cloning.
+If you don't want that cloning to happen, a workaround is to allocate the field on the Python heap and store a reference to that, by using Py<...>
:
use pyo3::prelude::*;
+#[pyclass]
+struct Inner {/* fields omitted */}
+
+#[pyclass]
+struct Outer {
+ inner: Py<Inner>,
+}
+
+#[pymethods]
+impl Outer {
+ #[new]
+ fn __new__(py: Python<'_>) -> PyResult<Self> {
+ Ok(Self {
+ inner: Py::new(py, Inner {})?,
+ })
+ }
+
+ #[getter]
+ fn inner(&self, py: Python<'_>) -> Py<Inner> {
+ self.inner.clone_ref(py)
+ }
+}
+This time a
and b
are the same object:
outer = Outer()
+
+a = outer.inner
+b = outer.inner
+
+assert a is b, f"a: {a}\nb: {b}"
+print(f"a: {a}\nb: {b}")
+
+a: <builtins.Inner object at 0x0000020044FCC670>
+b: <builtins.Inner object at 0x0000020044FCC670>
+
+The downside to this approach is that any Rust code working on the Outer
struct now has to acquire the GIL to do anything with its field.
pyo3
crate re-exported from dependency but the proc-macros fail!All PyO3 proc-macros (#[pyclass]
, #[pyfunction]
, #[derive(FromPyObject)]
+and so on) expect the pyo3
crate to be available under that name in your crate
+root, which is the normal situation when pyo3
is a direct dependency of your
+crate.
However, when the dependency is renamed, or your crate only indirectly depends
+on pyo3
, you need to let the macro code know where to find the crate. This is
+done with the crate
attribute:
use pyo3::prelude::*;
+pub extern crate pyo3;
+mod reexported { pub use ::pyo3; }
+#[pyclass]
+#[pyo3(crate = "reexported::pyo3")]
+struct MyClass;
+STATUS_DLL_NOT_FOUND
or STATUS_ENTRYPOINT_NOT_FOUND
!This happens on Windows when linking to the python DLL fails or the wrong one is linked. The Python DLL on Windows will usually be called something like:
+python3X.dll
for Python 3.X, e.g. python310.dll
for Python 3.10python3.dll
when using PyO3's abi3
featureThe DLL needs to be locatable using the Windows DLL search order. Some ways to achieve this are:
+PATH
environment variable, for example C:\Users\<You>\AppData\Local\Programs\Python\Python310
If the wrong DLL is linked it is possible that this happened because another program added itself and its own Python DLLs to PATH
. Rearrange your PATH
variables to give the correct DLL priority.
++Note: Changes to
+PATH
(or any other environment variable) are not visible to existing shells. Restart it for changes to take effect.
For advanced troubleshooting, Dependency Walker can be used to diagnose linking errors.
+ +PyO3 provides a number of Cargo features to customize functionality. This chapter of the guide provides detail on each of them.
+By default, only the macros
feature is enabled.
extension-module
This feature is required when building a Python extension module using PyO3.
+It tells PyO3's build script to skip linking against libpython.so
on Unix platforms, where this must not be done.
See the building and distribution section for further detail.
+abi3
This feature is used when building Python extension modules to create wheels which are compatible with multiple Python versions.
+It restricts PyO3's API to a subset of the full Python API which is guaranteed by PEP 384 to be forwards-compatible with future Python versions.
+See the building and distribution section for further detail.
+abi3-pyXY
features(abi3-py37
, abi3-py38
, abi3-py39
, abi3-py310
and abi3-py311
)
These features are extensions of the abi3
feature to specify the exact minimum Python version which the multiple-version-wheel will support.
See the building and distribution section for further detail.
+generate-import-lib
This experimental feature is used to generate import libraries for Python DLL +for MinGW-w64 and MSVC (cross-)compile targets.
+Enabling it allows to (cross-)compile extension modules to any Windows targets +without having to install the Windows Python distribution files for the target.
+See the building and distribution +section for further detail.
+auto-initialize
This feature changes Python::with_gil
to automatically initialize a Python interpreter (by calling prepare_freethreaded_python
) if needed.
If you do not enable this feature, you should call pyo3::prepare_freethreaded_python()
before attempting to call any other Python APIs.
experimental-async
This feature adds support for async fn
in #[pyfunction]
and #[pymethods]
.
The feature has some unfinished refinements and performance improvements. To help finish this off, see issue #1632 and its associated draft PRs.
+experimental-inspect
This feature adds the pyo3::inspect
module, as well as IntoPy::type_output
and FromPyObject::type_input
APIs to produce Python type "annotations" for Rust types.
This is a first step towards adding first-class support for generating type annotations automatically in PyO3, however work is needed to finish this off. All feedback and offers of help welcome on issue #2454.
+gil-refs
This feature is a backwards-compatibility feature to allow continued use of the "GIL Refs" APIs deprecated in PyO3 0.21. These APIs have performance drawbacks and soundness edge cases which the newer Bound<T>
smart pointer and accompanying APIs resolve.
This feature and the APIs it enables is expected to be removed in a future PyO3 version.
+py-clone
This feature was introduced to ease migration. It was found that delayed reference counts cannot be made sound and hence Clon
ing an instance of Py<T>
must panic without the GIL being held. To avoid migrations introducing new panics without warning, the Clone
implementation itself is now gated behind this feature.
pyo3_disable_reference_pool
This is a performance-oriented conditional compilation flag, e.g. set via $RUSTFLAGS
, which disabled the global reference pool and the assocaited overhead for the crossing the Python-Rust boundary. However, if enabled, Drop
ping an instance of Py<T>
without the GIL being held will abort the process.
macros
This feature enables a dependency on the pyo3-macros
crate, which provides the procedural macros portion of PyO3's API:
#[pymodule]
#[pyfunction]
#[pyclass]
#[pymethods]
#[derive(FromPyObject)]
It also provides the py_run!
macro.
These macros require a number of dependencies which may not be needed by users who just need PyO3 for Python FFI. Disabling this feature enables faster builds for those users, as these dependencies will not be built if this feature is disabled.
+++This feature is enabled by default. To disable it, set
+default-features = false
for thepyo3
entry in your Cargo.toml.
multiple-pymethods
This feature enables each #[pyclass]
to have more than one #[pymethods]
block.
Most users should only need a single #[pymethods]
per #[pyclass]
. In addition, not all platforms (e.g. Wasm) are supported by inventory
, which is used in the implementation of the feature. For this reason this feature is not enabled by default, meaning fewer dependencies and faster compilation for the majority of users.
See the #[pyclass]
implementation details for more information.
nightly
The nightly
feature needs the nightly Rust compiler. This allows PyO3 to use the auto_traits
and negative_impls
features to fix the Python::allow_threads
function.
resolve-config
The resolve-config
feature of the pyo3-build-config
crate controls whether that crate's
+build script automatically resolves a Python interpreter / build configuration. This feature is primarily useful when building PyO3
+itself. By default this feature is not enabled, meaning you can freely use pyo3-build-config
as a standalone library to read or write PyO3 build configuration files or resolve metadata about a Python interpreter.
These features enable conversions between Python types and types from other Rust crates, enabling easy access to the rest of the Rust ecosystem.
+anyhow
Adds a dependency on anyhow. Enables a conversion from anyhow’s Error
type to PyErr
, for easy error handling.
chrono
Adds a dependency on chrono. Enables a conversion from chrono's types to python:
+PyDelta
PyDelta
PyTzInfo
PyDate
PyTime
PyDateTime
chrono-tz
Adds a dependency on chrono-tz.
+Enables conversion from and to Tz
.
+It requires at least Python 3.9.
either
Adds a dependency on either. Enables a conversions into either’s Either
type.
eyre
Adds a dependency on eyre. Enables a conversion from eyre’s Report
type to PyErr
, for easy error handling.
hashbrown
Adds a dependency on hashbrown and enables conversions into its HashMap
and HashSet
types.
indexmap
Adds a dependency on indexmap and enables conversions into its IndexMap
type.
num-bigint
Adds a dependency on num-bigint and enables conversions into its BigInt
and BigUint
types.
num-complex
Adds a dependency on num-complex and enables conversions into its Complex
type.
num-rational
Adds a dependency on num-rational and enables conversions into its Ratio
type.
rust_decimal
Adds a dependency on rust_decimal and enables conversions into its Decimal
type.
serde
Enables (de)serialization of Py<T>
objects via serde.
+This allows to use #[derive(Serialize, Deserialize)
on structs that hold references to #[pyclass]
instances
#[cfg(feature = "serde")]
+#[allow(dead_code)]
+mod serde_only {
+use pyo3::prelude::*;
+use serde::{Deserialize, Serialize};
+
+#[pyclass]
+#[derive(Serialize, Deserialize)]
+struct Permission {
+ name: String,
+}
+
+#[pyclass]
+#[derive(Serialize, Deserialize)]
+struct User {
+ username: String,
+ permissions: Vec<Py<Permission>>,
+}
+}
+smallvec
Adds a dependency on smallvec and enables conversions into its SmallVec
type.
The #[pyfunction]
attribute is used to define a Python function from a Rust function. Once defined, the function needs to be added to a module using the wrap_pyfunction!
macro.
The following example defines a function called double
in a Python module called my_extension
:
use pyo3::prelude::*;
+
+#[pyfunction]
+fn double(x: usize) -> usize {
+ x * 2
+}
+
+#[pymodule]
+fn my_extension(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ m.add_function(wrap_pyfunction!(double, m)?)?;
+ Ok(())
+}
+This chapter of the guide explains full usage of the #[pyfunction]
attribute. In this first section, the following topics are covered:
There are also additional sections on the following topics:
+ +The #[pyo3]
attribute can be used to modify properties of the generated Python function. It can take any combination of the following options:
Overrides the name exposed to Python.
+In the following example, the Rust function no_args_py
will be added to the Python module
+module_with_functions
as the Python function no_args
:
use pyo3::prelude::*;
+
+#[pyfunction]
+#[pyo3(name = "no_args")]
+fn no_args_py() -> usize {
+ 42
+}
+
+#[pymodule]
+fn module_with_functions(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ m.add_function(wrap_pyfunction!(no_args_py, m)?)?;
+ Ok(())
+}
+
+Python::with_gil(|py| {
+ let m = pyo3::wrap_pymodule!(module_with_functions)(py);
+ assert!(m.getattr(py, "no_args").is_ok());
+ assert!(m.getattr(py, "no_args_py").is_err());
+});
+Defines the function signature in Python. See Function Signatures.
+ #[pyo3(text_signature = "...")]
Overrides the PyO3-generated function signature visible in Python tooling (such as via inspect.signature
). See the corresponding topic in the Function Signatures subchapter.
Set this option to make PyO3 pass the containing module as the first argument to the function. It is then possible to use the module in the function body. The first argument must be of type &Bound<'_, PyModule>
, Bound<'_, PyModule>
, or Py<PyModule>
.
The following example creates a function pyfunction_with_module
which returns the containing module's name (i.e. module_with_fn
):
use pyo3::prelude::*;
+use pyo3::types::PyString;
+
+#[pyfunction]
+#[pyo3(pass_module)]
+fn pyfunction_with_module<'py>(
+ module: &Bound<'py, PyModule>,
+) -> PyResult<Bound<'py, PyString>> {
+ module.name()
+}
+
+#[pymodule]
+fn module_with_fn(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ m.add_function(wrap_pyfunction!(pyfunction_with_module, m)?)
+}
+The #[pyo3]
attribute can be used on individual arguments to modify properties of them in the generated function. It can take any combination of the following options:
Set this on an option to specify a custom function to convert the function argument from Python to the desired Rust type, instead of using the default FromPyObject
extraction. The function signature must be fn(&Bound<'_, PyAny>) -> PyResult<T>
where T
is the Rust type of the argument.
The following example uses from_py_with
to convert the input Python object to its length:
use pyo3::prelude::*;
+
+fn get_length(obj: &Bound<'_, PyAny>) -> PyResult<usize> {
+ let length = obj.len()?;
+ Ok(length)
+}
+
+#[pyfunction]
+fn object_length(#[pyo3(from_py_with = "get_length")] argument: usize) -> usize {
+ argument
+}
+
+Python::with_gil(|py| {
+ let f = pyo3::wrap_pyfunction_bound!(object_length)(py).unwrap();
+ assert_eq!(f.call1((vec![1, 2, 3],)).unwrap().extract::<usize>().unwrap(), 3);
+});
+You can pass Python def
'd functions and built-in functions to Rust functions PyFunction
+corresponds to regular Python functions while PyCFunction
describes built-ins such as
+repr()
.
You can also use Bound<'_, PyAny>::is_callable
to check if you have a callable object. is_callable
+will return true
for functions (including lambdas), methods and objects with a __call__
method.
+You can call the object with Bound<'_, PyAny>::call
with the args as first parameter and the kwargs
+(or None
) as second parameter. There are also Bound<'_, PyAny>::call0
with no args and
+Bound<'_, PyAny>::call1
with only positional args.
The ways to convert a Rust function into a Python object vary depending on the function:
+fn foo()
: add #[pyfunction]
and then use wrap_pyfunction!
to get the corresponding PyCFunction
.foo: fn()
either:
+#[pyclass]
struct which stores the function as a field and implement __call__
to call the stored function.PyCFunction::new_closure
to create an object directly from the function.In order to make Rust functions callable from Python, PyO3 generates an extern "C"
+function whose exact signature depends on the Rust signature. (PyO3 chooses the optimal
+Python argument passing convention.) It then embeds the call to the Rust function inside this
+FFI-wrapper function. This wrapper handles extraction of the regular arguments and the keyword
+arguments from the input PyObject
s.
The wrap_pyfunction
macro can be used to directly get a Bound<PyCFunction>
given a
+#[pyfunction]
and a Bound<PyModule>
: wrap_pyfunction!(rust_fun, module)
.
#[pyfn]
shorthandThere is a shorthand to #[pyfunction]
and wrap_pymodule!
: the function can be placed inside the module definition and
+annotated with #[pyfn]
. To simplify PyO3, it is expected that #[pyfn]
may be removed in a future release (See #694).
An example of #[pyfn]
is below:
use pyo3::prelude::*;
+
+#[pymodule]
+fn my_extension(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ #[pyfn(m)]
+ fn double(x: usize) -> usize {
+ x * 2
+ }
+
+ Ok(())
+}
+#[pyfn(m)]
is just syntactic sugar for #[pyfunction]
, and takes all the same options
+documented in the rest of this chapter. The code above is expanded to the following:
use pyo3::prelude::*;
+
+#[pymodule]
+fn my_extension(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ #[pyfunction]
+ fn double(x: usize) -> usize {
+ x * 2
+ }
+
+ m.add_function(wrap_pyfunction!(double, m)?)?;
+ Ok(())
+}
+
+ This chapter contains a little background of error handling in Rust and how PyO3 integrates this with Python exceptions.
+This covers enough detail to create a #[pyfunction]
which raises Python exceptions from errors originating in Rust.
There is a later section of the guide on Python exceptions which covers exception types in more detail.
+Rust code uses the generic Result<T, E>
enum to propagate errors. The error type E
is chosen by the code author to describe the possible errors which can happen.
PyO3 has the PyErr
type which represents a Python exception. If a PyO3 API could result in a Python exception being raised, the return type of that API
will be PyResult<T>
, which is an alias for the type Result<T, PyErr>
.
In summary:
+Err
variant of the PyResult
.?
operator, with PyErr
as the error type.PyResult
crosses from Rust back to Python via PyO3, if the result is an Err
variant the contained exception will be raised.(There are many great tutorials on Rust error handling and the ?
operator, so this guide will not go into detail on Rust-specific topics.)
As indicated in the previous section, when a PyResult
containing an Err
crosses from Rust to Python, PyO3 will raise the exception contained within.
Accordingly, to raise an exception from a #[pyfunction]
, change the return type T
to PyResult<T>
. When the function returns an Err
it will raise a Python exception. (Other Result<T, E>
types can be used as long as the error E
has a From
conversion for PyErr
, see custom Rust error types below.)
This also works for functions in #[pymethods]
.
For example, the following check_positive
function raises a ValueError
when the input is negative:
use pyo3::exceptions::PyValueError;
+use pyo3::prelude::*;
+
+#[pyfunction]
+fn check_positive(x: i32) -> PyResult<()> {
+ if x < 0 {
+ Err(PyValueError::new_err("x is negative"))
+ } else {
+ Ok(())
+ }
+}
+
+fn main(){
+ Python::with_gil(|py|{
+ let fun = pyo3::wrap_pyfunction_bound!(check_positive, py).unwrap();
+ fun.call1((-1,)).unwrap_err();
+ fun.call1((1,)).unwrap();
+ });
+}
+All built-in Python exception types are defined in the pyo3::exceptions
module. They have a new_err
constructor to directly build a PyErr
, as seen in the example above.
PyO3 will automatically convert a Result<T, E>
returned by a #[pyfunction]
into a PyResult<T>
as long as there is an implementation of std::from::From<E> for PyErr
. Many error types in the Rust standard library have a From
conversion defined in this way.
If the type E
you are handling is defined in a third-party crate, see the section on foreign rust error types below for ways to work with this error.
The following example makes use of the implementation of From<ParseIntError> for PyErr
to raise exceptions encountered when parsing strings as integers:
use pyo3::prelude::*;
+use std::num::ParseIntError;
+
+#[pyfunction]
+fn parse_int(x: &str) -> Result<usize, ParseIntError> {
+ x.parse()
+}
+
+fn main() {
+ Python::with_gil(|py| {
+ let fun = pyo3::wrap_pyfunction_bound!(parse_int, py).unwrap();
+ let value: usize = fun.call1(("5",)).unwrap().extract().unwrap();
+ assert_eq!(value, 5);
+ });
+}
+When passed a string which doesn't contain a floating-point number, the exception raised will look like the below:
+>>> parse_int("bar")
+Traceback (most recent call last):
+ File "<stdin>", line 1, in <module>
+ValueError: invalid digit found in string
+
+As a more complete example, the following snippet defines a Rust error named CustomIOError
. It then defines a From<CustomIOError> for PyErr
, which returns a PyErr
representing Python's OSError
.
+Therefore, it can use this error in the result of a #[pyfunction]
directly, relying on the conversion if it has to be propagated into a Python exception.
use pyo3::exceptions::PyOSError;
+use pyo3::prelude::*;
+use std::fmt;
+
+#[derive(Debug)]
+struct CustomIOError;
+
+impl std::error::Error for CustomIOError {}
+
+impl fmt::Display for CustomIOError {
+ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ write!(f, "Oh no!")
+ }
+}
+
+impl std::convert::From<CustomIOError> for PyErr {
+ fn from(err: CustomIOError) -> PyErr {
+ PyOSError::new_err(err.to_string())
+ }
+}
+
+pub struct Connection {/* ... */}
+
+fn bind(addr: String) -> Result<Connection, CustomIOError> {
+ if &addr == "0.0.0.0" {
+ Err(CustomIOError)
+ } else {
+ Ok(Connection{ /* ... */})
+ }
+}
+
+#[pyfunction]
+fn connect(s: String) -> Result<(), CustomIOError> {
+ bind(s)?;
+ // etc.
+ Ok(())
+}
+
+fn main() {
+ Python::with_gil(|py| {
+ let fun = pyo3::wrap_pyfunction_bound!(connect, py).unwrap();
+ let err = fun.call1(("0.0.0.0",)).unwrap_err();
+ assert!(err.is_instance_of::<PyOSError>(py));
+ });
+}
+If lazy construction of the Python exception instance is desired, the
+PyErrArguments
+trait can be implemented instead of From
. In that case, actual exception argument creation is delayed
+until the PyErr
is needed.
A final note is that any errors E
which have a From
conversion can be used with the ?
+("try") operator with them. An alternative implementation of the above parse_int
which instead returns PyResult
is below:
use pyo3::prelude::*;
+
+fn parse_int(s: String) -> PyResult<usize> {
+ let x = s.parse()?;
+ Ok(x)
+}
+
+use pyo3::exceptions::PyValueError;
+
+fn main() {
+ Python::with_gil(|py| {
+ assert_eq!(parse_int(String::from("1")).unwrap(), 1);
+ assert_eq!(parse_int(String::from("1337")).unwrap(), 1337);
+
+ assert!(parse_int(String::from("-1"))
+ .unwrap_err()
+ .is_instance_of::<PyValueError>(py));
+ assert!(parse_int(String::from("foo"))
+ .unwrap_err()
+ .is_instance_of::<PyValueError>(py));
+ assert!(parse_int(String::from("13.37"))
+ .unwrap_err()
+ .is_instance_of::<PyValueError>(py));
+ })
+}
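This mirrors Python's own int() built-in, which raises ValueError for the same malformed inputs (note that unlike the Rust version, which parses to usize, Python's int does accept "-1"):

```python
# Python's int() raises ValueError for inputs it cannot parse, which is
# why the Rust example surfaces parse failures as PyValueError.
assert int("1337") == 1337

for text in ["foo", "13.37"]:
    try:
        int(text)
    except ValueError as exc:
        print(f"{text!r} raised ValueError: {exc}")
```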
+The Rust compiler will not permit implementation of traits for types outside of the crate where the type is defined. (This is known as the "orphan rule".)
+Given a type OtherError
which is defined in third-party code, there are two main strategies available to integrate it with PyO3:
Create a newtype wrapper, e.g. MyOtherError. Then implement From<MyOtherError> for PyErr (or PyErrArguments), as well as From<OtherError> for MyOtherError.
Have your code use map_err to convert OtherError into whatever is needed. This requires boilerplate at every usage, but gives unlimited flexibility.
To detail the newtype strategy a little further, the key trick is to return Result<T, MyOtherError>
from the #[pyfunction]
. This means that PyO3 will make use of From<MyOtherError> for PyErr
to create Python exceptions while the #[pyfunction]
implementation can use ?
to convert OtherError
to MyOtherError
automatically.
The following example demonstrates this for some imaginary third-party crate some_crate
with a function get_x
returning Result<i32, OtherError>
:
mod some_crate {
+ pub struct OtherError(());
+ impl OtherError {
+ pub fn message(&self) -> &'static str { "some error occurred" }
+ }
+ pub fn get_x() -> Result<i32, OtherError> { Ok(5) }
+}
+
+use pyo3::prelude::*;
+use pyo3::exceptions::PyValueError;
+use some_crate::{OtherError, get_x};
+
+struct MyOtherError(OtherError);
+
+impl From<MyOtherError> for PyErr {
+ fn from(error: MyOtherError) -> Self {
+ PyValueError::new_err(error.0.message())
+ }
+}
+
+impl From<OtherError> for MyOtherError {
+ fn from(other: OtherError) -> Self {
+ Self(other)
+ }
+}
+
+#[pyfunction]
+fn wrapped_get_x() -> Result<i32, MyOtherError> {
+ // get_x is a function returning Result<i32, OtherError>
+ let x: i32 = get_x()?;
+ Ok(x)
+}
+
+fn main() {
+ Python::with_gil(|py| {
+ let fun = pyo3::wrap_pyfunction_bound!(wrapped_get_x, py).unwrap();
+ let value: usize = fun.call0().unwrap().extract().unwrap();
+ assert_eq!(value, 5);
+ });
+}
+
+ The #[pyfunction]
attribute also accepts parameters to control how the generated Python function accepts arguments. Just like in Python, arguments can be positional-only, keyword-only, or accept either. *args
lists and **kwargs
dicts can also be accepted. These parameters also work for #[pymethods]
which will be introduced in the Python Classes section of the guide.
Like Python, by default PyO3 accepts all arguments as either positional or keyword arguments. Most arguments are required by default, except for trailing Option<_>
arguments, which are implicitly given a default of None
. This behaviour can be configured by the #[pyo3(signature = (...))]
option which allows writing a signature in Python syntax.
This section of the guide goes into detail about use of the #[pyo3(signature = (...))]
option and its related option #[pyo3(text_signature = "...")]
#[pyo3(signature = (...))]
For example, below is a function that accepts arbitrary keyword arguments (**kwargs
in Python syntax) and returns the number that was passed:
use pyo3::prelude::*;
+use pyo3::types::PyDict;
+
+#[pyfunction]
+#[pyo3(signature = (**kwds))]
+fn num_kwds(kwds: Option<&Bound<'_, PyDict>>) -> usize {
+ kwds.map_or(0, |dict| dict.len())
+}
+
+#[pymodule]
+fn module_with_functions(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ m.add_function(wrap_pyfunction!(num_kwds, m)?).unwrap();
+ Ok(())
+}
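For comparison, a plain-Python function with the same behaviour as num_kwds above would look like this (an illustrative sketch, not what PyO3 generates):

```python
def num_kwds(**kwds):
    # Arbitrary keyword arguments arrive as a dict; return how many were passed.
    return len(kwds)

print(num_kwds())          # 0
print(num_kwds(a=1, b=2))  # 2
```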
Just like in Python, the following constructs can be part of the signature:
+/
: positional-only arguments separator, each parameter defined before /
is a positional-only parameter.*
: var arguments separator, each parameter defined after *
is a keyword-only parameter.*args
: "args" is var args. Type of the args
parameter has to be &Bound<'_, PyTuple>
.**kwargs
: "kwargs" receives keyword arguments. The type of the kwargs
parameter has to be Option<&Bound<'_, PyDict>>
.arg=Value
: arguments with default value.
+If the arg
argument is defined after var arguments, it is treated as a keyword-only argument.
+Note that Value
has to be valid rust code, PyO3 just inserts it into the generated
+code unmodified.Example:
+use pyo3::prelude::*;
+use pyo3::types::{PyDict, PyTuple};
+
+#[pyclass]
+struct MyClass {
+ num: i32,
+}
+#[pymethods]
+impl MyClass {
+ #[new]
+ #[pyo3(signature = (num=-1))]
+ fn new(num: i32) -> Self {
+ MyClass { num }
+ }
+
+ #[pyo3(signature = (num=10, *py_args, name="Hello", **py_kwargs))]
+ fn method(
+ &mut self,
+ num: i32,
+ py_args: &Bound<'_, PyTuple>,
+ name: &str,
+ py_kwargs: Option<&Bound<'_, PyDict>>,
+ ) -> String {
+ let num_before = self.num;
+ self.num = num;
+ format!(
+ "num={} (was previously={}), py_args={:?}, name={}, py_kwargs={:?} ",
+ num, num_before, py_args, name, py_kwargs,
+ )
+ }
+
+ fn make_change(&mut self, num: i32) -> PyResult<String> {
+ self.num = num;
+ Ok(format!("num={}", self.num))
+ }
+}
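The #[pyo3(signature = (num=10, *py_args, name="Hello", **py_kwargs))] attribute on method corresponds to this plain-Python signature (a sketch for illustration; the body just returns the bound values):

```python
def method(num=10, *py_args, name="Hello", **py_kwargs):
    # `name` appears after *py_args, so it is keyword-only,
    # exactly as in the PyO3 signature.
    return num, py_args, name, py_kwargs

print(method(44, False, "World", 666, x=44, y=55))
# (44, (False, 'World', 666), 'Hello', {'x': 44, 'y': 55})
print(method(num=-1, name="World"))
# (-1, (), 'World', {})
```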
+Arguments of type Python
must not be part of the signature:
#![allow(dead_code)]
+use pyo3::prelude::*;
+#[pyfunction]
+#[pyo3(signature = (lambda))]
+pub fn simple_python_bound_function(py: Python<'_>, lambda: PyObject) -> PyResult<()> {
+ Ok(())
+}
N.B. the position of the / and * markers (if included) controls how positional and keyword arguments are handled. In Python:
import mymodule
+
+mc = mymodule.MyClass()
+print(mc.method(44, False, "World", 666, x=44, y=55))
+print(mc.method(num=-1, name="World"))
+print(mc.make_change(44))
+
+Produces output:
+num=44 (was previously=-1), py_args=(False, 'World', 666), name=Hello, py_kwargs=Some({'x': 44, 'y': 55})
+num=-1 (was previously=44), py_args=(), name=World, py_kwargs=None
+num=44
+
+++Note: to use keywords like
+struct
as a function argument, use "raw identifier" syntaxr#struct
in both the signature and the function definition:+#![allow(dead_code)] +use pyo3::prelude::*; +#[pyfunction(signature = (r#struct = "foo"))] +fn function_with_keyword(r#struct: &str) { + let _ = r#struct; + /* ... */ +}
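From the Python side the parameter is simply named struct; r# is only Rust's raw-identifier escape and never appears in Python. A plain-Python sketch of the resulting function:

```python
# `struct` is not a reserved word in Python, so it is an ordinary
# parameter name here; only Rust needs the r# escape.
def function_with_keyword(struct="foo"):
    return struct

print(function_with_keyword())              # foo
print(function_with_keyword(struct="bar"))  # bar
```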
⚠️ Warning: This behaviour is being phased out 🛠️
+The special casing of trailing optional arguments is deprecated. In a future pyo3
version, arguments of type Option<..>
will share the same behaviour as other arguments, they are required unless a default is set using #[pyo3(signature = (...))]
.
This is done to better align the Python and Rust definition of such functions and make it more intuitive to rewrite them from Python in Rust. Specifically def some_fn(a: int, b: Optional[int]): ...
 will not automatically default b to None, but requires an explicit default if desired, whereas in current pyo3 it is handled the other way around.
During the migration window a #[pyo3(signature = (...))]
will be required to silence the deprecation warning. After support for trailing optional arguments is fully removed, the signature attribute can be removed if all arguments should be required.
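The Python-side behaviour this aims to match can be seen with plain functions (hypothetical names): an Optional annotation alone does not create a default.

```python
from typing import Optional

def with_default(a: int, b: Optional[int] = None):
    # `b` may be omitted because of the explicit `= None` default.
    return a, b

def required(a: int, b: Optional[int]):
    # Annotated Optional, but still a required argument.
    return a, b

print(with_default(1))  # (1, None)
try:
    required(1)
except TypeError as exc:
    print("b is required:", exc)
```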
As a convenience, functions without a #[pyo3(signature = (...))]
option will treat trailing Option<T>
arguments as having a default of None
. In the example below, PyO3 will create increment
with a signature of increment(x, amount=None)
.
#![allow(deprecated)]
+use pyo3::prelude::*;
+
+/// Returns a copy of `x` increased by `amount`.
+///
+/// If `amount` is unspecified or `None`, equivalent to `x + 1`.
+#[pyfunction]
+fn increment(x: u64, amount: Option<u64>) -> u64 {
+ x + amount.unwrap_or(1)
+}
+
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| {
+ let fun = pyo3::wrap_pyfunction_bound!(increment, py)?;
+
+ let inspect = PyModule::import_bound(py, "inspect")?.getattr("signature")?;
+ let sig: String = inspect
+ .call1((fun,))?
+ .call_method0("__str__")?
+ .extract()?;
+
+        #[cfg(Py_3_8)] // on 3.7 the signature doesn't render `amount`, upstream bug?
+ assert_eq!(sig, "(x, amount=None)");
+
+ Ok(())
+ })
+}
+To make trailing Option<T>
arguments required, but still accept None
, add a #[pyo3(signature = (...))]
annotation. For the example above, this would be #[pyo3(signature = (x, amount))]
:
use pyo3::prelude::*;
+#[pyfunction]
+#[pyo3(signature = (x, amount))]
+fn increment(x: u64, amount: Option<u64>) -> u64 {
+ x + amount.unwrap_or(1)
+}
+
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| {
+ let fun = pyo3::wrap_pyfunction_bound!(increment, py)?;
+
+ let inspect = PyModule::import_bound(py, "inspect")?.getattr("signature")?;
+ let sig: String = inspect
+ .call1((fun,))?
+ .call_method0("__str__")?
+ .extract()?;
+
+        #[cfg(Py_3_8)] // on 3.7 the signature doesn't render `amount`, upstream bug?
+ assert_eq!(sig, "(x, amount)");
+
+ Ok(())
+ })
+}
+To help avoid confusion, PyO3 requires #[pyo3(signature = (...))]
when an Option<T>
argument is surrounded by arguments which aren't Option<T>
.
The function signature is exposed to Python via the __text_signature__
attribute. PyO3 automatically generates this for every #[pyfunction]
and all #[pymethods]
directly from the Rust function, taking into account any override done with the #[pyo3(signature = (...))]
option.
This automatic generation can only display the value of default arguments for strings, integers, boolean types, and None
. Any other default arguments will be displayed as ...
. (.pyi
type stub files commonly also use ...
for default arguments in the same way.)
In cases where the automatically-generated signature needs adjusting, it can be overridden using the #[pyo3(text_signature)]
 option.
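CPython's own C-implemented built-ins use the same mechanism: inspect.signature falls back to parsing __text_signature__ when no Python-level signature is available. Assuming a standard CPython interpreter:

```python
import inspect

# ord() is implemented in C; its signature is recovered from
# __text_signature__ rather than from Python source.
print(ord.__text_signature__)
print(inspect.signature(ord))
```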
The example below creates a function add
which accepts two positional-only arguments a
and b
, where b
has a default value of zero.
use pyo3::prelude::*;
+
+/// This function adds two unsigned 64-bit integers.
+#[pyfunction]
+#[pyo3(signature = (a, b=0, /))]
+fn add(a: u64, b: u64) -> u64 {
+ a + b
+}
+
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| {
+ let fun = pyo3::wrap_pyfunction_bound!(add, py)?;
+
+ let doc: String = fun.getattr("__doc__")?.extract()?;
+ assert_eq!(doc, "This function adds two unsigned 64-bit integers.");
+
+ let inspect = PyModule::import_bound(py, "inspect")?.getattr("signature")?;
+ let sig: String = inspect
+ .call1((fun,))?
+ .call_method0("__str__")?
+ .extract()?;
+
+ #[cfg(Py_3_8)] // on 3.7 the signature doesn't render b, upstream bug?
+ assert_eq!(sig, "(a, b=0, /)");
+
+ Ok(())
+ })
+}
+The following IPython output demonstrates how this generated signature will be seen from Python tooling:
+>>> pyo3_test.add.__text_signature__
+'(a, b=..., /)'
+>>> pyo3_test.add?
+Signature: pyo3_test.add(a, b=0, /)
+Docstring: This function adds two unsigned 64-bit integers.
+Type: builtin_function_or_method
+
+The #[pyo3(text_signature = "(<some signature>)")]
attribute can be used to override the default generated signature.
In the snippet below, the text signature attribute is used to include the default value of 0
for the argument b
, instead of the automatically-generated default value of ...
:
use pyo3::prelude::*;
+
+/// This function adds two unsigned 64-bit integers.
+#[pyfunction]
+#[pyo3(signature = (a, b=0, /), text_signature = "(a, b=0, /)")]
+fn add(a: u64, b: u64) -> u64 {
+ a + b
+}
+
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| {
+ let fun = pyo3::wrap_pyfunction_bound!(add, py)?;
+
+ let doc: String = fun.getattr("__doc__")?.extract()?;
+ assert_eq!(doc, "This function adds two unsigned 64-bit integers.");
+
+ let inspect = PyModule::import_bound(py, "inspect")?.getattr("signature")?;
+ let sig: String = inspect
+ .call1((fun,))?
+ .call_method0("__str__")?
+ .extract()?;
+ assert_eq!(sig, "(a, b=0, /)");
+
+ Ok(())
+ })
+}
+PyO3 will include the contents of the annotation unmodified as the __text_signature__
. The output below shows how IPython will now present this (note the default value of 0 for b):
>>> pyo3_test.add.__text_signature__
+'(a, b=0, /)'
+>>> pyo3_test.add?
+Signature: pyo3_test.add(a, b=0, /)
+Docstring: This function adds two unsigned 64-bit integers.
+Type: builtin_function_or_method
+
+If no signature is wanted at all, #[pyo3(text_signature = None)]
will disable the built-in signature. The snippet below demonstrates use of this:
use pyo3::prelude::*;
+
+/// This function adds two unsigned 64-bit integers.
+#[pyfunction]
+#[pyo3(signature = (a, b=0, /), text_signature = None)]
+fn add(a: u64, b: u64) -> u64 {
+ a + b
+}
+
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| {
+ let fun = pyo3::wrap_pyfunction_bound!(add, py)?;
+
+ let doc: String = fun.getattr("__doc__")?.extract()?;
+ assert_eq!(doc, "This function adds two unsigned 64-bit integers.");
+ assert!(fun.getattr("__text_signature__")?.is_none());
+
+ Ok(())
+ })
+}
+Now the function's __text_signature__
will be set to None
, and IPython will not display any signature in the help:
>>> pyo3_test.add.__text_signature__ == None
+True
+>>> pyo3_test.add?
+Docstring: This function adds two unsigned 64-bit integers.
+Type: builtin_function_or_method
+
+
+ To get started using PyO3 you will need three things: a Rust toolchain, a Python environment, and a way to build. We'll cover each of these below.
+++If you'd like to chat to the PyO3 maintainers and other PyO3 users, consider joining the PyO3 Discord server. We're keen to hear about your experience getting started, so we can make PyO3 as accessible as possible for everyone!
+
First, make sure you have Rust installed on your system. If you haven't already done so, try following the instructions here. PyO3 runs on both the stable
and nightly
versions so you can choose whichever one fits you best. The minimum required Rust version is 1.63.
If you can run rustc --version
and the version is new enough you're good to go!
To use PyO3, you need at least Python 3.7. While you can simply use the default Python interpreter on your system, it is recommended to use a virtual environment.
+While you can use any virtualenv manager you like, we recommend the use of pyenv
in particular if you want to develop or test for multiple different Python versions, so that is what the examples in this book will use. The installation instructions for pyenv
can be found here. (Note: To get the pyenv activate
and pyenv virtualenv
commands, you will also need to install the pyenv-virtualenv
plugin. The pyenv installer will install both together.)
It can be useful to keep the sources used when installing using pyenv
so that future debugging can see the original source files. This can be done by passing the --keep
flag as part of the pyenv install
command.
For example:
+pyenv install 3.12 --keep
+
There are a number of build and Python package management systems, such as setuptools-rust, or you can build manually. We recommend the use of maturin
, which you can install here. It is developed to work with PyO3 and provides the most "batteries included" experience, especially if you are aiming to publish to PyPI. maturin
is just a Python package, so you can add it in the same way you already install Python packages.
System Python:
+pip install maturin --user
+
+pipx:
+pipx install maturin
+
+pyenv:
+pyenv activate pyo3
+pip install maturin
+
+poetry:
+poetry add -G dev maturin
+
+After installation, you can run maturin --version
to check that you have correctly installed it.
First you should create the folder and virtual environment that are going to contain your new project. Here we will use the recommended pyenv
:
mkdir pyo3-example
+cd pyo3-example
+pyenv virtualenv pyo3
+pyenv local pyo3
+
+After this, you should install your build manager. In this example, we will use maturin
. After you've activated your virtualenv, add maturin
to it:
pip install maturin
+
+Now you can initialize the new project:
+maturin init
+
+If maturin
is already installed, you can create a new project using that directly as well:
maturin new -b pyo3 pyo3-example
+cd pyo3-example
+pyenv virtualenv pyo3
+pyenv local pyo3
+
+Sadly, maturin
 cannot currently be run in existing projects, so if you want to use Python in an existing project you basically have two options:
Create a new project as above and move your existing code into that project
Manually edit your project configuration as necessary
If you opt for the second option, here are the things you need to pay attention to:
+Make sure that the Rust crate you want to be able to access from Python is compiled into a library. You can have a binary output as well, but the code you want to access from Python has to be in the library part. Also, make sure that the crate type is cdylib
and add PyO3 as a dependency as so:
# If you already have [package] information in `Cargo.toml`, you can ignore
+# this section!
+[package]
+# `name` here is the name of the package.
+name = "pyo3_start"
+# these are good defaults:
+version = "0.1.0"
+edition = "2021"
+
+[lib]
+# The name of the native library. This is the name which will be used in Python to import the
+# library (i.e. `import string_sum`). If you change this, you must also change the name of the
+# `#[pymodule]` in `src/lib.rs`.
+name = "pyo3_example"
+
+# "cdylib" is necessary to produce a shared library for Python to import from.
+crate-type = ["cdylib"]
+
+[dependencies]
+pyo3 = { version = "0.22.1", features = ["extension-module"] }
+
+You should also create a pyproject.toml
with the following contents:
[build-system]
+requires = ["maturin>=1,<2"]
+build-backend = "maturin"
+
+[project]
+name = "pyo3_example"
+requires-python = ">=3.7"
+classifiers = [
+ "Programming Language :: Rust",
+ "Programming Language :: Python :: Implementation :: CPython",
+ "Programming Language :: Python :: Implementation :: PyPy",
+]
+
+After this you can set up Rust code to be available in Python as below; for example, you can place this code in src/lib.rs
:
use pyo3::prelude::*;
+
+/// Formats the sum of two numbers as string.
+#[pyfunction]
+fn sum_as_string(a: usize, b: usize) -> PyResult<String> {
+ Ok((a + b).to_string())
+}
+
+/// A Python module implemented in Rust. The name of this function must match
+/// the `lib.name` setting in the `Cargo.toml`, else Python will not be able to
+/// import the module.
+#[pymodule]
+fn pyo3_example(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ m.add_function(wrap_pyfunction!(sum_as_string, m)?)?;
+ Ok(())
+}
+Now you can run maturin develop
to prepare the Python package, after which you can use it like so:
$ maturin develop
+# lots of progress output as maturin runs the compilation...
+$ python
+>>> import pyo3_example
+>>> pyo3_example.sum_as_string(5, 20)
+'25'
+
+For more instructions on how to use Python code from Rust, see the Python from Rust page.
+In development, any changes in the code would require running maturin develop
before testing. To streamline the development process, you may want to install Maturin Import Hook which will run maturin develop
automatically when the library with code changes is being imported.
Welcome to the PyO3 user guide! This book is a companion to PyO3's API docs. It contains examples and documentation to explain all of PyO3's use cases in detail.
+The rough order of material in this user guide is as follows:
+Please choose from the chapters on the left to jump to individual topics, or continue below to start with PyO3's README.
+⚠️ Warning: API update in progress 🛠️
+PyO3 0.21 has introduced a significant new API, termed the "Bound" API after the new smart pointer Bound<T>
.
While most of this guide has been updated to the new API, it is possible some stray references to the older "GIL Refs" API such as &PyAny
remain.
Rust bindings for Python, including tools for creating native Python extension modules. Running and interacting with Python code from a Rust binary is also supported.
+ +PyO3 supports the following software versions:
+You can use PyO3 to write a native Python module in Rust, or to embed Python in a Rust binary. The following sections explain each of these in turn.
+PyO3 can be used to generate a native Python module. The easiest way to try this out for the first time is to use maturin
. maturin
is a tool for building and publishing Rust-based Python packages with minimal configuration. The following steps install maturin
, use it to generate and build a new Python package, and then launch Python to import and execute a function from the package.
First, follow the commands below to create a new directory containing a new Python virtualenv
, and install maturin
into the virtualenv using Python's package manager, pip
:
# (replace string_sum with the desired package name)
+$ mkdir string_sum
+$ cd string_sum
+$ python -m venv .env
+$ source .env/bin/activate
+$ pip install maturin
+
+Still inside this string_sum
directory, now run maturin init
. This will generate the new package source. When given the choice of bindings to use, select pyo3 bindings:
$ maturin init
+✔ 🤷 What kind of bindings to use? · pyo3
+ ✨ Done! New project created string_sum
+
+The most important files generated by this command are Cargo.toml
and lib.rs
, which will look roughly like the following:
Cargo.toml
[package]
+name = "string_sum"
+version = "0.1.0"
+edition = "2021"
+
+[lib]
+# The name of the native library. This is the name which will be used in Python to import the
+# library (i.e. `import string_sum`). If you change this, you must also change the name of the
+# `#[pymodule]` in `src/lib.rs`.
+name = "string_sum"
+# "cdylib" is necessary to produce a shared library for Python to import from.
+#
+# Downstream Rust code (including code in `bin/`, `examples/`, and `tests/`) will not be able
+# to `use string_sum;` unless the "rlib" or "lib" crate type is also included, e.g.:
+# crate-type = ["cdylib", "rlib"]
+crate-type = ["cdylib"]
+
+[dependencies]
+pyo3 = { version = "0.22.1", features = ["extension-module"] }
+
+src/lib.rs
use pyo3::prelude::*;
+
+/// Formats the sum of two numbers as string.
+#[pyfunction]
+fn sum_as_string(a: usize, b: usize) -> PyResult<String> {
+ Ok((a + b).to_string())
+}
+
+/// A Python module implemented in Rust. The name of this function must match
+/// the `lib.name` setting in the `Cargo.toml`, else Python will not be able to
+/// import the module.
+#[pymodule]
+fn string_sum(m: &Bound<'_, PyModule>) -> PyResult<()> {
+ m.add_function(wrap_pyfunction!(sum_as_string, m)?)?;
+ Ok(())
+}
+Finally, run maturin develop
. This will build the package and install it into the Python virtualenv previously created and activated. The package is then ready to be used from python
:
$ maturin develop
+# lots of progress output as maturin runs the compilation...
+$ python
+>>> import string_sum
+>>> string_sum.sum_as_string(5, 20)
+'25'
+
+To make changes to the package, just edit the Rust source code and then re-run maturin develop
to recompile.
To run this all as a single copy-and-paste, use the bash script below (replace string_sum
in the first command with the desired package name):
mkdir string_sum && cd "$_"
+python -m venv .env
+source .env/bin/activate
+pip install maturin
+maturin init --bindings pyo3
+maturin develop
+
+If you want to be able to run cargo test
or use this project in a Cargo workspace and are running into linker issues, there are some workarounds in the FAQ.
As well as with maturin
, it is possible to build using setuptools-rust
or manually. Both offer more flexibility than maturin
but require more configuration to get started.
To embed Python into a Rust binary, you need to ensure that your Python installation contains a shared library. The following steps demonstrate how to ensure this (for Ubuntu), and then give some example code which runs an embedded Python interpreter.
+To install the Python shared library on Ubuntu:
+sudo apt install python3-dev
+
+To install the Python shared library on RPM based distributions (e.g. Fedora, Red Hat, SuSE), install the python3-devel
package.
Start a new project with cargo new
and add pyo3
to the Cargo.toml
like this:
[dependencies.pyo3]
+version = "0.22.1"
+features = ["auto-initialize"]
+
+Example program displaying the value of sys.version
and the current user name:
use pyo3::prelude::*;
+use pyo3::types::IntoPyDict;
+
+fn main() -> PyResult<()> {
+ Python::with_gil(|py| {
+ let sys = py.import_bound("sys")?;
+ let version: String = sys.getattr("version")?.extract()?;
+
+ let locals = [("os", py.import_bound("os")?)].into_py_dict_bound(py);
+ let code = "os.getenv('USER') or os.getenv('USERNAME') or 'Unknown'";
+ let user: String = py.eval_bound(code, None, Some(&locals))?.extract()?;
+
+ println!("Hello {}, I'm Python {}", user, version);
+ Ok(())
+ })
+}
+The guide has a section with lots of examples +about this topic.
+built
crate as a PyDict
Everyone is welcomed to contribute to PyO3! There are many ways to support the project, such as:
+Our contributing notes and architecture guide have more resources if you wish to volunteer time for PyO3 and are searching where to start.
+If you don't have time to contribute yourself but still wish to support the project's future success, some of our maintainers have GitHub sponsorship pages:
+PyO3 is licensed under the Apache-2.0 license or the MIT license, at your option.
+Python is licensed under the Python License.
+Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in PyO3 by you, as defined in the Apache License, shall be dual-licensed as above, without any additional terms or conditions.
+ + +