diff --git a/README.md b/README.md
index f08d888d..54c8de2c 100644
--- a/README.md
+++ b/README.md
@@ -220,6 +220,31 @@ For AMD Instinct GPUs, visit the official website: [AMD Instinct](https://www.am
 The corresponding FlashMLA version can be found at: [AITER/MLA](https://github.com/ROCm/aiter/blob/main/aiter/mla.py)
 
+### PaddlePaddle Compatible API
+
+PaddlePaddle provides a PyTorch-compatible API layer that allows PyTorch ecosystem libraries to run seamlessly on Paddle. With this compatibility layer, you can use FlashMLA in PaddlePaddle projects with minimal code changes.
+
+**Installation:**
+
+```bash
+PADDLE_COMPATIBLE_API=1 pip install -v .
+```
+
+**Usage Example:**
+
+```python
+import paddle
+
+# Enable PyTorch compatibility mode
+paddle.compat.enable_torch_proxy()
+
+# Now you can use FlashMLA as if you were using PyTorch
+import flashmla
+
+# Use FlashMLA operations with PaddlePaddle tensors
+# The compatibility layer handles the framework differences automatically
+```
+
 ## Citation
 
 ```bibtex
diff --git a/setup.py b/setup.py
index 15fa6717..9edb79a2 100644
--- a/setup.py
+++ b/setup.py
@@ -5,6 +5,11 @@
 from setuptools import setup, find_packages
 
+if os.environ.get("PADDLE_COMPATIBLE_API", "0").lower() in ["1", "on", "true"]:
+    import paddle
+
+    paddle.compat.enable_torch_proxy()
+
 from torch.utils.cpp_extension import (
     BuildExtension,
     CUDAExtension,