From 20d168e4deeaae9c8864601314ed76812280a4c2 Mon Sep 17 00:00:00 2001
From: lucylq
Date: Thu, 10 Apr 2025 09:08:15 -0700
Subject: [PATCH] Add link to executorch-examples in module/c++ doc (#10026)

https://github.com/pytorch-labs/executorch-examples/tree/main/mv2/cpp

(cherry picked from commit 6c63cc9f7b1a19c37c200fed83ca984fb539e4df)
---
 docs/source/using-executorch-cpp.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/source/using-executorch-cpp.md b/docs/source/using-executorch-cpp.md
index 11841bfc4e8..6a3f3db7427 100644
--- a/docs/source/using-executorch-cpp.md
+++ b/docs/source/using-executorch-cpp.md
@@ -32,6 +32,8 @@ if (result.ok()) {
 
 For more information on the Module class, see [Running an ExecuTorch Model Using the Module Extension in C++](extension-module.md). For information on high-level tensor APIs, see [Managing Tensor Memory in C++](extension-tensor.md).
 
+For complete examples of building and running a C++ application using the Module API, refer to our [examples GitHub repository](https://github.com/pytorch-labs/executorch-examples/tree/main/mv2/cpp).
+
 ## Low-Level APIs
 
 Running a model using the low-level runtime APIs allows for a high-degree of control over memory allocation, placement, and loading. This allows for advanced use cases, such as placing allocations in specific memory banks or loading a model without a file system. For an end to end example using the low-level runtime APIs, see [Running an ExecuTorch Model in C++ Tutorial](running-a-model-cpp-tutorial.md).