From d45304b50e4ba47d06142316868fc23ae6ac7485 Mon Sep 17 00:00:00 2001 From: roman-janik-nxp Date: Wed, 8 Oct 2025 14:53:02 +0200 Subject: [PATCH 01/26] NXP backend: Update user guide and docs Readme (#14852) This PR updates NXP backend Readmes in backend and examples directories. - cc @robert-kalmar @JakeStevens @digantdesai --- backends/nxp/README.md | 20 +++++++++++--------- examples/nxp/README.md | 22 +++++++++++----------- 2 files changed, 22 insertions(+), 20 deletions(-) diff --git a/backends/nxp/README.md b/backends/nxp/README.md index 8b76d1e276b..ca2eedc4470 100644 --- a/backends/nxp/README.md +++ b/backends/nxp/README.md @@ -5,16 +5,18 @@ This subtree contains the ExecuTorch Backend implementation for the The eIQ® Neutron NPU is a highly scalable accelerator core architecture providing machine learning (ML) acceleration, able to support common and critical tasks for edge AI such as anomaly detection, speech recognition, -image classification, object detection, facial recognition, image segmentation, and generative AI use cases like +image classification, object detection, facial recognition, image segmentation, and generative AI use cases like large and small language models (LLMs & SLMs) and text-to-speech (TTS). -The architecture provides power and performance optimized NPUs integrated with NXP's broad portfolio of +The architecture provides power and performance optimized NPUs integrated with NXP's broad portfolio of microcontrollers and applications processors. -The eIQ Neutron NPUs offer support for a wide variety of neural network types such as CNN, RNN, TCN and Transformer +The eIQ Neutron NPUs offer support for a wide variety of neural network types such as CNN, RNN, TCN and Transformer networks, as well as the ability to adapt and scale to new model architectures, topologies and layer types introduced -to AI workloads. ML application development with the eIQ Neutron NPU is fully supported by the +to AI workloads. ML application development with the eIQ Neutron NPU is fully supported by the [eIQ machine learning software development environment](https://www.nxp.com/design/design-center/software/eiq-ml-development-environment/eiq-toolkit-for-end-to-end-model-development-and-deployment:EIQ-TOOLKIT). The eIQ AI SW Stack provides a streamlined development experience for developers and end-users of NXP products. +eIQ extensions connect broader AI ecosystems to the edge, such as the NVIDIA TAO extension, which enables developers +to bring AI models trained and fine-tuned with TAO to NXP-powered edge devices. ## Supported NXP platforms @@ -22,7 +24,7 @@ At this moment following eIQ® Neutron NPU variants and NXP platforms are suppor * **eIQ Neutron N3-64**, available on [i.MX RT700](https://www.nxp.com/products/i.MX-RT700) -In the future the NXP eIQ Neutron Backend will be extended to support [i.MX 9 Application Processors](https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-9-processors:IMX9-PROCESSORS) +In the future the NXP eIQ Neutron Backend will be extended to support [i.MX 9 Application Processors](https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-9-processors:IMX9-PROCESSORS) with eIQ Neutron NPU, like the [i.MX 95](https://www.nxp.com/products/iMX95). @@ -33,7 +35,7 @@ The eIQ Neutron NPU Backend should be considered as prototype quality at this mo improvements. NXP and the ExecuTorch community is actively developing this codebase. 
## Neutron Backend implementation and SW architecture -Neutron Backend uses the eIQ Neutron Converter as ML compiler to compile the delegated subgraph to Neutron microcode. +Neutron Backend uses the eIQ Neutron Converter as ML compiler to compile the delegated subgraph to Neutron microcode. The Neutron Converter accepts the ML model in LiteRT format, for the **eIQ Neutron N3** class therefore the Neutron Backend uses the LiteRT flatbuffers format as IR between the ExecuTorch and Neutron Converter ML compiler. @@ -44,10 +46,10 @@ uses the LiteRT flatbuffers format as IR between the ExecuTorch and Neutron Conv `node_conveters` is structured as single module for each Edge operator. * `backend/ir/lib` - automatically generated handlers from LiteRT flatbuffers schema. * `backend/ir/tflite_generator` and `backend/ir/tflite_optimizer` handle the serialization - of the in-memory built subgraph for delegation into LiteRT/TFLite flatbuffers + of the in-memory built subgraph for delegation into LiteRT/TFLite flatbuffers representation. Code taken from the onnx2tflite tool. -* `edge_passes` - Various passes operating on Edge dialect level. -* `quantizer` - Neutron Backend quantizer implementation. +* `edge_passes` - Various passes operating on Edge dialect level. +* `quantizer` - Neutron Backend quantizer implementation. * `runtime` - Neutron Backend runtime implementation. For running compiled on device. * `tests/` - Unit tests for Neutron backend. * `tests/converter/node_converter` - Operator level unit tests. diff --git a/examples/nxp/README.md b/examples/nxp/README.md index 8a6ba39c091..3a276d28f21 100644 --- a/examples/nxp/README.md +++ b/examples/nxp/README.md @@ -4,11 +4,11 @@ format and delegate the model computation to eIQ Neutron NPU using the eIQ Neutr ## Layout * `experimental/` - contains CifarNet model example. -* `models` - demo models instantiation used in examples +* `models` - various example models. * `aot_neutron_compile.py` - script with end-to-end ExecuTorch AoT Neutron Backend workflow. * `README.md` - this file. -* `run_aot_example.sh` - utility script to launch _aot_neutron_compile.py_. Primarily for CI purpose. -* `setup.sh` - setup script to install NeutronBackend dependencies. +* `run_aot_example.sh` - utility script for aot_neutron_compile.py. +* `setup.sh` - setup script for Neutron Converter installation. ## Setup Please finish tutorial [Setting up ExecuTorch](https://pytorch.org/executorch/main/getting-started-setup). @@ -23,24 +23,24 @@ $ ./examples/nxp/setup.sh * MobileNetV2 ## PyTorch Model Delegation to Neutron Backend -First we will start with an example script converting the model. This example show the CifarNet model preparation. -It is the same model which is part of the `example_cifarnet` in +First we will start with an example script converting the model. This example show the CifarNet model preparation. +It is the same model which is part of the `example_cifarnet` in [MCUXpresso SDK](https://www.nxp.com/design/design-center/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-software-development-kit-sdk:MCUXpresso-SDK). -The NXP MCUXpresso software and tools offer comprehensive development solutions designed to help accelerate embedded -system development of applications based on MCUs from NXP. The MCUXpresso SDK includes a flexible set of peripheral +The NXP MCUXpresso software and tools offer comprehensive development solutions designed to help accelerate embedded +system development of applications based on MCUs from NXP. 
The MCUXpresso SDK includes a flexible set of peripheral drivers designed to speed up and simplify development of embedded applications. The steps are expected to be executed from the `executorch` root folder. -1. Run the `aot_neutron_compile.py` example with the `cifar10` model +1. Run the `aot_neutron_compile.py` example with the `cifar10` model ```commandline $ python -m examples.nxp.aot_neutron_compile --quantize \ - --delegate --neutron_converter_flavor SDK_25_06 -m cifar10 + --delegate --neutron_converter_flavor SDK_25_09 -m cifar10 ``` -2. It will generate you `cifar10_nxp_delegate.pte` file which can be used with the MCUXpresso SDK `cifarnet_example` +2. It will generate you `cifar10_nxp_delegate.pte` file which can be used with the MCUXpresso SDK `cifarnet_example` project, presented [here](https://mcuxpresso.nxp.com/mcuxsdk/latest/html/middleware/eiq/executorch/docs/nxp/topics/example_applications.html#how-to-build-and-run-executorch-cifarnet-example). This project will guide you through the process of deploying your PTE model to the device. To get the MCUXpresso SDK follow this [guide](https://mcuxpresso.nxp.com/mcuxsdk/latest/html/middleware/eiq/executorch/docs/nxp/topics/getting_mcuxpresso.html), -use the MCUXpresso SDK v25.06.00. +use the MCUXpresso SDK v25.09.00. From 7505b1656c421ad456913b43c36e666e184b1ffd Mon Sep 17 00:00:00 2001 From: Mergen Nachin Date: Mon, 13 Oct 2025 12:49:37 -0400 Subject: [PATCH 02/26] Update top-level README.md file (#15049) --- README.md | 18 +++++++----------- 1 file changed, 7 insertions(+), 11 deletions(-) diff --git a/README.md b/README.md index c7053431813..d2d115e32d2 100644 --- a/README.md +++ b/README.md @@ -52,7 +52,7 @@ ExecuTorch uses **ahead-of-time (AOT) compilation** to prepare PyTorch models fo 2. **⚙️ Compile** — Quantize, optimize, and partition to hardware backends → `.pte` 3. **🚀 Execute** — Load `.pte` on-device via lightweight C++ runtime -Models use a standardized [Core ATen operator set](https://docs.pytorch.org/executorch/main/compiler-ir-advanced.html#intermediate-representation). [Partitioners](https://docs.pytorch.org/executorch/main/compiler-delegate-and-partitioner.html) delegate subgraphs to specialized hardware (NPU/GPU) with CPU fallback. +Models use a standardized [Core ATen operator set](https://docs.pytorch.org/executorch/main/concepts.html#core-aten-operators). [Partitioners](https://docs.pytorch.org/executorch/main/compiler-delegate-and-partitioner.html) delegate subgraphs to specialized hardware (NPU/GPU) with CPU fallback. 
Learn more: [How ExecuTorch Works](https://docs.pytorch.org/executorch/main/intro-how-it-works.html) • [Architecture Guide](https://docs.pytorch.org/executorch/main/getting-started-architecture.html) @@ -104,16 +104,14 @@ outputs = method.execute([torch.randn(1, 3, 224, 224)]) Module module("model.pte"); auto tensor = make_tensor_ptr({2, 2}, {1.0f, 2.0f, 3.0f, 4.0f}); -auto outputs = module.forward(tensor); +auto outputs = module.forward({tensor}); ``` **[Swift (iOS)](https://docs.pytorch.org/executorch/main/ios-section.html)** ```swift -import ExecuTorch - let module = Module(filePath: "model.pte") -let input = Tensor([1.0, 2.0, 3.0, 4.0], shape: [2, 2]) -let outputs = try module.forward(input) +let input = Tensor([1.0, 2.0, 3.0, 4.0]) +let outputs: [Value] = try module.forward([input]) ``` **[Kotlin (Android)](https://docs.pytorch.org/executorch/main/android-section.html)** @@ -153,8 +151,6 @@ runner->generate("Hello, how are you?", config); **[Swift (iOS)](https://docs.pytorch.org/executorch/main/llm/run-on-ios.html)** ```swift -import ExecuTorchLLM - let runner = TextRunner(modelPath: "llama.pte", tokenizerPath: "tiktoken.bin") try runner.generate("Hello, how are you?", Config { $0.sequenceLength = 128 @@ -204,9 +200,9 @@ ExecuTorch powers on-device AI at scale across Meta's family of apps, VR/AR devi **Multimodal:** [Llava](examples/models/llava/README.md) (vision-language), [Voxtral](examples/models/voxtral/README.md) (audio-language) -**Vision/Speech:** [MobileNetV2](https://github.com/meta-pytorch/executorch-examples/tree/main/mv2), [DeepLabV3](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3), [Whisper](https://github.com/meta-pytorch/executorch-examples/tree/main/whisper/android/WhisperApp) +**Vision/Speech:** [MobileNetV2](https://github.com/meta-pytorch/executorch-examples/tree/main/mv2), [DeepLabV3](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3) -**Resources:** [`examples/`](examples/) directory • [executorch-examples](https://github.com/meta-pytorch/executorch-examples) out-of-tree demos • [Optimum-ExecuTorch](https://github.com/huggingface/optimum-executorch) for HuggingFace models +**Resources:** [`examples/`](examples/) directory • [executorch-examples](https://github.com/meta-pytorch/executorch-examples) mobile demos • [Optimum-ExecuTorch](https://github.com/huggingface/optimum-executorch) for HuggingFace models ## Key Features @@ -226,7 +222,7 @@ See [Advanced Topics](https://docs.pytorch.org/executorch/main/advanced-topics-s - [**Documentation Home**](https://docs.pytorch.org/executorch/main/index.html) — Complete guides and tutorials - [**API Reference**](https://docs.pytorch.org/executorch/main/api-section.html) — Python, C++, Java/Kotlin APIs - [**Backend Integration**](https://docs.pytorch.org/executorch/main/backend-delegates-integration.html) — Build custom hardware backends -- [**Troubleshooting**](https://docs.pytorch.org/executorch/main/support-section.html) — Common issues and solutions +- [**Troubleshooting**](https://docs.pytorch.org/executorch/main/using-executorch-troubleshooting.html) — Common issues and solutions ## Community & Contributing From 15ccdf5e3edfc22edcfebba18605d7bbd78cf14f Mon Sep 17 00:00:00 2001 From: Mergen Nachin Date: Mon, 13 Oct 2025 13:06:34 -0400 Subject: [PATCH 03/26] Fix documentation link for Core ATen operators (#15050) Updated link to Core ATen operator set documentation. 
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index d2d115e32d2..531fcc3b4ef 100644
--- a/README.md
+++ b/README.md
@@ -52,7 +52,7 @@ ExecuTorch uses **ahead-of-time (AOT) compilation** to prepare PyTorch models fo
 2. **⚙️ Compile** — Quantize, optimize, and partition to hardware backends → `.pte`
 3. **🚀 Execute** — Load `.pte` on-device via lightweight C++ runtime
 
-Models use a standardized [Core ATen operator set](https://docs.pytorch.org/executorch/main/concepts.html#core-aten-operators). [Partitioners](https://docs.pytorch.org/executorch/main/compiler-delegate-and-partitioner.html) delegate subgraphs to specialized hardware (NPU/GPU) with CPU fallback.
+Models use a standardized [Core ATen operator set](https://docs.pytorch.org/executorch/main/compiler-ir-advanced.html#intermediate-representation). [Partitioners](https://docs.pytorch.org/executorch/main/compiler-delegate-and-partitioner.html) delegate subgraphs to specialized hardware (NPU/GPU) with CPU fallback.
 
 Learn more: [How ExecuTorch Works](https://docs.pytorch.org/executorch/main/intro-how-it-works.html) • [Architecture Guide](https://docs.pytorch.org/executorch/main/getting-started-architecture.html)
 
From 3c8f647ed292e3254236fa2b67f6d9fa65ed1a8a Mon Sep 17 00:00:00 2001
From: Mergen Nachin
Date: Mon, 13 Oct 2025 16:51:39 -0400
Subject: [PATCH 04/26] Fix various minor links in top-level README.md (#15052)

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 531fcc3b4ef..66be37bedc8 100644
--- a/README.md
+++ b/README.md
@@ -200,9 +200,9 @@ ExecuTorch powers on-device AI at scale across Meta's family of apps, VR/AR devi
 
 **Multimodal:** [Llava](examples/models/llava/README.md) (vision-language), [Voxtral](examples/models/voxtral/README.md) (audio-language)
 
-**Vision/Speech:** [MobileNetV2](https://github.com/meta-pytorch/executorch-examples/tree/main/mv2), [DeepLabV3](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3)
+**Vision/Speech:** [MobileNetV2](https://github.com/meta-pytorch/executorch-examples/tree/main/mv2), [DeepLabV3](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3), [Whisper](https://github.com/meta-pytorch/executorch-examples/tree/main/whisper/android/WhisperApp)
 
-**Resources:** [`examples/`](examples/) directory • [executorch-examples](https://github.com/meta-pytorch/executorch-examples) mobile demos • [Optimum-ExecuTorch](https://github.com/huggingface/optimum-executorch) for HuggingFace models
+**Resources:** [`examples/`](examples/) directory • [executorch-examples](https://github.com/meta-pytorch/executorch-examples) out-of-tree demos • [Optimum-ExecuTorch](https://github.com/huggingface/optimum-executorch) for HuggingFace models
 
 ## Key Features
 
@@ -222,7 +222,7
@@ See [Advanced Topics](https://docs.pytorch.org/executorch/main/advanced-topics-s - [**Documentation Home**](https://docs.pytorch.org/executorch/main/index.html) — Complete guides and tutorials - [**API Reference**](https://docs.pytorch.org/executorch/main/api-section.html) — Python, C++, Java/Kotlin APIs - [**Backend Integration**](https://docs.pytorch.org/executorch/main/backend-delegates-integration.html) — Build custom hardware backends -- [**Troubleshooting**](https://docs.pytorch.org/executorch/main/using-executorch-troubleshooting.html) — Common issues and solutions +- [**Troubleshooting**](https://docs.pytorch.org/executorch/main/support-section.html) — Common issues and solutions ## Community & Contributing From 68e587a0c7086bb94bbbc6530d0fb98063cafdf0 Mon Sep 17 00:00:00 2001 From: robert-kalmar Date: Tue, 14 Oct 2025 14:47:23 +0200 Subject: [PATCH 05/26] NXP Backend: Update Readme files (#14896) --- backends/nxp/README.md | 2 -- examples/nxp/README.md | 6 +++--- 2 files changed, 3 insertions(+), 5 deletions(-) diff --git a/backends/nxp/README.md b/backends/nxp/README.md index ca2eedc4470..b27c054e7c1 100644 --- a/backends/nxp/README.md +++ b/backends/nxp/README.md @@ -15,8 +15,6 @@ networks, as well as the ability to adapt and scale to new model architectures, to AI workloads. ML application development with the eIQ Neutron NPU is fully supported by the [eIQ machine learning software development environment](https://www.nxp.com/design/design-center/software/eiq-ml-development-environment/eiq-toolkit-for-end-to-end-model-development-and-deployment:EIQ-TOOLKIT). The eIQ AI SW Stack provides a streamlined development experience for developers and end-users of NXP products. -eIQ extensions connect broader AI ecosystems to the edge, such as the NVIDIA TAO extension, which enables developers -to bring AI models trained and fine-tuned with TAO to NXP-powered edge devices. ## Supported NXP platforms diff --git a/examples/nxp/README.md b/examples/nxp/README.md index 3a276d28f21..336a0e9189b 100644 --- a/examples/nxp/README.md +++ b/examples/nxp/README.md @@ -4,11 +4,11 @@ format and delegate the model computation to eIQ Neutron NPU using the eIQ Neutr ## Layout * `experimental/` - contains CifarNet model example. -* `models` - various example models. +* `models` - demo models instantiation used in examples. * `aot_neutron_compile.py` - script with end-to-end ExecuTorch AoT Neutron Backend workflow. * `README.md` - this file. -* `run_aot_example.sh` - utility script for aot_neutron_compile.py. -* `setup.sh` - setup script for Neutron Converter installation. +* `run_aot_example.sh` - utility script to launch _aot_neutron_compile.py_. Primarily for CI purpose. +* `setup.sh` - setup script to install Neutron Backend dependencies. ## Setup Please finish tutorial [Setting up ExecuTorch](https://pytorch.org/executorch/main/getting-started-setup). From 15be82f31b7f2eb931afe145c8044c8b13988062 Mon Sep 17 00:00:00 2001 From: Sicheng Stephen Jia Date: Thu, 16 Oct 2025 14:16:38 -0400 Subject: [PATCH 06/26] [Samsung][docs] Update to the new template (#15087) Summary: Title says it all! Add docs for the Samsung backend based on the template introduced in https://github.com/pytorch/executorch/pull/14873. 
--- .../samsung/samsung-op-support-table.csv | 45 +++++++ .../backends/samsung/samsung-op-support.rst | 11 ++ .../backends/samsung/samsung-overview.md | 117 ++++++++++++++++++ .../backends/samsung/samsung-partitioner.md | 29 +++++ .../backends/samsung/samsung-quantization.md | 60 +++++++++ 5 files changed, 262 insertions(+) create mode 100644 docs/source/backends/samsung/samsung-op-support-table.csv create mode 100644 docs/source/backends/samsung/samsung-op-support.rst create mode 100644 docs/source/backends/samsung/samsung-overview.md create mode 100644 docs/source/backends/samsung/samsung-partitioner.md create mode 100644 docs/source/backends/samsung/samsung-quantization.md diff --git a/docs/source/backends/samsung/samsung-op-support-table.csv b/docs/source/backends/samsung/samsung-op-support-table.csv new file mode 100644 index 00000000000..7d925c43400 --- /dev/null +++ b/docs/source/backends/samsung/samsung-op-support-table.csv @@ -0,0 +1,45 @@ +Operator,Quantization,Constraints +add,static int8, +avg_pool2d,static int8,"ceil_mode=False, divisor_override=pooling_region" +batch_norm,static int8, +bmm,static int8, +cat,static int8,at most 1 constant tensor +clamp,static int8, +constant_pad_nd,static int8,padding_value=0.0 only +conv2d,static int8,constant weights +dequantize_per_channel,, +dequantize_per_tensor,, +div,static int8, +embedding,static int8, +expand_copy,,"expanding at most one axis, new dimensions must be size 1" +gelu,static int8, +getitem,, +hardsigmoid,static int8, +hardswish,static int8, +hardtanh,static int8, +layer_norm,static int8,norm at last axis only +leaky_relu,static int8, +linear,static int8,constant weights +log_softmax,static int8, +max_pool2d,static int8,"ceil_mode=False, indices not supported" +maximum,, +mean_dim,static int8, +minimum,, +mul,static int8, +permute,static int8, +pixel_shuffle,, +quantize_per_channel,, +quantize_per_tensor,, +relu,static int8, +reshape,static int8, +rsqrt,static int8, +select,static int8, +slice_copy,static int8, +softmax,static int8, +sqrt,static int8, +squeeze,static int8, +sub,static int8, +to_copy,,memory_format=contiguous only +unsqueeze,static int8, +upsample_bilinear2d,static int8, +upsample_nearest2d,static int8, diff --git a/docs/source/backends/samsung/samsung-op-support.rst b/docs/source/backends/samsung/samsung-op-support.rst new file mode 100644 index 00000000000..ecccd565021 --- /dev/null +++ b/docs/source/backends/samsung/samsung-op-support.rst @@ -0,0 +1,11 @@ +================ +Operator Support +================ + +This page lists the PyTorch operators currently supported by the Samsung Exynos backend. + +.. csv-table:: Operator Support + :file: samsung-op-support-table.csv + :header-rows: 1 + :widths: 25 15 55 + :align: center diff --git a/docs/source/backends/samsung/samsung-overview.md b/docs/source/backends/samsung/samsung-overview.md new file mode 100644 index 00000000000..464d4e322c7 --- /dev/null +++ b/docs/source/backends/samsung/samsung-overview.md @@ -0,0 +1,117 @@ +# Samsung Exynos Backend + +ExecuTorch's Samsung Exynos backend enables the execution of ExecuTorch models on +Samsung SoCs via the NPU/DSP. The delegate is built on top of the +[Samsung Exynos AI Litecore SDK]((https://soc-developer.semiconductor.samsung.com/global/development/ai-litecore)). 
+ +## Features + +- Wide range of operator support +- Supported inference precisions: + - FP16 + - 8-bit statically quantized (int8/uint8) + - 16-bit statically quantized (int16/uint16) + +## Target Requirements + +Currently, the Samsung Exynos backend is supported only for devices with the +following chipsets: + +- Exynos 2500 (E9955) + +## Development Requirements + +The [Samsung Exynos AI Litecore SDK](https://soc-developer.semiconductor.samsung.com/global/development/ai-litecore) +is required to build the Exynos backend from source, and is also required to +export models to the Exynos delegate. + +---- + +## Using the Samsung Exynos Backend + +To target the Exynos backend during the export and lowering process, pass an instance of +the `EnnPartitioner` to `to_edge_transform_and_lower`. The example below +demonstrates this process using the MobileNet V2 model from torchvision. + +```python +import torch +import torchvision.models as models +from torchvision.models.mobilenetv2 import MobileNet_V2_Weights +from executorch.backends.samsung.partition.enn_partitioner import EnnPartitioner +from executorch.backends.samsung.serialization.compile_options import ( + gen_samsung_backend_compile_spec, +) +from executorch.exir import to_edge_transform_and_lower + +mobilenet_v2 = models.mobilenetv2.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval() +sample_inputs = (torch.randn(1, 3, 224, 224), ) + +chipset = "E9955" +compile_specs = [gen_samsung_backend_compile_spec(chipset)] + +et_program = to_edge_transform_and_lower( + torch.export.export(mobilenet_v2, sample_inputs), + partitioner=[EnnPartitioner(compile_specs)], +).to_executorch() + +with open("mv2_xnnpack.pte", "wb") as file: + et_program.write_to_file(file) +``` + +See [Partitioner API](samsung-partitioner.md) for a reference on available partitioner options. + +---- + +## Quantization + +The Samsung Exynos backend support statically quantized models with 8-bit and 16-bit +integral types. + +See [Samsung Exynos Quantization](samsung-quantization.md) for more +information on available quantization schemes and APIs. + +---- + +## Runtime Integration + +To run the model on-device, use the standard ExecuTorch runtime APIs. + +The Exynos backend is currently not available in any of ExecuTorch's published packages. +To access it, build ExecuTorch from source. When building from source, pass +`-DEXECUTORCH_BUILD_EXYNOS=ON` when configuring the CMake build. See [Running on Device](/getting-started.md#running-on-device) +for more information. + +Then, to link against the backend, add the `executorch_backends` CMake target as a build +dependency. + +``` +# CMakeLists.txt +add_subdirectory("executorch") +... +target_link_libraries( + my_target + PRIVATE executorch + executorch_backends + ... +) +``` + +No additional steps are necessary to use the backend beyond linking the target. Any +Exynos delegated .pte file will automatically run on the registered backend. 
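For reference, invoking the delegated program from C++ looks the same as for any other backend. The sketch below mirrors the generic `Module` example from the top-level README; the file name, input shape, and zero-filled input data are placeholders rather than anything Exynos-specific, so adapt them to your exported model.

```cpp
// Minimal sketch: run an Exynos-delegated .pte with the Module extension API.
// "mv2_exynos.pte" and the 1x3x224x224 input are placeholders for your model.
#include <executorch/extension/module/module.h>
#include <executorch/extension/tensor/tensor.h>

#include <vector>

using namespace ::executorch::extension;

int main() {
  // Load the Exynos-delegated program produced by the export step above.
  Module module("mv2_exynos.pte");

  // Zero-filled input with the shape MobileNetV2 expects (placeholder data).
  std::vector<float> input(1 * 3 * 224 * 224, 0.f);
  auto tensor = from_blob(input.data(), {1, 3, 224, 224});

  // Delegated subgraphs run on the registered Exynos backend automatically.
  const auto result = module.forward(tensor);
  if (result.ok()) {
    const auto& output = result->at(0).toTensor();
    // ... consume output ...
    (void)output;
  }
  return 0;
}
```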
+
+## Reference
+
+**→{doc}`samsung-partitioner` — Partitioner options.**
+
+**→{doc}`samsung-quantization` — Supported quantization schemes.**
+
+**→{doc}`samsung-op-support` — Supported operators.**
+
+```{toctree}
+:maxdepth: 2
+:hidden:
+:caption: Exynos Backend
+
+samsung-partitioner
+samsung-quantization
+samsung-op-support
diff --git a/docs/source/backends/samsung/samsung-partitioner.md b/docs/source/backends/samsung/samsung-partitioner.md
new file mode 100644
index 00000000000..eb84a795551
--- /dev/null
+++ b/docs/source/backends/samsung/samsung-partitioner.md
@@ -0,0 +1,29 @@
+# Partitioner API
+
+The `EnnPartitioner` API is the primary entrypoint when exporting a model to the Samsung
+Exynos backend. The partitioner is responsible for determining which parts of the model
+should be lowered to the backend and also provides an interface for configuring the
+behaviour of the backend.
+
+Currently, the configuration options for `EnnPartitioner` can be generated automatically
+using the `gen_samsung_backend_compile_spec` API. For instance,
+
+```python
+from executorch.backends.samsung.partition.enn_partitioner import EnnPartitioner
+from executorch.backends.samsung.serialization.compile_options import (
+    gen_samsung_backend_compile_spec,
+)
+
+from executorch.exir import to_edge_transform_and_lower
+
+chipset = "E9955"
+compile_specs = [gen_samsung_backend_compile_spec(chipset)]
+
+et_program = to_edge_transform_and_lower(
+    exported_program,
+    partitioner=[EnnPartitioner(compile_specs)],
+).to_executorch()
+```
+
+At the moment, only `"E9955"` is supported as a valid chipset name, which corresponds to
+the Exynos 2500 SoC. Support for additional chipsets will be added in the future.
diff --git a/docs/source/backends/samsung/samsung-quantization.md b/docs/source/backends/samsung/samsung-quantization.md
new file mode 100644
index 00000000000..ad4b50cb93d
--- /dev/null
+++ b/docs/source/backends/samsung/samsung-quantization.md
@@ -0,0 +1,60 @@
+# Quantization
+
+The Exynos backend currently supports executing statically quantized 8-bit models.
+
+### 8-bit quantization with the PT2E quantization flow
+
+To perform 8-bit quantization with the PT2E flow, perform the following steps prior to exporting the model:
+
+1) Create an instance of the `EnnQuantizer` class and set the desired quantization behaviour.
+2) Use `torch.export.export` to obtain a graph module representation of the source model.
+3) Use `prepare_pt2e` to prepare the model for quantization.
+4) Execute the prepared model with representative samples to calibrate the quantized tensor activation ranges.
+5) Use `convert_pt2e` to quantize the model.
+6) Export and lower the model using the standard export flow.
+
+The output of `convert_pt2e` is a PyTorch model which can be exported and lowered using
+the same export flow as non-quantized models. As it is a regular PyTorch model, it can
+also be used to evaluate the accuracy of the quantized model using standard PyTorch
+techniques.
+
+The below example shows how to quantize a MobileNetV2 model using the PT2E quantization flow.
+ +```python +import torch +import torchvision.models as models +from torchvision.models.mobilenetv2 import MobileNet_V2_Weights + +from executorch.backends.samsung.partition.enn_partitioner import EnnPartitioner +from executorch.backends.samsung.quantizer.quantizer import EnnQuantizer, Precision + +from executorch.exir import to_edge_transform_and_lower +from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e + +model = models.mobilenetv2.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval() +sample_inputs = (torch.randn(1, 3, 224, 224), ) + +# Currently, "A8W8" is the only supported precision mode +precision = "A8W8" +is_per_channel = True +is_qat = False + +quantizer = EnnQuantizer() +quantizer.set_quant_params(precision, is_per_channel, is_qat) # (1) + +training_ep = torch.export.export(model, sample_inputs).module() # (2) +prepared_model = prepare_pt2e(training_ep, quantizer) # (3) + +for cal_sample in [torch.randn(1, 3, 224, 224)]: # Replace with representative model inputs + prepared_model(cal_sample) # (4) Calibrate + +quantized_model = convert_pt2e(prepared_model) # (5) + +et_program = to_edge_transform_and_lower( # (6) + torch.export.export(quantized_model, sample_inputs), + partitioner=[EnnPartitioner()], +).to_executorch() +``` + +See [PyTorch 2 Export Post Training Quantization](https://docs.pytorch.org/ao/main/tutorials_source/pt2e_quant_ptq.html) +for more information. From 6108d6be73885e3d88203460f5bbc0b7f002137a Mon Sep 17 00:00:00 2001 From: Siddartha Pothapragada Date: Thu, 16 Oct 2025 23:22:12 -0700 Subject: [PATCH 07/26] Add Pico2 Tutorials on Raspberry Pi (#15188) ### Summary Add Pico2 Tutorials on Raspberry Pi [PLEASE REMOVE] See [CONTRIBUTING.md's Pull Requests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#pull-requests) for ExecuTorch PR guidelines. [PLEASE REMOVE] If this PR closes an issue, please add a `Fixes #` line. [PLEASE REMOVE] If this PR introduces a fix or feature that should be the upcoming release notes, please add a "Release notes: " label. For a list of available release notes labels, check out [CONTRIBUTING.md's Pull Requests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#pull-requests). ### Test plan [PLEASE REMOVE] How did you test this PR? Please write down any manual commands you used and note down tests that you have written if applicable. 
--- docs/source/embedded-section.md | 3 + docs/source/pico2_tutorial.md | 198 ++++++++++++++++++++++++++ examples/raspberry_pi/pico2/README.md | 39 ++--- 3 files changed, 224 insertions(+), 16 deletions(-) create mode 100644 docs/source/pico2_tutorial.md diff --git a/docs/source/embedded-section.md b/docs/source/embedded-section.md index 5636a7546dc..aac64190030 100644 --- a/docs/source/embedded-section.md +++ b/docs/source/embedded-section.md @@ -26,6 +26,8 @@ Start here for C++ development with ExecuTorch runtime APIs and essential tutori - {doc}`tutorial-arm-ethos-u` — Export a simple PyTorch model for the ExecuTorch Ethos-U backend - {doc}`raspberry_pi_llama_tutorial` — Deploy a LLaMA model on a Raspberry Pi +- {doc}`pico2_tutorial` — Deploy a demo MNIST model on the Raspberry Pi Pico 2 + ```{toctree} :hidden: @@ -38,3 +40,4 @@ using-executorch-building-from-source embedded-backends tutorial-arm-ethos-u raspberry_pi_llama_tutorial +pico2_tutorial diff --git a/docs/source/pico2_tutorial.md b/docs/source/pico2_tutorial.md new file mode 100644 index 00000000000..7098df11b05 --- /dev/null +++ b/docs/source/pico2_tutorial.md @@ -0,0 +1,198 @@ +# Pico2: A simple MNIST Tutorial + +Deploy your PyTorch models directly to Raspberry Pi Pico2 microcontroller with ExecuTorch. + +## What You'll Build + +A 28×28 MNIST digit classifier running on memory constrained, low power microcontrollers: + +- Input: ASCII art digits (0, 1, 4, 7) +- Output: Real-time predictions via USB serial +- Memory: <400KB total footprint + +## Prerequisites + +- [Environment Setup section](https://docs.pytorch.org/executorch/1.0/using-executorch-building-from-source.html) + +- Refer to this link on how to accept 'EULA' agreement and setup toolchain [link](https://docs.pytorch.org/executorch/1.0/backends-arm-ethos-u.html#development-requirements) + +- Verify ARM toolchain + +```bash +which arm-none-eabi-gcc # --> arm/ethos-u-scratch/arm-gnu-toolchain-13.3.rel1-x86_64-arm-none-eabi/bin/ +``` + +## Step 1: Generate pte from given example Model + +- Use the [provided example model](https://github.com/pytorch/executorch/blob/main/examples/raspberry_pi/pico2/export_mlp_mnist.py) + +```bash +python export_mlp_mnist.py # Creates balanced_tiny_mlp_mnist.pte +``` + +- **Note:** This is hand-crafted MNIST Classifier (proof-of-concept), and not production trained. This tiny MLP recognizes digits 0, 1, 4, and 7 using manually designed feature detectors. + +## Step 2: Build Firmware for Pico2 + +```bash +# Generate model + +python export_mlp_mnist.py # Creates balanced_tiny_mlp_mnist.pte + +# Build Pico2 firmware (one command!) + +./executorch/examples/rpi/build_firmware_pico.sh --model=balanced_tiny_mlp_mnist.pte # This creates executorch_pico.uf2, a firmware image for Pico2 +``` + +Output: **executorch_pico.uf2** firmware file (examples/raspberry_pi/pico2/build/) + +**Note:** 'build_firmware_pico.sh' script converts given model pte to hex array and generates C code for the same via this helper [script](https://github.com/pytorch/executorch/blob/main/examples/raspberry_pi/pico2/pte_to_array.py). This C code is then compiled to generate final .uf2 binary which is then flashed to Pico2. 
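To make the note above concrete: the model ends up embedded in the firmware as an ordinary C byte array that the runtime reads instead of a file on storage. The snippet below only illustrates the general shape of such generated code; the symbol names and bytes are hypothetical, and the actual output of `pte_to_array.py` may differ.

```c
/* Illustrative only: hypothetical names and bytes, not the literal output
 * of pte_to_array.py. The firmware hands this buffer to the ExecuTorch
 * runtime at startup instead of loading a .pte file from a filesystem. */
#include <stddef.h>

static const unsigned char model_pte[] = {
    0x18, 0x00, 0x00, 0x00, /* ... remaining bytes of balanced_tiny_mlp_mnist.pte ... */
};
static const size_t model_pte_len = sizeof(model_pte);
```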
+ +## Step 3: Flash to Pico2 + +Hold BOOTSEL button on Pico2 +Connect USB → Mounts as ^RPI-RP2^ drive +Drag & drop ^executorch_pico.uf2^ file +Release BOOTSEL → Pico2 reboots with your model + +## Step 4: Verify Deployment + +**Success indicators:** + +- LED blinks 10× at 500ms → Model running ✅ +- LED blinks 10× at 100ms → Error, check serial ❌ + +**View predictions:** + +```bash +# Connect serial terminal +screen /dev/tty.usbmodem1101 115200 +# Expected output: + +Something like: + +=== Digit 7 === +############################ +############################ + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### + #### +#### +### + +Input stats: 159 white pixels out of 784 total +Running neural network inference... +✅ Neural network results: + Digit 0: 370.000 + Digit 1: 0.000 + Digit 2: -3.000 + Digit 3: -3.000 + Digit 4: 860.000 + Digit 5: -3.000 + Digit 6: -3.000 + Digit 7: 1640.000 ← PREDICTED + Digit 8: -3.000 + Digit 9: -3.000 + +� PREDICTED: 7 (Expected: 7) ✅ CORRECT! +``` + +## Memory Optimization Tips + +### Pico2 Constraints + +- 520KB SRAM (runtime memory) +- 4MB Flash (model storage) +- Keep models small: + +### Common Issues + +- "Memory allocation failed" → Reduce model size and use quantization +- "Operator missing" → Use selective build: ^--operators=add,mul,relu^ +- "Import error" → Check ^arm-none-eabi-gcc^ toolchain setup. + +In order to resolve some of the issues above, refer to the following guides: + +- [ExecuTorch Quantization Optimization Guide](https://docs.pytorch.org/executorch/1.0/quantization-optimization.html) +- [Model Export & Lowering](https://docs.pytorch.org/executorch/1.0/using-executorch-export.html) and +- [Selective Build support](https://docs.pytorch.org/executorch/1.0/kernel-library-selective-build.html) + +### Firmware Size Analysis + +```bash +cd +ls -al examples/raspberry_pi/pico2/build/executorch_pico.elf +``` + +- **Overall section sizes** + +```bash +arm-none-eabi-size -A examples/raspberry_pi/pico2/build/executorch_pico.elf +``` + +- **Detailed section breakdown** + +```bash +arm-none-eabi-objdump -h examples/raspberry_pi/pico2/build/executorch_pico.elf +``` + +- **Symbol sizes (largest consumers)** + +```bash +arm-none-eabi-nm --print-size --size-sort --radix=d examples/raspberry_pi/pico2/build/executorch_pico.elf | tail -20 +``` + +### Model Memory Footprint + +- **Model data specifically** + +```bash +arm-none-eabi-nm --print-size --size-sort --radix=d examples/raspberry_pi/pico2/build/executorch_pico.elf | grep -i model +``` + +- **Check what's in .bss (uninitialized data)** + +```bash +arm-none-eabi-objdump -t examples/raspberry_pi/pico2/build/executorch_pico.elf | grep ".bss" | head -10 +``` + +- **Memory map overview** + +```bash +arm-none-eabi-readelf -l examples/raspberry_pi/pico2/build/executorch_pico.elf +``` + +## Next Steps + +### Scale up your deployment + +- Use real production trained model +- Optimize further → INT8 quantization, pruning + +### Happy Inference! 
+ +**Result:** PyTorch model → Pico2 deployment in 4 simple steps 🚀 +Total tutorial time: ~15 minutes + +**Conclusion:** Real-time inference on memory constrained, low power microcontrollers, a complete PyTorch → ExecuTorch → Pico2 demo MNIST deployment diff --git a/examples/raspberry_pi/pico2/README.md b/examples/raspberry_pi/pico2/README.md index 976754d6c5e..e9da5a7fd1d 100644 --- a/examples/raspberry_pi/pico2/README.md +++ b/examples/raspberry_pi/pico2/README.md @@ -4,44 +4,48 @@ This document outlines the steps required to run a simple MNIST digit recognitio ## Demo Model: Hand-crafted MNIST Classifier -The included `export_mlp_mnist.py` creates a demonstration model with hand-crafted weights (not production-trained). This tiny MLP recognizes digits 0, 1, 4, and 7 using manually designed feature detectors. +The included `export_mlp_mnist.py` (in examples/raspberry_pi/pico2) creates a demonstration model with hand-crafted weights (not production-trained). This tiny MLP recognizes digits 0, 1, 4, and 7 using manually designed feature detectors. Note: This is a proof-of-concept. For production use, train your model on real MNIST data. -## Bring Your Own Model +## Bring Your Own Model and Deploy This demo demonstrates ExecuTorch's ability to bring your own PyTorch model and deploy it to Pico2 with one simple script. The complete pipeline works from any PyTorch model to a runnable binary: -### Train your PyTorch model +- Use existing demo model (examples/raspberry_pi/pico2/export_mlp_mnist.py) or bring your own model +- Build firmware with one command and pass the model file (.pte) as an argument +- Deploy directly to Pico2 -Export using `torch.export()` and `to_edge()` -Build firmware with one command -Deploy directly to Pico2 +### Important Caveats -#### Important Caveats: - -- Memory constraints - Models must fit in 520KB SRAM +- Memory constraints - Models must fit in 520KB SRAM (Pico2) - Missing operators - Some ops may not be supported -- Selective builds - Include only operators your model uses +- Selective builds - Include only operators your model uses if you want to reduce binary size ## Memory Constraints & Optimization -- Critical: Pico2 has limited memory: -- 520KB SRAM (on-chip static RAM) -- 4MB QSPI Flash (onboard storage) +- Critical: Pico2 has limited memory + - 520KB SRAM (on-chip static RAM) + - 4MB QSPI Flash (onboard storage) ### Always apply optimization techniques on large models that do not fit in Pico2 memory: Large models will not fit. Keep your `.pte` files small! + - Quantization (INT8, INT4) - Model pruning - Operator fusion - Selective builds (include only needed operators) -For more details , refer to the [ExecuTorch Quantization Optimization Guide](https://docs.pytorch.org/executorch/1.0/quantization-optimization.html), [Model Export & Lowering](https://docs.pytorch.org/executorch/1.0/using-executorch-export.html) and [Selective Build support](https://docs.pytorch.org/executorch/1.0/kernel-library-selective-build.html) + +For more details , refer to the following guides: + +- [ExecuTorch Quantization Optimization Guide](https://docs.pytorch.org/executorch/1.0/quantization-optimization.html) +- [Model Export & Lowering](https://docs.pytorch.org/executorch/1.0/using-executorch-export.html) and +- [Selective Build support](https://docs.pytorch.org/executorch/1.0/kernel-library-selective-build.html) ## (Prerequisites) Prepare the Environment for Arm Setup executorch development environment. Also see instructions for setting up the environment for Arm. 
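A typical sequence for a fresh checkout looks like the sketch below; script names and flags can vary between ExecuTorch releases, so treat it as a starting point and follow the linked guides if it differs from your tree.

```bash
# From the executorch repository root: install Python dependencies,
# then fetch the Arm GNU toolchain (accepting its EULA).
./install_executorch.sh
./examples/arm/setup.sh --i-agree-to-the-contained-eula
```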
-Make sure you have the toolchain configured correctly. Refer to this [setup](https://docs.pytorch.org/executorch/1.0/backends-arm-ethos-u.html#development-requirements) for more details. +Make sure you have the toolchain configured correctly. Refer to this [setup](https://docs.pytorch.org/executorch/main/backends-arm-ethos-u.html#development-requirements) for more details. ```bash which arm-none-eabi-gcc @@ -73,6 +77,7 @@ Hold the BOOTSEL button on Pico2 and connect to your computer. It mounts as `RPI ### Verify Execution The Pico2 LED blinks 10 times at 500ms intervals for successful execution. Via serial terminal, you'll see: + ```bash ... ... @@ -134,9 +139,11 @@ Running neural network inference... ### Debugging via Serial Terminal On macOS/Linux: + ```bash screen /dev/tty.usbmodem1101 115200 ``` + Replace `/dev/tty.usbmodem1101` with your device path. If LED blinks 10 times at 100ms intervals, check logs for errors, but if it blinks 10 times at 500ms intervals, it is successful! -Result: A complete PyTorch → ExecuTorch → Pico2 demo neural network deployment! 🚀 +Result: A complete PyTorch → ExecuTorch → Pico2 demo MNIST deployment! 🚀 From c00cd39f185de1931448eaca703e42c8571f4710 Mon Sep 17 00:00:00 2001 From: Scott Roy <161522778+metascroy@users.noreply.github.com> Date: Fri, 17 Oct 2025 15:04:51 -0700 Subject: [PATCH 08/26] Update mps docs and fix coreml/mps doc references (#15179) --- CONTRIBUTING.md | 4 +- README-wheel.md | 2 +- backends/apple/coreml/README.md | 2 +- docs/source/backends-overview.md | 30 ++++----- .../mps/mps-overview.md} | 63 +++++-------------- docs/source/ios-coreml.md | 2 +- docs/source/ios-mps.md | 2 +- docs/source/quantization-overview.md | 2 +- .../using-executorch-building-from-source.md | 2 +- docs/source/using-executorch-export.md | 4 +- docs/source/using-executorch-ios.md | 2 +- 11 files changed, 41 insertions(+), 74 deletions(-) rename docs/source/{backends-mps.md => backends/mps/mps-overview.md} (60%) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 71e097042d7..40d3a206f5b 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -24,8 +24,8 @@ For Apple, please refer to the [iOS documentation](docs/source/using-executorch- executorch ├── backends - Backend delegate implementations for various hardware targets. Each backend uses partitioner to split the graph into subgraphs that can be executed on specific hardware, quantizer to optimize model precision, and runtime components to execute the graph on target hardware. For details refer to the backend documentation and the Export and Lowering tutorial for more information. │ ├── apple - Apple-specific backends. -│ │ ├── coreml - CoreML backend for Apple devices. See doc. -│ │ └── mps - Metal Performance Shaders backend for Apple devices. See doc. +│ │ ├── coreml - CoreML backend for Apple devices. See doc. +│ │ └── mps - Metal Performance Shaders backend for Apple devices. See doc. │ ├── arm - ARM architecture backends. See doc. │ ├── cadence - Cadence-specific backends. See doc. │ ├── example - Example backend implementations. 
diff --git a/README-wheel.md b/README-wheel.md index 7ae9b0aa2e0..e20b447f96a 100644 --- a/README-wheel.md +++ b/README-wheel.md @@ -12,7 +12,7 @@ The prebuilt `executorch.runtime` module included in this package provides a way to run ExecuTorch `.pte` files, with some restrictions: * Only [core ATen operators](docs/source/ir-ops-set-definition.md) are linked into the prebuilt module * Only the [XNNPACK backend delegate](docs/source/backends-xnnpack.md) is linked into the prebuilt module. -* \[macOS only] [Core ML](docs/source/backends-coreml.md) and [MPS](docs/source/backends-mps.md) backend +* \[macOS only] [Core ML](docs/source/backends/coreml/coreml-overview.md) and [MPS](docs/source/backends/mps/mps-overview.md) backend are also linked into the prebuilt module. Please visit the [ExecuTorch website](https://pytorch.org/executorch) for diff --git a/backends/apple/coreml/README.md b/backends/apple/coreml/README.md index d063dfc8b71..d72f04da1a1 100644 --- a/backends/apple/coreml/README.md +++ b/backends/apple/coreml/README.md @@ -1,7 +1,7 @@ # ExecuTorch Core ML Delegate This subtree contains the Core ML Delegate implementation for ExecuTorch. -Core ML is an optimized framework for running machine learning models on Apple devices. The delegate is the mechanism for leveraging the Core ML framework to accelerate operators when running on Apple devices. To learn how to use the CoreML delegate, see the [documentation](https://github.com/pytorch/executorch/blob/main/docs/source/backends-coreml.md). +Core ML is an optimized framework for running machine learning models on Apple devices. The delegate is the mechanism for leveraging the Core ML framework to accelerate operators when running on Apple devices. To learn how to use the CoreML delegate, see the [documentation](https://github.com/pytorch/executorch/blob/main/docs/source/backends/coreml/coreml-overview.md). ## Layout - `compiler/` : Lowers a module to Core ML backend. diff --git a/docs/source/backends-overview.md b/docs/source/backends-overview.md index bfa17bc9a9c..dfeb6243d37 100644 --- a/docs/source/backends-overview.md +++ b/docs/source/backends-overview.md @@ -18,20 +18,20 @@ Backends are the bridge between your exported model and the hardware it runs on. 
## Choosing a Backend -| Backend | Platform(s) | Hardware Type | Typical Use Case | -|------------------------------------------------|---------------------|---------------|---------------------------------| -| [XNNPACK](backends-xnnpack) | All | CPU | General-purpose, fallback | -| [Core ML](/backends/coreml/coreml-overview.md) | iOS, macOS | NPU/GPU/CPU | Apple devices, high performance | -| [Metal Performance Shaders](backends-mps) | iOS, macOS | GPU | Apple GPU acceleration | -| [Vulkan ](backends-vulkan) | Android | GPU | Android GPU acceleration | -| [Qualcomm](backends-qualcomm) | Android | NPU | Qualcomm SoCs | -| [MediaTek](backends-mediatek) | Android | NPU | MediaTek SoCs | -| [ARM EthosU](backends-arm-ethos-u) | Embedded | NPU | ARM MCUs | -| [ARM VGF](backends-arm-vgf) | Android | NPU | ARM platforms | -| [OpenVINO](build-run-openvino) | Embedded | CPU/GPU/NPU | Intel SoCs | -| [NXP](backends-nxp) | Embedded | NPU | NXP SoCs | -| [Cadence](backends-cadence) | Embedded | DSP | DSP-optimized workloads | -| [Samsung Exynos](backends-samsung-exynos) | Android | NPU | Samsung SoCs | +| Backend | Platform(s) | Hardware Type | Typical Use Case | +|-----------------------------------------------------------------|---------------------|---------------|---------------------------------| +| [XNNPACK](backends-xnnpack) | All | CPU | General-purpose, fallback | +| [Core ML](/backends/coreml/coreml-overview.md) | iOS, macOS | NPU/GPU/CPU | Apple devices, high performance | +| [Metal Performance Shaders](/backends/mps/mps-overview.md) | iOS, macOS | GPU | Apple GPU acceleration | +| [Vulkan ](backends-vulkan) | Android | GPU | Android GPU acceleration | +| [Qualcomm](backends-qualcomm) | Android | NPU | Qualcomm SoCs | +| [MediaTek](backends-mediatek) | Android | NPU | MediaTek SoCs | +| [ARM EthosU](backends-arm-ethos-u) | Embedded | NPU | ARM MCUs | +| [ARM VGF](backends-arm-vgf) | Android | NPU | ARM platforms | +| [OpenVINO](build-run-openvino) | Embedded | CPU/GPU/NPU | Intel SoCs | +| [NXP](backends-nxp) | Embedded | NPU | NXP SoCs | +| [Cadence](backends-cadence) | Embedded | DSP | DSP-optimized workloads | +| [Samsung Exynos](backends-samsung-exynos) | Android | NPU | Samsung SoCs | **Tip:** For best performance, export a `.pte` file for each backend you plan to support. @@ -52,7 +52,7 @@ Backends are the bridge between your exported model and the hardware it runs on. backends-xnnpack backends/coreml/coreml-overview -backends-mps +backends/mps/mps-overview backends-vulkan backends-qualcomm backends-mediatek diff --git a/docs/source/backends-mps.md b/docs/source/backends/mps/mps-overview.md similarity index 60% rename from docs/source/backends-mps.md rename to docs/source/backends/mps/mps-overview.md index 184bd88e3a7..a2280defad5 100644 --- a/docs/source/backends-mps.md +++ b/docs/source/backends/mps/mps-overview.md @@ -1,55 +1,27 @@ # MPS Backend -In this tutorial we will walk you through the process of getting setup to build the MPS backend for ExecuTorch and running a simple model on it. +MPS delegate is the ExecuTorch solution to take advantage of Apple's GPU for on-device ML using the [MPS Graph](https://developer.apple.com/documentation/metalperformanceshadersgraph/mpsgraph?language=objc) framework and tuned kernels provided by [MPS](https://developer.apple.com/documentation/metalperformanceshaders?language=objc). 
-The MPS backend device maps machine learning computational graphs and primitives on the [MPS Graph](https://developer.apple.com/documentation/metalperformanceshadersgraph/mpsgraph?language=objc) framework and tuned kernels provided by [MPS](https://developer.apple.com/documentation/metalperformanceshaders?language=objc). +## Target Requirements -::::{grid} 2 -:::{grid-item-card} What you will learn in this tutorial: -:class-card: card-prerequisites -* In this tutorial you will learn how to export [MobileNet V3](https://pytorch.org/vision/main/models/mobilenetv3.html) model to the MPS delegate. -* You will also learn how to compile and deploy the ExecuTorch runtime with the MPS delegate on macOS and iOS. -::: -:::{grid-item-card} Tutorials we recommend you complete before this: -:class-card: card-prerequisites -* [Introduction to ExecuTorch](intro-how-it-works.md) -* [Getting Started](getting-started.md) -* [Building ExecuTorch with CMake](using-executorch-building-from-source.md) -* [ExecuTorch iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo) -* [ExecuTorch LLM iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple) -::: -:::: +Below are the minimum OS requirements on various hardware for running a MPS-delegated ExecuTorch model: +- [macOS](https://developer.apple.com/macos) >= 12.4 +- [iOS](https://www.apple.com/ios) >= 15.4 +## Development Requirements +To develop you need: -## Prerequisites (Hardware and Software) +- [Xcode](https://developer.apple.com/xcode/) >= 14.1 -In order to be able to successfully build and run a model using the MPS backend for ExecuTorch, you'll need the following hardware and software components: +Before starting, make sure you install the Xcode Command Line Tools: -### Hardware: - - A [mac](https://www.apple.com/mac/) for tracing the model - -### Software: - - - **Ahead of time** tracing: - - [macOS](https://www.apple.com/macos/) 12 - - - **Runtime**: - - [macOS](https://www.apple.com/macos/) >= 12.4 - - [iOS](https://www.apple.com/ios) >= 15.4 - - [Xcode](https://developer.apple.com/xcode/) >= 14.1 - -## Setting up Developer Environment - -***Step 1.*** Complete the steps in [Getting Started](getting-started.md) to set up the ExecuTorch development environment. - -You will also need a local clone of the ExecuTorch repository. See [Building ExecuTorch from Source](using-executorch-building-from-source.html) for instructions. All commands in this document should be run from the executorch repository. - -## Build +```bash +xcode-select --install +``` -### AOT (Ahead-of-time) Components +## Using the MPS Backend -**Compiling model for MPS delegate**: -- In this step, you will generate a simple ExecuTorch program that lowers MobileNetV3 model to the MPS delegate. You'll then pass this Program (the `.pte` file) during the runtime to run it using the MPS backend. +In this step, you will generate a simple ExecuTorch program that lowers MobileNetV3 model to the MPS delegate. You'll then pass this Program (the `.pte` file) during the runtime to run it using the MPS backend. ```bash cd executorch @@ -121,7 +93,7 @@ python3 -m examples.apple.mps.scripts.mps_example --model_name="mv3" --generate_ python3 -m devtools.inspector.inspector_cli --etdump_path etdump.etdp --etrecord_path etrecord.bin ``` -## Deploying and Running on Device +## Runtime integration ***Step 1***. 
Create the ExecuTorch core and MPS delegate frameworks to link on iOS ```bash @@ -146,8 +118,3 @@ From the same page, include the needed libraries for the MPS delegate: - `Metal.framework` In this tutorial, you have learned how to lower a model to the MPS delegate, build the mps_executor_runner and run a lowered model through the MPS delegate, or directly on device using the MPS delegate static library. - - -## Frequently encountered errors and resolution. - -If you encountered any bugs or issues following this tutorial please file a bug/issue on the [ExecuTorch repository](https://github.com/pytorch/executorch/issues), with hashtag **#mps**. diff --git a/docs/source/ios-coreml.md b/docs/source/ios-coreml.md index 48271326d87..ff6551aa0c2 100644 --- a/docs/source/ios-coreml.md +++ b/docs/source/ios-coreml.md @@ -1 +1 @@ -```{include} backends-coreml.md +```{include} backends/coreml/coreml-overview.md diff --git a/docs/source/ios-mps.md b/docs/source/ios-mps.md index d6f305d33aa..13717675ba5 100644 --- a/docs/source/ios-mps.md +++ b/docs/source/ios-mps.md @@ -1 +1 @@ -```{include} backends-mps.md +```{include} backends/mps/mps-overview.md diff --git a/docs/source/quantization-overview.md b/docs/source/quantization-overview.md index 4ff8d34a4a8..4ac886b9ed2 100644 --- a/docs/source/quantization-overview.md +++ b/docs/source/quantization-overview.md @@ -29,7 +29,7 @@ These quantizers usually support configs that allow users to specify quantizatio Not all quantization options are supported by all backends. Consult backend-specific guides for supported quantization modes and configuration, and how to initialize the backend-specific PT2E quantizer: * [XNNPACK quantization](backends-xnnpack.md#quantization) -* [CoreML quantization](backends-coreml.md#quantization) +* [CoreML quantization](backends/coreml/coreml-quantization.md) * [QNN quantization](backends-qualcomm.md#step-2-optional-quantize-your-model) diff --git a/docs/source/using-executorch-building-from-source.md b/docs/source/using-executorch-building-from-source.md index 48901f62a76..36f8f5fefac 100644 --- a/docs/source/using-executorch-building-from-source.md +++ b/docs/source/using-executorch-building-from-source.md @@ -385,7 +385,7 @@ xcode-select --install ``` Run the above command with `--help` flag to learn more on how to build additional backends -(like [Core ML](backends-coreml.md), [MPS](backends-mps.md) or XNNPACK), etc. +(like [Core ML](backends/coreml/coreml-overview.md), [MPS](backends/mps/mps-overview.md) or XNNPACK), etc. Note that some backends may require additional dependencies and certain versions of Xcode and iOS. See backend-specific documentation for more details. diff --git a/docs/source/using-executorch-export.md b/docs/source/using-executorch-export.md index 7abf5cbd30a..f0ad7c18467 100644 --- a/docs/source/using-executorch-export.md +++ b/docs/source/using-executorch-export.md @@ -33,8 +33,8 @@ As part of the .pte file creation process, ExecuTorch identifies portions of the Commonly used hardware backends are listed below. For mobile, consider using XNNPACK for Android and XNNPACK or Core ML for iOS. To create a .pte file for a specific backend, pass the appropriate partitioner class to `to_edge_transform_and_lower`. See the appropriate backend documentation and the [Export and Lowering](#export-and-lowering) section below for more information. 
- [XNNPACK (CPU)](backends-xnnpack.md) -- [Core ML (iOS)](backends-coreml.md) -- [Metal Performance Shaders (iOS GPU)](backends-mps.md) +- [Core ML (iOS)](backends/coreml/coreml-overview.md) +- [Metal Performance Shaders (iOS GPU)](backends/mps/mps-overview.md) - [Vulkan (Android GPU)](backends-vulkan.md) - [Qualcomm NPU](backends-qualcomm.md) - [MediaTek NPU](backends-mediatek.md) diff --git a/docs/source/using-executorch-ios.md b/docs/source/using-executorch-ios.md index 15ccef8d8a1..8e075853161 100644 --- a/docs/source/using-executorch-ios.md +++ b/docs/source/using-executorch-ios.md @@ -107,7 +107,7 @@ git clone -b release/1.0 https://github.com/pytorch/executorch.git --depth 1 --r python3 -m venv .venv && source .venv/bin/activate && pip install --upgrade pip ``` -4. Install the required dependencies, including those needed for the backends like [Core ML](backends-coreml.md) or [MPS](backends-mps.md), if you plan to build them later: +4. Install the required dependencies, including those needed for the backends like [Core ML](backends/coreml/coreml-overview.md) or [MPS](backends/mps/mps-overview.md), if you plan to build them later: ```bash ./install_requirements.sh From a566c094c22241a0c3095f65f951c8e55fd3f030 Mon Sep 17 00:00:00 2001 From: Sicheng Stephen Jia Date: Fri, 17 Oct 2025 20:10:31 -0400 Subject: [PATCH 09/26] [ET-VK][docs] Update to the new template (#14996) --- backends/vulkan/README.md | 207 +----------------- backends/vulkan/docs/android_demo.md | 128 ----------- docs/source/backends-overview.md | 8 +- docs/source/backends-vulkan.md | 205 ----------------- .../backends/samsung/samsung-overview.md | 12 +- .../vulkan/tutorials/etvk-llama-tutorial.md | 159 ++++++++++++++ .../tutorials/etvk-profiling-tutorial.md | 144 ++++++++++++ .../vulkan/tutorials/vulkan-tutorials.md | 13 ++ .../backends/vulkan/vulkan-op-support.rst | 46 ++++ .../source/backends/vulkan/vulkan-overview.md | 163 ++++++++++++++ .../backends/vulkan/vulkan-partitioner.md | 55 +++++ .../backends/vulkan/vulkan-quantization.md | 163 ++++++++++++++ .../backends/vulkan/vulkan-troubleshooting.md | 57 +++++ examples/vulkan/README.md | 106 ++++----- 14 files changed, 868 insertions(+), 598 deletions(-) delete mode 100644 backends/vulkan/docs/android_demo.md delete mode 100644 docs/source/backends-vulkan.md create mode 100644 docs/source/backends/vulkan/tutorials/etvk-llama-tutorial.md create mode 100644 docs/source/backends/vulkan/tutorials/etvk-profiling-tutorial.md create mode 100644 docs/source/backends/vulkan/tutorials/vulkan-tutorials.md create mode 100644 docs/source/backends/vulkan/vulkan-op-support.rst create mode 100644 docs/source/backends/vulkan/vulkan-overview.md create mode 100644 docs/source/backends/vulkan/vulkan-partitioner.md create mode 100644 docs/source/backends/vulkan/vulkan-quantization.md create mode 100644 docs/source/backends/vulkan/vulkan-troubleshooting.md diff --git a/backends/vulkan/README.md b/backends/vulkan/README.md index e0a953d05fe..b51a736c7df 100644 --- a/backends/vulkan/README.md +++ b/backends/vulkan/README.md @@ -1,205 +1,4 @@ -# Vulkan Backend +# The ExecuTorch Vulkan Backend -The ExecuTorch Vulkan delegate is a native GPU delegate for ExecuTorch that is -built on top of the cross-platform Vulkan GPU API standard. It is primarily -designed to leverage the GPU to accelerate model inference on Android devices, -but can be used on any platform that supports an implementation of Vulkan: -laptops, servers, and edge devices. 
- -::::{note} -The Vulkan delegate is currently under active development, and its components -are subject to change. -:::: - -## What is Vulkan? - -Vulkan is a low-level GPU API specification developed as a successor to OpenGL. -It is designed to offer developers more explicit control over GPUs compared to -previous specifications in order to reduce overhead and maximize the -capabilities of the modern graphics hardware. - -Vulkan has been widely adopted among GPU vendors, and most modern GPUs (both -desktop and mobile) in the market support Vulkan. Vulkan is also included in -Android from Android 7.0 onwards. - -**Note that Vulkan is a GPU API, not a GPU Math Library**. That is to say it -provides a way to execute compute and graphics operations on a GPU, but does not -come with a built-in library of performant compute kernels. - -## The Vulkan Compute Library - -The ExecuTorch Vulkan Delegate is a wrapper around a standalone runtime known as -the **Vulkan Compute Library**. The aim of the Vulkan Compute Library is to -provide GPU implementations for PyTorch operators via GLSL compute shaders. - -The Vulkan Compute Library is a fork/iteration of the [PyTorch Vulkan Backend](https://pytorch.org/tutorials/prototype/vulkan_workflow.html). -The core components of the PyTorch Vulkan backend were forked into ExecuTorch -and adapted for an AOT graph-mode style of model inference (as opposed to -PyTorch which adopted an eager execution style of model inference). - -The components of the Vulkan Compute Library are contained in the -`executorch/backends/vulkan/runtime/` directory. The core components are listed -and described below: - -``` -runtime/ -├── api/ .................... Wrapper API around Vulkan to manage Vulkan objects -└── graph/ .................. ComputeGraph class which implements graph mode inference - └── ops/ ................ Base directory for operator implementations - ├── glsl/ ........... GLSL compute shaders - │ ├── *.glsl - │ └── conv2d.glsl - └── impl/ ........... C++ code to dispatch GPU compute shaders - ├── *.cpp - └── Conv2d.cpp -``` - -## Features - -The Vulkan delegate currently supports the following features: - -* **Memory Planning** - * Intermediate tensors whose lifetimes do not overlap will share memory allocations. This reduces the peak memory usage of model inference. -* **Capability Based Partitioning**: - * A graph can be partially lowered to the Vulkan delegate via a partitioner, which will identify nodes (i.e. operators) that are supported by the Vulkan delegate and lower only supported subgraphs -* **Support for upper-bound dynamic shapes**: - * Tensors can change shape between inferences as long as its current shape is smaller than the bounds specified during lowering - -In addition to increasing operator coverage, the following features are -currently in development: - -* **Quantization Support** - * We are currently working on support for 8-bit dynamic quantization, with plans to extend to other quantization schemes in the future. -* **Memory Layout Management** - * Memory layout is an important factor to optimizing performance. We plan to introduce graph passes to introduce memory layout transitions throughout a graph to optimize memory-layout sensitive operators such as Convolution and Matrix Multiplication. 
-* **Selective Build** - * We plan to make it possible to control build size by selecting which operators/shaders you want to build with - -## End to End Example - -To further understand the features of the Vulkan Delegate and how to use it, -consider the following end to end example with a simple single operator model. - -### Compile and lower a model to the Vulkan Delegate - -Assuming ExecuTorch has been set up and installed, the following script can be -used to produce a lowered MobileNet V2 model as `vulkan_mobilenetv2.pte`. - -Once ExecuTorch has been set up and installed, the following script can be used -to generate a simple model and lower it to the Vulkan delegate. - -``` -# Note: this script is the same as the script from the "Setting up ExecuTorch" -# page, with one minor addition to lower to the Vulkan backend. -import torch -from torch.export import export -from executorch.exir import to_edge - -from executorch.backends.vulkan.partitioner.vulkan_partitioner import VulkanPartitioner - -# Start with a PyTorch model that adds two input tensors (matrices) -class Add(torch.nn.Module): - def __init__(self): - super(Add, self).__init__() - - def forward(self, x: torch.Tensor, y: torch.Tensor): - return x + y - -# 1. torch.export: Defines the program with the ATen operator set. -aten_dialect = export(Add(), (torch.ones(1), torch.ones(1))) - -# 2. to_edge: Make optimizations for Edge devices -edge_program = to_edge(aten_dialect) -# 2.1 Lower to the Vulkan backend -edge_program = edge_program.to_backend(VulkanPartitioner()) - -# 3. to_executorch: Convert the graph to an ExecuTorch program -executorch_program = edge_program.to_executorch() - -# 4. Save the compiled .pte program -with open("vk_add.pte", "wb") as file: - file.write(executorch_program.buffer) -``` - -Like other ExecuTorch delegates, a model can be lowered to the Vulkan Delegate -using the `to_backend()` API. The Vulkan Delegate implements the -`VulkanPartitioner` class which identifies nodes (i.e. operators) in the graph -that are supported by the Vulkan delegate, and separates compatible sections of -the model to be executed on the GPU. - -This means the a model can be lowered to the Vulkan delegate even if it contains -some unsupported operators. This will just mean that only parts of the graph -will be executed on the GPU. - - -::::{note} -The [supported ops list](https://github.com/pytorch/executorch/blob/main/backends/vulkan/op_registry.py#L194) -Vulkan partitioner code can be inspected to examine which ops are currently -implemented in the Vulkan delegate. -:::: - -### Build Vulkan Delegate libraries - -The easiest way to build and test the Vulkan Delegate is to build for Android -and test on a local Android device. Android devices have built in support for -Vulkan, and the Android NDK ships with a GLSL compiler which is needed to -compile the Vulkan Compute Library's GLSL compute shaders. - -The Vulkan Delegate libraries can be built by setting `-DEXECUTORCH_BUILD_VULKAN=ON` -when building with CMake. - -First, make sure that you have the Android NDK installed; any NDK version past -NDK r19c should work. Note that the examples in this doc have been validated with -NDK r27b. The Android SDK should also be installed so that you have access to `adb`. - -The instructions in this page assumes that the following environment variables -are set. 
- -```shell -export ANDROID_NDK= -# Select the appropriate Android ABI for your device -export ANDROID_ABI=arm64-v8a -# All subsequent commands should be performed from ExecuTorch repo root -cd -# Make sure adb works -adb --version -``` - -To build and install ExecuTorch libraries (for Android) with the Vulkan -Delegate: - -```shell -# From executorch root directory -(rm -rf cmake-android-out && \ - pp cmake . -DCMAKE_INSTALL_PREFIX=cmake-android-out \ - -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \ - -DANDROID_ABI=$ANDROID_ABI \ - -DEXECUTORCH_BUILD_VULKAN=ON \ - -DPYTHON_EXECUTABLE=python \ - -Bcmake-android-out && \ - cmake --build cmake-android-out -j16 --target install) -``` - -### Run the Vulkan model on device - -::::{note} -Since operator support is currently limited, only binary arithmetic operators -will run on the GPU. Expect inference to be slow as the majority of operators -are being executed via Portable operators. -:::: - -Now, the partially delegated model can be executed (partially) on your device's -GPU! - -```shell -# Build a model runner binary linked with the Vulkan delegate libs -cmake --build cmake-android-out --target executor_runner -j32 - -# Push model to device -adb push vk_add.pte /data/local/tmp/vk_add.pte -# Push binary to device -adb push cmake-android-out/executor_runner /data/local/tmp/runner_bin - -# Run the model -adb shell /data/local/tmp/runner_bin --model_path /data/local/tmp/vk_add.pte -``` +Please see the [Vulkan Backend Overview](../../docs/source/backends/vulkan/vulkan-overview.md) +to learn more about the ExecuTorch Vulkan Backend. diff --git a/backends/vulkan/docs/android_demo.md b/backends/vulkan/docs/android_demo.md deleted file mode 100644 index ff84938b06f..00000000000 --- a/backends/vulkan/docs/android_demo.md +++ /dev/null @@ -1,128 +0,0 @@ -# Building and Running ExecuTorch with the Vulkan Backend - -The [ExecuTorch Vulkan Delegate](../../../docs/source/native-delegates-executorch-vulkan-delegate.md) -is a native GPU delegate for ExecuTorch. - - -::::{grid} 2 -:::{grid-item-card} What you will learn in this tutorial: -:class-card: card-content -* How to export the Llama3.2-1B parameter model with partial GPU delegation -* How to execute the partially delegated model on Android -::: -:::{grid-item-card} Prerequisites: -:class-card: card-prerequisites -* Follow [**Setting up ExecuTorch**](../../../docs/source/getting-started-setup.rst) -* It is also recommended that you read through [**ExecuTorch Vulkan Delegate**](../../../docs/source/native-delegates-executorch-vulkan-delegate.md) and follow the example in that page -::: -:::: - -## Prerequisites - -Note that all the steps below should be performed from the ExecuTorch repository -root directory, and assumes that you have gone through the steps of setting up -ExecuTorch. - -It is also assumed that the Android NDK and Android SDK is installed, and the -following environment examples are set. - -```shell -export ANDROID_NDK= -# Select an appropriate Android ABI for your device -export ANDROID_ABI=arm64-v8a -# All subsequent commands should be performed from ExecuTorch repo root -cd -# Make sure adb works -adb --version -``` - -## Lowering the Llama3.2-1B model to Vulkan - -::::{note} -The resultant model will only be partially delegated to the Vulkan backend. 
In -particular, only binary arithmetic operators (`aten.add`, `aten.sub`, -`aten.mul`, `aten.div`), matrix multiplication operators (`aten.mm`, `aten.bmm`), -and linear layers (`aten.linear`) will be executed on the GPU via the Vulkan -delegate. The rest of the model will be executed using Portable operators. - -Operator support for LLaMA models is currently in active development; please -check out the `main` branch of the ExecuTorch repo for the latest capabilities. -:::: - -First, obtain the `consolidated.00.pth`, `params.json` and `tokenizer.model` -files for the `Llama3.2-1B` model from the [Llama website](https://www.llama.com/llama-downloads/). - -Once the files have been downloaded, the `export_llama` script can be used to -partially lower the Llama model to Vulkan. - -```shell -# The files will usually be downloaded to ~/.llama -python -m examples.models.llama.export_llama \ - --disable_dynamic_shape --vulkan -kv --use_sdpa_with_kv_cache -d fp32 \ - --model "llama3_2" \ - -c ~/.llama/checkpoints/Llama3.2-1B/consolidated.00.pth \ - -p ~/.llama/checkpoints/Llama3.2-1B/params.json \ - --metadata '{"get_bos_id":128000, "get_eos_ids":[128009, 128001]}' -``` - -A `vulkan_llama2.pte` file should have been created as a result of running the -script. - -Push the tokenizer binary and `vulkan_llama2.pte` onto your Android device: - -```shell -adb push ~/.llama/tokenizer.model /data/local/tmp/ -adb push vulkan_llama2.pte /data/local/tmp/ -``` - -## Build and Run the LLaMA runner binary on Android - -First, build and install ExecuTorch libraries, then build the LLaMA runner -binary using the Android NDK toolchain. - -```shell -./install_executorch.sh --clean -(mkdir cmake-android-out && \ - cmake . -DCMAKE_INSTALL_PREFIX=cmake-android-out \ - -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \ - -DANDROID_ABI=$ANDROID_ABI \ - -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \ - -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \ - -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \ - -DEXECUTORCH_BUILD_VULKAN=ON \ - -DEXECUTORCH_BUILD_KERNELS_QUANTIZED=ON \ - -DEXECUTORCH_BUILD_KERNELS_LLM=ON \ - -DPYTHON_EXECUTABLE=python \ - -Bcmake-android-out && \ - cmake --build cmake-android-out -j16 --target install) - -# Build LLaMA Runner library -(rm -rf cmake-android-out/examples/models/llama && \ - cmake examples/models/llama \ - -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \ - -DANDROID_ABI=$ANDROID_ABI \ - -DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON \ - -DEXECUTORCH_BUILD_KERNELS_LLM=ON \ - -DCMAKE_INSTALL_PREFIX=cmake-android-out \ - -DPYTHON_EXECUTABLE=python \ - -Bcmake-android-out/examples/models/llama && \ - cmake --build cmake-android-out/examples/models/llama -j16) -``` - -Finally, push and run the llama runner binary on your Android device. Note that -your device must have sufficient GPU memory to execute the model. - -```shell -adb push cmake-android-out/examples/models/llama/llama_main /data/local/tmp/llama_main - -adb shell /data/local/tmp/llama_main \ - --model_path=/data/local/tmp/vulkan_llama2.pte \ - --tokenizer_path=/data/local/tmp/tokenizer.model \ - --prompt "Hello" -``` - -Note that currently model inference will be very slow due to the high amount of -delegate blobs in the lowered graph, which requires a transfer to and from the -GPU for each sub graph. Performance is expected to improve drastically as more -of the model can be lowered to the Vulkan delegate, and techniques such as -quantization are supported. 
diff --git a/docs/source/backends-overview.md b/docs/source/backends-overview.md index dfeb6243d37..da2febced3a 100644 --- a/docs/source/backends-overview.md +++ b/docs/source/backends-overview.md @@ -23,7 +23,7 @@ Backends are the bridge between your exported model and the hardware it runs on. | [XNNPACK](backends-xnnpack) | All | CPU | General-purpose, fallback | | [Core ML](/backends/coreml/coreml-overview.md) | iOS, macOS | NPU/GPU/CPU | Apple devices, high performance | | [Metal Performance Shaders](/backends/mps/mps-overview.md) | iOS, macOS | GPU | Apple GPU acceleration | -| [Vulkan ](backends-vulkan) | Android | GPU | Android GPU acceleration | +| [Vulkan ](/backends/vulkan/vulkan-overview.md) | Android | GPU | Android GPU acceleration | | [Qualcomm](backends-qualcomm) | Android | NPU | Qualcomm SoCs | | [MediaTek](backends-mediatek) | Android | NPU | MediaTek SoCs | | [ARM EthosU](backends-arm-ethos-u) | Embedded | NPU | ARM MCUs | @@ -31,7 +31,7 @@ Backends are the bridge between your exported model and the hardware it runs on. | [OpenVINO](build-run-openvino) | Embedded | CPU/GPU/NPU | Intel SoCs | | [NXP](backends-nxp) | Embedded | NPU | NXP SoCs | | [Cadence](backends-cadence) | Embedded | DSP | DSP-optimized workloads | -| [Samsung Exynos](backends-samsung-exynos) | Android | NPU | Samsung SoCs | +| [Samsung Exynos](/backends/samsung/samsung-overview.md) | Android | NPU | Samsung SoCs | **Tip:** For best performance, export a `.pte` file for each backend you plan to support. @@ -53,7 +53,7 @@ Backends are the bridge between your exported model and the hardware it runs on. backends-xnnpack backends/coreml/coreml-overview backends/mps/mps-overview -backends-vulkan +backends/vulkan/vulkan-overview backends-qualcomm backends-mediatek backends-arm-ethos-u @@ -61,4 +61,4 @@ backends-arm-vgf build-run-openvino backends-nxp backends-cadence -backends-samsung-exynos +backends/samsung/samsung-overview diff --git a/docs/source/backends-vulkan.md b/docs/source/backends-vulkan.md deleted file mode 100644 index 3ae80950645..00000000000 --- a/docs/source/backends-vulkan.md +++ /dev/null @@ -1,205 +0,0 @@ -# Vulkan Backend - -The ExecuTorch Vulkan delegate is a native GPU delegate for ExecuTorch that is -built on top of the cross-platform Vulkan GPU API standard. It is primarily -designed to leverage the GPU to accelerate model inference on Android devices, -but can be used on any platform that supports an implementation of Vulkan: -laptops, servers, and edge devices. - -::::{note} -The Vulkan delegate is currently under active development, and its components -are subject to change. -:::: - -## What is Vulkan? - -Vulkan is a low-level GPU API specification developed as a successor to OpenGL. -It is designed to offer developers more explicit control over GPUs compared to -previous specifications in order to reduce overhead and maximize the -capabilities of the modern graphics hardware. - -Vulkan has been widely adopted among GPU vendors, and most modern GPUs (both -desktop and mobile) in the market support Vulkan. Vulkan is also included in -Android from Android 7.0 onwards. - -**Note that Vulkan is a GPU API, not a GPU Math Library**. That is to say it -provides a way to execute compute and graphics operations on a GPU, but does not -come with a built-in library of performant compute kernels. - -## The Vulkan Compute Library - -The ExecuTorch Vulkan Delegate is a wrapper around a standalone runtime known as -the **Vulkan Compute Library**. 
The aim of the Vulkan Compute Library is to -provide GPU implementations for PyTorch operators via GLSL compute shaders. - -The Vulkan Compute Library is a fork/iteration of the [PyTorch Vulkan Backend](https://pytorch.org/tutorials/prototype/vulkan_workflow.html). -The core components of the PyTorch Vulkan backend were forked into ExecuTorch -and adapted for an AOT graph-mode style of model inference (as opposed to -PyTorch which adopted an eager execution style of model inference). - -The components of the Vulkan Compute Library are contained in the -`executorch/backends/vulkan/runtime/` directory. The core components are listed -and described below: - -``` -runtime/ -├── api/ .................... Wrapper API around Vulkan to manage Vulkan objects -└── graph/ .................. ComputeGraph class which implements graph mode inference - └── ops/ ................ Base directory for operator implementations - ├── glsl/ ........... GLSL compute shaders - │ ├── *.glsl - │ └── conv2d.glsl - └── impl/ ........... C++ code to dispatch GPU compute shaders - ├── *.cpp - └── Conv2d.cpp -``` - -## Features - -The Vulkan delegate currently supports the following features: - -* **Memory Planning** - * Intermediate tensors whose lifetimes do not overlap will share memory allocations. This reduces the peak memory usage of model inference. -* **Capability Based Partitioning**: - * A graph can be partially lowered to the Vulkan delegate via a partitioner, which will identify nodes (i.e. operators) that are supported by the Vulkan delegate and lower only supported subgraphs -* **Support for upper-bound dynamic shapes**: - * Tensors can change shape between inferences as long as its current shape is smaller than the bounds specified during lowering - -In addition to increasing operator coverage, the following features are -currently in development: - -* **Quantization Support** - * We are currently working on support for 8-bit dynamic quantization, with plans to extend to other quantization schemes in the future. -* **Memory Layout Management** - * Memory layout is an important factor to optimizing performance. We plan to introduce graph passes to introduce memory layout transitions throughout a graph to optimize memory-layout sensitive operators such as Convolution and Matrix Multiplication. -* **Selective Build** - * We plan to make it possible to control build size by selecting which operators/shaders you want to build with - -## End to End Example - -To further understand the features of the Vulkan Delegate and how to use it, -consider the following end to end example with a simple single operator model. - -### Compile and lower a model to the Vulkan Delegate - -Assuming ExecuTorch has been set up and installed, the following script can be -used to produce a lowered MobileNet V2 model as `vulkan_mobilenetv2.pte`. - -Once ExecuTorch has been set up and installed, the following script can be used -to generate a simple model and lower it to the Vulkan delegate. - -``` -# Note: this script is the same as the script from the "Setting up ExecuTorch" -# page, with one minor addition to lower to the Vulkan backend. 
-import torch -from torch.export import export -from executorch.exir import to_edge - -from executorch.backends.vulkan.partitioner.vulkan_partitioner import VulkanPartitioner - -# Start with a PyTorch model that adds two input tensors (matrices) -class Add(torch.nn.Module): - def __init__(self): - super(Add, self).__init__() - - def forward(self, x: torch.Tensor, y: torch.Tensor): - return x + y - -# 1. torch.export: Defines the program with the ATen operator set. -aten_dialect = export(Add(), (torch.ones(1), torch.ones(1))) - -# 2. to_edge: Make optimizations for Edge devices -edge_program = to_edge(aten_dialect) -# 2.1 Lower to the Vulkan backend -edge_program = edge_program.to_backend(VulkanPartitioner()) - -# 3. to_executorch: Convert the graph to an ExecuTorch program -executorch_program = edge_program.to_executorch() - -# 4. Save the compiled .pte program -with open("vk_add.pte", "wb") as file: - file.write(executorch_program.buffer) -``` - -Like other ExecuTorch delegates, a model can be lowered to the Vulkan Delegate -using the `to_backend()` API. The Vulkan Delegate implements the -`VulkanPartitioner` class which identifies nodes (i.e. operators) in the graph -that are supported by the Vulkan delegate, and separates compatible sections of -the model to be executed on the GPU. - -This means the a model can be lowered to the Vulkan delegate even if it contains -some unsupported operators. This will just mean that only parts of the graph -will be executed on the GPU. - - -::::{note} -The [supported ops list](https://github.com/pytorch/executorch/blob/main/backends/vulkan/op_registry.py#L194) -Vulkan partitioner code can be inspected to examine which ops are currently -implemented in the Vulkan delegate. -:::: - -### Build Vulkan Delegate libraries - -The easiest way to build and test the Vulkan Delegate is to build for Android -and test on a local Android device. Android devices have built in support for -Vulkan, and the Android NDK ships with a GLSL compiler which is needed to -compile the Vulkan Compute Library's GLSL compute shaders. - -The Vulkan Delegate libraries can be built by setting `-DEXECUTORCH_BUILD_VULKAN=ON` -when building with CMake. - -First, make sure that you have the Android NDK installed; any NDK version past -NDK r19c should work. Note that the examples in this doc have been validated with -NDK r27b. The Android SDK should also be installed so that you have access to `adb`. - -The instructions in this page assumes that the following environment variables -are set. - -```shell -export ANDROID_NDK= -# Select the appropriate Android ABI for your device -export ANDROID_ABI=arm64-v8a -# All subsequent commands should be performed from ExecuTorch repo root -cd -# Make sure adb works -adb --version -``` - -To build and install ExecuTorch libraries (for Android) with the Vulkan -Delegate: - -```shell -# From executorch root directory -(rm -rf cmake-android-out && \ - pp cmake . -DCMAKE_INSTALL_PREFIX=cmake-android-out \ - -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \ - -DANDROID_ABI=$ANDROID_ABI \ - -DEXECUTORCH_BUILD_VULKAN=ON \ - -DPYTHON_EXECUTABLE=python \ - -Bcmake-android-out && \ - cmake --build cmake-android-out -j16 --target install) -``` - -### Run the Vulkan model on device - -::::{note} -Since operator support is currently limited, only binary arithmetic operators -will run on the GPU. Expect inference to be slow as the majority of operators -are being executed via Portable operators. 
-:::: - -Now, the partially delegated model can be executed (partially) on your device's -GPU! - -```shell -# Build a model runner binary linked with the Vulkan delegate libs -cmake --build cmake-android-out --target vulkan_executor_runner -j32 - -# Push model to device -adb push vk_add.pte /data/local/tmp/vk_add.pte -# Push binary to device -adb push cmake-android-out/backends/vulkan/vulkan_executor_runner /data/local/tmp/runner_bin - -# Run the model -adb shell /data/local/tmp/runner_bin --model_path /data/local/tmp/vk_add.pte -``` diff --git a/docs/source/backends/samsung/samsung-overview.md b/docs/source/backends/samsung/samsung-overview.md index 464d4e322c7..8b0dea0c696 100644 --- a/docs/source/backends/samsung/samsung-overview.md +++ b/docs/source/backends/samsung/samsung-overview.md @@ -101,17 +101,17 @@ Exynos delegated .pte file will automatically run on the registered backend. ## Reference -**→{doc}`exynos-partitioner` — Partitioner options.** +**→{doc}`samsung-partitioner` — Partitioner options.** -**→{doc}`exynos-quantization` — Supported quantization schemes.** +**→{doc}`samsung-quantization` — Supported quantization schemes.** -**→{doc}`exynos-op-support` — Supported operators.** +**→{doc}`samsung-op-support` — Supported operators.** ```{toctree} :maxdepth: 2 :hidden: :caption: Exynos Backend -exynos-partitioner -exynos-quantization -exynos-op-support +samsung-partitioner +samsung-quantization +samsung-op-support diff --git a/docs/source/backends/vulkan/tutorials/etvk-llama-tutorial.md b/docs/source/backends/vulkan/tutorials/etvk-llama-tutorial.md new file mode 100644 index 00000000000..cb14c72331e --- /dev/null +++ b/docs/source/backends/vulkan/tutorials/etvk-llama-tutorial.md @@ -0,0 +1,159 @@ +# Exporting Llama 3.2 1B/3B Instruct to ExecuTorch Vulkan and running on device + +This tutorial assumes that you have a working local copy of the ExecuTorch repo, +and have gone through the steps to install the executorch pip package or have +installed it by building from source. + +This tutorial also assumes that you have the Android SDK tools installed and +that you are able to connect to an Android device via `adb`. + +Finally, the Android NDK should also be installed, and your environment should +have a variable `ANDROID_NDK` that points to the root directory of the NDK. + +```shell +export ANDROID_NDK= +``` + +## Download the Llama 3.2 1B/3B Instruct model checkpoint and tokenizer + +The model checkpoint and tokenizer can be downloaded from the +[Meta Llama website](https://www.llama.com/llama-downloads/). + +The model files should be downloaded to `~/.llama/checkpoints/Llama3.2-1B-Instruct`. + +## Export the Llama 3.2 1B/3B model + +First, navigate to the root of the ExecuTorch repo. + +```shell +# Navigate to executorch root +cd ~/executorch +``` + +Then, set some environment variables to describe how the model should be +exported. Feel free to tune the values to your preferences. + +```shell +export LLM_NAME=Llama3.2 && \ +export LLM_SIZE=1B && \ +export LLM_SUFFIX="-Instruct" && \ +export QUANT=8da4w && \ +export BACKEND=vulkan && \ +export GROUP_SIZE=64 && \ +export CONTEXT_LENGTH=2048 +``` + +Then, export the Llama 3.2 1B/3B Instruct model to ExecuTorch Vulkan. Note that +that `--vulkan-force-fp16` flag is set, which will improve model inference +latency at the cost of model accuracy. Feel free to remove this flag. 
+ +```shell +python -m examples.models.llama.export_llama \ + -c $HOME/.llama/checkpoints/${LLM_NAME}-${LLM_SIZE}${LLM_SUFFIX}/consolidated.00.pth \ + -p $HOME/.llama/checkpoints/${LLM_NAME}-${LLM_SIZE}${LLM_SUFFIX}/params.json \ + -d fp32 --${BACKEND} \ + -qmode ${QUANT} -G ${GROUP_SIZE} \ + --max_seq_length ${CONTEXT_LENGTH} \ + --max_context_length ${CONTEXT_LENGTH} \ + -kv --use_sdpa_with_kv_cache \ + --metadata '{"append_eos_to_prompt": 0, "get_bos_id":128000, "get_eos_ids":[128009, 128001]}' \ + --model "llama3_2" \ + --output_name $HOME/.llama/checkpoints/${LLM_NAME}-${LLM_SIZE}${LLM_SUFFIX}/${LLM_NAME}-${LLM_SIZE}${LLM_SUFFIX}_${BACKEND}_${QUANT}_g${GROUP_SIZE}_c${CONTEXT_LENGTH}.pte + +``` + +After exporting the model, push the exported `.pte` file and the tokenizer to +your device. + +```shell +adb shell mkdir -p /data/local/tmp/llama && \ +adb push ~/.llama/checkpoints/${LLM_NAME}-${LLM_SIZE}${LLM_SUFFIX}/tokenizer.model \ + /data/local/tmp/llama/${LLM_NAME}-${LLM_SIZE}${LLM_SUFFIX}_tokenizer.model && \ +adb push ~/.llama/checkpoints/${LLM_NAME}-${LLM_SIZE}${LLM_SUFFIX}/${LLM_NAME}-${LLM_SIZE}${LLM_SUFFIX}_${BACKEND}_${QUANT}_g${GROUP_SIZE}_c${CONTEXT_LENGTH}.pte \ + /data/local/tmp/llama/${LLM_NAME}-${LLM_SIZE}${LLM_SUFFIX}_${BACKEND}_${QUANT}_g${GROUP_SIZE}_c${CONTEXT_LENGTH}.pte +``` + +## Build Core Executorch Components + +To be able to run the `.pte` file on device, first the core libraries, +including the Vulkan backend, must be compiled for Android. + +```shell +cmake . \ + -DCMAKE_INSTALL_PREFIX=cmake-out-android-so \ + -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \ + -DANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES=ON \ + --preset "android-arm64-v8a" \ + -DANDROID_PLATFORM=android-28 \ + -DPYTHON_EXECUTABLE=python \ + -DCMAKE_BUILD_TYPE=Release \ + -DEXECUTORCH_PAL_DEFAULT=posix \ + -DEXECUTORCH_BUILD_LLAMA_JNI=ON \ + -DEXECUTORCH_BUILD_EXTENSION_NAMED_DATA_MAP=ON \ + -DEXECUTORCH_BUILD_VULKAN=ON \ + -DEXECUTORCH_BUILD_TESTS=OFF \ + -Bcmake-out-android-so && \ +cmake --build cmake-out-android-so -j16 --target install --config Release +``` + +## Build and push the llama runner binary to Android + +Then, build a binary that can be used to run the `.pte` file. + +```shell +cmake examples/models/llama \ + -DCMAKE_INSTALL_PREFIX=cmake-out-android-so \ + -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \ + -DANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES=ON \ + -DEXECUTORCH_ENABLE_LOGGING=ON \ + -DANDROID_ABI=arm64-v8a \ + -DANDROID_PLATFORM=android-28 \ + -DCMAKE_BUILD_TYPE=Release \ + -DPYTHON_EXECUTABLE=python \ + -Bcmake-out-android-so/examples/models/llama && \ +cmake --build cmake-out-android-so/examples/models/llama -j16 --config Release +``` + +Once the binary is built, it can be pushed to your Android device. + +```shell +adb shell mkdir /data/local/tmp/etvk/ && \ +adb push cmake-out-android-so/examples/models/llama/llama_main /data/local/tmp/etvk/ +``` + +## Execute the llama runner binary + +Finally, we can execute the lowered `.pte` file on your device. 
+ +```shell +adb shell /data/local/tmp/etvk/llama_main \ + --model_path=/data/local/tmp/llama/${LLM_NAME}-${LLM_SIZE}${LLM_SUFFIX}_${BACKEND}_${QUANT}_g${GROUP_SIZE}_c${CONTEXT_LENGTH}.pte \ + --tokenizer_path=/data/local/tmp/llama/${LLM_NAME}-${LLM_SIZE}${LLM_SUFFIX}_tokenizer.model \ + --temperature=0 --seq_len=400 --warmup \ + --prompt=\"\<\|begin_of_text\|\>\<\|start_header_id\|\>system\<\|end_header_id\|\>Write me a short poem.\<\|eot_id\|\>\<\|start_header_id\|\>assistant\<\|end_header_id\|\>\" +``` + +Here is some sample output captured from a Galaxy S24: + +```shell +E tokenizers:hf_tokenizer.cpp:60] Error parsing json file: [json.exception.parse_error.101] parse error at line 1, column 1: syntax error while parsing value - invalid literal; last read: 'I' +<|begin_of_text|><|start_header_id|>system<|end_header_id|>Write me a short poem.<|eot_id|><|start_header_id|>assistant<|end_header_id|> + +Here is a short poem I came up with: + +"Moonlight whispers secrets to the night +A gentle breeze that rustles the light +The stars up high, a twinkling show +A peaceful world, where dreams grow slow" + +I hope you enjoy it!<|eot_id|> + +PyTorchObserver {"prompt_tokens":14,"generated_tokens":54,"model_load_start_ms":1760077800721,"model_load_end_ms":1760077802998,"inference_start_ms":1760077802998,"inference_end_ms":1760077804187,"prompt_eval_end_ms":1760077803162,"first_token_ms":1760077803162,"aggregate_sampling_time_ms":19,"SCALING_FACTOR_UNITS_PER_SECOND":1000} + Prompt Tokens: 14 Generated Tokens: 54 + Model Load Time: 2.277000 (seconds) + Total inference time: 1.189000 (seconds) Rate: 45.416316 (tokens/second) + Prompt evaluation: 0.164000 (seconds) Rate: 85.365854 (tokens/second) + Generated 54 tokens: 1.025000 (seconds) Rate: 52.682927 (tokens/second) + Time to first generated token: 0.164000 (seconds) + Sampling time over 68 tokens: 0.019000 (seconds) +``` diff --git a/docs/source/backends/vulkan/tutorials/etvk-profiling-tutorial.md b/docs/source/backends/vulkan/tutorials/etvk-profiling-tutorial.md new file mode 100644 index 00000000000..07982d81c1c --- /dev/null +++ b/docs/source/backends/vulkan/tutorials/etvk-profiling-tutorial.md @@ -0,0 +1,144 @@ +# Executing and profiling an ExecuTorch Vulkan model on device + +This tutorial assumes that you have a working local copy of the ExecuTorch repo, +and have gone through the steps to install the executorch pip package or have +installed it by building from source. + +This tutorial also assumes that you have the Android SDK tools installed and +that you are able to connect to an Android device via `adb`. + +Finally, the Android NDK should also be installed, and your environment should +have a variable `ANDROID_NDK` that points to the root directory of the NDK. + +```shell +export ANDROID_NDK= +``` + +## Lower a model to ExecuTorch Vulkan and obtain the `.pte` file + + +The commands in this tutorial are assumed to be executed from ExecuTorch's root +directory. + +```shell +cd ~/executorch +``` + +For this tutorial, we will use the export script in +[`executorch/examples/vulkan/export.py`](https://github.com/pytorch/executorch/tree/main/examples/vulkan), +however any method of generating a `.pte` file will suffice. In this tutorial, +the InceptionV3 model is exported. + +```shell +python -m examples.vulkan.export --model_name=ic3 -o . -fp16 +``` + +After exporting, there should be a file called `ic3_vulkan.pte` in the root +directory of ExecuTorch. 
Feel free to modify the `-o` argument of the script to +control where the `.pte` file will be stored. + +Then, push the `.pte` file to device. + +```shell +adb shell mkdir -p /data/local/tmp/etvk/models/ && \ +adb push ic3_vulkan.pte /data/local/tmp/etvk/models/ic3_vulkan.pte +``` + +## Build the `executor_runner` binary and push to device + +To be able to run the `.pte` file on device, first the core libraries, +including the Vulkan backend, must be compiled for Android. Note that +`-DEXECUTORCH_ENABLE_EVENT_TRACER=ON` is used to turn on profiling, and +`-DEXECUTORCH_BUILD_EXECUTOR_RUNNER=ON` is used to build the runner binary that +will be used to execute and profile the `.pte` file. + + +```shell +cmake . \ + -DCMAKE_INSTALL_PREFIX=cmake-out-android-so \ + -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \ + -DANDROID_SUPPORT_FLEXIBLE_PAGE_SIZES=ON \ + --preset "android-arm64-v8a" \ + -DANDROID_PLATFORM=android-28 \ + -DPYTHON_EXECUTABLE=python \ + -DCMAKE_BUILD_TYPE=Release \ + -DEXECUTORCH_PAL_DEFAULT=posix \ + -DEXECUTORCH_BUILD_LLAMA_JNI=ON \ + -DEXECUTORCH_BUILD_EXTENSION_NAMED_DATA_MAP=ON \ + -DEXECUTORCH_BUILD_VULKAN=ON \ + -DEXECUTORCH_BUILD_TESTS=OFF \ + -DEXECUTORCH_BUILD_EXTENSION_EVALUE_UTIL=ON \ + -DEXECUTORCH_BUILD_EXECUTOR_RUNNER=ON \ + -DEXECUTORCH_ENABLE_EVENT_TRACER=ON \ + -Bcmake-out-android-so && \ +cmake --build cmake-out-android-so -j16 --target install --config Release +``` + +Once the build completes, we can push the runner binary to device. + +```shell +adb push cmake-out-android-so/executor_runner /data/local/tmp/etvk/executor_runner +``` + +## Execute the `.pte` file + +Finally, we can execute the lowered `.pte` file on your device. To test run the +model file without profiling: + +```shell +adb shell /data/local/tmp/etvk/executor_runner \ + --model_path /data/local/tmp/etvk/models/ic3_vulkan.pte +``` + +Now, with profiling: + +```shell +MODEL_NAME=ic3 && \ +BACKEND=vulkan && \ +NUM_ITERS=3 && \ +adb shell mkdir -p /data/local/tmp/etvk/etdumps/ && \ +adb shell /data/local/tmp/etvk/executor_runner \ + --model_path /data/local/tmp/etvk/models/${MODEL_NAME}_${BACKEND}.pte \ + --num_executions=${NUM_ITERS} \ + --etdump_path /data/local/tmp/etvk/etdumps/${MODEL_NAME}_${BACKEND}.etdp && \ +adb pull /data/local/tmp/etvk/etdumps/${MODEL_NAME}_${BACKEND}.etdp ${MODEL_NAME}_${BACKEND}.etdp && \ +adb shell rm /data/local/tmp/etvk/etdumps/${MODEL_NAME}_${BACKEND}.etdp && \ +python devtools/inspector/inspector_cli.py \ + --etdump_path ${MODEL_NAME}_${BACKEND}.etdp +``` + +Here is some sample (tailed) output from a Samsung Galaxy S24: + +```shell +├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ +│ 165 │ Execute │ conv2d_clamp_half_163 │ 0.345082 │ 0.346164 │ 0.346247 │ 0.345748 │ 0.344812 │ 0.346268 │ [] │ True │ │ [2081488974948084, 2081488995911052, 2081489016763676] │ +├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ +│ 166 │ Execute │ conv2d_clamp_half_164 │ 0.306124 │ 0.30654 │ 0.306998 │ 0.306557 │ 0.30602 │ 0.307112 │ [] │ True │ │ [2081488975294716, 2081488996256228, 2081489017110204] │ 
+├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ +│ 167 │ Execute │ set_zero_int32_165 │ 0.00240245 │ 0.00244403 │ 0.00248561 │ 0.00244403 │ 0.00239205 │ 0.002496 │ [] │ True │ │ [2081488975601100, 2081488996563132, 2081489017417680] │ +├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ +│ 168 │ Execute │ concat_2_texture3d_half_166 │ 0.0122305 │ 0.01248 │ 0.0125634 │ 0.0124108 │ 0.0121682 │ 0.0125842 │ [] │ True │ │ [2081488975603960, 2081488996565940, 2081489017420436] │ +├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ +│ 169 │ Execute │ set_zero_int32_167 │ 0.00157056 │ 0.00161195 │ 0.00161214 │ 0.00159478 │ 0.00156021 │ 0.00161219 │ [] │ True │ │ [2081488975616804, 2081488996578888, 2081489017432968] │ +├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ +│ 170 │ Execute │ concat_3_texture3d_half_168 │ 0.0420369 │ 0.0423281 │ 0.0427857 │ 0.0423974 │ 0.0419641 │ 0.0429001 │ [] │ True │ │ [2081488975618728, 2081488996580864, 2081489017434944] │ +├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ +│ 171 │ Execute │ update_concat_offset_3_int32_169 │ 0.00261035 │ 0.00265193 │ 0.00265212 │ 0.00263468 │ 0.00259995 │ 0.00265217 │ [] │ True │ │ [2081488975661992, 2081488996623556, 2081489017477272] │ +├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ +│ 172 │ Execute │ concat_1_texture3d_half_170 │ 0.00758157 │ 0.00774789 │ 0.00803914 │ 0.00779994 │ 0.00753999 │ 0.00811195 │ [] │ True │ │ [2081488975664956, 2081488996626572, 2081489017480288] │ +├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ +│ 173 │ Execute │ mean2d_half_171 │ 0.0147889 │ 0.0148721 │ 0.0150384 │ 0.0149067 │ 0.0147681 │ 0.01508 │ [] │ True │ │ [2081488975673432, 2081488996634476, 2081489017488400] │ +├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ 
+│ 174 │ Execute │ view_half_172 │ 0.00644803 │ 0.00644803 │ 0.00653119 │ 0.00648268 │ 0.00644803 │ 0.00655198 │ [] │ True │ │ [2081488975688876, 2081488996649712, 2081489017503532] │ +├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ +│ 175 │ Execute │ view_half_173 │ 0.00488806 │ 0.00488806 │ 0.00488806 │ 0.00488806 │ 0.00488806 │ 0.00488806 │ [] │ True │ │ [2081488975695688, 2081488996656524, 2081489017510448] │ +├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ +│ 176 │ Execute │ linear_naive_texture3d_half_174 │ 0.586726 │ 0.590096 │ 0.595338 │ 0.590876 │ 0.585884 │ 0.596648 │ [] │ True │ │ [2081488975700940, 2081488996661776, 2081489017515700] │ +├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ +│ 177 │ Execute │ image_to_nchw_texture3d_half_float_175 │ 0.00270395 │ 0.00270414 │ 0.00274572 │ 0.00272139 │ 0.00270391 │ 0.00275612 │ [] │ True │ │ [2081488976297952, 2081488997248024, 2081489018106160] │ +├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ +│ 178 │ Execute │ DELEGATE_CALL │ 20.8864 │ 20.9461 │ 21.5925 │ 21.1906 │ 20.8715 │ 21.7541 │ [] │ False │ │ [358395625, 380178646, 401147657] │ +├─────┼────────────────────┼────────────────────────────────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼──────────────┼────────────┼───────────────────┼─────────────────────────┼────────────────────────────────────────────────────────┤ +│ 179 │ Execute │ Method::execute │ 20.8867 │ 20.9464 │ 21.593 │ 21.191 │ 20.8718 │ 21.7547 │ [] │ False │ │ [358395521, 380178542, 401147552] │ +╘═════╧════════════════════╧════════════════════════════════════════╧══════════════╧══════════════╧══════════════╧══════════════╧══════════════╧══════════════╧════════════╧═══════════════════╧═════════════════════════╧════════════════════════════════════════════════════════╛ +``` diff --git a/docs/source/backends/vulkan/tutorials/vulkan-tutorials.md b/docs/source/backends/vulkan/tutorials/vulkan-tutorials.md new file mode 100644 index 00000000000..953c93a9c12 --- /dev/null +++ b/docs/source/backends/vulkan/tutorials/vulkan-tutorials.md @@ -0,0 +1,13 @@ +# Vulkan Backend Tutorials + +**→{doc}`etvk-profiling-tutorial`** + +**→{doc}`etvk-llama-tutorial`** + +```{toctree} +:maxdepth: 2 +:hidden: +:caption: Tutorials + +etvk-profiling-tutorial +etvk-llama-tutorial diff --git a/docs/source/backends/vulkan/vulkan-op-support.rst b/docs/source/backends/vulkan/vulkan-op-support.rst new file mode 100644 index 00000000000..623907cb504 --- /dev/null +++ b/docs/source/backends/vulkan/vulkan-op-support.rst @@ -0,0 +1,46 @@ +================ +Operator Support +================ + +This page lists the 
operators currently supported by the Vulkan backend. The +source of truth for this information is `op_registry.py `_, +which is used by the Vulkan Partitioner to determine which operators should be +lowered to the Vulkan backend and additionally describes the capabilities of +each operator implementation. + +If an operator used in your model is not in this list, feel free to create a +feature request on Github and we will do our best to add an implementation for +the operator. + +The namespace of an operator describes where it originates from: + +* **aten** - operators in this namespace correspond 1:1 to operators in PyTorch's + `ATen library `_. + They all support fp16 and fp32 dtypes at a minimum. +* **dim_order_op** - these operators are inserted when lowering to ExecuTorch in + order to manage optimal tensor memory layouts. They are typically removed, + since the Vulkan backend manages optimal tensor representations internally. +* **llama** - custom ops targeted for LLM inference. These are typically inserted + by model source transformations applied to a `nn.Module` and are not invoked + directly by a PyTorch model. +* **operator** - these operators work with symbolic integers, which are also + supported by the Vulkan backend. +* **quantized_decomposed** / **torchao** - these ops are introduced by quantization + workflows (either torchao's `quantize_` API or the PT2E quantization flow). + They typically represent quantizing/dequantizing a tensor, or choosing the + quantization parameters for a tensor. In practice, most instances of these + operators will be fused into a custom op in the **et_vk** namespace. +* **et_vk** - these are custom operators implemented only in the Vulkan backend. + They typically represent quantized variants of **aten** operators, or fusions + of common operator patterns. They are inserted by operator fusion graph passes + when lowering to the Vulkan backend. + +All operators support dynamic input shapes unless otherwise noted (i.e. "no +resize support"). The expectation is that over time, all operators will be able +to support dynamic shapes. + +.. csv-table:: Operator Support + :file: vulkan-op-support-table.csv + :header-rows: 1 + :widths: 25 25 75 + :align: left diff --git a/docs/source/backends/vulkan/vulkan-overview.md b/docs/source/backends/vulkan/vulkan-overview.md new file mode 100644 index 00000000000..50c87cd047b --- /dev/null +++ b/docs/source/backends/vulkan/vulkan-overview.md @@ -0,0 +1,163 @@ +# Vulkan Backend + +The ExecuTorch Vulkan (ET-VK) backend enables ExecuTorch models to execute on +GPUs via the cross-platform [Vulkan API](https://www.vulkan.org/). Although the +Vulkan API support is almost ubiquitous among modern GPUs, the ExecuTorch Vulkan +backend is currently developed with a specific focus for **Android GPUs**. + +## Features + +- Wide operator support via an in-tree [GLSL compute shader library](https://github.com/pytorch/executorch/tree/main/backends/vulkan/runtime/graph/ops/glsl) +- Support for models that require dynamic shapes +- Support for FP32 and FP16 inference modes +- Support for quantized linear layers with 8-bit/4-bit weights and 8-bit dynamically quantized activations +- Support for quantized linear layers with 8-bit/4-bit weights and FP32/FP16 activations + +Note that the Vulkan backend is under active development, and its GLSL compute +shader library is being consistently expanded over time. Additional support for +quantized operators (i.e. quantized convolution) and additional quantization +modes is on the way. 
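+
+The short sketch below is added purely for illustration (it is not part of the
+tutorial set) to show how the dynamic shape support listed above can be
+exercised: a toy module with a dynamic batch dimension is exported with
+`torch.export.Dim` and lowered using the `require_dynamic_shapes` option
+documented on the Partitioner API page. The module and dimension names are
+illustrative assumptions, not existing ExecuTorch symbols.
+
+```python
+import torch
+
+from executorch.backends.vulkan.partitioner.vulkan_partitioner import VulkanPartitioner
+from executorch.exir import to_edge_transform_and_lower
+
+
+class AddModule(torch.nn.Module):
+    def forward(self, x, y):
+        return x + y
+
+
+# Mark the leading (batch) dimension of both inputs as dynamic, with an upper bound.
+batch = torch.export.Dim("batch", max=32)
+sample_inputs = (torch.randn(4, 64), torch.randn(4, 64))
+
+exported_program = torch.export.export(
+    AddModule(),
+    sample_inputs,
+    dynamic_shapes={"x": {0: batch}, "y": {0: batch}},
+)
+
+# Only lower operators that can handle dynamic shapes to the Vulkan backend.
+etvk_program = to_edge_transform_and_lower(
+    exported_program,
+    partitioner=[VulkanPartitioner({"require_dynamic_shapes": True})],
+).to_executorch()
+```
+
+At runtime, the resulting `.pte` file can then be invoked with any batch size up
+to the bound declared at export time.
+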
+ +## Target Requirements + +- Supports Vulkan 1.1 + +## Development Requirements + +To contribute to the Vulkan delegate, the [Vulkan SDK](https://vulkan.lunarg.com/sdk/home#android) +must be installed on the development system. After installation, the `glslc` binary must +be found in your `PATH` in order to compile Vulkan shaders. This can be checked by +running + +```sh +glslc --version +``` + +If this is not the case after completing the Vulkan SDK installation, you may have to +go into `~/VulkanSDK//` and run + +```sh +source setup-env.sh +``` + +or alternatively, + +```sh +python install_vulkan.py +``` + +The [Android NDK](https://developer.android.com/ndk/downloads) must also be installed. +Any NDK version past NDK r17c should suffice. + +---- + +## Using the Vulkan Backend + +To lower a model to the Vulkan backend during the export and lowering process, +pass an instance of `VulkanPartitioner` to `to_edge_transform_and_lower`. The +example below demonstrates this process using the MobileNet V2 model from +torchvision. + +```python +import torch +import torchvision.models as models + +from executorch.backends.vulkan.partitioner.vulkan_partitioner import VulkanPartitioner +from executorch.exir import to_edge_transform_and_lower + +from torchvision.models.mobilenetv2 import MobileNet_V2_Weights + +mobilenet_v2 = models.mobilenetv2.mobilenet_v2( + weights=MobileNet_V2_Weights.DEFAULT +).eval() + +sample_inputs = (torch.randn(1, 3, 224, 224),) + +exported_program = torch.export.export(mobilenet_v2, sample_inputs) + +etvk_program = to_edge_transform_and_lower( + exported_program, + partitioner=[VulkanPartitioner()], +).to_executorch() + +with open("mv2_vulkan.pte", "wb") as file: + etvk_program.write_to_file(file) +``` + +See [Partitioner API](vulkan-partitioner.md) +for a reference on available partitioner options. + +---- + +## Quantization + +The Vulkan delegate currently supports execution of quantized linear layers. +See [Vulkan Quantization](vulkan-quantization.md) +for more information on available quantization schemes and APIs. + +---- + +## Runtime Integration + +To run the model on-device, use the standard ExecuTorch runtime APIs. + +For integration in Android applications, the Vulkan backend is included in the +[executorch-android-vulkan](https://mvnrepository.com/artifact/org.pytorch/executorch-android-vulkan) +package. + +When building from source, pass `-DEXECUTORCH_BUILD_VULKAN=ON` when configuring +the CMake build to compile the Vulkan backend. See [Running on Device](/getting-started.md#running-on-device) +for more information. + +To link against the backend, add the `executorch_backends` CMake target as a +build dependency, or link directly against `libvulkan_backend`. Due to the use +of static initialization to register available compute shaders and operators, +it is required to ensure that the library is linked with `--whole-archive`. + +```cmake +# CMakeLists.txt +find_package(executorch CONFIG REQUIRED COMPONENTS vulkan_backend executorch_backends) + +... +target_link_libraries( + my_target + PRIVATE + executorch + executorch_backends + ... +) + +# Ensure that unused code is not discarded. The required linker options may be +# different depending on the target platform. Typically, the +# executorch_target_link_options_shared_lib function from +# executorch/tools/cmake/Utils.cmake can be used to set the required linker +# options. 
+target_link_options(
+    executorch_backends INTERFACE "SHELL:LINKER:--whole-archive \
+    $ \
+    LINKER:--no-whole-archive"
+)
+```
+
+No additional steps are necessary to use the backend beyond linking the target.
+Any Vulkan-delegated .pte file will automatically run on the registered backend.
+
+## Additional Resources
+
+**→{doc}`vulkan-partitioner`**
+
+**→{doc}`vulkan-quantization`**
+
+**→{doc}`vulkan-troubleshooting`**
+
+```{toctree}
+:maxdepth: 2
+:hidden:
+:caption: Vulkan Backend
+
+vulkan-partitioner
+vulkan-quantization
+vulkan-op-support
+vulkan-troubleshooting
+
+tutorials/vulkan-tutorials
diff --git a/docs/source/backends/vulkan/vulkan-partitioner.md b/docs/source/backends/vulkan/vulkan-partitioner.md
new file mode 100644
index 00000000000..566ec491b47
--- /dev/null
+++ b/docs/source/backends/vulkan/vulkan-partitioner.md
@@ -0,0 +1,55 @@
+# Partitioner API
+
+[VulkanPartitioner](https://github.com/pytorch/executorch/blob/main/backends/vulkan/partitioner/vulkan_partitioner.py)
+is a Python class that controls what operators in a model can or should be
+delegated to the Vulkan backend. It is the primary entry point to the Vulkan
+backend and is also used to configure the backend's behavior.
+
+## Usage
+
+For most use cases, constructing `VulkanPartitioner()` with no arguments is
+sufficient. In this case, the partitioner will lower as much of the model to
+the Vulkan backend as possible.
+
+```python
+etvk_program = to_edge_transform_and_lower(
+    exported_program,
+    partitioner=[VulkanPartitioner()],
+).to_executorch()
+```
+
+## Common Config Options
+
+Generally, the Vulkan backend is configured by passing a `compile_options`
+dictionary to `VulkanPartitioner()`, i.e.
+
+```python
+compile_options = {
+    "require_dynamic_shapes": True,
+    "force_fp16": True,
+}
+
+etvk_program = to_edge_transform_and_lower(
+    exported_program,
+    partitioner=[VulkanPartitioner(compile_options)],
+).to_executorch()
+```
+
+### `require_dynamic_shapes`
+
+If a model is expected to use dynamic shapes, then it is recommended to set the
+`"require_dynamic_shapes"` key in `compile_options`.
+
+Not all operators in Vulkan support dynamic shapes at the moment, although the
+majority do. This flag will prevent operators that don't support dynamic shapes
+from being lowered to Vulkan.
+
+### `force_fp16`
+
+This option causes the Vulkan backend to internally convert all FP32 tensors to
+FP16. This can improve inference latency and memory footprint at the cost of
+model accuracy.
+
+FP32 input tensors will be automatically converted to FP16 upon entering the
+Vulkan backend, and FP16 outputs will automatically be converted back to FP32
+as they are returned.
diff --git a/docs/source/backends/vulkan/vulkan-quantization.md b/docs/source/backends/vulkan/vulkan-quantization.md
new file mode 100644
index 00000000000..89c9f7514b0
--- /dev/null
+++ b/docs/source/backends/vulkan/vulkan-quantization.md
@@ -0,0 +1,163 @@
+# Quantization
+
+The Vulkan backend currently supports execution of quantized linear layers,
+where weights are symmetrically quantized to 8-bit or 4-bit with per output
+channel or per group quantization scales.
+
+Support for additional quantized operators and quantization schemes (i.e. static
++ dynamic quantized convolution, support for statically quantized linear) is
+ +### 4-bit quantization with torchao `quantize_` + +The `quantize_` API from [torchao](https://github.com/pytorch/ao) allows for +more advanced quantization schemes, and is the quantization workflow needed to +access 4-bit quantization. 4-bit quantization is commonly used for LLMs. + +Two options are available to execute linear layers with 4-bit quantization: + +1. Dynamically quantized activations via `Int8DynamicActivationIntxWeightConfig` +2. Weight only quantization via `IntxWeightOnlyConfig` + +Dynamically quantized activations can provide a significant boost in latency +compared to weight only quantization, since it allows GPUs to leverage +accelerated integer dot product instructions when computing matrix +multiplication. + +Below is a simple example of quantizing a simple sequence of linear layers using +the `quantize_` API. + +```python +import torch + +from executorch.backends.vulkan.partitioner.vulkan_partitioner import VulkanPartitioner + +from executorch.exir import to_edge_transform_and_lower +from torchao.quantization.granularity import PerGroup +from torchao.quantization.quant_api import ( + Int8DynamicActivationIntxWeightConfig, + IntxWeightOnlyConfig, + quantize_, +) +from torchao.utils import unwrap_tensor_subclass + + +class LinearSequenceModule(torch.nn.Module): + def __init__(self): + super().__init__() + self.linear1 = torch.nn.Linear(128, 64, bias=False) + self.linear2 = torch.nn.Linear(64, 32, bias=False) + self.linear3 = torch.nn.Linear(32, 16, bias=False) + + def forward(self, x): + x = self.linear1(x) + x = self.linear2(x) + x = self.linear3(x) + return x + + +linear_sequence_module = LinearSequenceModule() + +M = 32 +sample_inputs = (torch.randn(M, 128),) + +group_size = 32 + +q_config_8da4w = Int8DynamicActivationIntxWeightConfig( + weight_dtype=torch.int4, weight_granularity=PerGroup(group_size) +) + +q_config_4w = IntxWeightOnlyConfig( + weight_dtype=torch.int4, granularity=PerGroup(group_size) +) + +quantize_(linear_sequence_module, q_config_8da4w) +unwrap_tensor_subclass(linear_sequence_module) + +# Regular export path from here +exported_program = torch.export.export(linear_sequence_module, sample_inputs) + +etvk_program = to_edge_transform_and_lower( + exported_program, + partitioner=[VulkanPartitioner()], +).to_executorch() +``` + +### 8-bit quantization with PT2E quantization + +For 8-bit quantized linear layers, currently the only quantization scheme +supported is weight only quantization, with weights that are symmetrically +quantized to 8 bits with per output channel quantization scales. + +To access this quantization mode, the PT2E quantization flow must be used. At a +high level, the steps to quantize a model are: + +1) Create an instance of the `VulkanQuantizer` class and specify desired quantization behaviour +2) Use `torch.export.export` to prepare for quantization. +3) Call `prepare_pt2e` to prepare the exported graph for quantization. +4) Execute the prepared model with representative samples to calibrate the quantizated tensor activation ranges. +5) Call `convert_pt2e` to quantize the model. +6) Export and lower the model using the standard flow. 
+ +For example: + +```python +import torch + +from executorch.backends.vulkan.partitioner.vulkan_partitioner import VulkanPartitioner + +from executorch.backends.vulkan.quantizer.vulkan_quantizer import ( + get_symmetric_quantization_config, + VulkanQuantizer, +) + +from executorch.exir import to_edge_transform_and_lower + +from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e + +from torchao.utils import unwrap_tensor_subclass + + +class LinearSequenceModule(torch.nn.Module): + def __init__(self): + super().__init__() + self.linear1 = torch.nn.Linear(128, 64, bias=False) + self.linear2 = torch.nn.Linear(64, 32, bias=False) + self.linear3 = torch.nn.Linear(32, 16, bias=False) + + def forward(self, x): + x = self.linear1(x) + x = self.linear2(x) + x = self.linear3(x) + return x + + +linear_sequence_module = LinearSequenceModule() + +M = 32 +# Create sample inputs +sample_inputs = (torch.randn(M, 128),) + +# Setup quantizer +quantizer = VulkanQuantizer() +quantizer.set_global(get_symmetric_quantization_config(is_dynamic=False, weight_bits=8)) + +# Export the model +exported_program = torch.export.export(linear_sequence_module, sample_inputs) +graph_module = exported_program.module() + +# Quantize the exported program with PT2E quantization flow +quantized_module = prepare_pt2e(graph_module, quantizer) +# Calibrate. In practice, this would be done by iterating over a real dataset +quantized_module(*sample_inputs) +quantized_module = convert_pt2e(quantized_module) + +# Export once more +exported_program = torch.export.export(quantized_module, sample_inputs) + +# Lower to vulkan +etvk_program = to_edge_transform_and_lower( + exported_program, + partitioner=[VulkanPartitioner()], +).to_executorch() +``` diff --git a/docs/source/backends/vulkan/vulkan-troubleshooting.md b/docs/source/backends/vulkan/vulkan-troubleshooting.md new file mode 100644 index 00000000000..9845f588004 --- /dev/null +++ b/docs/source/backends/vulkan/vulkan-troubleshooting.md @@ -0,0 +1,57 @@ +# Troubleshooting + +This page describes common issues that you may encounter when using the Vulkan +backend and how to debug and resolve them. + +## Vulkan Backend Not Found + +If you try to execute a .pte file that has been lowered to the Vulkan backend +and you see an error like: + +```shell +E 00:00:00.366934 executorch:method.cpp:74] Backend VulkanBackend is not registered. +``` + +This error indicates the Vulkan backend is not registered with the runtime. This +can happen because the backend was not compiled or linked, or because the +registration code was optimized out. + +First, make sure that when building ExecuTorch, cmake is configured with +`-DEXECUTORCH_BUILD_VULKAN=ON`. + +Next, make sure that your application is linking the `vulkan_backend` target, +or the `executorch_backends` target. + +Finally, ensure that `vulkan_backend` or `executorch_backends` is being linked +with the equivalent of `--whole-archive`. + +## Slow Performance + +Performance issues can be caused by a variety of factors: + +* A key compute shader (most often convolution or linear) is not performing well + on your target GPU +* Unsupported operators are causing too many graph breaks +* An existing operator is lacking support for some memory layout or storage type + resulting in a high number of copies being inserted to ensure tensors are in + a required representation for the next operator + +If you experience poor on-device performance for a particular model, please +obtain some profiling data while running your model. 
The +[profiling tutorial](./tutorials/etvk-profiling-tutorial.md) can +be a good reference for how to do this. + +Then, please file an issue on Github with the following details: + +* The device(s) you have tested with, and which devices exhibit poor performance + running the model +* The profiling data collected from executing the model +* The release version of ExecuTorch you are using, or the commit hash you built + from if you built from source +* If available, an export script that can be used to export your model to aid + in reproducing the issue +* If available, the `.pte` file you are testing with to aid in reproducing the + issue. + +We will do our best to patch performance problems in the Vulkan backend and +help you resolve your issue. diff --git a/examples/vulkan/README.md b/examples/vulkan/README.md index 71fdd0e4183..7831809be69 100644 --- a/examples/vulkan/README.md +++ b/examples/vulkan/README.md @@ -1,80 +1,84 @@ -# Vulkan Delegate Export Examples +# Example export script for the ExecuTorch Vulkan backend -This directory contains scripts for exporting models with the Vulkan delegate in ExecuTorch. Vulkan delegation allows you to run your models on devices with Vulkan-capable GPUs, potentially providing significant performance improvements over CPU execution. +This directory contains `export.py`, a utility script that can be used to export +models registered in [`executorch/examples/models/__init__.py`](https://github.com/pytorch/executorch/blob/main/examples/models/__init__.py) +to the Vulkan backend. -## Scripts +## Usage -- `export.py`: Basic export script for models to use with Vulkan delegate -- `aot_compiler.py`: Advanced export script with quantization support +Note that all example commands are assumed to be executed from executorch root. -## Usage +```shell +cd ~/executorch +``` ### Basic Export -```bash -python -m executorch.examples.vulkan.export -m -o +For example, to export MobileNet V2: + +```shell +MODEL_NAME=mv2 && \ +OUTPUT_DIR=. && \ +python -m examples.vulkan.export -m ${MODEL_NAME} -o ${OUTPUT_DIR} ``` -### Export with Quantization (Experimental) +This will create a file name `mv2_vulkan.pte` in the specified output directory. -```bash -python -m executorch.examples.vulkan.aot_compiler -m -q -o -``` +### With dynamic shape support -### Dynamic Shape Support +To enable exporting with dynamic shapes, simply add the `-d` flag. -```bash -python -m executorch.examples.vulkan.export -m -d -o +```shell +MODEL_NAME=mv2 && \ +OUTPUT_DIR=. && \ +python -m examples.vulkan.export -m ${MODEL_NAME} -o ${OUTPUT_DIR} -d ``` -### Additional Options +### Export a bundled pte -- `-s/--strict`: Export with strict mode (default: True) -- `-a/--segment_alignment`: Specify segment alignment in hex (default: 0x1000) -- `-e/--external_constants`: Save constants in external .ptd file (default: False) -- `-r/--etrecord`: Generate and save an ETRecord to the given file location +Use the `-b` flag to export a bundled PTE file (i.e. `.bpte`). This is a `.pte` +file with bundled test cases that can be used for correctness checking. -## Examples +```shell +MODEL_NAME=mv2 && \ +OUTPUT_DIR=. && \ +python -m examples.vulkan.export -m ${MODEL_NAME} -o ${OUTPUT_DIR} -d -b +``` -```bash -# Export MobileNetV2 with Vulkan delegate -python -m executorch.examples.vulkan.export -m mobilenet_v2 -o ./exported_models +This will create a file called `mv2_vulkan.bpte` in the specified output directory. 
-# Export MobileNetV3 with quantization -python -m executorch.examples.vulkan.aot_compiler -m mobilenet_v3 -q -o ./exported_models +### With correctness testing -# Export with dynamic shapes -python -m executorch.examples.vulkan.export -m mobilenet_v2 -d -o ./exported_models +The script can also execute the exported and lowered model via pybindings to +check output correctness before writing the output file. -# Export with ETRecord for debugging -python -m executorch.examples.vulkan.export -m mobilenet_v2 -r ./records/mobilenet_record.etrecord -o ./exported_models -``` +To enable this, ensure that your machine: -## Supported Operations +1. Has the [Vulkan SDK](https://vulkan.lunarg.com/sdk/home#android) installed +2. Has Vulkan drivers -The Vulkan delegate supports various operations including: +Additionally, you will need to install the executorch python package from +source, since the Vulkan backend is not included by default in the pip package. -- Basic arithmetic (add, subtract, multiply, divide) -- Activations (ReLU, Sigmoid, Tanh, etc.) -- Convolutions (Conv1d, Conv2d, ConvTranspose2d) -- Pooling operations (MaxPool2d, AvgPool2d) -- Linear/Fully connected layers -- BatchNorm, GroupNorm -- Various tensor operations (cat, reshape, permute, etc.) +```shell +CMAKE_ARGS="-DEXECUTORCH_BUILD_VULKAN=ON " ./install_executorch.sh -e +``` -For a complete list of supported operations, refer to the Vulkan delegate implementation in the ExecuTorch codebase. +Once these conditions are fulfilled, the `--test` flag can be passed to the +script. -## Debugging and Optimization +```shell +MODEL_NAME=mv2 && \ +OUTPUT_DIR=. && \ +python -m examples.vulkan.export -m ${MODEL_NAME} -o ${OUTPUT_DIR} -d --test +``` -If you encounter issues with Vulkan delegation: +You should see some output like -1. Use `-r/--etrecord` to generate an ETRecord for debugging -2. Check if your operations are supported by the Vulkan delegate -3. Ensure your Vulkan drivers are up to date -4. Try using the export script with `--strict False` if strict mode causes issues +```shell +INFO:root:✓ Model test PASSED - outputs match reference within tolerance +``` -## Requirements +### Quantization support -- Vulkan runtime libraries (libvulkan.so.1) -- A Vulkan-capable GPU with appropriate drivers -- PyTorch with Vulkan support +Support for quantization is under active development and will be added soon! From 56ee96ba68b773f1b8ffc3e0ab647879b6d9cdef Mon Sep 17 00:00:00 2001 From: Mergen Nachin Date: Sat, 18 Oct 2025 17:24:03 -0400 Subject: [PATCH 10/26] Success Stories page initial stage (#15236) --- docs/source/success-stories.md | 114 +++++++++++++++++++++++++++------ 1 file changed, 93 insertions(+), 21 deletions(-) diff --git a/docs/source/success-stories.md b/docs/source/success-stories.md index cba874132c6..013f81dcae5 100644 --- a/docs/source/success-stories.md +++ b/docs/source/success-stories.md @@ -6,51 +6,123 @@ Discover how organizations are leveraging ExecuTorch to deploy AI models at scal --- -## 🎯 Featured Success Stories +## Featured Success Stories ::::{grid} 1 :gutter: 3 -:::{grid-item-card} **🚀 Story 1: [Title Placeholder]** +:::{grid-item-card} **Meta's Family of Apps** :class-header: bg-primary text-white -**Industry:** [Industry] -**Hardware:** [Hardware Platform] -**Impact:** [Key Metrics] +**Industry:** Social Media & Messaging +**Hardware:** Android & iOS Devices +**Impact:** Billions of users, latency reduction -[Placeholder Description] - Brief overview of the challenge, solution, and results achieved. 
+Powers Instagram, WhatsApp, Facebook, and Messenger with real-time on-device AI for content ranking, recommendations, and privacy-preserving features at scale. - -[Read Full Story →](#story-1-details) +[Read Blog →](https://engineering.fb.com/2025/07/28/android/executorch-on-device-ml-meta-family-of-apps/) ::: -:::{grid-item-card} **⚡ Story 2: [Title Placeholder]** +:::{grid-item-card} **Meta Quest & Ray-Ban Smart Glasses** :class-header: bg-success text-white -**Industry:** [Industry] -**Hardware:** [Hardware Platform] -**Impact:** [Key Metrics] +**Industry:** AR/VR & Wearables +**Hardware:** Quest 3, Ray-Ban Meta Smart Glasses, Meta Ray-Ban Display -[Placeholder Description] - Brief overview of the challenge, solution, and results achieved. +Enables immersive mixed reality with real-time computer vision, hand tracking, voice commands, and translation on power-constrained wearable devices. +::: +:::{grid-item-card} **Liquid AI: Efficient, Flexible On-Device Intelligence** +:class-header: bg-info text-white +**Industry:** Artificial Intelligence / Edge Computing +**Hardware:** CPU via PyTorch ExecuTorch +**Impact:** 2× faster inference, lower latency, seamless multimodal deployment -[Read Full Story →](#story-2-details) +Liquid AI builds foundation models that make AI work where the cloud can't. In its LFM2 series, the team uses PyTorch ExecuTorch within the LEAP Edge SDK to deploy high-performance multimodal models efficiently across devices. ExecuTorch provides the flexibility to support custom architectures and processing pipelines while reducing inference latency through graph optimization and caching. Together, they enable faster, more efficient, privacy-preserving AI that runs entirely on the edge. + +[Read Blog →](https://www.liquid.ai/blog/how-liquid-ai-uses-executorch-to-power-efficient-flexible-on-device-intelligence) ::: -:::{grid-item-card} **🧠 Story 3: [Title Placeholder]** -:class-header: bg-info text-white +:::{grid-item-card} **PrivateMind: Complete Privacy with On-Device AI** +:class-header: bg-warning text-white + +**Industry:** Privacy & Personal Computing +**Hardware:** iOS & Android Devices +**Impact:** 100% on-device processing + +PrivateMind delivers a fully private AI assistant using ExecuTorch's .pte format. Built with React Native ExecuTorch, it supports LLaMA, Qwen, Phi-4, and custom models with offline speech-to-text and PDF chat capabilities. + +[Visit →](https://privatemind.swmansion.com) +::: + +:::{grid-item-card} **NimbleEdge: On-Device Agentic AI Platform** +:class-header: bg-danger text-white + +**Industry:** AI Infrastructure +**Hardware:** iOS & Android Devices +**Impact:** 30% higher TPS on iOS, faster time-to-market with Qwen/Gemma models + +NimbleEdge successfully integrated ExecuTorch with its open-source DeliteAI platform to enable agentic workflows orchestrated in Python on mobile devices. The extensible ExecuTorch ecosystem allowed implementation of on-device optimization techniques leveraging contextual sparsity. ExecuTorch significantly accelerated the release of "NimbleEdge AI" for iOS, enabling models like Qwen 2.5 with tool calling support and achieving up to 30% higher transactions per second. 
+ +[Visit →](https://nimbleedge.com) • [Blog →](https://www.nimbleedge.com/blog/meet-nimbleedge-ai-the-first-truly-private-on-device-assistant) • [iOS App →](https://apps.apple.com/in/app/nimbleedge-ai/id6746237456) +::: + +:::: + +--- + +## Featured Ecosystem Integrations and Interoperability -**Industry:** [Industry] -**Hardware:** [Hardware Platform] -**Impact:** [Key Metrics] +::::{grid} 2 2 3 3 +:gutter: 2 -[Placeholder Description] - Brief overview of the challenge, solution, and results achieved. +:::{grid-item-card} **Hugging Face Transformers** +:class-header: bg-secondary text-white +Popular models from Hugging Face easily export to ExecuTorch format for on-device deployment. -[Read Full Story →](#story-3-details) +[Learn More →](https://github.com/huggingface/optimum-executorch/) +::: + +:::{grid-item-card} **React Native ExecuTorch** +:class-header: bg-secondary text-white + +Declarative toolkit for running AI models and LLMs in React Native apps with privacy-first, on-device execution. + +[Explore →](https://docs.swmansion.com/react-native-executorch/) • [Blog →](https://expo.dev/blog/how-to-run-ai-models-with-react-native-executorch) +::: + +:::{grid-item-card} **torchao** +:class-header: bg-secondary text-white + +PyTorch-native quantization and optimization library for preparing efficient models for ExecuTorch deployment. + +[Blog →](https://pytorch.org/blog/torchao-quantized-models-and-quantization-recipes-now-available-on-huggingface-hub/) • [Qwen Example →](https://huggingface.co/pytorch/Qwen3-4B-INT8-INT4) • [Phi Example →](https://huggingface.co/pytorch/Phi-4-mini-instruct-INT8-INT4) +::: + +:::{grid-item-card} **Unsloth** +:class-header: bg-secondary text-white + +Optimize LLM fine-tuning with faster training and reduced VRAM usage, then deploy efficiently with ExecuTorch. + +[Example Model →](https://huggingface.co/metascroy/Llama-3.2-1B-Instruct-int8-int4) ::: :::: --- + +## Featured Demos + +- **Text and Multimodal LLM demo mobile apps** - Text (Llama, Qwen3, Phi-4) and multimodal (Gemma3, Voxtral) mobile demo apps. [Try →](https://github.com/meta-pytorch/executorch-examples/tree/main/llm) + +- **Voxtral** - Deploy audio-text-input LLM on CPU (via XNNPACK) and on CUDA. [Try →](https://github.com/pytorch/executorch/blob/main/examples/models/voxtral/README.md) + +- **LoRA adapter** - Export two LoRA adapters that share a single foundation weight file, saving memory and disk space. [Try →](https://github.com/meta-pytorch/executorch-examples/tree/main/program-data-separation/cpp/lora_example) + +- **OpenVINO from Intel** - Deploy [Yolo12](https://github.com/pytorch/executorch/tree/main/examples/models/yolo12), [Llama](https://github.com/pytorch/executorch/tree/main/examples/openvino/llama), and [Stable Diffusion](https://github.com/pytorch/executorch/tree/main/examples/openvino/stable_diffusion) on [OpenVINO from Intel](https://www.intel.com/content/www/us/en/developer/articles/community/optimizing-executorch-on-ai-pcs.html). + +- **Demo title** - Brief description of the demo [Try →](#) + +*Want to showcase your demo? [Submit here →](https://github.com/pytorch/executorch/issues)* \ No newline at end of file From 923761c19b943d13b5cf8b489a4cd5416c273aa9 Mon Sep 17 00:00:00 2001 From: Siddartha Pothapragada Date: Sun, 19 Oct 2025 21:44:17 -0700 Subject: [PATCH 11/26] Android Documentation Improvements and other fixes (#15260) 1. Make Android doc more Dev focused / friendly (Quick Start Navigation , More Clear Pathfinding & Logical Flow for Devs) 3. 
Update links to be relevant in Android sections 4. Fix the broken vulkan & coreML links from Android & iOS flow 5. Fix navigation related issues --- docs/source/android-examples.md | 4 +- docs/source/android-vulkan.md | 2 +- .../source/backends/coreml/coreml-overview.md | 11 +- .../source/backends/vulkan/vulkan-overview.md | 6 +- docs/source/edge-platforms-section.md | 1 + docs/source/using-executorch-android.md | 168 +++++++++++------- .../using-executorch-building-from-source.md | 2 + 7 files changed, 114 insertions(+), 80 deletions(-) diff --git a/docs/source/android-examples.md b/docs/source/android-examples.md index 65580870c57..057fd48bc55 100644 --- a/docs/source/android-examples.md +++ b/docs/source/android-examples.md @@ -1,7 +1,7 @@ # Examples & Demos -- [Working with LLMs - Android Examples](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/android) -- [Demo Apps](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app) +- [Working with LLMs - Android Examples](https://github.com/meta-pytorch/executorch-examples/blob/main/llm/android/LlamaDemo/README.md) - ExecuTorch Llama Android Demo App +- [Demo Apps](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app) - DeepLab v3 model for image segmentation - {doc}`tutorial-arm-vgf` — Export a simple PyTorch model for the ExecuTorch VGF backend ```{toctree} diff --git a/docs/source/android-vulkan.md b/docs/source/android-vulkan.md index 6399ac4ec7c..aa987835989 100644 --- a/docs/source/android-vulkan.md +++ b/docs/source/android-vulkan.md @@ -1 +1 @@ -```{include} backends-vulkan.md +```{include} backends/vulkan/vulkan-overview.md diff --git a/docs/source/backends/coreml/coreml-overview.md b/docs/source/backends/coreml/coreml-overview.md index a08e3ce14ff..18ae4815a1a 100644 --- a/docs/source/backends/coreml/coreml-overview.md +++ b/docs/source/backends/coreml/coreml-overview.md @@ -10,6 +10,7 @@ Core ML delegate is the ExecuTorch solution to take advantage of Apple's [Core M ## Target Requirements Below are the minimum OS requirements on various hardware for running a Core ML-delegated ExecuTorch model: + - [macOS](https://developer.apple.com/macos) >= 13.0 - [iOS](https://developer.apple.com/ios/) >= 16.0 - [iPadOS](https://developer.apple.com/ipados/) >= 16.0 @@ -61,7 +62,6 @@ See [Partitioner API](coreml-partitioner.md) for a reference on available partit The Core ML delegate can also be used as a backend to execute quantized models. See [Core ML Quantization](coreml-quantization.md) for more information on available quantization schemes and APIs. - ## Backward compatibility Core ML supports backward compatibility via the [`minimum_deployment_target`](coreml-partitioner.md#coreml-compilespec) option.
A model exported with a specific deployment target is guaranteed to work on all deployment targets >= the specified deployment target. For example, a model exported with `coremltools.target.iOS17` will work on iOS 17 or higher. @@ -91,16 +91,15 @@ target_link_libraries( No additional steps are necessary to use the backend beyond linking the target. A Core ML-delegated .pte file will automatically run on the registered backend. - ## Reference -**→{doc}`coreml-troubleshooting` — Debug common issues.** +**→{doc}`/backends/coreml/coreml-troubleshooting` — Debug common issues.** -**→{doc}`coreml-partitioner` — Partitioner options.** +**→{doc}`/backends/coreml/coreml-partitioner` — Partitioner options.** -**→{doc}`coreml-quantization` — Supported quantization schemes.** +**→{doc}`/backends/coreml/coreml-quantization` — Supported quantization schemes.** -**→{doc}`coreml-op-support` — Supported operators.** +**→{doc}`/backends/coreml/coreml-op-support` — Supported operators.** ```{toctree} :maxdepth: 2 diff --git a/docs/source/backends/vulkan/vulkan-overview.md b/docs/source/backends/vulkan/vulkan-overview.md index 50c87cd047b..ede7d330e4b 100644 --- a/docs/source/backends/vulkan/vulkan-overview.md +++ b/docs/source/backends/vulkan/vulkan-overview.md @@ -144,11 +144,11 @@ Any Vulkan-delegated .pte file will automatically run on the registered backend. ## Additional Resources -**→{doc}`vulkan-partitioner`** +**→{doc}`/backends/vulkan/vulkan-partitioner`** -**→{doc}`vulkan-quantization`** +**→{doc}`/backends/vulkan/vulkan-quantization`** -**→{doc}`vulkan-troubleshooting`** +**→{doc}`/backends/vulkan/vulkan-troubleshooting`** ```{toctree} :maxdepth: 2 diff --git a/docs/source/edge-platforms-section.md b/docs/source/edge-platforms-section.md index 2b9ee2131de..1396806b4e0 100644 --- a/docs/source/edge-platforms-section.md +++ b/docs/source/edge-platforms-section.md @@ -59,6 +59,7 @@ Key features: ## Next Steps After choosing your platform: + - **{doc}`backends-section`** - Deep dive into backend selection and optimization - **{doc}`llm/working-with-llms`** - Working with Large Language Models on edge devices diff --git a/docs/source/using-executorch-android.md b/docs/source/using-executorch-android.md index cdeb2417a5f..a5623bf3fdd 100644 --- a/docs/source/using-executorch-android.md +++ b/docs/source/using-executorch-android.md @@ -1,12 +1,20 @@ + # Using ExecuTorch on Android -To use from Android, ExecuTorch provides Java/Kotlin API bindings and Android platform integration, available as an AAR file. +🚀 Quick Start: __New to ExecuTorch__ ? Jump to [Using AAR from Maven Central](#using-aar-from-maven-central) for the fastest setup, then see the [Runtime Integration](#runtime-integration) example. -Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on [this page about cross compilation](using-executorch-building-from-source.md#cross-compilation). +To use from Android, ExecuTorch provides Java/Kotlin API bindings and Android platform integration, available as an AAR file. +Note: This page covers Android app integration through the AAR library. The ExecuTorch C++ APIs can also be used from Android native, and the documentation can be found on this page about cross compilation. 
## Installation -All ExecuTorch Android libraries are packaged into an [Android library (AAR)](https://developer.android.com/studio/projects/android-library), `executorch.aar` for both generic (image/audio processing) and LLM (LLaMA) use case. In each release, prebuilt AAR artifacts are uploaded to [Maven](https://repo.maven.apache.org/maven2/org/pytorch/executorch-android/) and S3. Users can also build the AAR from source. +__Choose your installation method:__ + +- __[Maven Central](#using-aar-from-maven-central)__ (recommended): Easiest for most developers +- __[Direct AAR file](#using-aar-file-directly)__: For specific versions or offline development +- __[Build from source](#building-from-source)__: For custom backends or contributions + +All ExecuTorch Android libraries are packaged into an Android library (AAR), executorch.aar for both generic (image/audio processing) and LLM (LLaMA) use case. In each release, prebuilt AAR artifacts are uploaded to Maven and S3. Users can also build the AAR from source. ### Contents of library @@ -14,7 +22,7 @@ The AAR artifact contains the Java library for users to integrate with their Jav - [Java library](https://github.com/pytorch/executorch/tree/main/extension/android/executorch_android/src/main/java/org/pytorch/executorch) - JNI contains the JNI binding for the corresponding Java code, and ExecuTorch native library, including - - core ExecuTorch runtime libraries + - Core ExecuTorch runtime libraries - XNNPACK backend - Portable kernels - Optimized kernels @@ -24,42 +32,52 @@ The AAR artifact contains the Java library for users to integrate with their Jav The AAR library can be used for generic Android device with arm64-v8a or x86_64 architecture. It can be used across form factors, including phones, tablets, tv boxes, etc, as it does not contain any UI components. -## Using AAR from Maven Central +XNNPACK backend -ExecuTorch is available on [Maven Central](https://mvnrepository.com/artifact/org.pytorch/executorch-android). +Portable kernels +Optimized kernels +Quantized kernels +LLaMa-specific Custom ops library. +Comes with two ABI variants, arm64-v8a and x86_64. +The AAR library can be used for generic Android device with arm64-v8a or x86_64 architecture. It can be used across form factors, including phones, tablets, tv boxes, etc, as it does not contain any UI components. -Simply add the target [`org.pytorch:executorch-android:${executorch_version}`](https://repo.maven.apache.org/maven2/org/pytorch/executorch-android/${executorch_version}/) to your Android app dependency (build.gradle), and build your app. +## Using AAR from Maven Central -For example: -``` -# app/build.gradle.kts +✅ Recommended for most developers +ExecuTorch is available on Maven Central. +Simply add the target org.pytorch:executorch-android:${executorch_version} to your Android app dependency (build.gradle), and build your app. For example: + +```kotlin +app/build.gradle.kts dependencies { - implementation("org.pytorch:executorch-android:${executorch_version}") +implementation("org.pytorch:executorch-android:${executorch_version}") } ``` -Note: If you want to use release v0.5.0, please use dependency `org.pytorch:executorch-android:0.5.1`. - -Click the screenshot below to watch the *demo video* on how to add the package and run a simple ExecuTorch model with Android Studio. +Note: If you want to use release v1.0.0, please use dependency org.pytorch:executorch-android:1.0.0. 
+Click the screenshot below to watch the demo video on how to add the package and run a simple ExecuTorch model with Android Studio. - Integrating and Running ExecuTorch on Android +Integrating and Running ExecuTorch on Android ## Using AAR file directly You can also directly specify an AAR file in the app. We upload pre-built AAR to S3 during each release, or as a snapshot. -### Released versions (recommended) +### Released versions (Recommended) | Version | AAR | SHASUMS | | ------- | --- | ------- | -| [${executorch_version}](https://github.com/pytorch/executorch/releases/tag/${executorch_version}) | [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/${executorch_version}/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/${executorch_version}/executorch.aar.sha256sums) | +| [v1.0.0](https://github.com/pytorch/executorch/releases/tag/v1.0.0) | [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/v1.0.0/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/v1.0.0/executorch.aar.sha256sums) | +| [v0.7.0](https://github.com/pytorch/executorch/releases/tag/v0.7.0) | [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/v0.7.0/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/v0.7.0/executorch.aar.sha256sums) | | [v0.6.0](https://github.com/pytorch/executorch/releases/tag/v0.6.0) | [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/v0.6.0/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/v0.6.0/executorch.aar.sha256sums) | | [v0.5.0](https://github.com/pytorch/executorch/releases/tag/v0.5.0) | [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/v0.5.0-rc3/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/v0.5.0-rc3/executorch.aar.sha256sums) | + ### Snapshots from main branch Starting from 2025-04-12, you can download nightly `main` branch snapshots: + * `executorch.aar`: `https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-{YYYYMMDD}/executorch.aar` * `executorch.aar.sha256sums`: `https://ossci-android.s3.amazonaws.com/executorch/release/snapshot-{YYYYMMDD}/executorch.aar.sha256sums` * Replace `YYYYMMDD` with the actual date you want to use. @@ -77,11 +95,11 @@ We aim to make every daily snapshot available and usable. However, for best stab ## Using AAR file To add the AAR file to your app: -1. Download the AAR. -2. Add it to your gradle build rule as a file path. +Download the AAR. +Add it to your gradle build rule as a file path. +An AAR file itself does not contain dependency info, unlike the Maven one which bundled with pom.xml. The Java package requires fbjni and soloader, and currently requires users to explicitly declare the dependency. Therefore, two more dependencies in gradle rule is required: -An AAR file itself does not contain dependency info, unlike the Maven one which bundled with pom.xml. The Java package requires `fbjni` and `soloader`, and currently requires users to explicitly declare the dependency. 
Therefore, two more `dependencies` in gradle rule is required: -``` +```kotlin implementation("com.facebook.soloader:soloader:0.10.5") implementation("com.facebook.fbjni:fbjni:0.7.0") ``` @@ -89,18 +107,20 @@ implementation("com.facebook.fbjni:fbjni:0.7.0") ### Example usage In your app working directory, such as executorch-examples/llm/android/LlamaDemo, -``` + +```sh mkdir -p app/libs curl https://ossci-android.s3.amazonaws.com/executorch/release/${executorch_version}/executorch.aar -o app/libs/executorch.aar ``` And include it in gradle: -``` -# app/build.gradle.kts + +```kotlin +app/build.gradle.kts dependencies { - implementation(files("libs/executorch.aar")) - implementation("com.facebook.soloader:soloader:0.10.5") - implementation("com.facebook.fbjni:fbjni:0.7.0") +implementation(files("libs/executorch.aar")) +implementation("com.facebook.soloader:soloader:0.10.5") +implementation("com.facebook.fbjni:fbjni:0.7.0") } ``` @@ -108,52 +128,62 @@ Now you can compile your app with the ExecuTorch Android library. ## Building from Source -`scripts/build_android_library.sh` is a helper script to build the Java library (into .jar), native library (into .so), and the packaged AAR file. - -You need Android [SDK](https://developer.android.com/studio) and [NDK](https://developer.android.com/ndk/downloads) to use it. - -Current NDK version used in ExecuTorch CI: r27b. +```text +scripts/build_android_library.sh +``` -You need to set `ANDROID_HOME` to Android SDK home and `ANDROID_NDK` to the correct NDK root (containing NOTICE file). +is a helper script to build the Java library (into .jar), native library (into .so), and the packaged AAR file. +You need Android SDK and NDK to use it. +Current NDK version used in ExecuTorch CI: r28c. +You need to set ANDROID_HOME to Android SDK home and ANDROID_NDK to the correct NDK root (containing NOTICE file). -``` +```sh export ANDROID_HOME=/path/to/sdk export ANDROID_NDK=/path/to/ndk sh scripts/build_android_library.sh ``` -Currently, XNNPACK backend is always built with the script. +NOTE: Currently, XNNPACK backend is always built with the script. ### Optional environment variables -Optionally, set these environment variables before running `build_android_library.sh`. +Optionally, set these environment variables before running build_android_library.sh. -#### ANDROID_ABIS -Set environment variable `ANDROID_ABIS` to either `arm64-v8a` or `x86_64` if you only need to build the native library for one ABI only. -``` +- __ANDROID_ABIS__ + +Set environment variable ANDROID_ABIS to either arm64-v8a or x86_64 if you only need to build the native library for one ABI only. + +```sh export ANDROID_ABIS=arm64-v8a -# or -# export ANDROID_ABIS=x86_64 +``` + + (Or) + +```sh +export ANDROID_ABIS=x86_64 +``` + +And then run the script. + +```sh sh scripts/build_android_library.sh ``` -#### EXECUTORCH_CMAKE_BUILD_TYPE -Set environment variable `EXECUTORCH_CMAKE_BUILD_TYPE` to `Release` or `Debug` based on your needs. +- __EXECUTORCH_CMAKE_BUILD_TYPE__ + +Set environment variable EXECUTORCH_CMAKE_BUILD_TYPE to Release or Debug based on your needs. -#### Using MediaTek backend +- __Using MediaTek backend__ -To use [MediaTek backend](backends-mediatek.md), -after installing and setting up the SDK, set `NEURON_BUFFER_ALLOCATOR_LIB` and `NEURON_USDK_ADAPTER_LIB` to the corresponding path. +To use MediaTek backend, after installing and setting up the SDK, set NEURON_BUFFER_ALLOCATOR_LIB and NEURON_USDK_ADAPTER_LIB to the corresponding path. 
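For example (a sketch only; the paths below are placeholders, so substitute the locations of the corresponding libraries from your MediaTek NeuroPilot SDK installation):

```sh
# Illustrative placeholder paths; point these at the buffer allocator and
# USDK adapter libraries shipped with your NeuroPilot SDK before building.
export NEURON_BUFFER_ALLOCATOR_LIB=/path/to/libneuron_buffer_allocator.so
export NEURON_USDK_ADAPTER_LIB=/path/to/libneuronusdk_adapter.mtk.so
sh scripts/build_android_library.sh
```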
-#### Using Qualcomm AI Engine Backend +- __Using Qualcomm AI Engine Backend__ -To use [Qualcomm AI Engine Backend](backends-qualcomm.md#qualcomm-ai-engine-backend), -after installing and setting up the SDK, set `QNN_SDK_ROOT` to the corresponding path. +To use Qualcomm AI Engine Backend, after installing and setting up the SDK, set QNN_SDK_ROOT to the corresponding path. -#### Using Vulkan Backend +- __Using Vulkan Backend__ -To use [Vulkan Backend](backends-vulkan.md#vulkan-backend), -set `EXECUTORCH_BUILD_VULKAN` to `ON`. +To use Vulkan Backend, set EXECUTORCH_BUILD_VULKAN to ON. ## Android Backends @@ -166,6 +196,7 @@ The following backends are available for Android: | [Qualcomm AI Engine](https://www.qualcomm.com/developer/software/qualcomm-ai-engine-direct-sdk) | NPU | [Doc](backends-qualcomm.md) | | [Vulkan](https://www.vulkan.org/) | GPU | [Doc](backends-vulkan.md) | +Start with XNNPACK (CPU backend) for maximum compatibility, then add hardware-specific backends for optimization. ## Runtime Integration @@ -175,26 +206,27 @@ Here is an example code sample in Java that demonstrates how to integrate ExecuT import org.pytorch.executorch.EValue; import org.pytorch.executorch.Module; import org.pytorch.executorch.Tensor; - public class MainActivity extends Activity { - private Module module; - - @Override - protected void onCreate(Bundle savedInstanceState) { - super.onCreate(savedInstanceState); - // Load the ExecuTorch module - Module module = Module.load("/data/local/tmp/add.pte"); - Tensor tensor1 = Tensor.fromBlob(new float[] {1.0f}, new long[] {1}); - Tensor tensor2 = Tensor.fromBlob(new float[] {20.0f}, new long[] {1}); - - EValue eValue1 = EValue.from(tensor1); - EValue eValue2 = EValue.from(tensor2); - float result = module.forward(eValue1, eValue2)[0].toTensor().getDataAsFloatArray()[0]; - } +private Module module; + +@Override +protected void onCreate(Bundle savedInstanceState) { + super.onCreate(savedInstanceState); + // Load the ExecuTorch module + Module module = Module.load("/data/local/tmp/add.pte"); + + Tensor tensor1 = Tensor.fromBlob(new float[] {1.0f}, new long[] {1}); + Tensor tensor2 = Tensor.fromBlob(new float[] {20.0f}, new long[] {1}); + + EValue eValue1 = EValue.from(tensor1); + EValue eValue2 = EValue.from(tensor2); + + float result = module.forward(eValue1, eValue2)[0].toTensor().getDataAsFloatArray()[0]; } ``` -Push the corresponding pte file to the phone: +Push the corresponding pte file to your Android device: + ```sh adb push extension/module/test/resources/add.pte /data/local/tmp/ ``` diff --git a/docs/source/using-executorch-building-from-source.md b/docs/source/using-executorch-building-from-source.md index 36f8f5fefac..c14e05ccf76 100644 --- a/docs/source/using-executorch-building-from-source.md +++ b/docs/source/using-executorch-building-from-source.md @@ -5,6 +5,7 @@ Even if you don't use CMake directly, CMake can emit scripts for other format like Make, Ninja or Xcode. For information, see [cmake-generators(7)](https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html). ## System Requirements + ### Operating System ExecuTorch is tested on the following systems, although it should also work in similar environments. 
@@ -20,6 +21,7 @@ ExecuTorch is tested on the following systems, although it should also work in s * Windows 10+ with Visual Studio 2022+ (experimental) ### Software Requirements + * `conda` or another virtual environment manager - `conda` is recommended as it provides cross-language support and integrates smoothly with `pip` (Python's built-in package manager) From 3d6b5d1b9d4ed0439b0d663ebd909915ac437273 Mon Sep 17 00:00:00 2001 From: Siddartha Pothapragada Date: Mon, 20 Oct 2025 09:17:07 -0700 Subject: [PATCH 12/26] Updated Android doc with proper 1.0.0 backend links to executorch (#15266) ### Summary [PLEASE REMOVE] See [CONTRIBUTING.md's Pull Requests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#pull-requests) for ExecuTorch PR guidelines. [PLEASE REMOVE] If this PR closes an issue, please add a `Fixes #` line. [PLEASE REMOVE] If this PR introduces a fix or feature that should be the upcoming release notes, please add a "Release notes: " label. For a list of available release notes labels, check out [CONTRIBUTING.md's Pull Requests](https://github.com/pytorch/executorch/blob/main/CONTRIBUTING.md#pull-requests). ### Test plan [PLEASE REMOVE] How did you test this PR? Please write down any manual commands you used and note down tests that you have written if applicable. --- docs/source/backends/coreml/coreml-overview.md | 1 + docs/source/using-executorch-android.md | 18 ++++++++++++++---- 2 files changed, 15 insertions(+), 4 deletions(-) diff --git a/docs/source/backends/coreml/coreml-overview.md b/docs/source/backends/coreml/coreml-overview.md index 18ae4815a1a..bff0cb8994e 100644 --- a/docs/source/backends/coreml/coreml-overview.md +++ b/docs/source/backends/coreml/coreml-overview.md @@ -17,6 +17,7 @@ Below are the minimum OS requirements on various hardware for running a Core ML- - [tvOS](https://developer.apple.com/tvos/) >= 16.0 ## Development Requirements + To develop you need: - [macOS](https://developer.apple.com/macos) >= 13.0 diff --git a/docs/source/using-executorch-android.md b/docs/source/using-executorch-android.md index a5623bf3fdd..e9d4449532d 100644 --- a/docs/source/using-executorch-android.md +++ b/docs/source/using-executorch-android.md @@ -64,16 +64,26 @@ Click the screenshot below to watch the demo video on how to add the package and You can also directly specify an AAR file in the app. We upload pre-built AAR to S3 during each release, or as a snapshot. 
-### Released versions (Recommended) +### Latest Released versions (Recommended) + +Starting from [v1.0.0](https://github.com/pytorch/executorch/releases/tag/v1.0.0), there are respective executorch.aar library available by backends + +| AAR | SHASUMS | Backend | +| ------- | --- | ------- | +| [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/1.0.0-xnnpack/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/1.0.0-xnnpack/executorch.aar.sha256sums) | [XNNPACK](backends-xnnpack.md) | +| [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/1.0.0-qnn/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/1.0.0-qnn/executorch.aar.sha256sums) | [Qualcomm AI Engine](backends-qualcomm.md) | +| [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/1.0.0-vulkan/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/1.0.0-vulkan/executorch.aar.sha256sums) | [Vulkan](backends/vulkan/vulkan-overview.md) | + +### Older Released versions + +Download the older released version | Version | AAR | SHASUMS | | ------- | --- | ------- | -| [v1.0.0](https://github.com/pytorch/executorch/releases/tag/v1.0.0) | [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/v1.0.0/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/v1.0.0/executorch.aar.sha256sums) | | [v0.7.0](https://github.com/pytorch/executorch/releases/tag/v0.7.0) | [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/v0.7.0/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/v0.7.0/executorch.aar.sha256sums) | | [v0.6.0](https://github.com/pytorch/executorch/releases/tag/v0.6.0) | [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/v0.6.0/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/v0.6.0/executorch.aar.sha256sums) | | [v0.5.0](https://github.com/pytorch/executorch/releases/tag/v0.5.0) | [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/v0.5.0-rc3/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/v0.5.0-rc3/executorch.aar.sha256sums) | - ### Snapshots from main branch Starting from 2025-04-12, you can download nightly `main` branch snapshots: @@ -194,7 +204,7 @@ The following backends are available for Android: | [XNNPACK](https://github.com/google/XNNPACK) | CPU | [Doc](backends-xnnpack.md) | | [MediaTek NeuroPilot](https://neuropilot.mediatek.com/) | NPU | [Doc](backends-mediatek.md) | | [Qualcomm AI Engine](https://www.qualcomm.com/developer/software/qualcomm-ai-engine-direct-sdk) | NPU | [Doc](backends-qualcomm.md) | -| [Vulkan](https://www.vulkan.org/) | GPU | [Doc](backends-vulkan.md) | +| [Vulkan](https://www.vulkan.org/) | GPU | [Doc](backends/vulkan/vulkan-overview.md) | Start with XNNPACK (CPU backend) for maximum compatibility, then add hardware-specific backends for optimization. 
From 4a73a873d3f1b37be06d7b9cd08ace16b0c6e476 Mon Sep 17 00:00:00 2001 From: Siddartha Pothapragada Date: Mon, 20 Oct 2025 17:09:16 -0700 Subject: [PATCH 13/26] Android Docs: Fix stale backend link (android-samsung-exynos) (#15287) --- docs/source/android-backends.md | 4 ++-- docs/source/using-executorch-android.md | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/source/android-backends.md b/docs/source/android-backends.md index d506813990b..d4da0966ed9 100644 --- a/docs/source/android-backends.md +++ b/docs/source/android-backends.md @@ -16,7 +16,7 @@ Available hardware acceleration backends for Android deployment. - {doc}`android-qualcomm` — Qualcomm AI Engine (NPU) - {doc}`android-mediatek` — MediaTek NPU acceleration - {doc}`android-arm-vgf` — ARM VGF Backend -- {doc}`android-samsung-exynos` — Samsung Exynos NPU +- {doc}`backends/samsung/samsung-overview` — Samsung Exynos NPU ```{toctree} :hidden: @@ -25,4 +25,4 @@ android-vulkan android-qualcomm android-mediatek android-arm-vgf -android-samsung-exynos +backends/samsung/samsung-overview diff --git a/docs/source/using-executorch-android.md b/docs/source/using-executorch-android.md index e9d4449532d..d417100dc68 100644 --- a/docs/source/using-executorch-android.md +++ b/docs/source/using-executorch-android.md @@ -201,7 +201,7 @@ The following backends are available for Android: | Backend | Type | Doc | | ------- | -------- | --- | -| [XNNPACK](https://github.com/google/XNNPACK) | CPU | [Doc](backends-xnnpack.md) | +| [XNNPACK](https://github.com/google/XNNPACK) | CPU | [Doc](backends/xnnpack/xnnpack-overview.md) | | [MediaTek NeuroPilot](https://neuropilot.mediatek.com/) | NPU | [Doc](backends-mediatek.md) | | [Qualcomm AI Engine](https://www.qualcomm.com/developer/software/qualcomm-ai-engine-direct-sdk) | NPU | [Doc](backends-qualcomm.md) | | [Vulkan](https://www.vulkan.org/) | GPU | [Doc](backends/vulkan/vulkan-overview.md) | From 11ff6e929348d98abc25d986120457eea05ab429 Mon Sep 17 00:00:00 2001 From: Jack <32371937+jackzhxng@users.noreply.github.com> Date: Tue, 21 Oct 2025 09:55:47 +0700 Subject: [PATCH 14/26] Export LLMs with Optimum docs (#15062) --- docs/source/llm/export-llm-optimum.md | 171 ++++++++++++++++++++++++++ docs/source/llm/export-llm.md | 2 + docs/source/llm/getting-started.md | 6 +- docs/source/llm/working-with-llms.md | 1 + 4 files changed, 179 insertions(+), 1 deletion(-) create mode 100644 docs/source/llm/export-llm-optimum.md diff --git a/docs/source/llm/export-llm-optimum.md b/docs/source/llm/export-llm-optimum.md new file mode 100644 index 00000000000..1a104f77bc4 --- /dev/null +++ b/docs/source/llm/export-llm-optimum.md @@ -0,0 +1,171 @@ +# Exporting LLMs with HuggingFace's Optimum ExecuTorch + +[Optimum ExecuTorch](https://github.com/huggingface/optimum-executorch) provides a streamlined way to export Hugging Face
transformer models to ExecuTorch format. It offers seamless integration with the Hugging Face ecosystem, making it easy to export models directly from the Hugging Face Hub. + +## Overview + +Optimum ExecuTorch supports a much wider variety of model architectures compared to ExecuTorch's native `export_llm` API. While `export_llm` focuses on a limited set of highly optimized models (Llama, Qwen, Phi, and SmolLM) with advanced features like SpinQuant and attention sink, Optimum ExecuTorch can export diverse architectures including Gemma, Mistral, GPT-2, BERT, T5, Whisper, Voxtral, and many others. + +### Use Optimum ExecuTorch when: +- You need to export models beyond the limited set supported by `export_llm` +- Exporting directly from Hugging Face Hub model IDs, including model variants such as finetunes +- You want a simpler interface with Hugging Face ecosystem integration + +### Use export_llm when: +- Working with one of the highly optimized supported models (Llama, Qwen, Phi, SmolLM) +- You need advanced optimizations like SpinQuant or attention sink +- You need pt2e quantization for QNN/CoreML/Vulkan backends +- Working with Llama models requiring custom checkpoints + +See [Exporting LLMs](export-llm.md) for details on using the native `export_llm` API. + +## Prerequisites + +### Installation + +First, clone and install Optimum ExecuTorch from source: + +```bash +git clone https://github.com/huggingface/optimum-executorch.git +cd optimum-executorch +pip install '.[dev]' +``` + +For access to the latest features and optimizations, install dependencies in dev mode: + +```bash +python install_dev.py +``` + +This installs `executorch`, `torch`, `torchao`, `transformers`, and other dependencies from nightly builds or source. + +## Supported Models + +Optimum ExecuTorch supports a wide range of model architectures including decoder-only LLMs (Llama, Qwen, Gemma, Mistral, etc.), multimodal models, vision models, audio models (Whisper), encoder models (BERT, RoBERTa), and seq2seq models (T5). + +For the complete list of supported models, see the [Optimum ExecuTorch documentation](https://github.com/huggingface/optimum-executorch#-supported-models). + +## Export Methods + +Optimum ExecuTorch offers two ways to export models: + +### Method 1: CLI Export + +The CLI is the simplest way to export models. It provides a single command to convert models from Hugging Face Hub to ExecuTorch format. + +#### Basic Export + +```bash +optimum-cli export executorch \ + --model "HuggingFaceTB/SmolLM2-135M-Instruct" \ + --task "text-generation" \ + --recipe "xnnpack" \ + --output_dir="./smollm2_exported" +``` + +#### With Optimizations + +Add custom SDPA, KV cache optimization, and quantization: + +```bash +optimum-cli export executorch \ + --model "HuggingFaceTB/SmolLM2-135M-Instruct" \ + --task "text-generation" \ + --recipe "xnnpack" \ + --use_custom_sdpa \ + --use_custom_kv_cache \ + --qlinear 8da4w \ + --qembedding 8w \ + --output_dir="./smollm2_exported" +``` + +#### Available CLI Arguments + +Key arguments for LLM export include `--model`, `--task`, `--recipe` (backend), `--use_custom_sdpa`, `--use_custom_kv_cache`, `--qlinear` (linear quantization), `--qembedding` (embedding quantization), and `--max_seq_len`. + +For the complete list of arguments, run: +```bash +optimum-cli export executorch --help +``` + +## Optimization Options + +### Custom Operators + +Optimum ExecuTorch includes custom SDPA (~3x speedup) and custom KV cache (~2.5x speedup) operators. 
Enable with `--use_custom_sdpa` and `--use_custom_kv_cache`. + +### Quantization + +Optimum ExecuTorch uses [TorchAO](https://github.com/pytorch/ao) for quantization. Common options: +- `--qlinear 8da4w`: int8 dynamic activation + int4 weight (recommended) +- `--qembedding 4w` or `--qembedding 8w`: int4/int8 embedding quantization + +Example: +```bash +optimum-cli export executorch \ + --model "meta-llama/Llama-3.2-1B" \ + --task "text-generation" \ + --recipe "xnnpack" \ + --use_custom_sdpa \ + --use_custom_kv_cache \ + --qlinear 8da4w \ + --qembedding 4w \ + --output_dir="./llama32_1b" +``` + +### Backend Support + +Supported backends: `xnnpack` (CPU), `coreml` (Apple GPU), `portable` (baseline), `cuda` (NVIDIA GPU). Specify with `--recipe`. + +## Exporting Different Model Types + +Optimum ExecuTorch supports various model architectures with different tasks: + +- **Decoder-only LLMs**: Use `--task text-generation` +- **Multimodal LLMs**: Use `--task multimodal-text-to-text` +- **Seq2Seq models** (T5): Use `--task text2text-generation` +- **ASR models** (Whisper): Use `--task automatic-speech-recognition` + +For detailed examples of exporting each model type, see the [Optimum ExecuTorch export guide](https://github.com/huggingface/optimum-executorch/blob/main/optimum/exporters/executorch/README.md). + +## Running Exported Models + +### Verifying Output with Python + +After exporting, you can verify the model output in Python before deploying to device using classes from `modeling.py`, such as the `ExecuTorchModelForCausalLM` class for LLMs: + +```python +from optimum.executorch import ExecuTorchModelForCausalLM +from transformers import AutoTokenizer + +# Load the exported model +model = ExecuTorchModelForCausalLM.from_pretrained("./smollm2_exported") +tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct") + +# Generate text +generated_text = model.text_generation( + tokenizer=tokenizer, + prompt="Once upon a time", + max_seq_len=128, +) +print(generated_text) +``` + +### Running on Device + +After verifying your model works correctly, deploy it to device: + +- [Running with C++](run-with-c-plus-plus.md) - Run exported models using ExecuTorch's C++ runtime +- [Running on Android](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/android) - Deploy to Android devices +- [Running on iOS](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple) - Deploy to iOS devices + +## Performance + +For performance benchmarks and on-device metrics, see the [Optimum ExecuTorch benchmarks](https://github.com/huggingface/optimum-executorch#-benchmarks-on-mobile-devices) and the [ExecuTorch Benchmark Dashboard](https://hud.pytorch.org/benchmark/llms?repoName=pytorch%2Fexecutorch). 
+ +## Additional Resources + +- [Optimum ExecuTorch GitHub](https://github.com/huggingface/optimum-executorch) - Full documentation and examples +- [Supported Models](https://github.com/huggingface/optimum-executorch#-supported-models) - Complete model list +- [Export Guide](https://github.com/huggingface/optimum-executorch/blob/main/optimum/exporters/executorch/README.md) - Detailed export examples +- [TorchAO Quantization](https://github.com/pytorch/ao) - Quantization library documentation diff --git a/docs/source/llm/export-llm.md b/docs/source/llm/export-llm.md index 05328afbd43..108e357a3e1 100644 --- a/docs/source/llm/export-llm.md +++ b/docs/source/llm/export-llm.md @@ -20,6 +20,8 @@ As of this doc, the list of supported LLMs include the following: The up-to-date list of supported LLMs can be found in the code [here](https://github.com/pytorch/executorch/blob/main/extension/llm/export/config/llm_config.py#L32). +**Note:** If you need to export models that are not on this list or other model architectures (such as Gemma, Mistral, BERT, T5, Whisper, etc.), see [Exporting LLMs with Optimum](export-llm-optimum.md) which supports a much wider variety of models from Hugging Face Hub. + ## The export_llm API `export_llm` is ExecuTorch's high-level export API for LLMs. In this tutorial, we will focus on exporting Llama 3.2 1B using this API. `export_llm`'s arguments are specified either through CLI args or through a yaml configuration whose fields are defined in [`LlmConfig`](https://github.com/pytorch/executorch/blob/main/extension/llm/export/config/llm_config.py). To call `export_llm`: diff --git a/docs/source/llm/getting-started.md b/docs/source/llm/getting-started.md index 6b6f9d96df7..95caae6ddd9 100644 --- a/docs/source/llm/getting-started.md +++ b/docs/source/llm/getting-started.md @@ -18,8 +18,12 @@ To follow this guide, you'll need to install ExecuTorch. Please see [Setting Up Deploying LLMs to ExecuTorch can be boiled down to a two-step process: (1) exporting the LLM to a `.pte` file and (2) running the `.pte` file using our C++ APIs or Swift/Java bindings. -- [Exporting LLMs](export-llm.md) +### Exporting +- [Exporting LLMs](export-llm.md) - Export using ExecuTorch's native `export_llm` API with advanced optimizations +- [Exporting LLMs with Optimum](export-llm-optimum.md) - Export Hugging Face models with broader architecture support - [Exporting custom LLMs](export-custom-llm.md) + +### Running - [Running with C++](run-with-c-plus-plus.md) - [Running on Android (XNNPack)](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/android) - [Running on Android (Qualcomm)](build-run-llama3-qualcomm-ai-engine-direct-backend.md) diff --git a/docs/source/llm/working-with-llms.md b/docs/source/llm/working-with-llms.md index 4c238f7ae5c..e4088efd12b 100644 --- a/docs/source/llm/working-with-llms.md +++ b/docs/source/llm/working-with-llms.md @@ -11,6 +11,7 @@ Learn how to export LLM models and deploy them across different platforms and ru getting-started export-llm +export-llm-optimum export-custom-llm run-with-c-plus-plus build-run-llama3-qualcomm-ai-engine-direct-backend From 8d6d4d2d094eb8cd1c12e99c86bf707b1357bd4e Mon Sep 17 00:00:00 2001 From: Sicheng Stephen Jia Date: Tue, 21 Oct 2025 11:29:26 -0400 Subject: [PATCH 15/26] [ET-VK] Add redirect for backends-vulkan (#15305) Summary: Title says it all! 
cc @manuelcandales @digantdesai @cbilgin --- CONTRIBUTING.md | 4 +- .../vulkan/vulkan-op-support-table.csv | 113 ++++++++++++++++++ .../backends/vulkan/vulkan-op-support.rst | 2 +- docs/source/conf.py | 3 +- 4 files changed, 118 insertions(+), 4 deletions(-) create mode 100644 docs/source/backends/vulkan/vulkan-op-support-table.csv diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 40d3a206f5b..ec616371fea 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -33,8 +33,8 @@ executorch │ ├── openvino - OpenVINO backend for Intel hardware. │ ├── qualcomm - Qualcomm-specific backends. See doc. │ ├── transforms - Transformations for backend optimization. -│ ├── vulkan - Vulkan backend for cross-platform GPU support. See doc. -│ └── xnnpack - XNNPACK backend for optimized neural network operations. See doc. +│ ├── vulkan - Vulkan backend for cross-platform GPU support. See doc. +│ └── xnnpack - XNNPACK backend for optimized neural network operations. See doc. ├── codegen - Tooling to autogenerate bindings between kernels and the runtime. ├── configurations - Configuration files. ├── devtools - Model profiling, debugging, and inspection. Please refer to the tools documentation for more information. diff --git a/docs/source/backends/vulkan/vulkan-op-support-table.csv b/docs/source/backends/vulkan/vulkan-op-support-table.csv new file mode 100644 index 00000000000..34d2ece924a --- /dev/null +++ b/docs/source/backends/vulkan/vulkan-op-support-table.csv @@ -0,0 +1,113 @@ +Namespace,Operator,Notes +aten,_log_softmax, +aten,_native_batch_norm_legit_no_training, +aten,_softmax, +aten,_to_copy,dtype conversion between float types only +aten,_weight_int8pack_mm, +aten,abs, +aten,add, +aten,addmm, +aten,amax,keepdim=True required; max 2D reductions +aten,amin,keepdim=True required; max 2D reductions +aten,arange, +aten,avg_pool2d, +aten,bmm, +aten,cat, +aten,clamp, +aten,clone, +aten,constant_pad_nd, +aten,convolution,batch=1 for 2D conv; no transposed 1D conv; no 3D conv +aten,cos, +aten,div, +aten,div.Tensor_mode, +aten,embedding, +aten,eq, +aten,exp, +aten,expand_copy,no resize support +aten,flip, +aten,full, +aten,full_like, +aten,ge, +aten,gelu, +aten,gt, +aten,hardshrink, +aten,hardtanh, +aten,index_select, +aten,le, +aten,leaky_relu, +aten,linear, +aten,lt, +aten,max_pool2d, +aten,max_pool2d_with_indices, +aten,mean,keepdim=True required; max 2D reductions +aten,minimum, +aten,mm, +aten,native_group_norm, +aten,native_layer_norm,resize supported +aten,neg, +aten,ones, +aten,ones_like, +aten,permute, +aten,permute_copy, +aten,pow, +aten,relu, +aten,repeat, +aten,round, +aten,rsqrt, +aten,scalar_tensor, +aten,select_copy, +aten,sigmoid, +aten,sin, +aten,slice_copy, +aten,split, +aten,split_with_sizes_copy, +aten,sqrt, +aten,squeeze_copy, +aten,sub, +aten,sum,keepdim=True required; max 2D reductions +aten,t_copy, +aten,tanh, +aten,unsqueeze_copy, +aten,upsample_bilinear2d, +aten,upsample_nearest2d, +aten,view_copy, +aten,zeros, +aten,zeros_like, +aten,_assert_scalar,removed via graph pass +aten,sym_constrain_range_for_size,removed via graph pass +aten,sym_size, +dim_order_ops,_clone_dim_order,no dtype conversion; removable if no dtype change +dim_order_ops,_to_dim_order_copy,no dtype conversion; removable if no dtype change +llama,custom_sdpa, +llama,sdpa_with_kv_cache, +llama,update_cache, +operator,add, +operator,eq, +operator,ge, +operator,getitem, +operator,gt, +operator,le, +operator,lt, +quantized_decomposed,choose_qparams, +quantized_decomposed,choose_qparams_per_token_asymmetric, 
+quantized_decomposed,dequantize_per_channel, +quantized_decomposed,dequantize_per_tensor, +quantized_decomposed,dequantize_per_token, +quantized_decomposed,quantize_per_channel, +quantized_decomposed,quantize_per_tensor, +quantized_decomposed,quantize_per_token, +torchao,choose_qparams_affine, +torchao,dequantize_affine, +torchao,quantize_affine, +et_vk,add_q8ta_q8ta_q8to,no resize support +et_vk,apply_rotary_emb, +et_vk,conv2d_q8ta_q8csw_q8to,no resize support +et_vk,conv2d_q8ta_q8csw_q8to_dw,no resize support +et_vk,conv_with_clamp,batch=1 for 2D conv; no transposed 1D conv +et_vk,dequantize_q8to_from_conv2d,no resize support +et_vk,grid_priors, +et_vk,linear_dq8ca_q4gsw, +et_vk,linear_q4gsw, +et_vk,linear_q8ta_q8csw, +et_vk,linear_qcs4w, +et_vk,quantize_q8ta_for_conv2d,no resize support diff --git a/docs/source/backends/vulkan/vulkan-op-support.rst b/docs/source/backends/vulkan/vulkan-op-support.rst index 623907cb504..547f7f9dc6c 100644 --- a/docs/source/backends/vulkan/vulkan-op-support.rst +++ b/docs/source/backends/vulkan/vulkan-op-support.rst @@ -39,7 +39,7 @@ All operators support dynamic input shapes unless otherwise noted (i.e. "no resize support"). The expectation is that over time, all operators will be able to support dynamic shapes. -.. csv-table:: Operator Support +.. csv-table:: Vulkan Backend Operator Support :file: vulkan-op-support-table.csv :header-rows: 1 :widths: 25 25 75 diff --git a/docs/source/conf.py b/docs/source/conf.py index 31abdef2820..78268c8d053 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -264,7 +264,8 @@ "export-overview": "using-executorch-export.html", "runtime-build-and-cross-compilation": "using-executorch-building-from-source.html", "tutorials/export-to-executorch-tutorial": "../using-executorch-export.html", - "build-run-vulkan": "backends-vulkan.html", + "build-run-vulkan": "backends/vulkan/vulkan-overview.html", + "backends-vulkan": "backends/vulkan/vulkan-overview.html", "executorch-arm-delegate-tutorial": "backends-arm-ethos-u.html", "build-run-coreml": "backends-coreml.html", "build-run-mediatek-backend": "backends-mediatek.html", From 7aa15fef2ca61190fea7eef403f871948a37f438 Mon Sep 17 00:00:00 2001 From: Anthony Shoumikhin Date: Tue, 21 Oct 2025 09:54:11 -0700 Subject: [PATCH 16/26] Revise ExecuTorch documentation for Apple runtime (#15293) --- docs/source/_static/img/swiftpm_xcode1.png | Bin 29231 -> 29037 bytes docs/source/using-executorch-ios.md | 12 +++++++++--- 2 files changed, 9 insertions(+), 3 deletions(-) diff --git a/docs/source/_static/img/swiftpm_xcode1.png b/docs/source/_static/img/swiftpm_xcode1.png index 4e624ed43dfaac23555a9aa3b81bde95ee221b5c..b9acb23847b0b942badca248038b2d21deb3450a 100644 GIT binary patch literal 29037 zcmcG#bzB?W^FNx960CvZF2yNOD8-5fYk^|LDPG*&Efq?E0)?VQi@Q^t1b3&n6?b<6 z`K8bE`O5v>fA8IW?Pho6%$)bkoU_@SgsQ2$g5Xl&0ssJrg1oFc0Duh!0D$<|AT)*T z+-n2<@xV$-SqcECh{8jfVxsSXZtAb30i{DU+vw{l8%%U|kMuvYx-0j2}b(Pf^WSm?r7zCg4 zKILVUz-3@y5OX!R6j7Ib`5$%koj9YlySuXp5096Z*Hf?OPn}$?c%Hp@@q&k!kB5(s z8!f@@=HuvY>do!w#`JF^|J9DHg`1hHjkCLrlOx05c1_AwpDjH}BTrr;i4w?k6dc))vI*vHb@_KFn02#~Q1E`?Rd;|bsoT({k%5AJHWqvYR zn4cdS8bYD&?@{Oz`asR2DSbWjXvX})9Qrgrhu+!P*Z|hE3-b$mdwXqdZMm80tgNhX zIDC0|d1GT^egVCqrT(5^FxX$cxrO%jc66)A1r+-3?(S}GVIFPc{QUg#@^Wo$?f(A$ z;NXCfk#Tit{vLI|{%3J*d4Yz4?9amNaDUJ2k z&CSBX-0A7*;^N}$?CizG#ogUK5{aCfTe!YO&CJXU3=DL2b$$H!@$Meg0*B8nENpLY zkB^VHG}NxHuAZEn2nh*sad93U9sT_IGdVdKClJ-s)AO9~8QEu)o}ONBZ?A&9JO>Ad ztE-FZYlYCz(Ch2#xVShBKa_3C*6-iHo0^(5H8c_v6P1;fS-+upo<61fieh@q(AwNs 
zxbms+UGQL@wDmPNAl$lIAgA$CE~Be75mL>Anr@#0Mki?_Qbeg!qT5Tjr?DwHl0oXC z5M?`aa1-aoK4Gd`ZlR9)82Ua0toFppeCr2<3-5{gZ<43!;DPSHr%XSK(@A9>p4bl1 z7SEY742;x1%RIqn9W0j)qJXAO*dTB`Y2FA#>?`NE;2C62iGFC?l&|P7CZ&jZvyhJT zK`V{#6kjy%I*;&-lfU)#1C-z5H6A4J=tRF3eUeu8J)a{FR?>>vwIDGHJTmblkX@a3?w%>TjE-P_N$n(~gQF?w(gWO)9)GOGV&*75v zC5hFqs@b`lRBLEiqs%;c=9cAYm#^0Nu)2K&cGaBcLECAV!xX*Fcah z=p7l#HVGz5j7zF5k`|G%d&t(M$AQn6;NmK#s_{J;uiZuf_~Bs;2u9H*h{HYTX>{&n+_-QwLUr(u{S zHujHzLv$~@7zJ6_vk%Rs5&}*P_rU3fby4WXw0T-V)RPUhYf7$5am&{~B&Pu{tArux zY)CKcRA1ndb5pV_<#Vp=v-Z~37mrJ1zSB*lC;ty}pLw*)H*{cPGps5xtr2(e!{=p# zr_Qt30%p9#8UiaBD|kDn(@dLAo!t~Zh!puBHmGcYiR3_gUHwZ)xc}4ZH3j2M@KGxF z=T85q5$d3vo9i*RYg-jF-N8BB-$gF<3|j&&o^>^iBUWO^A!UBlW)>=8jpdyi`7cJ4 z?cb~5L+7=Hw7Z|u>uC1NO)I?((Vm6rcm`}p!0G`f=k!Ly6B@725~Pj_vpi-jsu8ry zcEPVAz#Z*JQXzR8Q5yyBG7-7EE^hp``Whc7*8X_elZ@6Q@X$qUCO!_kS~g`EX6dZp zG1lPXA)nBfgJ`O8qQ}Fg-oMB zJKyP4%e%1;>(&H%63c^=FKxKK6+KbJmEVMs6CEYm-~0T9=Q``XenPfWL~Vc#TGd`C zFr0(;4|jQ%wXPxTh||wju_zevIrei(nnUxWCn*EYmqy;v_{Et$-F(J4?vh$BMMN_e zy%cpt>0Vkj)U=xDhZOr@@$}l_a4Q|NVX0-xzE=WM=ZTx365zeSAkDqzPxTrYcF4 zNQIgP2A%@dTHwju0IDcb(U*`A)O2NQAIU(ZeVqaQ{;hCutr>bqpAA%DBQEGu4Ap+ErZ7r}oqqcQ3qas=<%8fxxW4Y}by^YmNpFvv zi@^`I0TIi9)tlAK%imU?kPqpd+UY@6q?mdDhm2a=bFM#n{-V(9dQG7|XR~Ru0? z&QO*!U>S9>()qLVu=|JHH#|n)WClSyXO`U7Gyj94?uD(eEZ8m z)uEk5(l89dBD1BhZRD-+x#g%;=8O1k=iF|$H4IMIC9jX!WG_N5Vo}lH!y$5}^+f;N zLC%fAok5(&6Gd(*$DWkrYSy<}*+N50HS!zv@*hj(P5MXjd4nbKG7;(!&TU&hlb+bH zhphKK%W|2m>0DC@+x;IaZkyTn2avDGR{WRAXRR=a@~=|?qx7M5jNb*~vJTVDA2IJo zhGkR26_h75{*BADh8ilLX~RV zgNl0&fxOK*Tm70qH|yDiE*;LHgcYXOrHbRIbD_7@kz~7(*yn=`L+qiVB3D1V5z92Q zGhB(E&YSjmzlzGh1Hw$E#ZEsn;A{-ED8=l$mJPVpi_m@Y)5O{uxt5xiQy(47@4AGf z+h)VbqPtNCpp7BMp@H#6c|$6=o%>HL)8My#th4MIn=1k4GQYJ`5HVqwm7tuTzi>YR1w*;i~?e@nY+Uv7D`SbWKjPt!iwQ5mf zm*4M;^daIc)!p4x8E9~+h#|l)gvUGdGg!^INVpXRT`?(m#L`aY1=cytuBE9rkr3InuaCA@I7q`yu`igfL<)zL3a#WoMd4JmUR7yT=Hb9S@r+&&R-BY(egVp<2sA+M2-L#I`)s zEni=eE!rqG^EB0HdS8_qR_~x~Nb2R~llXexU*2B#{P??CmO^>QR>b!O=QD|z&r0hK zHm9S81$``5lp5aFggq<8mT0v)CQr)aC1@)N4Zmi1qeESPbjGXrT&IN2zYCo^!PM%qob0>tTQb*U@ITth&S81c>q zZS28fr@F(OjNI&}Sq17#5**geN2~>YH`s1^Q8AdRIb?YPtoJIvd=n)Y2Caz+=SMz2 z3&9iO#>jHE1}}Gcy+oyWpd;%?*Zba=7eC+nz;{utn=pNt{_Nh*5gv+!-MY(Vyz}g8 zfFZg{v<~ni-mj|1LdRIzO1#{e@~KrC78M^)xhzaggqUq5R5QLQPFsISTU{M!g2%*z zXDb)Olx^`vsW%y3gsY2O@|}hEl)mv`qo{ev5_=J&$9JC)-?wrh8dSDZ_XL>dFI7Io zzCIF-lY7<5p?c{^kT?if{U?W9%PcA8w>d&kDIr=gehe65lG!y z*5<>!y1Zk%b->h=xrA*!Q8^LUGXaKbKoMGiI~vROfR;^^n1$LrU@5|RH*{Ek^|qE9 zo%8=_9f8J7!Dh^)JL)x{Xe@(6Gx->0iC@6uOG8Rq85Mq71}YGlA!)@#Q_kJ0SdKOW z_%+_A*DJ}1q0`-J5i(CssY+Kg=k?jsOPhJrDwYP(Lg}tf>Uaj(5&Jbj{rgf~B*sDs zYJE`bHFZdhPFilDLQSK;4p%QqBo!K%6h&@Dhky5tQ{09z>H~lFx@G_;S;g!XcF2<# z1#A0m6CpUSLobf&ZR%RJNkiUL@5KI=7A;at)3CVPgrrxBo?_3}q$j%VSx`>Vr`+JY z&Aq|bz+HdbXFNF9rHmP!`jp>1zhUj@*7-u8tiLDR)`02LK1k;{W%wN|^L1hkx^3Ft zkUej%20T*olFxeZ`+$l+r6ff2z0>%m>^; z4ceaFU&2Q$zhS+CMOKxS(Uo`5WUiao$?Or@PfxxxP5jC{9(X zNcoQ1p8LvEi{tsWEz0sPZ1|B2m!mkQ_|9Fy&3x;>lmQ7L;Ze{_yn+OT7M7lFc{| z0WoIiuj7~MK;fs<4;LDsi&WZjGQ)rU2M*}cfDtzkE4_AA-}FB(A#sU zI|!&4jj>{d-{%~qz!H{iE_c(acv{oW0R`uY%y;LWu1~@2W0Ajn?I0X(x^RZ?p%uhx z0ov7eO~l#XGYgfcV{QGkaOV8pO~I1^|Kex8$bRQZzx@?rL^nS4O1g#A`|nuGUZrgZ zj0v3E;vV+4NTI)*i{y3v=}18nbgy1nD-lT8zR zYzp-8JNFi~;0nFB%tmpzsu@qoBV7ny=P zqwdyH6yQ(FJ~%ck!@lF>qm!5YZ=%y;=&+Vo<+%tW+d+%h)gY-1;6d{BpQyJIZdEav z{t^kj@9DFL_qJqYA&Q>NpW`|H)S)O2*#w0yzpF#^vSIEr8xd#E#$_!E!ahA_Ugqai zf*UnWU6}%YZ2fB511t-_wZv>6-mxLxQbN57OSpPw&1D8UyL+{^)pUfxz2Wlktc2vkF%IeNZz^li-Fff}{0JL5XLDE##d=>xP zGOLU=*QVM*mT?n9r<{k7mpOi={-H_!#$EBy>U2sQtEzgYZ;fikycP9pJ_Tr2+)f5l zpY5-#@}UdIu-m~u(g6QU7#sLg;^TB!LcKJC=TGQ9uTw)z%&z5BvElQ%sBP8Tah_M= 
zvYyRECN6kVmRez>+7-&Lw10V)li`usco-3B(7(&8}230cmY#MFfbPgCm4hiRVF9QSocA8@oD$KArS`(7Np zU|#8KX-T+4(mSAZ)O#HJgyx0!L9B?@hnNgN!)Q%ZN{3~7d>nSeZRoGxKLX}vXMHXB z=vQXEqA(|jUk!s$`E%%i8*b(zbH;zmwwBIuL)a#Z`A76*m_j8I=AE(g5wYVi29i8 zV;K&6ZsK^m)kE+)_w4Sc+UbWWW@UqFOyhL-O=m3BzPT81-LFpS;?zTh35D@rAJ1%! zT#5-fiS1bkYrq~Cq!9Nj>aiZ{m?5OX8bUzST6o5VpIP`kbMOS$4_THdSyOGjP0!L* zgm|tdGFCSW{kv*^4bg*2sN-OMM+YFy;=vl0s|@D@bzd!vgna`~Xk*~eVKlay>Ncah z5^?ssW#Onc)-o$;-9t<_I2m$agK$6iCyjGXB+#0TbiEUF9fr~gZ8^w8Z7j4P`YLQt zlsxf1(TM7N;!V4`3=Bj1+58KSlmtNP+Bt+?HliIZ9+n(iaT?>9HvGbQj(t0ah$Dub z3?39)y+yr4cwhYaSj~Pp^oU7$Sgbk|wqxrSRJ8a)dA&}*-_*~wP07|=9`GZ)dG}Xh zbS9wu-A%o3GA{F=Rx&tr_`45qcD#xT2Lo6LDdFyI#^U%$$>M{BAxZc7$UH#xL;6n@aX zkd7fVdLadSP>1_Mm^)Va0?m5#bOvzZ{mu85hJIB*we!ZE=|4u^-N^brW3uS#-e%&i zoYC9OsUK=GF6JR=Gq|j&HFmcYn0c>x1Fq#`T%Tp5=TWmp{F0^1{)fWDVcGt5;S7Lt zC#hp4oQAe=tI1;pVG^DUC;*>4Z>+68Ib|R0M3~b2%sV8FraG}JhQm*mG~$n5hVtrW z^vDl!uD*-Xy2{yTdzsCbum9j)ViN<5n)=drbe^E?FR?B}p#<%fXv*M@*#ua|MO~KEl zKd!P>b~SPx9`|j?sIiK9rGmDvK+A(b>@EgRCn?N#WH@;1efs#}X4OjWlEiZkI8ddB z1P~Ztr?dTqi%o@}e3w7MUd9G_eFg)-?7PRSqGEX7T~x*8B&NI~6fO-CggLfsxUZ1* zoN&pFp&e*Y4aO%fc-1Ouq<;>YK>BNg_Xh_uhXC7?adGcY>>g{yN*b5?(=?iaNsa=P z$c&{V7;)eeWET_dNZvuFT^G5%VsJBAoy~I4Fkn21CFl4-Y{ZQnsaXQIR`$6-<2lpQ= zi*8%rY#Y%{yDifWf_m<8A3k$7cX%hLe=I-60N&22`>WUprrMbTa=cK&QD#q*u+r7He{&!O8}%76mjZmtm4h+|+G6ncL`q?!32AWY;u zl~Rm_^K?YFQ9d!RW@M?VBH)>|7>Inkh5D`GVmSE$ZREZ}k>J~&o!oNtDZ&l>t170? z^RPP@!4$@s-@s$>!_DyF)ubiBZz>1FoF_y{B~vC!?2{}{S736-Pu#4aZB+GXC4pR{ z4gd5#{LVhsm;dXdm$TIFSB- zlI%c{>m&49td!%O73KJEsLV*{#%S6T>t~~GrUMzDl{&Ox8ALdrMiI=^*?{)OMOPd7eb(E@$-9#$d4JWiMoUwFO?gc3*3eS>P$&*Z%Q0Tr2)=q}p&e3S zRW{@nEdKmvP#~Oq;IECPE<}JaB?a3>BcGZdQ!{C-M*myOJe+rJFk}?)8{(cJqzj`S7N#jD*Ef8+J#5pb3uX1yf_5} ckU{!j<4qpWppLou_bNwqWi6#5g%_Xx5A3N8ZU6uP diff --git a/docs/source/using-executorch-ios.md b/docs/source/using-executorch-ios.md index 8e075853161..f5d520f9874 100644 --- a/docs/source/using-executorch-ios.md +++ b/docs/source/using-executorch-ios.md @@ -18,7 +18,9 @@ The ExecuTorch Runtime for iOS and macOS (ARM64) is distributed as a collection Link your binary with the ExecuTorch runtime and any backends or kernels used by the exported ML model. It is recommended to link the core runtime to the components that use ExecuTorch directly, and link kernels and backends against the main app target. -**Note:** To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the `executorch_debug` framework. For optimal performance, always link against the Release version of the deliverables (those without the `_debug` suffix), which have all logging overhead removed. +**Note:** You may need to add some extra linker flags for the build settings of the components that links against ExecuTorch backends or kernels to let them register properly at the app startup. See the [Linkage](#Linkage) section for more details. + +**Note:** To access logs, link against the Debug build of the ExecuTorch runtime, i.e., the `executorch_debug` framework. For optimal performance, always link against the Release version of the deliverables (those without the `_debug` suffix), which have all logging overhead removed. See the [Logging](#Logging) section for more details. ### Swift Package Manager @@ -26,7 +28,7 @@ The prebuilt ExecuTorch runtime, backend, and kernels are available as a [Swift #### Xcode -In Xcode, go to `File > Add Package Dependencies`. Paste the URL of the [ExecuTorch repo](https://github.com/pytorch/executorch) into the search bar and select it. 
Make sure to change the branch name to the desired ExecuTorch version in format "swiftpm-<version>", (e.g. "swiftpm-1.0.0"), or a branch name in format "swiftpm-<version>.<year><month><date>" (e.g. "swiftpm-1.1.0-20251101") for a [nightly build](https://ossci-ios.s3.amazonaws.com/list.html) on a specific date. ![](_static/img/swiftpm_xcode1.png) @@ -59,7 +61,7 @@ let package = Package( ], dependencies: [ // Use "swiftpm-<version>.<year><month><date>" branch name for a nightly build. - .package(url: "https://github.com/pytorch/executorch.git", branch: "swiftpm-0.7.0") + .package(url: "https://github.com/pytorch/executorch.git", branch: "swiftpm-1.0.0") ], targets: [ .target( @@ -70,6 +72,10 @@ let package = Package( .product(name: "kernels_optimized", package: "executorch"), // Add other backends and kernels as needed. ]), + linkerSettings: [ + // Force load all symbols from static libraries to trigger backends and kernels registration + .unsafeFlags(["-Wl,-all_load"]) + ] ] ) ``` From 146c8cb79b680827b54a9e984480d369f7618809 Mon Sep 17 00:00:00 2001 From: Anthony Shoumikhin Date: Tue, 21 Oct 2025 10:09:38 -0700 Subject: [PATCH 17/26] Update docs on LMM runner Apple API (#15307) --- docs/source/llm/run-on-ios.md | 154 ++++++++++++++++++++++++++++++++-- 1 file changed, 146 insertions(+), 8 deletions(-) diff --git a/docs/source/llm/run-on-ios.md b/docs/source/llm/run-on-ios.md index 88ad94c38d3..f096995fca9 100644 --- a/docs/source/llm/run-on-ios.md +++ b/docs/source/llm/run-on-ios.md @@ -80,17 +80,22 @@ do { #### Generating -Generate up to a given number of tokens from an initial prompt. The callback block is invoked once per token as it’s produced. +Generate tokens from an initial prompt, configured with an `ExecuTorchLLMConfig` object. The callback block is invoked once per token as it’s produced.
Objective-C: ```objectivec +ExecuTorchLLMConfig *config = [[ExecuTorchLLMConfig alloc] initWithBlock:^(ExecuTorchLLMConfig *c) { + c.temperature = 0.8; + c.sequenceLength = 2048; +}]; + NSError *error = nil; -BOOL success = [runner generate:@"Once upon a time" - sequenceLength:50 - withTokenCallback:^(NSString *token) { - NSLog(@"Generated token: %@", token); - } - error:&error]; +BOOL success = [runner generateWithPrompt:@"Once upon a time" + config:config + tokenCallback:^(NSString *token) { + NSLog(@"Generated token: %@", token); + } + error:&error]; if (!success) { NSLog(@"Generation failed: %@", error); } @@ -99,7 +104,10 @@ if (!success) { Swift: ```swift do { - try runner.generate("Once upon a time", sequenceLength: 50) { token in + try runner.generate("Once upon a time", Config { + $0.temperature = 0.8 + $0.sequenceLength = 2048 + }) { token in print("Generated token:", token) } } catch { @@ -121,6 +129,136 @@ Swift: runner.stop() ``` +#### Resetting + +To clear the prefilled tokens from the KV cache and reset generation stats, call: + +Objective-C: +```objectivec +[runner reset]; +``` + +Swift: +```swift +runner.reset() +``` + +### MultimodalRunner + +The `ExecuTorchLLMMultimodalRunner` class (bridged to Swift as `MultimodalRunner`) provides an interface for loading and running multimodal models that can accept a sequence of text, image, and audio inputs. + +#### Multimodal Inputs + +Inputs are provided as an array of `ExecuTorchLLMMultimodalInput` (or `MultimodalInput` in Swift). You can create inputs from String for text, `ExecuTorchLLMImage` for images (`Image` in Swift), and `ExecuTorchLLMAudio` for audio features (`Audio`) in Swift. + +Objective-C: +```objectivec +ExecuTorchLLMMultimodalInput *textInput = [ExecuTorchLLMMultimodalInput inputWithText:@"What's in this image?"]; + +NSData *imageData = ...; // Your raw image bytes +ExecuTorchLLMImage *image = [[ExecuTorchLLMImage alloc] initWithData:imageData width:336 height:336 channels:3]; +ExecuTorchLLMMultimodalInput *imageInput = [ExecuTorchLLMMultimodalInput inputWithImage:image]; +``` + +Swift: +```swift +let textInput = MultimodalInput("What's in this image?") + +let imageData: Data = ... // Your raw image bytes +let image = Image(data: imageData, width: 336, height: 336, channels: 3) +let imageInput = MultimodalInput(image) + +let audioFeatureData: Data = ... // Your raw audio feature bytes +let audio = Audio(float: audioFeatureData, batchSize: 1, bins: 128, frames: 3000) +let audioInput = MultimodalInput(audio) +``` + +#### Initialization + +Create a runner by specifying the paths to your multimodal model and its tokenizer. + +Objective-C: +```objectivec +NSString *modelPath = [[NSBundle mainBundle] pathForResource:@"llava" ofType:@"pte"]; +NSString *tokenizerPath = [[NSBundle mainBundle] pathForResource:@"llava_tokenizer" ofType:@"bin"]; + +ExecuTorchLLMMultimodalRunner *runner = [[ExecuTorchLLMMultimodalRunner alloc] initWithModelPath:modelPath + tokenizerPath:tokenizerPath]; +``` + +Swift: +```swift +let modelPath = Bundle.main.path(forResource: "llava", ofType: "pte")! +let tokenizerPath = Bundle.main.path(forResource: "llava_tokenizer", ofType: "bin")! + +let runner = MultimodalRunner(modelPath: modelPath, tokenizerPath: tokenizerPath) +``` + +#### Loading + +Explicitly load the model before generation. 
+ +Objective-C: +```objectivec +NSError *error = nil; +BOOL success = [runner loadWithError:&error]; +if (!success) { + NSLog(@"Failed to load: %@", error); +} +``` + +Swift: +```swift +do { + try runner.load() +} catch { + print("Failed to load: \(error)") +} +``` + +#### Generating + +Generate tokens from an ordered array of multimodal inputs. + +Objective-C: +```objectivec +NSArray *inputs = @[textInput, imageInput]; + +ExecuTorchLLMConfig *config = [[ExecuTorchLLMConfig alloc] initWithBlock:^(ExecuTorchLLMConfig *c) { + c.sequenceLength = 768; +}]; + +NSError *error = nil; +BOOL success = [runner generateWithInputs:inputs + config:config + tokenCallback:^(NSString *token) { + NSLog(@"Generated token: %@", token); + } + error:&error]; +if (!success) { + NSLog(@"Generation failed: %@", error); +} +``` + +Swift: +```swift +let inputs = [textInput, imageInput] + +do { + try runner.generate(inputs, Config { + $0.sequenceLength = 768 + }) { token in + print("Generated token:", token) + } +} catch { + print("Generation failed:", error) +} +``` + +#### Stopping and Resetting + +The stop and reset methods for `MultimodalRunner` behave identically to those on `TextRunner`. + ## Demo Get hands-on with our [etLLM iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/llm/apple) to see the LLM runtime APIs in action. From 8711ebd2ba9cd33530883552bec3d959b16ec52d Mon Sep 17 00:00:00 2001 From: Manuel Candales <42380156+manuelcandales@users.noreply.github.com> Date: Tue, 21 Oct 2025 14:20:21 -0400 Subject: [PATCH 18/26] Add Metal backend documentation to Voxtral README (#15273) This PR updates the Voxtral README to document Metal backend support on Apple Silicon. --- examples/models/voxtral/README.md | 137 +++++++++++++++++++++++++++++- 1 file changed, 133 insertions(+), 4 deletions(-) diff --git a/examples/models/voxtral/README.md b/examples/models/voxtral/README.md index 8cac4264bba..f793e8251ef 100644 --- a/examples/models/voxtral/README.md +++ b/examples/models/voxtral/README.md @@ -36,6 +36,64 @@ optimum-cli export executorch \ This exports Voxtral with XNNPack backend acceleration and 4-bit weight/8-bit activation linear quantization. +## CUDA Support +If your environment has CUDA support, you can enable the runner to run on CUDA for improved performance. Follow the export and runtime commands below: + +### Exporting with CUDA +``` +optimum-cli export executorch \ + --model "mistralai/Voxtral-Mini-3B-2507" \ + --task "multimodal-text-to-text" \ + --recipe "cuda" \ + --dtype bfloat16 \ + --device cuda \ + --max_seq_len 1024 \ + --output_dir="voxtral" +``` + +This will generate: +- `model.pte` - The exported model +- `aoti_cuda_blob.ptd` - The CUDA kernel blob required for runtime + +Furthermore, we support several quantization formats on CUDA. +For example, to export Voxtral with int4 weight and int4mm for linear layers, you can use the following command, +``` +optimum-cli export executorch \ + --model "mistralai/Voxtral-Mini-3B-2507" \ + --task "multimodal-text-to-text" \ + --recipe "cuda" \ + --dtype bfloat16 \ + --device cuda \ + --max_seq_len 1024 \ + --qlinear 4w \ + --qlinear_encoder 4w \ + --qlinear_packing_format tile_packed_to_4d \ + --qlinear_encoder_packing_format tile_packed_to_4d \ + --output_dir="voxtral" +``` + +See the "Building the multimodal runner" section below for instructions on building with CUDA support, and the "Running the model" section for runtime instructions. + +## Metal Support +On Apple Silicon, you can enable the runner to run on Metal. 
Follow the export and runtime commands below: + +### Exporting with Metal +``` +optimum-cli export executorch \ + --model "mistralai/Voxtral-Mini-3B-2507" \ + --task "multimodal-text-to-text" \ + --recipe "metal" \ + --dtype bfloat16 \ + --max_seq_len 1024 \ + --output_dir="voxtral" +``` + +This will generate: +- `model.pte` - The exported model +- `aoti_metal_blob.ptd` - The Metal kernel blob required for runtime + +See the "Building the multimodal runner" section below for instructions on building with Metal support, and the "Running the model" section for runtime instructions. + # Running the model To run the model, we will use the Voxtral runner, which utilizes ExecuTorch's MultiModal runner API. The Voxtral runner will do the following things: @@ -52,7 +110,12 @@ We provide a simple way to transform raw audio data into a mel spectrogram by ex ``` # Export a preprocessor that can handle audio up to 5 mins (300s). -python -m executorch.extension.audio.mel_spectrogram --feature_size 128 --stack_output --max_audio_len 300 --output_file voxtral_preprocessor.pte + +python -m executorch.extension.audio.mel_spectrogram \ + --feature_size 128 \ + --stack_output \ + --max_audio_len 300 \ + --output_file voxtral_preprocessor.pte ``` ## Building the multimodal runner @@ -64,6 +127,46 @@ cmake --preset llm -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=cmake-out - cmake -DCMAKE_INSTALL_PREFIX=cmake-out -DBUILD_TESTING=OFF -DCMAKE_BUILD_TYPE=Release -Bcmake-out/examples/models/voxtral examples/models/voxtral && cmake --build cmake-out/examples/models/voxtral -j16 --config Release ``` +### Building for CUDA +``` +# Install ExecuTorch with CUDA support +CMAKE_ARGS="-DEXECUTORCH_BUILD_CUDA=ON" ./install_executorch.sh + +# Build the multimodal runner with CUDA +cmake --preset llm \ + -DEXECUTORCH_BUILD_CUDA=ON \ + -DCMAKE_INSTALL_PREFIX=cmake-out \ + -DCMAKE_BUILD_TYPE=Release \ + -Bcmake-out -S. +cmake --build cmake-out -j16 --target install --config Release + +cmake -DEXECUTORCH_BUILD_CUDA=ON \ + -DCMAKE_BUILD_TYPE=Release \ + -Sexamples/models/voxtral \ + -Bcmake-out/examples/models/voxtral/ +cmake --build cmake-out/examples/models/voxtral --target voxtral_runner --config Release +``` + +### Building for Metal +``` +# Install ExecuTorch with Metal support +CMAKE_ARGS="-DEXECUTORCH_BUILD_METAL=ON" ./install_executorch.sh + +# Build the multimodal runner with Metal +cmake --preset llm \ + -DEXECUTORCH_BUILD_METAL=ON \ + -DCMAKE_INSTALL_PREFIX=cmake-out \ + -DCMAKE_BUILD_TYPE=Release \ + -Bcmake-out -S. +cmake --build cmake-out -j16 --target install --config Release + +cmake -DEXECUTORCH_BUILD_METAL=ON \ + -DCMAKE_BUILD_TYPE=Release \ + -Sexamples/models/voxtral \ + -Bcmake-out/examples/models/voxtral/ +cmake --build cmake-out/examples/models/voxtral --target voxtral_runner --config Release +``` + ## Running the model You can download the `tekken.json` tokenizer from [Voxtral's HuggingFace repo](https://huggingface.co/mistralai/Voxtral-Mini-3B-2507). ``` @@ -71,11 +174,26 @@ You can download the `tekken.json` tokenizer from [Voxtral's HuggingFace repo](h --model_path path/to/model.pte \ --tokenizer_path path/to/tekken.json \ --prompt "What can you tell me about this audio?" 
\ - --audio_path path/to/audio_input.bin \ - --processor_path path/to/voxtral_preprocessor.pte # If you're passing raw audio file in audio_path + --audio_path path/to/audio_input.wav \ + --processor_path path/to/voxtral_preprocessor.pte ``` -Example output: +### Running with preprocessed audio (.bin file) +If you already have a preprocessed mel spectrogram saved as a `.bin` file, you can skip the preprocessor: +``` +./cmake-out/examples/models/voxtral/voxtral_runner \ + --model_path path/to/model.pte \ + --tokenizer_path path/to/tekken.json \ + --prompt "What can you tell me about this audio?" \ + --audio_path path/to/preprocessed_audio.bin +``` + +### Running on CUDA or Metal: +Add the `--data_path` argument to provide the appropriate data blob to the commands above: +- For CUDA: `--data_path path/to/aoti_cuda_blob.ptd` +- For Metal: `--data_path path/to/aoti_metal_blob.ptd` + +# Example output: ``` The speaker in this audio seems to be talking about their concerns about a device called the model or maybe they're just talking about the model in general. They mention that the model was trained with the speaker for inference, which suggests that the model was trained based on the speaker's data or instructions. They also mention that the volume is quite small, which could imply that the speaker is trying to control the volume of the model's output, likely because they are concerned about how loud the model's responses might @@ -89,6 +207,7 @@ I 00:00:24.036822 executorch:stats.h:147] Time to first generated token: I 00:00:24.036828 executorch:stats.h:153] Sampling time over 487 tokens: 0.099000 (seconds) ``` +# Generating audio input You can easily produce an `.bin` for the audio input in Python like this: ``` # t = some torch.Tensor @@ -101,3 +220,13 @@ You can also produce raw audio file as follows (for Option A): ``` ffmpeg -i audio.mp3 -f f32le -acodec pcm_f32le -ar 16000 audio_input.bin ``` + +### Generating a .wav file on Mac +On macOS, you can use the built-in `say` command to generate speech audio and convert it to a `.wav` file: +``` +# Generate audio using text-to-speech +say -o call_samantha_hall.aiff "Call Samantha Hall" + +# Convert to .wav format +afconvert -f WAVE -d LEI16 call_samantha_hall.aiff call_samantha_hall.wav +``` From 2e63fbad1fbb7c26b988bebf02acd8dd0b9dbeea Mon Sep 17 00:00:00 2001 From: Scott Roy <161522778+metascroy@users.noreply.github.com> Date: Tue, 21 Oct 2025 13:34:25 -0700 Subject: [PATCH 19/26] Update success-stories.md (#15309) Update unsloth QAT story to use qwen example, which has more details than llama example. --- docs/source/success-stories.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/source/success-stories.md b/docs/source/success-stories.md index 013f81dcae5..bcf922eb0b4 100644 --- a/docs/source/success-stories.md +++ b/docs/source/success-stories.md @@ -106,7 +106,7 @@ PyTorch-native quantization and optimization library for preparing efficient mod Optimize LLM fine-tuning with faster training and reduced VRAM usage, then deploy efficiently with ExecuTorch. -[Example Model →](https://huggingface.co/metascroy/Llama-3.2-1B-Instruct-int8-int4) +[Example Model →](https://huggingface.co/metascroy/Qwen3-4B-int8-int4-unsloth) ::: :::: @@ -125,4 +125,4 @@ Optimize LLM fine-tuning with faster training and reduced VRAM usage, then deplo - **Demo title** - Brief description of the demo [Try →](#) -*Want to showcase your demo? 
[Submit here →](https://github.com/pytorch/executorch/issues)* \ No newline at end of file +*Want to showcase your demo? [Submit here →](https://github.com/pytorch/executorch/issues)* From 5f6167f72343d70e22179c779067a98428fe415d Mon Sep 17 00:00:00 2001 From: lucylq Date: Tue, 21 Oct 2025 14:35:16 -0700 Subject: [PATCH 20/26] Add gemma to supported models (#15328) --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 66be37bedc8..87bb50b93a1 100644 --- a/README.md +++ b/README.md @@ -198,7 +198,7 @@ ExecuTorch powers on-device AI at scale across Meta's family of apps, VR/AR devi **LLMs:** [Llama 3.2/3.1/3](examples/models/llama/README.md), [Qwen 3](examples/models/qwen3/README.md), [Phi-4-mini](examples/models/phi_4_mini/README.md), [LiquidAI LFM2](examples/models/lfm2/README.md) -**Multimodal:** [Llava](examples/models/llava/README.md) (vision-language), [Voxtral](examples/models/voxtral/README.md) (audio-language) +**Multimodal:** [Llava](examples/models/llava/README.md) (vision-language), [Voxtral](examples/models/voxtral/README.md) (audio-language), [Gemma](examples/models/gemma3) (vision-language) **Vision/Speech:** [MobileNetV2](https://github.com/meta-pytorch/executorch-examples/tree/main/mv2), [DeepLabV3](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3), [Whisper](https://github.com/meta-pytorch/executorch-examples/tree/main/whisper/android/WhisperApp) From cb63da3dffe06802807ca2bf6997dcea9f499d6e Mon Sep 17 00:00:00 2001 From: Gregory Comer Date: Tue, 21 Oct 2025 16:50:43 -0600 Subject: [PATCH 21/26] Update build from source and getting started docs (#15311) ### Summary Update Getting Started and Build from Source Docs: * Integrate Windows steps into the main flow with minor Windows-specific callouts. * Clarify top-level flow for building from source - add a table by use case. * Clarify building ET as a submodule vs standalone build. * Re-order, re-word, and clean up the content related to building from source. * Add info on NDK build for Android. Tracked in https://github.com/pytorch/executorch/issues/14791 and https://github.com/pytorch/executorch/issues/14759. cc @mergennachin @byjlw --- docs/source/getting-started.md | 31 +- .../using-executorch-building-from-source.md | 352 +++++++----------- 2 files changed, 155 insertions(+), 228 deletions(-) diff --git a/docs/source/getting-started.md b/docs/source/getting-started.md index 80672ac9d14..2ae37a6278c 100644 --- a/docs/source/getting-started.md +++ b/docs/source/getting-started.md @@ -10,9 +10,9 @@ The following are required to install the ExecuTorch host libraries, needed to e - Python 3.10 - 3.12 - g++ version 7 or higher, clang++ version 5 or higher, or another C++17-compatible toolchain. -- Linux (x86_64 or ARM64) or macOS (ARM64). +- Linux (x86_64 or ARM64), macOS (ARM64), or Windows (x86_64). - Intel-based macOS systems require building PyTorch from source (see [Building From Source](using-executorch-building-from-source.md) for instructions). - - Windows is supported via WSL. +- On Windows, Visual Studio 2022 or later. Clang build tools are needed to build from source. ## Installation To use ExecuTorch, you will need to install both the Python package and the appropriate platform-specific runtime libraries. Pip is the recommended way to install the ExecuTorch python package. @@ -25,6 +25,7 @@ pip install executorch To build the framework from source, see [Building From Source](using-executorch-building-from-source.md). 
Backend delegates may require additional dependencies. See the appropriate backend documentation for more information. +> **_NOTE:_** On Windows, ExecuTorch requires a [Visual Studio Developer Powershell](https://learn.microsoft.com/en-us/visualstudio/ide/reference/command-prompt-powershell?view=vs-2022). Running from outside of a developer prompt will manifest as errors related to CL.exe.
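+
+As a quick sanity check (a minimal sketch that only verifies the Python package is importable; it does not exercise any backend), you can run:
+
+```bash
+python -c "import executorch; print('ExecuTorch Python package is installed')"
+```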

@@ -44,7 +45,7 @@ ExecuTorch provides hardware acceleration for a wide variety of hardware. The mo
 For mobile use cases, consider using XNNPACK for Android and Core ML or XNNPACK for iOS as a first step. See [Hardware Backends](backends-overview.md) for more information.
 
 ### Exporting
-Exporting is done using Python APIs. ExecuTorch provides a high degree of customization during the export process, but the typical flow is as follows. This example uses the MobileNet V2 image classification model implementation in torchvision, but the process supports any [export-compliant](https://pytorch.org/docs/stable/export.html) PyTorch model. For users working with Hugging Face models,
+Exporting is done using Python APIs. ExecuTorch provides a high degree of customization during the export process, but the typical flow is as follows. This example uses the MobileNet V2 image classification model implementation in torchvision, but the process supports any [export-compliant](https://pytorch.org/docs/stable/export.html) PyTorch model. For Hugging Face models,
 you can find a list of supported models in the [*huggingface/optimum-executorch*](https://github.com/huggingface/optimum-executorch) repo.
 
 ```python
@@ -103,7 +104,7 @@ print(torch.allclose(output[0], eager_reference_output, rtol=1e-3, atol=1e-5))
 
 For complete examples of exporting and running the model, please refer to our [examples GitHub repository](https://github.com/meta-pytorch/executorch-examples/tree/main/mv2/python).
 
-Additionally, if you work with Hugging Face models, the [*huggingface/optimum-executorch*](https://github.com/huggingface/optimum-executorch) library simplifies running these models end-to-end with ExecuTorch, using familiar Hugging Face APIs. Visit the repository for specific examples and supported models.
+Additionally, for Hugging Face models, the [*huggingface/optimum-executorch*](https://github.com/huggingface/optimum-executorch) library simplifies running these models end-to-end with ExecuTorch using familiar Hugging Face APIs. Visit the repository for specific examples and supported models.
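+
+For example, a typical optimum-executorch export from the command line looks roughly like the following. The model name, task, and recipe shown here are illustrative placeholders; consult the optimum-executorch repository for the models and options it actually supports.
+
+```bash
+# Illustrative only: substitute the model, task, and recipe for your use case.
+optimum-cli export executorch \
+  --model "HuggingFaceTB/SmolLM2-135M" \
+  --task "text-generation" \
+  --recipe "xnnpack" \
+  --output_dir="smollm2_executorch"
+```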
@@ -131,7 +132,7 @@ dependencies { ``` #### Runtime APIs -Models can be loaded and run using the `Module` class: +Models can be loaded and run from Java or Kotlin using the `Module` class. ```java import org.pytorch.executorch.EValue; import org.pytorch.executorch.Module; @@ -147,8 +148,11 @@ EValue[] output = model.forward(input_evalue); float[] scores = output[0].toTensor().getDataAsFloatArray(); ``` +Note that the [C++](#c) APIs can be used when targeting Android native. + For a full example of running a model on Android, see the [DeepLabV3AndroidDemo](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo). For more information on Android development, including building from source, a full description of the Java APIs, and information on using ExecuTorch from Android native code, see [Using ExecuTorch on Android](using-executorch-android.md). + ### iOS #### Installation @@ -165,22 +169,27 @@ For more information on iOS integration, including an API reference, logging set ExecuTorch provides C++ APIs, which can be used to target embedded or mobile devices. The C++ APIs provide a greater level of control compared to other language bindings, allowing for advanced memory management, data loading, and platform integration. #### Installation -CMake is the preferred build system for the ExecuTorch C++ runtime. To use with CMake, clone the ExecuTorch repository as a subdirectory of your project, and use CMake's `add_subdirectory("executorch")` to include the dependency. The `executorch` target, as well as kernel and backend targets will be made available to link against. The runtime can also be built standalone to support diverse toolchains. See [Using ExecuTorch with C++](using-executorch-cpp.md) for a detailed description of build integration, targets, and cross compilation. +CMake is the preferred build system for the ExecuTorch C++ runtime. To use with CMake, clone the ExecuTorch repository as a subdirectory of your project, and use CMake's `add_subdirectory("executorch")` to include the dependency. The `executorch` target, as well as kernel and backend targets will be made available to link against. The runtime can also be built standalone to support diverse toolchains. See [Using ExecuTorch with C++](using-executorch-cpp.md) and [Building from Source](using-executorch-building-from-source.md) for a detailed description of build integration, targets, and cross compilation. ``` git clone -b release/1.0 https://github.com/pytorch/executorch.git ``` -```python +```cmake +# Set CMAKE_CXX_STANDARD to 17 or above. +set(CMAKE_CXX_STANDARD 17) + # CMakeLists.txt +set(EXECUTORCH_BUILD_PRESET_FILE ${CMAKE_SOURCE_DIR}/executorch/tools/cmake/preset/llm.cmake) +# Set other ExecuTorch options here. + add_subdirectory("executorch") ... 
target_link_libraries( my_target PRIVATE executorch - extension_module_static - extension_tensor - optimized_native_cpu_ops_lib - xnnpack_backend) + executorch::backends + executorch::extensions + executorch::kernels) ``` diff --git a/docs/source/using-executorch-building-from-source.md b/docs/source/using-executorch-building-from-source.md index c14e05ccf76..da7f1831658 100644 --- a/docs/source/using-executorch-building-from-source.md +++ b/docs/source/using-executorch-building-from-source.md @@ -17,8 +17,8 @@ ExecuTorch is tested on the following systems, although it should also work in s * macOS (x86_64/ARM64) * Big Sur (11.0)+ * Windows (x86_64) + * Windows 10+ with Visual Studio 2022+ and [Clang-CL](https://learn.microsoft.com/en-us/cpp/build/clang-support-msbuild?view=msvc-170) * Windows Subsystem for Linux (WSL) with any of the Linux options - * Windows 10+ with Visual Studio 2022+ (experimental) ### Software Requirements @@ -29,16 +29,19 @@ ExecuTorch is tested on the following systems, although it should also work in s * `g++` version 7 or higher, `clang++` version 5 or higher, or another C++17-compatible toolchain. * `python` version 3.10-3.12 -* `Xcode Command Line Tools` (macOS only) * `ccache` (optional) - A compiler cache that speeds up recompilation +* **macOS** + - `Xcode Command Line Tools` +* **Windows** + - `Visual Studio Clang Tools` - See [Clang/LLVM support in Visual Studio](https://learn.microsoft.com/en-us/cpp/build/clang-support-msbuild?view=msvc-170). -Additional dependencies will be installed automatically when running the [Python installation](#building-the-python-package). +Additional dependencies will be automatically installed when running the [Python installation](#building-the-python-package). Note that the cross-compilable core runtime code supports a wider range of -toolchains, down to C++17. See the [Runtime Overview](runtime-overview.md) for +toolchains, down to C++17. See [Runtime Overview](runtime-overview.md) for portability details. ## Environment Setup - Clone the ExecuTorch repository from GitHub and create a conda environment as follows. Venv can be used in place on conda. + Clone the ExecuTorch repository from GitHub and create a conda environment. Venv can be used in place on conda. ```bash git clone -b release/1.0 https://github.com/pytorch/executorch.git cd executorch @@ -46,6 +49,13 @@ portability details. conda activate executorch ``` +> **_NOTE:_** Addition Windows Setup +> +> ExecuTorch requires symlinks to be enabled to build the Python components. To enable symlinks, run the following command before cloning the repository. Missing symlinks will manifest as an error related to `version.py` when running `pip install .`. See [src/README.md](https://github.com/pytorch/executorch/blob/main/src/README.md) for more information. +> ```bash +> git config --system core.symlinks true +> ``` +
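+
+As noted above, a Python venv can be used in place of conda. A minimal equivalent setup looks like this (the environment name `.venv` is arbitrary):
+
+```bash
+# Create and activate a virtual environment, then make sure pip is current.
+python3 -m venv .venv
+source .venv/bin/activate
+pip install --upgrade pip
+```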
## Building the Python package @@ -62,7 +72,7 @@ portability details. * `--clean`: Removes build artifacts. * `--editable`: Install the ExecuTorch python package in editable mode (see [Editable Install](#editable-install)). * `--minimal`: Install only the minimal set of dependencies required to run ExecuTorch. Do not install dependencies for examples. - * `--use-pt-pinned-commit`: Install the pinned PyTorch commit. When not specified, the latest PyTorch nightly build is installed. + * `--use-pt-pinned-commit`: Install the pinned PyTorch commit or release version. When not specified, the latest PyTorch nightly build is installed. For Intel-based macOS systems, use `--use-pt-pinned-commit --minimal`. As PyTorch does not provide pre-built binaries for Intel Mac, installation requires building PyTorch from source. Instructions can be found in [PyTorch Installation](https://github.com/pytorch/pytorch#installation). @@ -73,6 +83,13 @@ portability details. CMAKE_ARGS="-DEXECUTORCH_BUILD_MPS=ON" ./install_executorch.sh ``` + ## Verify the Build + +To verify that the Python components are installed correctly, run the following command. This will create a file named mv2_xnnpack_fp32.pte in the current directory for the MobileNet V2 model with the XNNPACK backend. If it completes without error, the ExecuTorch Python components are installed successfully. +```bash +python -m executorch.examples.xnnpack.aot_compiler --model_name="mv2" --delegate +``` + ### Editable Install For development, include the `--editable` flag, which allows for local changes to ExecuTorch Python code to be reflected without a re-install. Note that when C++ files are modified, you will need to re-run the full installation to reflect the changes. ```bash @@ -114,47 +131,38 @@ portability details. ## Building the C++ Runtime -The ExecuTorch C++ runtime is built using CMake. It can be compiled standalone to run examples, added as a CMake dependency, or cross-compiled for Android, iOS, or embedded platforms. +The ExecuTorch runtime uses CMake as the build system. When using ExecuTorch from C++ user code with CMake, adding ExecuTorch as a submodule and referencing via CMake `add_subdirectory` will build the runtime as part of the user build. -### Configuring +When user code is not using CMake, the runtime can be built standalone and linked. The CMake options described below apply in both cases. Scripts are also provided for [Android AAR](#cross-compiling-for-android) and [iOS framework](#cross-compiling-for-ios) builds. -Configuration should be done after cloning, pulling the upstream repo, or changing build options. Once this is done, you won't need to do it again until you pull from the upstream repo or modify any CMake-related files. +| Use Case | How to Build | +| :------------------------- | :--------------------------------------------------------------------------------- | +| C++ with user CMake | Use CMake `add_subdirectory`. | +| C++ without user CMake | Bulild ExecuTorch standalone with CMake. Link libraries with user build. | +| Android with Java/Kotlin | Use [scripts/build_android_libraries.sh](#cross-compiling-for-android). | +| Android with C++ | Follow C++ build steps, [cross-compile for Android](#cross-compiling-for-android). | +| iOS | Use [scripts/build_ios_frameworks.sh](#cross-compiling-for-ios). | -```bash -# cd to the root of the executorch repo -cd executorch - -# Clean and configure the CMake build system. It's good practice to do this -# whenever cloning or pulling the upstream repo. 
-./install_executorch.sh --clean -(mkdir cmake-out && cd cmake-out && cmake ..) -``` +### Configuring -### Building +Configuration should be done after cloning, pulling the upstream repo, or changing build options. Once this is done, you won't need to do it again until you pull from the upstream repo or modify any CMake-related files. -Build all targets with `cmake --build`. +When building as a submodule as part of a user CMake build, ExecuTorch CMake options can be specified either as part of the user CMake configuration or in user CMake code. +CMake configuration for standalone runtime build: ```bash -# cd to the root of the executorch repo -cd executorch - -# Build using the configuration that you previously generated under the -# `cmake-out` directory. -# -# NOTE: The `-j` argument specifies how many jobs/processes to use when -# building, and tends to speed up the build significantly. It's typical to use -# "core count + 1" as the `-j` value. -cmake --build cmake-out -j9 +mkdir cmake-out +cmake -B cmake-out --preset [preset] [options] +cmake --build cmake-out -j10 ``` -> **_TIP:_** For faster rebuilds, consider installing ccache (see [Compiler Cache section](#compiler-cache-ccache) above). On first builds, ccache populates its cache. Subsequent builds with the same compiler flags can be significantly faster. - -### Build Presets +#### Build Presets -ExecuTorch provides fine-grained control over what is built, as described in [Build Options](#build-options). These options are grouped into CMake presets to cover common scenarios, while providing the ability to override individual options. Presets can be specified when configuring CMake by specifying `--preset [name]` when configuring. +ExecuTorch provides fine-grained control over what is built, as described in [Build Options](#build-options). These options are grouped into CMake presets to cover common scenarios while preserving the ability to override individual options. Presets can be specified when configuring CMake by specifying `--preset [name]` when configuring. Preset values for common scenarios are listed below. Using a platform preset is recommended to avoid needing to specify many fine-grained build options. + * `android` - Build featuers and backends common for Android targets. * `arm-baremetal` - Build for bare-metal ARM targets. * `ios` - Build features and backends common for iOS targets. * `macos` - Build features and backends common for Mac targets. @@ -163,77 +171,34 @@ Preset values for common scenarios are listed below. Using a platform preset is * `profiling` - Build the ExecuTorch runtime with profiling enabled. * `zephyr` - Build for Zephyr RTOS. +User CMake: +```cmake +set(EXECUTORCH_BUILD_PRESET_FILE ${CMAKE_SOURCE_DIR}/executorch/tools/cmake/preset/llm.cmake) +``` + +Standalone build: ```bash # Configure the build with the ios preset. cmake .. --preset ios ``` -### CMake Targets and Libraries +#### Build Options -To link against the ExecuTorch framework from CMake, the following top-level targets are exposed: - - * `executorch::backends`: Contains all configured backends. - * `executorch::extensions`: Contains all configured extensions. - * `executorch::kernels`: Contains all configured kernel libraries. - -The backends, extensions, and kernels included in these targets are controlled by the various `EXECUTORCH_` CMake options specified by the build. Using these targets will automatically pull in the required dependencies to use the configured features. 
- -### Running an Example Model - -The example `executor_runner` binary can be used to run a model and sanity-check the build. Run the following commands to generate and run a simple model. -You should see the message "Model executed successfully" followed by the output values. +CMake options can be used to for fine-grained control of build type, control which features are built, and configure functionality, such as logging. Options are typically specified during CMake configuration. Default values of each option are set by the active preset, but can be overridden by specifying the option when configuring. -``` bash -python -m examples.portable.scripts.export --model_name="add" -./cmake-out/executor_runner --model_path add.pte -``` +Note that many build options require other options to be enabled. This may require enabling multiple options to enable a given feature. The CMake build output will provide an error message when a required option is not enabled. -``` -I 00:00:00.000526 executorch:executor_runner.cpp:82] Model file add.pte is loaded. -I 00:00:00.000595 executorch:executor_runner.cpp:91] Using method forward -I 00:00:00.000612 executorch:executor_runner.cpp:138] Setting up planned buffer 0, size 48. -I 00:00:00.000669 executorch:executor_runner.cpp:161] Method loaded. -I 00:00:00.000685 executorch:executor_runner.cpp:171] Inputs prepared. -I 00:00:00.000764 executorch:executor_runner.cpp:180] Model executed successfully. -I 00:00:00.000770 executorch:executor_runner.cpp:184] 1 outputs: -Output 0: tensor(sizes=[1], [2.]) +User CMake: +```cmake +set(EXECUTORCH_BUILD_XNNPACK ON) ``` - -### Compiler Cache (ccache) - -ExecuTorch automatically detects and enables [ccache](https://ccache.dev/) if it's installed. This significantly speeds up recompilation by caching previously compiled objects: - -- If ccache is detected, you'll see: `ccache found and enabled for faster builds` -- If ccache is not installed, you'll see: `ccache not found, builds will not be cached` - -To install ccache: +Standalone build: ```bash -# Ubuntu/Debian -sudo apt install ccache - -# macOS -brew install ccache - -# CentOS/RHEL -sudo yum install ccache -# or -sudo dnf install ccache +cmake -DEXECUTORCH_BUILD_XNNPACK=ON ``` -No additional configuration is needed - the build system will automatically use ccache when available. - -See [CMakeLists.txt](https://github.com/pytorch/executorch/blob/main/CMakeLists.txt) - -
- -## Build Options - -CMake options can be used to for fine-grained control of build type, control which features are built, and configure functionality, such as logging. Options are typically specified during CMake configuration. Default values of each option are set by the active preset, but can be overridden by specifying the option when configuring. - -Note that many build options require other options to be enabled. This may require enabling multiple options to enable a given feature. The CMake build output will provide an error message when a required option is not enabled. - -#### Build Type +##### Build Type The CMake build is typically set to `Debug` or `Release`. For production use or profiling, release mode should be used to improve performance and reduce binary size. It disables program verification and executorch logging and adds optimizations flags. The `EXECUTORCH_OPTIMIZE_SIZE` flag can be used to further optimize for size with a small performance tradeoff. @@ -242,7 +207,7 @@ The CMake build is typically set to `Debug` or `Release`. For production use or cmake .. -DCMAKE_BUILD_TYPE=Release ``` -#### Backends +##### Backends Typically, each hardware backend exposes a CMake option to control whether the backend is built. See backend-specific documentation for more details. @@ -262,7 +227,7 @@ Typically, each hardware backend exposes a CMake option to control whether the b cmake .. -DEXECUTORCH_BUILD_XNNPACK=ON -DEXECUTORCH_BUILD_VULKAN=ON ``` -#### Extensions +##### Extensions ExecuTorch extensions provide optional functionality outside of the core runtime. As the core runtime is designed to run in constrained environments, these features are typically disabled by default. Extensions include higher-level APIs (Module and Tensor), multi-threading support (Threadpool), training, and more. @@ -283,7 +248,7 @@ ExecuTorch extensions provide optional functionality outside of the core runtime cmake .. -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON ``` -#### Logging +##### Logging Logging is enabled by default in debug builds and disabled in release. When enabled, the default log level is Info. Both log enable and level can be overriden with options. See [Logging](using-executorch-runtime-integration.md#logging). Disabling logging and decreasing log verbosity will reduce binary size by stripping unused strings from the build. @@ -295,7 +260,39 @@ Logging is enabled by default in debug builds and disabled in release. When enab cmake .. -DEXECUTORCH_ENABLE_LOGGING=ON -DEXECUTORCH_LOG_LEVEL=debug ``` -#### Output Libraries +### Building + +Build all targets with `cmake --build`. + +```bash +# cd to the root of the executorch repo +cd executorch + +# Build using the configuration that you previously generated under the +# `cmake-out` directory. +# +# NOTE: The `-j` argument specifies how many jobs/processes to use when +# building, and tends to speed up the build significantly. It's typical to use +# "core count + 1" as the `-j` value. +cmake --build cmake-out -j9 +``` + +> **_TIP:_** For faster rebuilds, consider installing ccache (see [Compiler Cache section](#compiler-cache-ccache) above). On first builds, ccache populates its cache. Subsequent builds with the same compiler flags can be significantly faster. + +
+ + +## CMake Targets and Output Libraries + +To link against the ExecuTorch framework from CMake, the following top-level targets are exposed: + + * `executorch::backends`: Contains all configured backends. + * `executorch::extensions`: Contains all configured extensions. + * `executorch::kernels`: Contains all configured kernel libraries. + +The backends, extensions, and kernels included in these targets are controlled by the various `EXECUTORCH_` CMake options specified by the build. Using these targets will automatically pull in the required dependencies to use the configured features. + +### Linking Without CMake To link against the runtime from outside of the CMake ecosystem, the runtime can be first built with CMake and then linked directly. A few of the relevant top-level targets are described below. Note that this is a more involved process than using CMake and is only recommended when using CMake is not viable. @@ -314,6 +311,26 @@ To link against the runtime from outside of the CMake ecosystem, the runtime can Backends typically introduce additional targets. See backend-specific documentation for more details. +### Verify the Build + +To verify the build, ExecuTorch optionally compiles a simple, stand-alone model runner to run PTE files with all-one input tensors. It is not enabled by default in most presets, but can be enabled by configuring with `-DEXECUTORCH_BUILD_EXECUTOR_RUNNER=ON -DEXECUTORCH_BUILD_EXTENSION_EVALUE_UTIL=ON`. + +Once compiled, invoke the runner with a sample PTE (such as the one generated by [verifying the Python build](#verify-the-build)). +```bash +cmake-out/executor_runner --model_path=mv2_xnnpack_fp32.pte +``` + +If the runner runs successfully, you should see output similar to the following: +``` +I 00:00:00.043703 executorch:executor_runner.cpp:379] Model executed successfully 1 time(s) in 15.013292 ms. +I 00:00:00.043720 executorch:executor_runner.cpp:383] 1 outputs: +OutputX 0: tensor(sizes=[1, 1000], [ + -0.509859, 0.300644, 0.0953884, 0.147724, 0.231202, 0.338554, 0.206888, -0.0575762, -0.389273, -0.0606864, + ..., + 0.421219, 0.100447, -0.506771, -0.115824, -0.693017, -0.183262, 0.154781, -0.410684, 0.0119296, 0.449713, +]) +``` +
## Cross-Compiling for Android @@ -327,8 +344,7 @@ Backends typically introduce additional targets. See backend-specific documentat ### Building the AAR -With the NDK installed, the `build_android_library.sh` script will build the ExecuTorch Java AAR. This file contains the ExecuTorch Java bindings -and native code. See [Using the AAR File](using-executorch-android.md#using-aar-file) for usage. +With the NDK installed, the `build_android_library.sh` script will build the ExecuTorch Java AAR, which contains ExecuTorch Java bindings. See [Using the AAR File](using-executorch-android.md#using-aar-file) for usage. ```bash export ANDROID_ABIS=arm64-v8a @@ -337,36 +353,21 @@ mkdir -p $BUILD_AAR_DIR sh scripts/build_android_library.sh ``` -### Building the Example Runner +### Android Native -The native executor runner can be cross-compiled for android and deployed via ADB. This step is intended as -an example of CMake cross compilation and is not necessary for integration into an app. +To use the ExecuTorch runtime from native Android C++ code, the runtime can be cross-compiled for Android. The recommended approach is to add ExecuTorch as a submodule of the user project and use [CMake](https://developer.android.com/ndk/guides/cmake) for the native build. The above steps for C++ with CMake can be followed. +For direct cross-compilation, the ExecuTorch runtime can be configured to build with the NDK toolchain: ```bash -# Run the following lines from the `executorch/` folder -./install_executorch.sh --clean -mkdir cmake-android-out && cd cmake-android-out - # point -DCMAKE_TOOLCHAIN_FILE to the location where ndk is installed -cmake -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake -DANDROID_ABI=arm64-v8a .. - -cd .. -cmake --build cmake-android-out -j9 - -adb shell mkdir -p /data/local/tmp/executorch -# push the binary to an Android device -adb push cmake-android-out/executor_runner /data/local/tmp/executorch -# push the model file -adb push add.pte /data/local/tmp/executorch - -adb shell "/data/local/tmp/executorch/executor_runner --model_path /data/local/tmp/executorch/add.pte" +cmake -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake -DANDROID_ABI=arm64-v8a .. ```
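+
+Putting it together, a standalone configure and build of the runtime for 64-bit Android might look like the following sketch. The build directory name and job count are arbitrary, `$ANDROID_NDK` should point at your NDK install, and any of the presets or options described earlier can be added to the configure step.
+
+```bash
+# Configure from the executorch repo root, targeting 64-bit Android via the NDK toolchain.
+cmake -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
+      -DANDROID_ABI=arm64-v8a \
+      -Bcmake-android-out .
+# Build with multiple jobs; adjust -j to your core count.
+cmake --build cmake-android-out -j16
+```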
## Cross-Compiling for iOS -For iOS, we'll build [frameworks](https://developer.apple.com/documentation/xcode/creating-a-multi-platform-binary-framework-bundle) instead of static libraries. The frameworks contain the compiled ExecuTorch runtime and public headers. +iOS binaries are built as [frameworks](https://developer.apple.com/documentation/xcode/creating-a-multi-platform-binary-framework-bundle) instead of static libraries. The frameworks contain the compiled ExecuTorch runtime and public headers. ### Pre-requisites @@ -394,112 +395,29 @@ See backend-specific documentation for more details. 2. Copy over the generated `.xcframework` bundles to your Xcode project, link them against your targets and don't forget to add an extra linker flag `-all_load`. -Check out the [iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo) tutorial for more info. - -
- -## Building on Windows - -ExecuTorch provides experimental support for native Windows builds. - -> **_NOTE:_** All commands should be executed on Windows powershell in administrator mode. - -### Environment Setup - -#### Pre-requisites +See the [iOS Demo App](https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo) tutorial for example usage of the ExecuTorch frameworks. -1. Install miniconda for Windows from the [official website](https://docs.conda.io/en/latest/miniconda.html). -2. Install Git for Windows from the [official website](https://git-scm.com/download/win). -3. Install ClangCL for Windows from the [official website](https://learn.microsoft.com/en-us/cpp/build/clang-support-msbuild?view=msvc-170) or through a [Visual Studio](https://learn.microsoft.com/en-us/cpp/build/clang-support-msbuild?view=msvc-170) or [Visual Studio Code](https://code.visualstudio.com/docs/cpp/config-clang-mac) installation. +## Compiler Cache (ccache) -#### Clone and Configure Environment - -```bash -git config --global core.symlinks true -git clone --recurse -submodules https://github.com/pytorch/executorch.git -cd executorch -conda create -yn et python=3.12 -conda activate et -``` - -If Conda is not available, run conda-hook.ps1, where `$miniconda_dir` is the directory where miniconda is installed. -This is `“C:\Users\\AppData\Local”` by default. - -```bash -$miniconda_dir\\shell\\condabin\\conda-hook.ps1 -``` - -### Build the Python Package - -Run `install_executorch.bat` to build and install the ExecuTorch Python package and runtime bindings. - -```bash -cd executorch -./install_executorch.bat -``` - -> **_NOTE_** Many components are not currently buildable on Windows. These instructions install a very minimal ExecuTorch which can be used as a sanity check. +ExecuTorch automatically detects and enables [ccache](https://ccache.dev/) if it's installed. This significantly speeds up recompilation by caching previously compiled objects: -### Build the C++ Runtime +- If ccache is detected, you'll see: `ccache found and enabled for faster builds` +- If ccache is not installed, you'll see: `ccache not found, builds will not be cached` +To install ccache: ```bash -del -Recurse -Force cmake-out; ` -cmake . ` - -DCMAKE_INSTALL_PREFIX=cmake-out ` - -DPYTHON_EXECUTABLE=$miniconda_dir\\envs\\et\\python.exe ` - -DCMAKE_PREFIX_PATH=$miniconda_dir\\envs\\et\\Lib\\site-packages ` - -DCMAKE_BUILD_TYPE=Release ` - -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON ` - -DEXECUTORCH_BUILD_FLATC=ON ` - -DEXECUTORCH_BUILD_PYBIND=OFF ` - -DEXECUTORCH_BUILD_XNNPACK=ON ` - -DEXECUTORCH_BUILD_KERNELS_LLM=ON ` - -DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON ` - -DEXECUTORCH_BUILD_KERNELS_QUANTIZED=ON ` - -DEXECUTORCH_ENABLE_LOGGING=ON ` - -T ClangCL ` - -Bcmake-out; ` -cmake --build cmake-out -j64 --target install --config Release -``` - -> **_NOTE_** `$miniconda_dir` is the directory where you installed miniconda. This is `“C:\Users\\AppData\Local”` by default. - -### Running an Example Model - -To validate the installation by running a model, create a file named export_mv2.py. Then, run the powershell commands to export and run the model. -The expected output is a tensor of size 1x1000, containing class scores. 
- -```py -# export_mv2.py -import torch -from executorch.exir import to_edge_transform_and_lower -from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner -from torchvision.models import mobilenet_v2 -from torchvision.models.mobilenetv2 import MobileNet_V2_Weights - -mv2 = mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval() -example_inputs = (torch.randn((1, 3, 224, 224)),) - -program = to_edge_transform_and_lower( - torch.export.export(model, example_inputs) -).to_executorch() - -with open("mv2_xnnpack.pte", "wb") as file: - executorch_program.write_to_file(file) -``` +# Ubuntu/Debian +sudo apt install ccache -```bash -python .\\export_mv2.py -.\\cmake-out\\backends\\xnnpack\\Release\\xnn_executor_runner.exe --model_path=.\\mv2_xnnpack.pte -``` +# macOS +brew install ccache -```bash -Output 0: tensor(sizes=[1, 1000], [ - -0.50986, 0.30064, 0.0953904, 0.147726, 0.231205, 0.338555, 0.206892, -0.0575775, … ]) +# CentOS/RHEL +sudo yum install ccache +# or +sudo dnf install ccache ``` -## Next Steps +No additional configuration is needed - the build system will automatically use ccache when available. -* [Selective Build](kernel-library-selective-build.md) to link only kernels used by the program. This can provide significant binary size savings. -* Tutorials on building [Android](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo#executorch-android-demo-app) and [iOS](https://github.com/meta-pytorch/executorch-examples/tree/main/mv3/apple/ExecuTorchDemo) demo apps. -* Tutorials on deploying applications to embedded devices such as [ARM Cortex-M/Ethos-U](backends-arm-ethos-u.md) and [XTensa HiFi DSP](backends-cadence.md). +See [CMakeLists.txt](https://github.com/pytorch/executorch/blob/main/CMakeLists.txt) From 261eb2dfff0ff6b8e9ddb90226c86d0e7c1fd636 Mon Sep 17 00:00:00 2001 From: Abhinayk Date: Tue, 21 Oct 2025 15:57:48 -0700 Subject: [PATCH 22/26] Fix more typos and broken links (#15331) --- docs/source/backends-overview.md | 2 +- .../examples-end-to-end-to-lower-model-to-delegate.md | 2 +- docs/source/getting-started.md | 4 ++-- docs/source/kernel-library-selective-build.md | 2 +- docs/source/using-executorch-android.md | 11 +---------- docs/source/using-executorch-export.md | 2 +- 6 files changed, 7 insertions(+), 16 deletions(-) diff --git a/docs/source/backends-overview.md b/docs/source/backends-overview.md index da2febced3a..4e7eaa1b057 100644 --- a/docs/source/backends-overview.md +++ b/docs/source/backends-overview.md @@ -28,7 +28,7 @@ Backends are the bridge between your exported model and the hardware it runs on. 
| [MediaTek](backends-mediatek) | Android | NPU | MediaTek SoCs | | [ARM EthosU](backends-arm-ethos-u) | Embedded | NPU | ARM MCUs | | [ARM VGF](backends-arm-vgf) | Android | NPU | ARM platforms | -| [OpenVINO](build-run-openvino) | Embedded | CPU/GPU/NPU | Intel SoCs | +| [OpenVINO](build-run-openvino) | Embedded | CPU/GPU/NPU | Intel SoCs | | [NXP](backends-nxp) | Embedded | NPU | NXP SoCs | | [Cadence](backends-cadence) | Embedded | DSP | DSP-optimized workloads | | [Samsung Exynos](/backends/samsung/samsung-overview.md) | Android | NPU | Samsung SoCs | diff --git a/docs/source/examples-end-to-end-to-lower-model-to-delegate.md b/docs/source/examples-end-to-end-to-lower-model-to-delegate.md index 4ef6bcd0d6e..fd14d718531 100644 --- a/docs/source/examples-end-to-end-to-lower-model-to-delegate.md +++ b/docs/source/examples-end-to-end-to-lower-model-to-delegate.md @@ -19,7 +19,7 @@ There are three flows for delegating a program to a backend: is good for reusing lowered modules exported from other flows. 1. Lower parts of a module according to a partitioner. This is good for lowering models that include both lowerable and non-lowerable nodes, and is - the most streamlined procecss. + the most streamlined process. ### Flow 1: Lowering the whole module diff --git a/docs/source/getting-started.md b/docs/source/getting-started.md index 2ae37a6278c..c095c079560 100644 --- a/docs/source/getting-started.md +++ b/docs/source/getting-started.md @@ -1,5 +1,5 @@ # Getting Started with ExecuTorch -This section is intended to describe the necessary steps to take PyTorch model and run it using ExecuTorch. To use the framework, you will typically need to take the following steps: +This section is intended to describe the necessary steps to take a PyTorch model and run it using ExecuTorch. To use the framework, you will typically need to take the following steps: - Install the ExecuTorch python package and runtime libraries. - Export the PyTorch model for the target hardware configuration. - Run the model using the ExecuTorch runtime APIs on your development platform. @@ -77,7 +77,7 @@ Quantization can also be done at this stage to reduce model size and runtime. Qu After successfully generating a .pte file, it is common to use the Python runtime APIs to validate the model on the development platform. This can be used to evaluate model accuracy before running on-device. -For the MobileNet V2 model from torchvision used in this example, image inputs are expected as a normalized, float32 tensor with a dimensions of (batch, channels, height, width). The output See [torchvision.models.mobilenet_v2](https://pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v2.html) for more information on the input and output tensor format for this model. +For the MobileNet V2 model from torchvision used in this example, image inputs are expected as a normalized, float32 tensor with a dimensions of (batch, channels, height, width). The output is a tensor containing class logits. See [torchvision.models.mobilenet_v2](https://pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v2.html) for more information on the input and output tensor format for this model. 
```python import torch diff --git a/docs/source/kernel-library-selective-build.md b/docs/source/kernel-library-selective-build.md index 666206acb94..edec9567b7b 100644 --- a/docs/source/kernel-library-selective-build.md +++ b/docs/source/kernel-library-selective-build.md @@ -61,7 +61,7 @@ gen_selected_ops( ROOT_OPS # comma separated operator names to be selected INCLUDE_ALL_OPS # boolean flag to include all operators OPS_FROM_MODEL # path to a pte file of model to select operators from - DTYPE_SELECTIVE_BUILD # boolean flag to enable dtye selection + DTYPE_SELECTIVE_BUILD # boolean flag to enable dtype selection ) ``` diff --git a/docs/source/using-executorch-android.md b/docs/source/using-executorch-android.md index d417100dc68..712d79af4aa 100644 --- a/docs/source/using-executorch-android.md +++ b/docs/source/using-executorch-android.md @@ -28,19 +28,10 @@ The AAR artifact contains the Java library for users to integrate with their Jav - Optimized kernels - Quantized kernels - LLaMa-specific Custom ops library. -- Comes with two ABI variants, arm64-v8a and x86\_64. +- Comes with two ABI variants, arm64-v8a and x86_64. The AAR library can be used for generic Android device with arm64-v8a or x86_64 architecture. It can be used across form factors, including phones, tablets, tv boxes, etc, as it does not contain any UI components. -XNNPACK backend - -Portable kernels -Optimized kernels -Quantized kernels -LLaMa-specific Custom ops library. -Comes with two ABI variants, arm64-v8a and x86_64. -The AAR library can be used for generic Android device with arm64-v8a or x86_64 architecture. It can be used across form factors, including phones, tablets, tv boxes, etc, as it does not contain any UI components. - ## Using AAR from Maven Central ✅ Recommended for most developers diff --git a/docs/source/using-executorch-export.md b/docs/source/using-executorch-export.md index f0ad7c18467..140a703edc6 100644 --- a/docs/source/using-executorch-export.md +++ b/docs/source/using-executorch-export.md @@ -35,7 +35,7 @@ Commonly used hardware backends are listed below. For mobile, consider using XNN - [XNNPACK (CPU)](backends-xnnpack.md) - [Core ML (iOS)](backends/coreml/coreml-overview.md) - [Metal Performance Shaders (iOS GPU)](backends/mps/mps-overview.md) -- [Vulkan (Android GPU)](backends-vulkan.md) +- [Vulkan (Android GPU)](backends/vulkan/vulkan-overview.md) - [Qualcomm NPU](backends-qualcomm.md) - [MediaTek NPU](backends-mediatek.md) - [Arm Ethos-U NPU](backends-arm-ethos-u.md) From 14e19e87b25f30e747b563d416373be27cceb57c Mon Sep 17 00:00:00 2001 From: Gregory Comer Date: Mon, 20 Oct 2025 15:11:17 -0600 Subject: [PATCH 23/26] Update XNNPACK doc structure and add template (#14873) Prototype an updated doc structure for the XNNPACK backend. Extract a common template out under docs/source/backends/tempate/ This PR updates the doc structure as follows. Under the template, the landing page is required, partitioner, quantization, and op support docs are recommended, and the rest are optional. 
- XNNPACK Backend - Quantization (recommended) - Partitioner APIs (recommended) - Operator Support (optional) - Architecture and Internals (optional) - Tutorials (optional) - Guides (optional) --- .gitignore | 1 - README-wheel.md | 2 +- backends/xnnpack/README.md | 2 +- docs/source/android-xnnpack.md | 2 +- docs/source/backend-delegate-advanced.md | 5 - docs/source/backend-development.md | 1 - docs/source/backends-overview.md | 4 +- docs/source/backends-xnnpack.md | 182 ------------------ docs/source/backends/template/README.md | 53 +++++ .../template/backend-arch-internals.md | 8 + .../template/backend-overview.md} | 46 +++-- .../backends/template/backend-partitioner.rst | 25 +++ .../backends/template/backend-quantization.md | 31 +++ .../template/backend-troubleshooting.md | 15 ++ .../template/guides/backend-basic-guide.md | 3 + .../template/guides/backend-guides.md | 10 + docs/source/backends/template/op-support.csv | 6 + .../tutorials/backend-basic-tutorial.md | 91 +++++++++ .../template/tutorials/backend-tutorials.md | 10 + docs/source/backends/xnnpack/op-support.csv | 47 +++++ .../xnnpack/xnnpack-arch-internals.md} | 18 +- .../xnnpack-delegate-architecture.png | Bin .../xnnpack}/xnnpack-et-flow-diagram.png | Bin .../backends/xnnpack/xnnpack-overview.md | 100 ++++++++++ .../backends/xnnpack/xnnpack-partitioner.rst | 24 +++ .../backends/xnnpack/xnnpack-quantization.md | 94 +++++++++ .../xnnpack/xnnpack-troubleshooting.md | 25 +++ docs/source/desktop-xnnpack.md | 2 +- docs/source/edge-platforms-section.md | 2 +- docs/source/ios-xnnpack.md | 2 +- docs/source/platforms-desktop.md | 6 +- docs/source/quantization-overview.md | 2 +- .../tutorial-xnnpack-delegate-lowering.md | 2 +- .../tutorials_source/bundled_program.bp | Bin 261344 -> 0 bytes docs/source/using-executorch-android.md | 2 +- docs/source/using-executorch-export.md | 2 +- 36 files changed, 594 insertions(+), 231 deletions(-) delete mode 100644 docs/source/backends-xnnpack.md create mode 100644 docs/source/backends/template/README.md create mode 100644 docs/source/backends/template/backend-arch-internals.md rename docs/source/{backend-template.md => backends/template/backend-overview.md} (62%) create mode 100644 docs/source/backends/template/backend-partitioner.rst create mode 100644 docs/source/backends/template/backend-quantization.md create mode 100644 docs/source/backends/template/backend-troubleshooting.md create mode 100644 docs/source/backends/template/guides/backend-basic-guide.md create mode 100644 docs/source/backends/template/guides/backend-guides.md create mode 100644 docs/source/backends/template/op-support.csv create mode 100644 docs/source/backends/template/tutorials/backend-basic-tutorial.md create mode 100644 docs/source/backends/template/tutorials/backend-tutorials.md create mode 100644 docs/source/backends/xnnpack/op-support.csv rename docs/source/{backend-delegates-xnnpack-reference.md => backends/xnnpack/xnnpack-arch-internals.md} (90%) rename docs/source/{ => backends/xnnpack}/xnnpack-delegate-architecture.png (100%) rename docs/source/{ => backends/xnnpack}/xnnpack-et-flow-diagram.png (100%) create mode 100644 docs/source/backends/xnnpack/xnnpack-overview.md create mode 100644 docs/source/backends/xnnpack/xnnpack-partitioner.rst create mode 100644 docs/source/backends/xnnpack/xnnpack-quantization.md create mode 100644 docs/source/backends/xnnpack/xnnpack-troubleshooting.md delete mode 100644 docs/source/tutorials_source/bundled_program.bp diff --git a/.gitignore b/.gitignore index b166f8c9512..54572407274 
100644 --- a/.gitignore +++ b/.gitignore @@ -62,7 +62,6 @@ xcuserdata/ /include/ /share/ /version.py -*.csv *_etdump # Android diff --git a/README-wheel.md b/README-wheel.md index e20b447f96a..719f753039f 100644 --- a/README-wheel.md +++ b/README-wheel.md @@ -11,7 +11,7 @@ The `executorch` pip package is in beta. The prebuilt `executorch.runtime` module included in this package provides a way to run ExecuTorch `.pte` files, with some restrictions: * Only [core ATen operators](docs/source/ir-ops-set-definition.md) are linked into the prebuilt module -* Only the [XNNPACK backend delegate](docs/source/backends-xnnpack.md) is linked into the prebuilt module. +* Only the [XNNPACK backend delegate](docs/source/backends/xnnpack/xnnpack-overview.md) is linked into the prebuilt module. * \[macOS only] [Core ML](docs/source/backends/coreml/coreml-overview.md) and [MPS](docs/source/backends/mps/mps-overview.md) backend are also linked into the prebuilt module. diff --git a/backends/xnnpack/README.md b/backends/xnnpack/README.md index 6e6be7ddb4c..7c6a7ccbc33 100644 --- a/backends/xnnpack/README.md +++ b/backends/xnnpack/README.md @@ -134,4 +134,4 @@ create an issue on [github](https://www.github.com/pytorch/executorch/issues). ## See Also For more information about the XNNPACK Backend, please check out the following resources: - [XNNPACK Backend](https://pytorch.org/executorch/main/backends-xnnpack) -- [XNNPACK Backend Internals](https://pytorch.org/executorch/main/backend-delegates-xnnpack-reference) +- [XNNPACK Backend Internals](https://pytorch.org/executorch/main/backends/xnnpack/backend-delegates-xnnpack-reference) diff --git a/docs/source/android-xnnpack.md b/docs/source/android-xnnpack.md index 315dd747006..4a85dec946b 100644 --- a/docs/source/android-xnnpack.md +++ b/docs/source/android-xnnpack.md @@ -1 +1 @@ -```{include} backends-xnnpack.md +```{include} backends/xnnpack/xnnpack-overview.md diff --git a/docs/source/backend-delegate-advanced.md b/docs/source/backend-delegate-advanced.md index 752bd1cdc02..e82e5ee035d 100644 --- a/docs/source/backend-delegate-advanced.md +++ b/docs/source/backend-delegate-advanced.md @@ -6,10 +6,6 @@ - {doc}`backend-delegates-integration` — Learn how to integrate a backend delegate into ExecuTorch -## XNNPACK Reference - -- {doc}`backend-delegates-xnnpack-reference` — Deep dive into XNNPACK delegate internals and implementation details - ## Dependency Management - {doc}`backend-delegates-dependencies` — Manage third-party dependencies for backend delegates @@ -27,7 +23,6 @@ :maxdepth: 1 backend-delegates-integration -backend-delegates-xnnpack-reference backend-delegates-dependencies compiler-delegate-and-partitioner debug-backend-delegate diff --git a/docs/source/backend-development.md b/docs/source/backend-development.md index ec5ceb3b37a..40c50a8ad11 100644 --- a/docs/source/backend-development.md +++ b/docs/source/backend-development.md @@ -4,7 +4,6 @@ :maxdepth: 1 backend-delegates-integration -backend-delegates-xnnpack-reference backend-delegates-dependencies compiler-delegate-and-partitioner debug-backend-delegate diff --git a/docs/source/backends-overview.md b/docs/source/backends-overview.md index 4e7eaa1b057..ddb55f2afec 100644 --- a/docs/source/backends-overview.md +++ b/docs/source/backends-overview.md @@ -20,7 +20,7 @@ Backends are the bridge between your exported model and the hardware it runs on. 
| Backend | Platform(s) | Hardware Type | Typical Use Case | |-----------------------------------------------------------------|---------------------|---------------|---------------------------------| -| [XNNPACK](backends-xnnpack) | All | CPU | General-purpose, fallback | +| [XNNPACK](backends/xnnpack/xnnpack-overview.md) | All | CPU | General-purpose, fallback | | [Core ML](/backends/coreml/coreml-overview.md) | iOS, macOS | NPU/GPU/CPU | Apple devices, high performance | | [Metal Performance Shaders](/backends/mps/mps-overview.md) | iOS, macOS | GPU | Apple GPU acceleration | | [Vulkan ](/backends/vulkan/vulkan-overview.md) | Android | GPU | Android GPU acceleration | @@ -50,7 +50,7 @@ Backends are the bridge between your exported model and the hardware it runs on. :hidden: :caption: Backend Overview -backends-xnnpack +backends/xnnpack/xnnpack-overview backends/coreml/coreml-overview backends/mps/mps-overview backends/vulkan/vulkan-overview diff --git a/docs/source/backends-xnnpack.md b/docs/source/backends-xnnpack.md deleted file mode 100644 index 42e76741ec8..00000000000 --- a/docs/source/backends-xnnpack.md +++ /dev/null @@ -1,182 +0,0 @@ -# XNNPACK Backend - -The XNNPACK delegate is the ExecuTorch solution for CPU execution on mobile CPUs. [XNNPACK](https://github.com/google/XNNPACK/tree/master) is a library that provides optimized kernels for machine learning operators on Arm and x86 CPUs. - -## Features - -- Wide operator support on Arm and x86 CPUs, available on any modern mobile phone. -- Support for a wide variety of quantization schemes and quantized operators. -- Supports fp32 and fp16 activations. -- Supports 8-bit quantization. - -## Target Requirements - -- ARM64 on Android, iOS, macOS, Linux, and Windows. -- ARMv7 (with NEON) on Android. -- ARMv6 (with VFPv2) on Linux. -- x86 and x86-64 (up to AVX512) on Windows, Linux, Android. - -## Development Requirements - -The XNNPACK delegate does not introduce any development system requirements beyond those required by -the core ExecuTorch runtime. - ----- - -## Using the XNNPACK Backend - -To target the XNNPACK backend during the export and lowering process, pass an instance of the `XnnpackPartitioner` to `to_edge_transform_and_lower`. The example below demonstrates this process using the MobileNet V2 model from torchvision. - -```python -import torch -import torchvision.models as models -from torchvision.models.mobilenetv2 import MobileNet_V2_Weights -from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner -from executorch.exir import to_edge_transform_and_lower - -mobilenet_v2 = models.mobilenetv2.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval() -sample_inputs = (torch.randn(1, 3, 224, 224), ) - -et_program = to_edge_transform_and_lower( - torch.export.export(mobilenet_v2, sample_inputs), - partitioner=[XnnpackPartitioner()], -).to_executorch() - -with open("mv2_xnnpack.pte", "wb") as file: - et_program.write_to_file(file) -``` - -### Partitioner API - -The XNNPACK partitioner API allows for configuration of the model delegation to XNNPACK. Passing an `XnnpackPartitioner` instance with no additional parameters will run as much of the model as possible on the XNNPACK backend. This is the most common use-case. For advanced use cases, the partitioner exposes the following options via the [constructor](https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/xnnpack_partitioner.py#L31): - - - `configs`: Control which operators are delegated to XNNPACK. 
By default, all available operators all delegated. See [../config/\_\_init\_\_.py](https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/config/__init__.py#L66) for an exhaustive list of available operator configs. - - `config_precisions`: Filter operators by data type. By default, delegate all precisions. One or more of `ConfigPrecisionType.FP32`, `ConfigPrecisionType.STATIC_QUANT`, or `ConfigPrecisionType.DYNAMIC_QUANT`. See [ConfigPrecisionType](https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/config/xnnpack_config.py#L24). - - `per_op_mode`: If true, emit individual delegate calls for every operator. This is an advanced option intended to reduce memory overhead in some contexts at the cost of a small amount of runtime overhead. Defaults to false. - - `verbose`: If true, print additional information during lowering. - -### Testing the Model - -After generating the XNNPACK-delegated .pte, the model can be tested from Python using the ExecuTorch runtime python bindings. This can be used to sanity check the model and evaluate numerical accuracy. See [Testing the Model](using-executorch-export.md#testing-the-model) for more information. - ----- - -## Quantization - -The XNNPACK delegate can also be used as a backend to execute symmetrically quantized models. To quantize a PyTorch model for the XNNPACK backend, use the `XNNPACKQuantizer`. `Quantizers` are backend specific, which means the `XNNPACKQuantizer` is configured to quantize models to leverage the quantized operators offered by the XNNPACK Library. - -### Supported Quantization Schemes -The XNNPACK delegate supports the following quantization schemes: - -- 8-bit symmetric weights with 8-bit asymmetric activations (via the PT2E quantization flow). - - Supports both static and dynamic activations. - - Supports per-channel and per-tensor schemes. - - Supports linear, convolution, add, mul, cat, and adaptive avg pool 2d operators. - -Weight-only quantization is not currently supported on XNNPACK. - -### 8-bit Quantization using the PT2E Flow - -To perform 8-bit quantization with the PT2E flow, perform the following steps prior to exporting the model: - -1) Create an instance of the `XnnpackQuantizer` class. Set quantization parameters. -2) Use `torch.export.export` to prepare for quantization. -3) Call `prepare_pt2e` to prepare the model for quantization. -4) For static quantization, run the prepared model with representative samples to calibrate the quantized tensor activation ranges. -5) Call `convert_pt2e` to quantize the model. -6) Export and lower the model using the standard flow. - -The output of `convert_pt2e` is a PyTorch model which can be exported and lowered using the normal flow. As it is a regular PyTorch model, it can also be used to evaluate the accuracy of the quantized model using standard PyTorch techniques. 
- -```python -import torch -import torchvision.models as models -from torchvision.models.mobilenetv2 import MobileNet_V2_Weights -from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import XNNPACKQuantizer, get_symmetric_quantization_config -from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner -from executorch.exir import to_edge_transform_and_lower -from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e - -model = models.mobilenetv2.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval() -sample_inputs = (torch.randn(1, 3, 224, 224), ) - -qparams = get_symmetric_quantization_config(is_per_channel=True) # (1) -quantizer = XNNPACKQuantizer() -quantizer.set_global(qparams) - -training_ep = torch.export.export(model, sample_inputs).module() # (2) -prepared_model = prepare_pt2e(training_ep, quantizer) # (3) - -for cal_sample in [torch.randn(1, 3, 224, 224)]: # Replace with representative model inputs - prepared_model(cal_sample) # (4) Calibrate - -quantized_model = convert_pt2e(prepared_model) # (5) - -et_program = to_edge_transform_and_lower( # (6) - torch.export.export(quantized_model, sample_inputs), - partitioner=[XnnpackPartitioner()], -).to_executorch() -``` - -See [PyTorch 2 Export Post Training Quantization](https://docs.pytorch.org/ao/main/tutorials_source/pt2e_quant_ptq.html) for more information. - -### LLM quantization with quantize_ - -The XNNPACK backend also supports quantizing models with the [torchao](https://github.com/pytorch/ao) quantize_ API. This is most commonly used for LLMs, requiring more advanced quantization. Since quantize_ is not backend aware, it is important to use a config that is compatible with CPU/XNNPACK: - -* Quantize embeedings with IntxWeightOnlyConfig (with weight_dtype torch.int2, torch.int4, or torch.int8, using PerGroup or PerAxis granularity) -* Quantize linear layers with Int8DynamicActivationIntxWeightConfig (with weight_dtype=torch.int4, using PerGroup or PerAxis granularity) - -Below is a simple example, but a more detailed tutorial including accuracy evaluation on popular LLM benchmarks can be found in the [torchao documentation](https://docs.pytorch.org/ao/main/serving.html#mobile-deployment-with-executorch). - -```python -from torchao.quantization.granularity import PerGroup, PerAxis -from torchao.quantization.quant_api import ( - IntxWeightOnlyConfig, - Int8DynamicActivationIntxWeightConfig, - quantize_, -) - -# Quantize embeddings with 8-bits, per channel -embedding_config = IntxWeightOnlyConfig( - weight_dtype=torch.int8, - granularity=PerAxis(0), -) -qunatize_( - eager_model, - lambda m, fqn: isinstance(m, torch.nn.Embedding), -) - - -# Quatize linear layers with 8-bit dynamic activations and 4-bit weights -linear_config = Int8DynamicActivationIntxWeightConfig( - weight_dtype=torch.int4, - weight_granularity=PerGroup(32), -) -quantize_(eager_model, linear_config) -``` - ----- - -## Runtime Integration - -To run the model on-device, use the standard ExecuTorch runtime APIs. See [Running on Device](getting-started.md#running-on-device) for more information. - -The XNNPACK delegate is included by default in the published Android, iOS, and pip packages. When building from source, pass `-DEXECUTORCH_BUILD_XNNPACK=ON` when configuring the CMake build to compile the XNNPACK backend. - -To link against the backend, add the `xnnpack_backend` CMake target as a build dependency, or link directly against `libxnnpack_backend`. 
Due to the use of static registration, it may be necessary to link with whole-archive. This can typically be done by passing `"$"` to `target_link_libraries`. - -``` -# CMakeLists.txt -add_subdirectory("executorch") -... -target_link_libraries( - my_target - PRIVATE executorch - extension_module_static - extension_tensor - optimized_native_cpu_ops_lib - xnnpack_backend) -``` - -No additional steps are necessary to use the backend beyond linking the target. Any XNNPACK-delegated .pte file will automatically run on the registered backend. diff --git a/docs/source/backends/template/README.md b/docs/source/backends/template/README.md new file mode 100644 index 00000000000..e7cb037bd6c --- /dev/null +++ b/docs/source/backends/template/README.md @@ -0,0 +1,53 @@ +# Backend Documentation Template + +This template provides a standardized structure and starting point for backend documentation. It is intended to provide a uniform experience for users while allowing for backends to customize their documentation as needed. + +## Template Structure + +The template includes the following files: + +### Required Pages + +- `backend-overview.md` - Main backend overview and introduction + +### Recommended Pages + +- `backend-quantization.md` - Quantization support and API documentation +- `backend-partitioner.md` - Partitioner API reference +- `op-support.csv` - Operator support data in CSV format + +### Optional Pages (and Subsections) + +- `backend-troubleshooting.md` - Common issues and troubleshooting guide +- `backend-op-support.rst` - Operator support documentation (RST format) +- `backend-arch-internals.md` - Architecture and internals documentation +- `tutorials/backend-tutorials.md` - Tutorial sub-section + - Use this sub-section to provide tutorials for your backend. + - Tutorials should explain how a user can accomplish a task, in a step by step manner. + - Some examples might include: + - An end to end example of lowering and running a model on a specific platform. +- `guides/backend-guides.md` - Guides sub-section + - Use this sub-section to provide guides or how-tos for backend-specific functionality. + - Guides should focus on providing information and building conceptual understanding, rather than giving step by step directions. + - Some examples might include: + - LLM attention management / static attention + - Performance optimization guide + +## Using the Template + +To use this template for a new backend: + +1. Copy the entire `template` directory contents to your backend's documentation directory +2. Rename files to match your backend name (e.g., `backend-overview.md` → `mybackend-overview.md`) +3. Populate the content for your backend (a copy-and-rename sketch for steps 1 and 2 is shown below). + +### Additional Customization + +You may need to: +- Add backend-specific sections to any file +- Remove sections that don't apply to your backend +- Update the operator support CSV with your backend's supported operators +- Add backend-specific images or diagrams +- Update cross-references and links + +Try to keep the landing page (`backend-overview.md`) simple and straightforward. Use the child pages and sections to provide more detailed information.
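The copy-and-rename steps from the template README can be scripted. A minimal sketch, assuming the template lives at `docs/source/backends/template` and the new backend is called `mybackend` (both the path and the name are illustrative assumptions, not repo tooling):

```python
# Hypothetical helper for steps 1 and 2 of "Using the Template": copy the
# template directory and rename the generic "backend-" files for a new backend.
import shutil
from pathlib import Path

TEMPLATE_DIR = Path("docs/source/backends/template")
BACKEND = "mybackend"  # illustrative backend name

dest = TEMPLATE_DIR.parent / BACKEND
shutil.copytree(TEMPLATE_DIR, dest)

# backend-overview.md -> mybackend-overview.md, and so on; files such as
# op-support.csv that do not start with "backend-" are left untouched.
for path in sorted(p for p in dest.rglob("backend-*") if p.is_file()):
    path.rename(path.with_name(path.name.replace("backend-", f"{BACKEND}-", 1)))
```

Step 3, filling in placeholders such as `{BACKEND_NAME}`, remains a manual edit.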
diff --git a/docs/source/backends/template/backend-arch-internals.md b/docs/source/backends/template/backend-arch-internals.md new file mode 100644 index 00000000000..66c4a27eb4e --- /dev/null +++ b/docs/source/backends/template/backend-arch-internals.md @@ -0,0 +1,8 @@ +# {BACKEND_NAME} Architecture and Internals + +This page covers internal implementation details of the backend, and is mainly aimed at contributors and heavy power users. This is an optional page for each backend and has no set structure. + +Some topics to consider: + * High-level design of the backend + * Details on the lowering flow + * Internal debugging tools and techniques diff --git a/docs/source/backend-template.md b/docs/source/backends/template/backend-overview.md similarity index 62% rename from docs/source/backend-template.md rename to docs/source/backends/template/backend-overview.md index bf992c1ffab..666b70e1584 100644 --- a/docs/source/backend-template.md +++ b/docs/source/backends/template/backend-overview.md @@ -4,7 +4,7 @@ Provide a brief overview/description of the backend. At a high-level, what does ## Features -List high-level features of backend, such as general operator and hardware support. +List high-level features of backend, such as operator and hardware support. ## Target Requirements @@ -18,27 +18,37 @@ What software and hardware is needed to create a .PTE file targeting this backen This section describes the steps users need to take in order to generate a .PTE targeting this backend. Include a full code sample for exporting and lowering a model to this backend. Make sure relevant imports for the backend partitioner are included. -### Partitioner API +## Runtime Integration -What options, if any, does the partitioner take? Are there any other export-time configurations that can be applied? Document each option. +This section is intended to tell the user all of the steps they'll need to take to be able to run a .PTE file on-device that is targeting the given backend. +- What CMake targets should they link to? +- How is this backend compiled from source? +- Is the backend bundled by default in iOS and/or Android pre-built libraries? -### Quantization +## Reference -What quantization schemes does this backend support? Consider including the following, as appropriate. -- What operators are supported? -- Number of bits? -- Static vs dynamic activations? -- Weight only vs activations + weights? -- Symmetric vs asymmetric weights? -- Per-tensor, per-chanel, group/blockwise? +**→{doc}`backend-partitioner` — Partitioner options.** -If using a PT2E quantizer, document how to initialize the quantizer and all relevant configs and options. +**→{doc}`backend-quantization` — Supported quantization schemes.** -Include a code snippet demonstrating how to perform quantization for this backend. Document, or link to, a description of the parameters that the user can specify. +**→{doc}`backend-troubleshooting` — Debug common issues.** -## Runtime Integration +**→{doc}`backend-arch-internals` — Backend internals.** -This section is intended to tell the user all of the steps they'll need to take to be able to run a .PTE file on-device that is targeting the given backend. -- What CMake targets should they link to? -- How is this backend compiled from source? -- Is the backend bundled by default in iOS and/or Android pre-built libraries? 
+**→{doc}`tutorials/backend-tutorials` — Tutorials.** + +**→{doc}`guides/backend-guides` — Guides.** + +```{toctree} +:maxdepth: 2 +:hidden: +:caption: {BACKEND} Backend + +backend-troubleshooting +backend-partitioner +backend-quantization +backend-op-support +backend-arch-internals +tutorials/backend-tutorials +guides/backend-guides +``` diff --git a/docs/source/backends/template/backend-partitioner.rst b/docs/source/backends/template/backend-partitioner.rst new file mode 100644 index 00000000000..981e5744aed --- /dev/null +++ b/docs/source/backends/template/backend-partitioner.rst @@ -0,0 +1,25 @@ +========================== +{BACKEND_NAME} Partitioner API +========================== + +Document the partitioner API for the backend, including configuration options and compile specs. + +- ``option1``: Description of the option and values. +- ``option2``: Description of the second option. +- ``option3``: Description of the third option. + +{ADDITIONAL_PARTITIONER_DETAILS} + +================ +Operator Support +================ + +This page lists the operators supported by the {BACKEND_NAME} backend. Operators are the building blocks of the ML model. See `IRs `_ for more information on the PyTorch operator set. + +{OPERATOR_SUPPORT_NOTES} + +.. csv-table:: Operator Support + :file: op-support.csv + :header-rows: 1 + :widths: 20 15 30 30 + :align: center diff --git a/docs/source/backends/template/backend-quantization.md b/docs/source/backends/template/backend-quantization.md new file mode 100644 index 00000000000..4997a56e248 --- /dev/null +++ b/docs/source/backends/template/backend-quantization.md @@ -0,0 +1,31 @@ +# {BACKEND_NAME} Quantization + +Document quantization schemes and flows for the backend. This should include a description of each scheme and a code example to perform quantization. Example sections for PT2E and quantize_ are included below, to be replaced with details for the target backend. + +For each supported quantization scheme, include the following: + * What is the quantization scheme? + * How are weights quantized? + * How are activations quantized? Static or dynamic? + * How many bits? + * What is the granularity? Per-tensor, per-channel, group/block-wise? + * What are the steps to quantize a model with this scheme? + * Include a code sample. + * If the quantization flow only supports a small set of operators - for example, linear only - note this. + +### Supported Quantization Schemes +The {BACKEND_NAME} delegate supports the following quantization schemes: + +- {QUANTIZATION_SCHEME_1} +- {QUANTIZATION_SCHEME_2} + +### {QUANTIZATION_METHOD_1} using the PT2E Flow + +[Description] + +[Code Sample] + +### LLM Quantization with quantize_ + +[Description] + +[Code Sample] diff --git a/docs/source/backends/template/backend-troubleshooting.md b/docs/source/backends/template/backend-troubleshooting.md new file mode 100644 index 00000000000..851c04f34ea --- /dev/null +++ b/docs/source/backends/template/backend-troubleshooting.md @@ -0,0 +1,15 @@ +# {BACKEND_NAME} Troubleshooting + +This page describes common issues that you may encounter when using the {BACKEND_NAME} backend and how to debug and resolve them.
+ +## {COMMON_ISSUE_1} + +{ISSUE_DESCRIPTION_1} + +{SOLUTION_STEPS_1} + +## {COMMON_ISSUE_2} + +{ISSUE_DESCRIPTION_2} + +{SOLUTION_STEPS_2} diff --git a/docs/source/backends/template/guides/backend-basic-guide.md b/docs/source/backends/template/guides/backend-basic-guide.md new file mode 100644 index 00000000000..44f86d8bd4d --- /dev/null +++ b/docs/source/backends/template/guides/backend-basic-guide.md @@ -0,0 +1,3 @@ +# Using {FEATURE} on {BACKEND_NAME} + +This is a placeholder guide. diff --git a/docs/source/backends/template/guides/backend-guides.md b/docs/source/backends/template/guides/backend-guides.md new file mode 100644 index 00000000000..dbeaf25742a --- /dev/null +++ b/docs/source/backends/template/guides/backend-guides.md @@ -0,0 +1,10 @@ +# {BACKEND_NAME} Guides + +**→{doc}`{backend_name}-basic-guide` — Guide description.** + +```{toctree} +:hidden: +:maxdepth: 1 + +{backend_name}-basic-guide +``` diff --git a/docs/source/backends/template/op-support.csv b/docs/source/backends/template/op-support.csv new file mode 100644 index 00000000000..66af56d6a44 --- /dev/null +++ b/docs/source/backends/template/op-support.csv @@ -0,0 +1,6 @@ +Operator,Compute DType,Quantization,Constraints +{OPERATOR_1},{DTYPE_SUPPORT_1},{QUANTIZATION_SUPPORT_1},{CONSTRAINTS_1} +{OPERATOR_2},{DTYPE_SUPPORT_2},{QUANTIZATION_SUPPORT_2},{CONSTRAINTS_2} +{OPERATOR_3},{DTYPE_SUPPORT_3},{QUANTIZATION_SUPPORT_3},{CONSTRAINTS_3} +{OPERATOR_4},{DTYPE_SUPPORT_4},{QUANTIZATION_SUPPORT_4},{CONSTRAINTS_4} +{OPERATOR_5},{DTYPE_SUPPORT_5},{QUANTIZATION_SUPPORT_5},{CONSTRAINTS_5} diff --git a/docs/source/backends/template/tutorials/backend-basic-tutorial.md b/docs/source/backends/template/tutorials/backend-basic-tutorial.md new file mode 100644 index 00000000000..23d76857116 --- /dev/null +++ b/docs/source/backends/template/tutorials/backend-basic-tutorial.md @@ -0,0 +1,91 @@ +# Preparing a Model for {BACKEND_NAME} + +This is a placeholder tutorial. + +## Step 1: Environment Setup + +This tutorial is intended to be run from a {SUPPORTED_HOST_OS} and uses Conda for Python environment management. For full setup details and system requirements, see [Getting Started with ExecuTorch](/getting-started). + +Create a Conda environment and install the ExecuTorch Python package. +```bash +conda create -y --name executorch python=3.12 +conda activate executorch +pip install executorch +``` + +{ADDITIONAL_SETUP_STEPS} + +## Step 2: Model Preparation + +Create a python file named `export_{model_filename}.py`. This script will be responsible for loading the {EXAMPLE_MODEL} model from {MODEL_SOURCE} and creating a {BACKEND_NAME}-targeted .pte file. + +```py +# export_{model_filename}.py +from executorch.backends.{backend_name}.partition.{backend_name}_partitioner import {BackendName}Partitioner +from executorch.exir import to_edge_transform_and_lower +import torch +import {MODEL_IMPORT} +``` + +### Model Instantiation and Example Inputs + +Instantiate the {EXAMPLE_MODEL} model from [{MODEL_SOURCE}]({MODEL_SOURCE_URL}). The export process also needs an example model input to trace the model. The model takes {MODEL_INPUT_DESCRIPTION}, so we'll create {INPUT_TUPLE_DESCRIPTION}. +```py +model = {MODEL_INSTANTIATION_CODE} +example_inputs = ({EXAMPLE_INPUTS},) +``` + +### Lower the Model + +Next, export and lower the model to ExecuTorch. Note that the `{BackendName}Partitioner` passed to the `partitioner` parameter tells ExecuTorch to target the {BACKEND_NAME} backend.
+```py +exported_program = torch.export.export(model, example_inputs) + +executorch_program = to_edge_transform_and_lower( + exported_program, + partitioner=[{BackendName}Partitioner()], +).to_executorch() + +executorch_program.save("{model_filename}_{backend_name}.pte") +``` + +### Run the Script + +Save the above script to export_{model_filename}.py and run the script. You should see a file named `{model_filename}_{backend_name}.pte` in the current directory. +```bash +python export_{model_filename}.py +``` + +## Step 3: Running the Model + +The .pte file created in the previous step can be run on a variety of devices, including {SUPPORTED_PLATFORMS}. ExecuTorch provides runtime APIs and language bindings for a variety of platforms. This tutorial will demonstrate running the model on a desktop using the Python runtime. + +### Smoke Test + +First, we'll verify that the model loads and runs correctly by running the model with {TEST_INPUT_DESCRIPTION}. Create a new script, named `run_{model_filename}.py`, and add the following code. +```py +# run_{model_filename}.py + +from executorch.runtime import Runtime +import torch + +runtime = Runtime.get() + +input_tensor = {TEST_INPUT_TENSOR} +program = runtime.load_program("{model_filename}_{backend_name}.pte") +method = program.load_method("forward") +outputs = method.execute([input_tensor])[0] + +print(outputs) +``` + +When running the script with `python run_{model_filename}.py`, you should see {EXPECTED_OUTPUT_DESCRIPTION} printed to the console. +``` +{EXPECTED_OUTPUT_EXAMPLE} +``` + +# Next Steps + + - See [Edge Platforms](/edge-platforms-section) to deploy the .pte file on {SUPPORTED_PLATFORMS}. + - See [Model Export and Lowering](/using-executorch-export) for more information on model preparation. + - See [{BACKEND_NAME} Overview](/backends/{backend_name}/{backend_name}-overview) for more information about the {BACKEND_NAME} backend. 
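Since the template's smoke test only prints the raw output, a backend-specific tutorial may also want to check numerics against eager PyTorch. A minimal, self-contained sketch of that pattern, using the XNNPACK partitioner as a stand-in for `{BackendName}Partitioner` and a toy linear model (both are illustrative assumptions, not part of the template):

```python
# Illustrative end-to-end check: lower a toy model, run it with the Python
# runtime, and compare against eager PyTorch.
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower
from executorch.runtime import Runtime

model = torch.nn.Linear(16, 4).eval()
example_inputs = (torch.randn(1, 16),)

et_program = to_edge_transform_and_lower(
    torch.export.export(model, example_inputs),
    partitioner=[XnnpackPartitioner()],
).to_executorch()
et_program.save("linear_xnnpack.pte")

method = Runtime.get().load_program("linear_xnnpack.pte").load_method("forward")
et_out = method.execute([example_inputs[0]])[0]

with torch.no_grad():
    ref_out = model(*example_inputs)

# Delegated kernels may differ slightly from eager, so compare with a tolerance.
print("outputs match:", torch.allclose(et_out, ref_out, atol=1e-4))
```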
diff --git a/docs/source/backends/template/tutorials/backend-tutorials.md b/docs/source/backends/template/tutorials/backend-tutorials.md new file mode 100644 index 00000000000..15e226dd5c5 --- /dev/null +++ b/docs/source/backends/template/tutorials/backend-tutorials.md @@ -0,0 +1,10 @@ +# {BACKEND_NAME} Tutorials + +**→{doc}`{backend_name}-basic-tutorial` — Lower and run a model on the {BACKEND_NAME} backend.** + +```{toctree} +:hidden: +:maxdepth: 1 + +{backend_name}-basic-tutorial +``` diff --git a/docs/source/backends/xnnpack/op-support.csv b/docs/source/backends/xnnpack/op-support.csv new file mode 100644 index 00000000000..5350fed8d12 --- /dev/null +++ b/docs/source/backends/xnnpack/op-support.csv @@ -0,0 +1,47 @@ +Operator,Compute DType,Quantization,Constraints +_to_dim_order_copy,"fp16, fp32",,no dtype conversion +abs,"fp16, fp32",, +add,"fp16, fp32",PT2E: static int8,alpha=1 +avg_pool2d,"fp16, fp32",PT2E: static int8,"ceil_mode=False, count_include_pad=False, divisor_override=pooling_region" +bmm,"fp16, fp32",, +cat,"fp16, fp32",PT2E: static int8, +ceil,"fp16, fp32",, +clamp,"fp16, fp32",, +constant_pad_nd,"fp16, fp32",,no negative padding values +conv1d,"fp16, fp32","PT2E: static or dynamic int8 activations +8-bit weights, symmetric per-tensor or per-channel",constant weights +conv2d,"fp16, fp32","PT2E: static or dynamic int8 activations +8-bit weights, symmetric per-tensor or per-channel",constant weights +dequantize_per_tensor,"fp16, fp32",, +div,"fp16, fp32",, +elu,"fp16, fp32",, +exp,"fp16, fp32",, +floor,"fp16, fp32",, +gelu,"fp16, fp32",, +hardswish,"fp16, fp32",, +hardtanh,"fp16, fp32",, +leaky_relu,"fp16, fp32",, +linear,"fp16, fp32","PT2E: static or dynamic int8 activations +8-bit weights, symmetric per-tensor or per-channel + +quantize\_: 8-bit dynamic activations +4-bit groupwise weights",constant weights +log,"fp16, fp32",, +max_pool2d,"fp16, fp32",,"stride ≤ kernel_size, ceil_mode only for static shapes" +maximum,"fp16, fp32",, +mean,"fp16, fp32",,"4D tensors only; dims=[-2,-1] or [-1,-2]" +minimum,"fp16, fp32",, +mul,"fp16, fp32",PT2E: static int8, +neg,"fp16, fp32",, +permute_copy,"fp16, fp32",, +pow,"fp16, fp32",,power=2 only +quantize_per_tensor,"fp16, fp32",, +relu,"fp16, fp32",, +rsqrt,"fp16, fp32",, +sigmoid,"fp16, fp32",, +slice_copy,"fp16, fp32",,"no zero-dim tensors, no dynamic shapes" +softmax,"fp16, fp32",,dim must be last dimension +sqrt,"fp16, fp32",, +sub,"fp16, fp32",,alpha=1 +tanh,"fp16, fp32",, +upsample_bilinear2d,"fp16, fp32",,no dynamic output sizes diff --git a/docs/source/backend-delegates-xnnpack-reference.md b/docs/source/backends/xnnpack/xnnpack-arch-internals.md similarity index 90% rename from docs/source/backend-delegates-xnnpack-reference.md rename to docs/source/backends/xnnpack/xnnpack-arch-internals.md index 8b4338e703c..52bcd3704cb 100644 --- a/docs/source/backend-delegates-xnnpack-reference.md +++ b/docs/source/backends/xnnpack/xnnpack-arch-internals.md @@ -1,4 +1,4 @@ -# XNNPACK Delegate Internals +# Architecture and Internals This is a high-level overview of the ExecuTorch XNNPACK backend delegate. This high performance delegate is aimed to reduce CPU inference latency for ExecuTorch models. We will provide a brief introduction to the XNNPACK library and explore the delegate’s overall architecture and intended use cases. @@ -6,18 +6,18 @@ This is a high-level overview of the ExecuTorch XNNPACK backend delegate. 
This h XNNPACK is a library of highly-optimized neural network operators for ARM, x86, and WebAssembly architectures in Android, iOS, Windows, Linux, and macOS environments. It is an open source project, you can find more information about it on [github](https://github.com/google/XNNPACK). ## What are ExecuTorch delegates? -A delegate is an entry point for backends to process and execute parts of the ExecuTorch program. Delegated portions of ExecuTorch models hand off execution to backends. The XNNPACK backend delegate is one of many available in ExecuTorch. It leverages the XNNPACK third-party library to accelerate ExecuTorch programs efficiently across a variety of CPUs. More detailed information on the delegates and developing your own delegates is available [here](compiler-delegate-and-partitioner.md). It is recommended that you get familiar with that content before continuing on to the Architecture section. +A delegate is an entry point for backends to process and execute parts of the ExecuTorch program. Delegated portions of ExecuTorch models hand off execution to backends. The XNNPACK backend delegate is one of many available in ExecuTorch. It leverages the XNNPACK third-party library to accelerate ExecuTorch programs efficiently across a variety of CPUs. More detailed information on the delegates and developing your own delegates is available [here](/compiler-delegate-and-partitioner.md). It is recommended that you get familiar with that content before continuing on to the Architecture section. ## Architecture -![High Level XNNPACK delegate Architecture](xnnpack-delegate-architecture.png) +![High Level XNNPACK delegate Architecture](/backends/xnnpack/xnnpack-delegate-architecture.png) ### Ahead-of-time In the ExecuTorch export flow, lowering to the XNNPACK delegate happens at the `to_backend()` stage. In this stage, the model is partitioned by the `XnnpackPartitioner`. Partitioned sections of the graph are converted to a XNNPACK specific graph represenationed and then serialized via flatbuffer. The serialized flatbuffer is then ready to be deserialized and executed by the XNNPACK backend at runtime. -![ExecuTorch XNNPACK delegate Export Flow](xnnpack-et-flow-diagram.png) +![ExecuTorch XNNPACK delegate Export Flow](/backends/xnnpack/xnnpack-et-flow-diagram.png) #### Partitioner -The partitioner is implemented by backend delegates to mark nodes suitable for lowering. The `XnnpackPartitioner` lowers using node targets and module metadata. Some more references for partitioners can be found [here](compiler-delegate-and-partitioner.md) +The partitioner is implemented by backend delegates to mark nodes suitable for lowering. The `XnnpackPartitioner` lowers using node targets and module metadata. Some more references for partitioners can be found [here](/compiler-delegate-and-partitioner.md) ##### Module-based partitioning @@ -54,7 +54,7 @@ After partitioning the lowerable subgraphs from the model, The XNNPACK delegate The XNNPACK delegate uses flatbuffer for serialization. In order to improve runtime performance, the XNNPACK delegate’s flatbuffer [schema](https://github.com/pytorch/executorch/blob/main/backends/xnnpack/serialization/schema.fbs) mirrors the XNNPACK Library’s graph level API calls. The serialized data are arguments to XNNPACK’s APIs, so that at runtime, the XNNPACK execution graph can efficiently be created with successive calls to XNNPACK’s APIs. 
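One way to see the result of the partitioning and lowering steps described above is to inspect the Edge graph after `to_edge_transform_and_lower` and look for the delegate call nodes that replace the partitioned subgraphs. A minimal sketch (the toy model and the substring check on the node target are illustrative choices, not part of the delegate itself):

```python
# Count the XNNPACK-delegated subgraphs in a lowered toy model.
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 8),)

edge = to_edge_transform_and_lower(
    torch.export.export(model, example_inputs),
    partitioner=[XnnpackPartitioner()],
)

# Each partitioned subgraph is replaced by an executorch_call_delegate node.
graph = edge.exported_program().graph_module.graph
delegate_calls = [n for n in graph.nodes if "executorch_call_delegate" in str(n.target)]
print(f"{len(delegate_calls)} delegated subgraph(s)")
print(graph)  # shows which original ops were consumed by the partition
```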
### Runtime -The XNNPACK backend’s runtime interfaces with the ExecuTorch runtime through the custom `init` and `execute` function. Each delegated subgraph is contained in an individually serialized XNNPACK blob. When the model is initialized, ExecuTorch calls `init` on all XNNPACK Blobs to load the subgraph from serialized flatbuffer. After, when the model is executed, each subgraph is executed via the backend through the custom `execute` function. To read more about how delegate runtimes interface with ExecuTorch, refer to this [resource](compiler-delegate-and-partitioner.md). +The XNNPACK backend’s runtime interfaces with the ExecuTorch runtime through the custom `init` and `execute` function. Each delegated subgraph is contained in an individually serialized XNNPACK blob. When the model is initialized, ExecuTorch calls `init` on all XNNPACK Blobs to load the subgraph from serialized flatbuffer. After, when the model is executed, each subgraph is executed via the backend through the custom `execute` function. To read more about how delegate runtimes interface with ExecuTorch, refer to this [resource](/compiler-delegate-and-partitioner.md). #### **XNNPACK Library** @@ -70,7 +70,7 @@ Since weight packing creates an extra copy of the weights inside XNNPACK, We fre When executing the XNNPACK subgraphs, we prepare the tensor inputs and outputs and feed them to the XNNPACK runtime graph. After executing the runtime graph, the output pointers are filled with the computed tensors. #### **Profiling** -We have enabled basic profiling for the XNNPACK delegate that can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also now use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial) on how to profile ExecuTorch models and use Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see [tutorial here](tutorial-xnnpack-delegate-lowering.md#profiling)). +We have enabled basic profiling for the XNNPACK delegate that can be enabled with the compiler flag `-DEXECUTORCH_ENABLE_EVENT_TRACER` (add `-DENABLE_XNNPACK_PROFILING` for additional details). With ExecuTorch's Developer Tools integration, you can also now use the Developer Tools to profile the model. You can follow the steps in [Using the ExecuTorch Developer Tools to Profile a Model](/tutorials/devtools-integration-tutorial) on how to profile ExecuTorch models and use Developer Tools' Inspector API to view XNNPACK's internal profiling information. An example implementation is available in the `executor_runner` (see [tutorial here](/tutorial-xnnpack-delegate-lowering.md#profiling)). [comment]: <> (TODO: Refactor quantizer to a more official quantization doc) @@ -142,5 +142,5 @@ def _qdq_quantized_linear( You can read more indepth explanations on PyTorch 2 quantization [here](https://pytorch.org/tutorials/prototype/pt2e_quant_ptq.html). 
## See Also -- [Integrating XNNPACK Delegate in Android AAR](using-executorch-android.md) -- [Complete the Lowering to XNNPACK Tutorial](tutorial-xnnpack-delegate-lowering.md) +- [Integrating XNNPACK Delegate in Android AAR](/using-executorch-android.md) +- [Complete the Lowering to XNNPACK Tutorial](/tutorial-xnnpack-delegate-lowering.md) diff --git a/docs/source/xnnpack-delegate-architecture.png b/docs/source/backends/xnnpack/xnnpack-delegate-architecture.png similarity index 100% rename from docs/source/xnnpack-delegate-architecture.png rename to docs/source/backends/xnnpack/xnnpack-delegate-architecture.png diff --git a/docs/source/xnnpack-et-flow-diagram.png b/docs/source/backends/xnnpack/xnnpack-et-flow-diagram.png similarity index 100% rename from docs/source/xnnpack-et-flow-diagram.png rename to docs/source/backends/xnnpack/xnnpack-et-flow-diagram.png diff --git a/docs/source/backends/xnnpack/xnnpack-overview.md b/docs/source/backends/xnnpack/xnnpack-overview.md new file mode 100644 index 00000000000..5ef92c81126 --- /dev/null +++ b/docs/source/backends/xnnpack/xnnpack-overview.md @@ -0,0 +1,100 @@ +# XNNPACK Backend + +The XNNPACK delegate is the ExecuTorch solution for CPU execution on mobile CPUs. [XNNPACK](https://github.com/google/XNNPACK/tree/master) is a library that provides optimized kernels for machine learning operators on Arm and x86 CPUs. + +## Features + +- Wide operator support on Arm and x86 CPUs, available on any modern mobile phone. +- Support for a wide variety of quantization schemes and quantized operators. +- Supports fp32 and fp16 activations. +- Supports 8-bit quantization. + +## Target Requirements + +- ARM64 on Android, iOS, macOS, Linux, and Windows. +- ARMv7 (with NEON) on Android. +- ARMv6 (with VFPv2) on Linux. +- x86 and x86-64 (up to AVX512) on Windows, Linux, Android. + +## Development Requirements + +The XNNPACK delegate does not introduce any development system requirements beyond those required by +the core ExecuTorch runtime. + +---- + +## Using the XNNPACK Backend + +To target the XNNPACK backend during the export and lowering process, pass an instance of the `XnnpackPartitioner` to `to_edge_transform_and_lower`. The example below demonstrates this process using the MobileNet V2 model from torchvision. + +```python +import torch +import torchvision.models as models +from torchvision.models.mobilenetv2 import MobileNet_V2_Weights +from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner +from executorch.exir import to_edge_transform_and_lower + +mobilenet_v2 = models.mobilenetv2.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval() +sample_inputs = (torch.randn(1, 3, 224, 224), ) + +et_program = to_edge_transform_and_lower( + torch.export.export(mobilenet_v2, sample_inputs), + partitioner=[XnnpackPartitioner()], +).to_executorch() + +with open("mv2_xnnpack.pte", "wb") as file: + et_program.write_to_file(file) +``` + +See [Partitioner API](/backends/xnnpack/xnnpack-partitioner) for a reference on available partitioner options. + +---- + +## Quantization + +The XNNPACK delegate can also be used as a backend to execute symmetrically quantized models. See [XNNPACK Quantization](/backends/xnnpack/xnnpack-quantization) for more information on available quantization schemes and APIs. + +---- + +## Runtime Integration + +To run the model on-device, use the standard ExecuTorch runtime APIs. + +The XNNPACK delegate is included by default in the published Android, iOS, and pip packages. 
When building from source, pass `-DEXECUTORCH_BUILD_XNNPACK=ON` when configuring the CMake build to compile the XNNPACK backend. See [Running on Device](/getting-started.md#running-on-device) for more information. + +To link against the backend, add the `executorch_backends` CMake target as a build dependency, or link directly against `libxnnpack_backend`. Due to the use of static registration, it may be necessary to link with whole-archive. This can typically be done by passing `"$<LINK_LIBRARY:WHOLE_ARCHIVE,xnnpack_backend>"` to `target_link_libraries`. + +``` +# CMakeLists.txt +add_subdirectory("executorch") +... +target_link_libraries( + my_target + PRIVATE executorch + executorch_backends + ... +) +``` + +No additional steps are necessary to use the backend beyond linking the target. Any XNNPACK-delegated .pte file will automatically run on the registered backend. + +## Reference + +**→{doc}`/backends/xnnpack/xnnpack-troubleshooting` — Debug common issues.** + +**→{doc}`/backends/xnnpack/xnnpack-partitioner` — Partitioner options and supported operators.** + +**→{doc}`/backends/xnnpack/xnnpack-quantization` — Supported quantization schemes.** + +**→{doc}`/backends/xnnpack/xnnpack-arch-internals` — XNNPACK backend internals.** + +```{toctree} +:maxdepth: 2 +:hidden: +:caption: XNNPACK Backend + +xnnpack-partitioner +xnnpack-quantization +xnnpack-troubleshooting +xnnpack-arch-internals +``` diff --git a/docs/source/backends/xnnpack/xnnpack-partitioner.rst b/docs/source/backends/xnnpack/xnnpack-partitioner.rst new file mode 100644 index 00000000000..a0881aa3a6a --- /dev/null +++ b/docs/source/backends/xnnpack/xnnpack-partitioner.rst @@ -0,0 +1,24 @@ +=============== +Partitioner API +=============== + +The XNNPACK partitioner API allows for configuration of the model delegation to XNNPACK. Passing an ``XnnpackPartitioner`` instance with no additional parameters will run as much of the model as possible on the XNNPACK backend. This is the most common use-case. For advanced use cases, the partitioner exposes the following options via the `constructor <https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/xnnpack_partitioner.py#L31>`_: + +- ``configs``: Control which operators are delegated to XNNPACK. By default, all available operators are delegated. See `../config/__init__.py <https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/config/__init__.py#L66>`_ for an exhaustive list of available operator configs. +- ``config_precisions``: Filter operators by data type. By default, delegate all precisions. One or more of ``ConfigPrecisionType.FP32``, ``ConfigPrecisionType.STATIC_QUANT``, or ``ConfigPrecisionType.DYNAMIC_QUANT``. See `ConfigPrecisionType <https://github.com/pytorch/executorch/blob/release/0.6/backends/xnnpack/partition/config/xnnpack_config.py#L24>`_. +- ``per_op_mode``: If true, emit individual delegate calls for every operator. This is an advanced option intended to reduce memory overhead in some contexts at the cost of a small amount of runtime overhead. Defaults to false. +- ``verbose``: If true, print additional information during lowering. + +================ +Operator Support +================ + +This section lists the operators supported by the XNNPACK backend. Operators are the building blocks of the ML model. See `IRs `_ for more information on the PyTorch operator set. + +All operators support dynamic input shapes unless otherwise noted. + +..
csv-table:: Operator Support + :file: op-support.csv + :header-rows: 1 + :widths: 20 15 30 30 + :align: center diff --git a/docs/source/backends/xnnpack/xnnpack-quantization.md b/docs/source/backends/xnnpack/xnnpack-quantization.md new file mode 100644 index 00000000000..e3a02d4bffc --- /dev/null +++ b/docs/source/backends/xnnpack/xnnpack-quantization.md @@ -0,0 +1,94 @@ +# Quantization + +The XNNPACK delegate can also be used as a backend to execute symmetrically quantized models. To quantize a PyTorch model for the XNNPACK backend, use the `XNNPACKQuantizer`. `Quantizers` are backend specific, which means the `XNNPACKQuantizer` is configured to quantize models to leverage the quantized operators offered by the XNNPACK Library. + +### Supported Quantization Schemes +The XNNPACK delegate supports the following quantization schemes: + +- 8-bit symmetric weights with 8-bit asymmetric activations (via the PT2E quantization flow). + - Supports both static and dynamic activations. + - Supports per-channel and per-tensor schemes. + - Supports linear, convolution, add, mul, cat, and adaptive avg pool 2d operators. + +Weight-only quantization is not currently supported on XNNPACK. + +### 8-bit Quantization using the PT2E Flow + +To perform 8-bit quantization with the PT2E flow, perform the following steps prior to exporting the model: + +1) Create an instance of the `XNNPACKQuantizer` class. Set quantization parameters. +2) Use `torch.export.export` to prepare for quantization. +3) Call `prepare_pt2e` to prepare the model for quantization. +4) For static quantization, run the prepared model with representative samples to calibrate the quantized tensor activation ranges. +5) Call `convert_pt2e` to quantize the model. +6) Export and lower the model using the standard flow. + +The output of `convert_pt2e` is a PyTorch model which can be exported and lowered using the normal flow. As it is a regular PyTorch model, it can also be used to evaluate the accuracy of the quantized model using standard PyTorch techniques. + +```python +import torch +import torchvision.models as models +from torchvision.models.mobilenetv2 import MobileNet_V2_Weights +from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import XNNPACKQuantizer, get_symmetric_quantization_config +from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner +from executorch.exir import to_edge_transform_and_lower +from torchao.quantization.pt2e.quantize_pt2e import convert_pt2e, prepare_pt2e + +model = models.mobilenetv2.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval() +sample_inputs = (torch.randn(1, 3, 224, 224), ) + +qparams = get_symmetric_quantization_config(is_per_channel=True) # (1) +quantizer = XNNPACKQuantizer() +quantizer.set_global(qparams) + +training_ep = torch.export.export(model, sample_inputs).module() # (2) +prepared_model = prepare_pt2e(training_ep, quantizer) # (3) + +for cal_sample in [torch.randn(1, 3, 224, 224)]: # Replace with representative model inputs + prepared_model(cal_sample) # (4) Calibrate + +quantized_model = convert_pt2e(prepared_model) # (5) + +et_program = to_edge_transform_and_lower( # (6) + torch.export.export(quantized_model, sample_inputs), + partitioner=[XnnpackPartitioner()], +).to_executorch() +``` + +See [PyTorch 2 Export Post Training Quantization](https://docs.pytorch.org/ao/main/tutorials_source/pt2e_quant_ptq.html) for more information.
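Before lowering, it can be worth sanity-checking the converted model against the float model. A short sketch reusing `model`, `quantized_model`, and `sample_inputs` from the example above (the comparison metrics are an arbitrary choice, not an ExecuTorch requirement):

```python
# Rough numerical comparison of the PT2E-quantized model against the float model.
import torch

with torch.no_grad():
    float_out = model(*sample_inputs)
    quant_out = quantized_model(*sample_inputs)

# int8 quantization introduces some error, so look at summary statistics
# rather than exact equality.
print("max abs diff:", (float_out - quant_out).abs().max().item())
print("top-1 agreement:", bool((float_out.argmax(-1) == quant_out.argmax(-1)).all()))
```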
+ +### LLM quantization with quantize_ + +The XNNPACK backend also supports quantizing models with the [torchao](https://github.com/pytorch/ao) quantize_ API. This is most commonly used for LLMs, requiring more advanced quantization. Since quantize_ is not backend aware, it is important to use a config that is compatible with CPU/XNNPACK: + +* Quantize embeddings with IntxWeightOnlyConfig (with weight_dtype torch.int2, torch.int4, or torch.int8, using PerGroup or PerAxis granularity) +* Quantize linear layers with Int8DynamicActivationIntxWeightConfig (with weight_dtype=torch.int4, using PerGroup or PerAxis granularity) + +Below is a simple example, but a more detailed tutorial including accuracy evaluation on popular LLM benchmarks can be found in the [torchao documentation](https://docs.pytorch.org/ao/main/serving.html#mobile-deployment-with-executorch). + +```python +import torch +from torchao.quantization.granularity import PerGroup, PerAxis +from torchao.quantization.quant_api import ( + IntxWeightOnlyConfig, + Int8DynamicActivationIntxWeightConfig, + quantize_, +) + +# eager_model is the LLM's torch.nn.Module (defined elsewhere). +# Quantize embeddings with 8-bits, per channel +embedding_config = IntxWeightOnlyConfig( + weight_dtype=torch.int8, + granularity=PerAxis(0), +) +quantize_( + eager_model, + embedding_config, + lambda m, fqn: isinstance(m, torch.nn.Embedding), +) + + +# Quantize linear layers with 8-bit dynamic activations and 4-bit weights +linear_config = Int8DynamicActivationIntxWeightConfig( + weight_dtype=torch.int4, + weight_granularity=PerGroup(32), +) +quantize_(eager_model, linear_config) +``` diff --git a/docs/source/backends/xnnpack/xnnpack-troubleshooting.md b/docs/source/backends/xnnpack/xnnpack-troubleshooting.md new file mode 100644 index 00000000000..508acc06351 --- /dev/null +++ b/docs/source/backends/xnnpack/xnnpack-troubleshooting.md @@ -0,0 +1,25 @@ +# Troubleshooting + +This page describes common issues that you may encounter when using the XNNPACK backend and how to debug and resolve them. + +## XNNPACK Backend Not Found + +This error indicates the XNNPACK backend is not registered with the runtime. This can happen because the backend was not compiled or linked, or because the registration code was optimized out. + +The XNNPACK backend is built by default for Python, Android, iOS, and in most CMake presets. + +* Set the `EXECUTORCH_BUILD_XNNPACK=ON` CMake option when building from source. + * Either by passing the option during CMake configuration or setting it inside the user CMake logic before including ExecuTorch. + * See [Building from Source](/using-executorch-building-from-source). +* On iOS, link the `backend_xnnpack` [framework](/using-executorch-ios). +* If the backend is still not found, link with `WHOLE_ARCHIVE`. + * Pass `"$<LINK_LIBRARY:WHOLE_ARCHIVE,xnnpack_backend>"` to `target_link_libraries` in CMake. + +## Slow Performance + + * Try reducing the thread count using [_unsafe_reset_threadpool](/using-executorch-faqs.md#inference-is-slow-performance-troubleshooting). + * Small models may benefit from using fewer threads than default. + * Try values between 1 and 4 threads and measure performance on your model (see the timing sketch below). + * Use [op-level profiling](/tutorials/devtools-integration-tutorial) to understand which operators are taking the most time. + * The XNNPACK backend provides operator-level timing for delegated operators. + * See general performance troubleshooting tips in [Performance Troubleshooting](/using-executorch-faqs.md#inference-is-slow-performance-troubleshooting).
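For the thread-count experiments suggested above, a rough latency baseline can be measured directly from the Python runtime. A minimal sketch (the .pte file name, input shape, and iteration count are illustrative assumptions):

```python
# Rough latency measurement for an XNNPACK-delegated .pte using the Python runtime.
import time
import torch
from executorch.runtime import Runtime

method = Runtime.get().load_program("model_xnnpack.pte").load_method("forward")
x = torch.randn(1, 3, 224, 224)

method.execute([x])  # warm-up run
start = time.perf_counter()
for _ in range(20):
    method.execute([x])
print(f"avg latency: {(time.perf_counter() - start) / 20 * 1000:.2f} ms")
```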
diff --git a/docs/source/desktop-xnnpack.md b/docs/source/desktop-xnnpack.md index 315dd747006..4a85dec946b 100644 --- a/docs/source/desktop-xnnpack.md +++ b/docs/source/desktop-xnnpack.md @@ -1 +1 @@ -```{include} backends-xnnpack.md +```{include} backends/xnnpack/xnnpack-overview.md diff --git a/docs/source/edge-platforms-section.md b/docs/source/edge-platforms-section.md index 1396806b4e0..209986507fa 100644 --- a/docs/source/edge-platforms-section.md +++ b/docs/source/edge-platforms-section.md @@ -65,7 +65,7 @@ After choosing your platform: ```{toctree} :hidden: -:maxdepth: 2 +:maxdepth: 3 :caption: Edge Platforms android-section diff --git a/docs/source/ios-xnnpack.md b/docs/source/ios-xnnpack.md index 315dd747006..4a85dec946b 100644 --- a/docs/source/ios-xnnpack.md +++ b/docs/source/ios-xnnpack.md @@ -1 +1 @@ -```{include} backends-xnnpack.md +```{include} backends/xnnpack/xnnpack-overview.md diff --git a/docs/source/platforms-desktop.md b/docs/source/platforms-desktop.md index acbdb06a6b6..ba22786576f 100644 --- a/docs/source/platforms-desktop.md +++ b/docs/source/platforms-desktop.md @@ -9,15 +9,15 @@ ExecuTorch supports desktop and laptop deployment across Linux, macOS, and Windo ## Available Backends by Platform ### Linux -- [XNNPACK (CPU)](backends-xnnpack) +- [XNNPACK (CPU)](backends/xnnpack/xnnpack-overview.md) - [OpenVINO (Intel)](build-run-openvino) - [ARM Ethos-U (ARM64)](backends-arm-ethos-u) ### macOS - [CoreML (recommended)](backends-coreml) - [MPS (Apple Silicon)](backends-mps) -- [XNNPACK (CPU)](backends-xnnpack) +- [XNNPACK (CPU)](backends/xnnpack/xnnpack-overview.md) ### Windows -- [XNNPACK (CPU)](backends-xnnpack) +- [XNNPACK (CPU)](backends/xnnpack/xnnpack-overview.md) - [OpenVINO (Intel)](build-run-openvino) diff --git a/docs/source/quantization-overview.md b/docs/source/quantization-overview.md index 4ac886b9ed2..81b15f6c8bb 100644 --- a/docs/source/quantization-overview.md +++ b/docs/source/quantization-overview.md @@ -28,7 +28,7 @@ These quantizers usually support configs that allow users to specify quantizatio Not all quantization options are supported by all backends. 
Consult backend-specific guides for supported quantization modes and configuration, and how to initialize the backend-specific PT2E quantizer: -* [XNNPACK quantization](backends-xnnpack.md#quantization) +* [XNNPACK quantization](backends/xnnpack/xnnpack-quantization.md) * [CoreML quantization](backends/coreml/coreml-quantization.md) * [QNN quantization](backends-qualcomm.md#step-2-optional-quantize-your-model) diff --git a/docs/source/tutorial-xnnpack-delegate-lowering.md b/docs/source/tutorial-xnnpack-delegate-lowering.md index 3fb079f24d6..5c88246b0ba 100644 --- a/docs/source/tutorial-xnnpack-delegate-lowering.md +++ b/docs/source/tutorial-xnnpack-delegate-lowering.md @@ -12,7 +12,7 @@ In this tutorial, you will learn how to export an XNNPACK lowered Model and run :class-card: card-prerequisites * [Setting up ExecuTorch](getting-started-setup.rst) * [Model Lowering Tutorial](tutorials/export-to-executorch-tutorial) -* [ExecuTorch XNNPACK Delegate](backends-xnnpack.md) +* [ExecuTorch XNNPACK Delegate](backends/xnnpack/xnnpack-overview.md) ::: :::: diff --git a/docs/source/tutorials_source/bundled_program.bp b/docs/source/tutorials_source/bundled_program.bp deleted file mode 100644 index 8afe3cfee262a8796008ede4714e8bbb07221131..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001
zo{Z&M_Qq@XkAPv3D8?1@d~kg|%`0%glRS%}{^zZzxO)u7*p-km=M^OPbEVc_f21hx z=jzvuCc~j9R)4wz0w09Ym;nbU$}$3ih8(BFx0a|7zJ|4W_JnySI-=(|bCzl>2i=Fm zG=0W#NZTe-VOu#Fl6+BL{!OV`GwV!fE$2da9;el#qgl|;Yk`l)%qldR)&A2N=5Mh= zDN6@S5?H%k6C``|p}58#5cD+!6Gznx#=TFd@XkP{x+23$%g6rZ z78bdJY?zan`>H3)-t(Bx8XqXb%nj@pn_|*w1xm;C!+i^eG1=T~Fs7RcnsBu!PrAzI zEk8{3i2>Ea9^#VoHaLQH!_1(KkU2br2_PXO-wNs5Xgu6P^> z*f?AZ?RPFw1ZUAp-3Ng7LJHOHSw;4Jl<1M+g+Grn!|X|M6!@z%7PM?8oq_LPH!U$B zCj@gB@x1TT7^Ymi2b9}2u&*qL-Qe@jn7v_4_ViCdaS6mP&TeQd`3EG=eRPug!%%}^ z%s8uD$iC8t5?dUot?MgLcYZ7O-GIylY_Mv{IdWS&h~d>7NUiU}4y+4hs!<(4AIO+A=G|3Iwl*bgGI zjWqvIGpNoLgjrNu{BaJ{waF9NkbwJn7u_&ooEfgALtYG5_YQF2nc5#g~ zE%^rJKByGrqjyqnQ5@7bekX;QyWq%~HEGY0sCITnts;*yKU&b+(?if!!MVKY3*qS+ zo}c>5DB9YbAlp}_Y$}Uk$^NOZWkzq*ocm0k=EF&`<+v_tC)ZliTu`Q)MENGONja<| z$qSc5mSrOh&FIOzXEwvwAO{vWYXs*1Hi2TQHiIYmp&|DHG{0FzO}zJKM2HDCwylJ; zb+;+dF&bMp=fU*5*Exsv7Tiv5fq`cSFbn>T_q=!wY6JSy{V48FvkK7P4x~M*uQ2AU zBTIvJq05N~G*!^Bi@5;+aS$>Hk42 zoxTj3ZF1=2jH{4#{F&%Cns;Xfau($0SQh`21F2`N5bxd{gu6bupz3T_RIGdi*|t+4 zcPSHPagIWvf`2obCcwU)d=~ko4HU&Dl$O{mN)~w|@P z8{z&(uAkgrEG}QbeOTEiNM6+r`T^6G3Y!&>nf8e1?+;P6?|f?Avsait+yXWFARn4(tuL$lh3|kl{h;zn2Rzcv ziq)+vh50UbN!4~=Y3}cj%e?ne)WTjA6MdMB4im^~IOkiIxTF5tesW6ML&;BivbV`G ztm@|{lp0Y;CSy1c;dhQ&;$Ki`*Ad8nW?j?$m85Qshe8)SaA;_y`psYIdQ=$Jx$%yq z(VSU5dsvxtQwMpkFH_vpNItW0_AT#`PF{0_0$f7b;!3{5*1Ll;b35d34WN13JFt^O zESR+4B{Jr3qWO}qv`yQODemtObrt?Nbefbc*cOI48RbwtZ6K(3&k+j)+@Wwa?+)01 zNpv1G67&D+@*5rDzdhOmgqP1IWFM=`(D(puNv zh-3X&sC^I{GsTBlRc(j1WhGSOdYvL=<}B|(R~B_FhNM6L47pqLD8qFT$wqomTBlMn zpU{id7`IWb^borH<&Bk|-3FLdazhx;w=nDUMw+;|l7#_xdS6EQU2VkqW? z_+om+}Z_B#o_FG zcVAXNc^EbY#4uxFX9yWJh-J8+fY$SYsQuAjT<;dh^mUP9!DrUF#2LI7I0-h>9!_Qdx;NI4a@=IeJIO}2^DiWC-78ig3?Eygz9^pSnA9~P}ykt zUj7QKn{9E~mxC17bvcD+R&hS8M34@@E2>Am6W+GCW9phDlFu;5*hhU>Yttn$bz?Y| z26v+NtuJ8A)efk2X%e%xgkyML6Y$vRj4{(nNp@%=wYPL;j*pZq^h+gWCoF?6HvBtS zXM;}Td6qeKIw-7%qPFu7H1&Wl#;oJqUN=5(%Wp!me?R8%W*)VT?}JIV&cLnxE^Nt4 z?m;)7WIVlCO#g%PF(Wi$ZtWex;aekA-g9AbhxoH_;wDuOZla;{T$$(6-IN+b z6cq6YiZ<_}MQf_A4lltpw*U+scAK`G zbY@N-f6@i>K(h=4TIk(jyiR;0N+%v(o&KO;^q$yEuWApE<3H)1zMPY; z49F7#Lqt-1s}@5zyO{aTS#0g&h7)&tvpmf;kWacoBl`76#j8gkAHIO{C2s)Nd2v3J zGhs<1bdR5RLc?$UNL_7BO-3!HvHhZcMKILl--qnq%0be(LfGbI%^qLr&q}?yAMs)+ zriBDxfT;^UzVD0K^*Cjkb|zAg66-bxS}*M*M`s9pKmB@eVs|w z>4uou5-VsVL#S@(YQ9@O7uNp5HR-Cgv?IugRoZj5?CE_d+u4oTEBW7FHGo|z2xhaN zd!ckeKB>)JKusk&W2iUp$9yVko|KZ@N=4Tq57KGA$LlMTg{Vi{pmtm{X#e;kG+Mf| zg!3!lq;+=;JTjZg+Bq|y(Vy~cBUxatwOltp0P<;iF#OLTrn|J$@j-#8oY|el{df_+ zJ9TC)nY*B+rz>8$(~nscyHLxxQy{7QH>^U_1Wr4z(zfk;;G&GRq<(?yEiEuV zTSsHu>mkd&H`9OkO&Rm$14s?~L`m#*cv|I$C0pyMF3BBRk`6%S?|?F?6_ckcah|OU zDsR7m=G!kpQ|<`YmvOe*O;3p(9#Z~)-JrFqM%)$R$9Ih29@Zl6xmh1phE@Su_^$9)3H$-83h z(k{$<(FSNX^u*FfSy4oP<*x2VmLH2q zIZHKL_JZ3=YfKHufa_cBF+64_9QvOG)oA_0F)61Db9a&qCspk7Ri?8Y|B#u>#9U zt9l0SxuUL@eisYd5=b3T44SDt%QQX?9>+Vgl6_iOF7;(IgI!VgZzRmiazrct4s6#b z3zlX-gOs279CAh~+_H2)iBC^R{$R&K=`^%hH<0ZKYc~F@H{0>8C%au4&2pFY2fcSU zXlWge&9!pMwi6)kz`p`-q@`PzZLoCNKG;&?fd~HV%WjDw7_<^lDpiy0OAnIwI13KR zU0I2ee|wMYrp)|5!L-BQ)Uxb0-?uGT%G19=8?;ME&gch)3*Di9`2e=8-Uh!P^JA5% zjb!m4fHe0PLfm=Y1=*MfEwW&IemH~~CqEPN{->v^N9W*F^9ZK?_B$-I$fh_;rDzK` zC@B2^HH->@%n`}pVeF2He-{cllZrt4*`8$_hy-=>c60J)vN34#Cg!?+5qzH`7<>#4VmiqdG>P)zmY}*@ zPYs{Ckz~MFvg|bibzg5mW~!~I!oxIwqnt@pHUPWsxnLUZi^p3{;-=70z>d(JA*{~Cp!7oD-{^GZ0r zUxqRmbA(hvjXZN33frnEfN?!d^O} z!xAM1_Nbx8xz$kOFp$p_JA~>%-O-_+k-7x>VY0!LwY%R0)w>Nm8;LL`s1+(_ZH3nT z!6a@HP<1~`II_$JGg=i)@so|RbxL2H-NA>&WrkvdX9mSFTQd$K+Z>oh#VPX34@YC_cv}7~9OrWlELC%WV(wm{s>sJ+K>{VaPN7Q8FVv`6 z2e$v~$CiKZg!>{oup7^!v6wSLr9Lv4nA4k8JM}`-n4$Rey8?AhQ^4^_7@OBiiZYxo 
z#6PfPwSOzPhVIYO?|lW$A8#nOsUws3wV^{!5{x?hifX#Or1i1hEMr4|+!Mb6X8zF~ zRpUGX9~=h!@gv071cJlhHK6bd#^%sgsC%=YYy+I}W!wmMW4J#`4s4|G+18+5_fg1) z&r~Ms!;C$KK=psll=PDui=Pt9^y1&j@W>j7IDVF#R2oQ_=EC%0cXYDyzLd8AuxPRL z6SRvmrkG_1xm{h^$68yob#J6={=C`bl!<8IUC)MuhoVPw2rIScEMjqYG4uN*LE$`B zZ0pjC^s#FMRsCym;V2huTNOZ(<39@#{hoqGQZCev*$(w%9ENfM$#pc$3pP(sxS@_RoJOZ^73fXiIl>fy&U<9H77 zzyzrHI+&&Ay@3Y(JIFnLp7dsyX~7l`)Hk`4?XR!tQn3f~DBm8!eMgO-MF{%eb9kR@ zj}na=^B&U#@);iJdU60}WOakLK^@TI-vV-)?9TH}o1o?MBXa6_l(b)l2yGvRk%Oie zR%+i1iNZL_wW}dT-GejPb1W$Ntq0B=9*7_Jm4NBCV-&UWI26sfoSI0YXprk;r`|*ttc_!{zj%fOZaS!s^7Z_%f5NC z9g~Lf{h}8fafxEneP5HxJcx3tSJ1UM-V6CI0W>Xt37H)i3z=hVAah2eDDP~+j92^+ zKW!%aU3^yF&b#myIJ5OfhOoG$XTVUQN286)qzX0ndaX%m^U9h;xHH~>@l(uKPn4 zx(_Q&9JwcKiQ$@W5In|=xqbaeH7^{Qa_0&tIL6Q6OZ~;?GlMX#(=4h@ok98f9@xC% zDH!;j1%a!07fYTO%pYRK)^_jDBmur~a4@ z4)@0~vT|r{>xQ}cp>+SU16t4R!BQf4@1126sH1iXUOO$>cPnSMHr|nyC0z$i8l!?; zZ>c$L6!psIGtjUomU(3!Ik0(DxA-jSo6@-MHdknSFb<|SSAcQ&KJrbsMCG#g@V21? zYN}U*)17xz>^=xH*YT{?h+@jTG?!Yfc8IOztHg6lg7L_o{V@L=&l?Rq07>&sgMo8h zd3Uu`k}JiW0hTPYU>PXx?&f{tJ>dFWGt{1Q6`RHtQ(@#6@Pq+4v5yPtKm8|WO7uc_ zU7kYw(82&6vLnstscXgU?r zeaigI-{azakmSG47QF9Wy&% zn^|9s7{8Y6E3BD=Pbq}*dsLcNx{&v_FXZ573zgffQH+kkmo1!?IL3dA`@_X;4_weY zb0}ud{6y*>dVoZ-Pq!?woK*d8hrA| zZ1i0Rg-wH4aIy`{v%E#~FPz~sS%1b{YeD<#V!`UVl<9x)7e0lL#Hb(*8ISO;u7TZH z()>Z_@Xj8eI+-!c@p9Dsy4HJAo5_Ctb4XS}k6BT6ccnaE*QIOHYf+m&?WhtMX z(e%+8D0=3DNw0Qst&DT9Z4Fep?33Wlzss7qMHQpURyvMn zNp$;3zv2{BulJ{fXenwABtxsYH`Nc7Gt2x4H01pRX#r=1tWSO{aYs*9mGK=EHeYo5 zXA)hh*N-&sR8RB_jRth8Byqbflde~Q@n&yUI(a2%?9PK^Tnr`sBGU6+Jy_L4n@7VZ?2AHm6O(j!Nmhq zEw^=rK26kg_zG1d6hfVC7gW5AB*mT%N@q1 z(gE$u%~(M!qq2QHnY#J~^!jRtL+c$`WoC{bdjpXBKO2_wt{7AsK9bflP&A*_9j$!Z zNptIiC=WXiN45mBnv$OAG}WEobu$!eC#{6(e-uH)iuLrd7oTyCmbHLfs$i4 zbRMS%VOhHsrne4Zqo0Q}jp{F9{=LtT`|O-(@OvPvyyAe$jc$1Tybp^x%lAf+_bBvh z(8-U?BfR`rn6|zL8`4#vR!+fu>D{pCSUH) z{N!G&;rd86?NvWa?eP<($5%nMlv}HwfT`k-YdQruIfdSx)ZeT zWiV!M%%M%K-q_~45sF?mQPet9S{Bt24dDanHqWf+ym~=QmJNC**|2=y6(D`@!6Zj= z$YB0JNQtP1I;&jz=)W7LS}V}EYj<{SYbm6fC4na6JOoL7uyI2zskVtaNroNi>itpn z^ese0^u_VE-dKCN1GUSW=|(Qk4OBQnM#mmZ6~9r)S7x+{d_v9X}uqx#Hr*g^RFR-^mKt}yKp_Y ze(H`P^+7Cm(l()Ta~mlZ{Ud6N8o|)j3law`1nqDBlojBBt+bmnK2yjl^(7d5rqPAY zA!u>l6TSQTv+4sGpn5V{X_fVbwBH11kUgW&MMucOK9mYRWJBN{fh8tfq|Bqsh3ekV zDg9{~Hofg3YT^(>oNi1hAdW+dE7a;V}Q)*#{C~$}pWQRHk*1vbe^uA6Q z7;u1^@-rdii@?Gy_JD2GGb$Q&9}>p+vt=oJsdjKjo~61Y7!&t$4yXe&)t-gCvNkx7 z7t5+|J%Z%+H#|cvS61t{LHc`s@1D&-Oc~6xJWjb{?G>J9`OsCg{J9g0(LbQsJjW^Z z*&!5=HA=r-hkd8J+h7gQNy!HFnaO+!{$wb?%#tDI=KYI#As$a;270B+YgHUyLHVcOCbD7 z9c9&pux6JI#IqGlwxo!b53@mu!cr(o>5rAWM}z#A2zG6IFP1=uKy7em$Jbf1Do@T2 zPCF+$p6`hUN5Iv{}dxS_*0H zUumDr2M=@zWc9b};aWe=qc3=@lbru18t?orNT*$bJc~xqUu{wD_{SFaNqe!i$(_*{ zv|q^WP$TL;nNYGt88yD=TwHJr8~d;$8t*)$(7pp%qoo&9R`8sTdjmQ2{!SXR_4Mhc zHG9oCduhkdyJOujW#B4`;5qiF{as*UeIMqT@Dy4?kQF@OEdRJ$qWtb#&?_E`gKtD) z-;*QQ%r{Z|{~zlD9&Uq=?;1(rI_gYegDI-!UKTMlpArvxa6YF}Xz$txQe#(;4GduR z+@s4hc;nr0#B*zvEb}yj_EYPiTJOW?b{FqbmOZt8$mRa+@ zub9(^Np-kfls&AY`2UclEbohHvvx!LNN3pPJcwC)1@Jq0w$c^8>zb|u_O!4+i`V~; zRM9tdn%lR9xMBN*7-u)ulvN6bwBKN+Ni3=juOZG+DVF)W@%JHtb9*VG>(LO5O5i%c z?9~O`@!lLNdG{Bb`ZtoPTV@Gu(VlE-wI$}) zAA?M*9I825PRYAGNY39$)zUG-z%p<4WrIJH1cfTCk9e?ldlJ{TgNO|UF$0-Irf2K>Tf{Y^RbX#cM>cryjV?i7j(*LgR%dZvX`qR zSl_UPE|~ItG9`^<2f8r7FdG&$VGZfT?r7211ly0jC#w}(K|M7S5(g_-^)h#G`e?>l zc|L)q9EC5l9I(D;M~pP|VIMEPfy(p>(L~mjMd%f{t0(u3{)?j5;m+6vMzHkbneZ{i z9%EuZL6c1*RsOzNG&#}{i-uM~OiMlK5_zt_nodg&Tu0hPLhkw= zptj$mixbB}nz9XaWp}Bx@)~HGr7SI`CyQG*T$DJbiVLJs=<%BaD_i>+O!x5q8}}ly zz8o=$_h!{Q-X~R&L7DT&is`SvQzrkto6=0q2(l9wA@5@W_ZRJ%BvT67KHETM^PKnV zzZMd1ouu5G{!r4g51#vSopxX}wm*@uhK>`VRXHAbLj%_R^%}hAHc*w*C90b3!E(3G 
zB}Kq_rIYtDu+T1{Oy5gFmj`Be+Jkp@6d6cuZLMrg`v!||_h6|moms{ETyhTSj`_w= zx?3z`+UX(Gw)8D!?EMjH7G*<~=VOR<7=*js`uQf9e6Nkt!uo() zvlr7=+C%k)2=u(x2K9bhDJP_!j{EdSkC~1bM>~Yv(?7!O-+E#*R+0Y26oKFC$P&e; z!lmVM7PsvJDF-`}LFMMb4IQ0^4}rTQX8{8K8O zb%l!0^he{(&EnINUKn-NnPu62hr}15XnS-Bi_gCara#?=c%JjA7=DeKS6qeA@9v!C zH0Rwp%cwSB8MJsm<~!*wA!+Xrrg)VAnd{@|f|~EWQBSG)eP>u`%krL1BpzyKY?9(GR zX0W+RZC6d%{Fyxa+_r$cO*zNU_oTWP29W*r4IXbd!^HW?;83O^i+96GX89c!_ZxsI zm3>&E%Wh%*@LLdDz;%D07Erkj#Dtq~sVXp7?dAT6IlxhGCQ;4ph?u3SdOs90*M z^TzQ1I+JRsi#R>X2BjXqg2{GkHoc5zd!=JVN&7*a-$FN3bAH3>)@|Cfgl9?)M$-2s zJ+a|?1TEB<8P<( zGt(P$i#x-GRyTfE?mW2;9*si0e~ z9eGb%C-6%MW*PtAk65^vX4l6sb?9!8q-Keg8ar0;lOEKgkCJBLe~_!VA;$gqme>h3 zH2NQfNz+DP_>#^T55+Klo50jj8+36Nae~$AJ(O|V0(CI})yFoH!%90U+3SK;!#c9T zF@u?6_9o%NnGyJL=|~p7$%!?Lji8K2W~6R=scd}tg?CBEGKq&BB`zER`NN&ixQX+} z7X45)My)G6{u(r6p9%U023_$yQ)YVGz&ilSppv!-2}QPy6I)8@pa@Kq?-xJz-vx_* z=d4ERQqq5_RAyV`fWmnx&F<}wipEpoU@sTua3Yu)!$v?|*DQ)Jegmmj`T5B6iITWH zrD~CqIZxoe@;Ohey%R>4di(Lasiu?aZ>KW}#r&=(uVK)dwo^#z*a&62{Lm@%1@F`j zV*zE=q`xp+aIl#{4n3!GCQpjq7{i)9)Rd_8g1UMM%cBuY`|Et(`RT;eXO^5+$ZO9e zIqU^h;vO;HHJ%E+I-^5mJ+x2X42|_NOtzc_fsG9E=05@_SqV%G9)PNs&&boK1S+{s zku`Y$t37oD;kN^1%>7k3{(B#Om-cPlo*GkjGF;&I9B-$zj~k((HWcLF zFGK0J7TQrC#xi#XvyV-!a9YdzLO9dV_AnG;7Inw;j-7CPJ=dyMgt3HGoO|l?ojB`6 z^YvD&xg%!-yFLTW{1(c!Y!TE?Vue&k&U7Dt3O5EK{vSnW9~Wc(z41{JC8eZHdQdWn zlt!w#&xZ)5q#pASQbGDL`o=u3%+5ZU;h-+%t{ zVw&#zoO4~*`}&}utdV3U6mR*DR2}e2pfDj)P|WSFns={MA({YjJ3T zxDWf$GDgl*M!JJsTTJ{Q9hCPn8fiOl6)L3+y+5R zXcPEE4`nAW)7+Z&4C4k@K&$2e=(vAmd4`zuDv&G1s zSr~ndn0Z`uej4Cn}j168o(FH@c#-+;M(C!oN8Fvd3xMZZc@9&d9R zO2?jqIKAUgPh6*}(5D#vfq|mHoT=YLX?FZ3WdnX#vC^}2OY#z!rpA<&cKZyr-li&}YZrZayrUxzDS0J+dThacHrVm_n<|JHtw4>@a?Eoo z0poHruJyStJkodP$1WIf`!^l9`tA)Vcpk)^%{ns4i)ljkh3gm;Gmgv2vr+WsExt`R zWC8Uf&?9LmkH|Jf$<+=*u>Ba8Z?O}#LK0}tI6`&x1PmvSWWt^=Ap56S$oMyi>mv4O zBB!;Xs#ylE8@%}WR4EIhxq4Z;1wX52#@cZ9?ST^z{ zT()B17{rj6}xF;CPNGT+-#nZ)AdpmR8iAHhvdGb!q90u}XMxv%8G1%0jnA)ZXUof#3 zy%)&0-3q7{4j~~<@9(ML)^>v!ZM1XeNV)F7 zc0rcuj_Q?FVosE0Y6=mKN zFyN3Aq~tw;>aF>rynQk0EEcTj!d_G+EW^YNF+%-F%KCK*!dTs7ke4n|Y?JuWe6tIW z^bF?O#k)Xq(;Leuhx;qTmM2!wv-ay9(2ZIt?C$8uKN;BZhzV~X{`CaVEczQl3&OEw zqYsO08_M(kw4pAIeYpE8_2U`z;Z@+pQoax2rLuSoZ?!*`xho&UWD>2 zL$Suph3$wKL3(GZ7<)Jy;@3~bN2cC9^zeDga3u>R$B*I_Z$mb`7wzHyx-i$bb}Vkb z0pGoRC?D{%Bfk+il1Kdd9m78A;Ld9YUUKvolxjxuE8mHY@>@OT6|56xWjeCV?EaJ= zKMUH*T|&!0U0Klb7ua~Y6O%tn6hcxjh(}s_Fuxlf{4MiirZOM$bWDclTgWo5ZM%^^}$O$MckfsXsIcl3tug>Aew5SN%7Xdv)eH z87qjPsIJ6{Mnqn2Rx z@nW%Q{~Rcq79uDQhG|+qnsbL!L&2!`7Bub@$n(;E2gA_^Amv0px}6PRAs6~{o%?q% z8B`%Ib7r(pPr~Xok44GlyNaT1_w(CFSWY`^43F&(9-o#`P`kG{qqqLrC*^+ zL(C_WA3?ZlTUnn;|*@f*;#jr^?@q-QKTEIPdo zW?p^zFzpOau78g)(O?X79p8#o69%*5FSkLmalB^PBjPvgafKXxD~z>{hWgLupdQZ^ zD!v>unt;m`_E>kW97FBDg4W3r4Nrw(=r&_`Vyj>#--|HiLj_b+Y$vATSl+t-H8?nu zXZxd^-QhC+=yiYAp8go(?t3te&l*g!BG2J?Yhjz65wp2S&+ikq?9-YKT$9`b?pRs# zkdY=r!q)8&FSUVso258lbsxSa(~_UuF^(yXMk&gsZGmbldtReU#M+F%F(RxdCa$yR zEr-Wr+grLf5xY2Ku#1opL!F`V`dGcWGb-f@R(ttAW<86EpPj$;!?HQ zb>K?VzeFdw0aJTi7uK96Ht@wElquGNI&3v+Z+68y(y=^oj0Orb=->Ew&xN|q5?=74 zBiy;)mzgwP7u0zMf@UDyUl%e^ZLnl+6P++9BNa8C?Py)Vcz#MDMjrKNlAgB}Ek{}f zZ+R0+?B5CXb<>3-aS|rkQ=l-)d_ftTpW=%5j@;*9CoYK^Dmu3TF=op_HPHrIwA0be zz!7H^^MYFF5(Q=a$kDE$A^Q}9fA~=|b)qaNPi-SRzTcs%x zNAtYJPXu|Q3aggoVol2^uF?}xwnHCvCylsqz9|bh+kj?tuS@*LALSD@8m4PThdCFq zAiWPy3_OkHf;(m3?h7UkbD?ST50Lcg!{v_xSnW+UWGqib=Y6y#E{wt*q{FnjyEDJ} z(?ELAAKFv`R-d%T(%qCzNvy@nKpTGLWe=9#_z?Y%cIESmNAc2+FF@XLtXSkV2jm4k zc*L`RP#eTp@$+?{5Z_}^Kr=L_wxEfDJNHuSGkt9X)O|yr{|))58b%>*`oIvAMW-c2o6{i`(@E$ zDBAuNC2Q^}TzU;*RfbQYdgmdr<;-)j-eQ|j9ihUws|^@2OU5noO`$&Wk#IU`0Fwvq 
zQnZR`XfWHK`^d-eeJYwAF_<^0Ot8LNwU{2g0n`~bs4VL#wwSkp-}`T3?7#nmMFaa$ z{^vZL{5q6;jnwgZZ;DV9y9wfAO!$OD19;vt6dX>tu&Dk!K>jgVqfTp9)Ia&3FfFhL zOEK97Z1NkB=5NC!sXa?iwBwrp?S+DsjZicz7Hltcn}e7<&7v% z4GV&dR~N8!!Z6r3!IKx9pd09s|HL*|gzPzAK$-kj2)-B0Djsh_Bd?8E@O&)2N6!;8 zav4aSy)T5^ns$snod;_Kgll_p%0orHe-iza#wZdNpm)V^dDm}|BH<2{t$eBEwN|H93M?vC2mJ zES_n{WN(h(svUHHbzCb%Z{7;c-A`f6h+q~V^Tv{Y47vJ>RL~YqL~YX}Pz`p2Ih3)h zrO&mo1939P+F|zBEwFAzKW?<<35K6}CsMkTX>D!`a_TPR6(jkLO3II4odS7POCV%Q zvXHQzcA&r8K{DeaDt?=e`9=?+dJnNk!V5tjZ3)xfnX^zkVt$@FMtyjRsInW6ey^=D zx^*x&Y10TX4fg!J?KpPlv=cEqy0T~svwy7C?} zB!7bT^(gSM?9CG9Bx6#O12gX6z%qu(U``D2!W4TU(nBgR_c+=ngSbSP7Cb`{5^<5t^vF{X7ReVI$ zj29sh1I2&|GjL5F`JII>_{DoTlU8Jji=UGpc-&Y%4gFbd`4CK4b_iYHrh{geD?D-U z!i|S?Vgve)V4ArL(8NOr4)Z%>#z0%1p1czxBV4%T+y#wytsc{6(F|AZi#gYiLrQ2i zNE`z-mMh3hcEA{OKDR>H&hHp5H)nb0e^H*ugSiPEx$v4*n9orn8GmlYStDj` zZpT9(QASJ8o!wd8lPlLTL8IOWO)KYP%F>}AGq@+zO(urd`#+#4BM5SbpMc791JSy- z19Qlc;qdilY{ExFt}DB)QC)is;YV+a8Q!BI+Q@(_Za#q}d+fQJr4CigdNAkHmP}dh z2$jb6SW@#8V$XHq9>iT!+_Q(8`U@ETqz7ia{1d{x#$t8S7jPqewtC?mj1D7)@DULW z7jU}e9)ng}ch-{L3bLD}aF{uA~Fr=9aE-f5js_l;y+>Dfj6%^fH*N+y>RKkM_JpP&fG%EWXo&CA$mk!}YFw#|hHB z4EnRc>JGeQ(idWFj$m@TC4w{}Q&9fr59-uEK-@tiYPy0Ddud* z2agwxSo(1qPMk@4cSHir%XVfNR^;U=N)VQNLMgFJbBQf!R_sBQZi#HbMz_Ilmnf~{7sEumJBLf{+wpA9q z?LV5SA2(?>()-!rFXEuw1t|RC%JkRm0Pl?>c;X{pNH}sCYVr?Z+0kv-@4N8mH@h*nwnECL2E)3O02C67mG1|jw9Rm zxGz`E4a8W=wPl-sLWz2%P@g%3oAkRYR1Y>|HI&)0HY3)^sD+esb6`d7UBPvA0qG$q z=1i+Yzkz*`AJ~lYhew5?vT0(;--jWnKyg=zem0))B9&vzXUr31Ak9G4YTl$}q z82XcO!@RYqnO+R7*Y88q(Yuhg*_ahjMA&e-3;aH)C##PfixDf%fbqZfOx}^W)-(P2 zwE8}*YS25Vu^r7!4!QA@SM_;#l2&Z=bmlgVy_mz#Lm1wE6=U0NdG3tCEVdK%E@`U8 zrgI|Fc%}%ycp1Z8FAMT7*=R9m7vxnNpmv%ma~hR_+a`2jmCIu29k&c5<2)(1K9C40Z68O}8Q1|pAh8*@6A`bP((ZuVIlbf+slqHczPSm)~Az#Af!$L?m zm7v_bADWlk2R+{btTv`QyD`k3KZxngquYmbCw;ovA2eWva^fwY=)enhjAdoAfqd16 zF1+T-EtD(`6+#y(SVr<^i0wTN-7cJhZG8;6tFIAX^un9vL^k7lFLpdU?W7of$QosT-N8KP@7Q>(H=mXuXLYlF;+zGJ?Ds@7 zenT|phVy9Nbvg=>pC~UdU>_vu??hEi0NP%qKBT!@;knj@*WNornU0~*e19iAYVcmEZ%rSGxeLJP&ao3%cvZMx-Kd~rv8B{i;oaq8N`bES@HNW zJy_-T`w){@39+wxV`lGjV6fke7n{j=if#}18|}bEr;dX3qKV*U=?N{LBA{vVQFJvc zhdDh+&z*0`z0Qr`2?su-Re2heyuE^x7Y=9oW3IrBkwbV}4S5tM96=omrd}h;`TKX{ z?Ziuw<9y-B0w=y~@No8=(w+skM?+DPMqCnW!%6}Qz_2|YqQ#F`G5ddL(8Zn2yElTL z?&!>2wWKLDjbV~`??Dmy6W@;a;fnr@7k&<8l7^-W3hGtXLT_Amz=ma{dZFS~8hOpO zAxre+aXn-_*~gZ7FYQPD1$w-43rF3jb6EUfC&p>dfP>dHm{{M7BK|F>IsA|) zZ}JoadOLz`ZUM}C_YhQ5Z87BTExPBg6_qQ41n%@1>^GaU+^eHlT!1&Pc={2OmefPb zv7MrWbqbbM?E!~pAFzV5i0Q>T$aQ@N9yVLaTbPB7S=%xBiYZS}5hH#0f7p7j2GyIL zG!SN-ogMg*HewbvxboY|ZG7(SRQ7SqqJ zAdo3sZ$ZuZJmM(-g1oQaP;Pxg;WK9(d0#zHv+fgC^r*nv`*ShFpbO;E|Mk-i(qgD1Fc2&^_8QQY?5lb?qfRe1cJD@}c3YGmF|v^L@o5F}?%c z3%3K0`%(&RQTE*M?=X~J{epFqwu8g%#c0+h<&IBjc8?r@uRhW|HMmMB`*0hj>jf5G zJ{v>KyD{6Skz78e5vyK(Le;GX!G@W#1U+-64D}J7ID7EYjINALFM@j=EQ$5h=v$K& zz~xrsH7Q9pXtwPuCOPJ!VY|RX_XKd+XN`EEh&medyt!hiKNf~(g0@StD4G6RBMaY; z6Ms4LRSPMH(fu_rNLgKjQ~QJFkJx_Z^kdc`%(<7v>lB0y69!Kv}m5o2HKdX(n+emndQPU@7PG z$8t7W%54vi;O(><%BSxZw5P_hPiLjP;@*D{?Qj>|%=GA4*_$~;TqjP!FHNSr8&hPa z!IuK^hm|Wae|ISs*bTw>4>C~q+A9<#reNjc2;!VO|Gja&(<&tGEk? 
z(mh#??HQDMzZK?%P^P|mI4joH!zWcomU!QVWj%N0n!96}@jt`4fB97`BHmQ-iYzF5 zO&)t?3?%M4E{e$=dFa2@IO|w3=AV9p%6tD4+E&;=^|(Yx+SCFLD_wc)Iv?hGy9F$E z+Ja--D3;+Xc2fyg7kDWo z>%J?>f^^tf;>8XA=*Yzx#EmPfMb)#R7-&Pc&tQePV1pI2ZM~0m)|AZ)S3J2)a>WhXc16k-N@wzs_n0E8AMDlW@_>hGQm`^?4Js;=F=*;(47XE4Qtd-%wY!XU!_z=% zK3klaNf~Wh9W?bHg33<|6aiN{vbZn#(EOhxm%QE~#@kc>M&@9ab8|dKhP=e7L1Vcx zc!yZ9iDuo(j?CijNUZ8!5AyE6ggctSY@+3T^15k#rDj%GI@1i!8+-CQBWb2<*@Lyo zi$VJLXPCA9BP#DJfK$(r-uECLdzt*}|LWn}%~q^-V=So3$WQx6sxbAP8QUT6!E|Tu zX#6q-9^$x5XzI8dAE~?Zisc_swp9>wyZYM2(|OAx!it%Ac?8= zl_snebyKoM3+Rk(o1Jka`5N+fk*>JjgMZ5t0xbUuqaMkuORfns_0` z^)(jFm?1`_)82Q(lzdU$(8RV4mLw12MIYaRtl~9>UfxK3CceyU{|1QbRf8=->%~0f zVPRGg^#zx^F{d>@|?Z(arCylR3-zRSqu1%nptZ;*yv16{(KrAz;Y?Vr3SC zddLBMxz?Y__R%f3{Gp=F$_Z70kHwdxh7v1$1T(s_6$*w8gWVO5e3n@`Mku;s<<&Td z_qc$(>LE6tX@HEoCfulUD;nN$1Sg>Y(yKp$>)&5s+Bxc8rny=dkt9aH=?Cp*K`iFP zAa1ty3pSDmD{t;zh}iT8+&k(-{=JSGi;G>cc9DcdCh0I^%2J5<-GmiL*5kCn{iqN0 zEX>kg!%z1H5=$~om}ftZRS=(}cC!pk_t@~FXG1XH=EceTVThxdop##(_GM772zoE1~vAKc+m=E@t(nyyx6| zsOb=mMRh|&^P8kg>pAi(*BMvPJwoog479rWQ1f3is5K`cy6;&~bxsqnops@!2Z-C- za{%+dbrE8__F)rq*I~}4nW*_a74%C2cX56eF)-V%PH@k@>Wp7CN!C@!@6TSjPR)g`N{@G{mxTDl+!y1CGLpQ z@ukofVaKnfQdZ~t20YY%C=dJRB8H#;4MMHPaqZA`;+@%k%x$;`5k)PioO?{;U8lyB z=yq^ypziDew}~m&QD~WQNfgHlY?X_GRX?Kddii|_t%{(WPJiL;@4>7%knZs%tD&Kf zoL7HUV~v9kv)JNG`}JtH>OXpS&596;S2jWX5*rBH7sSoBS#rIDW0}LFkr3CR4JGFv zi-|2-DB2Sz$RA!1)S5u%9bSYj?hnB#oP(iLIy8AKfy}x2@Fb@j4}1L-LPo9;a^}qj zlZDavsu$(1ly4zmMgbO$v*kLob)YZ!^0RsLM3319>k5b~>JEa*bQ_SoeXn@_dI*c$ zKY$$zrhd@z<@m^hc-md(V2E!6Rz+;b$cu;3!F@eshf3M-VRYMGxyd)=;Rnb~t;VLy z0Ci^m{J?S(KC$Q%Cg!_<@~;7`$@DzPMyH8xg=bDUAGKpu{S;V$Q|yi{UXgwwVikb^MO~c zr2;Q*Y(V$8t8^E0abS6Fjad7Ia_2@{Amw9sOx#Y- zf~TX|1ION!=YI%s>rWza$9c%S#bW5LkJ$7#c@E8Ihy}Orljivhyw9E>PRUJ-pu2wF z|ERZQe2d_38O-E+okV5QK~R4lDb9La1!Wum!&uV?f_zAcP~Rbl_6u9CEjbP0S5IQ~ zf?6>?+z=v8+hIUPN0j~*CRFn}EZs;u|FBZu$d?kXTO10uSE(!P;W9|vWXcb%b!Ohj zM{{Y9Ntm`nk7ve?p#08!)%4DLar5!1IDbvF^Hz znY``E8%RfKo<;nbpl~c@ov`fZ4+vXy0-tM$nU(xl(4A2T)j6YCo3bmYS2~IH*9SrT zni-hUwwm&=!+5~+Fi;pagYK%6um9qAq=k1Dn(u5thYS-ipKHnE{@xC<#-8j_c1LFN z>4%Ura5uD^tQORTepqzz4VLyQhASgzhF#ME2VgIjZ}T3-NJmy8bYMEa{u=eBAHo%4 zyH-w+@mFraEbpZfb&7PwvHf%-^|Z$Fk?uSti&(ACv|{v5+JQs=2ItA+*oVJ+vCbau zys4UQ%_Hm-WrO;2d3BYhU`!_-u=EnDudYOk-b!p+w*(`g1EgOjKdlFK6^x_LZ1gau zt1whlg_7Pk!kOjY*-UI>QK&pR1q0+=L6!bPXx3Guwp%3R%oqptk2(sH^d3C#D*2BJ zgT;ND%$ZNK4X?jN%&ktbn5kNiH5dPbmRWnr^Ct@J3%jwh|J{eOpgw%^j&Z#0!Ea!H zoj8T#`eE2e7Z$kBi)#XJK-oV9D9JjgklS}r6!hxI9*EQ_wIH3k9euf8xgC={KPI+h zQBV7Var}r?2j2eZAgHWv3OlY#`N$qpeyC?(9{%tyR4yEbUb{WG|CEkQ)pUov9hZD9 zuI6Ckq6vaz??v*o_TjF?5LxkX6faLU;{j#B!WU#fA?Y;A1p<>y4Dt21cH-6NYQSUD zE40|P76l*TG!EGSl?gEr5aI^02f`sFzca=!e240?or)dL>83P&DX6-wfrydiQ0RUL z><5e?cFhA^G}n#a(0KEyW~2DF#Ez`ZLb|A0!W2Qxl&WqC(!D0@#o>&T^G zp=MA?nu7PePtf|PjJ(6U#n1=`$g}H)?I(YN$^13!vocPsr=#$^53> z2Kj@2iimsEPp9uftcyP(HjB8TPcnsdZI&!ga$l$~JOrALPF(7|N*sN=GY^jM$$$Mu z@6RQBU~?N~jnC07CcVyAIqh$;%6LCUlnH!Ql`o5aW&rgA+OaxA!lS!xM#T{7{2OcO zYtu%zd3yiGjx*t*53ghOms1!r{+U>Nl%8*Cn=xWO<-)e9gdfd4nCkhT&~UC3%L?~k z>NQ6dRcFsY;)_&>9W4VB)mZ8R8OwG*HRgqbx8vJkBgsc(%Zla6Sb4w+4119`>Tc{sOyq*p2tkm-bYI;%@r?CX0Gl@wuw>QWFDOcaM2gcV@t|iD246Th=#LIhd zq*9NCj=l`73MunmlnaKpo1v=sF_aoPFdypo4fw+WvVvqx;yF%Z5i%R6uI$8$f_jSi z{w$9`OKUhMF?<^wi?o4|m|l5aQH0 zuM_>-0$Ad(hgfZR3)JOx&@xuRRi(F4sq_|8P7i`M17qC$Lc)Kj`Y_4%>%RQ?4T%1{ z0Ad9tmK^>??;vC5Flq#4!DM{o+Mawy!w`0IQeUQW@nmh*D^az@lWp7X!s_2`!N`qj zSht<-4O=^d<8DVboc=mL<1`3SmWl)ni}0Y;i?uob!ph&CLTF1K`uS`Sc7O#7x2+LkhPv@-=bV^_ z8L=^FN7tATH{;Q1==@(d?m4|bmpXWI#i4vO^bAK2%dJ@daWK06@dZ>plf=sRC3q#X z2d~nQ2W?a}u1F=GR`wGNadP4_tWCLLU>)Y!F2ZlbGcF7Lgw_)!EY$Q1rqW$dUS*>Q 
zVSbQJyQy+OgLw8lW06nKKvSM#r65*iPzYc^kkC4*<$8-WHrc$(hA1q%J@!cC5jJwM4vx1j_j~ z-S5v+C%%9ual}hB%EivaP)y`1(RLTgE(xV4BOx%V+SF7=hm)=kfw5R^7qY+lJh4^$4_* zYz6l%u1vOclkoiG5a!+D%9eZ@%+w#KXDoIZ{$63g+sgg25c_;m^C@8k_5JvXs=Dkd%34BkhNpfuJC!Z$QS_(=gO7W#6Z z<*uX+dJC%QPvB*TARd!>1!{YEFpKY`Z^8x0TdH6U#5pa@XvEB0XRwVJ8W~^vV0bZQ zu1aJ)@4*SsZD>;D)%+9+ejkYS$-kh*A`=V8B@ye;0~0%c+IDN!#G|y;v;DGm7?lTLzeeyFxq+>uzd~wTo;h3D4{H(`E&YnWxP-^ z%$qH)C68>^O`!JG5kFlghMR0hhZY+wALYnmCgp+Z!&^a@l8F&}>OeVY-33L*Ul{Uz z4m3^&(hKH@)jV6g@{xA$zas^eb5CBpp1P#LdC_Y_an|&k zQ0$b9^-~=AwYww9GkgK>9PZ60Brra4W^Z0vOxVG{p6fLwBbbk+q?4^}xxUd7u^m~V! zhkEn8H)i}>@czY6us_loj=bPC1jk!bGY!|LxV#nOd+sGmdwVa6A*mG}dl zr`a(FF_<+giJx(B5cx1?h-Oh)*s|fJXmm+OZ0IW(Zq$`eb?wdcf83?_bT`N|eJRxb zNnIJNQOxV^#bS~W?6NvLnJ7;fwQRiMQ#JAgG zLC0&Lj@_+LX1WNGo5>&QB>}g`#N#g;%6|XanIAEfF_{C0#L2-tb4z#L_F*tDi?2pi zVxbs2;*3x^|21U(c?~o@Xa`wmCRUuG`TAQZH1RpGitdwYg@xGOX2CL(FF|xqH=aC5 z&I(WDV%@i!Xt<0)yD4R$k5h*udGHr44&UCkm$9GVlCk;%ymrZ~%vc*T7=MFfMQU48?yHfmJH~&KccB zzucbK^6xDO>?v?kG{I!RZuqjM8%qiA1lm2+t-M0#n=@_%j~iQoUy3`iRh?~kh@~}3 zR=l||>ytSPSDh1L7tV&}>pwv4kt+l}@nMRR^{8*Z1#>c`Jg2WAKYeI8&$YB+xzp)x zIPsE3*EX1c(Trkw2X(lunpn`+r=ZiF7hrfb1JCxC^OnaSQ2VAGG&g&&#D(PL46x=l z*WGzgcpW-*xrVLV^q7sk32z-ox!!A*JbTG598Pzm+8f94&<-hAA9BHf>M7`Ex)HVg zPE&^~@$=lx#5<+>OzJ%w?02~FsypkUW%)piP7H(^%L1Huq7ACY{s7Id(bVNRPvlp} z@=)sKF?-{~;*vYEpesS#e3C!gL3*-DjvFhvM7#Bn*&6*p8&G|?yO?8k5#{~=LRFXo z)!--AZ>|w#A@-O@d3=Za##p~LlenGrkQz&KbK-Z9%0~!6=SiP*0amnf4(f)?6U~}@ zxVrOOg;McBVfn?BKY84b)z6i(3bUbH-Lg`U|7Xm!UB3!mqejs)?VQ*~`dVgGBYIC= zhl@57PwCAUfx9PB{x#ay4Lvd4{tC2Z+{Ia>gST85#j0hSMAf%J7{6Exb<@v7=EOtT z{N9gOKZ!!qXND}Bc)}iS#20mY22JEOPWQSGwjE31R}I}XN5=>z(T-f|en@;P9m!3) zRf3zjg1h|T#&yRpX(YGGM78N`O@ymA#)g`rnUNujNFy#l4Qb;C!h{>2=`%d~4V78{ zVqTX`Xj{>NnJu;8`Sagk-Q+V^yTOZ@a{yA=;XFeRIDGtfm?5mm724+L3}xZpp(o zb!T6;2eUe22_!xk$~895D6x~HbYqavqCXI2_9wyb(JmoxD$PP?3Ph#tTQnSY7r1*5 zKFfaxR;C|E$rDqM{P|v7cYh4CU49!XAp(n%Zo{E2gIHsdDOY}2COC$7;%=jTSn&T$ zdHa{ws9M+{>avc5s`v?<+-1%po)~kFzJ9z}A>}FCZ)2s`4~!}*fsmQug5}n(%w*AO z45WN-t&)Z6+Y_NyGk_J(-ht_~|4X8d zfYX>uzoIZOb4J{3ioJHZLBg29M^dqobJ1SjsQ2@5U$get@|}#Na(^&Mnd` zIG=97YYZNP&E`H_-82$^Z1HCY7U}V(3&B`>aSXIANC8>UK|${@BYr3KWe_J)m@yI@ z+-{*6dCcT(o7mBVtZ{G$p%d7 z!&LKfA-<+RXin^bO0Ut}VV5Dw+>6D~oXJoW7%LnLWn8y(5B04jf_&adNR-SLA}>K4Pbt2IzU*ICu?ytz@)53tZpZ6lh%z5s2|3Sj*u_JF?)VAHy@Pnl z%o;E^HDnRL#B&T15kxT_G)GVH!9O7|3MH4hXhS2eYX5M9?j7g@hT$KznHo z$RrtHRhNT7f4Q+zuYRm?iUm*Ku%2@H-=R&Lhbx@uE!K@OzmXN9(f|B-lF1Y#qYvUh$=Xc$X+RWF{kQ^BAEVQG55NsJly9tSj$C>w#lfm6;u{2_(tXGo?V#O{1zL$ zJXnFnAf|e>PL%8(0;PSQLgL>uL`6(6Di@O0vNDMLfD){nodo;t1XH)>UXXsej4kRR z@Q^42SHWXd2E@#PtfHgDoUeZOXz%Zv@?6QYK&gP*7z&7POb5p|bo0yd@5Z{)+V=T|5gDx9&v032A5@@9u1IIzx2z}OSR6=>=VZ30KA(RqtRI+dmq{d&w zjc+@#pqMKlnQbl9J-Lka8ym#~J`ODECUqd3^ul;P7y0o@(A+x*cVvu*JaG_AdW1tt z`ghRz*K0x)$1s63F3AlAIvu=*MMLL-e3rGM`Dq%O7%zv+9>*|X#!wzo6^{{o9Ftk@ z6yz_InpR@3De11QBn`K)LojcMAl~qtd7yXIfU7oyVrFC)?*BlKDau2@>32PDvSt!g z-`|4u|3fT`O2yEKF6c0M08ZTEz;)V4MQ+kFP(LVC+@SpH$-#17G0Bi=r~ecZTwa2H z_bcG~>KI1%wnP3e^;OQH{P@Oanm7XmPf~x!9G^Jc*ka99^GwhqUJJ_122sA+5&V}} z@k5r@Y}IvRzG_uZX0}y}!Og@K8Dh&M*Aj)`E-tLJ;33MpjuH#D{X{<-Bh1rn#}?9# z>q|dDJ9RND(aBY(e>NUq8S2i zIr6Gj>NI&XivM^^`N7FA!J(JHCkESat9%!x_vcWiwd@bA{-lAP4dVGHeOdjGJ7U52 z5s-MM83Qb&U>M#BowmKln#G&xo$)7@=IXI{l^=JyY|NuSb>B3yJyPmfA(SWq8C1CM+IJbxde3e)Z?7omFY~f(XfXFoAoFStN+|8 zCjLH-IF7@4#I6nKHiVcP`=g=BxhwbHb_XQu{t~LModxNFy`Xj92FpCVu%z`iT>p{} zt5pIE<<@ZWnw(|)nu?J*w?H?oi_o;AFL&)>%8R>uG6$N;H79$B^oEVu)9$ zs9qB*%3NQH_3q;#$g&-+mYzb{VoOZlOE+F?kc~+(|Ie8v5B-@+ zNbM+PCfecbRh5iK4}63stNL)GjIrF`)0HbHM{6>_Uqqw#MW8#Wg~BN{#5N{PEtd3* zc^g5(^hC)?7yhZug*z4yJLmXUX!@o?UHe*e>!i=mhEj$v(F+n@T)-IZv< 
zFYpI$+@j3};%At%|D)*K<7!I3KYo;yNRgC7N*9Nc(vj-y_2{OH(ul?|`8IMrL_}^A zjR={D2yrkXL`36Ab@qCQq{Jc8L_~y$NDUE9zxDf{*UT&Jy`N{T&*%LP{M(Vo?Km$a zhT4glfA{CBSE!-o%6Djd(JDsF+5mF47EAMpiME9pgP{s22{GeFM<=lMzlQS8*T=Dy zzSo`l8&JE@o0W(;kiK{`Ds_pcj@v=sW(CH-{Emh<3$SqJFg|6s3+*=s3G%vMK>Eps zSyP50c+(@uNo$42;Q?&#Bu6G0ZXn9GEQ5%}=S1bx-xN_!v+-~kF&`e7v#=xIu%V-o zJV{$oHPE#{pXSM?C+|YeRoYcvCud9h74g&+I>V&&6(k#OissG_v3QviCa!U1k!~f> zI)yP)o8M4dI2KLK9+ER-S5fDRa&)&aVc~O@QKl~e_>?^G{v}Judl1aNw~pZEuOC9x zZ-{xx19;%+KOon`4ZLM-xQ4DoQk^HF#IOgetafF#1;hD;Fe{c(-2%}$cVO*6D{lF4 z2s^y)1+c^JymL3*rQ^tNCLO0~-87t~CeH%-nW-ADyFN@hbfp+ReU8nGh3t-UC=2Nz9)qJ210b4z}xcLek7DSh?#B+MeMs z?b&wnHQMp^2kCh0ttl&~dn+5(f|{3oxqP< z3Aby3{vxpTX<$qJ@AedHR|7^IjHLECPc{yk-E?|kt4MX570 zl{=E(_n?`5H-By_At$l;YHjA%@~NZOGiiA0m^SsmZbk< zEGSZNhb#KbfqFoyebu7s+Zv4SPagOu#9Z~AhK=>BMY*n@XQkm~w7pi3b1rzYy+tw} z^`tNF%=h5UiM|+Va|>$b1#iRpIuc@UQs5VvZB1d6YvqN_ZZr>o7tU#ZIz zpUg%-?7|srLTQE|Uq)x%o44R_Xm@_Fih49xPCzdC zeH*JHF`Y6cre15Yv0pc4LrmVRQU#rPlEvCZmmt%DIvUE}tTHTJn189J61+iC>I zsq}2`sfLOXS1{pL63pBmz~}s)j;c2=#ap1qa++H}GU5RwI7(TfI)J~jw&hW==dquz z0k8gQ%|ar{o!C8qg?dPsk((m~?vKQNf19!5{>9?z9{*yf8FhrBpP*6u?+|zEJ)!SKwrw)l~WI+^c=AurwQSgtRUvUqtF;bo~_#%lW& z?R;zEuVC2gJX|o0&N96p2}UOO!PLc$bq=TRaGz{UedNF+ZvP9`t8Tzb3(8;E05mR- z1HQzJhpm(IDZRV#6yGtdsq;PNc4g77`xBP0G-9gd%S7Wq54OK_IPYaLf=!bG@nA-R zg}Db)_O%i#`uAlmPh@<~bjs2X-71>D-hkC&Fl%i8M~EnMVr_bHFpWQfvS>qiMQ00} z6FR(h-B8{bKy0#&MWWI0J{Xs2#Y)zbYuaQksQxMwi#L7|we;?nOur^*M;*{a6fj=|+MDCidJ3ZV_Yn>>+))eeock!mpuxyMjB-nT{?K+RhI%@~!Fgp!T4$BKO_)b=?gq?%QM;i5Y@)xeWwE%WBe7A-pT zDuxIb3D0f#57pbwifWpf3)?N3ygHDnd?vEc^_JM4C1u|YhES&B8~G;oU{!4lR2&i^ zbkj_?Y{$)L_f29U$)~<@F=SUJ z5B1AuLd5fL_%LTIE4%R)%eM?+waz8N-V@Xr88bwX*Hf3v@f=wHw*+=?@n#M-!+7-Z zKpwqd3`@2&U_td(Oc4@-DwhLjG|dZ@2j-zpS|H0feIK7~apyZ96Zd->&HI*q6DtO* z(P7>g-uB`GN^9?n#Z&FjX`&um+K1Tkm|PKNcP z`|{wA6L?fW87y_t<=O?SAhrHaRGvA44)^y_o?@OtGA373Hex)NoNDv@x=z8TE+XzZ zj6F1K)l&TvqJJlJG1yOy2oEzC=|at zO1_aEAZZD}>6K1gGawIwXA+~v?*kk@wUzR^H--8wXVCQiajffV2Ja9j%#PbGO4d#i zrh2;a;y&iscFl`55nnoSZeJlbydMkwQY3Wlpl{liMG$=9JE)5W!kZnItm_oOK;EABwk_M0FP>rvtyB&=2jaCfg4n4Fyli4!l2mM(XpmgaCZl(SNO z?!k)xeko)*np0j!g~#aG5AQt%^n2Q|jIq?PM zNBV1GD{o;%=^Zqb-GRpneU>?696f*Jm%HK451*FvhcoTCgfeuiC;bBsV-G;f0(ai_ z%mHgV+)!Uxh!Mu-DDw$s-a-4pWnm)Z_8AXx)#t?K&0Eo*ym76{;mmzt3$)1$F!;%1 z{95D7t=r6?VfLR`H=N_P@4a}+lwLgEXF8O<8iW_PC71pnwIFgSA0kDeBIvM z$Zvyquz4^ySRBG)o5&q9;+-J*vQgu{qynXpr-i8UjSx14ayuvYi=|3)V(d;9I!dOI z8^+KhWY8eCz8m?mPHVxr(1)#D<-$6eEYY&mmTT*zV%qkXIO7O)2CQW~BfS=1EjQ&+ z+bMgV>BlCu(i!tYIk9r;MR~yuO>D_M$`Mm$?{5dAy0u?6oa)%(@5*l5IE}- z`13{c`tf7`c`}XSJT#k@;Go$O{`DVX2wpj^$r(e87u}yg@iB;gzuU2PMy|N>uaUg+ z-6`VN_vBu;9C=3m0qFPd7#8uu3$$#L#yirGTOTA=!=YW+G?%;y=ghcS!&ts;tP4-m znv=6fBAzI7;hIY{t21*I8kfx#r2jhudvkn=XDL&p7;l7@owP@aA!g?mTS04*53M(i zctJnPCKZK2-rTV~!o`qDULVn@I|GD>W5iyT-NMicDdxe+%~^G{8)cjwQIQ}(y#6>mku{DdFU>?Ra_dIe zGRP!Ox-s=rL)|$#`A*;ooyN@L;W*ZuTaSUyZ(v+f2^f`SL%u4IdylrlX=78U>n#QA z?Sr}W$N{k*28Dhv`>}9 zA05YK&=b>!2C~?kFVJ;!80X(Js6(5L5ihku#l3r=zvuwSj>^!!6j?eMQ)U`u>Y1}eDa|^w<6ogXBMEvvr#swHAHHZ2-GAMmV{D(^Ji=!v&l&O^9Cx@dNn(?t z^ol+|obUq+8{PS}F$3vLlLuZda;}I^hc;z5ru?x>a60od)C6aMyX9bRpXm2L<<7!Rku>g16kFB-=a zMpuK@U>GXpD4ll% zrmqU%(_M%0SH#MU_-`U8=l?*{S-;@$E!|m^e?49(BbJlxZ6SWve%ga23I!uRgXxkg zNZ&gM-FIa`#ZPasx%(X8KOcpPsL@RRx&f6Zw-u<<&cSp;VgTD6Chz1_TwCYGBlkPf z?xiPdtfM~A$r+-`@t8(lnL$qJ0rZ{S@2-^!%zNP=)INM&bWl!bF~cBmE|W0T%q^(4 z*au;fb1>ziEh~8049iUhFy$BOE6ns^k@}RIB&SB)#$r*rl!=)S3~}o*XI{7J3*>}6 z2kq{k6g83KdDHbGP}M%rbOiQ>$Whst{l^5T>rK!07BkT8cLZv-cj3$(uIfl!fKYNC@xr<%fec0EQAYNiPkq!SqxicOF(_U7BD&eVu*QP-6NKZB|!-8cd z4B_&Ie?28j&x*CzpNRKA`!cnI34Hb)#)xz+M6cO{vfif!g;b?01A&ptU_G4`EmQs^=Iazu749}}3)dYYH? 
z`hZR$9#Cl5jr*FP3*>vF3hE>3mzLrSw9kYj3cWY=Zowek?BM zG=6lj<5_eUQ$!C!<+C?}(aHZnHgKPioc#nN=3T~js*y}P@>ya1tOsa%w;Y@%b)syW zK)GJOB4=aj6mRGf<*|cBC+al>Uff50xBw=7`HN7K)Cktm_n^S#Fe){^nsVg`R&Qy_ zTm;Ib4!y4lHmyb3?jdaN#{Sgh;mFoemZ;E}6|Zn*^{AHObi3F#sNII z?{3uI8pwlA5=Usdjbf=4(4FZh%FFi)aW&mBuiA{UHro9-Eff;{#|n`H$k##H8g4U= zWkz-9ksE`V(e-7RHhm9x)w!{fp06P%p+DQ$+lVLa>CM_EA4cCT<5-P46Rl{ss+c_y zRYCa*>6Tf7_o5?0($X#{r!2a1+a`^ZVI9aWKLNd4BY2`mAeY>=L!-Mo=x0p3?u(S) z94RBFaGFBu6DsBt`)ON&KJy;BMM%{ev5p)a%&sAp3UNR?Cf7jH4lka${}g1V=`i)U z|3Si-^QeAv3QYGiXrH$Y)I@HUSPa!PBtL*qZwH?AbqA&;P-d5S!|yhQaLaoK(fKL) zBX)+1Emj6hyY>;LdY=)am%c%#xOLe4cNBQ1y%Nr}`SHeuu|lN8fgibL$-J#?*|w46 z_?tsxxFqi&s?zU@smg1(Z3#JFo__<$s|%Xwegjyi-42=;(tElgfXS8=31!D^QSLKE z2!j?lmS3?#|8r7{dc2azXlf0>sM}QK#3pNX?fCSs{?0VB*7(OV$%P_ zrSJX7b9+NfR?C@D&?V9A&yj5EItlZm41r9#8cVLy*=GasqUJJYKKwGK&b|gtd-cKn zdOdc94Pj~CPoZR4KW=30%w;dOh>FWcu(D$#M0^coNfW<-{=z=s{V@imliS4fupQX? zO`oa#h)r)7%-xEudGOLppqYLWGFwMcAM#ewwmEKmwLM~`8_k_B-}BTwRYUBGr)X4g zP(1wR0`Q3^Q8UL5FWl(O(zm#RF)=CLBmz&oJ_E|8k73c*)1drvcP>5Xz;!m$uI$%i z_-vUc>#Qb+`L`gZ?=9!q3p-Hux$-@`Jn32U3h`4-3 z$env1vX`F{Gv_2jLGm}CIs)4NWz7p;1ko(WgL#vyBQ+$Ll?*R~sdUfl@c#i}w1ZM^ z{{zEkjAY(tL($|*HMEwGWgqG1t_etk(2zgH_^!z~drbf{B&SQ&J9}umvjEz+I??-CI%F67FxI^R-KpPw zUZKNHU7x^itFeq^I$%1>3eq3gLfmiVIEnI?Mki$~^y~!aMZZUJjgRJ#6}q+gAhVeCl$*wu$g7WrT+xzd*t&mhZwFFM~J&pa*( zTqC|k`K4Z5dcu@to@hq5JGv}>FwHZ5G3E2zJy^C;sSqL?!<@$xlj!v`VXd7Bi|d{v z2HvoToqF9_T>K$XwI@g8?Gq1@6=PU^(@U`4Mw#(Ng&M>A#QRy^CGb{dlR`iSX=9x+3 zc-4&ssQSfTR8FyAsWhX`1QRy1cQ?iwfcaUyhb*^J%&uCA%4JtIgANX1k`yCF&amO+ zrj_$zp%@aI@`Y0$19;nLBXIGwW&5i~kz1&Wc26DX{vnm-4S_6^{LSI&Y=584 zc*Mz+$;OjI{x?&eNL*3vCOV_GEBHfGI?H~mLXX9LS*NQdF>S~3E0ta>_ND<#Ed34R zUIn2h-yH>CV;+?_7z6hVW@|qU;E6?hL0hz4VOX04!Im39wwH8(u=$5 zIM5Dz0y^Bfh+3CNny?j=DNS)u#IkpI_`+Mr>6wkMTz%QLR~}5dX9}Du8O9e)XWXgt z4m{go$k$i9amA=|RLY7K;qqDNW=$SAp8)Eq{p?vMbdvuiQc%G>crh=Cg>Cp3W$Pab zIm<6$Wy}DklWj*188dd}Z4j$HHJ*B?4T?zpa;&wyD`wB>&w+|K#;i>V$gLt54EZm(mrj0GNYR)H~1(XF&x2`XAuJ~LEtq@yqRlDch;Oi z8CtcJPgmIT*kcc|QeJ>D^}S)Hl|HYVP=K{tH6XJ)k4<(3C^dU2HU|W-TzMfjzIldn zyLh4Q{sgY-vsQ@T7zHWs%R%wB4P)q?CP^E{?<<42)jR68OKmZJsuKjhSb`pn<5=gZ zdeC&VK<1x8SpTan^RNhHx!vdE?D_7z&ha_K{hNs;t-3s2LigI7Ow|8Aj_LY0Kx;F- zD~?%--gj?`DUXgque1?doB122JUN7VwE5Qk*n}I;(>?RZ4NY9bVAe2fBV@D=;5NVa z;CcJV8ACijMfp_BK1G>BcNZ4lISE3Yl>%cK5Zgn+{kAt@u__z`FRLMKjtw_za)iwG z()9+Et$cYyU!pGBSL2L>!I9A`5e|E(@WTHXPgEs|ubxvEwP-6e zN=wDIUsS@^MFC7LeSJA2BFHq0An3LK(3_|&OYbMq`kk2sdt|UscKV*(KVu6|33IS zB!GO30Yzn61Fo{N#tVaqMf$Z6vSnIPQrVw3cpQbyUurOJcnF&r?8=Pf`yf4VAs%^V z#b)UA=aoObL2bo1MZ~~?T;=pd<88MO605_-X@3pi5znuQ_S3wWi}eUrNpnn_Aam|_ z)PgN2qvzvGtRV5C9=_QlOf*eGo6K&Uj~&O_{+a{fe~jeEbce9%zuEI9_Y_cWbk(%B zkL0SpJv?)Ey@Hq-lOea6n9-{Xpuu?uOpgFAtsBbh2lr%6@m&xy$BQ+nn{e+M;t3o% z=_&bG56wSW^SnK@XZuBi;RnkgaN2nKKI%j9Pt~GqTanl}c?E+_ZF)?Q#_c?PR!mnj*(qUZ(12tHO){_U`#=Ey|4 zXS^80wST#RB8oB@>o3(`yts?@8o1d8aew1}tgUn+%5U{n6r9dN^+h^weLN;SwjROPlH;|jn*rNc zOc}Q&zD#!3AGCJ!HCf4{xb%9sC_hT9y){F4Ql}MPOS3KIzV}6mYd51h`2@IcH)K0! 
zxU;XZzU+mw7mGIx=9Nun(PWAjlZ_}B%qBW9_ZOex;k7<2_D~+mzx@ykR}CchDDd1V z?I7{E45|mNAh}5Mx12UyZE+10Pv1cNnjl>5PP>8PLU+@h^!>=8UiQ;#Xs-%n5k_5t z#><(PPSGc}KlLcbkMT_X;|gZ4doP%-*FxglOJeQn1~G;Bd-{LOfb{lWc;C*Cr}TJ< z%3h)-Ms*)8=e!3iA4l#mBaq1}=7H&;4k)fUC|32Ntfs?V@l`K<))_pOOQSytRpq+a zoY052efQ^=z#wq~M3kTuI_J_R01>HwTN1Y)Nnc?@KVIbq#1wkzOtvgeHyCmoy$-}0AAeOqmKsctb zWoqeaQR2E!k?J{$zpFChs&V5{z4SVmJg}ktft%7^#yYloAYax8fHfY&@IG%c#uQ1nF0LM?1NsW0-&x$p#%-;;^%=u!%vP)o|AmN?Q zQlY_`x*AI>(e>(J>PY)=zMvKO*$BB=Y;;X$cC_Xsj)^3COnhqOzxcKZof&n;kF)syWF z?#rbYUyE5^-FTu}Pn7?N5D(@LqWr@ObfvvyTW>l)@ep|`5hS}igzTcLV$1#) z;BnuHc0+wx<)~Yj(!3F4W?iK0Yy-B%{f=pr-A)tFf~xO9O;hha)DQX#0=MhIoDnZ^ zbw(3(EVhCZV~LyIw}YO+e^C2)w?^ALU5K;&CgdlRQ*!SJ>S+LrZwi2jD79$%moY@F zr*r(~Tud+KkdT^|NpS$j4q#6lL>JwT}=M2#Hr5}bl%nBN1}#t z#ZhBO-?STaKj^XM?jJC^I0uXGB!Fw4DOU|KM0WWynC~!RsakI+ew8JdHr>L*{okUt z&vK#7&I_$ypGH}CD~Q@KTcG(I**nW^BZd8UEaf_5+GeI}E%d!IF)19?I21{F(k}zr&Hw$mzx6^W4zL zu})C+eW=(O?9UY6rV^L*HpCQ`5%+Bri{G;fB~hUsUfZY}dZq|3?6BeH`a6&tBE-LS zLjUc=@c6hCVx-2r%VI2>Htz>UQ?69gISn4G+<3~(w|H>0Coe5@=BeKfiIF3T^^uWA zOr{di`;`Mz6%+q-kHEFZyA>(jMnJ^L3^Ap<3hE;0JCoc-Il7g&HJAQBUe7`47bhs$ zO1>FM8kAZ4Q7^t1(@Z=F9kN!4oU{Ye`@X}1kTERrk%aiZ{*-gR21<(#MJUbq>&clY zpF$4Lg=#_jZw#>x|AxKgJz4%~>d*Wx&^~XqXwp%Jkyj5v{LFgjXF#mggKLDCre)~d zZ5(Ty*iUS^Z^s*Z)(H{sEt$4qIe1_8V3|8NV*KD(?7UKk%2n426YH*ts#8uvWu2U< zJUumTik|#4J$ETPj-geJ3D@>cEo$jJibj`cE_ZSfmeVsfZ?+GMs+MBJ*5_z2Duku} z@2OZE*M~RWq-=J#%cA!SU0${+kmZ~shd3vfrS%^oIRn8&s=>f50+YUL6dn(?;imEF zsJCG#GaP;v6BoK8UoembYsc}AD@HTzsRoT?&psg{Rx3!V7zAcMft=q@L%5tnhqn`C zKcU@T#jk<{<2AXHPr%Hjy_r9ka0l5DtO(UW>R+EhkzxX+pN6r!ze;&O17Zr^2k7v* zM01>Xs9s+v_8K#q=lr<|B@=T!2)9sb^kVCQFm=v((zWbS}fz-J_#R3(eC)j5%R~r#6XL)Q0urE zLMzq@wJ(lgVoYD5E7*>w-VDL@Ifm4=wcs6rU(vom%G;leWX)M781;ad8b@Yh*fL^j z?DJ+j7gHuDJ4%@OvKNmH7|2fiEak1^M{}3u^zMwg1Cu`Z@+KvDVKQf6L4g-_S6t7Ew2ZIX~BB1sy$kaifgC3#0G$>H%EecL~l& zC9ajFHA)OSA=AYG5_(mDlCl>wkC`*0bbpp=^@e5?@sRO>GSb1t5bwSmo4PvD<@|l9 z46Vn;ijktB*)0fv`2;mR+|f&I%iSG!;o2k@9@V1_UNrUPq1-}j{!`ACv-=5G7WHD< zby|(l?{Bf*6?hosWZD#iK&|IY_wEF2TR0yon|iY8{~N`$>1`TGKXRp1hk;FnB}@Hf zm-ukA88dPCj{E=iX0b_IQBhnC3X2w0jTj5VAKMcHF79!!q9(?W^UIjlPH2fCcv zNbZ;>QJ!ck%BCk`=Z+iDe99J54_z1Qrng|*nOsqg_r;VMF1#Zu5mhhE1o>H-LweOf zwsnd4>N~xM`p1g4!{0&f=t2zLOHP)JUx-KT0(Jk;pP|f7_??+p-fsf!j{AVL|7CIg zl)h|gbZ_R{SI*ptofY0PhMXg=SamxGB05h%SwG4wk0^w~DsMh#lRbOaF^bjxqY+*# zkumE_wZzlxg_7!_Xhd_>TDw4=DwV--Mt>&R2EETFiE-u+P`j<4XnJ!r2DMqUwx@Rl z>y=)hB-Xb)?~5o~JPTI#9?4e|@4oHmGx4hoST>zA?tTwossl%hqOTH{aOQEePB{qE zw)(NWv^3PrzfBpAp5Wd1x3FNs1m0$&!J6-8tZW4BM{WnRXj5;NDC`g=4?PtnKBJh_ zzERLn@4WcOZQ-4zA#aRzL%D3JVD0r5tnHn+TIfRa*T5Vv4&~YPkA&vvnRx6(2%pn* z4!iF3DUW_$%#PiJ;oE(g zcG?~klHK_$u3*vI^Rew^kr)<8`LTd$p!sVqs*k5Y^y5s-p6!7hPGgw-+zK&njR9N# z&44BIa4e~|r8~zySafh0U)@}c%UxW^*ZK&4XsdM+!=O$7A^$Dc7}Vf~ZqV z(CNwy%(zJB)h|*ZIJXH+oBQ%b6Wy4k{~wA(JrmX{moeKO8DRA7nV7V*50kb!it_LI znjFW!L1Md0(b!{+Xl72Fm?b{kw2;xP*`H-w|AN}mN{lj^z+3~(S&sKvXb-K$tfCT# zdYVPP^zX>Vl)~04K5XfT9&EXp3HJ**2#O(L5HYP*j7T*AnMbME9Ca0RS8hXNpZ|9r zTVcVOTc`}~M_CWLU*&6sckk#vJW$U0s)xA1nC?)G0X*0<4-JnV#HI#oUVFG7hAwg8 zR;9hT<=)$vwXGiyFVDf|wg6_eYY@v=>B`$;Dg^VcF6c6r^ZM+*JYtX+J@2#=9BIi0 zh0*uv(j1WOc`2yZQ2Cw(>^QY{`*OR z)`Q7j{fx~8Cy41kfk|FW0Qntj+QI3uTjvGlJ28ZDAs1R-kLBKPlc256kEdrH0!g=Z zASsZS%!ex|3w+cQg$cRtbd zS`*0bzcpdG`~ShxHIB^Qz>mwtRi0}EV?O(}0nhkIeyFWJnAQIhZjB$w%o+ok`pyq* z+UUe0`k#gDKc<4VM_-}Q^tB*75qP3~8Z;aKgGtxEfc1qtsDAwh{H5eGJQu{wJ0PahSm~~%c$UM-A9mi{MXP67myOIKYLLxfSH^HE0JeQsE zg_`C}2u?kXUUbK^{(Tb0jm;ppt|z)@oxtb}KQ7sxLpG zH_9LOTqSBRf580l`aC|xfT=G<3))w^LAH&)^*T1pH`kt}Uw2`Ne|Hnxt2OldoC!uZ z6EXgU8Ky1`M9n5*D_u5aOZpd@7w zb2vAF2U^y_VW$R|qU*yoeJYTfyKw1rwU8b`{XyAe{9HebryhSUPPZG#zWzFfzqo9{ 
z#om4_&{shD-lvK;GT^rV?162i)@*9EBd^Qr$u`Ckt8CnR?3jNT_O{XtN#B~e_tN9B z2S%`(Kkr~!=rC?Dx(5rZv}cVUbn)N=8}2k~5@wwHf@LqhLg5e_Ud67VT6d$+xw9Kr zOfJI;dp)*hb3YzME?oINQK8?9_B2Z-3Tn^+ql6$#-aZ$ru7&&#TtWloli=naO8K+s1?c5JwB<7&@y1!>! z@)mgK1Kf4C8xOlmOsTX?NdNW*mnZdN5r1D1*AO$Y<5&!yF}30n$wp8)@6xz9Z2;M( zn^>@L3r6_-j;hJ!o=HPrLEE8x@ybqtnGPy~MU9p$u6`maz1=ivtNfW}&;)dU^%UCR zf2eM32lZQW-mIXEdWsd7bz6cB#9Z3uWyo%=4q{u$%~y4FDrS6k%r=ix0A!`iqJ93&vvFpo6HpBn_9Z?!z(*5-`2x z6jZLJ9s1oe$ci|GS<74KUNe^2v?-YGCl{W)mAsNW7mAY6YlW2Q6S>pt`4IneH&kD8 z;S~e#LS|<)w0r*xvOzgGXj*q3=sOHIf(}b_*o&d%8ZoND8XJb`bM%nqEBIUBvXgzD%KK#p084L6TTqU{d!2HLrW~=phx*5V`{rZs;@FnVES0g&un~ z#*-zUAReXPGtt{ULnuDD9I{@tLGu4Xp|)TTMl_xjFO!+*5S8XlB#*PP|iV8x*(gJMAKZg59lNWWq zf_+Ue=ec7~Kv?!^j3d^K%Yt%945$zePNZ|q*IJ>J&M1R+l27w|5@f7-1u9qS4PH79 z-pl_7iN`(JnemoPY3d=QRM~KMhek-wGJ(K^=MX-Qa!u=_#hPEwqVgweaazeB^015r z)g4=G{b0f+6K`r7t;Rv|OheH19>Y^E^k-Qa6S;dXd8!8XfZEN&K;m6Zs}f&sX}${# zoCfl^T@R^)@Uysc3eEQRxuc~2JfXwliy0&G zvHfxe{hN)(gVzI?*@a&G>j!?<5*b(G5AIhvXqj-9x9ZMQc+)7@8F?X(khp#6y zo%IZADc>($IZFtR?1DM*efiX>-rVgw?R4uJAT#nFG{)_~q-U9^s=pzmOO9fOIt7h< zHiPoZe9yuUj4KVZiXwbnpl#L=mKw7VQhmzA?3|SlEx0hFkHkxs?ZX*=2eLUs2J?J> zOWyp?1(f_zp!^8USJU8_xY;?;(%f-ET?XhgEY$S%FhnbKd?aoh0CCv2K7IYnB z$_xIXdr0bDNbGS0UF%gWyKBZ%2W*0w%gnfaGi4z6w`l5r7NF@NLX*!u;@2$2-Hl^d#}xzeR*w`b zTBR&ha!Is4EwI!Z4+Q1RekhNAiXCU}po6d#GP?)x_Bi4f4K!fU&(7dxGgo%ubbsz- zFc~fBn<~HNBQX6UtUX8>_v^kaY`YncTh<4olCOX&d@-)pAs*MZ{!Hbr5qHOo;eK5^ zG3ICs)GLRhb-@N~*8V^TpE|VdGm5jY8Z>BG0qofV%|JMzF;qT36zdp&xdC6H(iC8Fl#7Ys};k(^<<`PGcoW+Umic= z5Ex4-qhIm`=F|*j?v?+bS$7#zoxF$%$w$F2YY?At+?Hip1Y+AdA4n;c;`pF(VcI{=%yiU6bnoCOG zr3)2rOR@98zo1VZj$(smL8xP_OJ>F%HaM}v@D3%-$r`s{;;a-4O!-}n@jsj~Zfl(w zvaLH)jSq!R`-zlCp1`vFy?Nm}cUG2afo)d*3NCXfzjNM?7}K|f^mS4w=q=-^wpNfb z@jiGBGv=wM#$xB6dm&!?FE*NZa!vL=3{@S2l&*C0vG-+?q=jhk+K^k?1`^)clY2d) zxZuz0Ap6Y;q1ZPJR9iNPX~#FSV0#$4k&J^Wb4Le{FR7EVP7rvwE6oMi>=h;KYp>{VWA$ z1Px`jk2_KOZ>?v}-;Z$H_8?}lT)`|CRYPD;2lnPDy>0XZnNzPpX!I=|WbTH%Ai#6tRGO9p!kvIbNj@UG?2K5tuqdmU9 zxcb>vY&+W%X4FyUcOZ2!7q=k)`V|^pX|ULCF4i9|!L|bc@@{i5sD~~Kqz+%A+n<S%? 
zh&E?!Slsj!!8>XIHcj4#iXSE@k1YnH&*r=`XD4>H(f2+(lDNu((7e+h53i{QSI@po z={ANJ-#9HcShk^dy0>Dv1LH2ws=*yMfmU3IEXs|UZ_we1-qXeQ5r#Z&@doj5;3(>B zpF{uqx~%vqxuB$0-1Rnf9}f-|GdBy6={%a7E^>n0d;dYf4p*MwW5N4LMsla{(Ga)m z7O_pwi?(x#iTLdS$T#RPqmVH?{!%D9Rgs&xD;4UziDyoILA~UGZ2hoqtlDe+CiG8@ZLQq_=Wr^V@M0rm)&3&0U4_+O}I}As$=2d%9 z(Q_%(-&>9HQ^uH9*oE;c^jPO}NA8kUi(!hRAbIVC9UqTCaqC@}cD@$W6Yq(ZqwJZ9 zq7IBan*^g@9$@SEkxUM|1?96Hs9*Gv<|;X0F>fsM$g^R|L8g$NQh{v;uLw!1W<2aX zf#;chK>cAB(EiGoI)C3UwXb{zrb!Y|PU#e5x-21fLMwJoyoFM)NO}e~ifwmOpy3*Q zU)XZgJh=dY>jyw=cO&LXoFv(#$kY%|Ib3hEw$shJ7nENw^SxmuxJcRPr&J(%%FfhSRq!Z*x= z#|h~|jY)rIxTp<_&z}%NTk0VCLJby=x&RhmjW}O@16vBID-mzV((PQC{;Xf2Cg3(+ zpdDJnD>oLM@+WBDjTbBbs>GZ)npt$$im9W12=ZOl3Zs@(j6PNf33T>SC;uy+x-^=H zU*3gPMK7RpZ8oL@^@c+?p>5O#XnIrunl0p$D!i;{bs?_mmMoALofaOp4JC%nOmb4> zLg0e+=yY2Rq1#ccGHAf$pUG8rJRh85-+}(^tDvXn%JO&`7>4GfQS1%qn12-AymBWXv$mo~A=W5pga* z4QDZ7zmsbmA{)0*t%vrb9vo+G`gQ8 z1Rfp1oi3_Cdc~W254a*`x-?;XTQ;adhjUeB8m2w<=k=yNc)_FK3~CG zk`K^avH+S-4o2+)xu#}>3YBA=1c@tGRDSD1^+I!&n`Y0stdF?9?;!4SXBaE^bRWZdZ-U|}c~F?xmzh-?@#KPY z;J4qBt5UPX=)z;jth%s%Ch;{Eo1!Ff6$X0zhaI*PFlS30gp>2bs7ub(iY7tdauhSX zug0xDcAV!OgVqg>JmHlmzqZeTg;38!zCBJjHX)F=9XW_GB~PGa+&he0a$RWKn=3Z- zOF|`WW#vMS|3(hoGv&H9Lcs$ zQSio@`!Tg(w|L~4DXZV0!9{fEPnv7Qw5x+b|I-XeY`7$<2T~8@)Io9hvR>TqVhPrV zQLngi3=h7r6`bP8E%M3-j3Z3Bi`#Iv?U*e;e9@7w?q|f(J?ml4uNfFx{}IUBAD*c)ir~JokNFpU>z08O}B<&A5Xn?Ht#MpgT55jI)tq?3Q4z-aTKaT`%YI zpVI_Y;wi=9ey5>vpD9mrr|;PvHOeWcnKE=3$ZnWm(#Y=c`JpYpNV?Qg*Nt$b-ja4k zZbGuZFHgB1M%`ogwQ&Q_!-Os?-dTpA^Y0K{zWzWyh8==pgn)4?1|d@f(aavsOU@kv zr_kXnnQs${?v3I#r`O~0>akodj4w~5yo_|zWNb33Cx07dCBmdwTD1?7#uG#5=sBDk z;>r|vTv+z-a7Z07SSTJ`3Z>(yFMZ=dh4(RYyzXSe?b6OeM5Y}?TYIp)w|Uq(Wh_&< zjN@B!qL_&xFzHX^HxFNr`}Mmq{dEI&NIEg;W&_4agFz9ok-jZC&=e7a>O-%!x>ygi z2>^EMUt6xV8G*W?!$ET=TQKf@2b>R5rffC&nV*Gm@hb6|E=Tc-nHz-qoq@di%Rp9g z_#PTpECb_3j=VO)m&pywweyJkGq2R0rH7mF(sUqpXttO;d^@IIPZnmzRfFp@7dF0+ zDK9mCLEifzyzzP#Bz-aBieny_DtRq>AGYC(ws>$019R4VcN9xKk|cU(*npGG4)DGv zf?_y%3`8k}6laKS)InFD=D|9QhVhYq$#~P6>yVJ{z#@GdnIyqebQt~|+6@kf`s!w} z?YK32yWNsI*ewC$rE^eqJP(tb%b@ypB+uE~lcjFm3$1evxN&<3dyp+~RdI!)$itm! zd{02c?e8(G`vFv@9u<=FEg{6D92!rb$B-M>L=Sm5_vH5>rt&)0&2`223(rBL951*w z4C9ljkGg21gr_wB2WGSnbjTXXs?5V#n^;NO;CHap&6Rn3N3c9Y8H>7kka%?S&}geW zkAF!VSLGuiTGE40{&y6A-E6{KWY$bwGLG0R8$m5M#$wV(3$_WYHFgU)uQ%oj>w(PW zjVZe_T*57~ZTZaG-MROtpV92-M(AAo5;i!>hJ=P_nRr^k(EJi>hFPTvC*u@0zt7tV^O#-Qis zo-FB6D&F}-p0v*!v2+wMrMi`aCO#XM&$eNi2}gxon|{nZJ&f0mcIPprpK)p_WevVB z!LBY}?lx)<#A?2P?l%iz!Y~JxdSNB7FMBYq^>49l9PNvmMljX=i$TxfA*%kCGOb4= z`R&U1(c{W3L-_;BH>*x0Lfj4Fku9Ly!-$vK`mLtiEDjO!T%m(%obuV{MZS}0|xU&XULoVolMY}yAp3{Aa?xk zAE;k+4PL8!SpIibm_WU66D|ew6rP6ry*{ADb@JWtAuP$DjXaq$5dX8}GjH5S$)Jm( zbk=j=-$Gei{T zBP~b05!*-6KCs+|#a&E?%1gxd&omShm<8A`C9U<*O>IZB8&^4eRm|#T$kg+n$c0JydDPyXH&?e$4v^nFg4hxzIW5kk*G<1>2f~x6X!MKTh0<+7& zZBi6V^|TW5zl~z~E|f`kc_9Y!K3p?ls9<>FJ-#k*T#M1Co&-(j;|1Z4{S zG1EaOl#bqt(#AiqbnIX*Tl`s2y*jCwUTe>!xL-{F{U4BodGkHNeoT4SNNn*a0?ohz zZ1l6|%B@dC)ekcj>DBeL zjo`Y|2hsBUWlTCYkR=b=2_60ex%A6yNdB%ubn7tZ@%<~%#*p;aZyeHUTCm>f1TOFG z%B=F}?m>5L*^?QVdMFqxl*wQgw;I>G3oKSx3esa?JU{qPSXLjv7XLDi81`y0|M>XaEo{+~a^G^2827ZyVEE^FR!?-71(ir}(dBe;Bi zb9q;cj9Af5?0NTebnUMoMv1OG_1!sCjgbVVESe3n;|D~853b~=xq!)sI>hAk?Kt{^ zHQNwI{%yb8TKUu>oQy44^3{N=cRYd28@Gh{BkfsxRvUQBBAMIkXYlQO81tSq3YNYr z!iIeph4v^nc-z;7YtB8#nAL_n{bez>8>T{{!8@!NLOpxYr$C(+D_V~I0-pA3K=bnv z&{g-*ejFp?Ez4W5%RCpazp&v&_WjxEWkYzyjAPI>uRo9d`7Vq|Bu2H1n>b-qI8U#P zLhlL%Slkv>n%8wZ^CAW?cRF4RxZCXV_>pIGH?! 
zKGxrViqL%fgxZ2hw08h-1`kE1X^gJY&5%^Zu|9ji+d#wi|IfZ+0o3%SQ4n+L6@# z)RQl`HH6hyy~TMpKD^}FAeQnBSd6L{C$Ed$MO_2=An~IxqJHX6Wg@P_svD?^cWsSI>(b zuLSnI^eg(E`xo!bw`V)e-I#>lb-v^iboRN1u8qTa{Am&553NV@6~=r%`PMAOp*&}* z8;^P142212VBOMw{N@0fGj;_DoxdBi`gI?%an?!Df7=ZCFDc7alcAOL3DnjJC15sN z2dc~HkU7wOi zbd!){>%n!CUn}Hkw&k+qwHOfElQ%HRv^|Mt;fMOLkTKW6xypo<8vTRJ=oqSh_Gi=m z!+5ek<@6W13+lfOSn>5XEcUoWU47ex7u$$SRcwv)Z(!j?ngb4y2R=58#cub9%H{${ z=$8wxBi&fWk_(vEF^W|z=7Qg-60D9Pw%fHZtbDft3VE5(5K|x&_kE0w$}+5Z`7ajJ z%r@f?F$p#thdR%T_%zv`U1>MrQ;rdz`g`&PHc;l_`yWv~Uc?3euQA7kLY>!socx$P zt}QOdV#7K{7-M|ZC2Vc7BvxS?4e9`YnjR6!pW`mPvNDwl%fR~_b)Y>M%2U=~2G!)3ig=fMP}T1$s^~0N-w(mqVW%M_FA%1Dc4o!% zfn_)!fbGOMQNM^KmW2TinYar)@&u;z?t{vHZXh>Vf~v7IP~kHP3YV-y%P8Uj4YEe{ zv-9QBUXko+Y+s(_^$U2NE=FQ8jSmC+7bnjXqCXjY`<%_^NSG z$D>@4OMaLtFhAFew`@1#sqHFk?8*T*6>z`JThM#zX>{2*9h>j><>UAD;kk*o>8|h} z`J)$OvW*F9&K?#TuH=ySTL-Oj0h(ziqkd3?iY`;`rzYKJ%S~{o+bguUycfNbB=35(4m@&wSo}vbUQ^(};&u)K^&g8+KKC?ST-KN8-}wuTzcc4cZOXC4%a;AI zgS?V6ZCU5>d(?Hk8nb9`(3-IY66fv3M4G8B`}buFvU{oiX-kM1ev{v@>S77dHdMzZXi(cu2pg*{)r1rpq9q1`r9RNZbY z>m2eogsfLEv%Q-!Z}nx|veJ*o{qtCG2u{yUKmuwxO@SfeD$up zM0-+qFuYCYQe`Y^%cJPeyxHR5K;Ce=7(!p(#h5=4B5tj~)^ZDWykHQw8RE>FRbgE5 zDG`l571%|)^^SKMyz|6>DYxAgbW2hMO}vKnk(j7ag+GIOgeA~>K<+v16~!OD$g96$@`ETO?#mHdO@mR4ay&V#ROtC$PKhO znsmwp>TG^Nd*;+%z5|zO!5A{XK$Kd&gY$VNJToT~+jse(!e%9^=MUx$*X((6z|SzF zAdrs_^kE%~>%cISyi;!;DfBf%p}6lvh$6m7+jeWVc+)VZvv?;+yauzCWA3BMIz}`! zHe+$J1ekp33FfE#PK=XgL4KrEP!+x*ey)?~H{F-pAGhYqpGR^_ftbn*6#OKNq+VvV zkX!H$8kYShqnkR)w;0;d2$|2CFm{CoQ@t+3{?i#Sqgx0| zR?*D7T?UH(n2 zZqcf%>=e@DIby*9Tb}YDn1@WQ5nlg6y2ib}Ectpi=BXsC(6tWF5Ab2~#dn3$i;wZ9 zi8IgM=>(^R0X$%~2Gl>kQ#7<57h3OK#Y$m6b{X_x=^gW7|GQy4<*XzA(J-7Z?M3~c zdKa$rY6lhZeDyC7npWCieC#l8v@4p~jFB*pKP9}>B#b$%5ukNoBwPBQD}V99g=a6; z;AD%tSaR?PCW$X#<1-}e~J)?M;o&rj$<`ZNkHt=>ZO z$4Fk7^*dTU1>V4eiQ9Kk`y!0|BikgDLmrH0hYaT}_M}_ibLSOv=@~xb33(gW;GHQ$ zSZ8l%h`X~HLQg!yx>uzre^?Lg1;??*?+U2)U)Gkax(A_s>F+H+gu2I>+T>4VqP)8~ z2K;HwlZMiK7IGK6T-w3@1Z8=MC9=rZizSbv{-B?um|)k96_&g3)Ki86o0S0)I@=}J zegtXrCt=M_fzNwv#N2-O;{i{O(9HP)Bih2CdBAY)@aBbRxx|6Bwo93QpD!zZKNF2( zO))y^%Z30o-B6Z zc2vIXFB%eCApQJc<{6iTx-TyRueZ}Ze~T@5_=)Zbq4wWcBY4Q`<>+fMj!6z36hmI07PL#fX_w|9DE}NzKJRsk%C84N z^K~m^YOadywbf$Xxtri-d=pbvEJDw5KCJFz9*npv)fKeQ2BHe3N| z;$boJY=4&j(>V6_Wk0UI+X<1)-PpHt#0byyEu%MqWc^?vv*Eb7CeV_p z{3Z&IAN69Dzm;Nx=*a57m$N@YEm-m1X#U*11?7{?wfW0KnGI!4qvrKx`Y#{Flto3ToS1-i zE6-s59opd@+btLuZ^esi!?1aJhfo1pmEVHV)^g9)oo$VQd_E7Y8yAt=AMz~}Bk{PRa* z?F9$$SwbkYn-t7sVx_ov39)-;{tLOL=3GAQo0xF_GU!}}Fv~q_Xz#5TQt~{Z%GQBp zJZM9$a0SOdGUD21HD*8YWLf1>{@~?MZtU5Y?v2yo&JVON!&F77gBJ3=2J)KCUr@5- zgF<^G19E!x0VT?xX??dD7Y0W8t-AQ<{KgI!-)2!Xwb+TLvC;=^PE(Ea+T3QD*vBsB*Fe-N&_XWn(v9cXtSw zKlNm3gZD%6+88+RZo)$xbHt{Y_qgSd1&=$_1y1?p;H3QslBTnQ-nmTFM4l3S{|e@o z54RH2b`MT>lrobo4!ptHA9Y&;1qWL@t{gNPlYCvF`bS6Bx!wk2e_syzX?+A`&NOr~ z9>CIu=%IGl6_{))V~0;%z}VHj;b!7E{_TYylO3B2r{6lWs-dqSf2cE?oPV5p%lvuM z!Dw_C^i_=6cNMB>N1&08X@qCTziz; zdqC#9DcJe!Cs6)7xmX3tKNbJy;DceHg^tdxOP-Cn~!Fgrt5pXq4l{l;`(@)fXGmQ6R`| z$y3ao)|a`A9?b$S2NR#E5*k)D3z^$~2GxL%Sb9oemD&5z<-qsQO1f?~8bfDdB-hvE zf=Wpp3JZo{xIZy<$;VMNk+iqvvDD*pPLO}_q;AaBsQEmMH zmO4tD8mJTRC4{V;4wJXr@(jcOP`}(56&|Bl>?vPfy{tF0PVCJqR+pjWWCu2%JRTCu zhO)Y%aM+OR#5=YP=FZH~#emE51>_p7?F@uk%s8Nhi49rn8~jKGarv!pX{> zEY`dVB+(ntQ}P7l6NdA`s>7l+<%fO$qi;drc&Jbh;3@ry`*fu@Cgx>BYRW`pmke3i z^H5gt;ulCB@wX7Y$eT3>o7ps%)Px_HXigcd@~OIQB>+F(A@d@U-&+_=KVi6d zlVd&9-t9o;Mq)62ir~Is16inL7ta4i8I4IVF|c+J3z?TICK^40>$`fg^K-(vX3t8& zwLX%o8g41{TTX}xFJGYkZU(NL`U#sdyuhI%0UD#p6A(!p9ko)>UORxS+@7Uu^Fr|$ zW!{aVa9KeB^%uQTEJ$dFyw@5i^zjGT>}}|{nSKr*gPG-AYp(dWKTh7`!}*aWm@v|W zRnmUIi`}6i#=?+}zv5nej)rF={+AK9Fe^hKqSo&8V@S 
ziW=V~qC;vUXrA|nn8-CCF*g=ihY9!F9m)3nOMRpzLqIml58J!;VCvN!(0-1*dWWLR zC(E)?d1|#__S>J}LY~{M1EW}RXadaeh~!PtGa>nR(wKiBt#aOFoN2HZ>sPxoufF!o z*(8wbquPWcr2oYw)(A1DIHdNOD99~S6$u|Vqi4ugls{=fjKpY05@)NXAndQc%CrgB;yIXM1W=}3V`b?1Q>dr&P zJr@)Ev;aHj%BFrF&5Z9ofco+8!H+U>%z|Ta>n7+t?ZNUzV_xIah{^MNVtmO(jG=CT zx^Eid;8`hL6jn^t&_|&-OS-4`8L0TNH*Z`^J?^8*wH51zvbvf;UOKP`lQhm3K2jHi zC2?r2MzMJT!D9$Fn-$7$&m^J~WOD_`d!;2Z1 z5-xCQsM!8>5FQRWP8!{5QGU^a%g-TJ<~6}L=KY#+yRDHAY*W_RN=Zr~JKfgW4O@Ya>fK<5*w)m}IZ zHiKxkesL9VjUp~j$td*hU53d|CXg4U3DSwz;<- zgtC>2aGvATn^nx;D;lr5iK;tGh{;NsFpuA`cZi&D-a$7FECfscpMLx$s+LR@V>aZZ(cfOBor}sYBY= z{hWB5$w!pB9D}O3)6g^}l+_>X$0llA+4G7I5HkM=+#1@O<;TZ@Y`|$T*4>8de@haJ z?M=aM_AQLxW5p}7lj#ggfUYORg1&kUB$s+KMr>o3;0pYsFYP&34u?k7E>ztNP^`a4 zER(If!O~U{c9cuHDJqDk{vWoUKMXbAems$-g7k(FpLpMlFTWJX49khjFx(hIzPy1BgCdkT zeuI>)eVBFEcaZvfKJF|T#^#Lf#*+s1#IliL++nmHlHNW9>-QHy^ZiIJ%{>TCw$v*! z+hRJ!zh>ajA$q?mV9&4K-lfa-RlGkNO zh!c?^)(rd??Pjil>r)(9g2PKpTKWvLK7WMVmh~9#^A#8DErSOA^ty#oRlsSCnMp?7UAh$$ort~78j@dCx ze*=rV_2j82sU|DIQKno$7)?0`8FBaoAJ%i*^23!ax|HBY} zoVjolRBV0^o$L4DO#7SAQP2n@4g@p#(-54g4Cb0?&D0e$n$D)fD8JQT@Ya_=sdolu z{4apDeYlU3kc@z0t4*jojbi$#NQ|06zqgVfuyAjwSZL}3i32~OIyjMd&U>K!rADmj z^5W;e3}#l30$79PpRm3zn(yb7MUDH4`?eAHx`2GVbvMM;=|f0w+6);rk?j0GL0mQ4 zS@FmAkzBHBtQh;955Gfx&|=jjG{}kOD=ml%s*>}S6~G)E9GFSUD4rEgti*xeK-p`! za1xDK!`32y6hZEMe?FdOF;(>vG2(49T5ok`$!1cBn`elr!Cq|M0Lp*acjHT!jpAp`$OE&~ zM_4~)40Vr$^UC<$=>2LZHpCatA=s?m$M>I;P{ynJ6dYP9scMq zq}_V~(X}+A?ff6;dkz(KKmDVm3*Z|%;>DhQUTDYqx-pY>Pvf}yz z&?Jr^hTBvu{K*hYmk~?U;uAV-v?X3{j%fWL2-6sOH5Ts0rnei=a%vNdSlFL8ZK9u_ z)Id0!?#@rXGvxBFHm$5HgiBTw3CfVs#EBUOT|a&UgV~gq{1Si_!8$NsMx9XIixmN1 zo4d40Fk-1HIj+O1*ne{b*^Q0-IZv0zJ6!xHInY)-xck$CLj7JDB9K0Rn z(+pTel^?ToJPPg;yR!$x_RTz;184Vz^Yh0H*u2$!c*E1Df@HIT&6zolt+^P)^GRn_ zpPsMHpGum-!PK&hTamo@dq<{z^s8XD{R3roBUr$%tFdkv%~Yk|3ti5ZJk$6QB!p1k z!Qw&8{=5lKUEK`IylBu=MhVi&Qp_}XBBs|jVcDh-=KQl6Z&{ZIX`>1;J$xtVY*kqN zdj>`(j$s|LwU8;X;tf6f!-Ltqcx}%bl!|M`$~wwC{pT(ux4jV*wGx&^{L6q38%S@X z41>NvQ0*|ohUXO+(4Bhf7e0luKV;0$Ka2E=RoHpQgkO&w!RKGFVN)NH-@OOT0nHmA z>)&-~a>dUge}OTyX<$9aj}sXbzqRcejauE zuJ{DXKeeF$`ZLN_rU{(|k*u<3E95=WLz6rVGg3~Wiu^%w*LDfgPnU!%I)4`9k%8V0 z7N}df2@{Iuq5b7C>~qstR_T5c91;rgZg+21XW5;5+zRD32E^%2FGRVYmpFp_7Y;Wo z#Kj*5vYbP1ytg zzzfZ-Aa2kx(K`D(oN+ja8yC%l)~D{c|DM1klDlWzoM?A=fX+ID4n^`{0b3~3sQRHn zY$!9s?6;>N^9|$bZ|@b=vIzdTj^4M0PCRRS6Xo3cqD#|Jj9BWy=HnpNkm87wpS?l- zZsOUE@#Q)3(Of4nfTa_Lvdotgfd#fH^kY}3o5RJPa_@B4EEwMDFSXA=houZ_Zl! 
zIqgNg2hHH}w+h|JJeZJY%A1H&t{b;H$fDeX4|D0q5>M=cg6R>wqR(7x_)!mP$0MRf zLY>urO~;ZhTXsFI2UDJqV|-3Naas?Q*Nrg(&D(9DJfMe$Kk8BPbcI;DXd|d+tcGMq z1MU=8jf#c&kb1#_)opTRGygH=O6A`;^V2yDVa;ZC`yMnd$?FCcU z)0`!DWd(2CSxizt-hf9$W3yGr{+Etr5fW}4J{x1Zldt^OojBs$C^q4{5I)kY7q=d< z1l)2OKzE@lsD+h+OOI~Y8TBh{=ENmBaSW5elxu%q1Pw!}PjLrA+x7n3+g5{#LoT8w z#{ln43*~hyg>j?odD0 z5FW^5Q%j+%`W60hocdrE(EKnooEd%Zz_KdJF=d9FCk&ef31SLF?UL}xJ{6!J#RVNK zQMj)5VanNi#6aC(=21V2>)-qcDLW|3GPWlc$83Ox741-$(+`qX?ZqiZp*+EABgQWI z3Du1+MY)F#k5INoZRN*uyLaNwIy(1FJa}GICpP}{59(t6R!lDb4C&3Az-3|(lid3U zrC@*)|C=V4egvEcim%sT^2( z{ckK8Q3$bXS|H-`Lde)o+M(tj%xVwgF()3uW6}gATPm~?(tdS&$B=(_Agd6r3l1q~ zG1K?Dcr(af9eNZPVawQu{EfSbA&ZIXO`Ce7^vGHC{j}fvYLBi*m(0E zJoJwUo@!tvPRX<6wUTD6yZ!+e*nNS32F+#ZY*^N)$Ky%3;y0ELEf5E;A(Ba;@@b%d(|c|-e-$uAILX$_!5pdWX$LM8^SEN>_nGGDb(Tj z5-u(xy~6mlSaCf~PzO>D=tCDM!B>>~MuD5(YV6plgZ-_0v2$i3=v(FrlEp_A6_IDf z!+zV*+ZTBDO*{6@D}=TD=*}{?Wnj&@1C&=&iTo&K?!ri``M8UAu%e*4y%VLkv$3kH zJ9%8oV7r+Yi*oDFDtsNlH>)rEO!J5A$Y!C?8-+N%DbFJw;*^2CS^K99G1KK|6h|sp zQ&J1md%82fvo}HR6$CNYFF}<-1RpkuSQWd1(Jk~ZyYl_z@3;7I^C_-ewRJ6Meov;I zqpkS#D{1P1fm(@Mn=t;YDQ^rogsJ3rP*)K*SPW**g&pWuy$M_P*--b#AW@#VR4l}w zF|q0@NXv)96t7Wi<;X5bkJt@=7V*VeDB7W2)_c)Y0tKik)a zOD_DaXm`9INO$$cs1P?+{KEi#v5$-yx6Q`X?Ky&`M-R;V`V)&nlTlYp)+$y9L5g&l9#t9c|pC1YmizI*!o)mR@=By=d>KM3UA@$5bEUo;~ozC z#hn)}`h)=wI0QCEaCOBfA!YY0RNibC&DN};E?avpU%w0`6Auc)Vmt1d6V6JF$qQsd zyLE{H>d!{=)Ugdh(px{Sd{!h#p82uRzkHaR2P56T11mz7gJtD;*wAdjTAnnbuVol> zPV-|fUo3c9{}2|_XE)}3W6Yj7=3eFf`O4A~v?>f_N@Wk?jSUbaFW*u>)=MGoW{IHB zIVUu1QiHU|Q6XW>pP=^|Cqzg?xO&qjLEDgrua6k;;==XNHHoA}o0YcO}4J*)Xc;M2bk=G)DI>HAQIK!_KUtFMaN z)qY&n-AM>(Z518H>=hO1-BCU7s-T5`P*yY$I&Pl?m3b&Ac2z;%*f72Q>vahE;k| zWA+DBUEhXfyT>u>FMWCQE`RQM(Uaxc2C%L}4%}~XKNfw=faj|WFzVVStUELi8k$nD zxLV34jCEtqy$A8^9%r!9*PmxD_=j?hdqA`1oREHNHpC*`VaEls>8FUhyxNvLQp0#c zs}FRp`2&^1UJ8nVYoH-lFQ}Kj6_;-^=jsj=qZ*>Qs@hGP5}yJN+crS<rV4wW&N6l4ja#D^!8wWG4~Ep@}p7aaD^KyBeM}X89OuFO?zu6G!^X_TbQQ`+CyJ&2 z?>h-`=k3>rgTwg8)E8A#ZgDw^S)QWak-dFcy2n@0EwW_Wmk#6}3n{-J^oJPgn}coM zKC}~>A=FvRdFEcq$vpTI3RT^(=5R2dX}lhs#|>cergd7KXO@_l63T8a2;yp;skScs zJhZ$xim^ldP~q|pyS`FB!S5cpjqA?cC;70mAN-ke&;lZQ+{V~`qcL}0H{P&CEu3#2 z!&Ck{CD_K8P3z}8p-<;jAeP|tT5%YX!F8{=gmq* zC+F_$^E5-|5g*BFO|1CRKR@D&O6o#eLVK;SyMjygdWf19!1RJbaB%HJRpLBB{{1n^ zyX+DVhd1EpU%Iop?$5yM2KiIc4Vaqt`l`7)gGXiRaTz>E@##jA`DZR$SM`y|uH$4&L9XXhN-W}5h z+47bLMm)>Hnq}_!ETpZmWea-jK?l;U{BAsfkN`KvhaQ3KId-7gOLw{A74ZJo9_+cv zP(HO{6!lBV8F#M2Z?vmRock7v$Bl;l2_2vxc3afVlkw!A6)>XSo2$3f2vx!$uG+-3 z4QlH6joyb##~AaD@}r01sJptMTykF!6MmMUZsAd}qOnQH`dPvqf?rcU zZ5V4wE5M{Hdr@`Gk*DcCp!FFA^VV;}fU0vizma+z{K-QSBN6Io-;`zk6{NNvP(PGd zf34Rs*X|*zA3rUZC{KaYMlGr~oDo$9P1?F(duB&oh+N9`gowpr>!z((lQWoAJW?80 z3)jh@%`FdAYqnEw$$cTVV-z}!8z@SB_llj={r=!sF!xp2@%&qMs4ub>Uxy3)%6)gf z*~*<|Ej)#><_DmWyld73xR((}NHQZ{w5Ibe9K%^?pF60neGe1A_GV_A+M!(&%v)zK zg0!2YIre@QRCOs53zgHMYPbZ)Z}MT8yXWHZADvhuG5p&`pN0JmYOFfB4K~cBZ{L6J z3T2fAy1a;l(s~1CS;fuDE;EZRkMmw|0V$f-aJIV z%0_Vd{{MT*{t~wg9?R16$D=c8dqs<7yfq{f_v~)tz>Sl#PtoaHibF@Gz!t?ZHsf8IO{^u3JaZ2H zpSKDq+w?_n_|=ho%$8j1Vn%1ma(oaql07w-^7=zHC?9@a5$i{tvS%NPdqNEO@e4*w z{b>&Bl)Hq=?>?ZW%Yf7SP(tQ`j(CXum6#fOR4Hp5; zt~~kJ6;ZnWqbPR_4Qf13?7aiMSiJEVCOv7)CzuZ7aaprqv+Y18{U3RToDV_Ny<$xL zdLHEkH?&bZccaW}o!B+Sm_P3`m__7004Mrx$K|z(C+RzwZxMrI4mAgcg%;w-jC631#w5D zi@?{`h~-CYfdZ#!ZsvB0{5oyW`U{XBo&1aqj?DVzXo&FM39ZSlAYCBCxXW9GRFhxe z)_w!#ApILZ>cV(Z33ZQ}zQP5$A29XGIl<`=wZ1&-4+U#GYt4%;4B=C`8B?y)q4In!wnPl)y8BlJ!|B8wzA~JJq)o%(V*{Wt zfP?m*HCXpy6L!t2z~izoCV8dTb_)L^K5)3$@+^!=olSVlci}8XY({lKIV3-ri}iMW zdB>PMH0&SB{MzzBYwC~QBdmE-kC9xRNX*w43Sq);{aD3cz1V}xavptZ6q80=5^MWB z$G9U_yzDnSHrb{huUpmt?sKI~D7NOEOIN@}(%MUkBs_m!Z>%r~1@$i*6~)!#A@4eA 
ziUQ!Q^+^6y@|n&u7pw`{gg15(E82N9Yrby*M>bP-FZz(YwbBh9P4(xixB)YXwc>13 z3d;7q#WJM}+RnIxNxR;FHu;`t{M4PP3bpc}8xodmWhc}>K92GK)T3bO!}Ry|WU{}! zncXEPmix0FRT^*kqhanW{MW03^rA0nH}>U++g*9y`-`B+uMi6RQpeh1TlRn7hrK$A zyu>ITndQJ*{Wpk(3mwol(3Hm$Ye-q2UmC8HDoz$!u|Cn7DfN@oQJ?hL{&(iwd44Ep z`*as=*Y3vF@`KoH)k=Fy8fq%2hF zE*SHP@+uLh)a?VS@VtjDuaA2TF`t6ihqZz1CF{**n@QM~GdBEIPCuS$HWi{)5)*XG zS8VBK&qF%(U{(#iSyLYolW$H#ZQDAu`rMB<7nGwi(3^W101NS>?`4}lYu~v9V%p!K z#QlU?bF{w@y*CfzRvtt7AAuO>UW}PL=E4*T>^5&{hxQMN=v(5#9Y_1Jva3=^-x0(W zjW>i$%G2wm3f$dWA8Qv5;rV*Ip{;}1{*k(T=gwhl_ZcS^T|}DUK3BfC(T&%X55VXC z-Kke_q&jA^4Yn`$hDsgE{7ncUp0)){X_W9_bsm^(+zjd067h8>nw>V@FV*_wK_2ZN z8zxEk&R}8-U3(2h7TK_8bq~^SYB5(@j>0rkUNgmzh3tI`nF*8`JA4XH(_K*Zc893V z^Ar^7k7CZ1lf<8|hn5N3!NHh%f@trYvwRe{OhtG-NXi;|Te1+xE-cTrMr^uR0#&c1 zEc$f@rp@imtJ0j9dwUkt9Vo!onq)!aKNchrzd*9yQL$)HIgX;9E@kjn!RKrMH&b2$ z)x0r6>#{A7R`4&L9&gN=_EQEi=`}?7&O!T!G&5Tm5fgkH6h$zWH)}W;d^m&cf3;E0 zyl$8VSExOuVLawqGfw#>kU7~>Uam=>n^aNXxbAyU&9xFw&$VZ2|3l#PTFxR4QGR)U z2-eAmbDs^BAQ`+}OguuLbNS_A=|d)?^x_zZj#1z!J2Mu|gRs@C1RIr zOvyi!6FeO9+@n$5yB9z2?8RG;0{^*XFy$+%uxMzln71rh*h6>ph&wxg5SUCFGy&6D zH&E@O4#4@;Y53b?NP;`qboVOcj336cd_<@pwE@x}Y!KT!>~Qf=OJ+OcIXYL=3!6x% zb>4SMj5-p=l`SFivR-+_#~#hg3sNv*TOc$~SPRo444K+e0m^^Ri)Jp}PqxfdyHZtz3W#}>4 z#ub9>yDfSgHs?M|EV#!=6COuCl`WOJeD;|^T(|2fRBak7O752zKum^uM606-IuAhjzjkwz?EHmh43&7h?>iIA#s$Izv%I1+z<6H7NDlv4!J9_ ziSoR(c;&)9(91jr%!T7G3GCpNttmoVGW=g5uswl--Sl8##0@Y~6v8zkLSj z{z+&PFc6gIT7>$YiJ-h}1d(IwQN43923pl)$7WqFH9RTmC43{j;5Q5k4PlwVW1zZD zmp6~?$ph0$@tcykS!aGwXYa4-nt9Vey67l6$^%*Uw_#j$<0szwPI~VK3^)H|H!Du=%k%m_hq!wuFx68BI_gIA#Mw_Uvw~*o z49cMd%7cRA0?CN{>R8Z_0lXYQwGK+&4nqMrH@PTWLYS|NQ|l$!>popfc+iZ&s7 z%pp{09Wh+*7qK|QlIfRQv7!%~1Yep9c9WN4+j0e$PRtTA`k#l`+oU1u>xinJ*2HP< z%{Dw%u(4imG4j1uki;p6HBVPS0U>E~w*HIWyA62Z^lY>&`wXdhE1{7*9_2?}`QXfs4kmaOLRjf={jo`?QicU2gU0c)0+bdy)=Su@c0S0zWNp$MJM`lb*H1^Bel}h`k?B zGbTfwbZ-Yl1vv1`Y-1i_@(MJ2v;zBn0VVH<*~f+oMLM1kd7?m^b8{H0>FEZ#30|zc z*%?R~%IwI6N1@huiFH!a9 z+)_Q#dLlBMEVHlQ}T4Ux|+@R|oPZO{LtyW3@Pwy`N| z=vINU%7LQn>wRbq>ci6Qw+V8+f$;QrC`(^>Sqv`C#a7mt_{v7S>BUiKtnq@jvQwDy zu?V&Q#fiZklOgwdcOJ2&1gsdbQN$#u=xohgt8b!VXAj=mhkE3ud0}vx20LQiSyn?C zCezMar*$y*jq_*YOZB+^7z3t%mbwDB>F`+Ua?gF`&cwmwxx;;;Y(<;6XcTc%SDN$i z18z9k@;g)*QCG>7RAPd*5VP8VN0V>e`-l(upvf2bYA_E!b466fn2GUwqETl!`GxD| zip3vn*rG*sptHe{d&JS*{_!T!hWc~sk7tXrtEO1A#t-%0i?HZ{g0*U)It z!nI)cjk=5L_KB~nym|B$8GD{)LK%uPqD1zA97is!*h|iwKd(lUh^=7%IFwC$6~d~< zmZJT_Iw-6D3v2#(i5u!dSX~^kD(SO1Og|s=xNW&?Vw}$v*;nO702tzS_YJZ5CHw6o8 zwqV|z3hFBV0=rM!@Z6ikZeKf!H=3;l<;08f>P|s?*)d(7dTBZ670d?DNi;uH4`BJ) zp}cU93v4SPrd0f8xw2xU+Lkm?+nugl&rKhKpYO%J;lo*RNnbvB-z%^=F_Z_6yogZ& z)SEYEGsZO0^(-{X|IV%_SGvuUO=YyGx2{uQ#K={Bp!pjQ*Tp6AqM8E6- zm%sL6stX&0l<9+cqx}_BEvpbzvqlT5$(z*Nnna@)cI@f)QGDXIF3kO)1G^F@;hCDw zJTp}T+Jn|YSz{8^7=A$Qm|ST2yMwsRHwEVS7cr}P@%Yla`12RWTX$}R_=}$WQLGF3 zD)$M^qxI2t&|u!2=FBJNJMsmJ0A`YK23s!tjWbtU^80!HxpdSHP{v#q4HnmfOfe5M z1fCSu4&?WV#XY9Uk*mKdpv|Qpo9uoarJH^w4dXuSUrb)2?-L>L-QVDNfO^UXbYs!; z{Xp$0!)s9rnl&7lQuz*=o*%==-CBINhhVt?;AJ9v5<*w^`@S>OVLE-vH@b=MTA1>?h znMY}#({MxxH+U~zdSuOW=M7|xZd~?5#>13nQtE%I6K%Gf#g2uKP;!I`(H_OrnNcf9 zj}mh%agp5KZ#GEBeGuH9_%q2^6M5BRVly04gH8KQ98=Vv|D55-#}L1x{pGmzIGXc1ap z?1x)VjQQ*p16V@g02X;`6xx641chW1mOsCV;h(i)SczyBjnzA=(DlD?9UusbG-la70KmeGY?aa^X`Lo0c+ps*+h&^pL=Y@IOG4&Ps6n;KI z%OL?wa@YrJm4leU>-A9Hy)RcyIwHgeX9*z&YOHL#51y$H(XKv(XC5!d@H4wm=6X+U zP;dq-mbtN%;rdLOS1s6hoyYdmomtHXS5ObQiA7WFMOpo3QFGi!49aomHoZoJ?Dja2 zzSt!i0A*K-ytrYw4Gh3_(v$Y< zb;Bbg7z+(H<}H&u@uHp&MAiPDVE4Tz*QlSV(_b;>)sHeH2aI^yh&mK6y0Y*mkD<-% zG0e&P3lald_@js9N!k@EIQfv)`jj{YZuf-vSLXaiXA>5B-I*n?S_1EMdNF5{7|@hF 
zQ#aLLqfCnqukURR$-jAUpI_TB?htuud*=*0OT7IeyJdnF0tMy7-Rh!${ty#5-KV?j z@s!$qn7F|g+V9c5W9k2U{#0Nwt26s41Zg*VG2QlMHq%du zh1eE@p=3lC9`5BWc!yBF^VLVertXGdON`XGi621w?#@$!Or$J=5;*h$b!Tners=@>}7gtnMYJe=pDRC+=v z>h}kv8|gyQmR8i#nIh@Yn|`}{P1E|&x5}Oaa z!aW;@u}YI(%y={HB*sUA%hoRZ-GTuuSYg1T>h__w6vcN9{h908jnERwnC>~sN?i}- z^|8-I+o&L}C3cRzRFD6RGv&6EHlx8M(lcE5LDGa)>_RhYLwgVAkvovNT_bLICS%V1 zj?tW6hdD1^g1q)U)K#~GLc2+LXCYAM6YWXP-4)kS4kdk57NiVwvS z*}M>%*Hf3qBo~%mTL$v0-TB#T1DV}*U2>Q_A$E!_&}I}YXU)=?02t1oYVFCktZdHgpqjOq3-eDSv9 zAGUb0tXkq=Tr%O=uis+$3YFOM?`|kaBESFAPQs|`2A4R3E78~#O<&S)9nPl(`Xc7Lv$mTT=8QTf!`d^0G zk6c;I883*_xN)0NUr{xBn_4+3U)`MJ0a*w1d9bg7-U(jZ3I}j4ovV#|(ja^9GmPFT zV1)e*OaxE(GL~{Nwn5CRvKNy*RYTB7U7mjAttjz7BbSw)ftU?ptZvL;)-oWB=}irx zy{DAd7{_4#>mJ;)$2$ly_!~0wOTfO0y4kjuf+nL4%NO-SS%Fz;{9pIQV(Lzih_BF& z`fIW&=cc@~UN}Ww=#XcZF|o#xts{@V{NYr{s4jt)&kEMMp(`|9F2PZk9a-haPQ0PJ zmO2rWVNMr&);@1KX4Gp@eqjj|G!A4}CjLO};&y@gJwxT~PCTxZ82ZZ_g#hXl){S|H z&8f@q{yBQTzWJ<{oBx9;qn|@?)mqYFHxa{oASy}kR23=Y5t|-jYIGjm?;XUVwXUFS z*^Fbqe#fkf9mKmE#8%N=(5{a;b6zqJcMqjJ&1wa+c|Qnjh8DxEq+ZNm7h+DdJ6l9r zLR}c;1lDSY6{h^aIEBERCq4${%o4285%@C7S=pWL&6LyMpqZa7b*zpC@wF|>S=g7m zUz6~phZnHJtut|Q{lTEy2ej|&58K>(FgsV;8T5z+mBm9$8~X(29rR|hzk71W#+R5t z8H|wU)@;xE5q$g=cb;=Sj3rC{hEPj)ZnkU&v}ToIVM%Xp=_&hg+1BJsn)E%@+tA0_F&pk1Ad4) zT~la0k-RZjo=hpyQ2jovqX9%JJFeZ7XmnaPaKyn#IivyGuVYG(oZEFa7g@;zADAQ{SH zUQs7whN$k=nLGa5mDfwQ2<=L1WT*F_V$Wc%oUTOa*eK}O(uMM>heYSygL%ax8!JR?t?j_{N*RmMcI6cY-MMqWPlD!QvD#IsL3xUzWCVb$#?Vf-2&9x?J8+K7JQ(@j4>o#@YOPQ4Zk&bc#rb5A~M zNLLmW?!@eSEQfaA8W?|+v9t@DA*7yiqkCUq${HhiFU#!5>h!IKnti|)-+o;CCtnSTL;<@A&Z$6~SiI34IqmDV){3&4+82Oc1WP@4)(7mxUtdW|ZY8AU{ePa-XBZ zlvl(CoT3zqR(`<`vDVDSW|}Db6a&TguY!`YThU`u!Q^mnUi8>p$S~^0V_W+0?BCkp zwOJqTIIkGg5kH{*L5|4o8M8!_$LKR5m?!QI=1*?+;W?3{=Y@WRE62B>p1J@Ql^4OH zWqU9^K84s2!`Zif4z#xlW`#z#v2689tlD%9J>7G_R36GCv7OkKOjbht|Li#0Xyq z;qxfBloQ4!Jq^TKePilzApc1IBf)PVb(QUva-C@|#4(&e-Luw$v{eUP``?G8`9Cmc z{STP4ECVw;^~AE0FiiZ@5sVXvlTZJ5e8M(R`n(n+2Nnx8n%n3+rXKA-uLEW650t#W zDCQ^hWO+mG3dVznv1VN>-Vyzh-U)L{C8`(dC{Oai=5!jI>CH@VA46 z7-<1)=rn@+3@U)U=bP!fJ%l#Poj{#r%>skEGPSw`Tt_BDd7cYo+%;$E3r6C^m*(8D z{L(J-n@L`KN%iEA8HX8lv z`>_t`4lFwmOj^}vs3;rCKh383KDJqyJgx>>zh4n;U!O%A{}ZBK?g40dNLl44=Y;m3 zw_)34cRo)xg4L5p)v01MudmoHXcOk4G)xrA7l;`5?JfL|qVw>_`g_B;jEsoPBB88? z$OzBp+!D!3DV5UF)RIa{$~PI=dy~i>iHz`k&Rv>HX=!L^i7KPhaIdr7JcozSOp zQYd02!a2AofLR>#%i7x!_q;l)b~T#ZoID0<{aVSZGq>o#f8zLw>4j!%KDQX9zM*nY zuF&?SdW>hVj@v19hZyv?lGeIG-tEUC8e*;hbxu;?`*;tHuQ|zwb<1MRGzsvO?IB?+ zMbLYe6I4r_r9VzOpjRqmp{N;y(RK;qwLgZ&EM*KNDb{~Gl4A_>>AXm7EX(p7rXdYu zK*08|Chtvg-*XOs0OQMTc53-_ZzAUW%oaY|Z-tL^B;i^9NZ4BUjx2es&d#69RAS_0 zxTnqLJLCOqXUTS;tg9#Lg9-*ON@2ZP2AE{hO`kIlo)6=bE^?E`j)W`3=ifOlC*X*n zTPlqB*Zrae7e29Gt#0BQ_>g!`vcyTo6#S(e+KBNmjPF*2# zV$XR6{pl>P6C;dlQU(7L*NBpL5K)-*k9t@LN&Ai8#P_5V2&7x7!K!9DaK@6jpPGUl zHPKv%xh~&sWe%-rF~oP|F*1B?Et%+Pi(`{n4#Rt-a9`Pd(rGV8+$CO^?>mIAoPn-KM+1yR_FLy^><-4Ek}@Z$rZd5|)|z;(xJ;TJ9i|2T3QNl?Zn=W^OB=0aY5ZKS=1Ok;H|{r|t6 z6BI)}esBcKDAs4b(-PZE?vY<#1gsPKZp##whZo_W5yN^dC>${!P2`5?^QifxNEpjF zYAM1z)(fSXcZN8;b;3sv-Kbg8Y|fE&hOGNwhLK^c=RIHoC{>EHJXrwW{oe~3`S=kT z^fD)I9pAa2c1i5MJ_}lBmyu4YOjUl3#-b#~A90H0MeZ7)=iIqW>vRRVG8ZBzGnPCr zHlY5;)41F}YS6DXfprowFTtm~)MZH$(TdfEsty+%Yd0H)?N3twzw79-)sr#UR}W$s z<5NUr1@#$p;uVV7zO_1=SQJ{~P=YwxkJg0X0X@jMG96o&o#VV}zH;7wYlwe5V@&L1 zyV46Qg<@hZD13a8I?A&i0$EowxXgu$++`un!%C1i&w$Ncr+9x=N1Aum6pZ|gNUyyz zZ(F#KEbyNTVhisQ|5*{-58vzbk^ca_dDWZ&hGyA^JsI-7!*5N zN{vpAz^N12IeJAvo@+@Fh2;&TRpKT$c=0Az!`_PnYh+oMco7LaD+50aVyVaX>jakF zroDU46YWiL#C}>mS!AgM{(t*u*#3zy#Gcvp9xA{V;737~Ol%?ypQ=@HM3mx(8@I>*SPhoDCN& zq_Ie|i{=^6K!usNiO9-<7T*=j1Geior|lxfx_(%9$|F$*0B@j9+yn^7mdEPzjLjg! 
zyvtFB*c!(6fCt1FFYr7kGW9D-w2^~U#_fJ&kwtCKKH`fqGDyKgW1_B`Lj=35$Ur>1 zhbqhRVQ*R9tE!UD+9Sm@chnP>n|JN zly^d+l}tc#{uJ6K8A4MGM`6?a*K}uJIMq&{2=Wg(6t1Pz-k8lB28+nxn0W5W0jAF{ z-Xjz_HdElzT~55e?;z;{eYW3ayYYexwAIR;CNb`goCP~SU-yvx%+ zENGt0q-H&?WQg%~!pW7n9~is*m@Ym(%{q*ZM^f*qz2s4Y6ZSiNrrU@;4(|QU4fI{6 z19dgTJM;;W+G2q|r?0pCcw^4Gl17sV)>$7kwwYA2Ztb9XmJ~v+QMU*g&i2q#(!J*d zV>V5vW{-PF{l6mSElndmEoxvATtuoDOOpP+x5Q}46596OirUY9P6MyFLjSOkb{Bmo zM(cM_?+Ou6kM<^Q3v;Nvi8*+MuOj{ZdSG!*5^Pz%(QMo;L6FA)5y|$VkunR2k69h* zeHuvJi6jKgn+-51hax%AeBS~APHJ$4A8OY~{TNy37mK07CT&bT?@i3|e)1u2Hu65( z-*b^dRq!qc_+fOIqzV=2R;Iz_S==O^n+r)<=_~%gY34N$Sb~pemhd_A#H!ODiNnQA8H1m|gC^^PmNc&9eN*Q}>b1$tNw1CcFHuH{l#tT1@ zuJcd;nH7I%uigs&lig2R|L+57V(-@6hB@G-G(aNFwQ0SsJaxRCMKkxXZi)2^TLuNn zu;M>!99D`Z*CvX=g}yt)wivMK_z_yUPz=b*K7r0M|QHB}39#*8IiOQxf5#S;4EAlv;W8N>63 zx{%vv0Jrk$h`kw-yoMXZm9kFf4VT#>S_Ab?h(k@-1W;R;PcVKtz}uZM0Z(r|sFcs9g$EoVB$MSG-=z!O zk9LxK0ahSae~*}T6${kZ`DbA^9U|`z5-*<$veolB^)h#4oy#I*XxRi1tGY&?znw^I z9mc`dYqN1Ul4+lU+uT5?4G8`oVZBgl=vZDvdsc|DnYEg3-NVlMwVhnour+kttL6;O z-Qg}YJ|~xoj9};FiD+h!Oux)j23xDC82@!V^(uP9wU?bG10TxCvLa2)bhu09Oqo|I zC!g(p-jKY67RGZ+CLNblz_z%D471;hE&IIIFYF=imv7Vh08PvptmiuXe$tT78l*n! z3NNM-PbV%M1MN4A;lgeQ99u4cwnL0{SK@?8iqcq^qz)eT`ZT61n?~lbGvHA*(asn_ z+@f9i?sk2aCsPyHvuwCX&>f*q;($cfu83l zov$PHNkPm*dYX6JQ_mUYv$<8lNRTu41|Qa%ORGip(01KwYS8_TS6Lzfy(`Rk1A7S~ zbMiYe>pa2-2aUy@qAcrgbDiwg)@Is9KdIg~2I^OhH~U!eri&pm!RK^LU4tYxwIyV+7XRBfBsv~3)xk}3Q znK!74v5;n(LjFeq^j>Gr>85GmGwO}N>-KI^#(G{Z{5wZ{)bsc%#+$7yC?-ZLE)j3j zD(dmo8k9Jun`)Ur(NnfZI>Y?#+o#Y0<_~=rJp=nruWWT zp5Od`b{Qf^TJ7Bha{|z38S5i*%RdF1R*w7#6jK}7A$3XDw*x9N2r4DTrWmM z2G?@E;;P)Aolzvdu7zt;vLo_ZPVh-w9m<$5*=MMNYju0bpMNTj4@%VVtff5GET4*x zBL8qor$?bJV^a0LI!9Z}9BENkI29Sv5v-r|nQpbbOuTEIDVLxRa&s+-%5fX0bW?_n zZ}idNb1(VBchXQ>J1kCZeQzFwJq?$=RM=Oj^IoQll?QBvqGjf{Y(R=Xg>*k4o7fdKW~$qh-O}Dl_XeYD#DcR z$D~bRB)bnBCYA9cG0EX8F*qVcR2GfEpi4|sa}Wv#+*8S@$C6lZJ&g9Qe#sXsQKN$^ zzjB>d8OQJA8bPO}J$6i3OF!*uA`+TPpnmf*H&t{tY$M}P>CX&O^^a*a=C!K{z{Vg$Qjif%80v59VrS@fP#gxDLWpzUqYM;P)XpG*4HM7#g6xo4{diMcaG737DQ z-kCz%6&Z7R(43FF(L?)Rv-v~utAO9G0G2c3i2D6@?wX%8xZJ!+FR3urMXU?45}yc> zTLOu_JklWNAEZmnhVg*?xjS>^G0EN;+7Gkb&fT9vGqYymeZi3wu?{jtdwVF-dQVJ( zj?;o$+t|)w0+;w%4}99%=%3~iB0`svz6y0{3~>Yl;~>)gYZj<$slZ^T1Jp>4##m)@ z$fMQ7jd_onSl&LQ){28`k4Vm*y|j1HG5YFyE>#(_gZ3>ysn6(5T1^-e+PzyMSbEXWjx=GdfY0WxeyJ3T@Aw z;S`qV6Q5N@bO2A%U!xhr!!3rFyAVbOuC@~~@xLscHy$okF+Js29rfB|4BihXLATxOcooI<;EA$bT&qNK?5fCZC`Y_-nZDZcj*4854yYn8s_J~8VG(*NvF~gv# zC&{CgeZ0`$k?tG&f_872i0MVj(6L-dWPXaG+rBD(8SBKTWm@T{Z9?y10TD&jByvo{mL+AdTTT5i5f=-b{r(Va@&d5!#G-!YXlhL0xu1uuzuMX z^62qZvgWoU_M$#lIYSc-!xUhBPz~*vx`38!9ErW%F}zrLJsk|W!F%CksuUx_bk1?m zoqU7X=N1z&#{s(Y88XHM@4eeiI|KFx(Au!^J4;( z%x)yP>zvu|Z8ZO!tFKG-<(j9qXvTD7c39G@-Of1 zVF*5@cbf;!m=c&!Oxu=Tpk99Uybs&ET`O0>g1%hF;=D=RbQ-Dp=&k&{EPd=&mxXR- zP&e@CAihTn$RqAPpE`Ji`i}{uo!SzBW7?>;mJW7^*>Js|y*W9f1kU$n5#2emhI~oa zLW{V`jOX&0R_lHw^&i;Zew@&aPJkXh(Jb@gO%JhLa`ow#H0S3BVs<8<6ja=%DUJFd zLtTN-XIfx}2aQh`a@8s0QSdpQ*!qtmKGrKpNbW`6Hot?@O0We*S3P)j{xxacI>4!| zVGN)^8BkxgiEoWasSekoePJAArmzh7duvi4G^IZAO~MZAgVd)i zL2zMOAyKk>L*0HaqWw!I<0$rQ4jD^h&@5*N({;g~KQcIQh27boc5<_}u3fUq-Es8gHpVqvd5Y-_ek4(GE`~gq z&J}7XLEzl6;L>)Rn0?y9)h8J8X1aISS^f9kvh^jDe6k`<) z+@l>w-MFGX+0>-$G4=3Iq`R?5s#dpCXz8on!|;Uq@;=VGe?>;P{+4b zji8=yf098v)*+%bg&L%{@on0sbkrTjJ71#1cZ%*H-RjdIC&Giz304sLRL`R-EKjyV z=!BnIzOdaXK+>ryvhTJL=18#~&xmB|^W|zwgu4j(wApi_T9dIY&JOu#Y5a9g8Uren zL2_7uc8+~a9tnGS7uyR&?B;J8CZ&(4Gy}YbUl7X`|47eyEil=Bo%9}cX1;GH(&l!M z7QL=vTsnY!7LEBne=so|Ks1Ls6zG)ePmKWnVSF}>ci#zEQ$6+GpBo4!R)0i%^ zm&SA#(pUK%RAKuw;+rxF{oDTVf3~V%&W!iGTM6Lyu4pBKkzvUgmW^&QUVsu)*4;~&@e#*DqAO|YZz2@hl0 
z^XEGmH0x@~g9Tlz>r4@ZPT%Ry&C|fk^cR0ckNs{GYlN3nSijZGd0?V4NGqkRK=6JP zM3!1(@DXJ&)|SAcEw$`>|I4*roQ~a#yQo`F3n?I5sK;ItvhSEGO3oRiGVYhiGd5$7 z+N*>nmXg}*8c$9I7Wh!S0L@aZOnuP=o?+GW5a}7{# z)-Nu9Ko8a5PKB{~PT={5GwEAI{b4&7Jce=24f?pwj{>F# z=);O;Wyq1A#$R~MI<|IhWQBKsy zWgfpVQXVv)TEVyXu6Sq?^Uc_=<9UtoY_?P-UFV$ei~DrE@?|v2T^b7ymh0e}Pa+Ut zYQY%b(;#t`Jq(j^bir%D^I>vmxp5W@jnD!wS5p+($F%eYrmNKRWOpO$;+h>oZYlt% zmlV^End^cTIijJwlKADJqX zoWzBkzfU145Q%5+^tq(d?LCbz@g&;8nJl~LK@8&Zxwddu;(4Z=PP7w+A2*(nyxr3w z)N3pp=vISA_?0M_l~9=zV{nQ=9t}yUrys}9!2_d5;)9}_G&s0`Ug$o-7$46C`Qw$L zy_xlrKAQ`V!WiS$dn%@v5CE%U%3ovoYz1A?d4*-}^}Y*5t`1O*_jdT0Wl^?%7m_e| zLMy*zk;<}J;Fl~4-KC{OMQ<$jSjfOSSxq#PPU1So%F=@ABE&49k$COP8@D+#Q}|;%%d1^l58zx%oGJl5O_4xNVcCD zSfYjL^BFVJiE+w`i})q8S>|KYL3-tl9P4hJ#yt9&^vZT;$aK{N_ZcyywUK2VkG`Pw zQBMdre+IaZucm`1FVp%+N7`ZaO^{lpPJ)ivW5Mq(Vq9v5>1J&Axvh@+M5u7uGn{FU zsumO-zC;Et=+K)#S@*ys7nD;oBB{;AR4L5?w!URMoskA8b1j3^-w)vi3>c^R$R9G6 zmxclpNl-5?BPR2wK=0pb+I_8ybckpP8f97c>{$cmtz+z^(ngZk$z~ogCwx_VlZMo7 zW8e8mocVA*RBWL1;>tv#zFLBA%{Kt&;x6j4s1_Y`63N>q)p%jkZ<-_!VfvjNO3hTp zhyzYEQMnn&k6LIxd=mAK9>Bkkg}AZ52>)#sfma94^Gh9ui0XWIE+g0vl?+8O&n*J} z?LLX`&)Ne&*$ove6>)lgH0%zrpeApRqv=-ymj6tMokkcajB6CMdyd90$20I5$pq~! zSIG6&$+%@-A+FbIft;u3`1zN!P(dml<}ZxEPcM#IcjVtCf*Eco(G`jbfkG^gp90pQ zlK3EZ3J&fpz)>y}KzUjceV{grIE-6@fl3$1`uJFyx>1wgAP4x=J&6>ZiGWdY$#Cv` z3Ljyef*JuWP<#@g_uWcp)SHe$Z&zX9n*v;#EsOt{zwm+5DLipB98+6zG1h()2K_z( zxzc@H>yZf3!H3b+VJR@OEEV#ehSJ;O$-LsJ)8yM%6YSYrhv6q;AvBYc*g2_m{Dl;_ zu3U<%++x5+7z+26oIqFkS&(+C1+J{ogTU)ZMp=cT(T^0g^pL`mJ6foieuk|7n1ByE zUBKq2DUR4XN%;2tAbl)fhUP1d!%UZx*2ec-p-0oEXIDS!P0X6#vww?`J>1-IQ)ATgJ0 zUpp0logM+Jo{z?ePnEPR{t;OeRe_t%?S`M7--s}_3=SC&6=QZ_$R;5172a^~A5X)MEkoY%zn1n~Zy0U^YUmG|MKvO4HYN`?{7_hM##IVn$jM;3@4#~o*LacQJBv={H-L3JMHvAaR&du<#W zJ_T3(x0%TOy2KU9kD>>cx^oKxEYNv8(;`H(;Mw&ih|R7i2e0~~tc)(c^wtE)Z~uYr z)=Vz?Wh)u=@wLGEyg!a;O2Eeb(R9H^5m0*&4hQ5k$xY=|ApRx-b|k+bo}zhJlV5^% z{t+}_t&xWpM*{KV#aJ9*c@Vn=ad`byIBJLA z7e0AAnls#;43BlM3Y7;dF!E46zTXjtBV__HW~d468I#mV(;pH-ETOYJ8OvYq!YkX) z(gkUIkQ&8;jBFF#`TVsoWUoK2Ykop)=jC9znh^F+S_B^&Z1Bz$DcJV21k5)D!*G@x z6qZFX9i;*tNB-jD)-Hg(xs5pBlz>$(Q-xnP@}$piJM1sbrM>t#WI%@pp;_)z-6=smTNH3QL=qmgQEhrj$&GLLgbxwufYY(GoHU&&&|%nW?1 zphm0Zq~PP~M07MfilcjW0E|n4XiO%#JO2N?rLwc8n=}WMfnnDK_~~Y69k`=|o@~g2 zMiUo2w&X3hTw*qeCK_}5ye(mM(MR&PWHnrNuYp&41^C%23%_m-!!x3zq2XUSF3>K6 zh3o6_&FDJR=s5{qI~L)BW7`Z+}8k?9}kW=V};XokQ+-t>mG-2&evKzMJAGe#E^V41yk^T{&Y3%1^O0jr+IfuVRoM^3|%+^A*}0f?zZiaxZ(^gSiG33`=2LOD#!TY z?ivUfQit6cBIJCZ7oN|IW_PVfF6QiRFt-W;-Fe5UJWBv5Lp?+<6e8EsJ-Yiv*p2QXu@m zB*ZE=YF)dJ=h>bOAteaqqWogDDI zyaSQA^UGAS>Dlk)G|Ddq3$Mz+CfyBS(>#~vS0dGG8G|*g9?+zC7~b@>aS5>{BA2D{I;kk~CJL2X+e z+8;Yb5@O1zqHG^$(HRKK@2-G=Wp~Mf#TFzyDTDr7B#t_&1TV1+;o{2OyyxoEcvER7 zZ9SLHdP1Yn5OSe+_IO;oYA*CmQ$=O*F7Bw47+&6t@O^ z49cO-CKsoF$pDR^TBuB^!xz7PQ^)RUD4IG2FRhFw0aHV0LV6i};)V(^Syqy(6!nVT#I{>u~DPCU~f^ z9^@V|4`yK&9=()Eb#_o9leZ7e*T;eUoEi8jc@z9MaD^mFnA3$$)A90bG4OcBI#n5i z!@a?TyedjV+x{#3VbMeU{R7M3QEE9J4{yPP?7b}AKaWVZEAU(2uc4#G4|111(s1$j zuQbH5kJmCZhg{VdI{E2N7A8(7E2I|FF-x_<(8+}?PW}(n;|(!)f-~A|je!x2%yE*t zOTNDG;vQPhhuH()$?R||{POx7S=OF|kFPir1Gm%g`C=*VUyy@~x9mnuiJe6L{Sk;c z;*BlCyWqG-9-CWEK%~zLS}A^w#x;!Q551Jbl#N+flBYsqSFJ_ErNuPbI1GYsM+j9{ zS(B*a4bZUV75_tH5e{wLgIRsHU|l_qUY~3NmrH$7cm4$Q-!zvD$yI`U;8xVwT8C!0 zePQ@7V>pEdz@E<`ST!#dS4zym2Ok`u!Ia>2^9m|0r3Gh1hsczo3RP+P0)ru+g>| zqspXl_l{{G{_u&Q~Cnra-GyI`iFrq3P*0blbX-qG+?7r=VW#^`0c}OR}MYjc>EVraPe4DT{U>mJ? 
zl|`q-N?>^V61+KBgkQJ&@?Nt_pzGUf>O9#Uydp|KH+47f`$dM-cj=-Ud*}WtRKSbb zo6tBt6H@%PQO6DSR3SeA4i+XLsLOMrKW4&&q9!gac>>n#&xOga^KeYbW;jpP=2G?l!iQJo?LB(C!nWg@LZv2`9 zt&hTJzCkMXMN|4QsTq`mkK)>U%}}_qhaQ!WCg+F&wAPAaZ(cE|JY)U9p(zj$vklDF z<+SW+JPwa$Y{2%Mr*t-^kTi>QTsnRW=;j}S+Ex8DVtp08JGugPjC(?CJ}$)8$SmA` zaz4BY(8qNX_kn-HJ7VJ!i`S?1(V2@Qpzd)p{H_@RI~JtUp(V96gmuQQ6M02Pzp94+ z41LJbwX2~rzZA@_EQLV(B2eMiftF1)-}*%p#%7x1;wuaASL7A?aCi&6s4t<-duw4a zzZ{1;Uh*fm6!AJg+KIAUE@U!4O;c4RnR50xl{>H;c-MSLj)|m)-bdrPl{e^yi8qPi z#YW)MB0!jY43z^ap?~8t&`A_SFI_Xbe}4g~)m#qyd_qv4`QpxgGl!>9QrOF#qpFJ! z!*=V85g7a99GNjD7SuAv;maFt zl4qf?c5@21XCM`N?w;kH_x?wxi|3%8)J#m5h@|#a>iF}cJ)N?eC-T`oAlYyX&5}y# zm(jAIA)SWbd*|Uw!&3U>5zD1L7za9?JD_3uNl@$ygY7dG!%7z=h*f*bjgtDq*_f}v zCs$K&%+e;fx6zX*u)OAr-)uK^(+20bZo!T1nM}Kz0|{qsAo|)Z`nk9g7iO$L8`=nC zisfK`u`Ie9d64`gnowAL3a+&2fxgOk^jFEp6@`WL#r69f*TeD^vbEUuIu-8qALOr> z$#K&SKF|WkOq{xUh&Gh2f&Y?i@sjF6{97Caf&z8YE3!!#cr6XR%f{e;H(4H2Kc3XS zIxR5Ynnl}_UXXagJaaCSi0QO9wCB-kysMFb+mFc;xdqc;P`eN_^+GUzL=L#^n~be( z$I(jhHA&|kV77+_goGHO)u|f#*ZU<6tv`v&T2z@wUKQ&{GXAA)A(lxtL#faWKmBng z!*;H?-cFg0xIP&Uzl_9NjQ2nB{9Z^ov=j@Eg@WSC82n+}MuUQ9!M)?j{GTI@n4`4> z0+wz@Z#fM#{<(x3aZm-rdpTUcn9}v#qcMJ02bH)|2)>KTVE>jqux0XbxZzb#Ur8Ba zR8Iu1^p&N>QC|q}JrVO3c~Ps7c)YDM4u4%%hsbmeGmfvqJy%Z9S(^%BGV^q8v2Wvl zO|t}-@ew%fa0^Z|S7F@nJ+L$_jdfKufs&sal>eLuuj+zeLUk6(4Ijk>`@@hG5CDPS z8(TiqTr`73DdGZ+iR7fPd@-(UbzEZgK@dDjFxsdDJRn9o_U+K~Xd(p6Nt#D<; zF!dVM3@#b_L4ArX_^eq9erpcm&#FuGxN9WHg~gyqfFu;juBXK|Gtv5tBFr;Cg6Vs7 zV8IhnjIK-L)nA+8(tpvoRVEVGjc*1sVJghFEaTr7D zcttXfUGtpioR1>*ni1H4uN3wEZiJ0Ri6CNslWu&Qj?WJg49hwRH?`jo>4JLvd1(i^ zRhok%H-Dl_ca5X|*_yED<2JnBpAB1QZoup-iNxXEb>eVzGc0b@z^J?7+-SuF&Z{^N zYr|AgdR7?u;+YIJ5()C9-IeZ zYd@16`H5s~r3$W*--tVfr=afPUR<@(o-V((5_dU`qj6SMxcq0LaM2Pw8o~URdAUVg z%kKRUbbx8$+OK`#6qtHJ!3Twh%lF+$sT)uN8oRHp)SAW)l z7h{52eT@c#OS!o6%W>@1X=eHOZ6Mqr#I=*7!M$uZag5dlp+O-Oomzu=KjvYnbO7T# z8{qnu08V+KFO2r^hJeQ>P+Hvw(rflWky9R*SG$1u)ehrQmSt={U4{Q%1mYRP060~g zhO+%0sOx&3cN@jdhz|$hdRQ9FzhR2;S$-h>ULU23-*b2Mm(s*TuZiG_Ihoc`2h*Aw zVd%qXvbcRN*r%J4O+~sWNi*SigF4-{B9C`?I)To($1(*YL!oJz1o$}~1}(8&d~KjH z-q6^93p`SBdbJY8YIPEvypbv#D1xP9eNf*&5p<6b9AGSn^XD%LZiii`@2*TGpZYey zea;3;kA$JjuJtG`Vh{Eah4?A=9bHozgGqPH&~%{$=9Df&#dYB@a)LYEu`(CKwCmw{ zf*icEXWHaiKm0ajB|1*lq*hX5ghbcix6ElIf2stvSQ-Pz_Obi>1PwU8s(|<{v4W-3 zPw;9>N^oU-Dh%ox!QXL(So*$`e>g9Lx+PyEYnYaEePaQ*3QaIkdM49>R)Njg9XMMq zoGxM9ehXP9=sj*CGt~~meYxfMcyAw_+tLi*er40jTPx|l{%Wv%+Q+H3Z>9n7LZD@T zGhVPzMV+m#d9GXj;{0cO-7RvJ?Xu zuhq^m2HrFt1-HF>u<^!wGQVm$PBG%(@A~bu#Lp58D*SP(p9;LGt%RA=h4{xmiT3ZE z0je!i(0b`YEKpP|dE_Q|ZKA5i*$4{u(?Mm=*Uqg;SIdYy+xGyM~3@=TBA4>U{ z(lsC2|0|#y+dbh%%@+axshG+L57P#57qoIZM$E(2Xz+zOq{FQg&E{Pf=x!~-t^O4_ zhV^Uilvyr(s+L1_p0C25F_W-BcQ<}9Ov0BZ)9_%_3u1gjpB{_-N1ctn5$mCNINHCM zKN7wf)JI9oR3=aVgP^ll{yJ6{Y6?;`Mt`(nBwj4Ij_aSUUdv#GjXJqBlb(TAJTuw66~12vA~_-*Cz zE+U^FdsiCNM6S~fCw#!*S~mJ_d`<1nW}$IpB0lTz#=TQouy|<=8eHCk%ZCl{kzf%g zGqaml{gnojyj&>vQV#Yd&8HK6@?kOKJ}*2x2NZ;dx#+HD+*vyV4xJo>Z%z#m-}j0b zYeb2jSTQ_wI{+ms4`YXoH&=Bto$l4GfYLHO@N&?Bd&A*q-Ci%;>j!#uHK(@q&{%a`5bGuJ6-~BG==xgEJ)uLdm!x)TPdm6G_*J97u zpVWJ7GksQROUJ0EHjq)`)ABk5F|d zUHt8PpVmm(;Y0NhD0nA{XYPdIKvq5qMMJ=01BWX#eaNa4U+Lx}0;1~hgbe9z#W(Ge zboQxLFlTooOk7li!R-C!ED4kI#MAvT+`Dln zKvi9tS6h|ocaMUL>9ugulg5-iuuC7T{=@KygQkptbS7|`Q`_rAL0bfr|Gf8twW zwrdhPeqRMAs}i91s|ZevD@Nn%O_(UyjEg-dlP76i^vt#Q!rDa#@XoYYoYSU;n?41w zxmcQ?xO5!;kkA92uYsuDH<~ogE`lXjO>kF~GcLBtC%atN(^v_f#9dju|jBnros0&$&15=L1>LG#>dNJxk<5LV011ngVOa(1W_y|fdovDW z%;h)y<{t}aw2UM^ZySr!I@|f>wUJeA)G5f%-Dia>uF@mKNn(%wS9|^2WAjDf0M}%Ld|Bbs$ zqYRJXn$`@m(dsB(u3k%{TnnhC#Sx}|r=c^u+uU1~4c=bI(c^73EO!f}nSt4m_P2++ zbyblC!J3@zbs2m=?=iU~s)WD8N8)MuLKrwwkM_gg`A-tQ#A#az%}~){l# 
zRxcVKkBq>Bs}|w?xEYZ1GZU(GhRBIGMlj~o3Cs`OgB4PnVN=&JI5Mx1Rw*8V6pKjs zbgK#yEx*|lgg9+q^S~_fA8iJYA{pf{p z5g4>t0wtFRf$2{X*qL99zD{|#pf(Lwj);TQqwyH<$q?TfO^1U;F+^VFCn=u&gy?u0 z64#IexZ_J6_chfL)kd_?R=Z-n%kIdMivAp7v-9y=kr2o`!_=4%-p@=58!|o$)9f2@ zSDQH|y-*=Oa>wBhE~4RwkK+3(mPL%nq@&J%B~Qn?@wLg3Fh^ztCja)rR#_L2JG>li zTWZLgp9NS`;fUX7Rl>Kl5-iXQhdPtxaMs)&OC;lA&&?Vv341?}j&mCW7Yj)Aam;n}VC;B2oTdAo!$h#SQn4 zLbt~;jA^*Tdo)Es{mA#!UtSvag{`3H%L$R_ss-+{7#3b(-qG|zXj6>>O9^M3qCbxK zZ5M(tD+RKQ{n+eZ51$4{@LLvoqjz~24*Hakk7uJ{$w7|P9Z10P$>FGyt%~!^6k*cV zQ6O?x3idn`2bCXRNs9M&a`pKhc&}0p7rK(ERIojo@7NFHGoNth$EwlIMdi@7wS_1v zoaZZlSCV~J1N=_L-kOk}Nr*!(n)aH2lGS9?>x%?0y-3hMUCy;GMrG;OF*2I{525EpQIueqM^j@J&}~>XK~ua6}#|^v2_l9V^lBdk?XAoQc=t zpD+efKFdl;(ow7%pwZP3;{b}^I>tqTe2n9p4=64 zr1w4JAbX?>{Cy!u^)^Z2l@v-Bv?Zh0@@#nepFR$=bI~{ZF#UUAB;I@82qwvl?N+1! z2ZXlV&cy zH9rQeyN8L1YYX@!THz?abgH*U6SUkL;FEzK)rt!zs^dqqIl~YGPD`U{!a<0NjD&uX z%Va>S2|sNvBhM3#k%n6(^xtw5oNrPO+b+vPl(Gw1*b)xr>yvSu{ZV1mWKf^nLwn~Y%I}cogsf?F1PTqcygBkrxU`eth zicjm(}2TuIDS9` z&Yk`rMduxl<@d&MB^4%Lc83CAfG{27)%XV&if%{9(yF^&72tSFnyRGN?(P zC4JBG>2J`f8&h#i(3jEmERX8qT6)Xs_jX^u54vuFi;R>d8Y zj6z-GYLL0uimwiu)78eAu*vHbmv*iNpG5m%TZT0xMxCKb)8kKsN%zom3Zt|^2DALO)O{Gzfi zdZ=lQ84CU^Wapm}ObD!ju!4NZi*<%3xd!wN&j(NABJg{sNE_ekQjvNc;BD=KN%kW6 zN-PkjoG8Ord5@?{WiQCqjq*(w7GPWJYpM|24%<9@@L9`ZdS{n4ahvD@h2A0b)UY?U z{7l6FMLE1==}uPuR)awKTJ(6XhK}P7Q*aX@8U7PVX|)S_Jt>30$Y7`#jmNo%A~5Uw zWjc6oh#vZTkqlHNk~ihkAUX64f1b|uOxgLl|Td*g_8~Y zU5QRk65ev}Cso^Ju+Xj*(+&Fh(-*YinrUNP+0rjWt0bSch*;ow*8RBhnup6q*jX%! z($&l}AGbIk?F!h2wRRJRTI|N3OLn3e(|GE-YKY&F4szy5DfRoelslVvo^EJmnkf+17`maL!qbYIN_Q%sTD24 zg$=1#YH*Q6@?pY(%| zxd>jB>4Z0q5g30{4A;4y<*UvshvmJgc;Rm`eDE6ov0@N#aurlR?_nJM zgJ2mUg(sA|V5PktPBUu&zNsVAO|!A~PXgY39z)Z9biif)8ni1~O!l3RLf?%SXz-F3 z^xJ=yhH0~HVGa1f%`qKVaxWGK^&b=Q$Yq55BoH<+7HqfPB)g!U`A~t3(Ut=ardD)f zMG5hf`b>Y>d>|V8nWo%01xgl3;N4T~v!1mW2cA76`>)(1auG7{eP|9onXU|z@8qJ> zb$_D1wHS*xs==}3DlFOVi_$Y&aOnJ1L&xWpD86Bc?wLIs!egp%)Vu>s#=_vk>I9-$ z5QR;zl^`Lt1@{a5BP}~pVc+uz$iJSNXTP)zmd zMUe}8vDf7uNw!-=(^nqGzL!?$`mP+u@5ulKwD>ToTwOqlOZw?VuUfdO`-3h?569{-eKK?`6@E9BU@5!f{EI5WS;y+ZIK&UG)pY?c zkZtzuyU)+koQP+fZ8^QxGBo%Ui9&f-`B(Oc!MTM(7+=^zqpkEHu9gS!Om|B3oJp@8 z?Iv@@4nypBGaT1eg8Mg(f*DDYP;gC(ZolD%Yo8Q@nO6tN(5zv+5_4?U|H=Q_ z6pj7Gz3^Q20L=7mMw7e#aG_!&3cD>vWfeoZ{!kuX5mrOR7-{qid}p}((R}(nDhBs% z?BGwy%*Oc;hnoN1kc3hh$PrnDtIz%=1%^@R{aFbDeYTMYpO(Xos>?LeekT88R4hB& zH^a=T*POI;BzgNQ0z?kg@(wIY!f!o(FgLXZr;F7??Pdu$f4rM-;vbk2Xe*VBF_~YZlD~8wWdjGBJm4?lyxiU1gkMy9wmB&c^+G<~v_6 zf+-{s&#E_p|9dUC8~KjEZ@Lq-n9any`L-}pD-U4@vp6Z-%tfs+z&YF7(Vpp`TiW`; zQl!r{a*TYzmPv;<$trlHC>ZR*G`LNh-b)Kz{&U-xB!U8oFfsh^E6 zzAPY>mS0GMg(pmq(FV)z%lrx9%}{!EF^eAgz`}Xikj3~$kJAjm{kl6Y+h~t@mD*HW zrWmv0OYwC5W?;jESmhaxZnBk-xI_h7l&rD#(Q3wgJWcYM*3iFm2bfv6!Hy=I z0S@YYCckmAg$raEocv+!BNh{S4fatdD1}wrM&SUe?6Bots0-s)x*W}^YP8! 
z6J(sV4%qnwcGCw;3xW;o7rPp@acCl5!D5bpXnPWH7i^qEY7cZbJ8_J>XkV`su()fj#{ zZ2^mgm#7NOrDmdObknOs)+LUmVfxj0_HGt_U$`5#7lgyL_fAM$b6{qo3aCd{k;={{ zXnR?X;a2rHUVA<~8e9M^)8o;0b{E`kS_2Pja^b&2=I9($f#u7~;Lgu7>`?E8d3*)< zb#*#v_Kb!bMaL$hjH)xM1$n3;Ezq& zeCbvITDh9&oV*UKx6FbY_PW@T(MBc<&BRUK>)}b^6@HjoFTCyiNNWa6sG}hJ-06?d zkQyGnfABF~)Sm(6C)(iNCe}Y?n=T^BzWC`;E?+4=4LYP3!t-rgNXDri9QHhf0eP$8 zH-86FsOg8UBT0}@u4`~aAcQY=wty@S2nF8jP)rGYPqiBxQ9SPhcf@=W?3L?-l}bri zr5uChZvv^Y*9I#7vjKen?&eH3Po=^d#h}vHNo}k&K=*nKo=<TC@gcSuFpF*vACH4Yitt^x2t{rnyG&iwv_84KX)LLW5v z*bgfcZLz$#0Ww13$e4L8>i&@jz0Niq)ak|r3wWT@cAD(olL&2j5uoo~1LyB+8uIr0 z;<`y;Sb8~|oDS;&XQNt}ae(DU=C`0j!x;Ui?TZ7c->Lil?U3Qt51g(Rc3(B4k;13A z6DKZ^0*_{xCny9@okL2VEPqJgEksQ2{2Xn2h@P%(b$TQyRmct9+ z!gV{k@kJ`CtgXbCD{d3To2~|eC-j8QCah~@9e`sOner?aW~Oz4*5?A)_;DPL&u@qK5`Ad3Zzfv3O(REb z-*HQ3H-mdw1kCE)$G*Fn_=2a4XSeBbll?AGAJv8UZhjm!UFpuv*(u0#=4)WH^Z@gg zF7l@=RN_BYLK@`tnl|Rj;eqK%5Z3NUXR8~6m%%tBSN5TO5r=!u&ZQgDBk_to>znUy zfChzZsM9@+$yr6vt9=-=g5*)-EAu91tw$ntiTtgX!0GZbw9Z`FxEiI-?q;sris?{$ysHfRve zJ|{{O!B@2ew!KiNw%P(PZONqp+vhP@w=)-~yjg;uZA{_6XZPs%@#zp1>ID0Q{qfJM zR&+RA0~L`2Kz=i3mEkfJi;ZPF@%fm}UypJY(Renl7z_gK;ImU3{*bdJ<4p!Y+Ico4 zE}R5Mg?!lacQ$WGXf?`993_F18_2o4et6AX2TBT};Ori84BTUfZ*7YKL)O4#@pK67 z?xks4dNEAYnOx0T4Yn^gK-j|_xYT7egsa9;nLnLeutxz5CR`%%cW)5?6QWfA=mb!e ziNdqzm!r*`iR9ACKKP>GjI9gnvCnrUb)NZ@stOz8lKfVh#xf0Axy`UR+XjxLw2(Es zX3}HYv%z>{46e6H0e6eZ@KKy)R3C1mCl2j`U*#*oBhVL?OiQOrg?O0r7|HktQ{Z)h zJiP2xK?(grlK(T3|E9GBFIVZ)Ew&Xj#7UXX+Orf?^waT?P!-w0#lz;*CfLWi7y7dd zpea%tK2D29O`p~HVXi;kZnnqIyjk?T$PN1UbqecdcH--PEfkYljeb+=p)8yxDq)?b86)<*i8slL@eZP zF4BaY(K3T4s(JX}O+D6j9HR{ZN}xA2pXKQ~;lREOyppp8?an8XHDhbp|F8(vudjhw zT3xhNWe5J-^pqaXRilcHZ1Y=wCdBDVz^wh7VfVyPTF<*cj$YTp!{#gC`HFJ<)hj{& ziQ8kBcmfD__i|6df?#T96DC^sVdu~)I_dc_wl5!#-JeT%-Ol$_n zkQGppEeS=1>V~H-d7$F`4m4Mhq8%1r4L&7$(OPQ*nD?*}^WP@Gf1Sxx?&Cc2YL7Hs z8)l09cRcv4w}kH>z82#ah?16wd>F@g3SaBCp+>b0ZhrHOeiHH^2jq8w$96B~@fTxu zvK1Oy_S50uApE5?9dGZBhC>P^L?KfRRXrPMor@&?&7X@gHsy-K8%|yD_ah zA9c?6V3zi7e9g4QT{US$*?fpR$&QDSCOyok^MStTCKxe~AVI$y@T1QFWQvC3H_0gI zUTTWmk8b?u%=G_q3DRiT39(|mn6p_7Z1RNZ!p-rJS-g*N!xA96IvW}+a*4323T8Al zkhSNW5PbT8l#Zt}I}bwlgnl$Wzl~_$Ya`BE^B}LA-OFAj;l!|9$eEDKa>~ca#p`$J zm$$9#d!I#~Ts>vzyCx92tbg&RkyeaRlEx!X_u_BY6|iFVDCvk_2OFQIfJW3c8u%>> zGqq%x=2edu&O4Fll3w~KeJk1b^Eyq{dPJg^D#6`HG5l{@jnw-~6~6x}gn7L=@QYVN zJ8mVxmLsVsRFeSmx{GM_S@!e2f5=jy6c8R;i4)t3K{_rBO?7>-@N_5rP|3q{m&)n4 z)lRs}g=qrYg3#t!KA6pzM9uq7b0e+(IP*a&)?{oT6GE~uby_|ray=O?dF=r7n1nLs z14OHL2|C5yCJw@fpw=}BD(*-SeJ4+xv+X)97To}i^{$K;p$6kMq;Tq7#tkV-1wmC8 z3ff!1bWj(UXsDra2HVTzg0VqnCmjB<87!acqqXV?9ezJT_G^Y1`fv{TInV}{Nkkha z)yTl`11UHm8w4j=ALZsIvx&1(o&`q@CgUgRRP@tshT#*7iS?Zb zu$|w9OU%;odZGykR%wF4`52g>=8CPm`{3x#A0#9ufjTCa;jVw<@E4!bLykU}1?m`( zmW~DN3mE;n2Hbyk zVy99BeX~0Y&ur^P-KkCRAhZ(lWQzDDCf(TMd6+s&3XliBPIT*fTkzSfj!_}~$Zb7{ zZOUGt^*SBj&McrE)BGXH$b$dj@gYz*jDx8M6=09mbjIyp0`eE?pru8C@x(L0)~W;g zjzyy3*>0FGS%-qEWtcK5f)#(t;W^9d{yb4gM9Ll#y+`w~WW6>VpREn2t}uVEd<~ub z#S0}R{IIp71^u?Pv$Gz{QdIAPg)&;ex#+>%2?ycJ&L%K(5CQv~!}v;RE39y+=WmI$ z!N6ziT;Au8U*BG*FG9fyFqU+RK3&-Yepb3LeYH0% zyHF1f_pI<`-U}k^dYtVIn&PSfBa*B-4ohwuph`;*<5uQypHBqfUiS&;@M{OTw!I9^ zJDoA@8v)Udb?DWckD?@#+qKOTYVUraN~hlOb}U;(cRr|rXEXotpZ7%Jl_VQd{(~_` zrrf8ef3;y+h9KnsmBdwoo+x?Q5$12t#deRwFcurdllmro4_c>(-$ zlVV{)NBbUTFu3*f(^XPbA7nM?;WdJB}3n zpZS-IYN65i>X<87dKZAERySDu?#J?H8SuGr9~4J5;L`sBz+QMK=9zRtLUbS5OQVQI zWh#tQ&w_pToH1Ih1D5hxPU@yOEs;oqH&=a7K{*O8&#@pDLh&&8A%=1J-Jo>3AU=sK zMIYM)EV>c^0cZ3eHrpEJ59i>C!6h(uOBcOA{N@}6pOMVrdW^fT!nyv8<<=!Rq0!|T zz*9O1Q;zCG*QV(xS!BxpYUU1mRe50btqls-RRHh9Y&_Vz9kYkhNy;TVayk1C9S&Ln z)7BS2;${JSx-JhMXFEc}mr|Uv@)Pk?OMyoZ=Agrz-B8lH2F%#@iVxEj=22~^_+5{i 
zwh9uzwO^_KY?dipln^*RGAZ~?$rjHxvEJSFV1DANX!=?|4KhuHaND~HIM(x= zIG+{gtn>nKj+ZFBRraLuH%wrEZWjM}>RLQ&oQeI~cd`Wq*aXf%`3*>h(9bCZ<+tnl>vr`Y9kGfD6vvS^_G;3~dC zo(SfDR;K&+tS8&DIe7bZgj}pxNIuAmVza0OyfJWs%?0*we0~N=_=gmn<#hYa!kS{&f@1W9ZrArLiD0K|jb zz}r90V8d`J7x#1y;rGk{(a~v`c0&oIrWe46gL7c;!8scIis_HW3Ai^Pg?f3VzzvTy zh`iSi-{0;8|}FX)sUwT^Gnb+l;D(U2xnuiA-tZb7?&}@Y7HT)os`upiJY1TJf zz8!&RravOWEpM>5yx?iP43%1 z5$@Q|gcTv(FksGu^Br2)j1{=>bSEe+T8~dZEuiZa&2j6BO%Q(e85b{FL$_aU1?A!t zzGf(add>)=6(q^3m%%vna0#xN4KVGM5Zw3@1H4NqU}$!QRJSLBS?79ud#j7Ps+7(> zh_9mc$BnVnLyj&!)(AC6;<1=@C9@(LVCdv^5}+JOweRSog{cEh)iA<(>vZ^Y>kxT! za1)tua51>bZbubb4cDAy!RPZTc$Qg@dvyvpfap5N-b0tx~>qRm1HcqDHe`vq_N-i94+N?;SW3SMEy(wx%+(beS0~%F`*hGm;WM$J`u3eBoU7s zWt|Q7Zc&L4K=uA)czTTWaQ-%--0>#RiCKt48eOO*dJwijEIsj}1tUKX5vO_{<6iq? zYjho)>*}JpCO6}M ziE4EGr%IebSs_g=#PzMM*3l?C^hVDu0frpMKP6L-co*e8~Z7hkTx*@0U~UU)Ie z7|+9i=_dTgEit(5UK(0IVcYS03h0n%5tu!Tr@KeTBmOam=_z2E)*n#;s-*-dn-6~p=+z8m0NJXg z*z$K4Jr^8_H=|>r@QNTRM$7_}&!H%#Kc1MMJ4b`=`M{uR23|WM4X>NpxrZ*s@Rq0m zZ(}uju#Q1<-d;}P=xn%DmjhCVx1i8t}*!}RS__S22 z9=#&Udb_1K@b4-Y{yG}B#J?aBg>Pv4^6fD9ZZ$lJze%?&eohmM;&J1yZFJz@9R9_k zV0gT!0~||RLF_l%jFL3Qvk_bI+g5h3ZrzAxOViO#^g2D*l}-BDJ+HIom!Y1C1>Cq* z3%mAbq4dva3}^aM&!luT_9(=)#y%(^!8Y4`WbuIA53(!cw*JmnZ74r80Y{HDV$|ui zcwhD){+Is8w5+h`=)d-f+adh21uxmalOQlZasRA5$70Se1hfPzB}YL|&%w$6HRbI%3k zySu1D&Ia(^R*nlE9fmFA7E^5}ZJOoYg5^UKFrC3O)W^Bv`HWd0`X-qeCCxyoqip|u zwl@*qc!3M6{AuVk76l_o9&mAR6}`?kqeE}oXkFSLqFnri{0`?qfSej`^+>_^r8%%c zJ&)cLt%vEUYzRz^J>TrQNcem&#D}#sY)%`*2WminiV>+dA=GQ9BCNd~0+~v_$PX&- z@3_&7&vKfuxg~{4z3jjv>+ey)@eLUAD*;XRB#~+QyO4Q+pr^hKU)vl2-?c6H@sA6! zyix}3r?b)9NErM_IsVe}Zf@_LXncQOjs9eN21Wc2Bu;f1brCBB*F$W>=rzGL(UEv$ zL>u>?N`{X?YUI@OZ8T~fkKEn93Fk50!9uYPt7H@4(C-o`y~%buW4Dni?KPO9(gJFF zNx&e)RHSnuEY4aE`;ldozW(BCm!#puZVQ^@76BhKxJi0%iSJoE#iURO9yJ z^|(MB7}`#Z4mA*serqx>@hGvesziy~8ECcG8j>@CLQ*9piLyP9Ryn%$V;NdA-rn8K zu8=4?4(9165V?IPX#Kq&eDvxut<3JFz7|ODw|}845@l)9j1BzuZEle2*9;BcJ@MTY z8&HrxO6p((lh2U{4BXRc&Fh?yzV7o#Ih7Dw2cL8OG~gkcABi@Z^8GweSm+q z7q41$1196Mxr~w~Ncqch zr9Imr{;?AXB_-mp63fA92tdDTG*w%f%;_to;>>R?XxCK-t}#>jru!;E;7m6>$o7N- zoP*pjE`0)nj(BImw@fs82?n z`!R5K+8^S3%o(7ToMaGF&j%iJEuk)IjZp^^XvuX@Sqi(84GNh=sV?n2G9AXxTT z4Trka=mE(Aa$V^(Ave0<^OKKsF&jk|{P395lvzs*t=HkxR7u=DKsi#vwkv)cVYxsZ z)6F}{%$UP)dPXE@I$HuyW(GV~9i~0X^Wd6C5-i>xMec^i&1$PRqA&m2!_ zWuVsyHw?V0j!TB#(CpkeqOo%lcW7!GSSt2|+LRCU^0rQXhNT*eW7_e5My24-&eCU< zPtmJVx5$NA4&=P;Pa3JeiXOk@fS1?#!j!7lHHD_e%k1D3!INCVmG zDroR4f_r*%vAQ!5FGyt?&XBBzO2~mP-)@r^?$vmvTMb-h4pGCA9+q!&r3)8%f|pqq zc4Sn;vm^41f1pa9G9SFeEfux+m+9(}6!?*|1pd1^1?ESj;dkFJT=exkTzs$qI-5hO zO5I$TBHIPtKO4x)lefu<*&$H%@c}(|SOT;wdT7!}0K7kH3+d+?aPoa8ST5vE2P7r1 zo{fBJ{5OCpTGB9ip&&k2<>A0&F%^D+_NiGX9?wvMBJ~)JfyZTY+NiL4A zbi%dUt?3cQgMBWcPsatxvG?)}?umd0ZnLq%lrIO+@k1&sW;w#tRT;1$G#@OS2G~AU z4b&F+;fXz)@&0fV96E2nxu>NWZu|Fu{$cszjOKOt*Pdn4F1#TXpGwgbKu+xOq3>kSvW8V*4-=WJD6%6_5Bg_Hf^kTP(eD{xw^JI5JR@MmH4`k5f50E~tbwjj zA5ip+!~N!G$b@G@#OPhD;i=MUw8ohvUsen9qO9TUyf(6h>2x)*kvL~Ko?1@b$o#6U zDDS|Y%N89NIGTY`mg=M_cLo1Xr8pe@x)|LW3+a02MQGF;19zO4;-{8On3{c;Ji8l@ z!zQ{YC}#laC(lr8k8+&3V>*PJi{RhC^Z5^t&LY7%Y`3GS0dgyPaqHzwu$25k`fR4- zWL`I2zD}PW)tm=j`^J-kUUT~Rjt}Y6P(nrN1Z5CbClK4R` z3Vm)@p@okmj@hZ=h95id{%7Vp_3Xf6#U5}!)&muf^I`U*ohY(lHDe#$qZihQ>7K7`SO{91BEX1@b2k_SF#4UyjG-zQ0j!D&`t>FSRThwiANUB#73M>B{28#hxqE*at z*P(IXP;L#@ekL&KWe?nvUxXrCCk=>Kmr*M>A^!5^KDaNH;Fc|0vA)j2x|M(`6VUU(IxdwV2$+_5se^qNaDscxR*YFz|XU+^CBNH_t>!oW|I@ zJRkVCAe<~0D8*M_g_vf$npiNsGCW}sxc^>>zmy$7BfJz`Y<0om2)mb=o4_}gfvy)T zBmK*-ka5SDUnehvHiN~)ExZ~$dX-UCa3AYb6+>e9JbGhvJ&m_n0IO@nD4i(`-D|$n zQs9A-X%;%Y*n+pr^ubK$FwB3s0tCf->yD9I9*kN4AQw6)<*Z1S(5i3j>Zg89l}qN<+FKc->K56<$3R~8X8 
z(ccm@zBE8bSPxMeR)jo-e5|Wx_xk=Ky7g)V-qWn(|CKd>w5@jF<9CVd52}IT&9&rN z=^ivc+enJ6BU!dO9Y^2#;l@2tMEsNnSsIdq(>JK10qC6B_}qnz zD2Tka6qi@5r5-Eg=bXtZPDi7#ZVJgw*p3&pBXQm9 zLO5$6O2v=FVed9w797{BKdF>7eR z3Tq2|lqinH?jTG&P4e?QSeMBo!hHrj*wsn2UeuZNo^w@B_!i=-+PAbuQ;OW&5Q-Z%ZR9IUmqGvLL)e{>1M?*c@xY98oWsIu@a#B2?=Sd5 zUfI9z9}Ip%Lzxd)q%f7=dD#Yw9wxGGZwgeNUIQQ2v{Us3X<+7|0=GwWal;{9kbPhZ zVI8Z%I?kMajs8Ntk2K)!^O^Xl)qv{kZpEVkLg*#s1-fDt=r0(A@@JATi2ZJAjXS_m zt`ioG>S28IE;RF8!%evSgElO#CHu`|i1G7S9RF;LW}N87zPl}`WK_jvcy6FOV|6il??>jFq$K8J$z(&-)kRiMd&5(HsNS=8j~iRys`4?uX~P%;y(Kr^{tD z&{(;VNPci9dTi?@@=-L_sX3D6i)t~cvLDK)q+v?fasK)}jr3!^5x8neWBeN-couAj zypUbw^Q>I#{z!1ZON(2YwGjuM%V3L#7%#vkjpP)#(;MOhZ;UbRKKUbkzkziPVk@}n z?kQw6u9Kf;affby_mk{A-GzQ^ubF9kXw9C}o-w_;>?}KDrXLI&P3YcF(>Nluzb-*ChLjM8M?^ z^E6#D4NazhCYK`eaoimX_~`tL?;>-XOE2AsU;4W7y;LE1OWAI3nQ@q)b@ei^%t@g}o4X)_w+G6vCX)Yj3*cvH5^wVM zI<$6MMoYdcqQLPE46}`Z+HigN_pt{D7s#`1ZdKUTx0|jG^=0!Ysqj-LkGN>OV|(GY zps47{E&r8*w)KtF&EyIB2moL3w>dg+CpQO#q~sAM*B?0(&+xZ-2skQdreWB;+D-__YMcMbz;e z@)BT5(mEQ{kO`WBjUZI#%Aast1urMt<0E%ptcl{`%s|P+YEnsNV%?0&mL*H3nXex@skFjs~ZLV!}YjZO4vRO{C_AdRjG7#LD z-f^DsC{Ad2!ZXomNL-}GEni+u^8OwG@vLyJ{p3^LqvZh*c%k zkjGOaZmB4C(gp zMvdvUsPJkN6eKR7`I}NPZB{%wb0Z|LMFGvNlF{J60_LA3v+r;=p8j%_gtr+(%K{CI z)7Xsb4N9S8Qwlzht^kiA2YB8*L`O0X;Bwwn6n-j9%LSq#(>V(@t@?27rWtVS&SY9X zmQCGf4PdgW7^a&E!<}sokoe*%KZET`pDf;m3ao3k`i>c}+7#d7R6Eg+D2Fv6sdRl! zC`4Zp2i*_*V8EuDA9N%folh#0#LM>B!;giGpwGldd>uTT8VO@9*J#I17j!H4gZQN$ z;4*Iph@D&re^rZ7u{0Td4%*0C;NNo#{2+`N+^R>im}vSQ!24n?xStLgity~ns>>jMS{J@b?coj0+=XHinQ*CnKSrwuz{`eNSn$WfFlb^Q$}P*nn_etm zb0?Gg7so;R&SspR%%cG}s?p}B7|vg*3QJn-!O^c67Yyvgjmq2b;CUHHx_FH02u}f_ z7d~K|Qo(;>dVf4%LI2^&FfV>Jo>xvr?>nLJ@Paq^?a=}0lsJ5) zR*xsP84-SKCJxzk(8VP$X#e^~$oP2>bBfaOg{3rp>TX5_wJxf!vJy5J8{w5<>y}4Z7XlfpXrbbH@Ouihhg?KC#*Qv376itVw-RRKCste=lwl~kMGOE zWYuOA%w0}9YL3yZuF257A#dQqHhbRm{7(3~%Mep+a>%pDIM7b3qU#m}Dg;l*yL@91`-^49G^(b`N1gl zT!Q<%^(Pr)y_1^-i^$H9^Ze?#0(h(#LS#?z$N@PGxS^%WIA2w?Vx}QKBy}b1+gyW| z%m1V2`=W@)(rx5b`f)1GJWKfA0ngNXXr+h|W4AAc8HHsqbJZRaQsY5Wj_Yv^6AHL# zii~qnECt8e-hz7jM66%$fD&t3Ay9~QgBCdBixc+v$wUGL|2`!SPJWcJb?Jp{)`5`C z1^$`6U$eTd7C`Yz`p}rfAPunRo&3h z^qMRRnZ-HC@VT;Y@zmKhk1?GpP=4AW*sLQB(+~e6M`skXozet6nm0_Bjt1k#6e<4g z;UqM!5Qf>m_A+gCGuVum2f_cCKe@sgt_K_feTt!ZWWWuSvTX6n_JrHcI zgS;TNt9Ud86E)lLtKt9*hsF|>W#XurIS*oQr$E^+SqQUFz%ylYacU!ST00xbxsg#S z^j-zi#4hrNdbHTS=R>+))CxtONMeO*F?>852=}&FqSUFiAfOQrP6}E0C#45Nu5W~P zlgA`6paXwjn~B3h`S8tZA9|%9hM=e0u}-3z=QLvq%sv-E&iu(F4Xyz^0l!|>bxDC& zuVTUdtrJ)ad!dX$5jvwO+4uSd=hCzo^2druRE{SID2C(F%=xgfFBBVC=GJ9RFvv2N zxM|`);+JHM?G8&x;Dpn(Dd-t_)zwA@ynWDDe=^Lh8RfDcN8rhCiugd>l=*#SFfmmU zN_~&h|8kl@fn~x9@>vdHqzG1NRluIgO6Xqnn_nNj8jer4f@7V^NMs=vD- zMaYy^zPH4yslwoWVk0gzh=CzlGu-)f2aY-9;$@3y_~K~=qPKZ;X;3nx%#Wg1ze=Ev zZx23MR|oS1O1Vf~AsA!&)V~~Q?u^tt5MHB?`nj(;eU^8*wakS&`tRkLbX9SBckB4n zq<}x^WE)y^m>BHe>48qk9KX!?Ayabq8QtV=UUc#6X-;D;n_KiNoMO{^I*@xk;ki3C&l-Uf2hehzGVcnGfx z7~;vXTD;zyjMtbaqrKP~ucy?&&qwLFXHzw~Z=6rvZ{H$2j3x22eKbrp%3^wECcS-3 z4Ky31QD!a=^5$>DD;o2_@O(1{5q)sxw?eN?E)0w5gKEeMj8!y6_r11Y@^~#C$#_bP z-x-jV+KK5hbIIwr1dwRf#^CW~aP)W)-6a>=40uqphg4c#JQ$WigpG#~3aqb;6S$ zqx_F0CiK6)&sfbX|9vTpHQcu>#)p?_*=gMG#SI6CjBp5OP6lQLRHkt9)OXi4jJU&%;9iL8c{ zN=buCD5Jgi-g|2A@wzXg?2J^1$ok02$Vm3@{{H@Rj&nMP-tYVMy07c`d_2%1_A}}1 zs@L(oRg38btEiP#HWawdK?z|QI(A|osc9&N>DSzFr$-!I|FID&UD9yYnGAA2e;v;6 zdrYc7)zQ5%f-sz@2UU%JaAI;XNtbMc-nky|@~9R(I)9j!O{nG0U&x1`4TA8Xr(0*- zvsRE#YJ2+^KlpcnVU*APF^CO^;vL#dNbQY%)?6(%JAdm1$s+`<%1hG@R8XrxH8`! 
z92KsTx;^7@noTx*>RrM~RK^fLmrb-IF&h)DPcr7`Tk<(^0k}mSp)6nt&)qk}u2WuE zd{7AAiVLHwqAw8~y9-Tc9>Hs2ZZv$Q2l@8T3cYpT&^6DhIZJ&-yd|)OG+fw-n-thA zwPzdXcV^@IZHmyvr*f%VC(<>nvof*aD_3+(2Rs$Z$fD_{`0L9R+Wq>s&Z(SG_?qmG z4ju$*I&aZUX@O|qdI*H{JLzJt1o)OU5hYI?q{Z_xhRH0z4thPpXW zqsTJ7O>0oojur9TBd}_PJT{3=LFKR6;Fb1=TdP-%uZ&jVK(Q^EDHj95$!xAM?geT1bB5JmeFTvA49GbdB8% zEJVL9N2l_C>)qF9=#-I zVw+{O&ZA$eA<;mHopq^jxzreq+A6W)p&!|=WQ5mu&Y}k1x>$*}SUFULlXR>g$~c`a z{%%eO%8t=lx3lPpg|Dg#Zacc_r7+;`Es!xN9 z^=uf#Fbsa~2WdzBsjB}r2;yZS^4wO)9+QbL4-{g=SvAyK>;fHJE>$=BWaLgin^ zs64n5ZD)q?BPZK<_l-u7Fy;)mHX#dGG!k73*$i>jHvGK(IW^fh4Z{Zh^4x<^oOnGF zzNyMUHJwgE&x>FxWW)7$W%$%#HXb`&gacX;aBt8Tv`59DbG$0KmzV(`wrA@+-aiA) z6TQIrdmfY?(ImEt^J#(KGZOu3ET+ynjyrw|z}Fj@5OPl#p0L@G$K?Y+haA9R)k>~e z>M$m*KgWkiv+U&IT9k8?zyPtA~pxbReE}4O`3} zkZr0)@NO`fskHRziVzrIf=%=G-ZdX~2X((fr4EnamYug?dsS zh`Ms+i3Hr?f7IlRvNuHo)C4eVbuWHRDAsUdXL8rXlg^T8w;%1AJc$am5 zl6Mc%@pA4&rjGSfzb8?b_snZzW=S`c8Sy`VT&1(0vO5{Lp;1dTzO-t=LOFI9G2zfN zG69!8WwY5yCgAaNJub3o#WBMHM7lDoP zPl>|lsORGDPdcdouYv|B1&~iN^|feHx(UwfC*w)M8mRXS##66m;tti* zV3nx_hd#F9{M>!8S1A{|BTteCccQ^l?hXm?+l&Ekyg+EqJ@RbZMiAMrjE{S^;NQkK zBr+@p$NaUWwuxfkcXTK0RJSB!zxxo2jU1_;Hvue6rqbH^()dgxAC{f6pyA;ad@-D- zu0K5BzNR%kpIJ>$c+bQK5z3(4w3%gOT;RQ^1X`vwVRYmc-qUb9HuRZ-zv+A0lOIkk zBNEZyB@*X+8zAm=38-V)j8Sg+WZRx_DD6Fi0na4iMPML?&VJ7i)tSPi(>1uCi=ms( z9{}Z@e$fAL7ETjXhHGCZp=4bR)z6g%%U(~Gp*>GjM=fFbk!L!)RS&~)+X7yzA_K_{ zAG{{K9sCU9AfU;TznykK=jqN|-e>MHELu39+Glv;2>W?c{#bypWj%%}m!STfEodZK zjh7s2aL1=2+PH2vHB?=XZED@zX746&e;kE-i>Jb~oY_z}oQ_%(qiJhJ0{?wU19L&x z!g_gYl42pt<{`OkUw)g~4y9oqgyCuN2+n;+0NJX$8&b%N`}41PH4Z805)$d@$*O$ zTvVTc`O33#kCinJJjnk4zWavB0ccn{1@-SIL12a%Q2+0ozi$MZ^}Fdv%t)iTk(t2H zkAj~!2y|*TLuy(8d1jdlF}WXg)>m(bBP+My{iy4qetscxXxqPUgM2zm4OdKe_^B4p`v2tf|_;b64Uby{R<+PA%MW zZzu9coU!}3F1Y?~MwuV+&}d!&$L81IeOEIm>+hvp&;qpImqF)T&7yuf2jJC8NcZhcSM!0vlh1_~B2d*yc9rJJ-E^&KD zXDP~($&cAC(q}h$dDjuDEUa<hS-E9w--J_SH#?z-`rjXQSjAvlB8RTxu z!3Fkjm=hj>E1%`y)y;tWi}FGHbuhjf@C4=4%TS21NUw^VfYIK^B z3N)y|kV=1_N9?l3qsx{&`g6}R*mK8`dGrd|J~IdRZ>97$%e^mHJqNFGT8w?rz}1Zg zLZ{9ds`T;>ZJ73ks~F>pozK>B-LK5)>VRZ&`{-n<%=Xwjw{)_s!3xqMP=fsSVO}~t zi+PW?qnF!wwxifYrfzz`*GoR6fqSOHzMdF}cifKY=Q6QN<`}lK8F$ilf{RWY(cx!r zNuf?Cly}SE%Oh3LVH=EF=QU!4ZY-Tp{)_s{hl3u=!>_w=0QS`+Vq$Q&bG(PDJC4398^J76VFc zQ}7&nP6_0f(^m@dkS5>&HrX@a68*^Ao4zOU(Z2NWMQd_Xa5bnaYU9l(Ry3z6m;|ys zTA^P7b@$)Hzn9lVKXEb4>8>FSK{LTvdpVVgcSAe-KJ9rXYl&)!7ZLg`N%MEkr`B)F;?N<_n+)5O?xY&SFo z*^=!LHR~S9Gh2;4IW0IfVig3eeZ_Yv%*R6?^f3H;3VOwq;l|9vjALCx-U;O3(1I7F ze?<{|94H)8(SrN}k`SAGTReEik6sWVVw5xa_n5{2E*{)NVvt=wFy8JSKMn)X|^{&Eu?lbUc z{|Xd1vkWC4C_~oxGW4090p-n&>`u#ar%A855hp=#wMhfRY68ZJwlGU72Gt9v(xU%y z#(K2zzLE`mJ}(N#XSR{qx8^}+g&Sw6x(oc&ztd?Iww&zSXLNh%9=y2U6h0hKLe+Pv z(DXeG3mq4cEw{B%YsYMSub79;5A<;EM2@azo)67M4Y-~$Pk*Y1A{}P|*{_%4U}PS= zelddYBCL)QE+Y!V&7)48n{mQXJIh5awWK`#qi-(VI$;Q)c2Tx_6!kh zd=P;b+F17?w}&3TR1Kbs2S}B+3O*MSr4Q>XP~$)^`S9O)PT`ac()B?)n}@7W>Nk6b zT4zOSWKzI4e}m4x!-aH3+6KrvJ`tyEU=BcAW!&Fgf-9Gs!`?li#A(|!e6`6Nbq-h1 z_0k2{H02vvd!BjLrDx;(xDuT7wgjH+7J;#1ku>qcW=Q{7PD&Sc)B67ofCe~WVPyd> z?zzk6k6Sd=;3md*Z`1mZR@BfX-u$u%pxlzO6{Z zeEtf3s*^>>|H>nxb<;TiiOS&j`39+ocuSTBC*ieg|MQ%^BAwqd>3t)2>>s>IyC2O1 z0jC;Hr1LiY=9`E0RgsKw>5cAX!z6HN42}5U3+;WP^nn9l^%fZrm^GcvCns=Af`@5{ zPc9wHcHiHw3uFEQP3WlKgcI*pf``o{yjtJ_m$ufBcy0%LKG?`|6@rkIVMz@)ro*_N zXq4TY&gQS1;obNS-o^9{EuOR#CJ8Y=OL`Is=t?C|R_5W-H*PrRKq!1(zm_q*H{qVK z*3@Nf1FBqHMTyvLQWdh7{7l>g;$xMdB$|LuoIe#lFo7|h)^NUD2WS0D8%LkolG^Mp z(k<1DmaKC(kdi^B4UEGtA35B2#~&m2+rVFzSKm}2gwQ3D&pc@FHzR@mBlJhKEnKtsP1^T7pfCNS z;OO!?kjqcSS6h@YNVo>4ex48IXErg0j5Jv_mcZs+(r|5w5NN4a;}PwDdQ3a=I5^V1&v8qzWEi%6T3=zU&kwIcfdoP}ydN6!J85hFBl_cc 
zJMCDO3lsOgCpl(+$!i5OdLp}rXg-<&?cs^gCgu+pI|4i`tjUi%fG6`PnvjV&K zlR(Jn090OLe&DUMaOHe45_>0(eQsLG>d3hmxA70TYZV30teR=vn@NmyE=8{ivGetx z1RQ#CkX8wmv3O*?HrXe|6{N$arQ29O+bskFNNfkjZOK1T-?JQ64Ze#njql3iY zQ5IahxfLe7O~S3QNn~o&81U%JgQOrq4BEM#8kiSjWTP&KkH~@jmlmA#elb`%X=5q- zd`vPe0vQI>`tj!tDR{LGQVdx~S+f#kjm%L|x&+gXErZdBVsv&N12L2Q$ZB;}np2iQ zoxf+n$U+1BW%YsH+UkLq^nI|ksFth~NrEN9Y1s2C4!5}N2QC@fG+o50&f=|}R;r&kLNhdE@u_PL{sb3L-aMCOlj^wxi~G3OYs%mXn-?j{XP}R9GUST~ z!%n$GQn(}zyu}Nk;$kW&2rLBV28K_EcHo0F5p)br0@YQ`!1f?$-Cs*SC}-g8wAHw4 zUkzt#B#OJNFH!T^vv8bApOr)OEsJq~#6I}0t_MR)TUahZ0xb6C zVO~=`njZJXysC$sSgRaNU^)7F=4FWU+)p)HsW2zoqd8QCU#7}V{pxcMCg;OtP3 zebGLs)XLt^iZ{XXs4v`xE!j|#ri4ZER*>e*!R6Ll{5_F)FuGidUk=0)v%e~!f9?~l z896NnAPE=E)Z}lQT;Lj~Gx$p&{d(vQ5jT9`nz6+j>&Zlj;+F&%* z2h>GM$@8fj$wl1&T)pK7J$LLl^J)F!>>v0+T~RuG->Q#~pR&Htig57nkHkx%&N%Vs zQ_l3iRx)#j3-l)z({=61WPd^{<|Y^5Usq?HFGGcpHW*6ZuFaw)@ylVCc>~$#uo{2r zGX}-b1T=V5k6sR4IA*3%lSuG zLGs&PdUR0)eJ*8%6DBNRUUD(=Zs{4k@M#Ymx*vu^<2GQbrztKNi~@_~MeycJ6NqCj z_Md3NH=7pW%Z|ae&-&IVa9Np5(DjBNL4W9B(^0ZnFpr8Wq~W`LO>kvgFif@!Km$=< z_}I0cFPr zSbl|d99xr^&wVx=%)3D|%mawKvOP_CGM`-qCxAq5BX_l?4lYP%kl8A$@YHt;GD*V; z{8qE~=_SQ?xs3jg?SdsRA2ER>J!=fiQY17EdY$(qjV!pkOiwj*I*z--kufrzDp) zcc#Fs{YhBxa5}D$*aT+t7vZ8WLLh!h1%v-&g2U@7_&0wI=HFmB=2xt5)mzNCy}QAD z*cry38bfl#Uemm$DX?J50FnN;hyF}$LETXkdgin~*g`j%m=p^}3ldQ>q6jmx>d+`5XJ`BHCzPQvJP0|_JhARpb4r|5^;|2JlO7OMh>JJ6S0O2 zu323fwk#|2MMBjyy?S1M^$AB`omw&ueZlJf2#tei_Wt%u z=T8gDZDl?ueb7$4NC!Q3hV{_+Fc;+bq8qpuQ zw$Fkn(tDl{7){2QZnj@_*^bq6W5G}36%|x11@&jK_<3U;M&!=M?2QHBy}*fou<;)) zkWxdJ7eLgeQ|{CuA3Sg&ANMR2zqJ4??>tWjU zJr)FmhKTH7BK4ZsN-8~nl0#J)a9CIvGd#0-mGEeKK#TELB>U+IHwz9VrE`YQGRXMH zX;@KIgxfVr(cRsTF7V0&N4@D-`Qs6NSWyH9ueRffr;a!-jP*N4RJp3kO|&ZE26@!{ zkxuRTuA{cLn(~HDc!u?Ylf^VZHbwxmnx^p|qbo^Qr5uiN&O+N8XYk6=1QL+4g+9Cx z%ibY2W2hdb-QtbVaV``*uK05kAQM#ULa)faLX%-`SElx;!gn;xIP|z%Y(67#0%15 zCjlSvn;to;Oclyw;HPd3miugA{dp@C^!ZHV{)C{+?qWz7nW3Yn>;=6S*gm143>2O! 
zLO|>tUgmuYtg%!e!-KMnLF-6&zD>ofXJ_duGD26$1;E)pMV&oqaTvGn8r$!>fG9hg zs}*L_lQR9p^~PzUBk7Kdk}|--*#qLdlp(nC4E)%nkNzcXXc%3~I{tQ$_sWF4y<*MS z*rg=1O@p_sn2tMk&4=;s%H&XG5c*!&2HQV7;0qE)Z=dg?y`u!Bzt+R1X;GM^2BUL1F)1y!MFYq*gA3JRL{q z7|=lt-#ui)@H_}vwUI6kze|Gu{p8m47Qy`@S=8tgB?=1@`5^gre$1o_{-&ZK(ind% z5f-E$KkcQd3n!p`Sur8mp?GSsnoh`cY}cbxZu=Yq9xBtIS|^>5_d6I{7Twg=konaAMS z&3(MW2T!!yHUa(y?*gM2g^-igPj2g_LGSau)VEt0rR!3$c8&u~og~5LqV?b~DGe_Z zM>4wi7*xF#C#7|JK;~r)9W7o8OG+B(=1mj9vPlm8n>t9WRurK43C2HKMP%%fz^^_7 z6J|c8&H9Hids7IG3%yIM&J)yb~@300We9I-Br7bQ1JIQp18%)Zz@S>8Z|_@ei!;-RRqqe$tD6` z(Qt4<5C%!a!7qt4yy0C3zDlQ1t%fLj0uv0I zsA;S|yiO^?XKe-W;5jofFmHsoFQLk{bBG?cG}1&qO;@1DYM zyzPtI*B_wm%b%06;ei7UdG`#|hr%uLu z#*TPuWeH3^y8^e}+K#xZjQI|{d2%2UI0H4j^;Zhao`lf#GgpG$S9bP`2*9j)QS_fr zC5jYpg;}~=@o$U`JWz|0_ZJ}O5fycfaEl5 zDx)ZcrCxo6_;@Sb*$oWbTJNRW0s=J!@Z+q*BsenLESpYVdvTOY`xz`6WL8{+|-oR6~{STDGl1OI0)*jmpUEf3S%#}#kc6twlDLo+ujH95p<~BdSlX30FZH3duF?go3fg~*n$JBGj zD1X?Qe(kLX!$BhyQ7fnGa%$13bcp+YP>>`Qmk0MDrFN*Hg2aT=f2QX z>nO7Qyg%u9B1j|Z1jy9jGI*sji}xzV_C8}Xu#M`rykj3Xm%3{9Z|=k+EegfdmV(dE8$(E3jWo< zzqI^HGJMmW3pZ9)lS95mIC@3}Oy@_V(jsfvHl>#?XW5UB+j8OV(m~pkKLPF9EWu+Q z+h?v1WbatFY2Y+3yb&nH_I*yI?#DvBQI!G*-R)t1Ni6e9Um|VyJ&9H&%O9JH!yJ`n z`kwWFoUSv6?3&f|l1U?IIAmeM`{nTDh7?XMJWDd%s!2#iGxWcSg7}IA#>!)z2WMHd zUbY&!AB>;I{JvT_!mxd#F4_!QqEN(o9FhyeuFnm4@_Q^Q_Kc-nH!ASwr}f0|To9kZ z*te$t$^kOfF*z$3p2#$Thlerr=~R$Kztd4lG>Z=REyIB`!jREv#CuH%qN0zFK*P!L zV6LYJGj!D8qDnpaEpmp-U0{HZZe>Dt=@1E5Nue{#T5#Gx4qo_unM)jF#;7Ek?2Jyu zFKgp4J|+k&eyoQL_a9L=S^|IdeX+4~4!pecguD+d!v-xETv}C2;xA2y@UV2;%YLVE zx*phl1JHe0KN%<9fR{_^;EkmPOqj#mj>4s62J7YtOjwAEZ#JRICB{j(*oq4Kj$xCm zKAvpP#2D!d#JH^xFE31?kMyR1p`->@M>!CFyEuBY&Z$aa0)3Y(3XRrTu*A7dXGvfY zmid-|{(V)fYpjH*PyyOim&CT=>$X1fvx`@Xmh~wVhglHY;@KU47QoUp5E% zg%w!i?hEBM;p~jsg%wqaAQodx@6I?0bNj|n^M|$EjmBgMc~XP2s@I6#jvSaRwH{*_ z6Xi*dF{=Bhz*LP^xYt?(mR@h^>}mF}>qZ^U6){06i^W)zd72i^W^w&#4NW)8EX`m}5j~?+kjFWVeRCovU!^&?P#l?hM~i z6GWu%C6Mgyr7-dRR$8)C6RL|(qdnsU9{y-ecm8LI&OtlyagY%PPc%e_qxEFx(%QUHxh9dQ4$C}Qjw0qO>a@Xg5_2$R-jF77XMU3DpUw2C>? 
zlT+}e^==fDG{Wyq`{DeDeQ>m<0nY7sL=|k1HSpuPx*}Z*Dp;pQW*25Hxf(*tjRTt$&|3?RpZpf zm~`0?Z?bH{i^FR0K_mvXOG3HPsLA+e>@*lNdl}51>i~JYKkdxO!{i-FFwr0j%PX^J z+mv|d8qFZjji$4%R~?>`s)kvoCE#82Je+Yv5?yDw(&2t(_@GG8#i9jm6bflu-x50Q z_j@wU%L2aCv=dLpsxUHW!r*t=Flbc<_(lNJ#Bb8`FNLvlf+l|4V~E%8xxu51HhfA3 zh(q!ToTb|c35-=UYvwKzE?I{wuO9*F`4@>%iy(F~k48hMBg~!O3{I;J*yl};|N6Tc zeM56F`c4Eczn%^U?O)UVhmtUDS3M+KsiH^yDQ6z!0ToNiH<(e^ zunR=9vWa}ibEZi&7&4x*z0YU{h%OIntLhXaksBt0rYz%YdbGfr%6HUUdJ3e5ok3Z> zEjT@YH$(P&!^fpQ;Hi>>|B_DK!3ieEp;6PcOPWPGL)eoe8(V3B&8PuoYk63-Z5sW)y9V+SQ^EP~O8Bhqh)Z{WBu2|sNU`{8ZvK5= zcsLkKj-GD<+p7WibKXInimU5k*l-C%4=y2+*|p@CbR_N`Sc@S$(!lMM7Piiqh5}0E zaJQGa<$u+JO73ouXx9OA=FKV=kJMhSae*tobdE;fh@jIuN@4$%BFr~t{JQlyRG0U| zHBVZp>;Zt~*Y$CKKKpL|>7c`>?$N(*-9bh32rgT29G+iR;8MmVfb7>OlG4g%hCSXm zDJ2Hh=`+@$PaF>1vB!~gOW`Ja_i0DuAd{0NU;pj-B8qc z#r`fS5@s&_Oq0f2k-od+}A%6c|CA05MY_M2c~-(GlW{G1rSFsF7W*k@*m zKk^QNaK)?ub~akmhUJ6`+biQfnF5G@e3^L)W=mL>4k78=lhtpYja0F~0mc!qID@2$%;!U+<$dsuev|MZ_N_Gq4o{R*%-eU`2 zk0)X_Uk+1y4&beCv*G652JD;dMRPYyL|k4CQ-$klIAL2IKV!v0{F1?ZBrj5t zw#LA(7pa`CS_y8mV(#4}dDzE#wF6IC7Js@j+z$H2JKt%ePR=&yGwu+6HBrO8WwE&X z)g)|cG=}xPJ)Cb;8o%&UDfs#IwO!ej4fkReGN)rTj4}`Uny24sa<~NzRdXS7a{ zJ@>j&8nGhl0#(+J#g_SAu%l}(**CKWD&AF*J=1)!WP>R=93GAO4mse^c$55*uqNB} zeDQifJ`}pPfW&h}bU6?JCz7%-YyMcg9odXMucyGEZ$1jGJ8SM0wr1A3md zpr}M2+30HrCBn+s{-zQoRF!G^SH_jGE5V$xHPC*d0gqjdiZ*5BE}|;N(YIquxy3{RX1TPW$azPy#3_runW>KIuhY=@ zsVA9sXpnxJs)2KoJh(sU%oVv;8hS=o;N*vPSe__~#}8a@8+j`QO_o>b{lpD8eRK*+ zYwgA2s$t&7VFj+7W({I&+x+$$F+=U?u;>*2w!HapzF;j^5i6Q{%m@vlUsI__SlZW+r9yCeP$zhv|0no zre4&sdTL2(6c6A?PAR@TrC@3rg1M{Yxfw$9(B9e@-~SEA@DM}h zYTQk~D!%8g-Y$h@2QHD-`+YEPH_HH>i@;;o*MVOSfQxNi%Sz^J;k76No~iQQ-IUyN;%20&P^cKnvCh35%BnKIIaxaWM`|X4b>x z=nyn|`<0B}#_o6N<&gjN9+}~-4!IZ3(?s8JFj4ozmK{^kqCNxs3oG!#We;{9>w^08 zm1q@MLl;b%j=CR1Kx^3m9X+^$icgJ%Yl$C;o0>T$2ROj#MX~t%t1mt|YXt?y4sf|5 zfOSnaql*-48c9yZ0y-B=(=U@-KYYQ7o@Bk=HK@DzTWi=TW26R~(Mb>daaTn!{=Q_5 zW26UY>U~)d(h8*0LL||9!F0TvI0>7*6;Sn}ExfjWz}*^|g7?mKQ~i*Y@E~y!)I@nf zpyDY!roRT22bSZJx>Wf1^%9%e2Ei}3OI(=9ftJ#4;9?(B+tWw!<@h`J6lQ6I?fh!mDKt!9FG=SLgb%^^#I^W5?;&pV znFCrYMyOjvI4T5nf?sO}K6i|RK&d8n&K?2BUu`-HS9W5?+w~BBaSO`dJOEMF9UV{|j|k?ZM9CQxuL&83Q+eU&4oi zbC~~^E2yDKe7U@?I^{R^lK11P=-fDb#P|lQ<^FMog6uOiE5nGLXC4TXu3GNJL!LE{nj!W z_S;Ed!R)3~oKIO0U(X+{bpCxH(djOwr zPy@o{X>`@6GG21VYP`hQ;4>GU#dU{&(9YkNv1|s*KEzGrtKW9w>Ew;z|Lq=RmZ*~6 zu2MAD6o(ddVYpxRgjo9`RN9Yn>S2=5@$NQNxcG`U_!~*Cjjkn;JH~REYu9QYKFXu# zBT=NU_Ur6?)lU{UCF8-u<#=#G9Uge=MykA9VX?zu{N~N({0=hkz`6>5kF7w*t2MAn zD2!@(-^a~&-SBPIO{iX92JtGBX}9rAy!Ad9ij~Fq8i#Z?>$!(xW(-oj!(Ma(Wv)`T zCw5So!E`r8ir^!fY6$&ir#`AayJyd^i3ogZ{5IKGdZeX3HlDYwUTlgvY(S4j+ziy`E zhE>71{tA3vr9h;<7GmGM9vqyw8+*RXlN;$F7;~Z?g={y$lGiEV^2ZdZ&K9m@?^<#^ zsDeaQQvOtrCw!hfkBk#bBt_oL5u>z^W~^iEbvR0VSKFy^iT1{ zH&IPqB>y8WzT^Bgz0 zn_mjMZ!pHq=`x(ITa0oZp77dE4&N@iMf&cXg9Yzz;Ml)^m|rdys|NM)S=m{}XqZHe zuC>4}t0iz}sWmrsAQ+b@s?jAw?wDE^$FRLTEO~d1YDLFDzus=#vEwO~H?{!JIoIID zupB0A+d>z0i9o;3r6|9F6l4~BzKYExU^CB_9` zumLBXZNPe+cl7q$!zlUh2E5%I#61>YNSs_6!PCJ2R;)lMSzDpwGA$h>Wiz=YXQrWN z^K}U9d4ycVF8o?E7T<=gfWsEnJH-$=*DK_qy)?XaoyP7sA7rAv|? 
zLx)NknfcI&S6g!mH0Px7V{CdL;7ADEI9fvF_p$raObduwP>yBY-{|FGH~K{30TEH` zKraJ1j1%ahH~$+)y)zw)>q6D%oDim`(jrcZ}VQ~L}Aa;-O{t8!M(KkxtP8GFT z?buJ$KO6;-(;;m>V|QV_;sYG#bO+SFO#zRB1U&812lv-9AKd34oDi4~2Ajj-@d`m$ zI?@Qm-rI1s$rTb;T1u4<_2J;_D{Z38oAT&$94UMk1}6h<(!qH*!Ky2Y|6UYFzOWfw zUQaxhx;x{L!Fo)8as%fqo`%|M^+9~?Wr9jZWa_z@^sDE7&SF*t4hSKrE|mwNBqQuEV}lUVv2znFJ$DY|u8yZiXBVN6dkx;1!E$6j z6VU1VDGa%mgp)N7p}y{a^vb_5@X(IKC6!Cy#PAjT_@5Sil$!(1K6g-}xs0D2+zi5Z z254?i1bkH}0VlOa@;f;ipP#URdHW@(o!LFeuhTd=LY4T*0S z%y_g96nW-EuU|!M4tUc&yfe9LSHyCwV{xdw3JdM8Z~+5nsA}|jz~y(L)iMkhmMy3E zcPG)h=-V_VzW^S8xk47o7vqS=9JG@{JRdb1J9Kn$#<~cY5z>UO-<@Q<23L~1b`3}? z3$PR6$}gD zzlj0VZ9^@-79YUQw;k;B?*SL9Zor&hld*qpG%Sk?hKX8*j3v$Ckw8l21KM7TWX7pXo}1%i&JaktWWOecBx*R31>udXkFrs{kDHqW9$$QVj<#>74Q zxedzHfMjfvR7y!1(_jiEl9C2WAt6FZxo1BY8B!@yp)#d!gOo&bd2j#U@B6;*de?f- zU3=|kKX>o_?6vM$``rCGdw)JRi%&2~H`A@Y?bl)*BYc_N#gV*ALV1GyxQa~Nk220>^LYHgnd$ ziItjmkd0seg5wj;VOEBfFrPCY@^%$(Wn0)V_A&njb3e9@QNCNnR_&_ctQ9)VD2IwO zH_Y^zF#_F}y#k(>wqG@@ap5x7OSep*;n~as)wB?z<$sv?UyhX&i(<0stM%01VZ z!}BCLHbq73V5Jl5-1CSTqk4tia?YLO`L&1lv|%i(J3N^c@_Nj!a!%zaT#n@_J8or8 zx;|#F-i&26+E?+K)^rKmH7qOvgd`5YEC5Nl?kli&DZ)xom z!RBT3am2T5wX%G1m=XH_T*3msm>}tV938FD{f{joD7Pv8$F>moL;qXn-@3OyjE)Li z!hhTG{$oqn2y8)<|LN;*dm%w(LqQh)_!np2acj0|wlFb@dB>A{r$$@W%2=JV%4ge> zg*bt{M5Z%R?thQ(Z`;3dWd57~jbSPn!{6dxd&Ylk2@OH8f8+i23ki#e{&i%2OC>k0L@c$r@Fe?ytYvx^uKqLe2JQk&rfIbo<^R= zo{Ia>WvRnY+^q!(>gwRGvW&4RSOCsAJuORkJn0$ADbvC?al_jyqq$kqwt`$?{^_M+XMa-VB9DmxKgxD`f$%Gp(S%J7T(Fj}$ z5uJjwh}V)~xU8Q1%svBGA8dg=&(6_*{)Er~V?(;!`y?by@WDHNJFwNO3_j%fu<|~> z#6a8J4(_d(%VkRXY2@l+Fgd>;B7FS8 zw91pr{y2#n{hdn>Z|H-ThCH-W&R_zBJV^T{1yJ5V&~G`1bU%v(KeaFLYL*MEJn(~= z96l3|`E*nJ7ESmrw;yiKwMHo>8a$TA;Ew=1l72M;WwiyIG_gzZon|Rbx!H_nlI1Yb z=mH9LiEv$;8L}*4Gm(FFne6{ese)z?9df+~nd*C}rlbh7$@wNZ{pJkqVfK*7Q;HaX$tr@gvkU;u826M&TvH zOq^UPPA<=g$Dzb3T?svIt<9$P-uk4N*ek0W2SX(8rseqFc0lV>(S4wd~Uvp555e@U_aA~rasb1Al3aTq042TVI-i!#M`$(RRRqV~Rk zcGt|ubvjoW_wD!Lj#xJaCoaR!gI*}2JCR>ia|2#(Ga;8JKg5M}FV4AEf?wi2h?G!1 z88g=hI=UW_*J;9htMu>iJ6HmCZ+F3khc2S#S|h@bdWBU6vmwD(p6%J>hP8KA;e1)a zKG&UR@M^XXgcW7NqTqZwGt!1Eif+PLey?fTv-9}dLlkN@E`xJA!nD9G59D4-V~=7s zPD-2sK`D87eaZn`qrH!Mow-3wckF^elP&lnY&W?&^#N_S8h`+mHhSPeHPtxPLcNkV z;^!C>+!!ST$+{U|$Q zcCm`73KYYz$ELK$O9nP_hd4^ZC8ScJfJVd>Fn#&i_*EGO0eClx`A@0)P(h>p}S35)7 zTzjxL4#rxe=gfD_SLDx9&2Q?~#jqn`SgJ6~+ZZ zhv3{6medGo5T%Vt`0#TW8rMdoHZdAeY12MV5(g*#SB-kJU z*BuBT9!jstS@^=HP0s_jca_BPbu`qTR=`IQFX3d~3ashu2ex1)9r@`5g@SdW&-SaJ zo27_v&AVV{uqm8Qa)!HmCE%w2IIij8Hln}h2iQBApw-h3s;^v+-X)GC@U0f`s(;gI zGadkF8W2(v${Cf21M{Z~xkCXQ_-*P(17}5oL7pM8mGlCGhxO#;m2i|-ct=CsxK?V4 zS@g`h$0S&G28t#p(uvhkxW*3QZBrv^hqrN7@!rE2rDA52_yw5u^gQIX-hjH%6xO-y z6}anaLd^Jls*zPh*B;yg5BH42a*a~7>2_oUM}(NF_I+gjtn*YdB?QNG9fTndMQUI_ zgj!+A`0`0TSr|MWcfIGJ;gN}CW%eoTi|_?u&&7CWXdi7J^v4@}Khpd-XIxk$3ZfbN z@sDaU#u_NHSG?89ne{I4y?YIHJHf}6+mR@}cRT)b=Lt@n;Q&0YH7mJtEVtwK20U5d zfbYHYNQKEx=nX2tE7QfO-;Pi=@&my5fX7Ts)M^Z?_NLC01-i^)eEg~}NlY_>!76Mv z_$|(W84?fRjg&fp50~)tNooGbwN#W#I7O>G>tJ5mbkM33;^Og*sGwU;G?KEBO#6j? 
zx=B>}n+9C;3`fn6QvA6>rsyVDjK%yTxUKXg#!SeABMpT3*e{^_hD{-??GLd{J`dKs zL)dggKxu7hf)hv5;E<0LzHXO*ah2rjDJhQYJrioh+rB5+renKtN|0;!=&rQV_*2n0?~Lf$kpXi-SQ7Zt+%_D`CSU*XO5_Ex4#oeS|5yA@37E?nmHiwRm9 zfV%u9x>&=DN(|;Oeg;?Q#Z?jPz$G87H%SHaN>n^5a^MzjWd|M|m9=Sur4$VZvb`vzS>VoWi1+;vo2(B43 zVTS&B2rw=sUT$mfV#h7gXO@D0<{U=1J@xoTX)##q-6cA79c*hX#8h21rZ4dxs;v{D zZN0%9o9I0_(CdrmUg&{qh#ursZvz{qoQmj0;VA(> zA8$jINEvL(sVAIBQ}}Zy8#o{C)5jk_5TT34ym=FPXm_MMv?(JE5T69wz9PnXoh2Qa zx5*ZVg}jxO>cltMkZhSELG)C8;LYKcFh2J#dB5Zz>M*Sj9a9HLn#*1Ey!w=_Nsfh< zqoFkG+6#6{bvK>jGyyB*kC4+#BVpENNA`DxJl%GCA1slUfW&c$sB69reOunaosL=X zD99HRJ;tHq<_Vy;#2-v{o&&cWOERO#2$$QNz?T#)_@18vy`8sdR!cr!>#Al&Jq22z zN5}9FmxqA9n>>Y!QM9Um9G-}4B-*n#K*wi$9KZ1@R+($SnxI-dsg(+uJ>$^E{4zcs zx=w62=i&+X4|GpM18m42kDJ2Iz=A;$IA)rIM^z6JZP{ut%qv2T21&lW$$7m0-V3)2 z_WoORHlk*!9M^zD(BRd1Y;Q3DJ$V)U=^P0fE|+1_hG*!Myc`P1aq9oiMSMD)PtM-6 zMNcVFI{BR;F3){K&F=|wHTgp5d2cyBJz;|`GNY{cC6>YCA^1})n)oa$CJD6@xgE+I z=sMru9Olqkbj*|Hr#`(%6z%1}-+C-D&R>9Ma_3@zLo@iu?gWcvrO>xS0miIKhOyZI z1yjbLanBA2xwaM@Wd_iG{Z*9Osl?yrtqzuUV?U@+UaTCO;H}DRKaUTrc zh0&wiah+Tm%vD=Ye>Z-GlKTgd?sJDu4H51RAu|yDYK(TD=Hb(wH$kiYEOlpDGP_Bs<@$pJADl39&;to8vTeF z)sblbG!FM9gpuR-xDcdw96lK)fx^6f&>=d8yWVLNOlERnace2(XZ2^Ya*Myy$na^zWOvNH@r3nzrUEP2Uu?*#8aDLt6M9zB7x(TR3nM*;VG?wc(rwm|-O&x* z<`Z!A)js4t>csaBZn*FIpUSSwl3Bqby#@Jk6kLPEO*>EAkk z^W&rc2VY41k1erC;Qtq2_`mTCZe|SU`Ie<@wfaZqm#%=r(^-pd$m+Dz7x4W%?04Y| z*bi}5wdnp2p7H-;8FGR?|BgfQzvB=Th5yEo6H@vgpPV4#-#UUzLRIj^zw!T$@n3wS zPoT*%Vww({`YZ5>QxabK0eEx;Vt4B@Q)LHAKAOz=#nw^XN*1HIyO{F7GN^5-=9(HMqrcRit7(>bK2Xa)`Z>P}<3tI49y zd#K*=mPCdxMD^nm*sW)WtuNmb59d=P=Aj6>m>OWxo)&t!Iu@-)>)?ux3Kd^A51TrV zv7@iXas3q|iQj=3{9+eJ{HLFzeJOUBAXu;8+atxbQY@!(M*om-)BR}fBhKGtu>orO z%88quG&d#p2hG;_L#&tP;EX17@b_8(cJn;x!=IA;UaddiyZalm-tM@Oxk}B0r;~b% zW~yt{M5Mo*Bqv+UvBl5>@*~9=^Svs#b1Pv#XZDfZGZ4hxKhXhg7YeiG@qOVj?Dg9T z*S`ziN#CksBYA5{aeE=k6{^5wNnNXu{arY_KM1ECmZm%}C$JUo!Fbn1@@`=(A)h*+ zY;g-TG|Ynj-RYE`E5HGM{DO*>si^+SQUdo!(T-?Hr06T*Sl* z9i(+NIZawW?83&B54i41Da0>`hUf+(DpLB2O*r?F@}`wS%)`CZ zz4Rd1uV;Cu3?u2!*K)Ap(ghOcZwlh|N1%Mt1aKAHbJ@EN!@`aZ(iSU$8-fYs#E-`BXp#?)?w;O3T6UGOVa>IuyV?%+ zZc+i00x^tlD}t~z9qz^EdF*DRKIE@9MaPqK+3*52G+*S65wrKe`L0^DIvE0~r4dx( zD?w5vb&`Q=2JdcWhA4N>vG(5Uk_qm zxYEMKV{q|@3vA4zdNyNFhg^@i3ffI9ym-2t290+Ft>GBh+Z2LZ4>@D?p=3CukcstM zR};rUZxqo~HozP2`30dL|_AxwFRmJBV1#GU6=X(ZsQ>9T` zQ0=oKoFEM{;qgIYN{*qmPd!$<+JWQ!5R$c21^59I;PsF>PnKs5e|B6a-!yX3qjP{X z+qN-y+!2_852bUV;b<5AE-uY& z4K@aynl&iwFGRP#IZ4*8Fu)($570rY4)<+b2~gIpQ6K06IgDj#k!ix^54o_AOjENV7zJp zopWLl-nkS6Vh$RpceDcr*ruf_lcRvXIsu@1UzDG)+YdL(93mBh88oiv!nNWq^)&fTtAqzZ z=)_K5@%gt$_t3g3gdU6-#uK-5*-cJ2>GDNiVDBeC7OBa8l@idD`(+~pJ z=fYg=Z}McNsX*&jXD(x`JqMq?JVYi~HpAk3-ON>`WN>lKvl>un$H=YCsA7K_ukGx{ zZQU*8_j@0tk6w_~J^gH9(hg#DMv-eb{G5i4C=)knBW$}Rji2|XVp?%ER4%9R!hSQP zKH@{y8a)ikgv}FjM(JH-! 
z*E3AcU0#eA`jp^y#VIBv$%n4W&W6;d@#N*rIiylT8iRC0(BG*W3r3N)*6EN_10C45 zsfx}oEu`n}A4hHOe6+6K4nCP$_}2Fs*`q&|4!rV%j>ZPTT73bL%nU)hx=+MYeJQ&k zh=Qg?Aj*Bohjo_e^hDVc8mN{)6K4qK)1fr7e0>h-sHL#+?R=~=+=GGjRrE<{JQ>YQ zAt^2QV5Kd^{CII}SH3{oOJw+pO4=AUrW}jvX2QXybPU(dCK5-+qVO?yaw-BWti+;1^J72fOw5QL|LAN8bNGx z<6xfr@E=;--U#x0w!lK2N#Nxn#Mg;#Aom1&uhp+E#6`kFbtawe-a;HQXn>4y_k0A_w(m@m15Du>R{Q8hTrS=bFBfEZ7~5 z`!{LgqpfZ9-0JV7WcoGuD(8$Qt{3UrtWuEOWeXt@-r()OnaoblfK63PAiUxjs{W9G zf%7GdrcMoNE|cTWc;f~!E0%*_bs-FPq|%XFKj{veG;nrX4u3XZfNjra@waFlMiVP3 z@R@r7r8ADegBDwuvw#mbS8Ab~%q95lFoi(UBRom^c^MmASS`QLc=m`Ee{R4Gx^)dl zAPljX)L&1aZ^k?k{LfnkyUiv**e*WKKV?9MT!pFl#dxZlrA{hWYqJl_X5c*g!}vt0 z2}i5!$h4P}!Txz9b>ZBitryQy(G@xH=O;) zRzTe2OpL#9md=tzxFmCgaa>h^nv?U;=0FOpD4d6<2N%|T zE1WREG8#MEzfyT)F4-QhM{-xc-6+H7!TEei8oK zwr803CYPl72t$&NDLUn(;^#X@*|p;nNiM&Xl-<6@nca1YR4sTw&Dce>IG~N0O$W}R!!>dE8exWVsrN|KYg6!DaT)ZDi>Q8kGfHGF zVT12HBcaW6__vCUF#EVIB@yr8$NVfBJX65279s;@Y%h?6bw|l$x3O4vG@f^`V?Dl) zU4~L?ZUD~s3C8YL$f_-+MJw`%^=@@MI%rE{s^h@N-vs;ij~2l!#WJ2AVW!tYO> z#hH71CIsfEW8ac?5^^*jT*_L=u8T`B+=NT6mrTKeTX*4uLKEgimI`#Q>)0PQ=iu(2 zQ2PFA0sH-p4D1|p82F--FuFV$9|U!y<18cm`YQt#$@B5o=bd=iVuS|#O2cdGMd>E? zB1Si;g2u&{Xm)fguAOZH+cjmuBvKk1+VoJ?E&#O->}Q)E6VTo;EPch&i8E>u7UHweeoj)8``2S}HN9>|6UUpk(J9x) z$i7Ol-OdjV(wTIJZv*`Byny>w#?mbFToe&+p_La(ASUz^F}i;Ot}RKY!Do-+IthE| zb>B@pbOZ5Ky%VxU9jN4O4R+1_q%ZITb5DRFINN;2i4P56IL!-f{rg$>!M7wR@GuP- zp33KAJtJ08jjzu+VVw>aZvRjq6U80Tpf4JqbuNWhrA_QFnYVOrB9Epi#M0fjT8Lqx z8Ghfk1|MFY4&$9Rf>+~tJbA2))R1yKP;UyRWy-WM)ePz~%P37)gp$8PVT@ue5gk>A z87?JYJl7f)T#dn%`M(L{E?{IPYR@-FjGJ9r_w=r@t7Qd>0J-GuDMTKfC4 zA#M}S!*NG7QKZgHpzS`0G5LA0X+WuRuDReoZqUSm*Va_ao<)x-QM6~uPZBfa3zB2k z)9F$hX^*)o$aX#)(83s F{Vx_A8b<&C diff --git a/docs/source/using-executorch-android.md b/docs/source/using-executorch-android.md index 712d79af4aa..e097722b8e6 100644 --- a/docs/source/using-executorch-android.md +++ b/docs/source/using-executorch-android.md @@ -61,7 +61,7 @@ Starting from [v1.0.0](https://github.com/pytorch/executorch/releases/tag/v1.0.0 | AAR | SHASUMS | Backend | | ------- | --- | ------- | -| [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/1.0.0-xnnpack/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/1.0.0-xnnpack/executorch.aar.sha256sums) | [XNNPACK](backends-xnnpack.md) | +| [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/1.0.0-xnnpack/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/1.0.0-xnnpack/executorch.aar.sha256sums) | [XNNPACK](backends/xnnpack/xnnpack-overview.md) | | [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/1.0.0-qnn/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/1.0.0-qnn/executorch.aar.sha256sums) | [Qualcomm AI Engine](backends-qualcomm.md) | | [executorch.aar](https://ossci-android.s3.amazonaws.com/executorch/release/1.0.0-vulkan/executorch.aar) | [executorch.aar.sha256sums](https://ossci-android.s3.amazonaws.com/executorch/release/1.0.0-vulkan/executorch.aar.sha256sums) | [Vulkan](backends/vulkan/vulkan-overview.md) | diff --git a/docs/source/using-executorch-export.md b/docs/source/using-executorch-export.md index 140a703edc6..ae73cb5aeac 100644 --- a/docs/source/using-executorch-export.md +++ b/docs/source/using-executorch-export.md @@ -32,7 +32,7 @@ As part of the .pte file creation process, ExecuTorch identifies portions of the Commonly used hardware backends are listed below. 
diff --git a/docs/source/using-executorch-export.md b/docs/source/using-executorch-export.md
index 140a703edc6..ae73cb5aeac 100644
--- a/docs/source/using-executorch-export.md
+++ b/docs/source/using-executorch-export.md
@@ -32,7 +32,7 @@ As part of the .pte file creation process, ExecuTorch identifies portions of the
 
 Commonly used hardware backends are listed below. For mobile, consider using XNNPACK for Android and XNNPACK or Core ML for iOS. To create a .pte file for a specific backend, pass the appropriate partitioner class to `to_edge_transform_and_lower`. See the appropriate backend documentation and the [Export and Lowering](#export-and-lowering) section below for more information.
 
-- [XNNPACK (CPU)](backends-xnnpack.md)
+- [XNNPACK (CPU)](backends/xnnpack/xnnpack-overview.md)
 - [Core ML (iOS)](backends/coreml/coreml-overview.md)
 - [Metal Performance Shaders (iOS GPU)](backends/mps/mps-overview.md)
 - [Vulkan (Android GPU)](backends/vulkan/vulkan-overview.md)

From 571f925a45c495349038e145b7ba64b7e222099a Mon Sep 17 00:00:00 2001
From: Siddartha Pothapragada
Date: Tue, 21 Oct 2025 17:49:01 -0700
Subject: [PATCH 24/26] Remove extra demo line from success-stories page (#15337)

### Summary

Removes the placeholder "Demo title" entry from the success stories page.

---
 docs/source/success-stories.md | 2 --
 1 file changed, 2 deletions(-)

diff --git a/docs/source/success-stories.md b/docs/source/success-stories.md
index bcf922eb0b4..cddfaa6c5a6 100644
--- a/docs/source/success-stories.md
+++ b/docs/source/success-stories.md
@@ -123,6 +123,4 @@ Optimize LLM fine-tuning with faster training and reduced VRAM usage, then deplo
 
 - **OpenVINO from Intel** - Deploy [Yolo12](https://github.com/pytorch/executorch/tree/main/examples/models/yolo12), [Llama](https://github.com/pytorch/executorch/tree/main/examples/openvino/llama), and [Stable Diffusion](https://github.com/pytorch/executorch/tree/main/examples/openvino/stable_diffusion) on [OpenVINO from Intel](https://www.intel.com/content/www/us/en/developer/articles/community/optimizing-executorch-on-ai-pcs.html).
 
-- **Demo title** - Brief description of the demo [Try →](#)
-
 *Want to showcase your demo?
[Submit here →](https://github.com/pytorch/executorch/issues)* From 5301a32c32b9529fdd4f936752efbcfda477f101 Mon Sep 17 00:00:00 2001 From: JP <46308822+zonglinpeng@users.noreply.github.com> Date: Mon, 20 Oct 2025 23:39:04 -0700 Subject: [PATCH 25/26] update backend cadence md for branch cut (#15277) Summary: ~ Differential Revision: D85064213 --- backends/cadence/build_cadence_fusionG3.sh | 2 +- backends/cadence/build_cadence_hifi4.sh | 2 +- .../source/archive/backends-cadence-legacy.md | 238 ++++++++++++++++++ docs/source/backends-cadence.md | 199 ++++++++++++--- 4 files changed, 401 insertions(+), 40 deletions(-) create mode 100644 docs/source/archive/backends-cadence-legacy.md diff --git a/backends/cadence/build_cadence_fusionG3.sh b/backends/cadence/build_cadence_fusionG3.sh index 93295bc9aa5..ec973401af9 100644 --- a/backends/cadence/build_cadence_fusionG3.sh +++ b/backends/cadence/build_cadence_fusionG3.sh @@ -9,7 +9,7 @@ set -euo pipefail unset CMAKE_PREFIX_PATH unset XTENSA_CORE -export XTENSA_CORE=FCV_FG3GP +export XTENSA_CORE=VANILLA_G3 git submodule sync git submodule update --init ./backends/cadence/install_requirements.sh diff --git a/backends/cadence/build_cadence_hifi4.sh b/backends/cadence/build_cadence_hifi4.sh index 33078b7ff2f..d6c2f3be6d8 100644 --- a/backends/cadence/build_cadence_hifi4.sh +++ b/backends/cadence/build_cadence_hifi4.sh @@ -9,7 +9,7 @@ set -euo pipefail unset CMAKE_PREFIX_PATH unset XTENSA_CORE -export XTENSA_CORE=nxp_rt600_RI23_11_newlib +export XTENSA_CORE=VANILLA_HIFI git submodule sync git submodule update --init ./backends/cadence/install_requirements.sh diff --git a/docs/source/archive/backends-cadence-legacy.md b/docs/source/archive/backends-cadence-legacy.md new file mode 100644 index 00000000000..21f60477c63 --- /dev/null +++ b/docs/source/archive/backends-cadence-legacy.md @@ -0,0 +1,238 @@ +# Cadence Xtensa Backend (Legacy / Outdated) + +```{warning} +**⚠️ THIS DOCUMENTATION IS OUTDATED AND NO LONGER MAINTAINED** + +**For current Cadence backend documentation and support:** +- Please refer to the up-to-date documentation in [backends-cadence.md](../backends-cadence.md) +``` + +--- +# Cadence Xtensa Backend + + +In this tutorial we will walk you through the process of getting setup to build ExecuTorch for an Xtensa HiFi4 DSP and running a simple model on it. + +[Cadence](https://www.cadence.com/en_US/home.html) is both a hardware and software vendor, providing solutions for many computational workloads, including to run on power-limited embedded devices. The [Xtensa HiFi4 DSP](https://www.cadence.com/en_US/home/tools/ip/tensilica-ip/hifi-dsps/hifi-4.html) is a Digital Signal Processor (DSP) that is optimized for running audio based neural networks such as wake word detection, Automatic Speech Recognition (ASR), etc. + +In addition to the chip, the HiFi4 Neural Network Library ([nnlib](https://github.com/foss-xtensa/nnlib-hifi4)) offers an optimized set of library functions commonly used in NN processing that we utilize in this example to demonstrate how common operations can be accelerated. + +On top of being able to run on the Xtensa HiFi4 DSP, another goal of this tutorial is to demonstrate how portable ExecuTorch is and its ability to run on a low-power embedded device such as the Xtensa HiFi4 DSP. This workflow does not require any delegates, it uses custom operators and compiler passes to enhance the model and make it more suitable to running on Xtensa HiFi4 DSPs. 
A custom [quantizer](https://pytorch.org/tutorials/prototype/quantization_in_pytorch_2_0_export_tutorial.html) is used to represent activations and weights as `uint8` instead of `float`, and call appropriate operators. Finally, custom kernels optimized with Xtensa intrinsics provide runtime acceleration. + +::::{grid} 2 +:::{grid-item-card} What you will learn in this tutorial: +:class-card: card-prerequisites +* In this tutorial you will learn how to export a quantized model with a linear operation targeted for the Xtensa HiFi4 DSP. +* You will also learn how to compile and deploy the ExecuTorch runtime with the kernels required for running the quantized model generated in the previous step on the Xtensa HiFi4 DSP. +::: +:::{grid-item-card} Tutorials we recommend you complete before this: +:class-card: card-prerequisites +* [Introduction to ExecuTorch](intro-how-it-works.md) +* [Getting Started](getting-started.md) +* [Building ExecuTorch with CMake](using-executorch-building-from-source.md) +::: +:::: + +```{note} +The linux part of this tutorial has been designed and tested on Ubuntu 22.04 LTS, and requires glibc 2.34. Workarounds are available for other distributions, but will not be covered in this tutorial. +``` + +## Prerequisites (Hardware and Software) + +In order to be able to succesfully build and run ExecuTorch on a Xtensa HiFi4 DSP you'll need the following hardware and software components. + +### Hardware + - [i.MX RT600 Evaluation Kit](https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/i-mx-rt600-evaluation-kit:MIMXRT685-EVK) + +### Software + - x86-64 Linux system (For compiling the DSP binaries) + - [MCUXpresso IDE](https://www.nxp.com/design/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-integrated-development-environment-ide:MCUXpresso-IDE) + - This IDE is supported on multiple platforms including MacOS. You can use it on any of the supported platforms as you'll only be using this to flash the board with the DSP images that you'll be building later on in this tutorial. +- [J-Link](https://www.segger.com/downloads/jlink/) + - Needed to flash the board with the firmware images. You can install this on the same platform that you installed the MCUXpresso IDE on. + - Note: depending on the version of the NXP board, another probe than JLink might be installed. In any case, flashing is done using the MCUXpresso IDE in a similar way. + - [MCUXpresso SDK](https://mcuxpresso.nxp.com/en/select?device=EVK-MIMXRT685) + - Download this SDK to your Linux machine, extract it and take a note of the path where you store it. You'll need this later. +- [Xtensa compiler](https://tensilicatools.com/platform/i-mx-rt600/) + - Download this to your Linux machine. This is needed to build ExecuTorch for the HiFi4 DSP. +- For cases with optimized kernels, the [nnlib repo](https://github.com/foss-xtensa/nnlib-hifi4). + +## Setting up Developer Environment + +Step 1. In order to be able to successfully install all the software components specified above users will need to go through the NXP tutorial linked below. Although the tutorial itself walks through a Windows setup, most of the steps translate over to a Linux installation too. 
+ +[NXP tutorial on setting up the board and dev environment](https://www.nxp.com/document/guide/getting-started-with-i-mx-rt600-evaluation-kit:GS-MIMXRT685-EVK?section=plug-it-in) + +```{note} +Before proceeding forward to the next section users should be able to succesfullly flash the **dsp_mu_polling_cm33** sample application from the tutorial above and notice output on the UART console indicating that the Cortex-M33 and HiFi4 DSP are talking to each other. +``` + +Step 2. Make sure you have completed the ExecuTorch setup tutorials linked to at the top of this page. + +## Working Tree Description + +The working tree is: + +``` +executorch +├── backends +│ └── cadence +│ ├── aot +│ ├── ops_registration +│ ├── tests +│ ├── utils +│ ├── hifi +│ │ ├── kernels +│ │ ├── operators +│ │ └── third-party +│ │ └── hifi4-nnlib +│ └── [other cadence DSP families] +│ ├── kernels +│ ├── operators +│ └── third-party +│ └── [any required lib] +└── examples + └── cadence + ├── models + └── operators +``` + +***AoT (Ahead-of-Time) Components***: + +The AoT folder contains all of the python scripts and functions needed to export the model to an ExecuTorch `.pte` file. In our case, [export_example.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/export_example.py) is an API that takes a model (nn.Module) and representative inputs and runs it through the quantizer (from [quantizer.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/quantizer/quantizer.py)). Then a few compiler passes, also defined in [quantizer.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/quantizer/quantizer.py), will replace operators with custom ones that are supported and optimized on the chip. Any operator needed to compute things should be defined in [ops_registrations.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/ops_registrations.py) and have corresponding implemetations in the other folders. + +***Operators***: + +The operators folder contains two kinds of operators: existing operators from the [ExecuTorch portable library](https://github.com/pytorch/executorch/tree/main/kernels/portable/cpu) and new operators that define custom computations. The former is simply dispatching the operator to the relevant ExecuTorch implementation, while the latter acts as an interface, setting up everything needed for the custom kernels to compute the outputs. + +***Kernels***: + +The kernels folder contains the optimized kernels that will run on the HiFi4 chip. They use Xtensa intrinsics to deliver high performance at low-power. + +## Build + +In this step, you will generate the ExecuTorch program from different models. You'll then use this Program (the `.pte` file) during the runtime build step to bake this Program into the DSP image. + +***Simple Model***: + +The first, simple model is meant to test that all components of this tutorial are working properly, and simply does an add operation. The generated file is called `add.pte`. + +```bash +cd executorch +python3 -m examples.portable.scripts.export --model_name="add" +``` + +***Quantized Operators***: + +The other, more complex model are custom operators, including: + - a quantized [linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/cadence/operators/test_quantized_linear_op.py#L30). Linear is the backbone of most Automatic Speech Recognition (ASR) models. 
+ - a quantized [conv1d](https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/cadence/operators/test_quantized_conv1d_op.py#L40). Convolutions are important in wake word and many denoising models. + +In both cases the generated file is called `CadenceDemoModel.pte`. + +```bash +cd executorch +python3 -m examples.cadence.operators.quantized__op +``` + +***Small Model: RNNT predictor***: + +The torchaudio [RNNT-emformer](https://pytorch.org/audio/stable/tutorials/online_asr_tutorial.html) model is an Automatic Speech Recognition (ASR) model, comprised of three different submodels: an encoder, a predictor and a joiner. +The [predictor](https://github.com/pytorch/executorch/blob/main/examples/cadence/models/rnnt_predictor.py) is a sequence of basic ops (embedding, ReLU, linear, layer norm) and can be exported using: + +```bash +cd executorch +python3 -m examples.cadence.models.rnnt_predictor +``` + +The generated file is called `CadenceDemoModel.pte`. + +### Runtime + +**Building the DSP firmware image** +In this step, you'll be building the DSP firmware image that consists of the sample ExecuTorch runner along with the Program generated from the previous step. This image when loaded onto the DSP will run through the model that this Program consists of. + +***Step 1***. Configure the environment variables needed to point to the Xtensa toolchain that you have installed in the previous step. The three environment variables that need to be set include: +```bash +# Directory in which the Xtensa toolchain was installed +export XTENSA_TOOLCHAIN=/home/user_name/cadence/XtDevTools/install/tools +# The version of the toolchain that was installed. This is essentially the name of the directory +# that is present in the XTENSA_TOOLCHAIN directory from above. +export TOOLCHAIN_VER=RI-2021.8-linux +# The Xtensa core that you're targeting. +export XTENSA_CORE=nxp_rt600_RI2021_8_newlib +``` + +***Step 2***. Clone the [nnlib repo](https://github.com/foss-xtensa/nnlib-hifi4), which contains optimized kernels and primitives for HiFi4 DSPs, with `git clone git@github.com:foss-xtensa/nnlib-hifi4.git`. + +***Step 3***. Run the CMake build. +In order to run the CMake build, you need the path to the following: +- The Program generated in the previous step +- Path to the NXP SDK root. This should have been installed already in the [Setting up Developer Environment](#setting-up-developer-environment) section. This is the directory that contains the folders such as boards, components, devices, and other. + +```bash +cd executorch +./install_executorch.sh --clean +mkdir cmake-out +# prebuild and install executorch library +cmake -DCMAKE_TOOLCHAIN_FILE=/backends/cadence/cadence.cmake \ + -DCMAKE_INSTALL_PREFIX=cmake-out \ + -DCMAKE_BUILD_TYPE=Debug \ + -DPYTHON_EXECUTABLE=python3 \ + -DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=ON \ + -DEXECUTORCH_BUILD_EXECUTOR_RUNNER=OFF \ + -DEXECUTORCH_BUILD_PTHREADPOOL=OFF \ + -DEXECUTORCH_BUILD_CPUINFO=OFF \ + -Bcmake-out . 
+ +cmake --build cmake-out -j --target install --config Debug +# build cadence runner +cmake -DCMAKE_BUILD_TYPE=Debug \ + -DCMAKE_TOOLCHAIN_FILE=/examples/backends/cadence.cmake \ + -DCMAKE_PREFIX_PATH=/cmake-out \ + -DMODEL_PATH= \ + -DNXP_SDK_ROOT_DIR= \ + -DNN_LIB_BASE_DIR= \ + -Bcmake-out/examples/cadence \ + examples/cadence + +cmake --build cmake-out/examples/cadence -j8 -t cadence_executorch_example +``` + +After having succesfully run the above step you should see two binary files in their CMake output directory. +```bash +> ls cmake-xt/*.bin +cmake-xt/dsp_data_release.bin cmake-xt/dsp_text_release.bin +``` + +## Deploying and Running on Device + +***Step 1***. You now take the DSP binary images generated from the previous step and copy them over into your NXP workspace created in the [Setting up Developer Environment](#setting-up-developer-environment) section. Copy the DSP images into the `dsp_binary` section highlighted in the image below. + +![MCUXpresso IDE](../_static/img/dsp_binary.png) + +```{note} +As long as binaries have been built using the Xtensa toolchain on Linux, flashing the board and running on the chip can be done only with the MCUXpresso IDE, which is available on all platforms (Linux, MacOS, Windows). +``` + +***Step 2***. Clean your work space + +***Step 3***. Click **Debug your Project** which will flash the board with your binaries. + +On the UART console connected to your board (at a default baud rate of 115200), you should see an output similar to this: + +```bash +> screen /dev/tty.usbmodem0007288234991 115200 +Executed model +Model executed successfully. +First 20 elements of output 0 +0.165528 0.331055 ... +``` + +## Conclusion and Future Work + +In this tutorial, you have learned how to export a quantized operation, build the ExecuTorch runtime and run this model on the Xtensa HiFi4 DSP chip. + +The (quantized linear) model in this tutorial is a typical operation appearing in ASR models, and can be extended to a complete ASR model by creating the model as a new test and adding the needed operators/kernels to [operators](https://github.com/pytorch/executorch/blob/main/backends/cadence/hifi/operators) and [kernels](https://github.com/pytorch/executorch/blob/main/backends/cadence/hifi/kernels). + +Other models can be created following the same structure, always assuming that operators and kernels are available. diff --git a/docs/source/backends-cadence.md b/docs/source/backends-cadence.md index 9f15656d39c..667e71ea5a4 100644 --- a/docs/source/backends-cadence.md +++ b/docs/source/backends-cadence.md @@ -1,9 +1,12 @@ # Cadence Xtensa Backend -In this tutorial we will walk you through the process of getting setup to build ExecuTorch for an Xtensa HiFi4 DSP and running a simple model on it. +In this tutorial we will walk you through the process of getting setup to build ExecuTorch for Cadence Xtensa DSPs and running models on them. -[Cadence](https://www.cadence.com/en_US/home.html) is both a hardware and software vendor, providing solutions for many computational workloads, including to run on power-limited embedded devices. The [Xtensa HiFi4 DSP](https://www.cadence.com/en_US/home/tools/ip/tensilica-ip/hifi-dsps/hifi-4.html) is a Digital Signal Processor (DSP) that is optimized for running audio based neural networks such as wake word detection, Automatic Speech Recognition (ASR), etc. 
+[Cadence](https://www.cadence.com/en_US/home.html) is both a hardware and software vendor, providing solutions for many computational workloads, including to run on power-limited embedded devices. The Cadence backend supports multiple DSP families optimized for different workloads: +- **HiFi Audio DSPs** (HiFi4/HiFi5): Optimized for audio processing, speech recognition, and wake word detection +- **Fusion G3 DSPs**: General-purpose AI acceleration +- **Vision P-Series DSPs**: Specialized for computer vision and CNN workloads In addition to the chip, the HiFi4 Neural Network Library ([nnlib](https://github.com/foss-xtensa/nnlib-hifi4)) offers an optimized set of library functions commonly used in NN processing that we utilize in this example to demonstrate how common operations can be accelerated. @@ -67,42 +70,99 @@ The working tree is: executorch ├── backends │ └── cadence -│ ├── aot -│ ├── ops_registration -│ ├── tests -│ ├── utils -│ ├── hifi +│ ├── aot # Ahead-of-Time compilation tools +│ │ ├── compiler.py # Main compilation API +│ │ ├── export_example.py # Export workflow example +│ │ ├── quantizer/ # Quantization infrastructure +│ │ │ ├── quantizer.py # Multiple quantizer implementations +│ │ │ ├── patterns.py # Quantization patterns +│ │ │ └── fusion_pass.py # Op fusion pass +│ │ ├── passes.py # Graph optimization passes +│ │ ├── functions.yaml # Generic operator definitions +│ │ ├── functions_hifi.yaml # HiFi-specific definitions +│ │ ├── functions_fusion_g3.yaml # Fusion G3 definitions +│ │ └── functions_vision.yaml # Vision-specific definitions +│ ├── runtime/ # Runtime execution infrastructure +│ ├── utils/ # Build utilities (FACTO, header gen) +│ ├── hifi/ # HiFi Audio DSP family (70+ ops) +│ │ ├── kernels # Optimized HiFi4/HiFi5 kernels +│ │ ├── operators # HiFi operator implementations +│ │ └── third-party +│ │ └── nnlib # Cadence NNLIB integration +│ ├── fusion_g3/ # Fusion G3 DSP family (25+ ops) │ │ ├── kernels │ │ ├── operators │ │ └── third-party -│ │ └── hifi4-nnlib -│ └── [other cadence DSP families] -│ ├── kernels -│ ├── operators -│ └── third-party -│ └── [any required lib] +│ │ └── nnlib +│ ├── vision/ # Vision P-Series DSP family (17+ ops) +│ │ ├── kernels +│ │ ├── operators +│ │ └── third-party # Vision-specific library +│ └── generic/ # Generic fallback implementations (15+ ops) +│ └── operators └── examples └── cadence - ├── models - └── operators + ├── models # 9 example models + │ ├── rnnt_encoder.py # ASR encoder (ConvEmformer) + │ ├── rnnt_predictor.py # ASR predictor + │ ├── rnnt_joiner.py # ASR joiner + │ ├── wav2vec2.py # Self-supervised speech + │ ├── mobilenet_v2.py # Image classification + │ ├── resnet18.py # Image classification + │ ├── resnet50.py # Image classification + │ ├── vision_transformer.py # ViT + │ └── babyllama.py # Small LLM + └── operators # Operator test examples + ├── test_add_op.py # Add operation tests + ├── test_quantized_linear_op.py + ├── test_quantized_conv1d_op.py + ├── test_requantize_op.py + └── test_g3_ops.py # FACTO-based G3 tests ``` ***AoT (Ahead-of-Time) Components***: -The AoT folder contains all of the python scripts and functions needed to export the model to an ExecuTorch `.pte` file. In our case, [export_example.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/export_example.py) is an API that takes a model (nn.Module) and representative inputs and runs it through the quantizer (from [quantizer.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/quantizer/quantizer.py)). 
Then a few compiler passes, also defined in [quantizer.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/quantizer/quantizer.py), will replace operators with custom ones that are supported and optimized on the chip. Any operator needed to compute things should be defined in [ops_registrations.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/ops_registrations.py) and have corresponding implemetations in the other folders. +The AoT folder contains all of the python scripts and functions needed to export the model to an ExecuTorch `.pte` file. The main components include: + +- **Compiler API** ([compiler.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/compiler.py)): High-level APIs for model compilation including `trace()`, `quantize_pt2()`, `export_to_edge()`, and `export_to_cadence()`. + +- **Quantizer** ([quantizer/quantizer.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/quantizer/quantizer.py)): Multiple quantization strategies: + - `CadenceDefaultQuantizer`: Standard A8W8 (8-bit asymmetric activations, 8-bit weights) + - `CadenceWithLayerNormQuantizer`: Adds layer normalization support + - `CadenceWakeWordQuantizer`: Optimized for audio wake word models + - `CadenceW8A32MixedQuantizer`: Experimental mixed precision (8-bit weights, 32-bit activations) + - `CadenceWithSoftmaxQuantizer`: Includes A16 (16-bit activation) softmax + +- **Compiler Passes** ([passes.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/passes.py)): Graph optimization passes including operator fusion, replacement, simplification, and reordering. + +- **Operator Registrations** ([ops_registrations.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/ops_registrations.py)): Registers 100+ custom Cadence operators with meta kernels for shape inference. Supports quantized operations for conv1d/2d, linear, matmul, layer norm, and more. + +- **Export Example** ([export_example.py](https://github.com/pytorch/executorch/blob/main/backends/cadence/aot/export_example.py)): Reference implementation demonstrating the complete export workflow from model to `.pte` file. + +***DSP Family-Specific Implementations***: -***Operators***: +Each DSP family has its own optimized operator and kernel implementations: -The operators folder contains two kinds of operators: existing operators from the [ExecuTorch portable library](https://github.com/pytorch/executorch/tree/main/kernels/portable/cpu) and new operators that define custom computations. The former is simply dispatching the operator to the relevant ExecuTorch implementation, while the latter acts as an interface, setting up everything needed for the custom kernels to compute the outputs. +- **HiFi**: Extensive support for quantized convolutions (1D/2D, depthwise, dilated), linear, matmul, layer norm, ReLU, add, and more. Uses Cadence NNLIB for optimized primitives. + +- **Fusion G3**: General-purpose operations including arithmetic (add, sub, mul, div), activations (sigmoid, tanh, softmax), layer normalization, and tensor manipulation. + +- **Vision**: Vision-focused operations including quantized conv, linear, matmul, im2row transformation, and softmax with custom vision library. + +- **Generic**: Reference implementations used as fallback when DSP-specific optimizations aren't available. ***Kernels***: -The kernels folder contains the optimized kernels that will run on the HiFi4 chip. 
They use Xtensa intrinsics to deliver high performance at low-power. +The kernels folders contain optimized implementations that use Xtensa intrinsics to deliver high performance at low power. Each DSP family has its own kernel implementations tuned for the specific architecture characteristics. ## Build In this step, you will generate the ExecuTorch program from different models. You'll then use this Program (the `.pte` file) during the runtime build step to bake this Program into the DSP image. +### Model Export Examples + +The Cadence backend provides multiple example models covering different use cases: + ***Simple Model***: The first, simple model is meant to test that all components of this tutorial are working properly, and simply does an add operation. The generated file is called `add.pte`. @@ -114,28 +174,79 @@ python3 -m examples.portable.scripts.export --model_name="add" ***Quantized Operators***: -The other, more complex model are custom operators, including: - - a quantized [linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/cadence/operators/test_quantized_linear_op.py#L30). Linear is the backbone of most Automatic Speech Recognition (ASR) models. - - a quantized [conv1d](https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html) operation. The model is defined [here](https://github.com/pytorch/executorch/blob/main/examples/cadence/operators/test_quantized_conv1d_op.py#L40). Convolutions are important in wake word and many denoising models. +Test individual quantized operations: -In both cases the generated file is called `CadenceDemoModel.pte`. +- **Quantized Linear**: [Linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) operation (32→16 features). Linear is the backbone of most ASR models. + ```bash + python3 -m examples.cadence.operators.test_quantized_linear_op + ``` -```bash -cd executorch -python3 -m examples.cadence.operators.quantized__op -``` +- **Quantized Conv1D**: [Conv1d](https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html) operation (8→16 channels). Important for wake word and denoising models. + ```bash + python3 -m examples.cadence.operators.test_quantized_conv1d_op + ``` -***Small Model: RNNT predictor***: +- **Requantize Operation**: Tests dtype conversion between different quantized types. + ```bash + python3 -m examples.cadence.operators.test_requantize_op + ``` -The torchaudio [RNNT-emformer](https://pytorch.org/audio/stable/tutorials/online_asr_tutorial.html) model is an Automatic Speech Recognition (ASR) model, comprised of three different submodels: an encoder, a predictor and a joiner. -The [predictor](https://github.com/pytorch/executorch/blob/main/examples/cadence/models/rnnt_predictor.py) is a sequence of basic ops (embedding, ReLU, linear, layer norm) and can be exported using: +In all cases the generated file is called `CadenceDemoModel.pte`. 
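
Under the hood, these example scripts drive the AoT components described earlier (`quantize_pt2`, `export_to_cadence`, and the quantizers). The sketch below illustrates that flow for a small linear model; the helper signatures and the `to_executorch()` step are assumptions based on the component descriptions above, so treat it as a guide and defer to `backends/cadence/aot/export_example.py` for the authoritative version.

```python
import torch

# Helper names come from the AoT component list above; exact signatures are assumed.
from executorch.backends.cadence.aot.compiler import export_to_cadence, quantize_pt2
from executorch.backends.cadence.aot.quantizer.quantizer import CadenceDefaultQuantizer


class TinyLinear(torch.nn.Module):
    """Stand-in model mirroring the quantized linear example (32 -> 16 features)."""

    def __init__(self) -> None:
        super().__init__()
        self.linear = torch.nn.Linear(32, 16)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)


model = TinyLinear().eval()
example_inputs = (torch.randn(1, 32),)

# Quantize with the default A8W8 quantizer, then lower through the Cadence passes.
quantized = quantize_pt2(model, example_inputs, quantizer=CadenceDefaultQuantizer())
cadence_program = export_to_cadence(quantized, example_inputs)

# Assumes export_to_cadence returns an Edge program manager exposing to_executorch().
executorch_program = cadence_program.to_executorch()
with open("CadenceDemoModel.pte", "wb") as f:
    f.write(executorch_program.buffer)
```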
-```bash -cd executorch -python3 -m examples.cadence.models.rnnt_predictor -``` +***Speech/Audio Models***: + +The torchaudio [RNNT-emformer](https://pytorch.org/audio/stable/tutorials/online_asr_tutorial.html) model is an Automatic Speech Recognition (ASR) model, comprised of three different submodels: + +- **RNNT Predictor**: Sequence of basic ops (embedding, ReLU, linear, layer norm) + ```bash + python3 -m examples.cadence.models.rnnt_predictor + ``` + +- **RNNT Encoder**: ConvEmformer-based encoder with time reduction and transformer layers + ```bash + python3 -m examples.cadence.models.rnnt_encoder + ``` + +- **RNNT Joiner**: Joint network combining encoder and predictor outputs + ```bash + python3 -m examples.cadence.models.rnnt_joiner + ``` + +- **Wav2Vec 2.0**: Self-supervised speech representation model + ```bash + python3 -m examples.cadence.models.wav2vec2 + ``` + +***Computer Vision Models***: + +- **MobileNet V2**: Efficient image classification + ```bash + python3 -m examples.cadence.models.mobilenet_v2 + ``` -The generated file is called `CadenceDemoModel.pte`. +- **ResNet-18**: Image classification + ```bash + python3 -m examples.cadence.models.resnet18 + ``` + +- **ResNet-50**: Deeper image classification + ```bash + python3 -m examples.cadence.models.resnet50 + ``` + +- **Vision Transformer (ViT)**: Transformer-based vision model + ```bash + python3 -m examples.cadence.models.vision_transformer + ``` + +***Language Model***: + +- **Baby LLaMA**: Small LLM for testing transformer operations on DSP + ```bash + python3 -m examples.cadence.models.babyllama + ``` + +All model exports generate `CadenceDemoModel.pte` files ready for deployment. ### Runtime @@ -148,9 +259,21 @@ In this step, you'll be building the DSP firmware image that consists of the sam export XTENSA_TOOLCHAIN=/home/user_name/cadence/XtDevTools/install/tools # The version of the toolchain that was installed. This is essentially the name of the directory # that is present in the XTENSA_TOOLCHAIN directory from above. -export TOOLCHAIN_VER=RI-2021.8-linux +export TOOLCHAIN_VER=RI-2023.11-linux # The Xtensa core that you're targeting. -export XTENSA_CORE=nxp_rt600_RI2021_8_newlib +# For HiFi4 (NXP RT600): +export XTENSA_CORE=VANILLA_HIFI +# For Fusion G3: +# export XTENSA_CORE=VANILLA_G3 +# For Vision P6: +# export XTENSA_CORE=VANILLA_VISION +``` + +```{note} +The Cadence backend supports multiple DSP families: +- **HiFi Audio DSPs** (HiFi4/HiFi5): Core `VANILLA_HIFI`, enable with `-DEXECUTORCH_NNLIB_OPT=ON` +- **Fusion G3 DSPs**: Core `VANILLA_G3`, enable with `-DEXECUTORCH_FUSION_G3_OPT=ON` +- **Vision P-Series DSPs**: Core `VANILLA_VISION`, enable with `-DEXECUTORCH_VISION_OPT=ON` ``` ***Step 2***. Clone the [nnlib repo](https://github.com/foss-xtensa/nnlib-hifi4), which contains optimized kernels and primitives for HiFi4 DSPs, with `git clone git@github.com:foss-xtensa/nnlib-hifi4.git`. @@ -199,7 +322,7 @@ cmake-xt/dsp_data_release.bin cmake-xt/dsp_text_release.bin ***Step 1***. You now take the DSP binary images generated from the previous step and copy them over into your NXP workspace created in the [Setting up Developer Environment](#setting-up-developer-environment) section. Copy the DSP images into the `dsp_binary` section highlighted in the image below. -MCUXpresso IDE
+![MCUXpresso IDE](_static/img/dsp_binary.png) ```{note} As long as binaries have been built using the Xtensa toolchain on Linux, flashing the board and running on the chip can be done only with the MCUXpresso IDE, which is available on all platforms (Linux, MacOS, Windows). From 89b5071c1aa6343cd5a8f2119cb9697eb5af399b Mon Sep 17 00:00:00 2001 From: Gregory Comer Date: Tue, 21 Oct 2025 18:12:05 -0600 Subject: [PATCH 26/26] Minor doc fixes (#15336) ### Summary Fix heading level for build verification, cleanup wording on Windows reqs, update android presets since they were split by arch. --- docs/source/getting-started.md | 2 +- docs/source/using-executorch-building-from-source.md | 5 +++-- 2 files changed, 4 insertions(+), 3 deletions(-) diff --git a/docs/source/getting-started.md b/docs/source/getting-started.md index c095c079560..845db806e02 100644 --- a/docs/source/getting-started.md +++ b/docs/source/getting-started.md @@ -12,7 +12,7 @@ The following are required to install the ExecuTorch host libraries, needed to e - g++ version 7 or higher, clang++ version 5 or higher, or another C++17-compatible toolchain. - Linux (x86_64 or ARM64), macOS (ARM64), or Windows (x86_64). - Intel-based macOS systems require building PyTorch from source (see [Building From Source](using-executorch-building-from-source.md) for instructions). -- On Windows, Visual Studio 2022 or later. Clang build tools are needed to build from source. +- On Windows, Visual Studio 2022 or later. ## Installation To use ExecuTorch, you will need to install both the Python package and the appropriate platform-specific runtime libraries. Pip is the recommended way to install the ExecuTorch python package. diff --git a/docs/source/using-executorch-building-from-source.md b/docs/source/using-executorch-building-from-source.md index da7f1831658..aa71d8248c5 100644 --- a/docs/source/using-executorch-building-from-source.md +++ b/docs/source/using-executorch-building-from-source.md @@ -83,7 +83,7 @@ portability details. CMAKE_ARGS="-DEXECUTORCH_BUILD_MPS=ON" ./install_executorch.sh ``` - ## Verify the Build + ### Verify the Build To verify that the Python components are installed correctly, run the following command. This will create a file named mv2_xnnpack_fp32.pte in the current directory for the MobileNet V2 model with the XNNPACK backend. If it completes without error, the ExecuTorch Python components are installed successfully. ```bash @@ -162,7 +162,8 @@ ExecuTorch provides fine-grained control over what is built, as described in [Bu Preset values for common scenarios are listed below. Using a platform preset is recommended to avoid needing to specify many fine-grained build options. - * `android` - Build featuers and backends common for Android targets. + * `android-arm64-v8a` - Build features and backends common for arm64-v8a Android targets. + * `android-x86_64` - Build features and backends common for x86_64 Android targets. * `arm-baremetal` - Build for bare-metal ARM targets. * `ios` - Build features and backends common for iOS targets. * `macos` - Build features and backends common for Mac targets.