# Pico2: A Simple MNIST Tutorial

Deploy your PyTorch models directly to the Raspberry Pi Pico2 microcontroller with ExecuTorch.

## What You'll Build

A 28×28 MNIST digit classifier running on a memory-constrained, low-power microcontroller:

- Input: ASCII art digits (0, 1, 4, 7)
- Output: Real-time predictions via USB serial
- Memory: <400KB total footprint

## Prerequisites

- Follow the [Environment Setup section](https://docs.pytorch.org/executorch/1.0/using-executorch-building-from-source.html)
- Accept the EULA and set up the Arm toolchain as described in [this guide](https://docs.pytorch.org/executorch/1.0/backends-arm-ethos-u.html#development-requirements)
- Verify the ARM toolchain:

```bash
which arm-none-eabi-gcc # --> arm/ethos-u-scratch/arm-gnu-toolchain-13.3.rel1-x86_64-arm-none-eabi/bin/
```

## Step 1: Generate a .pte from the Example Model

- Use the [provided example model](https://github.com/pytorch/executorch/blob/main/examples/raspberry_pi/pico2/export_mlp_mnist.py)

```bash
python export_mlp_mnist.py # Creates balanced_tiny_mlp_mnist.pte
```

- **Note:** This is a hand-crafted MNIST classifier (a proof of concept), not a production-trained model. The tiny MLP recognizes digits 0, 1, 4, and 7 using manually designed feature detectors.
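
For orientation, here is a minimal sketch of what such an export script looks like with the standard ExecuTorch export flow. The layer sizes, class name, and untrained setup below are illustrative assumptions, not the exact contents of `export_mlp_mnist.py`:

```python
# Hypothetical sketch of exporting a tiny MLP to a .pte file; layer sizes and
# names are illustrative, not the exact ones used by export_mlp_mnist.py.
import torch
import torch.nn as nn
from executorch.exir import to_edge

class TinyMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(784, 16),  # 28x28 flattened input
            nn.ReLU(),
            nn.Linear(16, 10),   # one logit per digit
        )

    def forward(self, x):
        return self.net(x)

model = TinyMLP().eval()
example_inputs = (torch.zeros(1, 784),)

# torch.export -> Edge dialect -> ExecuTorch program
exported = torch.export.export(model, example_inputs)
program = to_edge(exported).to_executorch()

# Serialize the program to a .pte file
with open("balanced_tiny_mlp_mnist.pte", "wb") as f:
    f.write(program.buffer)
```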

## Step 2: Build Firmware for Pico2

```bash
# Generate the model
python export_mlp_mnist.py # Creates balanced_tiny_mlp_mnist.pte

# Build the Pico2 firmware (one command!)
./executorch/examples/rpi/build_firmware_pico.sh --model=balanced_tiny_mlp_mnist.pte   # Creates executorch_pico.uf2, the firmware image for Pico2
```

Output: **executorch_pico.uf2** firmware file (in examples/raspberry_pi/pico2/build/)

**Note:** The `build_firmware_pico.sh` script converts the given model .pte into a hex array and generates C code for it via this helper [script](https://github.com/pytorch/executorch/blob/main/examples/raspberry_pi/pico2/pte_to_array.py). That C code is then compiled into the final .uf2 binary, which is flashed to the Pico2. A rough sketch of the idea is shown below.
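
The conversion itself is conceptually simple: embed the .pte bytes in the firmware as a constant C array. Here is a hedged sketch of the idea; the output file and symbol names (`model_pte`, `model_pte_len`) are assumptions, not necessarily what `pte_to_array.py` emits:

```python
# Hedged sketch of a .pte -> C array conversion; file and symbol names are
# illustrative, not necessarily what pte_to_array.py generates.
data = open("balanced_tiny_mlp_mnist.pte", "rb").read()

# Format the bytes as a C initializer, 12 bytes per line
hex_bytes = [f"0x{b:02x}" for b in data]
rows = [", ".join(hex_bytes[i:i + 12]) for i in range(0, len(hex_bytes), 12)]

with open("model_pte.c", "w") as f:
    f.write("#include <stddef.h>\n\n")
    f.write("const unsigned char model_pte[] = {\n")
    f.write(",\n".join("    " + row for row in rows))
    f.write("\n};\n")
    f.write(f"const size_t model_pte_len = {len(data)};\n")
```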

## Step 3: Flash to Pico2

1. Hold the BOOTSEL button on the Pico2
2. Connect USB → it mounts as the `RPI-RP2` drive
3. Drag and drop the `executorch_pico.uf2` file (or copy it from a script, as sketched below)
4. Release BOOTSEL → the Pico2 reboots with your model
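
If you prefer scripting the copy instead of drag and drop, something like the following works once the BOOTSEL drive is mounted. The mount point is an assumption (a typical macOS path); adjust it for your OS:

```python
# Hedged convenience sketch: copy the firmware to the mounted BOOTSEL drive.
# The mount point is an assumption; on Linux it is often /media/<user>/RPI-RP2.
import shutil

firmware = "examples/raspberry_pi/pico2/build/executorch_pico.uf2"
mount_point = "/Volumes/RPI-RP2"

shutil.copy(firmware, mount_point)
print("Firmware copied; the Pico2 will reboot and run the model.")
```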

## Step 4: Verify Deployment

**Success indicators:**

- LED blinks 10× at 500ms → Model running ✅
- LED blinks 10× at 100ms → Error, check serial ❌

**View predictions:**

```bash
# Connect a serial terminal
screen /dev/tty.usbmodem1101 115200
```

Expected output, something like:

```
=== Digit 7 ===
############################
############################
                        ####
                       ####
                      ####
                     ####
                    ####
                   ####
                  ####
                 ####
                ####
               ####
              ####
             ####
            ####
           ####
          ####
         ####
        ####
       ####
      ####
     ####
    ####
   ####
  ####
 ####
####
###

Input stats: 159 white pixels out of 784 total
Running neural network inference...
✅ Neural network results:
  Digit 0: 370.000
  Digit 1: 0.000
  Digit 2: -3.000
  Digit 3: -3.000
  Digit 4: 860.000
  Digit 5: -3.000
  Digit 6: -3.000
  Digit 7: 1640.000 ← PREDICTED
  Digit 8: -3.000
  Digit 9: -3.000

PREDICTED: 7 (Expected: 7) ✅ CORRECT!
```
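
If you would rather log predictions from a script than attach `screen`, a small pyserial reader works as well (assuming `pip install pyserial`; the device path is machine-specific):

```python
# Hedged sketch: stream the Pico2's USB serial output with pyserial.
# The device path is machine-specific (e.g. /dev/ttyACM0 on Linux).
import serial

with serial.Serial("/dev/tty.usbmodem1101", 115200, timeout=5) as ser:
    while True:
        line = ser.readline().decode("utf-8", errors="replace")
        if line:
            print(line, end="")
```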

## Memory Optimization Tips

### Pico2 Constraints

- 520KB SRAM (runtime memory)
- 4MB Flash (model storage)
- Keep models small (a rough sizing check is sketched below)
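
As a rough back-of-envelope check (an approximation, not a hard rule), you can estimate an MLP's weight footprint from its layer sizes before exporting:

```python
# Rough estimate of an MLP's weight storage from its layer sizes.
def mlp_param_bytes(layer_sizes, bytes_per_param=4):  # 4 bytes for float32, 1 for int8
    params = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        params += n_in * n_out + n_out  # weights + biases
    return params * bytes_per_param

# Example: a 784 -> 16 -> 10 MLP
print(mlp_param_bytes([784, 16, 10]))     # 50920 bytes (~50 KB) as float32
print(mlp_param_bytes([784, 16, 10], 1))  # 12730 bytes (~12 KB) as int8
```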

### Common Issues

- "Memory allocation failed" → Reduce the model size and use quantization
- "Operator missing" → Use a selective build: `--operators=add,mul,relu`
- "Import error" → Check the `arm-none-eabi-gcc` toolchain setup

To resolve these issues, refer to the following guides:

- [ExecuTorch Quantization Optimization Guide](https://docs.pytorch.org/executorch/1.0/quantization-optimization.html)
- [Model Export & Lowering](https://docs.pytorch.org/executorch/1.0/using-executorch-export.html)
- [Selective Build support](https://docs.pytorch.org/executorch/1.0/kernel-library-selective-build.html)

### Firmware Size Analysis

```bash
cd <root of executorch repo>
ls -al examples/raspberry_pi/pico2/build/executorch_pico.elf
```

- **Overall section sizes**

```bash
arm-none-eabi-size -A examples/raspberry_pi/pico2/build/executorch_pico.elf
```

- **Detailed section breakdown**

```bash
arm-none-eabi-objdump -h examples/raspberry_pi/pico2/build/executorch_pico.elf
```

- **Symbol sizes (largest consumers)**

```bash
arm-none-eabi-nm --print-size --size-sort --radix=d examples/raspberry_pi/pico2/build/executorch_pico.elf | tail -20
```

### Model Memory Footprint

- **Model data specifically**

```bash
arm-none-eabi-nm --print-size --size-sort --radix=d examples/raspberry_pi/pico2/build/executorch_pico.elf | grep -i model
```

- **Check what's in .bss (uninitialized data)**

```bash
arm-none-eabi-objdump -t examples/raspberry_pi/pico2/build/executorch_pico.elf | grep ".bss" | head -10
```

- **Memory map overview**

```bash
arm-none-eabi-readelf -l examples/raspberry_pi/pico2/build/executorch_pico.elf
```

## Next Steps

### Scale up your deployment

- Use a real, production-trained model
- Optimize further → INT8 quantization, pruning (a pruning sketch follows below)
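
As one hedged example of the "optimize further" direction, `torch.nn.utils.prune` can zero out low-magnitude weights before export. Note that unstructured sparsity alone does not shrink the serialized .pte unless the weights are stored sparsely; this is a starting point for experimentation, not the tutorial's prescribed flow:

```python
# Hedged sketch: magnitude-prune Linear layers before exporting to a .pte.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 16), nn.ReLU(), nn.Linear(16, 10)).eval()

for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)  # zero 50% of weights
        prune.remove(module, "weight")  # bake the pruning mask into the weight tensor

# ...then export with torch.export / to_edge / to_executorch as in Step 1.
```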

### Happy Inference!

**Result:** PyTorch model → Pico2 deployment in 4 simple steps 🚀
Total tutorial time: ~15 minutes

**Conclusion:** Real-time inference on a memory-constrained, low-power microcontroller: a complete PyTorch → ExecuTorch → Pico2 MNIST demo deployment.