High-performance MLX-native nodes for Nodetool on Apple Silicon. This package wraps the community MLX implementations of Whisper, Kokoro/Sesame TTS, and MFlux FLUX.1 image generation so you can run state-of-the-art audio and vision workflows locally on macOS.
- Local-first – keep data on-device by running speech, TTS, and image models without cloud calls
- Optimised for Apple Silicon – uses MLX kernels and quantized checkpoints to achieve strong throughput on M-series chips
- Drop-in nodes – integrates seamlessly with the Nodetool graph editor and
nodetool-core
runtime
All nodes live under `src/nodetool/nodes/mlx`:

- `mlx.whisper.MLXWhisper` – streaming speech-to-text using MLX Whisper checkpoints
- `mlx.tts.TTS` – Kokoro and Sesame text-to-speech with optional chunked audio streaming
- `mlx.mflux.ImageGeneration` – FLUX.1 image generation via the MFlux project (supports quantized models)

Their DSL wrappers are available under `src/nodetool/dsl/mlx` for use in generated workflows.
- macOS 14+ on Apple Silicon (MLX currently supports Apple hardware only)
- Python 3.11
- nodetool-core v0.6.0+
- Required MLX checkpoints managed via the Nodetool Models Manager (see Managing Models)
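Before installing, it can help to sanity-check that the environment matches the requirements above. A minimal stdlib sketch (this helper is an illustration, not part of the package):

```python
import platform
import sys


def mlx_env_ok(system: str = platform.system(),
               machine: str = platform.machine(),
               py: tuple = tuple(sys.version_info[:2])) -> bool:
    """Rough check for an MLX-capable environment:
    macOS ("Darwin") on Apple Silicon (arm64) with Python 3.11."""
    return system == "Darwin" and machine == "arm64" and py == (3, 11)
```

Note that this does not verify the macOS version; MLX additionally requires a recent OS release (macOS 14+).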
- Open Nodetool → Tools ▸ Packages
- Install the `nodetool-mlx` pack from the package registry
- Nodetool will handle dependencies and expose the MLX nodes in the graph editor once installed
```shell
git clone https://github.com/nodetool-ai/nodetool-mlx.git
cd nodetool-mlx
uv pip install -e .
uv pip install -r requirements-dev.txt
```
If you prefer Poetry or pip, install the project the same way; just ensure dependencies are resolved against Python 3.11.
All MLX nodes rely on locally cached checkpoints. The recommended way to download and update them is through the Models Manager built into Nodetool:
- Open Nodetool → Menu ▸ Models
- Select the `mlx` tab to view the recommended checkpoints for each node
- Click Download for the models you plan to use; Nodetool stores them in the Hugging Face cache automatically
- The UI will keep track of model availability and prompt you when updates are available
Advanced users can still seed the Hugging Face cache manually, but using the UI integration ensures consistent paths and avoids missing-model errors in workflows.
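If you do seed the cache manually, you can list which models are already present by scanning the cache directory. A stdlib-only sketch (the default path below assumes `HF_HOME` has not been changed):

```python
from pathlib import Path


def list_cached_models(cache_dir: Path = Path.home() / ".cache/huggingface/hub"):
    """Return repo ids of models present in a Hugging Face cache directory.

    Cache entries are folders named like
    'models--mlx-community--whisper-large-v3-mlx'.
    """
    if not cache_dir.is_dir():
        return []
    return sorted(
        d.name.removeprefix("models--").replace("--", "/")
        for d in cache_dir.iterdir()
        if d.is_dir() and d.name.startswith("models--")
    )
```

Comparing this list against the checkpoints shown in the Models Manager is a quick way to spot missing downloads before running a workflow.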
- Install `nodetool-core` and this package in the same environment
- Run `nodetool package scan` to generate metadata and DSL bindings
- (Optional) Run `nodetool codegen` to refresh typed DSL wrappers
- Build workflows either in the Nodetool UI or through Python DSL scripts using the `mlx` namespace
Example (Python DSL):

```python
from nodetool.dsl.mlx import ImageGeneration

node = ImageGeneration(prompt="A retrofuturistic skyline at dusk", steps=6)
```
Run tests and lint checks before submitting PRs:

```shell
pytest -q
ruff check .
black --check .
```
Please open issues or pull requests for bug fixes, new MLX models, or performance improvements. Contributions are welcome!