Emerging Optimizers is a research project focused on understanding and optimizing the algorithmic behavior of emerging optimizers (including Shampoo, SOAP, Muon, and others) and their implications for the performance of GPU systems in LLM training.
⚠️ Note: Emerging-Optimizers is under active development. All APIs are experimental and subject to change. New features, improvements, and documentation updates are released regularly. Your feedback and contributions are welcome, and we encourage you to follow along as new updates roll out.
Emerging optimizers represent a class of novel optimization algorithms that go beyond traditional first-order methods like Adam or SGD. These include optimizers that use matrix-based (non-diagonal) preconditioning, orthogonalization techniques, and other innovative approaches to achieve faster convergence and improved training efficiency.
Examples include Shampoo, which uses Kronecker-factored preconditioning (arXiv:1802.09568), and Muon, which uses Newton-Schulz orthogonalization (arXiv:2502.16982).
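As a rough illustration of the orthogonalization idea, the sketch below applies the quintic Newton-Schulz iteration described in the Muon paper to a gradient matrix. This is a standalone example in plain PyTorch, not this repository's API; the function name is made up for illustration, and the coefficients are the ones reported in arXiv:2502.16982.

```python
import torch


def newton_schulz_orthogonalize(grad: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Push `grad` toward its nearest semi-orthogonal matrix (illustrative sketch only)."""
    # Quintic Newton-Schulz coefficients reported for Muon (arXiv:2502.16982).
    a, b, c = 3.4445, -4.7750, 2.0315
    x = grad.float()
    # Work with a wide matrix so the Gram matrix x @ x.T is the smaller one.
    transposed = x.shape[0] > x.shape[1]
    if transposed:
        x = x.T
    # Scale so the spectral norm is at most ~1; the iteration needs this to converge.
    x = x / (x.norm() + 1e-7)
    for _ in range(steps):
        gram = x @ x.T
        x = a * x + (b * gram + c * gram @ gram) @ x
    return x.T if transposed else x


# Example: the singular values of the result are pushed toward 1.
g = torch.randn(256, 512)
q = newton_schulz_orthogonalize(g)
print(torch.linalg.svdvals(q).min(), torch.linalg.svdvals(q).max())
```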
Emerging optimizers have demonstrated significant practical impact in large-scale language model training. Most notably, Muon was used to train the Kimi K2 model (arXiv:2507.20534), showcasing the effectiveness of these novel approaches at scale. These optimizers can:
- Achieve faster convergence, reducing the number of training steps required
- Improve final model quality through better conditioning of the optimization landscape
- Enable more efficient hyperparameter tuning due to reduced sensitivity to learning rates
- Python 3.10 or higher (3.12 recommended)
- PyTorch 2.0 or higher
```bash
git clone https://github.com/NVIDIA-NeMo/Emerging-Optimizers.git
cd Emerging-Optimizers
pip install .
```

Refer to the tests for usage of the different optimizers, e.g. `tests/test_orthogonalized_optimizer.py::MuonTest`.
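For orientation, these optimizers plug into a training loop the same way any `torch.optim` optimizer does. The sketch below uses `torch.optim.SGD` purely as a stand-in; substitute the optimizer class from this repository (see the tests referenced above for the exact class names and constructor arguments).

```python
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024)

# Stand-in: construct a Muon/Shampoo-style optimizer from this repository here
# instead of SGD; see the tests for the actual class names and arguments.
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

for _ in range(10):
    x = torch.randn(32, 1024)
    loss = model(x).square().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```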
Integration with Megatron Core is available in the dev branch, e.g. `muon.py`.
Coming soon.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.