
[Windows Support] Add latest xformers wheels to pyproject.toml#1753

Merged
danielhanchen merged 2 commits into unslothai:nightly from
versipellis:u/versipellis/windows_support
Feb 20, 2025
Conversation

@versipellis
Contributor

Resolves #1359.

Adds the latest xformers wheels for Windows, supporting CUDA 12.6. I also checked https://download.pytorch.org/whl/ for any other xformers Windows binaries missing for torch==2.4.0 and above; these were the only ones missing.

This was tested on two supported configurations, cross-referenced against the PyTorch Release Compatibility Matrix and verified with `python -m xformers.info` and `python -m bitsandbytes`; logs (including `pip list` output) are in the gists below:

Happy to test on more configurations if you'd like me to, but I only have access to an Ampere GPU right now.
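For context, wheel entries of this kind can be pinned per platform in pyproject.toml using PEP 508 direct references plus environment markers. The sketch below is illustrative only: the version number, wheel filename, and extra name are hypothetical, not the exact entries from this PR.

```toml
# Hypothetical sketch of a Windows-only xformers pin via a PEP 508
# direct URL reference gated by environment markers. The real entries
# point at the actual wheel filenames on download.pytorch.org.
[project.optional-dependencies]
windows = [
    "xformers @ https://download.pytorch.org/whl/cu126/xformers-0.0.29.post2-cp312-cp312-win_amd64.whl ; platform_system == 'Windows' and python_version == '3.12'",
]
```

The markers keep the wheel from being resolved on Linux or on other Python versions; after installation, `python -m xformers.info` is the usual sanity check that the binary loaded correctly.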

@versipellis versipellis mentioned this pull request Feb 19, 2025
@versipellis
Contributor Author

Also added a couple more small notes to README.md.

@danielhanchen danielhanchen changed the base branch from main to nightly February 19, 2025 10:13
@danielhanchen
Contributor

Thank you a lot! Appreciate it!

@danielhanchen danielhanchen merged commit 07aea40 into unslothai:nightly Feb 20, 2025
danielhanchen added a commit that referenced this pull request Feb 20, 2025
* Update __init__.py

* Update loader.py

* Update rl.py

* Update rl.py

* Update _utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Better TRL handling

* Update rl.py

* Update tokenizer_utils.py

* Auto patching

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update rl.py

* Update tokenizer_utils.py

* Update rl.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update tokenizer_utils.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update tokenizer_utils.py

* Update rl.py

* Update rl.py

* Update rl.py

* max seq length

* Update rl.py

* Update rl.py

* Patching

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* NEFTune

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Extra replacements

* Update rl_replacements.py

* Update rl.py

* extra RL replacements

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update _utils.py

* Update loader_utils.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* autocast

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update pyproject.toml

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update _utils.py

* Update llama.py

* Update _utils.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* GRPO optimized

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Selective Log softmax

* Fix GRPO bsz

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Fix TRL

* Metrics GRPO

* Update rl_replacements.py

* Update rl_replacements.py

* No compile

* Update rl.py

* Remove docs

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* llama-quantize on WINDOWS WSL error fix - edit save.py (gguf saving breaks) (#1649)

* edit save.py to fix gguf saving breaks.

* add check for .exe or not exe file extension for linux and windows

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* unsloth_num_chunks

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py (#1754)

Fix typo in comment: know -> now.

This was printed when running the Llama3.1_(8B)-GRPO.ipynb example notebook, so I'd expect others to run into it as well.

* Optional logits

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* fix an import error (#1767)

* fix an import error

* Delete .gitignore

* Update loader.py

* Update save.py

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* SamplingParams

* Convert mask to float (#1762)

* [Windows Support] Add latest `xformers` wheels to pyproject.toml (#1753)

* Add latest xformers

* Add a couple of lines to docs

* vLLMSamplingParams

* Update __init__.py

* default num_chunks == -1

* Versioning

---------

Co-authored-by: Gennadii Manzhos <105049664+everythingisc00l@users.noreply.github.com>
Co-authored-by: Seth Weidman <seth@sethweidman.com>
Co-authored-by: Nino Risteski <95188570+NinoRisteski@users.noreply.github.com>
Co-authored-by: Edd <68678137+Erland366@users.noreply.github.com>
Co-authored-by: Ben <6579034+versipellis@users.noreply.github.com>
danielhanchen added a commit that referenced this pull request Feb 20, 2025

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update _utils.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update pyproject.toml

* Update pyproject.toml

---------

Co-authored-by: Gennadii Manzhos <105049664+everythingisc00l@users.noreply.github.com>
Co-authored-by: Seth Weidman <seth@sethweidman.com>
Co-authored-by: Nino Risteski <95188570+NinoRisteski@users.noreply.github.com>
Co-authored-by: Edd <68678137+Erland366@users.noreply.github.com>
Co-authored-by: Ben <6579034+versipellis@users.noreply.github.com>
danielhanchen added a commit that referenced this pull request Mar 4, 2025

* Export Model to ollama.com  (#1648)

* Ollama Export Model to ollama.com

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Check for model_name

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* subprocess use instead of requests | added check for ollama server

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* create_ollama_model

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* create_ollama_model | fix

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Push to Ollama

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

---------

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Update cross_entropy_loss.py

* torch_cuda_device

* Update utils.py

* Update utils.py

* Update utils.py

* device

* device

* Update loader.py

* Update llama.py

* Update README.md

* Update llama.py

* Update llama.py

* Update _utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* __version__

* Update rl.py

* Bug fixes

---------

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>
Co-authored-by: Gennadii Manzhos <105049664+everythingisc00l@users.noreply.github.com>
Co-authored-by: Seth Weidman <seth@sethweidman.com>
Co-authored-by: Nino Risteski <95188570+NinoRisteski@users.noreply.github.com>
Co-authored-by: Edd <68678137+Erland366@users.noreply.github.com>
Co-authored-by: Ben <6579034+versipellis@users.noreply.github.com>
Co-authored-by: Jyotin Goel <120490013+gjyotin305@users.noreply.github.com>
danielhanchen added a commit that referenced this pull request Mar 5, 2025

* Bug fixes

* Update llama.py

* Update _utils.py

* _wrap_fast_inference

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update _utils.py

* SFT dataset prepare

* Update pyproject.toml

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update utils.py

* bug fix

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update __init__.py

---------

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>
Co-authored-by: Gennadii Manzhos <105049664+everythingisc00l@users.noreply.github.com>
Co-authored-by: Seth Weidman <seth@sethweidman.com>
Co-authored-by: Nino Risteski <95188570+NinoRisteski@users.noreply.github.com>
Co-authored-by: Edd <68678137+Erland366@users.noreply.github.com>
Co-authored-by: Ben <6579034+versipellis@users.noreply.github.com>
Co-authored-by: Jyotin Goel <120490013+gjyotin305@users.noreply.github.com>
danielhanchen added a commit that referenced this pull request Mar 6, 2025
* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update pyproject.toml

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update _utils.py

* Update llama.py

* Update _utils.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* GRPO optimized

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Selective Log softmax

* Fix GRPO bsz

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Fix TRL

* Metrics GRPO

* Update rl_replacements.py

* Update rl_replacements.py

* No compile

* Update rl.py

* Remove docs

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* llama-quantize on WINDOWS WSL error fix - edit save.py (gguf saving breaks) (#1649)

* edit save.py to fix gguf saving breaks.

* add check for .exe or not exe file extension for linux and windows

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* unsloth_num_chunks

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py (#1754)

Fix typo in comment: know -> now.

This was printed when running the Llama3.1_(8B)-GRPO.ipynb example notebook, so I'd expect others to run into it as well.

* Optional logits

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* fix an import error (#1767)

* fix an import error

* Delete .gitignore

* Update loader.py

* Update save.py

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* SamplingParams

* Convert mask to float (#1762)

* [Windows Support] Add latest `xformers` wheels to pyproject.toml (#1753)

* Add latest xformers

* Add a couple of lines to docs

* vLLMSamplingParams

* Update __init__.py

* default num_chunks == -1

* Versioning

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update _utils.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update pyproject.toml

* Update pyproject.toml

* Export Model to ollama.com  (#1648)

* Ollama Export Model to ollama.com

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Check for model_name

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* subprocess use instead of requests | added check for ollama server
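Checking for a running Ollama server via `subprocess` (instead of HTTP requests) can be done by shelling out to the CLI, since `ollama list` only succeeds when the local server is reachable. A hedged sketch of such a check — the helper name is hypothetical, not the PR's actual function:

```python
import subprocess

def ollama_server_running(timeout: float = 5.0) -> bool:
    # "ollama list" talks to the local server; a non-zero exit code,
    # a missing binary, or a hang all mean the server is unavailable.
    try:
        proc = subprocess.run(
            ["ollama", "list"],
            capture_output=True,
            timeout=timeout,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False
    return proc.returncode == 0
```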

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* create_ollama_model

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* create_ollama_model | fix

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Push to Ollama

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

---------

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Update cross_entropy_loss.py

* torch_cuda_device

* Update utils.py

* Update utils.py

* Update utils.py

* device

* device

* Update loader.py

* Update llama.py

* Update README.md

* Update llama.py

* Update llama.py

* Update _utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* __version__

* Update rl.py

* Bug fixes

* Bug fixes

* Update llama.py

* Update _utils.py

* _wrap_fast_inference

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update _utils.py

* SFT dataset prepare

* Update pyproject.toml

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update utils.py

* bug fix

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update __init__.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update _utils.py

* Update __init__.py

* Update _utils.py

* Version

---------

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>
Co-authored-by: Gennadii Manzhos <105049664+everythingisc00l@users.noreply.github.com>
Co-authored-by: Seth Weidman <seth@sethweidman.com>
Co-authored-by: Nino Risteski <95188570+NinoRisteski@users.noreply.github.com>
Co-authored-by: Edd <68678137+Erland366@users.noreply.github.com>
Co-authored-by: Ben <6579034+versipellis@users.noreply.github.com>
Co-authored-by: Jyotin Goel <120490013+gjyotin305@users.noreply.github.com>
danielhanchen added a commit that referenced this pull request Mar 6, 2025
* Update llama.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update pyproject.toml

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update llama.py

* Update _utils.py

* Update llama.py

* Update _utils.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update rl_replacements.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* GRPO optimized

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Selective Log softmax
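"Selective log softmax" presumably means evaluating log-probabilities only at the chosen token ids (log p(i) = logits[i] − logsumexp(logits)) instead of materializing the full (sequence, vocab) log-softmax matrix, which saves memory in GRPO-style training. A NumPy sketch under that assumption:

```python
import numpy as np

def selective_log_softmax(logits: np.ndarray, index: np.ndarray) -> np.ndarray:
    # Row-wise: log p(index) = logits[index] - logsumexp(logits),
    # with the max subtracted for numerical stability.
    row_max = logits.max(axis=-1, keepdims=True)
    logsumexp = row_max.squeeze(-1) + np.log(np.exp(logits - row_max).sum(axis=-1))
    chosen = np.take_along_axis(logits, index[:, None], axis=-1).squeeze(-1)
    return chosen - logsumexp
```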

* Fix GRPO bsz

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Fix TRL

* Metrics GRPO

* Update rl_replacements.py

* Update rl_replacements.py

* No compile

* Update rl.py

* Remove docs

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl_replacements.py

* Update rl_replacements.py

* versioning

* Update _utils.py

* Update llama.py

* Update llama.py

danielhanchen added a commit that referenced this pull request Mar 8, 2025
* Bug fixes

* FastModel

* __doc__

* Update vision.py

* Update loader.py

* Update loader.py

* Update loader.py

* version

danielhanchen added a commit that referenced this pull request Mar 12, 2025
* move use_modelscope to _utils (#1938)

* move use_modelscope to _utils

* Update _utils.py

* Update loader.py

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* Don't use revision when loading model_config and is_peft=True (#1949)

* More syntax warnings (#1944)

* move use_modelscope to _utils

* fix

* Update _utils.py

* Update loader.py

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* Update loader.py

* Full finetuning and other fixes

* UNSLOTH_ENABLE_FULL_FINETUNING

* Update loader.py

* Update loader.py

* Update loader.py

* Update vision.py

* Update vision.py

* full finetuning

* Update loader.py

* Update loader.py

* Update loader.py

* Update _utils.py

* max_seq_length

* Update rl.py

* Update rl.py

* Update rl.py

* Update pyproject.toml

* AutoModelForImageTextToText

* Update mapper.py

* Update pyproject.toml

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Batch samples

* Update loader.py

* Update loader.py

* Update loader.py

* Update loader.py

* Update _utils.py

* Update loader.py

* Update vision.py

* Update loader.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update mapper.py

---------

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>
Co-authored-by: Gennadii Manzhos <105049664+everythingisc00l@users.noreply.github.com>
Co-authored-by: Seth Weidman <seth@sethweidman.com>
Co-authored-by: Nino Risteski <95188570+NinoRisteski@users.noreply.github.com>
Co-authored-by: Edd <68678137+Erland366@users.noreply.github.com>
Co-authored-by: Ben <6579034+versipellis@users.noreply.github.com>
Co-authored-by: Jyotin Goel <120490013+gjyotin305@users.noreply.github.com>
Co-authored-by: Kareem <81531392+KareemMusleh@users.noreply.github.com>
Co-authored-by: Wilson Wu <140025193+wiwu2390@users.noreply.github.com>
danielhanchen added a commit that referenced this pull request Mar 13, 2025
* Update vision.py

* Temporary patches

* Update loader.py

* model names

* Gemma 3 chat template

* Bug fixes

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update llama.py

* Update llama.py

* Update rl.py

* Update chat_templates.py

* Update chat_templates.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update loader.py

* Update vision.py

* Update vision.py

* Revert

* Update _utils.py

* forced precision

* Autocast

* Update vision.py

* Update vision.py

* Update rl.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

danielhanchen added a commit that referenced this pull request Mar 14, 2025

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update __init__.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update _utils.py

* Update __init__.py

* Update _utils.py

* Version

* versioning

* Update _utils.py

* Update llama.py

* Update llama.py

* Bug fixes

* FastModel

* __doc__

* Update vision.py

* Update loader.py

* Update loader.py

* Update loader.py

* version

* move use_modelscope to _utils (#1938)

* move use_modelscope to _utils

* Update _utils.py

* Update loader.py

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* Don't use revision when loading model_config and is_peft=True (#1949)

* More syntax warnings (#1944)

* move use_modelscope to _utils

* fix

* Update _utils.py

* Update loader.py

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* Update loader.py

* Full finetuning and other fixes

* UNSLOTH_ENABLE_FULL_FINETUNING

* Update loader.py

* Update loader.py

* Update loader.py

* Update vision.py

* Update vision.py

* full finetuning

* Update loader.py

* Update loader.py

* Update loader.py

* Update _utils.py

* max_seq_length

* Update rl.py

* Update rl.py

* Update rl.py

* Update pyproject.toml

* AutoModelForImageTextToText

* Update mapper.py

* Update pyproject.toml

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Batch samples

* Update loader.py

* Update loader.py

* Update loader.py

* Update loader.py

* Update _utils.py

* Update loader.py

* Update vision.py

* Update loader.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update mapper.py

* Update vision.py

* Temporary patches

* Update loader.py

* model names

* Gemma 3 chat template

* Bug fixes

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update llama.py

* Update llama.py

* Update rl.py

* Update chat_templates.py

* Update chat_templates.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update loader.py

* Update vision.py

* Update vision.py

* Revert

* Update _utils.py

* forced precision

* Autocast

* Update vision.py

* Update vision.py

* Update rl.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update rl.py

* vLLM fixes

* constexpr

* Update vision.py

* Update vision.py

* Update vision.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update save.py

* New models

* Triton windows update (#1976)

* Update pyproject.toml

* Update README.md

* Update RMS LayerNorm implementation, and list compr. change in chat templates (#1974)

* Update RMS LayerNorm implementation with optimizations and testing suite

* perf: optimize list comprehension in get_ollama_eos_tokens

* Update Zoo

* Update llama.py

* Update llama.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update rl_replacements.py

* Update vision.py

* grpo fix

* Update rl_replacements.py

* Update vision.py

* Update rl_replacements.py

* Update vision.py

* Update mapper.py

* Update vision.py

* Update vision.py

* Update loader.py

---------

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>
Co-authored-by: Nino Risteski <95188570+NinoRisteski@users.noreply.github.com>
Co-authored-by: Edd <68678137+Erland366@users.noreply.github.com>
Co-authored-by: Ben <6579034+versipellis@users.noreply.github.com>
Co-authored-by: Jyotin Goel <120490013+gjyotin305@users.noreply.github.com>
Co-authored-by: Kareem <81531392+KareemMusleh@users.noreply.github.com>
Co-authored-by: Wilson Wu <140025193+wiwu2390@users.noreply.github.com>
Co-authored-by: Akshay Behl <126911424+Captain-T2004@users.noreply.github.com>
danielhanchen added a commit that referenced this pull request Mar 14, 2025
* Update rl_replacements.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update rl.py

* fix an import error (#1767)

* fix an import error

* Delete .gitignore

* Update loader.py

* Update save.py

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* SamplingParams

* Convert mask to float (#1762)

* [Windows Support] Add latest `xformers` wheels to pyproject.toml (#1753)

* Add latest xformers

* Add a couple of lines to docs

* vLLMSamplingParams

* Update __init__.py

* default num_chunks == -1

* Versioning

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update _utils.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update pyproject.toml

* Update pyproject.toml

* Export Model to ollama.com  (#1648)

* Ollama Export Model to ollama.com

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Check for model_name

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* subprocess use instead of requests | added check for ollama server

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* create_ollama_model

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* create_ollama_model | fix

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Push to Ollama

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

---------

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>

* Update cross_entropy_loss.py

* torch_cuda_device

* Update utils.py

* Update utils.py

* Update utils.py

* device

* device

* Update loader.py

* Update llama.py

* Update README.md

* Update llama.py

* Update llama.py

* Update _utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update utils.py

* Update utils.py

* Update utils.py

* Update utils.py

* __version__

* Update rl.py

* Bug fixes

* Bug fixes

* Update llama.py

* Update _utils.py

* _wrap_fast_inference

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update _utils.py

* SFT dataset prepare

* Update pyproject.toml

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl_replacements.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update utils.py

* bug fix

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update __init__.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update rl.py

* Update rl.py

* Update rl.py

* Update _utils.py

* Update __init__.py

* Update _utils.py

* Version

* versioning

* Update _utils.py

* Update llama.py

* Update llama.py

* Bug fixes

* FastModel

* __doc__

* Update vision.py

* Update loader.py

* Update loader.py

* Update loader.py

* version

* move use_modelscope to _utils (#1938)

* move use_modelscope to _utils

* Update _utils.py

* Update loader.py

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* Don't use revision when loading model_config and is_peft=True (#1949)

* More syntax warnings (#1944)

* move use_modelscope to _utils

* fix

* Update _utils.py

* Update loader.py

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>

* Update loader.py

* Full finetuning and other fixes

* UNSLOTH_ENABLE_FULL_FINETUNING

* Update loader.py

* Update loader.py

* Update loader.py

* Update vision.py

* Update vision.py

* full finetuning

* Update loader.py

* Update loader.py

* Update loader.py

* Update _utils.py

* max_seq_length

* Update rl.py

* Update rl.py

* Update rl.py

* Update pyproject.toml

* AutoModelForImageTextToText

* Update mapper.py

* Update pyproject.toml

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Batch samples

* Update loader.py

* Update loader.py

* Update loader.py

* Update loader.py

* Update _utils.py

* Update loader.py

* Update vision.py

* Update loader.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update mapper.py

* Update vision.py

* Temporary patches

* Update loader.py

* model names

* Gemma 3 chat template

* Bug fixes

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update llama.py

* Update llama.py

* Update rl.py

* Update chat_templates.py

* Update chat_templates.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update loader.py

* Update vision.py

* Update vision.py

* Revert

* Update _utils.py

* forced precision

* Autocast

* Update vision.py

* Update vision.py

* Update rl.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update rl.py

* vLLM fixes

* constexpr

* Update vision.py

* Update vision.py

* Update vision.py

* Update rl.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update llama.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update _utils.py

* Update save.py

* New models

* Triton windows update (#1976)

* Update pyproject.toml

* Update README.md

* Update RMS LayerNorm implementation, and list compr. change in chat templates (#1974)

* Update RMS LayerNorm implementation with optimizations and testing suite

* perf: optimize list comprehension in get_ollama_eos_tokens

* Update Zoo

* Update llama.py

* Update llama.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update vision.py

* Update rl_replacements.py

* Update vision.py

* grpo fix

* Update rl_replacements.py

* Update vision.py

* Update rl_replacements.py

* Update vision.py

* Update mapper.py

* Update vision.py

* Update vision.py

* Update loader.py

* Update vision.py

* Update save.py

* Update save.py

* Update save.py

---------

Signed-off-by: Jyotin Goel <b22ai063@iitj.ac.in>
Co-authored-by: Nino Risteski <95188570+NinoRisteski@users.noreply.github.com>
Co-authored-by: Edd <68678137+Erland366@users.noreply.github.com>
Co-authored-by: Ben <6579034+versipellis@users.noreply.github.com>
Co-authored-by: Jyotin Goel <120490013+gjyotin305@users.noreply.github.com>
Co-authored-by: Kareem <81531392+KareemMusleh@users.noreply.github.com>
Co-authored-by: Wilson Wu <140025193+wiwu2390@users.noreply.github.com>
Co-authored-by: Akshay Behl <126911424+Captain-T2004@users.noreply.github.com>
