
infil00p commented Oct 4, 2025

No description provided.

ariG23498 and others added 30 commits May 13, 2025 07:03
* position ids for rope

* cleanup

* no need for mask

* no mask

* more cleanup

* add back filtering

* more cleanup

* revert the signature of llm's generate and forward

* use self.decoder.lm_use_tokens

* use torch inference_mode

* add back comment

* fix bug

* add back comments

* add back comments
Implementing KV Cache for inference
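The "Implementing KV Cache for inference" commit refers to a standard decoding optimization: cache each layer's keys and values so that every decode step only runs attention math for the newest token. A minimal sketch of the idea (class and method names here are illustrative assumptions, not this PR's actual API):

```python
import torch

class KVCache:
    """Minimal per-layer key/value cache (illustrative sketch, not the
    repo's real implementation)."""
    def __init__(self):
        self.k = None  # (batch, heads, seq, head_dim)
        self.v = None

    def update(self, k_new, v_new):
        # Append the new token's keys/values along the sequence dimension;
        # subsequent steps reuse everything already cached.
        if self.k is None:
            self.k, self.v = k_new, v_new
        else:
            self.k = torch.cat([self.k, k_new], dim=2)
            self.v = torch.cat([self.v, v_new], dim=2)
        return self.k, self.v

cache = KVCache()
k1 = torch.randn(1, 4, 8, 16)  # prefill: 8 prompt tokens
cache.update(k1, k1)
k2 = torch.randn(1, 4, 1, 16)  # decode: one new token
k, v = cache.update(k2, k2)
assert k.shape == (1, 4, 9, 16)
```

With the cache in place, `generate` can feed only the latest token through the model each step, which is why the commit series above also threads explicit position ids through RoPE.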
lusxvr and others added 30 commits August 21, 2025 15:52
Multi-node training (and a few other things, should have created more PRs!)
- Add export_executorch.py with dynamic shapes and int8 quantization
- Add test_executorch_export.py for end-to-end inference testing
- Add test_executorch_accuracy.py for numerical accuracy validation
- Add ONNX export scripts in onnx_export/
- Fix language_model.py for export compatibility:
  - Remove dynamic RoPE scaling (data-dependent control flow)
  - Add position_ids parameter to forward()
  - Fix SDPA to use explicit masks instead of is_causal
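The `position_ids` change in the list above lets the caller supply token positions explicitly, so RoPE offsets no longer depend on data-dependent bookkeeping inside the graph. A hedged sketch of the rotary-angle computation this enables (function name and signature are assumptions for illustration):

```python
import torch

# Sketch: compute RoPE rotation angles from caller-supplied position_ids,
# keeping the computation export-friendly (no data-dependent control flow).
def rope_angles(position_ids, head_dim, base=10000.0):
    # Standard rotary-embedding frequencies: one per pair of channels.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    # (seq,) x (head_dim/2,) -> (seq, head_dim/2) angle table
    return position_ids.float()[:, None] * inv_freq[None, :]

pos = torch.arange(4)              # forward() now receives these explicitly
ang = rope_angles(pos, head_dim=16)
assert ang.shape == (4, 8)
```

During cached decoding, the caller would pass `position_ids` starting at the current cache length rather than zero, which is exactly the information a traced/exported graph cannot infer on its own.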

ExecuTorch export produces 6.0GB (unquantized) or 2.3GB (quantized) models
that generate accurate descriptions. Both exports tested and working.
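The SDPA fix mentioned above, replacing `is_causal=True` with an explicit mask, matters because export backends generally prefer the mask to be a concrete tensor in the graph. A minimal sketch of the equivalence (helper name is an assumption; the SDPA call itself is PyTorch's real API):

```python
import torch
import torch.nn.functional as F

# Export-friendly attention: build an explicit boolean causal mask instead
# of passing the is_causal flag. True entries are positions allowed to attend.
def sdpa_with_explicit_mask(q, k, v):
    q_len, k_len = q.shape[-2], k.shape[-2]
    mask = torch.ones(q_len, k_len, dtype=torch.bool).tril()
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

q = torch.randn(1, 4, 8, 16)
out_explicit = sdpa_with_explicit_mask(q, q, q)
out_causal = F.scaled_dot_product_attention(q, q, q, is_causal=True)
assert torch.allclose(out_explicit, out_causal, atol=1e-5)
```

Numerically the two paths agree; the explicit-mask form simply trades the flag for a tensor the exporter can capture, at the cost of materializing the mask.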

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
