[WIP] LLaMa-Torchsharp for LLaMa 3 #1380
GeorgeS2019
started this conversation in
Show and tell
Replies: 2 comments 2 replies
-
Found a better home for it. The project has migrated to
-
@GeorgeS2019 I'd have to review @LittleLittleCloud 's latest code. If he has already taken care of Llama 3+, then nothing more is needed in that direction. Beyond that, I made a contribution to TorchSharp.PyBridge that reduces the memory overhead during load time, allowing you to load slightly larger models faster. It would be a matter of ensuring that my fix made it into PyBridge and that Torchsharp-llama is using it.
-
@LittleLittleCloud
=> question: were there challenges you encountered that prevented this project from continuing?
https://github.com/LittleLittleCloud/Torchsharp-llama
Continue further:
@ejhg