Issues: horseee/LLM-Pruner
#90: What is the difference between pruning LLaMA (an LLM) and LLaVA (an MLLM)?
Opened Dec 15, 2024 by wefwefWEF2
#81: Post-training for more than 1 epoch leads to performance degradation
Opened Sep 22, 2024 by sidhantls
#75: Creating custom configuration files in Hugging Face format
Opened Sep 1, 2024 by sriyachakravarthy
#66: Is the current version suitable for Qwen?
Opened Jul 29, 2024 by wangxiaoxue
#63: No pytorch_model.bin file in the tune_log/llama_0.2/checkpoint-200 folder
Opened Jun 22, 2024 by hebowei2000
#58: Evaluation: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
Opened Apr 28, 2024 by manlenzzz