Merging tensors of larger models #1
Thanks! The bigger problem now is that I am out of disk space, haha!
Leave a tip jar to get @ggerganov a bigger SSD and/or MacBook :D
It's kinda pointless now, but I was able to merge the 30B and 65B with this core bit of hackery added to the convert script.
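As a rough, hypothetical sketch of that kind of hackery (not the actual snippet from this thread), assuming PyTorch is installed and the original `consolidated.*.pth` shards are on disk, a merge step might look like this:

```python
# Hypothetical sketch (not the snippet referenced above) of merging the sharded
# consolidated.*.pth parts of a LLaMA checkpoint before conversion.
# For simplicity every 2-D tensor is concatenated along dim 0; in the real
# checkpoints the split dimension depends on the layer type, and 1-D tensors
# (norm weights) are simply replicated across parts.
import glob
import torch

def merge_parts(model_dir: str) -> dict:
    part_paths = sorted(glob.glob(f"{model_dir}/consolidated.*.pth"))
    parts = [torch.load(p, map_location="cpu") for p in part_paths]
    merged = {}
    for name in parts[0].keys():
        tensors = [part[name] for part in parts]
        if tensors[0].dim() == 1:
            merged[name] = tensors[0]                 # replicated: keep one copy
        else:
            merged[name] = torch.cat(tensors, dim=0)  # sharded: stitch back together
    return merged
```

The upside of something like this is that the convert script only ever sees one logical state dict; the downside is that all parts have to fit in RAM at once, which feeds exactly the disk/memory pressure mentioned above.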
Fixed with 007a8f6. On startup, we go through all the parts and merge them dynamically in the …
…l-instead-of-wget-1: Update command for downloading the weights to use `curl`. `curl` is preinstalled on macOS and the new command is equivalent to the `wget` version but avoids having to install `wget`. This should save people some time.
broken change: delete original profile ggerganov#1 from q_f32 profiles
* kquants_iter for hipblas and add gfx803
* Update CMakeLists.txt with hipblas kquants_iter and DMMV_F16
* remove dmmv_f16 for now
slaren: Cmrp fixes
Nits found in binary renames
* A Chinese word formed of 3 Chinese characters, but the first 2 are not a word
* tokenizer-fix
* E5 Pretokenizer bugfix
* whitespace fix
* remove extra wpm

---------

Co-authored-by: Mike Fan <[email protected]>
Co-authored-by: Oliver Ye <[email protected]>
* [example] batched-bench "segmentation fault"

  When `llama-batched-bench` is invoked _without_ setting `-npl` (the "number of parallel prompts"), it segfaults. The segfault is caused by invoking `max_element()` on a zero-length vector, `n_pl`. This commit addresses that by first checking whether the number of parallel prompts is zero and, if so, setting the maximum sequence size to 1; otherwise it is set to the result of `max_element()` as before.

  Fixes the following crash when running `lldb build/bin/llama-batched-bench -- -m models/Meta-Llama-3-8B.gguf`:

  ```
  * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
      frame #0: 0x000000010000366c llama-batched-bench`main(argc=3, argv=0x000000016fdff268) at batched-bench.cpp:72:28
         69   llama_context_params ctx_params = llama_context_params_from_gpt_params(params);
         70
         71   // ensure enough sequences are available
      -> 72   ctx_params.n_seq_max = *std::max_element(n_pl.begin(), n_pl.end());
  ```

* Update examples/batched-bench/batched-bench.cpp

  Co-authored-by: compilade <[email protected]>

---------

Co-authored-by: Georgi Gerganov <[email protected]>
Co-authored-by: compilade <[email protected]>
* fstring ggerganov#1
* fstring ggerganov#2
* dictionary ggerganov#1
* dictionary ggerganov#2
#1)
* Fixed a bug where debug code was included in the release, resulting in an undefined function error.
* Change the path of the QNN library when building in termux environment
* Revert "Change the path of the QNN library when building in termux environment"
  This reverts commit c6e26a3.
* Changed so that GGML_QNN_DEFAULT_LIB_SEARCH_PATH can be set from command line arguments
It shouldn't be hard to merge tensors with my https://github.com/kir-gadjello/zipslicer library, but it's pure Python! If you want to keep the project pure C++, you might want to write a standalone gist script that uses zipslicer to unpack weight shards into binary files.
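For reference, a minimal sketch of such a standalone script, using plain `torch.load` rather than zipslicer (so, unlike zipslicer, it reads the whole shard into RAM), dumping each tensor to a raw float32 file:

```python
# Hypothetical sketch of a standalone "unpack a shard to binary files" script.
# Uses plain torch.load instead of zipslicer, so the entire shard is loaded
# into memory; zipslicer's advantage is reading tensors lazily from the archive.
import os
import sys
import torch

def dump_shard(shard_path: str, out_dir: str) -> None:
    os.makedirs(out_dir, exist_ok=True)
    state = torch.load(shard_path, map_location="cpu")
    for name, tensor in state.items():
        out_path = os.path.join(out_dir, f"{name}.bin")
        # Write raw float32 (native byte order) so a C++ loader can read it directly.
        tensor.to(torch.float32).numpy().tofile(out_path)
        print(f"{name}: {tuple(tensor.shape)} -> {out_path}")

if __name__ == "__main__":
    dump_shard(sys.argv[1], sys.argv[2])
```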