Commit 7cc408a

fix: Allow N-GPU Layers (NGL) to be set to 0 in llama.cpp
The `n_gpu_layers` (NGL) setting in the llama.cpp extension prevented users from disabling GPU layers: a value of 0 silently fell back to the default of 100. The cause was a ternary that only used `cfg.n_gpu_layers` when it was strictly greater than 0 (`cfg.n_gpu_layers > 0`). This commit relaxes the condition to `cfg.n_gpu_layers >= 0`, making 0 a valid and accepted NGL value, so users can effectively disable GPU offloading when desired.
Parent: a1af70f

File tree

1 file changed: +1 −1
  • extensions/llamacpp-extension/src

extensions/llamacpp-extension/src/index.ts

Lines changed: 1 addition & 1 deletion
@@ -1182,7 +1182,7 @@ export default class llamacpp_extension extends AIEngine {

 // Add remaining options from the interface
 if (cfg.chat_template) args.push('--chat-template', cfg.chat_template)
-args.push('-ngl', String(cfg.n_gpu_layers > 0 ? cfg.n_gpu_layers : 100))
+args.push('-ngl', String(cfg.n_gpu_layers >= 0 ? cfg.n_gpu_layers : 100))
 if (cfg.threads > 0) args.push('--threads', String(cfg.threads))
 if (cfg.threads_batch > 0)
   args.push('--threads-batch', String(cfg.threads_batch))
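
For context, here is a minimal sketch of how the fixed ternary maps the `n_gpu_layers` config value to llama.cpp's `-ngl` flag. The standalone `nglArg` helper is hypothetical (the extension builds its args inline, as shown in the diff above); it only illustrates the mapping:

// Hypothetical helper illustrating the fixed -ngl mapping.
function nglArg(n_gpu_layers: number): string[] {
  // After this commit, 0 passes through and disables GPU offloading;
  // only negative values (treated as "unset") fall back to the default of 100.
  return ['-ngl', String(n_gpu_layers >= 0 ? n_gpu_layers : 100)]
}

nglArg(0)   // ['-ngl', '0']   -> GPU offloading disabled
nglArg(35)  // ['-ngl', '35']  -> offload 35 layers
nglArg(-1)  // ['-ngl', '100'] -> default when unset

Before the fix, `nglArg(0)` would have produced `['-ngl', '100']`, silently re-enabling GPU offloading.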
