
Bug: export-lora does not accept GGUF files #8581

@suncloudsmoon

Description

What happened?

After fine-tuning, a LoRA GGUF is created from the transformers adapter using convert_lora_to_gguf.py. Although llama.cpp recently added support for converting LoRA adapters to GGUF, that support has not been extended to export-lora. When I combine the LoRA GGUF with the base model GGUF, I get a GGML error (see below). This is the command I used to export the LoRA:

!/usr/home/llamacpp/llama-export-lora -m /kaggle/working/base-model-f16.gguf -l /kaggle/working/trained-model/ggml-lora-f16.gguf -o /kaggle/working/full-model-finetune-f16.gguf
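For anyone hitting the same error: it is raised when export-lora validates the adapter file's 4-byte magic. Below is a minimal sketch to confirm which format an adapter file is in, assuming GGUF files begin with the ASCII bytes "GGUF" and that the legacy ggml LoRA format uses the magic value 0x67676C61 ('ggla'); the adapter_format helper is hypothetical, not part of llama.cpp:

```python
import struct

def adapter_format(path: str) -> str:
    """Report which LoRA adapter format a file's 4-byte magic suggests."""
    with open(path, "rb") as f:
        raw = f.read(4)
    if raw == b"GGUF":
        # New-style adapter, as written by convert_lora_to_gguf.py
        return "GGUF adapter"
    if len(raw) == 4 and struct.unpack("<I", raw)[0] == 0x67676C61:  # 'ggla'
        # Old binary LoRA format that export-lora appears to check for here
        return "legacy ggml LoRA adapter"
    return f"unknown magic: {raw!r}"

print(adapter_format("/kaggle/working/trained-model/ggml-lora-f16.gguf"))
```

On the adapter produced above, this prints "GGUF adapter", which is consistent with export-lora rejecting the file with the magic error below.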

Name and Version

version: 1 (a15ef8f)
built with cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 for x86_64-linux-gnu

What operating system are you seeing the problem on?

Linux

Relevant log output

error: unexpected lora header file magic in '/kaggle/working/trained-model/ggml-lora-f16.gguf'

Labels

bug-unconfirmed, high severity (used to report high severity bugs in llama.cpp: malfunctioning hinders an important workflow)
