
Feature Request: Support GPTQ (Quotes: GPTQModel 4bit can match BF16) #11024

@BodhiHu

Description


Prerequisites

  • I am running the latest code. Mention the version if possible as well.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

Hello,

Great work here :D

Is it possible for llama.cpp to support GPTQ-quantized models?
GPTQ-quantized models have the advantage that they are calibrated with a dataset during quantization, which I think is a good reason to support GPTQ (a rough sketch of that calibration step follows the quote below):

Quoting from the GPTQModel repo:

"Quality: GPTQModel 4bit can match BF16."

https://github.com/ModelCloud/GPTQModel
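For context, here is a minimal sketch of the dataset-calibrated quantization step the request refers to, loosely following the GPTQModel quickstart. The class and method names (`GPTQModel.load`, `QuantizeConfig`, `model.quantize`) should be checked against the current GPTQModel README, and the model id, output path, and calibration slice here are placeholders:

```python
# Sketch: dataset-calibrated 4-bit GPTQ quantization with GPTQModel.
# Names follow the GPTQModel quickstart; verify against the current README.
# The model id, output path, and calibration slice are placeholders.
from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

# A small slice of C4 serves as the calibration dataset -- this is the
# "calibrated with a dataset" step that plain round-to-nearest quantization
# (e.g. most GGUF quants) does not have.
calibration = load_dataset(
    "allenai/c4",
    data_files="en/c4-train.00001-of-01024.json.gz",
    split="train",
).select(range(1024))["text"]

quant_config = QuantizeConfig(bits=4, group_size=128)

model = GPTQModel.load("meta-llama/Llama-3.2-1B-Instruct", quant_config)
model.quantize(calibration, batch_size=1)  # per-layer error minimization against the calibration set
model.save("Llama-3.2-1B-Instruct-gptq-4bit")
```

The resulting checkpoint stores packed int4 weights plus per-group scales and zero points, which is the format llama.cpp would need to load or convert to support this request.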

Motivation

GPTQ-quantized models can be calibrated against a dataset during quantization, and they remain usable for further fine-tuning afterwards.

Possible Implementation

No response
