Add support for AMD gfx1103 (780M APU) GPU and HIP_UMA #32

Description

@atelepov

HIP_UMA support was added to llama.cpp in PR ggml-org/llama.cpp#7414.

ROCm support for gfx1103 can be implemented following the example of:
https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU

Together, these changes could significantly speed up running LLMs with ROCm on the AMD 780M APU GPU.
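For context, the point of HIP_UMA on an APU like the 780M is that model buffers can live in ordinary system RAM shared between the CPU and the integrated GPU, instead of being copied into a small VRAM carve-out. Below is a minimal sketch of that idea using the public HIP runtime API (hipMallocManaged / hipMemAdvise); it is illustrative only and is not llama.cpp's actual implementation, and it assumes a working ROCm/HIP install for gfx1103.

```cpp
// Minimal UMA sketch (illustrative, not llama.cpp code): allocate a buffer
// backed by system RAM that both the CPU and the integrated GPU can access.
// Build (assuming ROCm is installed): hipcc -o uma_demo uma_demo.cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

#define HIP_CHECK(expr)                                                    \
    do {                                                                   \
        hipError_t err_ = (expr);                                          \
        if (err_ != hipSuccess) {                                          \
            fprintf(stderr, "HIP error %s at %s:%d\n",                     \
                    hipGetErrorString(err_), __FILE__, __LINE__);          \
            exit(1);                                                       \
        }                                                                  \
    } while (0)

int main() {
    const size_t size = 64ull * 1024 * 1024;  // 64 MiB example buffer
    float *buf = nullptr;

    // UMA-style allocation: managed memory in system RAM, visible to both
    // the CPU and the iGPU, with no dedicated-VRAM copy required.
    HIP_CHECK(hipMallocManaged(&buf, size));

    // Hint coarse-grained coherence for better GPU throughput; this is the
    // kind of advise the HIP_UMA path in llama.cpp applies to its buffers.
    int device = 0;
    HIP_CHECK(hipGetDevice(&device));
    HIP_CHECK(hipMemAdvise(buf, size, hipMemAdviseSetCoarseGrain, device));

    // The CPU can touch the allocation directly; GPU kernels could read the
    // same pointer without an explicit hipMemcpy.
    for (size_t i = 0; i < size / sizeof(float); ++i) buf[i] = 0.0f;

    HIP_CHECK(hipFree(buf));
    printf("UMA allocation OK\n");
    return 0;
}
```

In llama.cpp this behavior is gated behind a build-time HIP UMA option (named LLAMA_HIP_UMA or GGML_HIP_UMA depending on the version, if I recall correctly), so supporting it here would mainly mean enabling that option plus shipping ROCm libraries built for gfx1103.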
