[feat]: Add support & docs for connecting to local OpenAI-compatible endpoints #40

@Arindam200

Description

It would be really helpful if Memori supported connecting to local OpenAI-compatible endpoints, and documented how to do so.

Motivation

Many developers run models locally using projects like llama.cpp, koboldcpp, or jan.ai.

Some users don’t want to send data to external APIs for privacy or cost reasons.

Having a straightforward way to configure Memori with local endpoints would make it more flexible and self-hosting–friendly.

Proposal

Add instructions in the README on how to:

  • Point Memori to a local LLM endpoint (OpenAI API-compatible).
  • Configure embeddings with a different base URL than the LLM (since some setups split LLMs and embedding services); a rough sketch of both settings follows this list.
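
Memori's exact configuration surface isn't shown in this issue, so the sketch below only illustrates the two settings being requested, using the official `openai` Python client against local OpenAI-compatible servers. The URLs, ports, model names, and the idea of configuring LLM and embedding endpoints separately via two clients are assumptions for illustration, not Memori's actual API.

```python
# Minimal sketch: point an OpenAI-compatible client at local endpoints
# (e.g. a llama.cpp server). Hostnames, ports, and model names below are
# placeholders; a local server typically ignores the API key.
from openai import OpenAI

# Chat/completions served by a local LLM endpoint (assumed port 8080).
llm_client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="not-needed-locally",
)

# Embeddings served by a *different* local service (assumed port 8081),
# covering the "separate base URL for embeddings" case.
embedding_client = OpenAI(
    base_url="http://localhost:8081/v1",
    api_key="not-needed-locally",
)

chat = llm_client.chat.completions.create(
    model="local-model",  # whatever name the local server exposes
    messages=[{"role": "user", "content": "Hello from a fully local setup"}],
)

emb = embedding_client.embeddings.create(
    model="local-embedding-model",
    input="text to embed",
)
```

The README section could then show how these two base URLs (or pre-configured clients) are handed to Memori, however its configuration actually exposes them.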

Benefits

  • Makes Memori usable in fully local / offline setups.
  • Supports wider community adoption across different backends.
  • Increases flexibility for developers working with custom infra.
