
Add base_url param support for LLM / VLM #1185

Open
d6l8 opened this issue Feb 23, 2025 · 0 comments

d6l8 commented Feb 23, 2025

Is your feature request related to a problem? Please describe.
I need to use OpenAI models through a custom base_url, but simply putting OPENAI_API_KEY in the .env file doesn't work: the create_chat_completion function in gpt-researcher\gpt_researcher\utils\llm.py doesn't support this param. Meanwhile, your code for the OpenAI embedding model already supports it:

import os
from langchain_openai import OpenAIEmbeddings

_embeddings = OpenAIEmbeddings(
    model=model,  # embedding model name from the surrounding config
    openai_api_key=os.getenv("OPENAI_API_KEY", "custom"),
    # Endpoint is read from the environment, so no code change is needed.
    openai_api_base=os.getenv(
        "OPENAI_BASE_URL", "http://localhost:1234/v1"
    ),  # default for lmstudio
)
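
For reference, with that embeddings code a local OpenAI-compatible server such as LM Studio can be selected entirely through .env (values below are illustrative; a local server typically does not validate the key):

# .env (illustrative)
OPENAI_API_KEY=custom
OPENAI_BASE_URL=http://localhost:1234/v1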

Describe the solution you'd like
Add support for a base_url param in create_chat_completion that is read from the .env file (OPENAI_BASE_URL), just like the embeddings code above.
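
A minimal sketch of what this could look like, assuming create_chat_completion builds a LangChain ChatOpenAI client internally (the simplified signature and the fallback URL here are illustrative, not the project's actual code):

import os
from langchain_openai import ChatOpenAI

async def create_chat_completion(messages, model="gpt-4o", temperature=0.4):
    # Illustrative sketch: mirror the embeddings code and read the
    # endpoint from the environment, falling back to the public API.
    llm = ChatOpenAI(
        model=model,
        temperature=temperature,
        openai_api_key=os.getenv("OPENAI_API_KEY", "custom"),
        openai_api_base=os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1"),
    )
    response = await llm.ainvoke(messages)
    return response.content

With something like this in place, pointing the LLM at LM Studio would only require setting OPENAI_BASE_URL=http://localhost:1234/v1 in .env, with no code changes.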
