
Error When Running Ollama LLM Test Script for GPTR #1087

Open
hayjohnny2000 opened this issue Jan 26, 2025 · 3 comments

@hayjohnny2000

Describe the bug
When I run the basic test script at https://docs.gptr.dev/docs/gpt-researcher/llms/running-with-ollama#run-llm-test-script-for-gptr, it produces the following error:

Traceback (most recent call last):
  File "D:\Users\Chickens\Documents\code_projects\gpt-researcher\tests\test-ollama.py", line 14, in <module>
    llm_provider = get_llm(
        "ollama",
    ...<4 lines>...
        verify_ssl=False  # Add this line
    )
  File "D:\ProgramData\.conda_envs\gpt_researcher\Lib\site-packages\gpt_researcher\utils\llm.py", line 19, in get_llm
    return GenericLLMProvider.from_provider(llm_provider, **kwargs)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ProgramData\.conda_envs\gpt_researcher\Lib\site-packages\gpt_researcher\llm_provider\generic\base.py", line 77, in from_provider
    llm = ChatOllama(base_url=os.environ["OLLAMA_BASE_URL"], **kwargs)
                              ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
  File "<frozen os>", line 716, in __getitem__
KeyError: 'OLLAMA_BASE_URL'

I'm running this on Windows 11, with a 3.13.1 Python interpreter in a Conda virtual environment.

To Reproduce
Steps to reproduce the behavior:

  1. conda create -p D:\ProgramData\.conda_envs\gpt_researcher
  2. conda activate D:\ProgramData\.conda_envs\gpt_researcher
  3. conda install conda-forge::pip
  4. cd D:\Users\Chickens\Documents\code_projects
  5. git clone https://github.com/assafelovic/gpt-researcher.git
  6. cd gpt-researcher
  7. pip install -r .\requirements.txt
  8. Copy the script contents from https://docs.gptr.dev/docs/gpt-researcher/llms/running-with-ollama#run-llm-test-script-for-gptr into ./tests/test-ollama.py
  9. python ./tests/test-ollama.py

Expected behavior
The base_url argument defined in the test script should be used to initialise the ChatOllama object.


Desktop (please complete the following information):

  • OS: Windows 11
  • Python version: 3.13.1

Additional context
When I add os.environ["OLLAMA_BASE_URL"] = OLLAMA_BASE_URL to the test script, it produces a different error:

Traceback (most recent call last):
  File "D:\Users\Chickens\Documents\code_projects\gpt-researcher\tests\test-ollama.py", line 14, in <module>
    llm_provider = get_llm(
        "ollama",
    ...<4 lines>...
        verify_ssl=False  # Add this line
    )
  File "D:\ProgramData\.conda_envs\gpt_researcher\Lib\site-packages\gpt_researcher\utils\llm.py", line 19, in get_llm
    return GenericLLMProvider.from_provider(llm_provider, **kwargs)
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ProgramData\.conda_envs\gpt_researcher\Lib\site-packages\gpt_researcher\llm_provider\generic\base.py", line 77, in from_provider
    llm = ChatOllama(base_url=os.environ["OLLAMA_BASE_URL"], **kwargs)
TypeError: langchain_ollama.chat_models.ChatOllama() got multiple values for keyword argument 'base_url'
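
The second traceback is consistent with from_provider passing base_url explicitly while also forwarding it inside **kwargs. A minimal self-contained sketch of that conflict (the function names below are stand-ins, not the real GPTR or langchain_ollama code):

```python
import os

os.environ["OLLAMA_BASE_URL"] = "http://localhost:11434"

def chat_ollama(base_url, **kwargs):
    # Stand-in for the langchain_ollama ChatOllama constructor.
    return {"base_url": base_url, **kwargs}

def from_provider(**kwargs):
    # Mirrors the shape of base.py line 77: base_url is read from the
    # environment AND may also arrive inside the forwarded kwargs.
    return chat_ollama(base_url=os.environ["OLLAMA_BASE_URL"], **kwargs)

try:
    # Caller also supplies base_url -> Python rejects the duplicate keyword.
    from_provider(model="llama3", base_url="http://localhost:11434")
except TypeError as err:
    print(err)  # "... got multiple values for keyword argument 'base_url'"
```

This reproduces both errors in the report: omit the environment variable and the KeyError fires; set it and also pass base_url, and the TypeError fires.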
@assafelovic
Owner

@hayjohnny2000 does this still occur? @ElishaKay perhaps you can help with this?

@ElishaKay
Collaborator

ElishaKay commented Feb 1, 2025

Welcome @hayjohnny2000

It's possible there were breaking changes during a recent refactoring.

I tested the Ollama script in the Docs as well recently and didn't succeed in getting it working.

Would love to see a PR if you crack it

@krokosik

The solution is to remove the OLLAMA_BASE_URL variable from the script and from the function arguments, and use the one from the .env file instead.
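
A hedged sketch of that fix, with a stand-in for from_provider (the model and temperature values are assumed for illustration; the real provider lives in gpt_researcher.llm_provider.generic.base):

```python
import os

# OLLAMA_BASE_URL would normally come from the .env file (e.g. loaded
# with python-dotenv); it is set inline here so the sketch is self-contained.
os.environ["OLLAMA_BASE_URL"] = "http://localhost:11434"

def from_provider(**kwargs):
    # Stand-in for GenericLLMProvider.from_provider: the base URL is read
    # from the environment only, so callers must not pass base_url.
    return {"base_url": os.environ["OLLAMA_BASE_URL"], **kwargs}

# Fixed call: base_url omitted from the arguments, so there is no
# duplicate-keyword TypeError and no KeyError as long as .env is loaded.
provider = from_provider(model="llama3", temperature=0.7)
print(provider["base_url"])  # http://localhost:11434
```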
