
relax litellm provider constraint #820

Merged

Conversation

arjun-krishna1
Contributor

Fixing: #755

  • Remove provider constraint on litellm generator (usage sketch below)
  • Fixes breaking garak test test_litellm#test_litellm_openai
  • Exceptions raised by litellm on model non-existence are bubbled up
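With the provider check removed, litellm can infer the provider from the model name itself, so a provider entry in the generator config is no longer mandatory for models litellm recognises. A minimal usage sketch under that assumption (model name and prompt are illustrative only; behaviour matches the test output further down):

from garak.generators.litellm import LiteLLMGenerator

# Provider is autodetected by litellm from the model name; OPENAI_API_KEY
# must still be set in the environment for this particular model.
generator = LiteLLMGenerator(name="gpt-3.5-turbo")
outputs = generator.generate("Say hello")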

Contributor

github-actions bot commented Aug 11, 2024

DCO Assistant Lite bot: All contributors have signed the DCO ✍️ ✅

@arjun-krishna1
Contributor Author

  • test_litellm#test_litellm_openai passes on this branch but fails on main
~/garak$ git branch
* bugfix/litellm_provider_validation
 main
~/garak$ python -m pytest tests/generators/test_litellm.py::test_litellm_openai -s
======================================================= test session starts =======================================================
platform linux -- Python 3.12.3, pytest-8.3.2, pluggy-1.5.0
rootdir: /home/arjun/garak
configfile: pyproject.toml
plugins: requests-mock-1.12.1, anyio-4.4.0, respx-0.21.1
collected 1 item                                                                                                                  

tests/generators/test_litellm.py 🦜 loading generator: LiteLLM: gpt-3.5-turbo
test passed!
.

======================================================== 1 passed in 3.98s ========================================================
  • Fails on main:
~/garak$ python -m pytest tests/generators/test_litellm.py::test_litellm_openai -s
======================================================= test session starts =======================================================
platform linux -- Python 3.12.3, pytest-8.3.2, pluggy-1.5.0
rootdir: /home/arjun/garak
configfile: pyproject.toml
plugins: requests-mock-1.12.1, anyio-4.4.0, respx-0.21.1
collected 1 item                                                                                                                  

tests/generators/test_litellm.py 🦜 loading generator: LiteLLM: gpt-3.5-turbo
F

============================================================ FAILURES =============================================================
_______________________________________________________ test_litellm_openai _______________________________________________________

    @pytest.mark.skipif(
        getenv("OPENAI_API_KEY", None) is None,
        reason="OpenAI API key is not set in OPENAI_API_KEY",
    )
    def test_litellm_openai():
        model_name = "gpt-3.5-turbo"
>       generator = LiteLLMGenerator(name=model_name)

tests/generators/test_litellm.py:16: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <garak.generators.litellm.LiteLLMGenerator object at 0x7d1108f95d30>, name = 'gpt-3.5-turbo', generations = 10
config_root = <module 'garak._config' from '/home/arjun/garak/garak/_config.py'>

    def __init__(self, name: str = "", generations: int = 10, config_root=_config):
        self.name = name
        self.api_base = None
        self.api_key = None
        self.provider = None
        self.key_env_var = self.ENV_VAR
        self.generations = generations
        self._load_config(config_root)
        self.fullname = f"LiteLLM {self.name}"
        self.supports_multiple_generations = not any(
            self.name.startswith(provider)
            for provider in unsupported_multiple_gen_providers
        )
    
        super().__init__(
            self.name, generations=self.generations, config_root=config_root
        )
    
        if self.provider is None:
>           raise ValueError(
                "litellm generator needs to have a provider value configured - see docs"
E               ValueError: litellm generator needs to have a provider value configured - see docs

garak/generators/litellm.py:129: ValueError
===================================================== short test summary info =====================================================
FAILED tests/generators/test_litellm.py::test_litellm_openai - ValueError: litellm generator needs to have a provider value configured - see docs
======================================================== 1 failed in 1.06s ========================================================

@arjun-krishna1
Contributor Author

arjun-krishna1 commented Aug 11, 2024

Exception raised by litellm on model non-existence: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=non-existent-model

>>> from garak.generators.litellm import LiteLLMGenerator
>>> non_existent_model = "non-existent-model"
>>> generator = LiteLLMGenerator(name=non_existent_model)
🦜 loading generator: LiteLLM: non-existent-model
>>> generator.generate("This should raise an exception!")

Provider List: https://docs.litellm.ai/docs/providers

INFO:backoff:Backing off _call_model(...) for 0.0s (litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=non-existent-model
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers)

@arjun-krishna1
Contributor Author

I have read the DCO Document and I hereby sign the DCO

@leondz
Collaborator

leondz commented Aug 11, 2024

Thank you, will take a look

@jmartin-tech
Collaborator

@arjun-krishna1, please follow the fine print at the end of the bot's DCO comment to trigger action again.

@jmartin-tech jmartin-tech self-requested a review August 12, 2024 05:19
@jmartin-tech jmartin-tech self-assigned this Aug 12, 2024
@arjun-krishna1
Contributor Author

recheck

github-actions bot added a commit that referenced this pull request Aug 12, 2024
@arjun-krishna1 arjun-krishna1 force-pushed the bugfix/litellm_provider_validation branch from 4b24bc1 to c8a53c2 on August 12, 2024 13:27
Collaborator

@jmartin-tech jmartin-tech left a comment


This is definitely an improvement, and it exposes another layer that needs to be accounted for.

With this change, if no provider is passed, the litellm.completion() call performed in _call_model() can enter an infinite backoff when it raises exceptions for a missing api_key value, depending on which provider it autodetects from the model name provided.

A try block around litellm.completion() is needed that captures litellm.exceptions.AuthenticationError and possibly raises garak.exception.BadGeneratorException from the original error so that the run exits. Also, the @backoff.on_exception decorator should back off only on litellm.exceptions.APIError instead of on any raised Exception.
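
A minimal sketch of that shape, assuming the exception and class names given above and litellm's completion API (illustrative only, not the merged diff; the BadRequestError case anticipates the provider-detection errors discussed later in the thread):

import backoff
import litellm

from garak.exception import BadGeneratorException  # import path assumed from the review comment


# Back off only on litellm's APIError rather than on any Exception.
@backoff.on_exception(backoff.fibo, litellm.exceptions.APIError, max_value=70)
def _call_model(self, prompt):
    try:
        response = litellm.completion(
            model=self.name,
            messages=[{"role": "user", "content": prompt}],
        )
    except (
        litellm.exceptions.AuthenticationError,  # missing/invalid api_key for the autodetected provider
        litellm.exceptions.BadRequestError,  # litellm could not determine a provider for the model name
    ) as e:
        # Unrecoverable configuration problem: end the run instead of retrying forever.
        raise BadGeneratorException(
            f"Unrecoverable error during litellm completion for {self.name}"
        ) from e
    return [c.message.content for c in response.choices]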

@arjun-krishna1
Contributor Author

Thanks for the review @jmartin-tech
Updated the PR with a try block, backing off only on litellm APIError.
I have also added a test that checks that a BadGeneratorException is raised if litellm is given a bad model name.
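
A sketch of what such a test could look like (test name assumed, not necessarily what was merged):

import pytest

from garak.exception import BadGeneratorException
from garak.generators.litellm import LiteLLMGenerator


def test_litellm_bad_model_name():
    # A model name litellm cannot map to a provider should surface as a
    # BadGeneratorException rather than an endless backoff loop.
    generator = LiteLLMGenerator(name="non-existent-model")
    with pytest.raises(BadGeneratorException):
        generator.generate("This should raise an exception")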

Collaborator

@jmartin-tech jmartin-tech left a comment


To complete this constraint removal, the code needs to fully support passing model type detection on to litellm. This means removing the class level ENV_VAR and raising for specific errors that are thrown when litellm cannot determine the target API client to utilize.

Review threads on garak/generators/litellm.py and tests/generators/test_litellm.py (outdated, resolved)
arjun-krishna1 and others added 5 commits August 16, 2024 17:37
@arjun-krishna1
Contributor Author

To complete this constraint removal, the code needs to fully support passing model type detection on to litellm. This means removing the class level ENV_VAR and raising for specific errors that are thrown when litellm cannot determine the target API client to utilize.

Hi @jmartin-tech, I think I've resolved all your comments so far.
Please let me know if you have any other feedback or if this is good to go!

Collaborator

@jmartin-tech jmartin-tech left a comment


Testing shows one more minor UX improvement is needed.

$ python -m garak -m litellm -n openai/meta/llama3-8b-instruct -g 1 -p continuation --generator_option_file litellm.json 2> /dev/null
garak LLM vulnerability scanner v0.9.0.14.post1 ( https://github.com/leondz/garak ) at 2024-08-22T10:36:48.688446
📜 logging to /home/jemartin/.local/share/garak/garak.log
🦜 loading generator: LiteLLM: openai/meta/llama3-8b-instruct
📜 reporting to /home/jemartin/.local/share/garak/garak_runs/garak.703878b2-c57d-49f2-8af0-bff238965aab.report.jsonl
🕵️  queue of probes: continuation.ContinueSlursReclaimedSlursMini

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

The raised BadGeneratorException can report the message it was instantiated with, to guide the user in determining why the failure occurred. Committing a suggested error message and landing this shortly.

$ python -m garak -m litellm -n openai/meta/llama3-8b-instruct -g 1 -p continuation --generator_option_file litellm.json 2> /dev/null
garak LLM vulnerability scanner v0.9.0.14.post1 ( https://github.com/leondz/garak ) at 2024-08-22T10:37:55.025515
📜 logging to /home/jemartin/.local/share/garak/garak.log
🦜 loading generator: LiteLLM: openai/meta/llama3-8b-instruct
📜 reporting to /home/jemartin/.local/share/garak/garak_runs/garak.3838265e-c493-4bc7-a5de-0a047d016900.report.jsonl
🕵️  queue of probes: continuation.ContinueSlursReclaimedSlursMini

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

Unrecoverable error during litellm completion see log for details

Review thread on garak/generators/litellm.py (outdated, resolved)
@jmartin-tech jmartin-tech merged commit 933c41d into NVIDIA:main Aug 22, 2024
8 checks passed
@github-actions github-actions bot locked and limited conversation to collaborators Aug 22, 2024
@leondz
Collaborator

leondz commented Aug 22, 2024

Thank you @arjun-krishna1 !!
