relax litellm provider constraint #820
Conversation
DCO Assistant Lite bot: All contributors have signed the DCO ✍️ ✅
~/garak$ git branch
* bugfix/litellm_provider_validation
main
~/garak$ python -m pytest tests/generators/test_litellm.py::test_litellm_openai -s
======================================================= test session starts =======================================================
platform linux -- Python 3.12.3, pytest-8.3.2, pluggy-1.5.0
rootdir: /home/arjun/garak
configfile: pyproject.toml
plugins: requests-mock-1.12.1, anyio-4.4.0, respx-0.21.1
collected 1 item
tests/generators/test_litellm.py 🦜 loading generator: LiteLLM: gpt-3.5-turbo
test passed!
.
======================================================== 1 passed in 3.98s ========================================================
~/garak$ python -m pytest tests/generators/test_litellm.py::test_litellm_openai -s
======================================================= test session starts =======================================================
platform linux -- Python 3.12.3, pytest-8.3.2, pluggy-1.5.0
rootdir: /home/arjun/garak
configfile: pyproject.toml
plugins: requests-mock-1.12.1, anyio-4.4.0, respx-0.21.1
collected 1 item
tests/generators/test_litellm.py 🦜 loading generator: LiteLLM: gpt-3.5-turbo
F
============================================================ FAILURES =============================================================
_______________________________________________________ test_litellm_openai _______________________________________________________
@pytest.mark.skipif(
getenv("OPENAI_API_KEY", None) is None,
reason="OpenAI API key is not set in OPENAI_API_KEY",
)
def test_litellm_openai():
model_name = "gpt-3.5-turbo"
> generator = LiteLLMGenerator(name=model_name)
tests/generators/test_litellm.py:16:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <garak.generators.litellm.LiteLLMGenerator object at 0x7d1108f95d30>, name = 'gpt-3.5-turbo', generations = 10
config_root = <module 'garak._config' from '/home/arjun/garak/garak/_config.py'>
def __init__(self, name: str = "", generations: int = 10, config_root=_config):
self.name = name
self.api_base = None
self.api_key = None
self.provider = None
self.key_env_var = self.ENV_VAR
self.generations = generations
self._load_config(config_root)
self.fullname = f"LiteLLM {self.name}"
self.supports_multiple_generations = not any(
self.name.startswith(provider)
for provider in unsupported_multiple_gen_providers
)
super().__init__(
self.name, generations=self.generations, config_root=config_root
)
if self.provider is None:
> raise ValueError(
"litellm generator needs to have a provider value configured - see docs"
E ValueError: litellm generator needs to have a provider value configured - see docs
garak/generators/litellm.py:129: ValueError
===================================================== short test summary info =====================================================
FAILED tests/generators/test_litellm.py::test_litellm_openai - ValueError: litellm generator needs to have a provider value configured - see docs
======================================================== 1 failed in 1.06s ========================================================
Exception raised by litellm for a non-existent model:
>>> from garak.generators.litellm import LiteLLMGenerator
>>> non_existent_model = "non-existent-model"
>>> generator = LiteLLMGenerator(name=non_existent_model)
🦜 loading generator: LiteLLM: non-existent-model
>>> generator.generate("This should raise an exception!")
Provider List: https://docs.litellm.ai/docs/providers
INFO:backoff:Backing off _call_model(...) for 0.0s (litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=non-existent-model
Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers)
I have read the DCO Document and I hereby sign the DCO
Thank you, will take a look
@arjun-krishna1, please follow the fine print at the end of the bot's DCO comment to trigger action again.
recheck
Signed-off-by: Arjun Krishna <[email protected]>
force-pushed from 4b24bc1 to c8a53c2
This is definitely an improvement, and it exposes another layer that needs to be accounted for.
With this change, if no provider is passed, the litellm.completion() performed in _call_model() will enter an infinite backoff when it raises exceptions for a missing api_key value, depending on which provider it autodetects from the model name provided.
A try block around litellm.completion() is needed that captures litellm.exceptions.AuthenticationError and possibly raises garak.exception.BadGeneratorException from the original error to cause the run to exit. Also, @backoff.on_exception would need to back off only on litellm.exceptions.APIError instead of on any raised Exception.
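A minimal sketch of the handling described above might look like the following; the decorator arguments, function shape, and error message are assumptions for illustration, not the merged garak code — only the exception classes come from the comment itself.

# Hedged sketch of the review suggestion; names other than the exception
# classes mentioned above are illustrative.
import backoff
import litellm

from garak.exception import BadGeneratorException


# back off only on transient litellm APIError, not on every raised Exception
@backoff.on_exception(backoff.fibo, litellm.exceptions.APIError, max_value=70)
def _call_model(model_name: str, prompt: str) -> list:
    try:
        response = litellm.completion(
            model=model_name,
            messages=[{"role": "user", "content": prompt}],
        )
    except litellm.exceptions.AuthenticationError as e:
        # a missing or invalid api_key cannot be fixed by retrying, so end the run
        raise BadGeneratorException(
            f"litellm authentication failed for model {model_name}"
        ) from e
    return [choice.message.content for choice in response.choices]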
Signed-off-by: Arjun Krishna <[email protected]>
…tent model Signed-off-by: Arjun Krishna <[email protected]>
Thanks for the review @jmartin-tech
Signed-off-by: Arjun Krishna <[email protected]>
To complete this constraint removal, the code needs to fully support passing model type detection on to litellm. This means removing the class-level ENV_VAR and raising for the specific errors that are thrown when litellm cannot determine the target API client to utilize.
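For illustration only, a sketch under the assumption that the provider-detection failure surfaces as the BadRequestError shown in the earlier backoff log; the function name and message are hypothetical, not the committed change.

# Illustrative sketch: convert litellm's provider-detection failure into a
# run-ending garak error instead of retrying.
import litellm

from garak.exception import BadGeneratorException


def _complete_with_provider_check(model_name: str, prompt: str):
    try:
        return litellm.completion(
            model=model_name,
            messages=[{"role": "user", "content": prompt}],
        )
    except litellm.exceptions.BadRequestError as e:
        # raised when litellm cannot determine the target API client
        # ("LLM Provider NOT provided ...")
        raise BadGeneratorException(
            f"litellm could not detect a provider for model '{model_name}'"
        ) from e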
Co-authored-by: Jeffrey Martin <[email protected]> Signed-off-by: Arjun Krishna <[email protected]>
Signed-off-by: Arjun Krishna <[email protected]>
Co-authored-by: Jeffrey Martin <[email protected]> Signed-off-by: Arjun Krishna <[email protected]>
Signed-off-by: Arjun Krishna <[email protected]>
Signed-off-by: Arjun Krishna <[email protected]>
Signed-off-by: Arjun Krishna <[email protected]>
force-pushed from 028659f to 3f74b4d
Hi @jmartin-tech, I think I've resolved all your comments so far
Testing shows one more minor UX improvement is needed.
$ python -m garak -m litellm -n openai/meta/llama3-8b-instruct -g 1 -p continuation --generator_option_file litellm.json 2> /dev/null
garak LLM vulnerability scanner v0.9.0.14.post1 ( https://github.com/leondz/garak ) at 2024-08-22T10:36:48.688446
📜 logging to /home/jemartin/.local/share/garak/garak.log
🦜 loading generator: LiteLLM: openai/meta/llama3-8b-instruct
📜 reporting to /home/jemartin/.local/share/garak/garak_runs/garak.703878b2-c57d-49f2-8af0-bff238965aab.report.jsonl
🕵️ queue of probes: continuation.ContinueSlursReclaimedSlursMini
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
The raised BadGeneratorException can report the message used for instantiation, to guide the user in determining why the failure occurred. Committing a suggested error message and landing this shortly.
$ python -m garak -m litellm -n openai/meta/llama3-8b-instruct -g 1 -p continuation --generator_option_file litellm.json 2> /dev/null
garak LLM vulnerability scanner v0.9.0.14.post1 ( https://github.com/leondz/garak ) at 2024-08-22T10:37:55.025515
📜 logging to /home/jemartin/.local/share/garak/garak.log
🦜 loading generator: LiteLLM: openai/meta/llama3-8b-instruct
📜 reporting to /home/jemartin/.local/share/garak/garak_runs/garak.3838265e-c493-4bc7-a5de-0a047d016900.report.jsonl
🕵️ queue of probes: continuation.ContinueSlursReclaimedSlursMini
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
Unrecoverable error during litellm completion see log for details
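As a rough sketch of how that user-facing message could be raised (only the message string comes from the run output above; the helper around it is assumed, not the committed code):

# Hedged sketch; the surrounding helper is hypothetical.
from garak.exception import BadGeneratorException


def _abort_litellm_run(original_error: Exception) -> None:
    # keep the user-facing message short and point at the log for detail
    raise BadGeneratorException(
        "Unrecoverable error during litellm completion see log for details"
    ) from original_error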
Signed-off-by: Jeffrey Martin <[email protected]>
Thank you @arjun-krishna1!!
Fixing: #755
test_litellm#test_litellm_openai