anthropic_api_key not used for ChatLiteLLM #27826

Open
chenzimin opened this issue Nov 1, 2024 · 1 comment
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@chenzimin

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

The following code:

from langchain_community.chat_models import ChatLiteLLM
from langchain_core.messages import HumanMessage

# The key is passed directly to the constructor instead of via the ANTHROPIC_API_KEY environment variable
chat = ChatLiteLLM(model="claude-3-haiku-20240307", anthropic_api_key="...")

messages = [
    HumanMessage(
        content="Translate this sentence from English to French. I love programming."
    )
]
chat(messages)

will raise the following error:

AuthenticationError: litellm.AuthenticationError: Missing Anthropic API Key - A call is being made to anthropic but no key is set either in the environment variables or via params. Please set `ANTHROPIC_API_KEY` in your environment vars

However, setting the ANTHROPIC_API_KEY environment variable does work:

import os
from langchain_community.chat_models import ChatLiteLLM
from langchain_core.messages import HumanMessage
os.environ["ANTROPIC_API_KEY"] = "xxx"

chat = ChatLiteLLM(model="claude-3-haiku-20240307")

messages = [
    HumanMessage(
        content="Translate this sentence from English to French. I love programming."
    )
]
chat(messages)
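
A third option that avoids touching os.environ may be litellm's module-level provider key. This is only a sketch, under the assumption that the installed litellm version falls back to litellm.anthropic_key when no key is passed per call and no environment variable is set:

import litellm
from langchain_community.chat_models import ChatLiteLLM
from langchain_core.messages import HumanMessage

# Assumption: litellm resolves module-level provider keys such as
# litellm.anthropic_key when neither an api_key argument nor the
# ANTHROPIC_API_KEY environment variable is provided.
litellm.anthropic_key = "xxx"

chat = ChatLiteLLM(model="claude-3-haiku-20240307")

messages = [
    HumanMessage(
        content="Translate this sentence from English to French. I love programming."
    )
]
chat.invoke(messages)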

Error Message and Stack Trace (if applicable)

Full stack trace

/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py in warning_emitting_wrapper(*args, **kwargs)
    180                 warned = True
    181                 emit_warning()
--> 182             return wrapped(*args, **kwargs)
    183 
    184         async def awarning_emitting_wrapper(*args: Any, **kwargs: Any) -> Any:

/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in __call__(self, messages, stop, callbacks, **kwargs)
   1015         **kwargs: Any,
   1016     ) -> BaseMessage:
-> 1017         generation = self.generate(
   1018             [messages], stop=stop, callbacks=callbacks, **kwargs
   1019         ).generations[0][0]

/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    641                 if run_managers:
    642                     run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 643                 raise e
    644         flattened_outputs = [
    645             LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item]

/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    631             try:
    632                 results.append(
--> 633                     self._generate_with_cache(
    634                         m,
    635                         stop=stop,

/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in _generate_with_cache(self, messages, stop, run_manager, **kwargs)
    849         else:
    850             if inspect.signature(self._generate).parameters.get("run_manager"):
--> 851                 result = self._generate(
    852                     messages, stop=stop, run_manager=run_manager, **kwargs
    853                 )

/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/litellm.py in _generate(self, messages, stop, run_manager, stream, **kwargs)
    357         message_dicts, params = self._create_message_dicts(messages, stop)
    358         params = {**params, **kwargs}
--> 359         response = self.completion_with_retry(
    360             messages=message_dicts, run_manager=run_manager, **params
    361         )

/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/litellm.py in completion_with_retry(self, run_manager, **kwargs)
    290             return self.client.completion(**kwargs)
    291 
--> 292         return _completion_with_retry(**kwargs)
    293 
    294     @pre_init

/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in wrapped_f(*args, **kw)
    334             copy = self.copy()
    335             wrapped_f.statistics = copy.statistics  # type: ignore[attr-defined]
--> 336             return copy(f, *args, **kw)
    337 
    338         def retry_with(*args: t.Any, **kwargs: t.Any) -> WrappedFn:

/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in __call__(self, fn, *args, **kwargs)
    473         retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
    474         while True:
--> 475             do = self.iter(retry_state=retry_state)
    476             if isinstance(do, DoAttempt):
    477                 try:

/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in iter(self, retry_state)
    374         result = None
    375         for action in self.iter_state.actions:
--> 376             result = action(retry_state)
    377         return result
    378 

/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in <lambda>(rs)
    396     def _post_retry_check_actions(self, retry_state: "RetryCallState") -> None:
    397         if not (self.iter_state.is_explicit_retry or self.iter_state.retry_run_result):
--> 398             self._add_action_func(lambda rs: rs.outcome.result())
    399             return
    400 

/usr/lib/python3.10/concurrent/futures/_base.py in result(self, timeout)
    449                     raise CancelledError()
    450                 elif self._state == FINISHED:
--> 451                     return self.__get_result()
    452 
    453                 self._condition.wait(timeout)

/usr/lib/python3.10/concurrent/futures/_base.py in __get_result(self)
    401         if self._exception:
    402             try:
--> 403                 raise self._exception
    404             finally:
    405                 # Break a reference cycle with the exception in self._exception

/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in __call__(self, fn, *args, **kwargs)
    476             if isinstance(do, DoAttempt):
    477                 try:
--> 478                     result = fn(*args, **kwargs)
    479                 except BaseException:  # noqa: B902
    480                     retry_state.set_exception(sys.exc_info())  # type: ignore[arg-type]

/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/litellm.py in _completion_with_retry(**kwargs)
    288         @retry_decorator
    289         def _completion_with_retry(**kwargs: Any) -> Any:
--> 290             return self.client.completion(**kwargs)
    291 
    292         return _completion_with_retry(**kwargs)

/usr/local/lib/python3.10/dist-packages/litellm/utils.py in wrapper(*args, **kwargs)
   1011                     e, traceback_exception, start_time, end_time
   1012                 )  # DO NOT MAKE THREADED - router retry fallback relies on this!
-> 1013             raise e
   1014 
   1015     @wraps(original_function)

/usr/local/lib/python3.10/dist-packages/litellm/utils.py in wrapper(*args, **kwargs)
    901                     print_verbose(f"Error while checking max token limit: {str(e)}")
    902             # MODEL CALL
--> 903             result = original_function(*args, **kwargs)
    904             end_time = datetime.datetime.now()
    905             if "stream" in kwargs and kwargs["stream"] is True:

/usr/local/lib/python3.10/dist-packages/litellm/main.py in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, modalities, audio, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
   2997     except Exception as e:
   2998         ## Map to OpenAI Exception
-> 2999         raise exception_type(
   3000             model=model,
   3001             custom_llm_provider=custom_llm_provider,

/usr/local/lib/python3.10/dist-packages/litellm/main.py in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, modalities, audio, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
   1753                     api_base += "/v1/messages"
   1754 
-> 1755                 response = anthropic_chat_completions.completion(
   1756                     model=model,
   1757                     messages=messages,

/usr/local/lib/python3.10/dist-packages/litellm/llms/anthropic/chat/handler.py in completion(self, model, messages, api_base, custom_prompt_dict, model_response, print_verbose, encoding, api_key, logging_obj, optional_params, timeout, acompletion, litellm_params, logger_fn, headers, client)
    446         client=None,
    447     ):
--> 448         headers = validate_environment(
    449             api_key,
    450             headers,

/usr/local/lib/python3.10/dist-packages/litellm/llms/anthropic/chat/handler.py in validate_environment(api_key, user_headers, model, messages, tools, anthropic_version)
     64 
     65     if api_key is None:
---> 66         raise litellm.AuthenticationError(
     67             message="Missing Anthropic API Key - A call is being made to anthropic but no key is set either in the environment variables or via params. Please set `ANTHROPIC_API_KEY` in your environment vars",
     68             llm_provider="anthropic",

AuthenticationError: litellm.AuthenticationError: Missing Anthropic API Key - A call is being made to anthropic but no key is set either in the environment variables or via params. Please set `ANTHROPIC_API_KEY` in your environment vars

Description

I am trying to use ChatLiteLLM by passing anthropic_api_key directly to the constructor instead of setting the environment variable, but it raises an error saying the Anthropic API key is not set.

System Info

System Information

OS: Linux
OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
Python Version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]

Package Information

langchain_core: 0.3.15
langchain: 0.3.6
langchain_community: 0.3.4
langsmith: 0.1.137
langchain_text_splitters: 0.3.0

Optional packages not installed

langgraph
langserve

Other Dependencies

aiohttp: 3.10.10
async-timeout: 4.0.3
dataclasses-json: 0.6.7
httpx: 0.27.2
httpx-sse: 0.4.0
jsonpatch: 1.33
numpy: 1.26.4
orjson: 3.10.10
packaging: 24.1
pydantic: 2.9.2
pydantic-settings: 2.6.1
PyYAML: 6.0.2
requests: 2.32.3
requests-toolbelt: 1.0.0
SQLAlchemy: 2.0.36
tenacity: 9.0.0
typing-extensions: 4.12.2

@dosubot dosubot bot added the 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature label Nov 1, 2024
@jamesev15
Contributor

jamesev15 commented Nov 1, 2024

@ccurme, @DangerousPotential the error is happening because the litellm client needs an api_key parameter to work properly. It just needs to be set with any of the available provider API keys to get things running smoothly!
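
To illustrate that point, here is a sketch of a per-call workaround (not a confirmed fix): the traceback shows that ChatLiteLLM._generate merges call-time kwargs into the params it forwards (params = {**params, **kwargs}) before calling litellm.completion, and the completion() signature in the traceback accepts api_key, so passing the key at call time should reach the Anthropic handler:

from langchain_community.chat_models import ChatLiteLLM
from langchain_core.messages import HumanMessage

chat = ChatLiteLLM(model="claude-3-haiku-20240307")

messages = [
    HumanMessage(
        content="Translate this sentence from English to French. I love programming."
    )
]

# Extra kwargs passed to invoke() are merged into the completion params by
# ChatLiteLLM._generate and forwarded to litellm.completion, which accepts
# api_key directly.
response = chat.invoke(messages, api_key="xxx")
print(response.content)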
