Merged
@@ -81,32 +81,32 @@
"method": "stream",
"span_name": "anthropic.chat",
},
# # Beta API methods (regular Anthropic SDK)
# {
# "package": "anthropic.resources.beta.messages.messages",
# "object": "Messages",
# "method": "create",
# "span_name": "anthropic.chat",
# },
# {
# "package": "anthropic.resources.beta.messages.messages",
# "object": "Messages",
# "method": "stream",
# "span_name": "anthropic.chat",
# },
# # Beta API methods (Bedrock SDK)
# {
# "package": "anthropic.lib.bedrock._beta_messages",
# "object": "Messages",
# "method": "create",
# "span_name": "anthropic.chat",
# },
# {
# "package": "anthropic.lib.bedrock._beta_messages",
# "object": "Messages",
# "method": "stream",
# "span_name": "anthropic.chat",
# },
# Beta API methods (regular Anthropic SDK)
{
"package": "anthropic.resources.beta.messages.messages",
"object": "Messages",
"method": "create",
"span_name": "anthropic.chat",
},
{
"package": "anthropic.resources.beta.messages.messages",
"object": "Messages",
"method": "stream",
"span_name": "anthropic.chat",
},
# Beta API methods (Bedrock SDK)
{
"package": "anthropic.lib.bedrock._beta_messages",
"object": "Messages",
"method": "create",
"span_name": "anthropic.chat",
},
{
"package": "anthropic.lib.bedrock._beta_messages",
"object": "Messages",
"method": "stream",
"span_name": "anthropic.chat",
},
]

WRAPPED_AMETHODS = [
@@ -122,32 +122,32 @@
"method": "create",
"span_name": "anthropic.chat",
},
# # Beta API async methods (regular Anthropic SDK)
# {
# "package": "anthropic.resources.beta.messages.messages",
# "object": "AsyncMessages",
# "method": "create",
# "span_name": "anthropic.chat",
# },
# {
# "package": "anthropic.resources.beta.messages.messages",
# "object": "AsyncMessages",
# "method": "stream",
# "span_name": "anthropic.chat",
# },
# # Beta API async methods (Bedrock SDK)
# {
# "package": "anthropic.lib.bedrock._beta_messages",
# "object": "AsyncMessages",
# "method": "create",
# "span_name": "anthropic.chat",
# },
# {
# "package": "anthropic.lib.bedrock._beta_messages",
# "object": "AsyncMessages",
# "method": "stream",
# "span_name": "anthropic.chat",
# },
# Beta API async methods (regular Anthropic SDK)
{
"package": "anthropic.resources.beta.messages.messages",
"object": "AsyncMessages",
"method": "create",
"span_name": "anthropic.chat",
},
{
"package": "anthropic.resources.beta.messages.messages",
"object": "AsyncMessages",
"method": "stream",
"span_name": "anthropic.chat",
},
# Beta API async methods (Bedrock SDK)
{
"package": "anthropic.lib.bedrock._beta_messages",
"object": "AsyncMessages",
"method": "create",
"span_name": "anthropic.chat",
},
{
"package": "anthropic.lib.bedrock._beta_messages",
"object": "AsyncMessages",
"method": "stream",
"span_name": "anthropic.chat",
},
]
Comment on lines +125 to 151
💡 Verification agent

🧩 Analysis chain

Async Beta wrappers added: verify stream wrapping strategy for AsyncMessages.stream.

For non-beta Messages, AsyncMessages.stream is wrapped with the sync wrapper because it is an async context manager (entered with `async with`, not awaited). Beta AsyncMessages.stream likely follows the same pattern. If so, wrapping it via WRAPPED_AMETHODS (the async wrapper) would be incorrect, and the sync-wrapper approach should be used instead.
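A toy sketch (these classes are illustrative, not the SDK's real internals) shows why such a helper belongs with the sync wrapper: the `stream()` call itself is synchronous and returns a manager object immediately; only `__aenter__`/`__aexit__` are async, so there is nothing for an async wrapper to await.

```python
import asyncio


class FakeAsyncStream:
    # Stands in for AsyncMessageStreamManager: async only at enter/exit.
    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        return False


class FakeAsyncMessages:
    def stream(self):  # plain `def`: the call returns the manager directly
        return FakeAsyncStream()


def sync_wrapper(wrapped):
    # A sync wrapper suffices because wrapped() returns immediately;
    # an async wrapper that awaited wrapped() would be wrong here.
    def wrapper(*args, **kwargs):
        return wrapped(*args, **kwargs)
    return wrapper


async def demo():
    messages = FakeAsyncMessages()
    messages.stream = sync_wrapper(messages.stream)
    async with messages.stream() as stream:  # call not awaited, only entered
        return type(stream).__name__


result = asyncio.run(demo())
print(result)  # → FakeAsyncStream
```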

Consider this change if verification confirms AsyncMessages.stream is an async context manager for beta as well:

@@ WRAPPED_METHODS @@
     {
         "package": "anthropic.resources.beta.messages.messages",
         "object": "Messages",
         "method": "stream",
         "span_name": "anthropic.chat",
     },
+    # Beta AsyncMessages.stream is an async context manager; use the sync wrapper
+    {
+        "package": "anthropic.resources.beta.messages.messages",
+        "object": "AsyncMessages",
+        "method": "stream",
+        "span_name": "anthropic.chat",
+    },
@@
     {
         "package": "anthropic.lib.bedrock._beta_messages",
         "object": "Messages",
         "method": "stream",
         "span_name": "anthropic.chat",
     },
+    {
+        "package": "anthropic.lib.bedrock._beta_messages",
+        "object": "AsyncMessages",
+        "method": "stream",
+        "span_name": "anthropic.chat",
+    },
@@ WRAPPED_AMETHODS @@
-    {
-        "package": "anthropic.resources.beta.messages.messages",
-        "object": "AsyncMessages",
-        "method": "stream",
-        "span_name": "anthropic.chat",
-    },
@@
-    {
-        "package": "anthropic.lib.bedrock._beta_messages",
-        "object": "AsyncMessages",
-        "method": "stream",
-        "span_name": "anthropic.chat",
-    },

To confirm via docs:


🌐 Web query:

In the Anthropic Python SDK, is beta AsyncMessages.stream used as an async context manager (async with) for both regular and Bedrock clients, returning AsyncMessageStreamManager?

💡 Result:

Short answer: Yes.

Details:

  • The async streaming helper is used as an async context manager — e.g. async with client.messages.stream(...) — for the async Anthropic client, and the stream object exposes .text_stream, .get_final_message(), .close(), etc. (mmacy.github.io, docs.anthropic.com)
  • The Bedrock variant exposes the same streaming interface (the Bedrock client/classes provide the same streaming helpers and can be used the same way). Examples and docs for the Anthropic Bedrock clients show equivalent streaming usage. (github.com, anthropic-bedrock.com)
  • The SDK’s API docs show the messages.stream helper returns a MessageStreamManager (sync) / the async equivalent (AsyncMessageStreamManager) which is the context-manager type that yields the stream object you iterate over. Wrappers and examples in the ecosystem also refer to AsyncMessageStreamManager/AsyncMessageStream. (github.com, docs.smith.langchain.com)

If you want, I can paste a minimal async-with example for your exact SDK version (or check which installed version you have and show the correct import / beta vs non-beta call).


Correct AsyncMessages.stream instrumentation

The beta AsyncMessages.stream helper in both the regular and Bedrock SDKs is an async context manager (it returns an AsyncMessageStreamManager used with async with), so it should be wrapped via the sync-wrapper list (WRAPPED_METHODS), not the async-wrapper list (WRAPPED_AMETHODS).

Locations to update:

  • In WRAPPED_METHODS, add entries for AsyncMessages.stream under both beta packages.
  • In WRAPPED_AMETHODS, remove the existing AsyncMessages.stream entries under both beta packages.

Suggested diff:

@@ WRAPPED_METHODS @@
     {
         "package": "anthropic.resources.beta.messages.messages",
         "object": "Messages",
         "method": "stream",
         "span_name": "anthropic.chat",
     },
+    # Beta AsyncMessages.stream is an async context manager; use the sync wrapper
+    {
+        "package": "anthropic.resources.beta.messages.messages",
+        "object": "AsyncMessages",
+        "method": "stream",
+        "span_name": "anthropic.chat",
+    },
@@ WRAPPED_METHODS @@
     {
         "package": "anthropic.lib.bedrock._beta_messages",
         "object": "Messages",
         "method": "stream",
         "span_name": "anthropic.chat",
     },
+    {
+        "package": "anthropic.lib.bedrock._beta_messages",
+        "object": "AsyncMessages",
+        "method": "stream",
+        "span_name": "anthropic.chat",
+    },
@@ WRAPPED_AMETHODS @@
-    {
-        "package": "anthropic.resources.beta.messages.messages",
-        "object": "AsyncMessages",
-        "method": "stream",
-        "span_name": "anthropic.chat",
-    },
@@ WRAPPED_AMETHODS @@
-    {
-        "package": "anthropic.lib.bedrock._beta_messages",
-        "object": "AsyncMessages",
-        "method": "stream",
-        "span_name": "anthropic.chat",
-    },
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Beta API async methods (regular Anthropic SDK)
{
"package": "anthropic.resources.beta.messages.messages",
"object": "AsyncMessages",
"method": "create",
"span_name": "anthropic.chat",
},
{
"package": "anthropic.resources.beta.messages.messages",
"object": "AsyncMessages",
"method": "stream",
"span_name": "anthropic.chat",
},
# Beta API async methods (Bedrock SDK)
{
"package": "anthropic.lib.bedrock._beta_messages",
"object": "AsyncMessages",
"method": "create",
"span_name": "anthropic.chat",
},
{
"package": "anthropic.lib.bedrock._beta_messages",
"object": "AsyncMessages",
"method": "stream",
"span_name": "anthropic.chat",
},
]
# === WRAPPED_METHODS ===
WRAPPED_METHODS = [
# Beta API sync methods (regular Anthropic SDK)
{
"package": "anthropic.resources.beta.messages.messages",
"object": "Messages",
"method": "stream",
"span_name": "anthropic.chat",
},
# Beta AsyncMessages.stream is an async context manager; use the sync wrapper
{
"package": "anthropic.resources.beta.messages.messages",
"object": "AsyncMessages",
"method": "stream",
"span_name": "anthropic.chat",
},
# Beta API sync methods (Bedrock SDK)
{
"package": "anthropic.lib.bedrock._beta_messages",
"object": "Messages",
"method": "stream",
"span_name": "anthropic.chat",
},
{
"package": "anthropic.lib.bedrock._beta_messages",
"object": "AsyncMessages",
"method": "stream",
"span_name": "anthropic.chat",
},
# ... other entries ...
]
# === WRAPPED_AMETHODS ===
WRAPPED_AMETHODS = [
# Beta API async methods (regular Anthropic SDK)
{
"package": "anthropic.resources.beta.messages.messages",
"object": "AsyncMessages",
"method": "create",
"span_name": "anthropic.chat",
},
# (AsyncMessages.stream removed)
# Beta API async methods (Bedrock SDK)
{
"package": "anthropic.lib.bedrock._beta_messages",
"object": "AsyncMessages",
"method": "create",
"span_name": "anthropic.chat",
},
# (AsyncMessages.stream removed)
# ... other entries ...
]



@@ -182,14 +182,35 @@ async def _aset_token_usage(
token_histogram: Histogram = None,
choice_counter: Counter = None,
):
from opentelemetry.instrumentation.anthropic.utils import _aextract_response_data
import inspect

# If we get a coroutine, await it
Contributor
Both _aset_token_usage and _set_token_usage duplicate the logic for handling with_raw_response responses (checking for a coroutine and parsing via 'parse'). Consider extracting this common logic into a helper to improve maintainability.

if inspect.iscoroutine(response):
try:
response = await response
except Exception as e:
import logging
logger = logging.getLogger(__name__)
logger.debug(f"Failed to await coroutine response: {e}")
return

# Handle with_raw_response wrapped responses first
if response and hasattr(response, "parse") and callable(response.parse):
try:
response = response.parse()
except Exception as e:
import logging
logger = logging.getLogger(__name__)
logger.debug(f"Failed to parse with_raw_response: {e}")
return

response = await _aextract_response_data(response)
# Safely get usage attribute without extracting the whole object
usage = getattr(response, "usage", None) if response else None

if usage := response.get("usage"):
prompt_tokens = usage.input_tokens
cache_read_tokens = dict(usage).get("cache_read_input_tokens", 0) or 0
cache_creation_tokens = dict(usage).get("cache_creation_input_tokens", 0) or 0
if usage:
prompt_tokens = getattr(usage, "input_tokens", 0)
cache_read_tokens = getattr(usage, "cache_read_input_tokens", 0) or 0
cache_creation_tokens = getattr(usage, "cache_creation_input_tokens", 0) or 0
else:
prompt_tokens = await acount_prompt_tokens_from_request(anthropic, request)
cache_read_tokens = 0
@@ -206,18 +227,18 @@
},
)

if usage := response.get("usage"):
completion_tokens = usage.output_tokens
if usage:
completion_tokens = getattr(usage, "output_tokens", 0)
else:
completion_tokens = 0
if hasattr(anthropic, "count_tokens"):
if response.get("completion"):
completion_attr = getattr(response, "completion", None)
content_attr = getattr(response, "content", None)
if completion_attr:
completion_tokens = await anthropic.count_tokens(completion_attr)
elif content_attr and len(content_attr) > 0:
completion_tokens = await anthropic.count_tokens(
response.get("completion")
)
elif response.get("content"):
completion_tokens = await anthropic.count_tokens(
response.get("content")[0].text
content_attr[0].text
)

if (
@@ -236,17 +257,19 @@
total_tokens = input_tokens + completion_tokens

choices = 0
if isinstance(response.get("content"), list):
choices = len(response.get("content"))
elif response.get("completion"):
content_attr = getattr(response, "content", None)
completion_attr = getattr(response, "completion", None)
if isinstance(content_attr, list):
choices = len(content_attr)
elif completion_attr:
choices = 1

if choices > 0 and choice_counter:
choice_counter.add(
choices,
attributes={
**metric_attributes,
SpanAttributes.LLM_RESPONSE_STOP_REASON: response.get("stop_reason"),
SpanAttributes.LLM_RESPONSE_STOP_REASON: getattr(response, "stop_reason", None),
},
)

@@ -276,14 +299,32 @@ def _set_token_usage(
token_histogram: Histogram = None,
choice_counter: Counter = None,
):
from opentelemetry.instrumentation.anthropic.utils import _extract_response_data
import inspect

# If we get a coroutine, we cannot process it in sync context
if inspect.iscoroutine(response):
import logging
logger = logging.getLogger(__name__)
logger.warning(f"_set_token_usage received coroutine {response} - token usage processing skipped")
return

# Handle with_raw_response wrapped responses first
if response and hasattr(response, "parse") and callable(response.parse):
try:
response = response.parse()
except Exception as e:
import logging
logger = logging.getLogger(__name__)
logger.debug(f"Failed to parse with_raw_response: {e}")
return

response = _extract_response_data(response)
# Safely get usage attribute without extracting the whole object
usage = getattr(response, "usage", None) if response else None

if usage := response.get("usage"):
prompt_tokens = usage.input_tokens
cache_read_tokens = dict(usage).get("cache_read_input_tokens", 0) or 0
cache_creation_tokens = dict(usage).get("cache_creation_input_tokens", 0) or 0
if usage:
prompt_tokens = getattr(usage, "input_tokens", 0)
cache_read_tokens = getattr(usage, "cache_read_input_tokens", 0) or 0
cache_creation_tokens = getattr(usage, "cache_creation_input_tokens", 0) or 0
else:
prompt_tokens = count_prompt_tokens_from_request(anthropic, request)
cache_read_tokens = 0
@@ -300,16 +341,18 @@
},
)

if usage := response.get("usage"):
completion_tokens = usage.output_tokens
if usage:
completion_tokens = getattr(usage, "output_tokens", 0)
else:
completion_tokens = 0
if hasattr(anthropic, "count_tokens"):
if response.get("completion"):
completion_tokens = anthropic.count_tokens(response.get("completion"))
elif response.get("content"):
completion_attr = getattr(response, "completion", None)
content_attr = getattr(response, "content", None)
if completion_attr:
completion_tokens = anthropic.count_tokens(completion_attr)
elif content_attr and len(content_attr) > 0:
completion_tokens = anthropic.count_tokens(
response.get("content")[0].text
content_attr[0].text
)

if (
@@ -328,17 +371,19 @@
total_tokens = input_tokens + completion_tokens

choices = 0
if isinstance(response.get("content"), list):
choices = len(response.get("content"))
elif response.get("completion"):
content_attr = getattr(response, "content", None)
completion_attr = getattr(response, "completion", None)
if isinstance(content_attr, list):
choices = len(content_attr)
elif completion_attr:
choices = 1

if choices > 0 and choice_counter:
choice_counter.add(
choices,
attributes={
**metric_attributes,
SpanAttributes.LLM_RESPONSE_STOP_REASON: response.get("stop_reason"),
SpanAttributes.LLM_RESPONSE_STOP_REASON: getattr(response, "stop_reason", None),
},
)

@@ -136,27 +136,78 @@ def _extract_response_data(response):

@dont_throw
async def ashared_metrics_attributes(response):
response = await _aextract_response_data(response)
import inspect

# If we get a coroutine, await it
if inspect.iscoroutine(response):
try:
response = await response
except Exception as e:
import logging
logger = logging.getLogger(__name__)
logger.debug(f"Failed to await coroutine response: {e}")
response = None

# If it's already a dict (e.g., from streaming), use it directly
if isinstance(response, dict):
model = response.get("model")
else:
# Handle with_raw_response wrapped responses first
if response and hasattr(response, "parse") and callable(response.parse):
try:
response = response.parse()
except Exception as e:
import logging
logger = logging.getLogger(__name__)
logger.debug(f"Failed to parse with_raw_response: {e}")
response = None

# Safely get model attribute without extracting the whole object
model = getattr(response, "model", None) if response else None

Comment on lines +152 to 167
🛠️ Refactor suggestion

Model extraction may miss dicts returned by parse(); normalize before branching.

If response.parse() returns a dict, the current code flows to getattr(dict, "model", None) and loses the model. Normalize to a parsed object first, then branch on dict vs attr.
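A minimal snippet illustrates the failure mode (the model string here is just an example value):

```python
# getattr reads object attributes, not mapping keys, so a dict returned
# by parse() yields None here while .get() finds the value.
parsed = {"model": "claude-example"}  # illustrative payload

via_getattr = getattr(parsed, "model", None)
via_get = parsed.get("model")

print(via_getattr, via_get)  # → None claude-example
```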

Apply this diff:

-    # If it's already a dict (e.g., from streaming), use it directly
-    if isinstance(response, dict):
-        model = response.get("model")
-    else:
-        # Handle with_raw_response wrapped responses first
-        if response and hasattr(response, "parse") and callable(response.parse):
-            try:
-                response = response.parse()
-            except Exception as e:
-                import logging
-                logger = logging.getLogger(__name__)
-                logger.debug(f"Failed to parse with_raw_response: {e}")
-                response = None
-
-        # Safely get model attribute without extracting the whole object
-        model = getattr(response, "model", None) if response else None
+    # Normalize to parsed object (if applicable), then extract model from dict or attr
+    resp_obj = response
+    if resp_obj and hasattr(resp_obj, "parse") and callable(resp_obj.parse):
+        try:
+            resp_obj = resp_obj.parse()
+        except Exception as e:
+            import logging
+            logger = logging.getLogger(__name__)
+            logger.debug(f"Failed to parse with_raw_response: {e}")
+            resp_obj = None
+
+    if isinstance(resp_obj, dict):
+        model = resp_obj.get("model")
+    else:
+        # Safely get model attribute without extracting the whole object
+        model = getattr(resp_obj, "model", None) if resp_obj else None
📝 Committable suggestion


Suggested change
if isinstance(response, dict):
model = response.get("model")
else:
# Handle with_raw_response wrapped responses first
if response and hasattr(response, "parse") and callable(response.parse):
try:
response = response.parse()
except Exception as e:
import logging
logger = logging.getLogger(__name__)
logger.debug(f"Failed to parse with_raw_response: {e}")
response = None
# Safely get model attribute without extracting the whole object
model = getattr(response, "model", None) if response else None
# Normalize to parsed object (if applicable), then extract model from dict or attr
resp_obj = response
if resp_obj and hasattr(resp_obj, "parse") and callable(resp_obj.parse):
try:
resp_obj = resp_obj.parse()
except Exception as e:
import logging
logger = logging.getLogger(__name__)
logger.debug(f"Failed to parse with_raw_response: {e}")
resp_obj = None
if isinstance(resp_obj, dict):
model = resp_obj.get("model")
else:
# Safely get model attribute without extracting the whole object
model = getattr(resp_obj, "model", None) if resp_obj else None
🤖 Prompt for AI Agents
In
packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/utils.py
around lines 152 to 167, the code calls response.parse() but then immediately
uses getattr on the original/parsed value which can be a dict, losing model
extraction; update the flow to first attempt to parse (catching/logging
exceptions as currently done) and assign the parsed result back to response,
then branch: if isinstance(response, dict) use response.get("model"), else use
getattr(response, "model", None). Ensure the existing exception logging remains
and only falls back to None if parse fails.

common_attributes = Config.get_common_metrics_attributes()

return {
**common_attributes,
GEN_AI_SYSTEM: GEN_AI_SYSTEM_ANTHROPIC,
SpanAttributes.LLM_RESPONSE_MODEL: response.get("model"),
SpanAttributes.LLM_RESPONSE_MODEL: model,
}


@dont_throw
def shared_metrics_attributes(response):
response = _extract_response_data(response)
import inspect

# If we get a coroutine, we cannot process it in sync context
if inspect.iscoroutine(response):
import logging
logger = logging.getLogger(__name__)
logger.warning(f"shared_metrics_attributes received coroutine {response} - using None for model")
response = None

# If it's already a dict (e.g., from streaming), use it directly
if isinstance(response, dict):
model = response.get("model")
else:
# Handle with_raw_response wrapped responses first
if response and hasattr(response, "parse") and callable(response.parse):
try:
response = response.parse()
except Exception as e:
import logging
logger = logging.getLogger(__name__)
logger.debug(f"Failed to parse with_raw_response: {e}")
response = None

# Safely get model attribute without extracting the whole object
model = getattr(response, "model", None) if response else None

Comment on lines +189 to 204
🛠️ Refactor suggestion

Same dict-after-parse issue in shared_metrics_attributes; align with async fix.

Mirror the normalization before model extraction to avoid losing the model when parse() returns a dict.

Apply this diff:

-    # If it's already a dict (e.g., from streaming), use it directly
-    if isinstance(response, dict):
-        model = response.get("model")
-    else:
-        # Handle with_raw_response wrapped responses first
-        if response and hasattr(response, "parse") and callable(response.parse):
-            try:
-                response = response.parse()
-            except Exception as e:
-                import logging
-                logger = logging.getLogger(__name__)
-                logger.debug(f"Failed to parse with_raw_response: {e}")
-                response = None
-
-        # Safely get model attribute without extracting the whole object
-        model = getattr(response, "model", None) if response else None
+    # Normalize to parsed object (if applicable), then extract model from dict or attr
+    resp_obj = response
+    if resp_obj and hasattr(resp_obj, "parse") and callable(resp_obj.parse):
+        try:
+            resp_obj = resp_obj.parse()
+        except Exception as e:
+            import logging
+            logger = logging.getLogger(__name__)
+            logger.debug(f"Failed to parse with_raw_response: {e}")
+            resp_obj = None
+
+    if isinstance(resp_obj, dict):
+        model = resp_obj.get("model")
+    else:
+        # Safely get model attribute without extracting the whole object
+        model = getattr(resp_obj, "model", None) if resp_obj else None

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In
packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/utils.py
around lines 189 to 204, the code extracts model after attempting to parse
with_raw_response but doesn’t normalize the parsed result, so if parse() returns
a dict the model is lost; change the logic so that when response has a callable
parse() you call response.parse(), assign its return value back to response, and
then normalize by checking if response is a dict to extract model via
response.get("model") else use getattr(response, "model", None) — keep the
try/except logging behavior for parse() failures and ensure response is only set
to None on exception, mirroring the async fix.

common_attributes = Config.get_common_metrics_attributes()

return {
**common_attributes,
GEN_AI_SYSTEM: GEN_AI_SYSTEM_ANTHROPIC,
SpanAttributes.LLM_RESPONSE_MODEL: response.get("model"),
SpanAttributes.LLM_RESPONSE_MODEL: model,
}

