Replies: 2 comments 1 reply
-
Thanks for sharing this, @sumba101. A major change to the cost/usage API is currently in progress which will make it very simple to ingest token usage and/or USD cost of LLM calls, addressing exactly this. I'll ping you once it's released.
1 reply
-
@sumba101 We've just released v2.0.0, which optionally allows you to ingest costs for each generation object. Hope you like it. Let me know if you have any feedback or thoughts.
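As a rough sketch of what ingesting a cost per generation could look like, the payload below assembles token counts and USD costs into one usage object. The field names (`input_cost`, `output_cost`, `total_cost`, `unit`) are assumptions for illustration, not a confirmed Langfuse v2 schema:

```python
# Hypothetical sketch: the shape of a usage payload that carries
# explicit USD costs alongside token counts. Field names are
# assumptions, not confirmed Langfuse v2 API.

def build_usage(input_tokens, output_tokens, input_cost, output_cost):
    """Assemble a usage dict with both token counts and USD costs."""
    return {
        "input": input_tokens,
        "output": output_tokens,
        "total": input_tokens + output_tokens,
        "unit": "TOKENS",
        "input_cost": input_cost,
        "output_cost": output_cost,
        "total_cost": input_cost + output_cost,
    }

# Example: a call whose cost was computed in the provider proxy layer.
usage = build_usage(1200, 340, 0.0012, 0.00068)
```

A payload like this would let the caller's own cost calculation take precedence over any price table on the Langfuse side.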
-
Although cost is tracked within Langfuse, developers often track it with a lot of other services as well (for example, in their LLM provider proxy layer) and need to cover custom cases, such as self-hosted models or models not covered by the Langfuse model-cost tracker.
This is an issue I have faced personally: it leads to completely inaccurate cost figures on my Langfuse dashboard, leaving me with 4-5 graphs that are useless or misleading.
The solution?
Read the response cost for a generation from a 'response_cost' field in its metadata, if present. This field would act as the authoritative cost of tokens for that trace.
Benefits?
Allows a high degree of flexibility in how Langfuse is used and opens up integration with a plethora of other services.
Please vote to support this feature