fix(ui): CSV export empty on Global Usage page #23819

ryan-crabbe merged 1 commit into litellm_ryan_march_16
Conversation
The aggregated endpoint returns an empty `breakdown.entities`; fall back to grouping `breakdown.api_keys` by `team_id`.
Greptile Summary

This PR fixes an empty CSV export on the Global Usage page by adding a client-side fallback (`resolveEntities`) that reconstructs entity data when the aggregated endpoint returns an empty `breakdown.entities`. Key changes:
Confidence Score: 4/5
| Filename | Overview |
|---|---|
| ui/litellm-dashboard/src/components/EntityUsageExport/utils.ts | Adds resolveEntities + aggregateApiKeysIntoEntities fallback that groups breakdown.api_keys by metadata.team_id when breakdown.entities is empty. Applied correctly to all four export functions. Minor: resolveEntities is called twice per day in generateDailyWithModelsData, re-running aggregation unnecessarily. |
| ui/litellm-dashboard/src/components/EntityUsageExport/utils.test.ts | Comprehensive new test block covering resolveEntities (empty entities, populated entities, Unassigned fallback, missing api_keys, preserved api_key_breakdown) and all four export generators with aggregated data. Test for generateDailyWithModelsData only checks row existence, not spend values, which could mask model double-counting if the fixture is extended. |
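The fallback described in the table above might look roughly like the following sketch. The type shapes, metric field names (`spend`, `api_requests`), and the "Unassigned" bucket are assumptions drawn from the summaries on this page, not the actual implementation in `utils.ts`:

```typescript
// Hypothetical sketch of the client-side fallback: when the aggregated
// endpoint returns an empty breakdown.entities, rebuild per-team entities
// by grouping breakdown.api_keys on metadata.team_id.
type Metrics = { spend: number; api_requests: number };

interface KeyEntry {
  metrics: Metrics;
  metadata?: { team_id?: string | null };
}

interface EntityEntry {
  metrics: Metrics;
  api_key_breakdown: Record<string, KeyEntry>;
}

function aggregateApiKeysIntoEntities(
  apiKeys: Record<string, KeyEntry>,
): Record<string, EntityEntry> {
  const entities: Record<string, EntityEntry> = {};
  for (const [keyHash, entry] of Object.entries(apiKeys)) {
    // Keys without team metadata land in an "Unassigned" bucket.
    const teamId = entry.metadata?.team_id ?? "Unassigned";
    const entity = (entities[teamId] ??= {
      metrics: { spend: 0, api_requests: 0 },
      api_key_breakdown: {},
    });
    entity.metrics.spend += entry.metrics.spend;
    entity.metrics.api_requests += entry.metrics.api_requests;
    entity.api_key_breakdown[keyHash] = entry; // preserve per-key detail
  }
  return entities;
}

function resolveEntities(breakdown: {
  entities?: Record<string, EntityEntry>;
  api_keys?: Record<string, KeyEntry>;
}): Record<string, EntityEntry> {
  if (breakdown.entities && Object.keys(breakdown.entities).length > 0) {
    return breakdown.entities; // normal path: server provided entities
  }
  return aggregateApiKeysIntoEntities(breakdown.api_keys ?? {});
}
```

This preserves the per-key data under `api_key_breakdown` so the keys export scope keeps working against reconstructed entities.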
Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Export triggered] --> B[generateExportData]
    B --> C{exportScope}
    C -->|daily| D[generateDailyData]
    C -->|daily_with_keys| E[generateDailyWithKeysData]
    C -->|daily_with_models| F[generateDailyWithModelsData]
    D --> G[resolveEntities per day]
    E --> G
    F --> G
    G --> H{breakdown.entities populated?}
    H -->|Yes - normal path| I[Return breakdown.entities directly]
    H -->|No - aggregated endpoint| J[aggregateApiKeysIntoEntities]
    J --> K[Group breakdown.api_keys\nby metadata.team_id]
    K --> L[Accumulate METRIC_KEYS\nper team group]
    L --> M[Preserve api_key_breakdown\nper team]
    M --> N[Return reconstructed entities map]
    I --> O[Iterate entities → build rows]
    N --> O
    O --> P[CSV / JSON export]
```
Last reviewed commit: c098eca
```diff
 Object.entries(dailyEntityModels).forEach(([entity, models]) => {
-  const entityData = day.breakdown.entities?.[entity];
+  const entityData = resolveEntities(day.breakdown)[entity];
```
Redundant `resolveEntities` call

`resolveEntities(day.breakdown)` is called a second time here (line 276), after already being called in the outer `forEach` at line 248. When the fallback path is taken, this re-runs the entire `aggregateApiKeysIntoEntities` aggregation for every entity in the second loop. Caching the result in a variable would prevent the redundant work.
Suggested change:

```diff
-Object.entries(dailyEntityModels).forEach(([entity, models]) => {
-  const entityData = resolveEntities(day.breakdown)[entity];
+const resolvedEntities = resolveEntities(day.breakdown);
+Object.entries(resolvedEntities).forEach(([entity, entityData]: [string, any]) => {
```

Then replace both usages of `resolveEntities(day.breakdown)` in this function body with `resolvedEntities`.
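To make the cost of the duplicate call concrete, here is a self-contained illustration of the cached pattern the comment suggests. Everything except the name `resolveEntities` is an illustrative stand-in, not the dashboard's real code:

```typescript
// Sketch of the review suggestion: resolve entities once per day and reuse
// the result in both iteration passes, instead of re-running the (possibly
// expensive) fallback aggregation on each call.
let aggregationRuns = 0;

function resolveEntities(breakdown: {
  entities?: Record<string, number>;
}): Record<string, number> {
  aggregationRuns++; // stands in for the cost of aggregateApiKeysIntoEntities
  return breakdown.entities ?? {};
}

function buildRowsForDay(day: {
  breakdown: { entities?: Record<string, number> };
}): string[] {
  // Cache the resolved map instead of calling resolveEntities in each loop.
  const resolvedEntities = resolveEntities(day.breakdown);
  const rows: string[] = [];
  Object.keys(resolvedEntities).forEach((entity) => rows.push(`entity:${entity}`));
  Object.keys(resolvedEntities).forEach((entity) => rows.push(`models:${entity}`));
  return rows;
}
```

With the cached variable, `aggregationRuns` stays at one per day regardless of how many loops iterate over the result.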
```typescript
    expect(result[0]).toHaveProperty("Team");
  });
});

describe("generateDailyWithKeysData with aggregated data", () => {
  it("should produce rows from api_keys when entities is empty", () => {
    const result = generateDailyWithKeysData(aggregatedSpendData, "Team");
    expect(result.length).toBeGreaterThan(0);
```
This test only verifies that rows are produced and that the "Model" property exists — it does not verify actual spend or request values per model. The underlying generateDailyWithModelsData logic adds all API key metrics to every model in breakdown.models. With only one model in the test fixture ("gpt-4") the multiplication factor is 1, so values happen to be correct by coincidence. If a second model were added to aggregatedSpendData.breakdown.models, each model row would carry the full team spend rather than its own share, and this test would not catch it.
Consider adding assertions on the spend values per model to catch regressions if the metric attribution logic changes:

```typescript
expect(result.find(r => r.Model === "gpt-4")).toBeDefined();
// Verify spend is not double-counted when multiple models exist
```
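One way to harden that check is to assert that the per-model spend rows partition the team total rather than each carrying the full amount. The helper below is illustrative only; the row shape and fixture values are assumptions, not the actual test fixture:

```typescript
// Illustrative invariant: the per-model rows for a team should sum to the
// team's total spend. If each row carried the full team spend (the
// double-counting bug described above), the sum would overshoot the total.
interface ModelRow {
  Model: string;
  Spend: number;
}

function modelSpendPartitionsTotal(rows: ModelRow[], teamTotal: number): boolean {
  const sum = rows.reduce((acc, row) => acc + row.Spend, 0);
  return Math.abs(sum - teamTotal) < 1e-9; // tolerate float rounding
}
```

In the test this could back an assertion like `expect(modelSpendPartitionsTotal(teamRows, expectedTeamTotal)).toBe(true)` once a second model is added to the fixture.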
Type
🐛 Bug Fix
Changes
The aggregated endpoint returns an empty `breakdown.entities`; fall back to grouping `breakdown.api_keys` by `team_id`.