
fix: improve chart performance with large datasets using LTTB downsampling#457

Closed
tylerkron wants to merge 2 commits into main from claude/great-kapitsa

Conversation

@tylerkron
Contributor

Summary

Closes #36

  • Implements the Largest-Triangle-Three-Buckets (LTTB) downsampling algorithm to reduce rendered data points to ~5,000 per channel while preserving visual shape (peaks, troughs, spikes)
  • Live chart (PlotLogger): Increases the raw data buffer from 5k to 50k points per channel (10x more history), then decimates to 5k for rendering on each 1s update cycle. Uses gap-aware decimation to preserve line breaks at data stream gaps
  • Stored session playback (DatabaseLogger): Replaces the hard 1M point cap (which silently dropped data) with per-channel LTTB decimation that displays all data. Adds OrderBy(TimestampTicks) to ensure time-sorted data for correct decimation
  • Adds DataPointDecimator with both standard and gap-aware decimation methods
  • Adds 15 unit tests covering null/edge cases, output size, first/last point preservation, visual fidelity (sine wave peaks, spike preservation), ordering, gap marker preservation, and performance (1M points in <1s)
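For reference, the core LTTB selection step can be sketched as below. This is a minimal, self-contained version for illustration only; the PR's DataPointDecimator (its point type, method names, and gap handling) may differ.

```csharp
using System;
using System.Collections.Generic;

// Illustrative stand-in for OxyPlot's DataPoint.
public readonly record struct Point(double X, double Y);

public static class Lttb
{
    // Largest-Triangle-Three-Buckets: keep first/last points, then pick one
    // point per bucket that forms the largest triangle with the previously
    // selected point and the average of the next bucket.
    public static List<Point> Decimate(IReadOnlyList<Point> points, int threshold)
    {
        if (points.Count <= threshold || threshold < 3)
            return new List<Point>(points);

        var result = new List<Point>(threshold) { points[0] }; // always keep first
        var bucketSize = (double)(points.Count - 2) / (threshold - 2);
        var a = 0; // index of the last selected point

        for (var i = 0; i < threshold - 2; i++)
        {
            // Average of the *next* bucket acts as the third triangle vertex.
            var nextStart = (int)Math.Floor((i + 1) * bucketSize) + 1;
            var nextEnd = Math.Min((int)Math.Floor((i + 2) * bucketSize) + 1, points.Count);
            double avgX = 0, avgY = 0;
            for (var j = nextStart; j < nextEnd; j++) { avgX += points[j].X; avgY += points[j].Y; }
            var n = nextEnd - nextStart;
            avgX /= n; avgY /= n;

            // Pick the point in the current bucket with the largest triangle area.
            var start = (int)Math.Floor(i * bucketSize) + 1;
            var end = Math.Min((int)Math.Floor((i + 1) * bucketSize) + 1, points.Count);
            double maxArea = -1; var maxIdx = start;
            for (var j = start; j < end; j++)
            {
                var area = Math.Abs(
                    (points[a].X - avgX) * (points[j].Y - points[a].Y) -
                    (points[a].X - points[j].X) * (avgY - points[a].Y)) / 2;
                if (area > maxArea) { maxArea = area; maxIdx = j; }
            }
            result.Add(points[maxIdx]);
            a = maxIdx;
        }

        result.Add(points[^1]); // always keep last
        return result;
    }
}
```

Because only bucket maxima are kept, peaks and spikes survive even at aggressive ratios, which is why the output still "looks like" the raw series.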

Test plan

  • Verify live chart performance with high-frequency streaming (multiple channels)
  • Verify stored session playback with 100k+ sample sessions renders quickly
  • Verify gap markers (line breaks) are preserved in live chart when device disconnects/reconnects
  • Verify chart subtitle shows "Showing X downsampled points from Y total samples" for large sessions
  • Verify visual fidelity — peaks and spikes should still be visible after downsampling
  • Run unit tests on Windows CI

🤖 Generated with Claude Code

…pling (#36)

When displaying 100k+ samples, OxyPlot rendered every data point, making
the UI unusable. Implements the Largest-Triangle-Three-Buckets (LTTB)
algorithm to downsample data to 5k points per channel while preserving
visual shape (peaks, troughs, spikes).

- Add DataPointDecimator with LTTB and gap-aware decimation
- PlotLogger: increase raw buffer to 50k, render decimated 5k points
- DatabaseLogger: replace 1M hard cap with per-channel LTTB decimation
- DatabaseLogger: add OrderBy to ensure time-sorted data for LTTB
- Add 15 unit tests covering correctness, fidelity, and performance

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@tylerkron tylerkron requested a review from a team as a code owner April 4, 2026 16:19
@qodo-code-review
Contributor

Review Summary by Qodo

Implement LTTB downsampling for large dataset chart performance

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• Implements LTTB downsampling algorithm to handle large datasets efficiently
  - Reduces rendered points to ~5,000 per channel while preserving visual shape
  - Preserves peaks, troughs, and spikes in time-series data
• Increases PlotLogger raw buffer from 5k to 50k points per channel
  - Decimates to 5k for rendering every 1s cycle
  - Uses gap-aware decimation to preserve line breaks at data stream gaps
• Replaces DatabaseLogger 1M point hard cap with per-channel LTTB decimation
  - Displays all data instead of silently dropping excess points
  - Adds OrderBy(TimestampTicks) to ensure time-sorted data for correct decimation
• Adds 15 comprehensive unit tests covering edge cases, visual fidelity, and performance
Diagram
flowchart LR
  A["Raw Data Points<br/>50k per channel"] -->|"Decimate every 1s"| B["DataPointDecimator<br/>LTTB Algorithm"]
  B -->|"Gap-aware"| C["5k Display Points<br/>with gap markers"]
  C --> D["PlotLogger<br/>Live Chart"]
  E["Database Samples<br/>All records"] -->|"OrderBy Timestamp"| F["DataPointDecimator<br/>Per-channel LTTB"]
  F --> G["Decimated Points<br/>Visual fidelity preserved"]
  G --> H["DatabaseLogger<br/>Session Playback"]


File Changes

1. Daqifi.Desktop/Loggers/DataPointDecimator.cs ✨ Enhancement +172/-0

New LTTB decimation algorithm implementation

• Implements Largest-Triangle-Three-Buckets (LTTB) algorithm for data point decimation
• Provides Decimate() method that reduces points to target threshold while preserving visual shape
• Provides DecimateWithGaps() method that handles gap markers (DataPoint.Undefined) by decimating segments independently
• Preserves first and last points, and maintains X-axis ordering

Daqifi.Desktop/Loggers/DataPointDecimator.cs


2. Daqifi.Desktop.Test/Loggers/DataPointDecimatorTests.cs 🧪 Tests +290/-0

Comprehensive unit tests for decimation algorithm

• Adds 15 unit tests covering null/empty inputs, threshold edge cases, and output size validation
• Tests visual fidelity with sine wave peaks/troughs and spike preservation in flat data
• Tests gap marker preservation and segment-wise decimation with DecimateWithGaps()
• Tests performance with 1M points decimating in under 1 second
• Includes helper methods for generating linear and sine wave test data

Daqifi.Desktop.Test/Loggers/DataPointDecimatorTests.cs


3. Daqifi.Desktop/Loggers/PlotLogger.cs ✨ Enhancement +27/-6

Integrate LTTB decimation into live chart rendering

• Increases raw data buffer from 5k to 50k points per channel via MAX_RAW_POINTS_PER_CHANNEL constant
• Adds _decimatedPoints dictionary to store display-ready decimated data separate from raw buffer
• Modifies AddChannelSeries() to bind LineSeries ItemsSource to decimated points instead of raw points
• Updates CompositionTargetRendering() to decimate raw points every 1s cycle using DecimateWithGaps() to preserve gap markers
• Clears decimated points in ClearPlot() method

Daqifi.Desktop/Loggers/PlotLogger.cs


4. Daqifi.Desktop/Loggers/DatabaseLogger.cs ✨ Enhancement +17/-15

Replace hard cap with per-channel LTTB decimation

• Adds OrderBy(s => s.TimestampTicks) to ensure time-sorted data for correct LTTB decimation
• Removes hard 1M point cap that silently dropped excess data
• Replaces single-pass data loading with per-channel LTTB decimation after loading all samples
• Updates subtitle to show "Showing X downsampled points from Y total samples" instead of hard cap message
• Simplifies data loading loop by removing early break condition

Daqifi.Desktop/Loggers/DatabaseLogger.cs



@qodo-code-review
Contributor

qodo-code-review Bot commented Apr 4, 2026

Code Review by Qodo

🐞 Bugs (4) 📘 Rule violations (3) 📎 Requirement gaps (0) 🎨 UX Issues (0)



Action required

1. O(n) head removals🐞 Bug ➹ Performance
Description
PlotLogger.Log enforces the 50k rolling buffer using List.RemoveAt(0), which is O(n) and will shift
up to ~50k elements per removal. With high-frequency streaming this can dominate CPU time under
PlotModel.SyncRoot and cause UI/logging stalls.
Code

Daqifi.Desktop/Loggers/PlotLogger.cs[R210-223]

            if (_gapDetector.IsGap(key, dataSample.FirmwareDeltaMs))
            {
                LoggedPoints[key].Add(DataPoint.Undefined);
-                if (LoggedPoints[key].Count >= 5000)
+                if (LoggedPoints[key].Count >= MAX_RAW_POINTS_PER_CHANNEL)
                {
                    LoggedPoints[key].RemoveAt(0);
                }
            }

            LoggedPoints[key].Add(new DataPoint(deltaTime, scaledSampleValue));
-            if (LoggedPoints[key].Count >= 5000)
+            if (LoggedPoints[key].Count >= MAX_RAW_POINTS_PER_CHANNEL)
            {
                LoggedPoints[key].RemoveAt(0);
            }
Evidence
The buffer size was raised to 50,000 points per channel, but the buffer is still maintained by
repeatedly calling RemoveAt(0) inside the critical section. On a List<T>, removing the first
element shifts all remaining elements, making this O(n) per sample once the buffer is full.

Daqifi.Desktop/Loggers/PlotLogger.cs[19-31]
Daqifi.Desktop/Loggers/PlotLogger.cs[208-223]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The live plot retains a rolling window by doing `List.RemoveAt(0)` whenever the buffer exceeds `MAX_RAW_POINTS_PER_CHANNEL`. This is O(n) per removal and gets much worse with a 50k window.

### Issue Context
This runs in the hot path (`PlotLogger.Log`) and under `lock (PlotModel.SyncRoot)`, increasing contention with UI rendering/decimation.

### Fix Focus Areas
- Replace `List<DataPoint>` rolling window with a ring buffer / deque abstraction.
- If keeping `List<DataPoint>`, avoid per-sample head removal (e.g., track a start index and periodically compact; or remove in batches).

- Daqifi.Desktop/Loggers/PlotLogger.cs[208-223]
- Daqifi.Desktop/Loggers/PlotLogger.cs[19-31]
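The ring-buffer replacement suggested above can be sketched as follows — a minimal CircularBuffer<T> with O(1) Add that overwrites the oldest element once full, eliminating the per-sample head shift. Illustrative only; the buffer actually added in the follow-up commit may differ in detail.

```csharp
using System;
using System.Collections.Generic;

// Fixed-capacity ring buffer: Add is O(1); once full, the oldest element
// is overwritten instead of being removed via List.RemoveAt(0).
public sealed class CircularBuffer<T>
{
    private readonly T[] _items;
    private int _head; // index of the logically oldest element
    public int Count { get; private set; }

    public CircularBuffer(int capacity) => _items = new T[capacity];

    public void Add(T item)
    {
        if (Count < _items.Length)
        {
            _items[(_head + Count) % _items.Length] = item;
            Count++;
        }
        else
        {
            _items[_head] = item;                 // overwrite oldest
            _head = (_head + 1) % _items.Length;  // advance logical start
        }
    }

    // Logical indexing: index 0 is the oldest retained element.
    public T this[int index] => _items[(_head + index) % _items.Length];

    public List<T> ToList()
    {
        var list = new List<T>(Count);
        for (var i = 0; i < Count; i++) list.Add(this[i]);
        return list;
    }
}
```

At a 50k window this turns the steady-state cost of appending a sample from ~50k element moves into a single array write.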

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Gap decimation exceeds threshold🐞 Bug ≡ Correctness
Description
DataPointDecimator.DecimateWithGaps can return more points than the requested threshold because it
increases availableThreshold and allocates per-segment thresholds without enforcing a global cap.
This can defeat the intended ~5,000 point render budget and cause rendering/perf regressions when
many gaps exist.
Code

Daqifi.Desktop/Loggers/DataPointDecimator.cs[R146-168]

+        // Distribute the threshold proportionally across segments
+        var totalDataPoints = segments.Sum(s => s.Count);
+        var gapMarkerCount = segments.Count - 1;
+        var availableThreshold = threshold - gapMarkerCount;
+        if (availableThreshold < segments.Count * 3)
+        {
+            availableThreshold = segments.Count * 3;
+        }
+
+        var result = new List<DataPoint>(threshold);
+
+        for (var i = 0; i < segments.Count; i++)
+        {
+            if (i > 0)
+            {
+                result.Add(DataPoint.Undefined);
+            }
+
+            var segment = segments[i];
+            var segmentThreshold = Math.Max(3, (int)Math.Round((double)segment.Count / totalDataPoints * availableThreshold));
+            var decimated = Decimate(segment, segmentThreshold);
+            result.AddRange(decimated);
+        }
Evidence
The method explicitly increases the point budget (availableThreshold) above the caller-provided
threshold minus gap markers, then rounds per-segment allocations and appends them all without
clamping, so result.Count can exceed threshold.

Daqifi.Desktop/Loggers/DataPointDecimator.cs[146-170]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`DecimateWithGaps` may return more points than `threshold` due to (1) inflating `availableThreshold` and (2) per-segment rounding/minimums without any final cap.

### Issue Context
Callers (e.g., PlotLogger) use `threshold` to bound render work. Returning more points violates the contract and can degrade performance.

### Fix Focus Areas
- Ensure `availableThreshold` never exceeds `(threshold - gapMarkerCount)`.
- Allocate per-segment thresholds so that `sum(segmentThresholds) + gapMarkerCount <= threshold` (e.g., floor allocations then distribute remainder; enforce min per segment only when feasible).
- If minimum-per-segment cannot fit, decide and document behavior (e.g., reduce segments by merging tiny segments, or allow fewer than 3 points for very small segments).

- Daqifi.Desktop/Loggers/DataPointDecimator.cs[146-170]
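The capped allocation described above can be sketched like this (BudgetAllocator and Allocate are hypothetical names): floor each segment's proportional share so rounding can never overshoot, then hand the leftover points to the largest segments one at a time.

```csharp
using System;
using System.Linq;

public static class BudgetAllocator
{
    // Allocate per-segment point budgets so that
    // sum(shares) + gapMarkers <= threshold (when threshold permits).
    public static int[] Allocate(int[] segmentCounts, int threshold)
    {
        var gapMarkers = segmentCounts.Length - 1;
        // Never inflate the budget past the caller's threshold; only clamp
        // upward to one point per segment as an absolute floor.
        var budget = Math.Max(segmentCounts.Length, threshold - gapMarkers);
        var total = segmentCounts.Sum();

        // Floor allocation guarantees the sum never exceeds the budget.
        var shares = segmentCounts
            .Select(c => (int)((long)c * budget / total))
            .ToArray();

        // Distribute the rounding remainder to the largest segments first.
        var remainder = budget - shares.Sum();
        foreach (var i in Enumerable.Range(0, segmentCounts.Length)
                     .OrderByDescending(i => segmentCounts[i]))
        {
            if (remainder == 0) break;
            shares[i]++; remainder--;
        }
        return shares;
    }
}
```

A real fix would still have to decide what to do when even one point per segment cannot fit (merge tiny segments, or relax the 3-point minimum), as the review notes.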



3. Cross-thread series mutation 🐞 Bug ☼ Reliability
Description
PlotLogger can mutate PlotModel.Series and its point dictionaries from device sample callbacks that
originate in async/background message handling, but AddChannelSeries performs those mutations
without holding PlotModel.SyncRoot. This can lead to races with UI rendering
(CompositionTarget.Rendering) and crashes or corrupted plot state.
Code

Daqifi.Desktop/Loggers/PlotLogger.cs[R302-312]

+                // Decimate raw points into the display lists bound to each series
+                // Use DecimateWithGaps to preserve line breaks at data stream gaps
+                foreach (var kvp in LoggedPoints)
+                {
+                    if (_decimatedPoints.TryGetValue(kvp.Key, out var displayPoints))
+                    {
+                        var decimated = DataPointDecimator.DecimateWithGaps(kvp.Value);
+                        displayPoints.Clear();
+                        displayPoints.AddRange(decimated);
+                    }
+                }
Evidence
Streaming messages are handled via an async handler (fire-and-forget) that sets
channel.ActiveSample, which immediately raises OnChannelUpdated without dispatching to the UI
thread. LoggingManager forwards these updates directly to each logger via logger.Log(sample). In
PlotLogger.Log, the decision to create a new series (AddChannelSeries) happens outside the SyncRoot
lock, and AddChannelSeries mutates LoggedPoints, _decimatedPoints, LoggedChannels, and
PlotModel.Series without taking that lock. Meanwhile, CompositionTarget.Rendering (UI thread)
holds SyncRoot and enumerates LoggedPoints, creating a real cross-thread mutation risk.

Daqifi.Desktop/Device/AbstractStreamingDevice.cs[219-234]
Daqifi.Desktop/Device/AbstractStreamingDevice.cs[236-310]
Daqifi.Desktop/Channel/AbstractChannel.cs[136-161]
Daqifi.Desktop/Loggers/LoggingManager.cs[423-443]
Daqifi.Desktop/Loggers/PlotLogger.cs[186-224]
Daqifi.Desktop/Loggers/PlotLogger.cs[238-276]
Daqifi.Desktop/Loggers/PlotLogger.cs[280-316]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`AddChannelSeries` mutates plot state (PlotModel.Series + dictionaries) without holding the same synchronization used elsewhere (`PlotModel.SyncRoot`). Streaming updates originate from async/background message handling, so this can race with UI rendering.

### Issue Context
`CompositionTarget.Rendering` is on the UI thread and enumerates `LoggedPoints` under `lock (PlotModel.SyncRoot)`. `PlotLogger.Log` only locks around point appends/removals, not around series/dictionary creation.

### Fix Focus Areas
- Wrap the `ContainsKey`/`AddChannelSeries` path inside `lock (PlotModel.SyncRoot)` (or otherwise ensure it runs on the UI thread via Dispatcher).
- Ensure all mutations of `LoggedPoints`, `_decimatedPoints`, `LoggedChannels`, and `PlotModel.Series` are guarded consistently.
- Consider also guarding `LoggedChannels[key].Color` updates similarly.

- Daqifi.Desktop/Loggers/PlotLogger.cs[186-224]
- Daqifi.Desktop/Loggers/PlotLogger.cs[238-276]
- Daqifi.Desktop/Loggers/PlotLogger.cs[280-316]
- Daqifi.Desktop/Device/AbstractStreamingDevice.cs[219-234]
- Daqifi.Desktop/Device/AbstractStreamingDevice.cs[236-310]
- Daqifi.Desktop/Channel/AbstractChannel.cs[136-161]
- Daqifi.Desktop/Loggers/LoggingManager.cs[423-443]
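The consistent-locking pattern can be illustrated generically (SeriesStore is a hypothetical stand-in; in the PR the lock object would be PlotModel.SyncRoot): series creation and point appends take the same lock the renderer holds while enumerating, so a reader never observes a half-built series.

```csharp
using System;
using System.Collections.Generic;

public sealed class SeriesStore
{
    private readonly object _sync = new(); // stand-in for PlotModel.SyncRoot
    private readonly Dictionary<string, List<double>> _series = new();

    public void Append(string key, double value)
    {
        lock (_sync)
        {
            // Creating the series and appending to it happen under the same
            // lock, so concurrent readers cannot race with dictionary growth.
            if (!_series.TryGetValue(key, out var points))
            {
                points = new List<double>();
                _series[key] = points;
            }
            points.Add(value);
        }
    }

    public int TotalPoints()
    {
        lock (_sync) // same lock guards enumeration (the renderer's role)
        {
            var total = 0;
            foreach (var kvp in _series) total += kvp.Value.Count;
            return total;
        }
    }
}
```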



4. Playback builds full point lists 🐞 Bug ☼ Reliability
Description
DatabaseLogger.DisplayLoggingSession materializes all samples into memory and then builds full
per-channel DataPoint lists before decimating, which can spike peak memory and slow loads for very
large sessions. The previous loop stopped adding after 1M points; now every sample is added to
per-channel lists before downsampling.
Code

Daqifi.Desktop/Loggers/DatabaseLogger.cs[R256-265]

                var dbSamples = context.Samples.AsNoTracking()
                    .Where(s => s.LoggingSessionID == session.ID)
+                    .OrderBy(s => s.TimestampTicks)
                    .Select(s => new { s.ChannelName, s.DeviceSerialNo, s.Type, s.Color, s.TimestampTicks, s.Value })
-                    .ToList(); // Bring data into memory
+                    .ToList();

                var samplesCount = dbSamples.Count;
-                const int dataPointsToShow = 1000000;
-
-                if (samplesCount > dataPointsToShow)
-                {
-                    subtitle = $"\nOnly showing {dataPointsToShow:n0} out of {samplesCount:n0} data points";
-                }

                var channelInfoList = dbSamples
                    .Select(s => new { s.ChannelName, s.DeviceSerialNo, s.Type, s.Color })
Evidence
The method loads all samples via ToList() and then appends a DataPoint for every sample into
_allSessionPoints prior to calling DataPointDecimator.Decimate. This means peak memory includes
both dbSamples and the full per-channel point lists; removing the earlier break means per-channel
lists can now grow to the full session size before being replaced with decimated lists.

Daqifi.Desktop/Loggers/DatabaseLogger.cs[256-304]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Session playback now appends *all* samples into `_allSessionPoints` before decimating, increasing peak memory and processing time for very large sessions.

### Issue Context
Even though the final `ItemsSource` is decimated, the peak load includes `dbSamples` + full `_allSessionPoints` lists.

### Fix Focus Areas
- Avoid storing `dbSamples` and full per-channel lists at the same time.
- Prefer querying/processing per channel (e.g., query distinct channels first, then for each channel run an ordered query and decimate) to reduce peak memory.
- Consider streaming enumeration (no `ToList`) and/or reducing intermediate allocations.

- Daqifi.Desktop/Loggers/DatabaseLogger.cs[256-304]




Remediation recommended

5. ToList() used in EF query 📘 Rule violation ➹ Performance
Description
The database query in DisplayLoggingSession executes synchronously via ToList(), which can block
the UI thread when loading large sessions. This violates the requirement to use async EF APIs (e.g.,
ToListAsync) for database operations in a desktop UI app.
Code

Daqifi.Desktop/Loggers/DatabaseLogger.cs[R256-260]

                var dbSamples = context.Samples.AsNoTracking()
                    .Where(s => s.LoggingSessionID == session.ID)
+                    .OrderBy(s => s.TimestampTicks)
                    .Select(s => new { s.ChannelName, s.DeviceSerialNo, s.Type, s.Color, s.TimestampTicks, s.Value })
-                    .ToList(); // Bring data into memory
+                    .ToList();
Evidence
PR Compliance ID 15 requires async database operations; the modified query materializes results with
synchronous ToList() in the changed lines.

CLAUDE.md
Daqifi.Desktop/Loggers/DatabaseLogger.cs[256-260]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`DisplayLoggingSession` loads samples using synchronous Entity Framework materialization (`ToList()`), which can block the UI thread for large sessions.

## Issue Context
The code path reads many rows (`context.Samples...ToList()`), and the compliance checklist requires async EF operations where appropriate for desktop UI responsiveness.

## Fix Focus Areas
- Daqifi.Desktop/Loggers/DatabaseLogger.cs[256-260]



6. Subtitle interpolation line too long 📘 Rule violation ⚙ Maintainability
Description
A newly added interpolated subtitle assignment is likely to exceed the 120-character line-length
standard, reducing readability and violating repository style rules. Long expressions should be
wrapped across multiple lines.
Code

Daqifi.Desktop/Loggers/DatabaseLogger.cs[R300-302]

+                if (totalDisplayPoints < samplesCount)
+                {
+                    subtitle = $"\nShowing {totalDisplayPoints:n0} downsampled points from {samplesCount:n0} total samples";
Evidence
PR Compliance IDs 3 and 23 require <=120-character lines and wrapped long expressions; the added
subtitle assignment is a long single-line interpolated string.

CLAUDE.md
Daqifi.Desktop/Loggers/DatabaseLogger.cs[300-302]
Best Practice: Learned patterns

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A long single-line interpolated string assignment likely exceeds the repository's 120-character line-length guidance.

## Issue Context
The repository formatting rules require wrapping long expressions/calls to keep diffs readable.

## Fix Focus Areas
- Daqifi.Desktop/Loggers/DatabaseLogger.cs[300-302]



7. segmentThreshold line too long📘 Rule violation ⚙ Maintainability
Description
The segmentThreshold calculation is written as a long single-line expression, likely exceeding the
120-character max line length requirement. This reduces readability and violates formatting/style
requirements.
Code

Daqifi.Desktop/Loggers/DataPointDecimator.cs[R164-166]

+            var segment = segments[i];
+            var segmentThreshold = Math.Max(3, (int)Math.Round((double)segment.Count / totalDataPoints * availableThreshold));
+            var decimated = Decimate(segment, segmentThreshold);
Evidence
PR Compliance IDs 3 and 23 require keeping lines <=120 characters and wrapping long expressions; the
segmentThreshold assignment is a long unwrapped expression in the new code.

CLAUDE.md
Daqifi.Desktop/Loggers/DataPointDecimator.cs[164-166]
Best Practice: Learned patterns

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The `segmentThreshold` calculation is a long single-line expression that likely violates the 120-character max line-length rule.

## Issue Context
Repository style rules require wrapping long expressions to keep code readable.

## Fix Focus Areas
- Daqifi.Desktop/Loggers/DataPointDecimator.cs[164-166]



8. Assert.IsTrue line too long 📘 Rule violation ⚙ Maintainability
Description
The ordered-by-X assertion message is implemented as a long interpolated string on a single line,
likely exceeding the 120-character limit. This violates the repository formatting/style rules for
readable tests.
Code

Daqifi.Desktop.Test/Loggers/DataPointDecimatorTests.cs[R139-141]

+            Assert.IsTrue(result[i].X >= result[i - 1].X,
+                $"Point at index {i} (X={result[i].X}) should be >= point at index {i - 1} (X={result[i - 1].X})");
+        }
Evidence
PR Compliance IDs 3 and 23 require <=120-character lines and wrapped long expressions; the test
assertion uses a long interpolated string message in the modified lines.

CLAUDE.md
Daqifi.Desktop.Test/Loggers/DataPointDecimatorTests.cs[139-141]
Best Practice: Learned patterns

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A long interpolated assertion message is written on a single line, likely exceeding the 120-character line-length standard.

## Issue Context
Repository formatting rules apply to test code as well; long expressions/strings should be wrapped for readability.

## Fix Focus Areas
- Daqifi.Desktop.Test/Loggers/DataPointDecimatorTests.cs[139-141]



9. Flaky performance unit test 🐞 Bug ☼ Reliability
Description
DataPointDecimatorTests.Decimate_LargeDataset_CompletesQuickly asserts a hard <1000ms runtime, which
depends on CI machine speed and load and can fail intermittently. This makes the test suite
unreliable even if correctness is unchanged.
Code

Daqifi.Desktop.Test/Loggers/DataPointDecimatorTests.cs[R145-160]

+    #region Large Dataset Performance
+    [TestMethod]
+    public void Decimate_LargeDataset_CompletesQuickly()
+    {
+        // 1 million points should decimate in well under a second
+        var points = GenerateLinearPoints(1_000_000);
+        const int threshold = 5000;
+
+        var stopwatch = System.Diagnostics.Stopwatch.StartNew();
+        var result = DataPointDecimator.Decimate(points, threshold);
+        stopwatch.Stop();
+
+        Assert.AreEqual(threshold, result.Count);
+        Assert.IsTrue(stopwatch.ElapsedMilliseconds < 1000,
+            $"Decimation of 1M points took {stopwatch.ElapsedMilliseconds}ms, expected < 1000ms");
+    }
Evidence
The unit test enforces a strict wall-clock time bound on a 1M-point workload; such assertions are
inherently environment-dependent and prone to intermittent failures under contention or slower
runners.

Daqifi.Desktop.Test/Loggers/DataPointDecimatorTests.cs[145-160]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
The test uses an absolute time threshold (`< 1000ms`) for 1M points, which is not stable across different CI environments.

### Issue Context
This is intended as a performance regression guard, but it will produce false failures.

### Fix Focus Areas
- Convert to a benchmark/perf test category excluded from normal CI, or
- Relax the threshold substantially and/or gate on environment (e.g., skip under debugger/CI), or
- Assert on algorithmic properties (result size, first/last preserved) and track perf separately.

- Daqifi.Desktop.Test/Loggers/DataPointDecimatorTests.cs[145-160]



10. Final bucket uses zero avg 🐞 Bug ≡ Correctness
Description
DataPointDecimator.Decimate leaves avgX/avgY as (0,0) when the next bucket range is empty in the
final iteration, and then uses that in triangle area selection. This biases the selection of the
last interior point and can reduce visual fidelity near the end of the series.
Code

Daqifi.Desktop/Loggers/DataPointDecimator.cs[R48-68]

+            // Calculate the range for the next bucket (used for the average point)
+            var nextBucketStart = (int)Math.Floor(i * bucketSize) + 1;
+            var nextBucketEnd = (int)Math.Floor((i + 1) * bucketSize) + 1;
+            if (nextBucketEnd > points.Count - 1) nextBucketEnd = points.Count - 1;
+
+            // Calculate the average point of the next bucket
+            var avgX = 0.0;
+            var avgY = 0.0;
+            var nextBucketCount = 0;
+            for (var j = nextBucketStart; j < nextBucketEnd; j++)
+            {
+                avgX += points[j].X;
+                avgY += points[j].Y;
+                nextBucketCount++;
+            }
+
+            if (nextBucketCount > 0)
+            {
+                avgX /= nextBucketCount;
+                avgY /= nextBucketCount;
+            }
Evidence
If nextBucketStart == nextBucketEnd, the averaging loop does not run and avgX/avgY remain 0; the
triangle area formula then uses that as the look-ahead point. On the final iteration,
nextBucketStart/nextBucketEnd can collapse due to clamping to points.Count - 1, making this
case deterministic.

Daqifi.Desktop/Loggers/DataPointDecimator.cs[48-83]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
When the computed next-bucket range is empty, `avgX/avgY` remain (0,0) and are used in the area computation, which can skew the last interior point selection.

### Issue Context
This occurs near the end of the decimation loop due to `nextBucketEnd` clamping.

### Fix Focus Areas
- When `nextBucketCount == 0`, set `(avgX, avgY)` to the last point (`points[^1]`) (or to a well-defined fallback) before computing triangle areas.

- Daqifi.Desktop/Loggers/DataPointDecimator.cs[48-83]
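The suggested fallback can be sketched as a small helper (Pt and AverageOrLast are hypothetical names): an empty next-bucket range yields the series' last point instead of a spurious (0, 0) vertex.

```csharp
using System;
using System.Collections.Generic;

public readonly record struct Pt(double X, double Y);

public static class BucketAverage
{
    // Average of points[start..end); when the range is empty, fall back to
    // the last point of the series rather than leaving (0, 0) to skew the
    // triangle-area selection.
    public static Pt AverageOrLast(IReadOnlyList<Pt> points, int start, int end)
    {
        if (start >= end)
            return points[points.Count - 1]; // well-defined fallback

        double sumX = 0, sumY = 0;
        for (var j = start; j < end; j++) { sumX += points[j].X; sumY += points[j].Y; }
        var n = end - start;
        return new Pt(sumX / n, sumY / n);
    }
}
```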




Comment thread Daqifi.Desktop/Loggers/DataPointDecimator.cs Outdated
Comment thread Daqifi.Desktop/Loggers/PlotLogger.cs Outdated
Comment on lines +302 to +312
                // Decimate raw points into the display lists bound to each series
                // Use DecimateWithGaps to preserve line breaks at data stream gaps
                foreach (var kvp in LoggedPoints)
                {
                    if (_decimatedPoints.TryGetValue(kvp.Key, out var displayPoints))
                    {
                        var decimated = DataPointDecimator.DecimateWithGaps(kvp.Value);
                        displayPoints.Clear();
                        displayPoints.AddRange(decimated);
                    }
                }

Action required

3. Cross-thread series mutation 🐞 Bug ☼ Reliability

(Duplicate of review item 3 above.)

Comment on lines 256 to 265
                 var dbSamples = context.Samples.AsNoTracking()
                     .Where(s => s.LoggingSessionID == session.ID)
+                    .OrderBy(s => s.TimestampTicks)
                     .Select(s => new { s.ChannelName, s.DeviceSerialNo, s.Type, s.Color, s.TimestampTicks, s.Value })
-                    .ToList(); // Bring data into memory
+                    .ToList();

                 var samplesCount = dbSamples.Count;
-                const int dataPointsToShow = 1000000;
-
-                if (samplesCount > dataPointsToShow)
-                {
-                    subtitle = $"\nOnly showing {dataPointsToShow:n0} out of {samplesCount:n0} data points";
-                }

                 var channelInfoList = dbSamples
                     .Select(s => new { s.ChannelName, s.DeviceSerialNo, s.Type, s.Color })

Action required

4. Playback builds full point lists 🐞 Bug ☼ Reliability

DatabaseLogger.DisplayLoggingSession materializes all samples into memory and then builds full
per-channel DataPoint lists before decimating, which can spike peak memory and slow loads for very
large sessions. The previous loop stopped adding after 1M points; now every sample is added to
per-channel lists before downsampling.
Agent Prompt
### Issue description
Session playback now appends *all* samples into `_allSessionPoints` before decimating, increasing peak memory and processing time for very large sessions.

### Issue Context
Even though the final `ItemsSource` is decimated, the peak load includes `dbSamples` + full `_allSessionPoints` lists.

### Fix Focus Areas
- Avoid storing `dbSamples` and full per-channel lists at the same time.
- Prefer querying/processing per channel (e.g., query distinct channels first, then for each channel run an ordered query and decimate) to reduce peak memory.
- Consider streaming enumeration (no `ToList`) and/or reducing intermediate allocations.

- Daqifi.Desktop/Loggers/DatabaseLogger.cs[256-304]
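The per-channel approach suggested above — query distinct channels first, then run one ordered query per channel and decimate before moving on — could look roughly like this Python/SQLite sketch. Table and column names are taken from the snippet above; `decimate` is a stand-in stride sampler, not the real LTTB implementation:

```python
import sqlite3

def decimate(points, threshold=1000):
    # Placeholder for LTTB: simple stride sampling that keeps first/last.
    if len(points) <= threshold:
        return points
    step = len(points) / threshold
    out = [points[int(i * step)] for i in range(threshold)]
    out[-1] = points[-1]
    return out

def load_session(conn, session_id, threshold=1000):
    """Sketch of per-channel loading: peak memory is one channel's
    samples, never the whole session at once."""
    channels = [r[0] for r in conn.execute(
        "SELECT DISTINCT ChannelName FROM Samples WHERE LoggingSessionID = ?",
        (session_id,))]
    result = {}
    for ch in channels:
        cur = conn.execute(
            "SELECT TimestampTicks, Value FROM Samples "
            "WHERE LoggingSessionID = ? AND ChannelName = ? "
            "ORDER BY TimestampTicks",
            (session_id, ch))
        points = cur.fetchall()  # one channel in memory at a time
        result[ch] = decimate(points, threshold)
    return result
```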

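For context, the LTTB decimation this PR introduces keeps the first and last points, splits the rest into buckets, and per bucket keeps the point forming the largest triangle with the previously kept point and the next bucket's average. An illustrative Python sketch (not the C# `DataPointDecimator` source):

```python
def lttb(points, threshold):
    """Largest-Triangle-Three-Buckets sketch over time-ordered (x, y) pairs."""
    n = len(points)
    if threshold >= n or threshold < 3:
        return list(points)
    out = [points[0]]                      # always keep the first point
    bucket = (n - 2) / (threshold - 2)
    a = 0                                  # index of last kept point
    for i in range(threshold - 2):
        lo = int(i * bucket) + 1
        hi = int((i + 1) * bucket) + 1
        nlo, nhi = hi, min(int((i + 2) * bucket) + 1, n)
        # Average of the next bucket (degenerates to the last point).
        cnt = max(nhi - nlo, 1)
        avg_x = sum(p[0] for p in points[nlo:nhi]) / cnt
        avg_y = sum(p[1] for p in points[nlo:nhi]) / cnt
        ax, ay = points[a]
        best, best_area = lo, -1.0
        for j in range(lo, hi):
            # Twice the triangle area via the cross product.
            area = abs((ax - avg_x) * (points[j][1] - ay)
                       - (ax - points[j][0]) * (avg_y - ay))
            if area > best_area:
                best, best_area = j, area
        out.append(points[best])
        a = best
    out.append(points[-1])                 # always keep the last point
    return out
```

Because a spike forms a much larger triangle than its flat neighbors, it survives downsampling — the "visual fidelity" property the unit tests check.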

- DecimateWithGaps: use floor allocation with remainder distribution to
  enforce the global threshold cap, preventing output from exceeding the
  requested point budget when many gaps exist
- PlotLogger: replace List<DataPoint> with CircularBuffer<DataPoint> for
  the raw point storage, eliminating O(n) RemoveAt(0) on every sample at
  the 50k buffer size
- Add CircularBuffer<T> with O(1) Add, indexed access, and ToList()
- Add tests for CircularBuffer and DecimateWithGaps threshold enforcement
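The floor-allocation-with-remainder scheme in the first bullet can be sketched as follows (hypothetical helper, not the C# `DecimateWithGaps` source): floor each segment's proportional share of the budget, then hand out the leftover points to the largest fractional remainders so the total is exactly the threshold.

```python
def allocate_budget(segment_sizes, threshold):
    """Split a global point budget across gap segments; the returned
    shares never sum to more than `threshold`."""
    total = sum(segment_sizes)
    if total <= threshold:
        return list(segment_sizes)      # nothing to trim
    # Floor each segment's proportional share first...
    shares = [size * threshold // total for size in segment_sizes]
    # ...then distribute the leftover one point at a time, largest
    # fractional remainder first, so the sum is exactly `threshold`.
    order = sorted(
        range(len(segment_sizes)),
        key=lambda i: (segment_sizes[i] * threshold) % total,
        reverse=True)
    leftover = threshold - sum(shares)
    for i in order[:leftover]:
        shares[i] += 1
    return shares
```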

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
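The `List<DataPoint>` to `CircularBuffer<DataPoint>` change in the commit above replaces an O(n) `RemoveAt(0)` per sample with an O(1) overwrite of the oldest slot. A minimal Python ring buffer mirroring (but not reproducing) the C# `CircularBuffer<T>`:

```python
class CircularBuffer:
    """Fixed-capacity ring buffer: O(1) append that overwrites the
    oldest element, unlike list.pop(0) which shifts every element
    (O(n) per sample at a 50k buffer)."""

    def __init__(self, capacity):
        self._buf = [None] * capacity
        self._capacity = capacity
        self._start = 0      # index of the oldest element
        self._count = 0

    def add(self, item):
        end = (self._start + self._count) % self._capacity
        self._buf[end] = item
        if self._count < self._capacity:
            self._count += 1
        else:
            # Buffer full: the slot we just wrote held the oldest element.
            self._start = (self._start + 1) % self._capacity

    def __len__(self):
        return self._count

    def __getitem__(self, i):          # oldest-first indexed access
        if not 0 <= i < self._count:
            raise IndexError(i)
        return self._buf[(self._start + i) % self._capacity]

    def to_list(self):
        return [self[i] for i in range(self._count)]
```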
@github-actions

github-actions Bot commented Apr 4, 2026

📊 Code Coverage Report

Summary
Generated on: 4/4/2026 - 4:40:16 PM
Coverage date: 4/4/2026 - 4:39:45 PM - 4/4/2026 - 4:40:12 PM
Parser: MultiReport (4x Cobertura)
Assemblies: 3
Classes: 109
Files: 140
Line coverage: 20.4% (1229 of 6008)
Covered lines: 1229
Uncovered lines: 4779
Coverable lines: 6008
Total lines: 18403
Branch coverage: 19.4% (430 of 2207)
Covered branches: 430
Total branches: 2207
Method coverage: Feature is only available for sponsors

Coverage

DAQiFi - 20.2%
Name Line Branch
DAQiFi 20.2% 19.4%
Daqifi.Desktop.App 6.3% 0%
Daqifi.Desktop.Channel.AbstractChannel 40.9% 27.7%
Daqifi.Desktop.Channel.AnalogChannel 58.7% 25%
Daqifi.Desktop.Channel.Channel 11.5% 0%
Daqifi.Desktop.Channel.ChannelColorManager 100% 100%
Daqifi.Desktop.Channel.DataSample 91.6%
Daqifi.Desktop.Channel.DigitalChannel 65.2% 25%
Daqifi.Desktop.Commands.CompositeCommand 0% 0%
Daqifi.Desktop.Commands.HostCommands 0%
Daqifi.Desktop.Commands.WeakEventHandlerManager 0% 0%
Daqifi.Desktop.Configuration.FirewallConfiguration 90.6% 66.6%
Daqifi.Desktop.Configuration.WindowsFirewallWrapper 64% 68.4%
Daqifi.Desktop.ConnectionManager 43% 43.1%
Daqifi.Desktop.Converters.BoolToActiveStatusConverter 0% 0%
Daqifi.Desktop.Converters.BoolToConnectionStatusConverter 0% 0%
Daqifi.Desktop.Converters.BoolToStatusColorConverter 0% 0%
Daqifi.Desktop.Converters.ConnectionTypeToColorConverter 0% 0%
Daqifi.Desktop.Converters.ConnectionTypeToUsbConverter 0% 0%
Daqifi.Desktop.Converters.InvertedBoolToVisibilityConverter 0% 0%
Daqifi.Desktop.Converters.ListToStringConverter 0% 0%
Daqifi.Desktop.Converters.NotNullToVisibilityConverter 0% 0%
Daqifi.Desktop.Converters.OxyColorToBrushConverter 0% 0%
Daqifi.Desktop.Converters.StringRightConverter 0% 0%
Daqifi.Desktop.Device.AbstractStreamingDevice 42.8% 38.6%
Daqifi.Desktop.Device.DeviceMessage 0%
Daqifi.Desktop.Device.Firmware.BootloaderSessionStreamingDeviceAdapter 0% 0%
Daqifi.Desktop.Device.Firmware.WifiPromptDelayProcessRunner 0% 0%
Daqifi.Desktop.Device.NativeMethods 100%
Daqifi.Desktop.Device.SerialDevice.SerialStreamingDevice 27.6% 30.8%
Daqifi.Desktop.Device.WiFiDevice.DaqifiStreamingDevice 40.9% 39.4%
Daqifi.Desktop.DialogService.DialogService 0% 0%
Daqifi.Desktop.DialogService.ServiceLocator 0% 0%
Daqifi.Desktop.DuplicateDeviceCheckResult 100%
Daqifi.Desktop.Exporter.OptimizedLoggingSessionExporter 30.2% 35.4%
Daqifi.Desktop.Exporter.SampleData 0%
Daqifi.Desktop.Helpers.BooleanConverter`1 0% 0%
Daqifi.Desktop.Helpers.BooleanToInverseBoolConverter 0% 0%
Daqifi.Desktop.Helpers.BooleanToVisibilityConverter 0%
Daqifi.Desktop.Helpers.EnumDescriptionConverter 100% 100%
Daqifi.Desktop.Helpers.IntToVisibilityConverter 0% 0%
Daqifi.Desktop.Helpers.MyMultiValueConverter 0%
Daqifi.Desktop.Helpers.NaturalSortHelper 100% 100%
Daqifi.Desktop.Helpers.VersionHelper 98.2% 66.2%
Daqifi.Desktop.Logger.CircularBuffer`1 100% 100%
Daqifi.Desktop.Logger.DatabaseLogger 0% 0%
Daqifi.Desktop.Logger.DataPointDecimator 98.6% 96%
Daqifi.Desktop.Logger.LoggedSeriesLegendItem 0% 0%
Daqifi.Desktop.Logger.LoggingContext 0%
Daqifi.Desktop.Logger.LoggingManager 0% 0%
Daqifi.Desktop.Logger.LoggingSession 26.6% 0%
Daqifi.Desktop.Logger.PlotLogger 0% 0%
Daqifi.Desktop.Logger.SummaryLogger 0% 0%
Daqifi.Desktop.Logger.TimestampGapDetector 95% 83.3%
Daqifi.Desktop.Loggers.ImportOptions 0%
Daqifi.Desktop.Loggers.ImportProgress 0% 0%
Daqifi.Desktop.Loggers.SdCardSessionImporter 0% 0%
Daqifi.Desktop.MainWindow 0% 0%
Daqifi.Desktop.Migrations.InitialSQLiteMigration 0%
Daqifi.Desktop.Migrations.LoggingContextModelSnapshot 0%
Daqifi.Desktop.Models.AddProfileModel 0%
Daqifi.Desktop.Models.DaqifiSettings 80.5% 83.3%
Daqifi.Desktop.Models.DebugDataCollection 6.6% 0%
Daqifi.Desktop.Models.DebugDataModel 0% 0%
Daqifi.Desktop.Models.Notifications 0%
Daqifi.Desktop.Models.SdCardFile 0% 0%
Daqifi.Desktop.Services.WindowsPrincipalAdminChecker 0%
Daqifi.Desktop.Services.WpfMessageBoxService 0%
Daqifi.Desktop.UpdateVersion.VersionNotification 0% 0%
Daqifi.Desktop.View.AddChannelDialog 0% 0%
Daqifi.Desktop.View.AddProfileConfirmationDialog 0% 0%
Daqifi.Desktop.View.AddprofileDialog 0% 0%
Daqifi.Desktop.View.ConnectionDialog 0% 0%
Daqifi.Desktop.View.DebugWindow 0% 0%
Daqifi.Desktop.View.DeviceLogsView 0% 0%
Daqifi.Desktop.View.DuplicateDeviceDialog 0% 0%
Daqifi.Desktop.View.ErrorDialog 0% 0%
Daqifi.Desktop.View.ExportDialog 0% 0%
Daqifi.Desktop.View.FirmwareDialog 0% 0%
Daqifi.Desktop.View.Flyouts.ChannelsFlyout 0% 0%
Daqifi.Desktop.View.Flyouts.DevicesFlyout 0% 0%
Daqifi.Desktop.View.Flyouts.FirmwareFlyout 0% 0%
Daqifi.Desktop.View.Flyouts.LiveGraphFlyout 0% 0%
Daqifi.Desktop.View.Flyouts.LoggedSessionFlyout 0% 0%
Daqifi.Desktop.View.Flyouts.NotificationsFlyout 0% 0%
Daqifi.Desktop.View.Flyouts.SummaryFlyout 0% 0%
Daqifi.Desktop.View.Flyouts.UpdateProfileFlyout 0% 0%
Daqifi.Desktop.View.SelectColorDialog 0% 0%
Daqifi.Desktop.View.SettingsDialog 0% 0%
Daqifi.Desktop.View.SuccessDialog 0% 0%
Daqifi.Desktop.ViewModels.AddChannelDialogViewModel 0% 0%
Daqifi.Desktop.ViewModels.AddProfileConfirmationDialogViewModel 0% 0%
Daqifi.Desktop.ViewModels.AddProfileDialogViewModel 0% 0%
Daqifi.Desktop.ViewModels.ConnectionDialogViewModel 21.6% 19.5%
Daqifi.Desktop.ViewModels.DaqifiViewModel 15.2% 9.1%
Daqifi.Desktop.ViewModels.DeviceLogsViewModel 0% 0%
Daqifi.Desktop.ViewModels.DeviceSettingsViewModel 0% 0%
Daqifi.Desktop.ViewModels.DuplicateDeviceDialogViewModel 0%
Daqifi.Desktop.ViewModels.ErrorDialogViewModel 0%
Daqifi.Desktop.ViewModels.ExportDialogViewModel 0% 0%
Daqifi.Desktop.ViewModels.FirmwareDialogViewModel 0% 0%
Daqifi.Desktop.ViewModels.SelectColorDialogViewModel 0% 0%
Daqifi.Desktop.ViewModels.SettingsViewModel 0%
Daqifi.Desktop.ViewModels.SuccessDialogViewModel 85.7%
Daqifi.Desktop.WindowViewModelMapping.IWindowViewModelMappingsContract 0%
Daqifi.Desktop.WindowViewModelMapping.WindowViewModelMappings 0%
Sentry.Generated.BuildPropertyInitializer 100%
Daqifi.Desktop.Common - 33.3%
Name Line Branch
Daqifi.Desktop.Common 33.3% 22.7%
Daqifi.Desktop.Common.Loggers.AppLogger 31.3% 22.7%
Daqifi.Desktop.Common.Loggers.NoOpLogger 60%
Daqifi.Desktop.IO - 100%
Name Line Branch
Daqifi.Desktop.IO 100% ****
Daqifi.Desktop.IO.Messages.MessageEventArgs`1 100%

Coverage report generated by ReportGenerator. View full report in build artifacts.

@tylerkron
Contributor Author

Will come up with a better solution.

@tylerkron tylerkron closed this Apr 11, 2026
tylerkron added a commit that referenced this pull request Apr 11, 2026
ADR 001 documents why we chose viewport-aware MinMax downsampling over
global LTTB (PR #457), pre-computed pyramids, GPU rendering, and
on-demand DB queries. Records the trade-offs, consequences, and
follow-up work for future contributors.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
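The MinMax downsampling the ADR settles on keeps each bucket's extremes, so spikes survive by construction. A minimal sketch of the per-bucket core (hypothetical helper; the "viewport-aware" part — restricting to the visible x-range before bucketing — is omitted):

```python
def minmax_downsample(points, buckets):
    """Keep each bucket's min and max from a time-ordered list of
    (t, v) pairs; output is at most 2 points per bucket."""
    if len(points) <= 2 * buckets:
        return list(points)
    out = []
    n = len(points)
    for b in range(buckets):
        lo = b * n // buckets
        hi = (b + 1) * n // buckets
        chunk = points[lo:hi]
        mn = min(chunk, key=lambda p: p[1])
        mx = max(chunk, key=lambda p: p[1])
        # Emit the extremes in time order so the line stays monotonic in t;
        # the set collapses mn and mx when the bucket is flat.
        out.extend(sorted({mn, mx}, key=lambda p: p[0]))
    return out
```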
tylerkron added a commit that referenced this pull request Apr 12, 2026


Development

Successfully merging this pull request may close these issues.

Slow Display With Large Dataset
