Fix silent iteration failure when input stream throws on Flush()#1156

Merged
adamhathcock merged 4 commits into master from copilot/fix-sharpcompress-archive-iteration
Jan 24, 2026
Conversation

Contributor

Copilot AI commented Jan 23, 2026

Since 0.41.0, archive iteration stops after the first entry when the input stream throws NotSupportedException in Flush(). No exception is raised; iteration simply ends prematurely.

Root cause: ZlibBaseStream.Flush() unconditionally flushes the underlying stream regardless of read/write mode. When EntryStream.Dispose() calls Flush() on decompression streams (Deflate/LZMA), the flush propagates to the input stream, which:

  • Has no semantic meaning for read-only operations
  • Breaks forward-only streams (ASP.NET Core MultipartReaderStream, network streams, custom pipelines)
  • Leaves internal state inconsistent even when NotSupportedException is caught

Changes:

  • ZlibBaseStream.Flush/FlushAsync: Only flush underlying stream when _streamMode == StreamMode.Writer
  • ThrowOnFlushStream: Test mock simulating streams that reject flush operations
  • ZipReaderTests: Regression tests for Deflate and LZMA compressed archives (sync)
  • ZipReaderAsyncTests: Regression tests for Deflate and LZMA compressed archives (async)
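The guard described in the first bullet can be sketched as a standalone wrapper. All names here (StreamMode, ModeGuardedStream, ThrowingFlushStream) are illustrative; the actual change applies the same write-mode check inside SharpCompress's ZlibBaseStream.Flush/FlushAsync:

```csharp
using System;
using System.IO;

// Illustrative sketch only: StreamMode, ModeGuardedStream and
// ThrowingFlushStream are hypothetical names. The actual fix applies the
// same guard inside SharpCompress's ZlibBaseStream.Flush/FlushAsync.
public enum StreamMode { Undefined, Reader, Writer }

// Stands in for any input stream that rejects flushing.
public sealed class ThrowingFlushStream : MemoryStream
{
    public override void Flush() => throw new NotSupportedException();
}

public sealed class ModeGuardedStream : Stream
{
    private readonly Stream _inner;
    private readonly StreamMode _mode;

    public ModeGuardedStream(Stream inner, StreamMode mode)
    {
        _inner = inner;
        _mode = mode;
    }

    // The core of the fix: only forward Flush() when actually writing.
    // A read-only pipeline never needs it, and forward-only input
    // streams may not support it at all.
    public override void Flush()
    {
        if (_mode == StreamMode.Writer)
        {
            _inner.Flush();
        }
    }

    public override bool CanRead => _inner.CanRead;
    public override bool CanSeek => false;
    public override bool CanWrite => _mode == StreamMode.Writer;
    public override long Length => throw new NotSupportedException();
    public override long Position
    {
        get => throw new NotSupportedException();
        set => throw new NotSupportedException();
    }

    public override int Read(byte[] buffer, int offset, int count) =>
        _inner.Read(buffer, offset, count);

    public override long Seek(long offset, SeekOrigin origin) =>
        throw new NotSupportedException();

    public override void SetLength(long value) => throw new NotSupportedException();

    public override void Write(byte[] buffer, int offset, int count) =>
        _inner.Write(buffer, offset, count);
}
```

With this guard in place, `new ModeGuardedStream(new ThrowingFlushStream(), StreamMode.Reader).Flush()` is a no-op instead of an exception.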

Example of affected code:

// Any stream wrapper that throws on Flush() would fail
public class ForwardOnlyStream : Stream {
    public override void Flush() => throw new NotSupportedException();
    // ...
}

using var stream = new ForwardOnlyStream(File.OpenRead("archive.zip"));
using var reader = ReaderFactory.OpenReader(stream);

while (reader.MoveToNextEntry()) { /* Stops after first entry */ }

With this fix, iteration completes successfully for all entries regardless of whether the input stream supports flushing.

Original prompt

This section details the original issue to resolve.

<issue_title>Regression in SharpCompress 0.41.0: archive iteration breaks if input stream throws in Flush()</issue_title>
<issue_description>### Summary

Since SharpCompress 0.41.0, archive iteration silently breaks when the input stream throws in Flush().

Only the first entry is returned, then iteration stops without exception.

This can be reproduced using a simple wrapper stream around a FileStream.


Minimal reproduction

public sealed class ForwardOnlyThrowOnFlushStream : Stream
{
	private readonly Stream inner;

	public ForwardOnlyThrowOnFlushStream(Stream inner)
	{
		this.inner = inner;
	}

	public override bool CanRead => true;

	public override bool CanSeek => false;

	public override bool CanWrite => false;

	public override long Length => throw new NotSupportedException();

	public override long Position
	{
		get => throw new NotSupportedException();
		set => throw new NotSupportedException();
	}

	public override void Flush() => throw new NotSupportedException();

	public override int Read(byte[] buffer, int offset, int count) => inner.Read(buffer, offset, count);

	public override ValueTask<int> ReadAsync(Memory<byte> buffer, CancellationToken cancellationToken = default) => inner.ReadAsync(buffer, cancellationToken);

	public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();

	public override void SetLength(long value) => throw new NotSupportedException();

	public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();

	protected override void Dispose(bool disposing)
	{
		if (disposing)
		{
			inner.Dispose();
		}

		base.Dispose(disposing);
	}
}

Test program:

using var fs = File.OpenRead("multi-entry.zip");
using var stream = new ForwardOnlyThrowOnFlushStream(fs);

var reader = ReaderFactory.OpenAsyncReader(stream);

int count = 0;

while (await reader.MoveToNextEntryAsync())
{
	if (reader.Entry.IsDirectory)
	{
		continue;
	}

	count++;
}

Console.WriteLine(count);

Expected behavior

If the archive contains N entries, count == N.


Actual behavior (0.44.1)

count == 1

Iteration stops after the first entry.
No exception is thrown.


Workaround

If Flush() is implemented as a no-op:

public override void Flush() { }

Then all entries are iterated correctly.


Why this is a problem

Flush() has no semantic meaning for read-only input streams and should not be required for reading archives.

Any library stream that:

  • is read-only
  • forward-only
  • or explicitly does not support flushing

can silently break SharpCompress iteration.

This includes:

  • ASP.NET Core MultipartReaderStream
  • custom streaming pipelines
  • network streams

Regression info

  • Works with: SharpCompress 0.40.0 (or previous)
  • Fails with: SharpCompress 0.41.0 (or higher)

Suspected cause

SharpCompress 0.41.0 introduced a call to Stream.Flush() somewhere in the read / entry iteration pipeline.
If Flush() throws, the internal state becomes inconsistent and iteration stops early.


Related issue

This looks closely related to #1150.

In #1150, EntryStream.Dispose() started calling Flush() on internal decompression streams (Deflate/LZMA), which breaks valid streaming scenarios (non-seekable / forward-only streams). In my case, beyond the exception scenario discussed there, I now also observe a silent regression: iteration stops after the first entry.

Both issues share the same underlying theme: Flush() (or its side-effects) should not be required nor relied upon in a read-only streaming pipeline, especially for forward-only/non-seekable streams.</issue_description>
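The two strategies contrasted above (catching NotSupportedException, as in the #1150 change, versus checking the stream mode up front) can be sketched side by side. ThrowOnFlush and FlushStrategies are illustrative names, not SharpCompress API:

```csharp
using System;
using System.IO;

// ThrowOnFlush stands in for any forward-only input stream that rejects
// flushing; these names are illustrative, not SharpCompress API.
public sealed class ThrowOnFlush : MemoryStream
{
    public override void Flush() => throw new NotSupportedException();
}

public static class FlushStrategies
{
    // Catch-based approach: the flush is still attempted, so any side
    // effects that ran before the throw have already happened, and the
    // exception cost is paid on every dispose.
    public static bool FlushWithCatch(Stream s)
    {
        try
        {
            s.Flush();
            return true;
        }
        catch (NotSupportedException)
        {
            return false;
        }
    }

    // Mode-check approach: a read-only pipeline simply never calls
    // Flush(), so there is nothing to catch and no partial side effects.
    public static bool FlushIfWriting(Stream s, bool isWriting)
    {
        if (!isWriting)
        {
            return false;
        }
        s.Flush();
        return true;
    }
}
```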

Comments on the Issue (you are @copilot in this section)



…n Flush()

- Modified ZlibBaseStream.Flush() and FlushAsync() to only flush the underlying stream when in Writer mode
- Added ThrowOnFlushStream mock for testing
- Added regression tests for Deflate and LZMA compressed archives
- All tests pass successfully

Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
Copilot AI changed the title [WIP] Fix regression in SharpCompress archive iteration Fix silent iteration failure when input stream throws on Flush() Jan 23, 2026
Copilot AI requested a review from adamhathcock January 23, 2026 16:07
@rleroux-regnology

@adamhathcock It might be relevant to review the Dispose / DisposeAsync of EntryStream methods (which were updated following #1151) and apply the same logic by checking the StreamMode instead of catching NotSupportedException?

@adamhathcock
Owner

@copilot make an Async version of these tests

@adamhathcock
Owner

@adamhathcock It might be relevant to review the Dispose / DisposeAsync of EntryStream methods (which were updated following #1151) and apply the same logic by checking the StreamMode instead of catching NotSupportedException?

Was the root of your issue this Flush when disposing EntryStream? I'd rather not do a try/catch

- Added Archive_Iteration_DoesNotBreak_WhenFlushThrows_Deflate_Async
- Added Archive_Iteration_DoesNotBreak_WhenFlushThrows_LZMA_Async
- Both async tests mirror the sync versions and pass successfully

Co-authored-by: adamhathcock <527620+adamhathcock@users.noreply.github.com>
Contributor Author

Copilot AI commented Jan 23, 2026

@copilot make an Async version of these tests

Added async versions of the regression tests in commit f2bb81d. Both Archive_Iteration_DoesNotBreak_WhenFlushThrows_Deflate_Async and Archive_Iteration_DoesNotBreak_WhenFlushThrows_LZMA_Async tests pass successfully.

@rleroux-regnology

@adamhathcock It might be relevant to review the Dispose / DisposeAsync of EntryStream methods (which were updated following #1151) and apply the same logic by checking the StreamMode instead of catching NotSupportedException?

Was the root of your issue this Flush when disposing EntryStream? I'd rather not do a try/catch

I just retested my case on the copilot/fix-sharpcompress-archive-iteration branch by removing the catch NotSupportedException from the Dispose methods of EntryStream, and it works again as before.

I think they are no longer necessary and you can remove them, given that the call to Flush is now directly conditioned in ZlibBaseStream.

The root issue therefore stemmed from calls to Flush, and not directly from the Dispose methods.

Owner

@adamhathcock adamhathcock left a comment


LZMA doesn't have this problem but better safe than sorry

@adamhathcock adamhathcock requested a review from Copilot January 24, 2026 10:05
@adamhathcock adamhathcock marked this pull request as ready for review January 24, 2026 10:07
Contributor

Copilot AI left a comment


Pull request overview

This PR fixes a regression where ZIP archive iteration could silently stop after the first entry when the underlying input stream throws NotSupportedException on Flush() / FlushAsync(), and adds regression tests to prevent it from recurring.

Changes:

  • Update ZlibBaseStream.Flush / FlushAsync to only call the underlying stream’s Flush when operating in write mode, avoiding meaningless and harmful flushes on read-only input streams.
  • Simplify EntryStream.Dispose / DisposeAsync now that decompression Flush/FlushAsync no longer propagates to unsupported input-stream flush operations.
  • Introduce ThrowOnFlushStream test mock and add sync/async ZIP reader regression tests for Deflate and LZMA archives; bump Microsoft.NET.ILLink.Tasks versions in packages.lock.json.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated no comments.

Show a summary per file
File Description
src/SharpCompress/Compressors/Deflate/ZlibBaseStream.cs Guards underlying stream Flush/FlushAsync calls behind a write-mode check while preserving buffer rewind semantics, preventing flush-induced failures on read-only forward-only streams.
src/SharpCompress/Common/EntryStream.cs Removes now-unnecessary NotSupportedException handling around Deflate/LZMA Flush/FlushAsync, relying on the updated zlib stream behavior while keeping existing over-read “knock back” behavior.
tests/SharpCompress.Test/Mocks/ThrowOnFlushStream.cs Adds a mock Stream that throws on Flush/FlushAsync to simulate forward-only or non-flushable input streams for regression tests.
tests/SharpCompress.Test/Zip/ZipReaderTests.cs Adds synchronous regression tests ensuring Deflate and LZMA ZIP iteration continues past the first entry when the input stream rejects flush operations.
tests/SharpCompress.Test/Zip/ZipReaderAsyncTests.cs Adds asynchronous counterparts using AsyncOnlyStream plus ThrowOnFlushStream to validate async iteration is robust to input-stream flush failures.
src/SharpCompress/packages.lock.json Bumps Microsoft.NET.ILLink.Tasks versions for net10.0 and net8.0 target frameworks to newer patch releases.
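The shape of those regression tests can be approximated with BCL types alone. This is an assumption-laden illustration, not the actual test code: it uses System.IO.Compression's ZipArchive in place of SharpCompress's ReaderFactory, and the wrapper name simply mirrors the PR's mock:

```csharp
using System;
using System.IO;
using System.IO.Compression;

// Hypothetical approximation of the regression tests described above.
// The real tests use SharpCompress's ReaderFactory and its own
// ThrowOnFlushStream mock; here the BCL's ZipArchive plays the reader.
public sealed class ThrowOnFlushStream : Stream
{
    private readonly Stream _inner;

    public ThrowOnFlushStream(Stream inner) => _inner = inner;

    // The behavior under test: flushing is rejected outright.
    public override void Flush() => throw new NotSupportedException();

    public override bool CanRead => _inner.CanRead;
    public override bool CanSeek => _inner.CanSeek;
    public override bool CanWrite => false;
    public override long Length => _inner.Length;
    public override long Position
    {
        get => _inner.Position;
        set => _inner.Position = value;
    }

    public override int Read(byte[] buffer, int offset, int count) =>
        _inner.Read(buffer, offset, count);

    public override long Seek(long offset, SeekOrigin origin) => _inner.Seek(offset, origin);

    public override void SetLength(long value) => throw new NotSupportedException();

    public override void Write(byte[] buffer, int offset, int count) =>
        throw new NotSupportedException();
}

public static class RegressionShape
{
    public static int CountEntries()
    {
        // Build a small three-entry archive in memory.
        var buffer = new MemoryStream();
        using (var zip = new ZipArchive(buffer, ZipArchiveMode.Create, leaveOpen: true))
        {
            foreach (var name in new[] { "a.txt", "b.txt", "c.txt" })
            {
                using var writer = new StreamWriter(zip.CreateEntry(name).Open());
                writer.Write("hello");
            }
        }
        buffer.Position = 0;

        // The assertion that matters: every entry is visited even though
        // the wrapped stream rejects Flush().
        using var reading = new ZipArchive(new ThrowOnFlushStream(buffer), ZipArchiveMode.Read);
        return reading.Entries.Count;
    }
}
```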


@adamhathcock adamhathcock merged commit de3cda9 into master Jan 24, 2026
11 checks passed
@adamhathcock adamhathcock deleted the copilot/fix-sharpcompress-archive-iteration branch January 24, 2026 10:11
adamhathcock added a commit that referenced this pull request Jan 24, 2026
…archive-iteration

Fix silent iteration failure when input stream throws on Flush()
# Conflicts:
#	src/SharpCompress/packages.lock.json
adamhathcock added a commit that referenced this pull request Jan 25, 2026
Merge pull request #1156 from adamhathcock/copilot/fix-sharpcompress-…


Development

Successfully merging this pull request may close these issues.

Regression in SharpCompress 0.41.0: archive iteration breaks if input stream throws in Flush()

4 participants