
fix: support DiscardNew policy for Jetstream streams#1624

Closed
QuentinFAIDIDE wants to merge 4 commits into numaproj:main from QuentinFAIDIDE:jetstream_discard_new

Conversation

@QuentinFAIDIDE
Contributor

might fix #1551 #1554
We would need to ensure that there are no adverse consequences in the handling of the new write error that would occur in the surge scenario.

…rdNew

Signed-off-by: Quentin Faidide <quentin.faidide@gmail.com>
@QuentinFAIDIDE QuentinFAIDIDE marked this pull request as draft April 1, 2024 16:14
    Subjects:  []string{streamName}, // Use the stream name as the only subject
    Retention: nats.RetentionPolicy(v.GetInt("stream.retention")),
-   Discard:   nats.DiscardOld,
+   Discard:   nats.DiscardNew,
Member


can we make it overridable by the user?

Contributor Author


Maybe we can set it when users specify "drop on full"? It's pretty much the exact same behaviour.
Though according to @yhl25, DiscardNew can't be used; I'm still trying to understand why in the other issue.
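The "drop on full" mapping suggested above could be sketched roughly as follows. This is a hedged illustration: the setting names (`discardLatest`, `retryUntilSuccess`) and the function are assumptions for the sake of example, not the project's actual configuration keys or API.

```go
package main

import "fmt"

// discardFor maps a hypothetical user-facing "on buffer full" strategy to a
// JetStream discard policy name. The strategy names here are assumptions
// used only to illustrate the idea discussed in this thread.
func discardFor(onFull string) (string, error) {
	switch onFull {
	case "discardLatest":
		// "drop on full": let JetStream reject new writes once the stream is full.
		return "DiscardNew", nil
	case "retryUntilSuccess":
		// default behaviour: keep the stream's existing DiscardOld semantics.
		return "DiscardOld", nil
	default:
		return "", fmt.Errorf("unknown onFull strategy %q", onFull)
	}
}

func main() {
	policy, err := discardFor("discardLatest")
	fmt.Println(policy, err)
}
```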

Member


we need to figure out why it won't work, ideally it should.

Contributor


+1 to making it configurable for the user, but let's set it to DiscardOld by default since we use the Limits policy by default. Also, please add a comment saying DiscardNew can only be used with the WorkQueue policy.

Member


Since we cannot use DiscardNew with the Limits policy, we should not let the pipeline even start; validation should fail.
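The fail-fast validation proposed above could look roughly like this. It is a sketch using local stand-ins for the `nats.RetentionPolicy` and `nats.DiscardPolicy` enums, and it encodes the constraint stated in this thread (DiscardNew only with WorkQueue) as an assumption, not as the project's actual validation code.

```go
package main

import "fmt"

// Local stand-ins for the nats.RetentionPolicy and nats.DiscardPolicy enums.
type retentionPolicy int

const (
	limitsPolicy retentionPolicy = iota
	interestPolicy
	workQueuePolicy
)

type discardPolicy int

const (
	discardOld discardPolicy = iota
	discardNew
)

// validateStreamSettings rejects the combination this thread says JetStream
// refuses: DiscardNew is assumed valid only with the WorkQueue retention
// policy, so any other pairing should fail pipeline validation up front.
func validateStreamSettings(r retentionPolicy, d discardPolicy) error {
	if d == discardNew && r != workQueuePolicy {
		return fmt.Errorf("DiscardNew can only be used with the WorkQueue retention policy")
	}
	return nil
}

func main() {
	fmt.Println(validateStreamSettings(limitsPolicy, discardNew))
}
```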

Contributor Author


Indeed, I will try to implement that, and try to reproduce the issue @yhl25 was mentioning with stuck messages.

Contributor Author


Maybe we should also document the risk of data loss on surge when using DiscardOld at high throughput, and be really transparent about why it happens. Right now, users like me who fiddle with the config might be in big trouble if some UDF/sink creates a silent data loss scenario in production.

@QuentinFAIDIDE QuentinFAIDIDE changed the title fix: change jetstream buffers discard policy from DiscardOld to DiscardNew fix: support DiscardNew policy for Jetstream streams Apr 2, 2024
@QuentinFAIDIDE
Contributor Author


Something I currently experience with the "surge pipeline situation" with DiscardNew that may or may not be what @yhl25 is referring to:

  • After letting the sink fail for a moment, I let it work again and it does have a decent Ack rate
  • the buffer before the sink stays at 30k and CPU usage for the message-variator UDF is high while the logs keep repeating:
2024/04/03 18:51:32 | ERROR | {...,"msg":"Retrying failed messages","pipeline":"super-odd-8","vertex":"msg-variator","protocol":"uds-grpc-map-udf","errors":{"nats: maximum messages exceeded":31}...}

The {"nats: maximum messages exceeded":31} seems to be the new JetStream-side "buffer full" error, thrown because we now use DiscardNew. The number of these errors tends to slowly decrease and then rise again, indicating it writes some messages and then more data arrives from the source.

  • the buffer the source writes to experiences the same down-and-up pattern, but with a bufferFull error, which is the normal error we usually get when buffers are full.

Overall, the number of messages stays nearly stable (it decreases at a very slow rate on the source buffer) due to the huge pile of retries that tends to immediately refill any missing data. So either this is what gave the impression of undelivered messages staying in the pipe, or I still haven't reproduced it.
I'm going to let it sit for some time and confirm I emptied the retries with no losses. I'll keep you updated.

Signed-off-by: Quentin Faidide <quentin.faidide@gmail.com>
Signed-off-by: Quentin Faidide <quentin.faidide@gmail.com>
@QuentinFAIDIDE
Contributor Author

I added the doc changes to remove the retention policy parameter and default to WorkQueue as discussed.
I was not able to reproduce the "stuck messages" issue yet.
My guess is that one of the following is true:

  1. The new issue is due to the new "buffer over capacity" error from JetStream, which is the only behaviour change since we activated the WorkQueue/DiscardNew setting. The issue would then be a subset of the fixed lost-datum issue, because the full-buffer management is supposed to prevent that error from ever being returned in the first place.
  2. The new issue lies in Jetstream (feels unlikely but I may be wrong).
  3. The new issue is due to some other JetStream behaviour change with WorkQueue/DiscardNew that I am not aware of. Is there likely to be any change other than the new error?

What do you think we should do? I've tried to reproduce the issue a few times with no luck; I'm going to retry, but let me know your input.

@vigith
Member

vigith commented Apr 5, 2024

as per nats-io/nats-server#5148 (comment) the issue seems to have been resolved in 2.10.12

@QuentinFAIDIDE
Contributor Author

So what's the plan: do we change the new "compatible" JetStream configmap to specify only the new version with the fix, or do we wait for someone to try to reproduce this error enough times to convince us that it's fixed?

@vigith
Member

vigith commented Apr 10, 2024

So what's the plan: do we change the new "compatible" JetStream configmap to specify only the new version with the fix, or do we wait for someone to try to reproduce this error enough times to convince us that it's fixed?

We can make this change a configurable option, with the default set to what is currently being used, since that is battle-tested. Eventually this could become the default, but before we do that, we need to make sure it works as expected over a decent amount of run time in production.

@vigith
Member

vigith commented Apr 12, 2024

nats-io/nats-server#5270 seems to fix the problem. The 2.10.14 release of nats-server looks very promising for WorkQueue.

@codecov

codecov bot commented Jul 31, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 54.06%. Comparing base (e7c32c1) to head (b42786a).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1624      +/-   ##
==========================================
- Coverage   54.31%   54.06%   -0.26%     
==========================================
  Files         288      288              
  Lines       28301    28297       -4     
==========================================
- Hits        15371    15298      -73     
- Misses      11994    12063      +69     
  Partials      936      936              


@whynowy
Member

whynowy commented Aug 6, 2024

Closed by #1884.

@whynowy whynowy closed this Aug 6, 2024