feat: Redis streams as event source (#1744)
Signed-off-by: Sreekanth <prsreekanth920@gmail.com>
Force-pushed from e4030b4 to be07d86.
@whynowy The
Ignore the failure in the CI.
whynowy left a comment:
LGTM overall; a couple of enhancements can be done in follow-up PRs. I'll merge it after fixing the minor log issue.
```go
if err.Error() != "BUSYGROUP Consumer Group name already exists" {
	return errors.Wrapf(err, "creating consumer group %s for stream %s on host %s for event source %s", consumersGroup, stream, redisEventSource.HostAddress, el.GetEventName())
}
log.Infof("Consumer group %s already exists", stream)
```
I believe you want to log the consumer group together with the stream?
```go
	el.Metrics.EventProcessingFailed(el.GetEventSourceName(), el.GetEventName())
	continue
}
if err := client.XAck(ctx, entry.Stream, consumersGroup, message.ID).Err(); err != nil {
```
Since we are doing a batch read, this could be enhanced to XAck a list of message IDs.
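A minimal sketch of what batching the acknowledgements could look like. `XMessage` and `collectAckIDs` here are hypothetical stand-ins, not the event source's actual types; the key point is that go-redis's `XAck` is variadic, so one call can acknowledge the whole batch:

```go
package main

import "fmt"

// XMessage mirrors the shape of a go-redis stream message, for illustration only.
type XMessage struct {
	ID     string
	Values map[string]interface{}
}

// collectAckIDs gathers the IDs of successfully processed messages so that a
// single variadic call can acknowledge them all at once, e.g.:
//   client.XAck(ctx, entry.Stream, consumersGroup, ids...)
func collectAckIDs(messages []XMessage) []string {
	ids := make([]string, 0, len(messages))
	for _, m := range messages {
		ids = append(ids, m.ID)
	}
	return ids
}

func main() {
	batch := []XMessage{{ID: "1-0"}, {ID: "1-1"}, {ID: "2-0"}}
	fmt.Println(collectAckIDs(batch)) // prints [1-0 1-1 2-0]
}
```

This trades per-message acknowledgement latency for fewer round trips to Redis, at the cost of re-delivering the whole batch if the pod dies before the single XAck lands.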
@whynowy Made the suggested changes and tested the functionality on minikube.
```go
if err == redis.Nil {
	continue
}
return errors.Wrapf(err, "reading streams %s using XREADGROUP", strings.Join(redisEventSource.Streams, ", "))
```
Should we just log the error and continue? For example, if there's a network connection issue and the read fails, we should let it go and retry, right?
That makes sense. Made the change.
@BulkBeing - thanks for getting this added!
* Support for Redis streams as event source
closes: #1369
previous discussion: #1395
Messages from the stream are read using a Redis consumer group. The main reason for using a consumer group is to resume from the last read upon pod restarts. A common consumer group (defaulting to "argo-events-cg") is created, if it does not already exist, on all specified streams. When using a consumer group, each read is effectively a write operation, because Redis needs to update the last-delivered message ID and the pending entries list (PEL) of that specific consumer in the group. As a result, this works only against the master Redis instance, not replicas (https://redis.io/topics/streams-intro).
The Redis streams event source expects all the specified streams to be present on the Redis server; it only starts pulling messages once every one of them exists. On the initial setup, the consumer group is created on all the specified streams so that reading starts from the latest message (not necessarily the beginning of the stream). On subsequent setups (when the consumer group already exists on the streams), or after pod restarts, messages are pulled starting from the last unacknowledged message in each stream.
The consumer group is never deleted automatically. If you want a completely fresh setup, you must delete the consumer group from the streams yourself.
Tested with single and multiple streams on a minikube cluster:
With pod creation as sensor:
Logs from pods created by sensor:
From event source pod:
From sensor pod: