We want a few properties from the queue:

- Persistence and fail-safety: once a workload is scheduled by the user, it must not get lost even on server crashes.
- Efficiency and scalability: N workloads should be safely schedulable concurrently, and M workload executors should be able to pick elements from the queue concurrently.
- At most one element per GitHub ID, which means the data structure has to do some bookkeeping.
- The actual queue items should be just Drawbridge slugs, which are resolved at execution time.
I propose to use a Redis stream (https://redis.io/docs/manual/data-types/streams/) for the workload queue, possibly paired with a hash set for bookkeeping. This would give us a very robust solution that we can trivially scale (by just starting more instances) and easily debug (by inspecting the Redis queue). I have extensive experience doing almost exactly this with Redis, so I'd be happy to pick this up.
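To make this a bit more concrete, here is a minimal sketch (in Python with redis-py, purely for illustration; the stream, group, and key names are made up) of how the stream plus a set for per-user bookkeeping could fit together:

```python
import redis

r = redis.Redis()

STREAM = "workloads"           # illustrative stream name
GROUP = "executors"            # illustrative consumer-group name
PENDING_SET = "pending-users"  # illustrative set of GitHub IDs with a queued workload


def enqueue(github_id: str, slug: str) -> bool:
    """Enqueue a Drawbridge slug, enforcing at most one entry per GitHub ID."""
    # SADD returns 0 if the ID is already present, so a second submission
    # by the same user is rejected until their current workload finishes.
    if not r.sadd(PENDING_SET, github_id):
        return False
    # XADD persists the entry in the stream; with Redis persistence enabled
    # it survives server crashes.
    r.xadd(STREAM, {"github_id": github_id, "slug": slug})
    return True


def execute_workload(slug: str) -> None:
    """Placeholder for resolving the slug and running the workload."""
    print(f"executing {slug}")


def run_executor(consumer_name: str) -> None:
    """One executor instance; scale out by starting more of these."""
    try:
        r.xgroup_create(STREAM, GROUP, id="0", mkstream=True)
    except redis.ResponseError:
        pass  # group already exists

    while True:
        # Within a consumer group each entry is delivered to exactly one
        # consumer, so M executors can read concurrently without duplication.
        entries = r.xreadgroup(GROUP, consumer_name, {STREAM: ">"}, count=1, block=5000)
        for _stream, messages in entries or []:
            for msg_id, fields in messages:
                execute_workload(fields[b"slug"].decode())
                # Acknowledge and free the per-user slot only after success;
                # unacknowledged entries stay in the pending list for recovery.
                r.xack(STREAM, GROUP, msg_id)
                r.srem(PENDING_SET, fields[b"github_id"].decode())
```

Consumer groups also give us a pending-entries list for free, so if an executor crashes mid-run the entry can be reclaimed (e.g. via XAUTOCLAIM) rather than lost, which is what the fail-safety property asks for.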