Seeking Advice: Queue Worker Implementation on Supabase Edge Functions #464
Comments
Hey @jumski, thanks for sharing this project, it seems super interesting! My only concern about relying on Edge Functions to run a long-running process like this is that the process isn't supervised. If anything goes wrong or your function crashes, how do you ensure that the poller restarts correctly?
Thank you @jgoux for taking the time to read my message! 🙇 I plan multiple measures to improve reliability, such as tracking spawned workers in the database and adding heartbeats (see the disclaimer in my original post below).
This won’t be as robust as a long-lived process, but I hope it’s good enough for an entry-level system.
Any feedback on the boot/terminate flow, or on Deno/Edge APIs I may have overlooked, would be great! Let me know if spawning workers on `onbeforeunload` like this is acceptable. Lastly, handlers will be API-compatible with Graphile Worker for easy migration in case someone grows out of the edge worker. Thanks again! 🙏
Disclaimer: I got sent to Issues from Discussions and Discord, so there's a chance a developer will see this message.
Hello everyone,
I’m currently working on an open-source queue worker built on top of Supabase Edge Functions as part of a larger, Postgres-centric workflow orchestration engine that I’ve been developing full-time for the last two months. I’d greatly appreciate any guidance on leveraging the Edge Runtime to build a robust, community-oriented solution. Insights from both the Supabase team and the community would be invaluable.
Current Approach

The worker runs inside an Edge Function and is kept alive with `waitUntil`. It uses `pgmq.read_with_poll` to continuously read messages from a target queue, and the polling loop itself is wrapped in `waitUntil`.
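To make the shape of this concrete, here is a minimal sketch of the loop, not the actual implementation: `read_messages` and `archive_message` are hypothetical SQL wrappers around `pgmq.read_with_poll` and `pgmq.archive` (pgmq itself isn't exposed over PostgREST by default), and the handler and queue name are placeholders.

```ts
// worker.ts — a minimal sketch of the polling loop described above.
// Assumes `read_messages` / `archive_message` are SQL functions wrapping
// pgmq.read_with_poll / pgmq.archive and are callable with the service-role key.

import { createClient } from "jsr:@supabase/supabase-js@2";

// Provided by the Supabase Edge Runtime; declared here only for type-checking.
declare const EdgeRuntime: { waitUntil(promise: Promise<unknown>): void };

const supabase = createClient(
  Deno.env.get("SUPABASE_URL")!,
  Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!,
);

async function handleMessage(msg: { msg_id: number; message: unknown }) {
  // Placeholder for the user-supplied handler.
  console.log("processing", msg.msg_id, msg.message);
}

async function pollLoop(queueName: string) {
  while (true) {
    // Server-side long poll: waits up to 5s for messages, returns a batch of up to 10.
    const { data: messages, error } = await supabase.rpc("read_messages", {
      queue_name: queueName,
      vt: 30, // visibility timeout (seconds)
      qty: 10, // max batch size
      max_poll_seconds: 5,
    });

    if (error) {
      console.error("poll failed:", error.message);
      await new Promise((r) => setTimeout(r, 1_000)); // back off before retrying
      continue;
    }

    for (const msg of messages ?? []) {
      await handleMessage(msg);
      await supabase.rpc("archive_message", {
        queue_name: queueName,
        msg_id: msg.msg_id,
      });
    }
  }
}

Deno.serve((_req) => {
  // The loop outlives the request: waitUntil ties it to the instance's lifetime.
  EdgeRuntime.waitUntil(pollLoop("tasks"));
  return new Response("worker started\n");
});
```

The nice property of `read_with_poll` is that the waiting happens server-side in Postgres, so the loop isn't hammering the database between messages.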
Handling CPU/Wall Time Limits

At some point, the worker may hit its CPU or wall-clock time limit, triggering the `onbeforeunload` event. From what I've learned, once `onbeforeunload` fires, that Edge Function instance no longer accepts new requests. To ensure continuity, I issue an HTTP request to `/api/v1/functions/my-worker-fn`, effectively spawning a new Edge Function instance that starts a fresh worker.

Disclaimer: I'm not currently tracking these workers in a database or performing any heartbeats, but I will add this soon to control the number of spawned workers and better manage the overall process.
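For reference, here is roughly how the hand-off could look as a sketch, assuming the runtime dispatches a `beforeunload` event and that an outbound `fetch` can still complete at that point; `WORKER_SPAWN_URL` is a placeholder for wherever the function is reachable (e.g. the `/api/v1/functions/my-worker-fn` endpoint mentioned above).

```ts
// handoff.ts — a sketch of the instance hand-off, not the real implementation.
// Assumes the Edge Runtime fires 'beforeunload' before retiring the instance
// and that an outbound fetch issued from the handler can still complete.

declare const EdgeRuntime: { waitUntil(promise: Promise<unknown>): void };

let shuttingDown = false; // the poll loop can check this and stop picking up new messages

addEventListener("beforeunload", () => {
  shuttingDown = true;

  // WORKER_SPAWN_URL is a placeholder for the endpoint that starts a new
  // instance of this function (e.g. /api/v1/functions/my-worker-fn).
  const spawnUrl = Deno.env.get("WORKER_SPAWN_URL");
  if (!spawnUrl) return;

  EdgeRuntime.waitUntil(
    fetch(spawnUrl, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")}`,
      },
    }).catch((err) => console.error("failed to spawn replacement worker:", err)),
  );
});
```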
Questions

1. Is spawning a new worker instance like this once `onbeforeunload` fires aligned with the Terms of Service for Edge Functions?
2. Besides `onbeforeunload`, are there other events or APIs I can use to achieve a more graceful shutdown process?

Next Steps & Feedback
I plan to release this worker component soon to gather feedback. While it’s just one building block of the larger orchestration engine I’m developing, ensuring it aligns with best practices and community standards is a top priority.
Thank you for your time and any insights you can share!