Kubernetes operator retry regression #12111
Thanks for opening your first issue here! Be sure to follow the issue template!
I think this is because of the
This might be intended, in terms of idempotency - unclear to me; at least it is unexpected to me. Possible workarounds are setting
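The specific setting suggested above is cut off in this thread. As a hedged sketch only: one flag in this area is reattach_on_restart on KubernetesPodOperator (available in the cncf.kubernetes provider); whether that is the workaround the commenter meant is an assumption. Disabling it tells the operator not to attach to a pod left over from a previous try:

# Sketch only; that reattach_on_restart is the intended workaround is an assumption.
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

my_task = KubernetesPodOperator(
    task_id="my_task",
    name="my-task",
    namespace="default",
    image="busybox",
    cmds=["sh", "-c", "echo hello"],
    reattach_on_restart=False,  # do not attach to an existing pod on retry
    retries=5,
)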
Yes, I ended up enabling
I think this might be fixed in version 1.10.13 due to #11368 - has anyone already checked?
That does look promising, although it introduced critical bug #12659. Waiting for 1.10.14 before upgrading.
Hi @philipherrmann @pceric, please let me know if this bug is fixed. I would also recommend using the cncf.kubernetes backport provider instead of the operator bundled in Airflow itself, as those are being deprecated in 2.0 (you'll also get fixes much faster through providers).
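For reference, a minimal sketch of the suggested switch on 1.10.x (the backport provider package and import path are as published; any surrounding DAG code is unchanged):

# pip install apache-airflow-backport-providers-cncf-kubernetes
# Then import the operator from the provider package instead of airflow.contrib:
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator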
I installed 1.10.14 today, and while the behavior is a bit odd, it does work. If I have retries set to 5, Airflow will run 10 retries, with every odd retry being a "dummy retry" that gathers the output from the previous failure. But since everything works as expected, I'm calling this fixed.
Apache Airflow version: 1.10.12
Kubernetes version (if you are using kubernetes) (use kubectl version): 1.15.9
Environment:
What happened: As of Airflow 1.10.12, and going back to sometime around 1.10.10 or 1.10.11, the behavior of the retry mechanism in the Kubernetes pod operator regressed. Previously, when a pod failed due to an error, Airflow would spin up a new pod in Kubernetes on retry. As of 1.10.12, Airflow tries to re-use the same broken pod over and over:
INFO - found a running pod with labels {'dag_id': 'my_dag', 'task_id': 'my_task', 'execution_date': '2020-11-04T1300000000-e807cde8a', 'try_number': '6'} but a different try_number. Will attach to this pod and monitor instead of starting new one
This is bad because most failures we encounter are due to the underlying "physical" hardware failing, and retrying on the same pod is pointless; it will never succeed.
What you expected to happen: I would expect the k8s Airflow operator to start a new pod, allowing it to be scheduled on a new k8s node that does not have an underlying "physical" hardware problem, just as it did in earlier versions of Airflow.
How to reproduce it: Run a Kubernetes pod operator task with a retry count set and break the underlying node in a way that the task can never succeed; a sketch follows.
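A minimal reproduction sketch under those assumptions (the DAG id, image, and always-failing command are illustrative, not taken from the report, which breaks the node itself):

from datetime import datetime, timedelta

from airflow import DAG
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

with DAG(
    dag_id="k8s_retry_regression_repro",
    start_date=datetime(2020, 11, 1),
    schedule_interval=None,
    default_args={"retries": 5, "retry_delay": timedelta(minutes=1)},
) as dag:
    KubernetesPodOperator(
        task_id="my_task",
        name="my-task",
        namespace="default",
        image="busybox",
        cmds=["sh", "-c", "exit 1"],  # stand-in for an unrecoverable node/hardware failure
    )

On 1.10.12, each retry of this task attaches to the original failed pod instead of creating a new one.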