feat: add proof of concept implementation of argo image updater replacement #173
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##             main     #173      +/-   ##
==========================================
- Coverage   49.53%   42.46%   -7.08%
==========================================
  Files          15       16       +1
  Lines         971     1227     +256
==========================================
+ Hits          481      521      +40
- Misses        435      647     +212
- Partials       55       59       +4
```
First, thanks for doing this work. It was a pleasure reading the changes, and I'm very excited about this functionality 🥳

A couple of high-level comments:

- I think the client should explicitly state that they want argo-watcher to search for annotations and perform a git-commit deployment. Argo-watcher should explicitly fail when it attempts a git-commit deployment but cannot complete it for any reason (no ssh key, no annotations, failure to parse the yaml files, etc.). This would eliminate the kind of magic we'd want to avoid having in deployments.
- I think git-commit deployment authentication should happen earlier in the process and should be a pass-or-fail step: if the user requested a git-commit deployment but failed to provide an authentication token, we fail the deployment.
- Failing fast should be our go-to behavior for deployments. If anything is wrong with the process, it should fail and provide full details as to why; see the sketch after this list. This will make life easier for anyone using the tool to debug why a deployment fails and fix the missing piece. There are many moving pieces in the deployment logic, so an explicit error in the API or the argo-watcher UI is better than searching the logs for answers.
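A minimal sketch of that fail-fast behavior; the error values and the `deployViaGitCommit` function are hypothetical illustrations of the idea, not argo-watcher's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical errors: each failure mode gets its own explicit, descriptive error.
var (
	ErrNoToken       = errors.New("git-commit deployment requested, but no authentication token was provided")
	ErrNoSSHKey      = errors.New("git-commit deployment requested, but no ssh key is configured")
	ErrNoAnnotations = errors.New("git-commit deployment requested, but the application has no annotations")
)

// deployViaGitCommit fails immediately with a full explanation instead of
// silently falling back, so the cause surfaces in the API response or the UI
// rather than in the logs.
func deployViaGitCommit(token string, sshKeyConfigured, annotationsFound bool) error {
	switch {
	case token == "":
		return ErrNoToken
	case !sshKeyConfigured:
		return ErrNoSSHKey
	case !annotationsFound:
		return ErrNoAnnotations
	}
	// ...parse the yaml files, commit, and push here...
	return nil
}

func main() {
	if err := deployViaGitCommit("", false, false); err != nil {
		fmt.Println("deployment failed:", err)
	}
}
```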
Codecov left lots of comments about missing tests. We should add those before the final revision 🥼

The commit is quite big, so I haven't gone through the whole thing end to end yet (I'll take another, closer look in the coming days).
```go
		return nil, err
	}

	if app.IsManagedByWatcher() && task.Validated {
```
I think the token verification should happen in the router: if the token is invalid, fail the API call; if it's valid, continue the execution. And to identify git-commit deployments in the code, I'd add a separate parameter, for example `GitCommitEnabled`, so that it's explicit what the client wants.
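A rough sketch of what that could look like, assuming a plain `net/http` handler; the endpoint path, the `git_commit_enabled` field name, and the `isValidToken` helper are all assumptions, not the project's actual API:

```go
package main

import (
	"encoding/json"
	"net/http"
)

type deployRequest struct {
	App              string `json:"app"`
	Token            string `json:"token,omitempty"`
	GitCommitEnabled bool   `json:"git_commit_enabled"` // explicit client intent
}

// isValidToken is a placeholder; a real implementation would compare against
// the configured deploy token(s).
func isValidToken(token string) bool {
	return token != ""
}

func deployHandler(w http.ResponseWriter, r *http.Request) {
	var req deployRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "invalid request body", http.StatusBadRequest)
		return
	}
	// Verify the token in the router: an invalid token fails the API call
	// before any deployment logic runs; a valid one continues the execution.
	if req.GitCommitEnabled && !isValidToken(req.Token) {
		http.Error(w, "invalid token for git-commit deployment", http.StatusUnauthorized)
		return
	}
	// ...hand the task off to the deployment logic...
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	http.HandleFunc("/api/v1/tasks", deployHandler) // hypothetical route
	_ = http.ListenAndServe(":8080", nil)
}
```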
internal/models/task.go (Outdated)
```go
	Status        string `json:"status,omitempty"`
	StatusReason  string `json:"status_reason,omitempty"`
	ProvidedToken string `json:"token,omitempty"`
	Validated     bool
```
I think we should replace this with `GitCommitEnabled` or some alternative, and receive it from the client.
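Illustratively, the model could then look like this; the `git_commit_enabled` JSON tag is an assumption, not an agreed name:

```go
	Status           string `json:"status,omitempty"`
	StatusReason     string `json:"status_reason,omitempty"`
	ProvidedToken    string `json:"token,omitempty"`
	GitCommitEnabled bool   `json:"git_commit_enabled,omitempty"` // set explicitly by the client
```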
Why? Whether we proceed with the deployment is validated on the server side.
Kudos, SonarCloud Quality Gate passed! 0 Bugs. No Coverage information.
The changes in this PR introduce the following:
Backward compatibility is preserved.
A few points should be covered in further pull requests:

I prefer to go with a multiple-PR approach to simplify the review process.
It's important to note that certain portions of the code have been explicitly designed to ensure backward compatibility. Maintaining this compatibility is crucial for us, and any future modifications should take it into account to avoid disruptions.