Fix support for volume snapshots by setting snapshot id on volume creation. #129
Conversation
Force-pushed from 63c8ac4 to 4e9c09b.
driver/controller.go (outdated diff):
```go
if contentSource != nil {
	snapshot := contentSource.GetSnapshot()
	if snapshot != nil {
		ll.Info("using snapshot as volume source")
```
Could you add the snapshot ID as a key/value pair to the log? For example:

```go
ll.WithField("snapshot_id", snapshot.GetId()).Info("using snapshot as volume source")
```
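Putting the review together, here is a minimal sketch of what the fix amounts to, assuming godo's `VolumeCreateRequest` exposes a `SnapshotID` field; `Driver`, `d.storage`, and `d.log` are stand-ins for the driver's actual fields, and `createVolumeFromSource` is a hypothetical helper, not the PR's exact diff:

```go
package driver

import (
	"context"

	csi "github.com/container-storage-interface/spec/lib/go/csi/v0"
	"github.com/digitalocean/godo"
)

// createVolumeFromSource sketches the fix under review: when the CSI
// request carries a snapshot content source, copy the snapshot ID onto
// the godo create request so DigitalOcean provisions the volume from
// that snapshot instead of creating it empty. d.storage (a
// godo.StorageService) and d.log (a logrus entry) are assumed fields.
func (d *Driver) createVolumeFromSource(ctx context.Context, req *csi.CreateVolumeRequest, volumeReq *godo.VolumeCreateRequest) (*godo.Volume, error) {
	ll := d.log.WithField("method", "create_volume")

	if source := req.GetVolumeContentSource(); source != nil {
		if snapshot := source.GetSnapshot(); snapshot != nil {
			// Log the snapshot ID as a structured field, per the
			// review comment above.
			ll.WithField("snapshot_id", snapshot.GetId()).Info("using snapshot as volume source")
			volumeReq.SnapshotID = snapshot.GetId()
		}
	}

	vol, _, err := d.storage.CreateVolume(ctx, volumeReq)
	return vol, err
}
```

The essential change is the single `volumeReq.SnapshotID = snapshot.GetId()` line: without it, the driver silently ignored the content source and always created an empty volume.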
test/kubernetes/integration_test.go (outdated diff):
```go
Command: []string{
	"sh", "-c",
	"echo testcanary > /data/canary",
},
```
Why do we have these init containers? I couldn't figure out why we need them.
Added a comment, but basically, waitForPod considers a pod successful once it enters the Running state. Regular containers count as running once their health check passes, so by putting the assertion in an init container, the pod never enters Running if the assertion fails. The sketch below illustrates the pattern.
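To make the pattern concrete, here is a sketch of the reader side of the snapshot test; the pod name, images, and claim name are illustrative, not the test's exact values:

```go
package integration

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restoredPod builds a pod whose init container asserts the canary file
// restored from the snapshot. If the assertion fails, the init container
// exits non-zero and the pod never reaches the Running phase, which is
// the condition waitForPod treats as success.
func restoredPod(claimName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "snapshot-restore-test"},
		Spec: corev1.PodSpec{
			InitContainers: []corev1.Container{
				{
					Name:  "assert-canary",
					Image: "busybox",
					Command: []string{
						"sh", "-c",
						// Exit non-zero if the restored content differs
						// from what the source pod wrote.
						`test "$(cat /data/canary)" = testcanary`,
					},
					VolumeMounts: []corev1.VolumeMount{
						{Name: "data", MountPath: "/data"},
					},
				},
			},
			Containers: []corev1.Container{
				// The main container does nothing; it exists so the pod
				// can enter Running once the init container succeeds.
				{Name: "pause", Image: "k8s.gcr.io/pause:3.1"},
			},
			Volumes: []corev1.Volume{
				{
					Name: "data",
					VolumeSource: corev1.VolumeSource{
						PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
							ClaimName: claimName,
						},
					},
				},
			},
		},
	}
}
```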
Thanks @justinbarrick, this looks good, and thanks for the fix. I added a couple of fixes. Also, please open the PR against master rather than the release branches: master is our dev branch, and we update the release branches from master.
Addressed comments and rebased on master.
Force-pushed from db8bbe5 to 29f9c0b.
Force-pushed from 29f9c0b to 7138346.
@fatih rebased. Is there anything else you need on this?
@fatih I’d love to get this one merged. Anything I can do to help it over the finish line? As it stands, restoring a volume from a snapshot with the DigitalOcean CSI driver is 100% broken, forcing me to operate a fork.
Sorry @justinbarrick, I was derailed by other work and haven't had a chance to look again. Thanks for your patience and contribution.
* Cherry-pick: Add tagging support for Volumes via the new `--do-tag` flag [[GH-130]](#130)
* Cherry-pick: Fix support for volume snapshots by setting snapshot id on volume creation [[GH-129]](#129)
* Cherry-pick: Goreportcard fixes (typos, exported variables, etc.) [[GH-121]](#121)
* Cherry-pick: Rename the cluster role bindings for the `node-driver-registrar` to be consistent with the other role bindings. [[GH-118]](#118)
* Cherry-pick: Remove the `--token` flag for the `csi-do-node` driver. Drivers running on the node don't need the token anymore. [[GH-118]](#118)
* Cherry-pick: Don't check the volume limits on the worker nodes (worker nodes are not able to talk to the DigitalOcean API) [[GH-142]](#142)
* Cherry-pick: Update `godo` (DigitalOcean API package) version to v1.13.0 [[GH-143]](#143)
* Cherry-pick: Fix race in snapshot integration test. [[GH-146]](#146)
* Cherry-pick: Add tagging support for Volume snapshots via the new `--do-tag` flag [[GH-145]](#145)
This bug affects 1.0 as well; however, my cluster is on Kubernetes v1.12.2, so I've targeted 0.4.0.