enable topology support #88

Merged 1 commit on Sep 6, 2019
@@ -45,6 +45,7 @@ spec:
             - -v=5
             - --csi-address=/csi/csi.sock
             - --connection-timeout=15s
+            - --feature-gates=Topology=true
           volumeMounts:
             - mountPath: /csi
               name: socket-dir
@@ -45,6 +45,7 @@ spec:
             - -v=5
             - --csi-address=/csi/csi.sock
             - --connection-timeout=15s
+            - --feature-gates=Topology=true
           volumeMounts:
             - mountPath: /csi
               name: socket-dir
examples/csi-app.yaml (0 additions, 10 deletions)
@@ -3,16 +3,6 @@ apiVersion: v1
 metadata:
   name: my-csi-app
 spec:
-  affinity:
-    podAffinity:
-      requiredDuringSchedulingIgnoredDuringExecution:
-      - labelSelector:
-          matchExpressions:
-          - key: app
-            operator: In
-            values:
-            - csi-hostpathplugin
-        topologyKey: kubernetes.io/hostname
   containers:
   - name: my-frontend
     image: busybox
pkg/hostpath/controllerserver.go (14 additions, 7 deletions)
@@ -49,12 +49,13 @@ const (
 )
 
 type controllerServer struct {
-	caps []*csi.ControllerServiceCapability
+	caps   []*csi.ControllerServiceCapability
+	nodeID string
 }
 
-func NewControllerServer(ephemeral bool) *controllerServer {
+func NewControllerServer(ephemeral bool, nodeID string) *controllerServer {
 	if ephemeral {
-		return &controllerServer{caps: getControllerServiceCapabilities(nil)}
+		return &controllerServer{caps: getControllerServiceCapabilities(nil), nodeID: nodeID}
 	}
 	return &controllerServer{
 		caps: getControllerServiceCapabilities(

@@ -64,6 +65,7 @@ func NewControllerServer(ephemeral bool) *controllerServer {
 			csi.ControllerServiceCapability_RPC_LIST_SNAPSHOTS,
 			csi.ControllerServiceCapability_RPC_CLONE_VOLUME,
 		}),
+		nodeID: nodeID,
 	}
 }

@@ -164,12 +166,17 @@ func (cs *controllerServer) CreateVolume(ctx context.Context, req *csi.CreateVol
 		glog.V(4).Infof("successfully populated volume %s", vol.VolID)
 	}
 
+	topologies := []*csi.Topology{&csi.Topology{
+		Segments: map[string]string{TopologyKeyNode: cs.nodeID},
+	}}
+
 	return &csi.CreateVolumeResponse{
 		Volume: &csi.Volume{
-			VolumeId:      volumeID,
-			CapacityBytes: req.GetCapacityRange().GetRequiredBytes(),
-			VolumeContext: req.GetParameters(),
-			ContentSource: req.GetVolumeContentSource(),
+			VolumeId:           volumeID,
+			CapacityBytes:      req.GetCapacityRange().GetRequiredBytes(),
+			VolumeContext:      req.GetParameters(),
+			ContentSource:      req.GetVolumeContentSource(),
+			AccessibleTopology: topologies,
 		},
 	}, nil
 }
pkg/hostpath/hostpath.go (1 addition, 1 deletion)
@@ -126,7 +126,7 @@ func (hp *hostPath) Run() {
 	// Create GRPC servers
 	hp.ids = NewIdentityServer(hp.name, hp.version)
 	hp.ns = NewNodeServer(hp.nodeID, hp.ephemeral)
-	hp.cs = NewControllerServer(hp.ephemeral)
+	hp.cs = NewControllerServer(hp.ephemeral, hp.nodeID)
 
 	s := NewNonBlockingGRPCServer()
 	s.Start(hp.endpoint, hp.ids, hp.cs, hp.ns)
pkg/hostpath/identityserver.go (7 additions, 0 deletions)
@@ -68,6 +68,13 @@ func (ids *identityServer) GetPluginCapabilities(ctx context.Context, req *csi.G
 					},
 				},
 			},
+			{
+				Type: &csi.PluginCapability_Service_{
+					Service: &csi.PluginCapability_Service{
+						Type: csi.PluginCapability_Service_VOLUME_ACCESSIBILITY_CONSTRAINTS,
Collaborator: CreateVolume also needs to be updated to return topology in the response.

Collaborator: Topology also has to be feature-gated on in the csi-provisioner spec.

Contributor (author): fixed, thanks!

Collaborator: Sorry, I just realized that we can't update the deployment spec until we tag a new image. So we may need to split this into 2 PRs:

  1. Add topology to the driver. This may need to be manually tested with a private image. After this is merged, we can cut a new hostpath driver image.
  2. Update the deployment spec.

Contributor (author): I reverted the deployment spec changes; I'll try to test the driver on my local env.

Contributor (author): I'm just trying to understand: why should we separate the deployment spec changes from the driver topology support changes?

Collaborator: Actually, maybe it won't cause issues. The pull jobs should replace the hostpath driver image with a custom one built from the PR. But the CI jobs run with the hostpath driver image specified in the deployment spec, so they would start running with topology enabled in the provisioner while the tagged driver doesn't have it enabled yet. I think that may actually be fine, though, because the provisioner checks both the feature gate and whether the driver supports the capability. So maybe there isn't a problem.
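For reference, a minimal sketch of the driver-side half of that check: asking the identity service whether the driver advertises the capability added in this PR. This is illustrative only, not the external-provisioner's actual code; the socket address and connection setup are assumptions.

// Sketch only: probe a CSI driver's identity service for topology support.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

func driverSupportsTopology(ctx context.Context, conn *grpc.ClientConn) (bool, error) {
	resp, err := csi.NewIdentityClient(conn).GetPluginCapabilities(ctx, &csi.GetPluginCapabilitiesRequest{})
	if err != nil {
		return false, err
	}
	for _, c := range resp.GetCapabilities() {
		// VOLUME_ACCESSIBILITY_CONSTRAINTS is what the identityserver.go change above advertises.
		if c.GetService().GetType() == csi.PluginCapability_Service_VOLUME_ACCESSIBILITY_CONSTRAINTS {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Assumed socket path, matching the /csi mount in the deployment specs above.
	conn, err := grpc.Dial("unix:///csi/csi.sock", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ok, err := driverSupportsTopology(context.Background(), conn)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("driver advertises topology support:", ok)
}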

Contributor (author): Then I'll add the feature gate to the provisioner?

+					},
+				},
+			},
 		},
 	}, nil
 }
pkg/hostpath/nodeserver.go (8 additions, 1 deletion)
@@ -31,6 +31,8 @@ import (
 	"k8s.io/kubernetes/pkg/volume/util/volumepathhandler"
 )
 
+const TopologyKeyNode = "topology.hostpath.csi/node"
+
 type nodeServer struct {
 	nodeID    string
 	ephemeral bool
@@ -261,8 +263,13 @@ func (ns *nodeServer) NodeUnstageVolume(ctx context.Context, req *csi.NodeUnstag
 
 func (ns *nodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
 
+	topology := &csi.Topology{
+		Segments: map[string]string{TopologyKeyNode: ns.nodeID},
+	}
+
 	return &csi.NodeGetInfoResponse{
-		NodeId: ns.nodeID,
+		NodeId:             ns.nodeID,
+		AccessibleTopology: topology,
 	}, nil
 }
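For reference, a quick way to see the new NodeGetInfo behavior is to call it on a running driver over its CSI socket. This is an illustrative sketch, not part of the PR; the unix:///csi/csi.sock endpoint is an assumption taken from the /csi mount in the deployment specs.

// Sketch only: query a running hostpath driver for its node topology.
// With this PR, the segments should contain topology.hostpath.csi/node=<node name>.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

func main() {
	conn, err := grpc.Dial("unix:///csi/csi.sock", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	resp, err := csi.NewNodeClient(conn).NodeGetInfo(context.Background(), &csi.NodeGetInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("node id:", resp.GetNodeId())
	fmt.Println("accessible topology:", resp.GetAccessibleTopology().GetSegments())
}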