
# How it works

## 1) The CLI

Everything in kops is currently driven by a command line interface. We use cobra to define all of our command line UX.

All of the CLI code for kops can be found in /cmd/kops.

For instance, if you are interested in finding the entry point to kops create cluster you would look in /cmd/kops/create_cluster.go. There you would find a function called RunCreateCluster(). That is the entry point of the command.

## 2) Storage

kops interacts with its storage through an abstraction called the clientset, which reads and writes cluster definitions in remote storage. This storage is commonly referred to as the kops STATE STORE.

We can access it through the util.Factory struct, as in:

```go
func RunMyCommand(f *util.Factory, out io.Writer, c *MyCommandOptions) error {
	clientset, err := f.Clientset()
	if err != nil {
		return err
	}
	cluster, err := clientset.Clusters().Get(clusterName)
	if err != nil {
		return err
	}
	fmt.Fprintln(out, cluster)
	return nil
}
```

The available Clientset functions are:

- `Create()`
- `Update()`
- `Get()`
- `List()`

## 3) The API

### a) Cluster Spec

The kops API is a definition of struct members in Go code found here. The kops API does NOT match the command line interface (by design). We use the native Kubernetes API machinery to manage versioning of the kops API.

The base-level struct of the API is api.Cluster{}, which is defined here. The top-level struct contains meta information about the object, such as the kind and version; the main data for the cluster itself can be found in cluster.Spec.

It is important to note that the API members are a representation of a Kubernetes cluster. These values are stored in the kops STATE STORE mentioned above for later use. By design, kops does not store information in the state store that it can infer by looking at the actual state of the cloud.

More information on the API can be found here.

### b) Instance Groups

In order for kops to create any servers, we need to define instance groups. These are passed around as a slice of pointers to kops.InstanceGroup structs:

```go
var instanceGroups []*kops.InstanceGroup
```

Each instance group represents a group of instances in a cloud. Each instance group (or IG) defines values about the group of instances such as their size, volume information, etc. The definition can also be found in the /pkg/apis/kops/instancegroup.go file here.

## 4) Cloudup

### a) The ApplyClusterCmd

After a user has built out a valid api.Cluster{} and a valid []*kops.InstanceGroup, they can begin interacting with the core logic in kops.

A user can build a cloudup.ApplyClusterCmd defined here as follows:

```go
applyCmd := &cloudup.ApplyClusterCmd{
    Cluster:         cluster,
    Clientset:       clientset,
    TargetName:      "target",                               // ${GOPATH}/src/k8s.io/kops/upup/pkg/fi/cloudup/target.go:19
    OutDir:          c.OutDir,
    DryRun:          isDryrun,
    MaxTaskDuration: 10 * time.Minute,                       // ${GOPATH}/src/k8s.io/kops/upup/pkg/fi/cloudup/apply_cluster.go
    InstanceGroups:  instanceGroups,
}
```

Now that the ApplyClusterCmd has been populated, we can attempt to run our apply.

```go
err = applyCmd.Run()
```

This is where we enter the core of the kops logic. The starting point can be found here. The apply operation behaves differently depending on the directives defined in the ApplyClusterCmd above.

### b) Validation

From within the ApplyClusterCmd.Run() function, we first sanitize our input by validating the operation. There are many examples of this validation at the top of the function.

### c) The Cloud

The cluster.Spec.CloudProvider field should have been populated earlier; it is switched on to build our cloud, as in here. If you are interested in creating a new cloud implementation, the interface is defined here, with the AWS example here.

Note: As it stands, the FindVPCInfo() function is a defined member of the interface. This function is AWS-specific and will eventually be pulled out of the interface. For now, please implement it as a no-op.

### d) The model

The model is what maps the abstract Cluster Spec (defined earlier) to concrete tasks. Each task is a representation of an API request against a cloud. If you plan on implementing a new cloud, one option is to define a new model context type and build custom model builders for your cloud's objects.

The existing model code can be found here.

Once a model builder has been defined, as in here, it will be called automatically.

Each builder contains concrete logic that dictates which tasks need to run in order to apply a resource to a cloud. Tasks are added by calling the AddTask() function, as in here.

Once all of the model builders have been called, all of the tasks will have been set.

### e) Tasks

A task is typically a representation of a single API call. The task interface is defined here.

Note: for more advanced clouds like AWS, there are also Find() and Render() functions in the core task-execution logic, defined here.

## 5) Nodeup

Nodeup is a standalone binary that handles bootstrapping the Kubernetes cluster. There is a shell script here that will bootstrap nodeup. The AWS implementation uses cloud-init to run the script on an instance. All new clouds will need to figure out best practices for bootstrapping nodeup on their platform.