diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 5e71cd88dfa..88f64a0a5ab 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -37,9 +37,9 @@ Due to their public nature, GitHub and mailing lists are not appropriate places
 - Read the [README](README.md) for build and test instructions
 - Play with the project, submit bugs, submit patches!
 - Go get a couple of projects necessary for updating docs and examples:
-  ```shell
-  $ go get github.com/segmentio/terraform-docs
-  $ go get github.com/openshift/installer/contrib/terraform-examples
+  ```sh
+  go get github.com/segmentio/terraform-docs
+  go get github.com/openshift/installer/contrib/terraform-examples
   ```

 ### Contribution Flow
@@ -52,8 +52,8 @@ This is a rough outline of what a contributor's workflow looks like:
 - Push your changes to a topic branch in your fork of the repository.
 - Make sure the tests pass, and add any new tests as appropriate.
 - Please run this command before submitting your pull request:
-  ```shell
-  $ make structure-check
+  ```sh
+  make structure-check
   ```
 - Note that a portion of the docs and examples are generated and that the generated files are to be committed by you. `make structure-check` checks that what is generated is what you must commit.
 - Submit a pull request to the original repository.
diff --git a/Documentation/dev/libvirt-howto.md b/Documentation/dev/libvirt-howto.md
index 39b1db7697a..1c6be352522 100644
--- a/Documentation/dev/libvirt-howto.md
+++ b/Documentation/dev/libvirt-howto.md
@@ -11,20 +11,20 @@ It's expected that you will create and destroy clusters often in the course of d
 In this example, we'll set the baseDomain to `tt.testing`, the name to `test1` and the ipRange to `192.168.124.0/24`

 #### 1.2 Clone the repo
-```
+```sh
 git clone https://github.com/openshift/installer.git
 cd installer
 ```

 #### 1.3 Download the Container Linux image
 You will need to do this every time Container Linux has a release.
-```
+```sh
 wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_qemu_image.img.bz2
 bunzip2 coreos_production_qemu_image.img.bz2
 ```

 Because of the greater disk requirements of OpenShift, you'll need to expand the root drive with the following:
-```
+```sh
 qemu-img resize coreos_production_qemu_image.img +8G
 ```

@@ -47,7 +47,7 @@ Go to https://account.coreos.com/ and obtain a Tectonic license. Save the *pull
 This step is optional, but useful for being able to resolve cluster-internal hostnames from your host.
 1. Edit `/etc/NetworkManager/NetworkManager.conf` and set `dns=dnsmasq` in section `[main]`
 2. Tell dnsmasq to use your cluster. The syntax is `server=/<domain>/<address>`. For this example:
-```
+```sh
 echo server=/tt.testing/192.168.124.1 | sudo tee /etc/NetworkManager/dnsmasq.d/tectonic.conf
 ```
 3. `systemctl restart NetworkManager`
@@ -55,7 +55,7 @@ echo server=/tt.testing/192.168.124.1 | sudo tee /etc/NetworkManager/dnsmasq.d/t
 #### 1.7 Install the terraform provider
 1. Make sure you have the `virsh` binary installed: `sudo dnf install libvirt-client libvirt-devel`
 2. Install the libvirt terraform provider:
-```
+```sh
 go get github.com/dmacvicar/terraform-provider-libvirt
 mkdir -p ~/.terraform.d/plugins
 cp $GOPATH/bin/terraform-provider-libvirt ~/.terraform.d/plugins/
@@ -64,31 +64,31 @@ cp $GOPATH/bin/terraform-provider-libvirt ~/.terraform.d/plugins/

 ### 2. Build the installer
 Following the instructions in the root README:
-```
+```sh
 bazel build tarball
 ```

 ### 3. Create a cluster
-```
+```sh
 tar -zxf bazel-bin/tectonic-dev.tar.gz
 cd tectonic-dev
 export PATH=$(pwd)/installer:$PATH
 ```

 Initialize (the environment variables are a convenience):
-```
+```sh
 tectonic init --config=../tectonic.libvirt.yaml
 export CLUSTER_NAME=
 export BASE_DOMAIN=
 ```

 Install ($CLUSTER_NAME is `test1`):
-```
+```sh
 tectonic install --dir=$CLUSTER_NAME
 ```

 When you're done, destroy:
-```
+```sh
 tectonic destroy --dir=$CLUSTER_NAME
 ```
 Be sure to destroy, or else you will need to manually use virsh to clean up the leaked resources.
@@ -99,7 +99,7 @@ Some things you can do:

 ## Watch the bootstrap process
 The first master node, e.g. test1-master-0.tt.testing, runs the tectonic bootstrap process. You can watch it:
-```
+```sh
 ssh core@$CLUSTER_NAME-master-0.$BASE_DOMAIN
 sudo journalctl -f -u bootkube -u tectonic
 ```
@@ -107,7 +107,7 @@ You'll have to wait for etcd to reach quorum before this makes any progress.

 ## Inspect the cluster with kubectl
 You'll need a kubectl binary on your path.
-```
+```sh
 export KUBECONFIG=$(pwd)/$CLUSTER_NAME/generated/auth/kubeconfig
 kubectl get -n tectonic-system pods
 ```
diff --git a/README.md b/README.md
index fc5094a6d47..e95b0328e40 100644
--- a/README.md
+++ b/README.md
@@ -11,43 +11,43 @@ https://coreos.com/blog/coreos-tech-to-combine-with-red-hat-openshift

 These instructions can be used for AWS:

 1. Build the project
-   ```shell
+   ```sh
    bazel build tarball
    ```

    *Note*: the project can optionally be built without installing Bazel, provided Docker is installed:
-   ```shell
+   ```sh
    docker run --rm -v $PWD:$PWD:Z -w $PWD quay.io/coreos/tectonic-builder:bazel-v0.3 bazel --output_base=.cache build tarball
    ```

 2. Extract the tarball
-   ```shell
+   ```sh
    tar -zxf bazel-bin/tectonic-dev.tar.gz
    cd tectonic-dev
    ```

 3. Add binaries to $PATH
-   ```shell
+   ```sh
    export PATH=$(pwd)/installer:$PATH
    ```

 4. Edit Tectonic configuration file including the $CLUSTER_NAME
-   ```shell
+   ```sh
    $EDITOR examples/tectonic.aws.yaml
    ```

 5. Init Tectonic CLI
-   ```shell
+   ```sh
    tectonic init --config=examples/tectonic.aws.yaml
    ```

 6. Install Tectonic cluster
-   ```shell
+   ```sh
    tectonic install --dir=$CLUSTER_NAME
    ```

 7. Teardown Tectonic cluster
-   ```shell
+   ```sh
    tectonic destroy --dir=$CLUSTER_NAME
    ```

@@ -65,7 +65,7 @@ To add a new dependency:

 - Ensure you add a `version` field for the sha or tag you want to pin to.
 - Revendor the dependencies:
-```
+```sh
 rm glide.lock
 glide install --strip-vendor
 glide-vc --use-lock-file --no-tests --only-code
diff --git a/tests/smoke/aws/README.md b/tests/smoke/aws/README.md
index e55ffe2b39b..a2e801d41a6 100644
--- a/tests/smoke/aws/README.md
+++ b/tests/smoke/aws/README.md
@@ -28,12 +28,12 @@ To begin, verify that the following environment variables are set:
 A sensible value is `git rev-parse --abbrev-ref HEAD`.

 Example:
-```
-$ export AWS_ACCESS_KEY_ID=AKIAIQ5TVFGQ7CKWD6IA
-$ export AWS_SECRET_ACCESS_KEY_ID=rtp62V7H/JDY3cNBAs5vA0coaTou/OQbqJk96Hws
-$ export TF_VAR_tectonic_license_path="/home/user/tectonic-license"
-$ export TF_VAR_tectonic_pull_secret_path="/home/user/coreos-inc/pull-secret"
-$ export TF_VAR_tectonic_aws_ssh_key="user"
+```sh
+export AWS_ACCESS_KEY_ID=AKIAIQ5TVFGQ7CKWD6IA
+export AWS_SECRET_ACCESS_KEY_ID=rtp62V7H/JDY3cNBAs5vA0coaTou/OQbqJk96Hws
+export TF_VAR_tectonic_license_path="/home/user/tectonic-license"
+export TF_VAR_tectonic_pull_secret_path="/home/user/coreos-inc/pull-secret"
+export TF_VAR_tectonic_aws_ssh_key="user"
 ```

 ## Assume Role
@@ -90,16 +90,16 @@ Once all testing has concluded, clean up the AWS resources that were created:

 ## Sanity test cheatsheet
 To be able to ssh into the created machines, determine the generated cluster name and use the [AWS client](http://docs.aws.amazon.com/cli/latest/userguide/installing.html) to retrieve the public IP address or search for nodes having the cluster name via the AWS Web UI in "EC2 -> Instances":
-```sh
+```console
 $ ls build
 aws-exp-master-1012345678901
 $ export CLUSTER_NAME=aws-exp-master-1012345678901
-$ aws autoscaling describe-auto-scaling-groups \
-  | jq -r '.AutoScalingGroups[] | select(.AutoScalingGroupName | contains("'${CLUSTER_NAME}'")) | .Instances[].InstanceId' \
-  | xargs aws ec2 describe-instances --instance-ids \
-  | jq '.Reservations[].Instances[] | select(.PublicIpAddress != null) | .PublicIpAddress'
+$ aws autoscaling describe-auto-scaling-groups |
+> jq -r '.AutoScalingGroups[] | select(.AutoScalingGroupName | contains("'${CLUSTER_NAME}'")) | .Instances[].InstanceId' |
+> xargs aws ec2 describe-instances --instance-ids |
+> jq '.Reservations[].Instances[] | select(.PublicIpAddress != null) | .PublicIpAddress'
 "52.15.184.15"
 $ ssh -A core@52.15.184.15