10 changes: 5 additions & 5 deletions CONTRIBUTING.md
@@ -37,9 +37,9 @@ Due to their public nature, GitHub and mailing lists are not appropriate places
- Read the [README](README.md) for build and test instructions
- Play with the project, submit bugs, submit patches!
- Go get a couple of projects necessary for updating docs and examples:
```shell
$ go get github.com/segmentio/terraform-docs
$ go get github.com/openshift/installer/contrib/terraform-examples
```sh
go get github.com/segmentio/terraform-docs
go get github.com/openshift/installer/contrib/terraform-examples
```

### Contribution Flow
@@ -52,8 +52,8 @@ This is a rough outline of what a contributor's workflow looks like:
- Push your changes to a topic branch in your fork of the repository.
- Make sure the tests pass, and add any new tests as appropriate.
- Please run this command before submitting your pull request:
```shell
$ make structure-check
```sh
make structure-check
```
- Note that a portion of the docs and examples is generated, and the generated files must be committed by you. `make structure-check` verifies that what you commit matches what is generated.
- Submit a pull request to the original repository.
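
The workflow above, condensed into a minimal sketch (the branch name and commit message are illustrative, not conventions required by this repository):

```sh
# Create a topic branch in your fork and commit your changes.
git checkout -b docs/update-examples
git add -A
git commit -m "docs: update generated examples"

# Verify that the generated docs/examples you committed are up to date.
make structure-check

# Push the topic branch and open a pull request against the original repository.
git push origin docs/update-examples
```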
24 changes: 12 additions & 12 deletions Documentation/dev/libvirt-howto.md
@@ -11,20 +11,20 @@ It's expected that you will create and destroy clusters often in the course of d
In this example, we'll set the baseDomain to `tt.testing`, the name to `test1`, and the ipRange to `192.168.124.0/24`.

#### 1.2 Clone the repo
```
```sh
git clone https://github.com/openshift/installer.git
cd installer
```

#### 1.3 Download the Container Linux image
You will need to do this every time Container Linux has a release.
```
```sh
wget https://stable.release.core-os.net/amd64-usr/current/coreos_production_qemu_image.img.bz2
bunzip2 coreos_production_qemu_image.img.bz2
```

Because of the greater disk requirements of OpenShift, you'll need to expand the root drive with the following:
```
```sh
qemu-img resize coreos_production_qemu_image.img +8G
```

@@ -47,15 +47,15 @@ Go to https://account.coreos.com/ and obtain a Tectonic license. Save the *pull
This step is optional, but useful if you want to resolve cluster-internal hostnames from your host.
1. Edit `/etc/NetworkManager/NetworkManager.conf` and set `dns=dnsmasq` in section `[main]`
2. Tell dnsmasq to use your cluster. The syntax is `server=/<baseDomain>/<firstIP>`. For this example:
```
```sh
echo server=/tt.testing/192.168.124.1 | sudo tee /etc/NetworkManager/dnsmasq.d/tectonic.conf
```
3. `systemctl restart NetworkManager`
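
Once a cluster is running, a quick way to confirm that the dnsmasq forwarding works is to resolve a cluster hostname from your host. A minimal check, assuming the example names used elsewhere in this guide (adjust to a node that actually exists in your cluster):

```sh
# Should print an address from 192.168.124.0/24 once the node is up.
dig +short test1-master-0.tt.testing
```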

#### 1.7 Install the terraform provider
1. Make sure you have the `virsh` binary installed: `sudo dnf install libvirt-client libvirt-devel`
2. Install the libvirt terraform provider:
```
```sh
go get github.com/dmacvicar/terraform-provider-libvirt
mkdir -p ~/.terraform.d/plugins
cp $GOPATH/bin/terraform-provider-libvirt ~/.terraform.d/plugins/
@@ -64,31 +64,31 @@ cp $GOPATH/bin/terraform-provider-libvirt ~/.terraform.d/plugins/
### 2. Build the installer
Following the instructions in the root README:

```
```sh
bazel build tarball
```

### 3. Create a cluster
```
```sh
tar -zxf bazel-bin/tectonic-dev.tar.gz
cd tectonic-dev
export PATH=$(pwd)/installer:$PATH
```

Initialize (the environment variables are a convenience):
```
```sh
tectonic init --config=../tectonic.libvirt.yaml
export CLUSTER_NAME=<the cluster name>
export BASE_DOMAIN=<the base domain>
```
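
For the example values used in this guide (a convenience sketch matching section 1.1):

```sh
export CLUSTER_NAME=test1
export BASE_DOMAIN=tt.testing
```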

Install ($CLUSTER_NAME is `test1`):
```
```sh
tectonic install --dir=$CLUSTER_NAME
```

When you're done, destroy:
```
```sh
tectonic destroy --dir=$CLUSTER_NAME
```
Be sure to destroy, or else you will need to manually use virsh to clean up the leaked resources.
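
If you do end up with leaked resources, a rough manual cleanup with virsh looks like the following. The domain, volume, and network names below are assumptions based on the example cluster name; inspect the list output before deleting anything:

```sh
# See what the abandoned cluster left behind.
virsh list --all
virsh vol-list default
virsh net-list --all

# Remove a leaked domain and its definition (repeat for each domain).
virsh destroy test1-master-0
virsh undefine test1-master-0

# Remove a leaked volume and network (names are illustrative).
virsh vol-delete --pool default test1-master-0.qcow2
virsh net-destroy test1
virsh net-undefine test1
```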
@@ -99,15 +99,15 @@ Some things you can do:
## Watch the bootstrap process
The first master node, e.g. test1-master-0.tt.testing, runs the tectonic bootstrap process. You can watch it:

```
```sh
ssh core@$CLUSTER_NAME-master-0.$BASE_DOMAIN
sudo journalctl -f -u bootkube -u tectonic
```
You'll have to wait for etcd to reach quorum before this makes any progress.

## Inspect the cluster with kubectl
You'll need a kubectl binary on your path.
```
```sh
export KUBECONFIG=$(pwd)/$CLUSTER_NAME/generated/auth/kubeconfig
kubectl get -n tectonic-system pods
```
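
A couple of extra sanity checks once the kubeconfig is exported (standard kubectl commands, not specific to this installer):

```sh
kubectl get nodes
kubectl get pods --all-namespaces
```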
18 changes: 9 additions & 9 deletions README.md
@@ -11,43 +11,43 @@ https://coreos.com/blog/coreos-tech-to-combine-with-red-hat-openshift
These instructions can be used for AWS:

1. Build the project
```shell
```sh
bazel build tarball
```

*Note*: the project can optionally be built without installing Bazel, provided Docker is installed:
```shell
```sh
docker run --rm -v $PWD:$PWD:Z -w $PWD quay.io/coreos/tectonic-builder:bazel-v0.3 bazel --output_base=.cache build tarball
```

2. Extract the tarball
```shell
```sh
tar -zxf bazel-bin/tectonic-dev.tar.gz
cd tectonic-dev
```

3. Add binaries to $PATH
```shell
```sh
export PATH=$(pwd)/installer:$PATH
```

4. Edit the Tectonic configuration file, including the $CLUSTER_NAME
```shell
```sh
$EDITOR examples/tectonic.aws.yaml
```
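
Steps 6 and 7 below refer to `$CLUSTER_NAME`; a convenience sketch is to export it with the cluster name you set in the configuration file (the value is a placeholder):

```sh
export CLUSTER_NAME=<the cluster name set in tectonic.aws.yaml>
```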

5. Init Tectonic CLI
```shell
```sh
tectonic init --config=examples/tectonic.aws.yaml
```

6. Install Tectonic cluster
```shell
```sh
tectonic install --dir=$CLUSTER_NAME
```

7. Teardown Tectonic cluster
```shell
```sh
tectonic destroy --dir=$CLUSTER_NAME
```

@@ -65,7 +65,7 @@ To add a new dependency:
- Ensure you add a `version` field for the sha or tag you want to pin to.
- Revendor the dependencies:

```
```sh
rm glide.lock
glide install --strip-vendor
glide-vc --use-lock-file --no-tests --only-code
22 changes: 11 additions & 11 deletions tests/smoke/aws/README.md
@@ -28,12 +28,12 @@ To begin, verify that the following environment variables are set:
A sensible value is `git rev-parse --abbrev-ref HEAD`.

Example:
```
$ export AWS_ACCESS_KEY_ID=AKIAIQ5TVFGQ7CKWD6IA
$ export AWS_SECRET_ACCESS_KEY_ID=rtp62V7H/JDY3cNBAs5vA0coaTou/OQbqJk96Hws
$ export TF_VAR_tectonic_license_path="/home/user/tectonic-license"
$ export TF_VAR_tectonic_pull_secret_path="/home/user/coreos-inc/pull-secret"
$ export TF_VAR_tectonic_aws_ssh_key="user"
```sh
export AWS_ACCESS_KEY_ID=AKIAIQ5TVFGQ7CKWD6IA
export AWS_SECRET_ACCESS_KEY_ID=rtp62V7H/JDY3cNBAs5vA0coaTou/OQbqJk96Hws
export TF_VAR_tectonic_license_path="/home/user/tectonic-license"
export TF_VAR_tectonic_pull_secret_path="/home/user/coreos-inc/pull-secret"
export TF_VAR_tectonic_aws_ssh_key="user"
```
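
A quick way to confirm the variables are all set before running the tests (a simple shell check, not part of the test harness; it prints only the names so secrets stay off your terminal):

```sh
env | grep -E '^(AWS_|TF_VAR_tectonic_)' | cut -d= -f1
```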

## Assume Role
@@ -90,16 +90,16 @@ Once all testing has concluded, clean up the AWS resources that were created:
## Sanity test cheatsheet
To SSH into the created machines, determine the generated cluster name, then either use the [AWS client](http://docs.aws.amazon.com/cli/latest/userguide/installing.html) to retrieve the public IP address or search for nodes with the cluster name in the AWS web UI under "EC2 -> Instances":

```sh
```console
$ ls build
aws-exp-master-1012345678901

$ export CLUSTER_NAME=aws-exp-master-1012345678901

$ aws autoscaling describe-auto-scaling-groups \
| jq -r '.AutoScalingGroups[] | select(.AutoScalingGroupName | contains("'${CLUSTER_NAME}'")) | .Instances[].InstanceId' \
| xargs aws ec2 describe-instances --instance-ids \
| jq '.Reservations[].Instances[] | select(.PublicIpAddress != null) | .PublicIpAddress'
$ aws autoscaling describe-auto-scaling-groups |
> jq -r '.AutoScalingGroups[] | select(.AutoScalingGroupName | contains("'${CLUSTER_NAME}'")) | .Instances[].InstanceId' |
> xargs aws ec2 describe-instances --instance-ids |
> jq '.Reservations[].Instances[] | select(.PublicIpAddress != null) | .PublicIpAddress'
"52.15.184.15"

$ ssh -A [email protected]