From 3014904119b431168ef358832348bdab6fb5f1c6 Mon Sep 17 00:00:00 2001
From: Vnaumov
Date: Thu, 22 Mar 2018 15:43:56 +0400
Subject: [PATCH] update documentation related to additional services

- mail service attachment
- proxy service attachment
- Prometheus service attachment
- cloud provider settings

Note that there is no default Nginx service configuration; it depends on
https://github.com/Mirantis/kqueen/pull/246.
---
 RATIONALE.md    |   1 -
 docs/kqueen.rst | 189 ++++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 183 insertions(+), 7 deletions(-)

diff --git a/RATIONALE.md b/RATIONALE.md
index 27c6de01..0c012531 100644
--- a/RATIONALE.md
+++ b/RATIONALE.md
@@ -65,4 +65,3 @@ KQueen supplies the backend API for provider-agnostic cluster management. It ena
 * **Update** - install newer version of Kubernetes
 * **Autoscale** - watch Kubernetes scheduler or pods and start new minions when all existing minions are fully utilized
 * **Manage addons** - enable or disable cluster addons
-
diff --git a/docs/kqueen.rst b/docs/kqueen.rst
index d041ba99..015d63d4 100644
--- a/docs/kqueen.rst
+++ b/docs/kqueen.rst
@@ -152,17 +152,197 @@ in the configuration file, set the environment variable matching the KQUEEN_
+
+The exported application metrics can be accessed in either of the following ways:
+
+* KQueen API:
+
+  1. Obtain an access token:
+
+  .. code-block:: bash
+
+     TOKEN=$(curl -s -H "Content-Type: application/json" --data '{"username": "<username>", "password": "<password>"}' -X POST <kqueen_api_host>:5000/api/v1/auth | jq -r '.access_token'); echo $TOKEN
+
+  2. Run the following command:
+
+  .. code-block:: bash
+
+     curl -H "Authorization: Bearer $TOKEN" <kqueen_api_host>:5000/metrics/
+
+* Prometheus API:
+
+  1. Add the scraper IP address to the PROMETHEUS_WHITELIST configuration.
+  2. Run the following command:
+
+  .. code-block:: bash
+
+     curl <prometheus_host>:<prometheus_port>/metrics
+
+All application metrics are exported to the **/metrics** API endpoint. Any
+external Prometheus instance can then scrape these metrics.
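+
+For example, a standalone Prometheus instance can scrape this endpoint with a
+configuration similar to the sketch below. The target address is a placeholder,
+the ``prom/prometheus`` image is only one possible way to run Prometheus, and
+the scraper address is assumed to be present in the PROMETHEUS_WHITELIST
+configuration:
+
+.. code-block:: bash
+
+   # prometheus.yml must contain a scrape job that points at the KQueen API host,
+   # for example:
+   #
+   #   scrape_configs:
+   #     - job_name: kqueen
+   #       metrics_path: /metrics
+   #       static_configs:
+   #         - targets: ['<kqueen_api_host>:5000']
+   #
+   # Start a standalone Prometheus container with this configuration:
+   docker run -d --name prometheus -p 9090:9090 \
+     -v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml" \
+     prom/prometheus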
+
+
+Provision a Kubernetes cluster
+------------------------------
+
+You can provision a Kubernetes cluster using various engines, such as Google
+Kubernetes Engine or Azure Kubernetes Service.
+
+**To provision a Kubernetes cluster using the Google Kubernetes Engine:**
+
+1. Log in to the Google Cloud Console (https://console.cloud.google.com).
+2. Select your project.
+3. Navigate to the ``APIs & Services`` -> ``Credentials`` tab and click ``Create credentials``.
+4. From ``Service Account key``, select your service account.
+5. Select JSON as the key format.
+6. Download the JSON snippet.
+7. Log in to the KQueen web UI.
+8. From the ``Create Provisioner`` tab, select ``Google Kubernetes Engine``.
+9. Specify your project ID (see the ``Project info`` tab on the main page of the GCE dashboard, https://console.cloud.google.com).
+10. Insert the downloaded JSON snippet that contains the service account key and submit the provisioner creation.
+11. Click ``Deploy Cluster``.
+12. Select the defined GKE provisioner.
+13. Specify the cluster requirements.
+14. Click ``Submit``.
+15. To track the cluster status, navigate to the KQueen main dashboard.
+
+**To provision a Kubernetes cluster using the Azure Kubernetes Service:**
+
+1. Log in to https://portal.azure.com.
+2. Create an Azure Active Directory Application as described in the official
+   Microsoft documentation,
+   https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal#create-an-azure-active-directory-application.
+3. Copy the Application ID, Application Secret, Tenant ID (Directory ID), and
+   Subscription ID to use in step 8.
+4. Set the ``Owner`` role for your Application in the Subscription settings to
+   enable the creation of Kubernetes clusters. For details, see the latest
+   steps in the Microsoft official documentation referenced in step 2.
+   Manually save the Application Secret, because you will not be able to
+   retrieve it later: you provide the secret value together with the
+   Application ID to log in as the application, so store the key value where
+   your application can retrieve it.
+5. Navigate to the ``Resource groups`` tab and create a resource group. Copy
+   the ``Resource group name`` to use in step 8.
+6. In the ``Resource groups`` -> your_group -> ``Access control (IAM)`` tab,
+   verify that the Application has the ``Owner`` role in the resource group.
+7. Log in to the KQueen web UI.
+8. From the ``Create Provisioner`` tab, select the AKS engine and set the following:
+
+   1. Set ``Client ID`` to the Application ID from step 3.
+   2. Set ``Resource group name`` to the resource group name from step 5.
+   3. Set ``Secret`` to the Application Secret from step 3.
+   4. Set ``Subscription ID`` to the Subscription ID from step 3.
+   5. Set ``Tenant ID`` to the Tenant (Directory) ID from step 3.
+
+9. In the KQueen web UI, click ``Deploy Cluster``.
+10. Select the AKS provisioner.
+11. Specify the cluster requirements.
+12. Specify the public SSH key to use for connecting to the AKS VMs. To access
+    a created VM over SSH, assign a public IP address to it (for an example,
+    see https://gist.github.com/naumvd95/576d6e48200597ca89b26de15e8d3675).
+    Then connect using ``ssh azureuser@<public_ip> -i .ssh/your_defined_id_rsa``.
+13. Click ``Submit``.
+14. To track the cluster status, navigate to the KQueen main dashboard.
+
+.. note::
+
+   The Admin Console in the Azure portal is supported only in Internet Explorer
+   and Microsoft Edge and may fail to operate in other browsers due to
+   Microsoft issues such as
+   https://microsoftintune.uservoice.com/forums/291681-ideas/suggestions/18602776-admin-console-support-on-mac-osx.
+
+.. note::
+
+   During Kubernetes cluster creation, AKS creates an additional resource group
+   that uses the defined resource group name as a prefix. This may affect your
+   billing. For example, with the resource group ``Kqueen``, the additional
+   cluster-generated resource group may be named
+   ``MC_Kqueen_44a37a65-1dff-4ef8-97ca-87fa3b8aee62_eastus``. For details, see
+   https://github.com/Azure/AKS/issues/3 and
+   https://docs.microsoft.com/en-us/azure/aks/faq#why-are-two-resource-groups-created-with-aks.
+
+**To manually add an existing Kubernetes cluster:**
+
+1. Log in to the KQueen web UI.
+2. In the ``Create Cluster`` tab, provide a valid Kubernetes configuration file.
+
+As a result, the Kubernetes cluster will be attached in a read-only mode.
+
+ETCD Backup
+-----------
+
+Etcd is the only stateful component of KQueen. To recover etcd in case of a
+failure, follow the procedure described in
+https://coreos.com/etcd/docs/latest/v2/admin_guide.html#disaster-recovery.
+
+.. note:: The ``v2`` etcd keys are used in the deployment.
+
+Example of a backup and recovery workflow:
+
+::
+
+   # Backup etcd to directory /root/backup/ (etcd data stored in /var/lib/etcd/default)
+   etcdctl backup --data-dir /var/lib/etcd/default --backup-dir /root/backup/
+
 Recovery
 
 ::
@@ -172,6 +352,3 @@ Recovery
 
    # Start new etcd with these two extra parameters (among the other)
    # for example: etcd --force-new-cluster
-
-
-kqueen
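
For reference, the recovery flow referenced above (etcd ``v2`` data in
``/var/lib/etcd/default`` and the backup taken to ``/root/backup/``) can be
sketched as follows; the systemd unit name and the exact paths are illustrative
and should be adapted to the deployment:

.. code-block:: bash

   # Stop the etcd member that is being restored (the unit name is an example).
   systemctl stop etcd

   # Replace the data directory with the backup created by `etcdctl backup`.
   rm -rf /var/lib/etcd/default
   cp -r /root/backup /var/lib/etcd/default

   # Start etcd once with --force-new-cluster so that it overwrites the old
   # cluster membership, then restart the service normally without the flag.
   etcd --data-dir /var/lib/etcd/default --force-new-cluster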