Backend service for the CMS template parser
Before starting, update the environment variables if needed. The default values will work for Docker, except for `GH_TOKEN`, which must be set manually. You can create a personal access token from your GitHub developer settings. Make sure to select the `repo` scope for the token.
You will also need a credentials file for Google Drive. Store it as `credentials.json` in the `credentials` directory.
PORT=8104
FLASK_DEBUG=true
SECRET_KEY=secret_key
DEVEL=True
VALKEY_HOST=valkey
VALKEY_PORT=6379
GH_TOKEN=ghp_somepersonaltoken
REPO_ORG=https://github.com/canonical
DATABASE_URL=postgresql://postgres:postgres@postgres:5432/postgres
TASK_DELAY=30
DIRECTORY_API_TOKEN=token
JIRA_EMAIL=[email protected]
JIRA_TOKEN=jiratoken
JIRA_URL=https://warthogs.atlassian.net
JIRA_LABELS=sites_BAU
JIRA_COPY_UPDATES_EPIC=WD-12643
GOOGLE_DRIVE_FOLDER_ID=1EIFOGJ8DIWpsYIfWk7Yos3YijZIkbJDk
COPYDOC_TEMPLATE_ID=125auRsLQukYH-tKN1oEKaksmpCXd_DTGiswvmbeS2iA
GOOGLE_PRIVATE_KEY=base64encodedprivatekey
GOOGLE_PRIVATE_KEY_ID=privatekeyid
- Make sure you have a valid `GOOGLE_PRIVATE_KEY` and `GOOGLE_PRIVATE_KEY_ID` specified in the `.env` file. The base64 decoder parses these keys and throws an error if they are invalid.
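To produce a base64-encoded value for `GOOGLE_PRIVATE_KEY`, you can encode the key file and strip newlines. A minimal sketch, where `private-key.pem` is a placeholder for wherever your service account private key is stored:
$ base64 < private-key.pem | tr -d '\n'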
If you need to add a new environment variable, or modify an existing one (either its name or value), there are a few things to consider:
- If you are developing locally, add or update the variable only in the `.env.local` or `.env` file.
- Make sure you reflect the change in the sample `.env` file in the project, as well as in the sample env contents in this README.md file, for reference.
If the value for this variable is not confidential, you can add it directly to the `konf/site.yaml` like so:
- name: JIRA_LABELS
  value: "sites_BAU"
If the value is confidential, you need to first create a secret on the Kubernetes cluster, and then reference it in `konf/site.yaml`. Make sure you have a valid kubeconfig file for the cluster.
- Create the secret
$ kubectl create secret generic <secret-name> -n production --from-literal=key1=supersecret --from-literal=key2=supersecret
Make sure to replace `<secret-name>` with the actual name of the secret. For example, `cs-canonical-com`.
- Verify the newly created secret
$ kubectl describe secret <secret-name> -n production
Make sure to replace `<secret-name>` with the actual name of the secret. For example, `cs-canonical-com`.
- Add the secret ref to the `konf/site.yaml` file.
- name: <env variable name>
  secretKeyRef:
    key: key1
    name: <secret-name>
- name: <env variable name>
  secretKeyRef:
    key: key2
    name: <secret-name>
Make sure to replace `<env variable name>` with the name of the env variable that your application expects, for example `JIRA_TOKEN` or `COPYDOC_TEMPLATE_ID`. Also, make sure to replace `<secret-name>` with the actual name of the secret, for example `cs-canonical-com`.
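For example, a filled-in entry might look like the following (the `jira-token` key name is only illustrative and must match a key that actually exists in your secret):
- name: JIRA_TOKEN
  secretKeyRef:
    key: jira-token
    name: cs-canonical-com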
To update an existing environment variable (either its name or value):
- Export the secret into a yaml file
$ kubectl get secret <secret-name> -n production -o yaml > secret.yaml
Make sure to replace `<secret-name>` with the actual name of the secret. For example, `cs-canonical-com`.
- Open the `secret.yaml` file and make your changes in the `key: value` pairs within the `data` section.
- If you are updating the values of the keys, make sure to use base64-encoded values. To get a base64-encoded value, use
$ echo -n "your-value" | base64
- Apply the updated secret back to the cluster
$ kubectl apply -f secret.yaml
- Re-deploy the deployment that uses this secret
$ kubectl rollout restart deployment <deployment-name> -n production
Use the relevant deployment name, for example, cs-canonical-com.
If you want to confirm that the deployment is using the correct environment variables:
- Find the deployment
$ kubectl get deployments -n production
- View deployment details
$ kubectl describe deployment <deployment-name> -n production
- You can also edit the deployment directly to update environment variables.
$ kubectl edit deployment <deployment_name> -n production
- Verify the update using
$ kubectl describe deployment <deployment-name> -n production | grep -i <variable_name>
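You can also inspect the environment inside a running pod directly. A sketch, assuming the container image provides the `env` command (`deploy/<deployment-name>` lets kubectl pick one of the deployment's pods):
$ kubectl exec -n production deploy/<deployment-name> -- env | grep -i <variable_name>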
You'll need to install docker and docker-compose.
Once done, run:
$ docker compose up -d
To verify everything went well and the containers are running, run:
$ docker ps -a
If any container exited for any reason, view its logs using:
$ docker compose logs {service_name}
The service depends on having a cache from which the generated tree JSON can be sourced, as well as a Postgres database.
You'll need to set up a Valkey or Redis cache, and expose the port it runs on.
If you do not want to use a dedicated cache, a simple file cache is included as the default. Data is saved to the `./tree-cache/` directory.
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=postgres postgres
docker run -d -p 6379:6379 valkey/valkey
Set up a virtual environment to install project dependencies:
$ sudo apt install python3-venv
$ python3 -m venv .venv
$ source .venv/bin/activate
Then, install the dependencies:
$ python -m pip install -r requirements.txt
Then modify the `.env` file, changing the following to match your Valkey and Postgres instances. The config below works for dotrun as well.
# .env
VALKEY_HOST=localhost
VALKEY_PORT=6379
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/postgres
Then load the variables into the shell environment:
$ source .env
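If your shell does not export the sourced variables to child processes (this assumes the `.env` entries are plain KEY=value lines without `export`), you can enable automatic export while sourcing:
$ set -a; source .env; set +a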
Start the server.
$ flask --app webapp/app run --debug
Please note: make sure the containers for Postgres and Valkey are already running. If not, run:
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=postgres postgres
docker run -d -p 6379:6379 valkey/valkey
You can optionally use dotrun to start the service. Once the `1.1.0-rc1` branch is merged, dotrun can be used without the `--release` flag.
$ dotrun build && dotrun
Since Macs don't support host network mode on Docker, you'll have to get the Valkey and Postgres IP addresses manually from the running Docker containers, and replace the host values in the `.env` file before running dotrun:
$ docker inspect <valkey-container-id> | grep IPAddress
$ docker inspect <postgres-container-id> | grep IPAddress
To enable hot module reloading, make the following changes:
- Add `FLASK_ENV=development` to the `.env.local` file.
- Comment out `"process.env.NODE_ENV": '"production"'` in the `vite.config.ts` file.
- Run the Vite dev server locally, using `yarn run dev`.
GET `/get-tree/site-name` (gets the entire tree as json)
GET `/get-tree/site-name/branch-name` (you can optionally specify the branch)
{
  "name": "site-name",
  "templates": {
    "children": [
      {
        "children": [
          {
            "children": [],
            "description": "One page",
            "copy_doc_link": null,
            "name": "/blog/article",
            "title": null
          }
        ],
        "description": null,
        "copy_doc_link": "https://docs.google.com/document/d/edit",
        "name": "/blog/",
        "title": null
      }
    ],
    "description": null,
    "copy_doc_link": "https://docs.google.com/document/d//edit",
    "name": "/",
    "title": null
  }
}
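For example, assuming the service is running locally on port 8104 (the PORT value from the sample env above) and `site-name` is replaced with a real site, you can fetch the tree with curl:
$ curl http://localhost:8104/get-tree/site-name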
POST `/request-changes`
{
  "due_date": "2022-01-01",
  "reporter_id": 1,
  "webpage_id": 31,
  "type": 1,
  "description": "This is a description"
}
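A minimal example request, again assuming the service is running locally on port 8104:
$ curl -X POST http://localhost:8104/request-changes -H "Content-Type: application/json" -d '{"due_date": "2022-01-01", "reporter_id": 1, "webpage_id": 31, "type": 1, "description": "This is a description"}'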