- Docker (a fairly recent version will do)
- Node.js (originally `>= 18.17.1`, but check the `.nvmrc` files for the actual version)
- macOS or Linux (Windows is not supported and is likely not to work with the Docker setup)
- Clone this repository.
- Make sure your system satisfies the requirements and Docker daemon is running.
- Create a `.env` file in the repository root and populate it as detailed here.
- In the repository root, run `npm ci` to install Node modules locally. The post-install script will take care of installing the sub-projects' dependencies, too. (While the Docker containers don't need these to run, in practice a lot of development tooling depends on Node modules being installed locally.)
- Run `npm start` to bring up the development server containers.
  - Dev env: localhost:3000
  - Apollo Sandbox: localhost:3001/graphql
- Make sure your IDE is set up to use `eslint` and `prettier` from the local `package.json` definitions and that there are no global overrides in effect.
Run commands in the repository root.

- `npm start`: Starts the necessary processes in the background and then attaches to the logs. You can detach from the log output with Ctrl+C and the dev env will keep running in the background. Once up, the development environment is accessible at localhost:3000 and Apollo Sandbox at localhost:3001/graphql.
- `npm stop`: Stops the development environment containers.
- `npm run build`: Rebuilds the services. Rebuilding is necessary, e.g., after installing new NPM packages or making configuration changes in files not mounted to the containers. You can also use `npm run recycle` to stop a running development environment, rebuild the services, and start up again.
- `npm run logs`: Attaches to the log output of a running development environment.
`npm run generate`: Run this command in the `web/` directory. The dev env must be running, as the code generator sources the GraphQL schema dynamically from the backend. The result is formatted with Prettier.
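What the generated artifacts look like in use depends on the codegen configuration. As a rough illustration only, assuming the GraphQL Code Generator client preset is used together with Apollo Client (the query, field names, and import path below are invented, not taken from Keijo's schema):

```ts
// Hypothetical sketch of consuming generated GraphQL artifacts in the web app.
import { useQuery } from "@apollo/client";
import { graphql } from "./graphql"; // wherever the generated code lives

// The graphql() helper returns a typed document node for the query string.
const EntriesQuery = graphql(`
  query Entries {
    entries {
      id
    }
  }
`);

// The result of useQuery is then typed according to the generated types.
export const useEntries = () => useQuery(EntriesQuery);
```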
See the test documentation for more testing commands and information.

- `npm test`
- `npm run prune`
Contributions must be made via pull requests into the protected `main` branch. Use labels and comprehensive descriptions in PRs to aid automatic release notes generation. All tests must pass before merge.
Third-party contributions are accepted but you must first agree to Funidata's CLA. This repository is not currently set up for automatic CLA management. If you are interested in contributing to this project, please open an issue first to sort out these necessities!
Database migrations are required whenever changes are made to the database schema.
When running in development mode, TypeORM will synchronize the schema automatically. It is the developer's responsibility, however, to write migrations for CI and production environments. (E2E tests run on CI require migrations, but in the local development environment TypeORM sync is on.)

Reverse migrations (`MigrationInterface.down()`) are not necessary.
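To illustrate the split between local schema sync and migrations elsewhere, here is a minimal TypeORM `DataSource` sketch. The option names are real TypeORM options, but the database type, paths, and environment variable names are assumptions, not Keijo's actual configuration:

```ts
import { DataSource } from "typeorm";

// Hypothetical data source config: sync the schema only in local development,
// and rely on migration files everywhere else (CI, production).
export const dataSource = new DataSource({
  type: "postgres", // assumed database type
  url: process.env.DATABASE_URL, // hypothetical variable name
  synchronize: process.env.NODE_ENV === "development",
  migrations: ["migrations/*.ts"],
  entities: ["src/**/*.entity.ts"],
});
```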
These commands are run in the `server/` directory.

- `npm run migration:run`: Runs pending migrations against the database.
- `npm run migration:create`: TypeORM is particular about migration filenames, so this command should be used to initialize new migration files.
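A newly created migration is a class implementing TypeORM's `MigrationInterface`. The sketch below is hypothetical (the table and column names are invented, not from Keijo's schema); since reverse migrations are not necessary, `down()` can be left empty:

```ts
import { MigrationInterface, QueryRunner } from "typeorm";

// Hypothetical example migration; adjust the SQL to the actual schema change.
export class AddDescriptionToEntry1700000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `ALTER TABLE "entry" ADD COLUMN "description" character varying`,
    );
  }

  // Reverse migrations are not required in this project.
  public async down(_queryRunner: QueryRunner): Promise<void> {}
}
```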
The project is set up with an automatic release pipeline that takes care of testing the software, building, packaging, and finally publishing it as a Docker image to GHCR (GitHub Container Registry).

Keijo is released as a single Docker image that contains the whole software. (The frontend is built into the backend and served by it.) The following images are published at GHCR:
- `next`: The `HEAD` of the `main` branch. Not guaranteed to be stable.
- `latest`: The latest released version. This image is also tagged with the full semver version number along with the abbreviated major and minor semver tags. For example, if you release version `1.2.3`, the following tags would point to the same image (until a new version is released, that is):
  - `latest`
  - `1.2.3`
  - `1.2`
  - `1`
- Make sure you are on the `main` branch and the working tree is clean.
- Run `npm version <patch|minor|major>`. Always follow the semantic versioning guidelines. This script uses `npm version` to apply the desired version bump, sync the sub-projects with it, and finally push the created tags to GitHub.
- Allow the CI/CD pipeline to finish. The new version is automatically published once the workflow completes.
Keijo integrates with Jira using OAuth 2.0 to get, for example, issue-related data. The backend functions as a token-mediating backend, storing users' Jira tokens in user sessions and handing the access token to the frontend so it can make requests to the Jira Cloud REST API.
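In practice, the frontend calls the Jira Cloud REST API through Atlassian's OAuth 2.0 gateway with the access token it receives from the backend. The sketch below shows the general shape of such a request; the helper name and issue lookup are illustrative, not taken from Keijo's code:

```ts
// Hypothetical sketch: fetch a Jira issue with an OAuth 2.0 access token.
const fetchIssue = async (accessToken: string, issueKey: string) => {
  const headers = { Authorization: `Bearer ${accessToken}`, Accept: "application/json" };

  // With OAuth 2.0, Jira Cloud is accessed via api.atlassian.com, scoped to a
  // specific site (cloudId), which can be looked up from accessible-resources.
  const resources = await fetch(
    "https://api.atlassian.com/oauth/token/accessible-resources",
    { headers },
  ).then((res) => res.json());
  const cloudId = resources[0].id;

  return fetch(
    `https://api.atlassian.com/ex/jira/${cloudId}/rest/api/3/issue/${issueKey}`,
    { headers },
  ).then((res) => res.json());
};
```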
An Atlassian OAuth 2.0 app needs to be created to get the necessary Atlassian credentials and values. The Atlassian app can be created in the developer console. The app needs to be configured to have:

- the scope `read:jira-work` (Permissions tab), and
- a callback URL, e.g., `https://<keijo-site>/jira/callback`, or for local development, e.g., `http://localhost:3001/jira/callback` (Authorization tab).
Once these are added to the app, get the client ID and client secret from the app settings. For more detailed steps and information, see Jira OAuth 2.0 apps.
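With those values in place, the authorization flow starts by redirecting the user to Atlassian's authorization endpoint with the configured scope and callback URL. A rough sketch of building that URL, assuming the standard Atlassian authorization code flow (the environment variable name and state handling are placeholders, not Keijo's actual implementation):

```ts
// Hypothetical helper for building the Atlassian authorization URL.
const buildAuthorizationUrl = (state: string): string => {
  const params = new URLSearchParams({
    audience: "api.atlassian.com",
    client_id: process.env.JIRA_CLIENT_ID ?? "", // hypothetical variable name
    scope: "read:jira-work",
    redirect_uri: "http://localhost:3001/jira/callback",
    state, // bind the request to the user's session to prevent CSRF
    response_type: "code",
    prompt: "consent",
  });
  return `https://auth.atlassian.com/authorize?${params.toString()}`;
};
```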