Add a docker compose setup for locally hosting the infra #28
base: main
Conversation
Mostly looks good; the only big issue is the database setup stuff.
4. navigate to http://localhost:9001, log in with the root credentials for minio specified above, and create a bucket for TunnelVision
5. while still in the minio console, navigate to "access keys" on the left and create an access key and secret for tunnelvision to use.
6. Provide the information to TunnelVision
This should probably describe doing this through a `config.py` file.
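For illustration, a `config.py` along those lines might look roughly like the sketch below; the setting names and values are placeholders for the example, not TunnelVision's actual config keys:

```python
# Hypothetical config.py sketch -- the key names below are assumptions,
# not TunnelVision's real settings; values point at the compose-hosted services.
S3_ENDPOINT = "http://localhost:9000"   # MinIO API port from the compose setup
S3_ACCESS_KEY = "<access key created in the MinIO console>"
S3_SECRET_KEY = "<secret key created in the MinIO console>"
S3_BUCKET = "tunnelvision"

SQLALCHEMY_DATABASE_URI = "postgresql://<postgres user>:<password>@localhost:5432/tunnelvision"
```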
I tried looking to see if there was a way to set the credentials (sorta like how many database containers work, where you provide a username and password and they'll set up a user and a matching database), but I couldn't find anything that would do that in the minio docs.
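For what it's worth, the bucket half of step 4 could be scripted with the MinIO Python SDK using the root credentials, leaving only the access-key creation to the console. The endpoint, credentials, and bucket name below are placeholders, so treat this as a sketch:

```python
# Sketch: create the TunnelVision bucket against the compose-hosted MinIO.
# Endpoint and root credentials are placeholders -- match them to the compose file.
from minio import Minio

client = Minio(
    "localhost:9000",
    access_key="<minio root user>",
    secret_key="<minio root password>",
    secure=False,  # the local compose setup serves plain HTTP
)
if not client.bucket_exists("tunnelvision"):
    client.make_bucket("tunnelvision")
```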
3. `docker compose up`
4. navigate to http://localhost:9001, log in with the root credentials for minio specified above, and create a bucket for TunnelVision
5. while still in the minio console, navigate to "access keys" on the left and create an access key and secret for tunnelvision to use.
This doesn't include any steps for setting up the database. I'm not sure if there's a way to auto-create it from the docker-compose file, but currently it needs to be done manually using `createdb -h localhost -U <postgres user> tunnelvision`. Additionally, the actual database tables need to be created as well. Setting up flask-migrate might be able to handle that.
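As a stopgap for the `createdb` step, a short SQLAlchemy snippet can create the database against the compose-hosted Postgres; the user, password, and port below are placeholders. (The official postgres image will also create a database named by the `POSTGRES_DB` environment variable on first start, if the compose file sets one.)

```python
# Sketch: create the tunnelvision database on the compose-hosted Postgres.
# Credentials and port are placeholders -- match them to the compose file.
from sqlalchemy import create_engine, text

# CREATE DATABASE can't run inside a transaction, so connect with autocommit.
engine = create_engine(
    "postgresql://<postgres user>:<password>@localhost:5432/postgres",
    isolation_level="AUTOCOMMIT",
)
with engine.connect() as conn:
    conn.execute(text("CREATE DATABASE tunnelvision"))
```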
I believe using flask is how I did it: because this uses SQLAlchemy, the `db.create_all()` call will create the tables.
I also have a separate branch that gets flask-migrate working for versioning/handling changes to the schema, since the method I just described only goes from an empty db to the current schema.
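Concretely, that looks something like the snippet below; the import path is a guess at the project layout, so adjust it to wherever the app and `db` objects actually live:

```python
# Sketch, assuming the Flask app and SQLAlchemy handle are importable as `app` and `db`
# (the real module layout in TunnelVision may differ).
from tunnelvision import app, db

with app.app_context():
    # Creates any tables defined on the SQLAlchemy models that don't exist yet;
    # it won't alter an existing schema, which is what flask-migrate is for.
    db.create_all()
```

Once the flask-migrate branch lands, `flask db migrate` and `flask db upgrade` would take over schema changes from there.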
Force-pushed from 068cb9a to ae4d435
bruh I just rebased on top of main, how are there still conflicts?
From #29:
The intention for this PR is just to allow people to spin up the various dependent services (S3 etc.) locally as containers so they can more easily develop this (i.e. if you deployed this in CSH kubernetes you'd end up running your own S3 in addition to whatever the main CSH S3 is). Essentially, it's not intended for production use. I've been assuming that you all have your own existing kubernetes deployment setup and, since I have no visibility into that, I don't want to interfere with it (and also I have no kubernetes env to test with).
@wilsonmcdade do you have instructions or something that you followed to get it deployed to kubernetes? Happy to help adapt it for pipenv (I've deployed pipenv in docker a bunch).
Force-pushed from ae4d435 to 71e2ab5
Fixes #25