Heroku docker deploy #1301
Conversation
Fully qualify registry domain names in Dockerfiles and docker-compose. Move the build stages for client-report, client-participation, and client-admin into the file-server Dockerfile, to avoid issues with non-Docker build daemons being unable to access local image stores. This may be an issue with upcoming Docker versions as well. This commit does not fully work yet, however, with issues still pending about serving the newly built files from the right location.
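A minimal sketch of what this consolidation could look like, assuming each client's `npm run build` emits to `./build` (stage names, paths, and Node versions here are illustrative, not the repo's actual values):

```dockerfile
# Fully qualified image names (registry domain included) keep non-Docker
# build daemons from needing a local image store to resolve base images.
FROM docker.io/library/node:18 AS client-admin-build
WORKDIR /build
COPY client-admin/ .
RUN npm ci && npm run build

FROM docker.io/library/node:18 AS client-report-build
WORKDIR /build
COPY client-report/ .
RUN npm ci && npm run build

# client-participation would get a stage of the same shape.

# The file-server stage copies built assets straight out of the stages
# above, so no separately tagged client images are ever required.
FROM docker.io/library/node:18 AS file-server
WORKDIR /app
COPY file-server/ .
COPY --from=client-admin-build /build/build ./public/admin
COPY --from=client-report-build /build/build ./public/report
```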
…then attempts to use the top-level Dockerfile to pull in these images to obtain the built files, and release from them
* Add podman support alongside docker for dev
* Consolidate client build steps into a single Dockerfile, which may help with heroku deploy
I see what you mean. To clarify, I tried doing what you suggested, but wasn't able to get it to work. I don't remember whether it just wouldn't build non-web/worker/release images, or whether it just wasn't able to coordinate between them (the clients would have to be built prior to the release image, after all, or things could go awry), but in any case, moving to a single image with all of the assets built seemed like the easiest way to solve this problem, and it also meshes with @willcohen's work trying to support alternatives to docker-desktop.

There are probably good reasons for merging the client environments anyway. Right now, we have three different client JS environments. The participation client will probably continue to have some particular demands for a while, but there's no reason why the report and admin clients couldn't be unified, and I think we could even get a lot of overlap with participation after some modernization. Building each of these environments/images from scratch currently takes quite a while, which is not ideal for many reasons. If they shared the same environment, we could potentially reduce that, and this moves us in that direction.
This is specifically what I'm trying to avoid. I want automated, single-command deploy and rollback, with no manual coordination. IIUC, on Heroku this means putting everything in a … Right now, the problem I'm hitting is that it's crapping out on build time limits compiling all of the javascript. It wasn't earlier, before I had fixed some of the other pieces, so maybe there are some transient issues with their network or infrastructure (will try again tomorrow). In any case, this may require some consolidation of the build processes if we're to achieve this with Heroku.
💯 Really glad for that direction. I didn't realize the utility of multistage builds for the frontend before last month, but I'm now a big fan.
Agreed. But further, because multi-stage builds are clever, we can merge all client-* into one Dockerfile and still have different "build environments" in the same multi-stage Dockerfile if we really need it (i.e., if we can't get it all working on the same Node version right away).
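A hedged sketch of that escape hatch, with hypothetical version numbers: each stage pins its own base image, and only built artifacts reach the final stage, so the mismatched toolchains never ship.

```dockerfile
# Older toolchain only for the client that still needs it.
FROM docker.io/library/node:12 AS participation-build
WORKDIR /build
COPY client-participation/ .
RUN npm ci && npm run build

# Everything else on a current Node.
FROM docker.io/library/node:18 AS report-build
WORKDIR /build
COPY client-report/ .
RUN npm ci && npm run build

# Final stage only collects the outputs of both environments.
FROM docker.io/library/nginx:alpine
COPY --from=participation-build /build/build /usr/share/nginx/html/participation
COPY --from=report-build /build/build /usr/share/nginx/html/report
```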
Ok, hmm... thinking. To clarify one more thing: the two things we're each talking about are not mutually exclusive. That is, the "single Dockerfile client build" approach (that you're working on right now) and the "release-phase extraction of frontend build artifacts from a single frontend image" (that I was referring to above) can coexist. Your release-phase Dockerfile could pull files from one container image on Heroku and send what it pulls to the CDN with your scripts (a sketch of this follows after this comment). That would get around the constraint you're hitting where Heroku doesn't want to give you enough time in the release phase to build. I don't know how you currently deploy to Heroku (though curious to know), but if it's:
then I believe the difference I'm suggesting would be:
Sorry, it's hard to tell whether this discussion reads as a distraction from your focused work or as helpful. Please do feel welcome to cut through the ambiguity and tell me :)
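To make that suggestion concrete, here is a hedged sketch (the image names, paths, and `push_to_cdn.sh` script are all hypothetical): the release image copies already-built bundles out of a frontend image rather than compiling anything inside Heroku's time-limited release phase.

```dockerfile
# Pull the frontend image that the pipeline already built and pushed.
FROM registry.heroku.com/example-app/frontend AS assets

FROM docker.io/library/debian:bookworm-slim AS release
COPY --from=assets /app/public /assets
# push_to_cdn.sh stands in for the existing upload scripts; it would send
# /assets to the CDN origin. No JavaScript is compiled at release time.
COPY scripts/push_to_cdn.sh /usr/local/bin/
CMD ["push_to_cdn.sh", "/assets"]
```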
Force-pushed from e3773d1 to d9818db (…ssets-with-podman-compat)
This may be causing JS heap overload when the deploy completes and it tries to run.
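If this is Node running out of heap, one common mitigation (an assumption on my part, not something this PR necessarily does) is to raise the old-space ceiling in the image:

```dockerfile
# Give Node roughly 4 GB of old-space heap for the asset build/startup.
ENV NODE_OPTIONS="--max-old-space-size=4096"
```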
Cypress tests are broken, but have been flaky anyway (see #1392), and we really need these changes in, so going to merge for now. But fixing these is a high priority.
This PR is still a draft, but is successfully pushing (at least most of) the assets to S3 for static CDN serving. It also modernizes some of the math infrastructure around S3.
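For reference, a minimal hypothetical sketch of that kind of S3 push (the bucket name and paths are placeholders, not the repo's actual configuration):

```dockerfile
# Ship built client assets to the S3 bucket backing the CDN.
FROM docker.io/amazon/aws-cli:latest
COPY build/ /assets/
# This image's entrypoint is `aws`, so these args complete the command;
# credentials come from AWS_* environment variables at run time.
CMD ["s3", "sync", "/assets", "s3://example-polis-static", "--acl", "public-read"]
```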
Before this gets merged:

* `admin_bundle.js` still isn't accessible when built from the docker container for some reason; need to resolve this.