A cloud-based Marketing Mix Modeling (MMM) solution deployed on Google Cloud Platform.
```bash
./deploy.sh production   # Deploy the latest version to production
./deploy.sh development  # Deploy the latest version to development
```
GPT-Bayes consists of two main components:

1. **MMM Agent Alpha** - A specialized GPT model with API integration
   - Interface: MMM Agent Alpha
   - Authentication: [email protected]
   - Function: Provides user interface for MMM analysis

2. **Backend Service**
   - Production URL: https://nextgen-mmm.pymc-labs.com
   - Development URL: https://dev-nextgen-mmm.pymc-labs.com
   - Function: Handles model fitting and parameter management via API endpoints
   - Infrastructure: Hosted on Google Compute Engine (GCE) under the `gpt-bayes` project
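As a quick connectivity check, either backend can be probed over HTTPS. The sketch below is illustrative only: the `/health` path is an assumed endpoint, not one documented here (see `gpt-agent/api_spec.json` for the actual routes).

```python
from urllib.parse import urljoin
from urllib.request import urlopen

# Backend base URLs from the component list above.
BACKENDS = {
    "production": "https://nextgen-mmm.pymc-labs.com",
    "development": "https://dev-nextgen-mmm.pymc-labs.com",
}

def health_url(environment: str) -> str:
    """Build the URL to probe for the given environment.

    NOTE: "/health" is a hypothetical endpoint used for illustration;
    consult gpt-agent/api_spec.json for the real API surface.
    """
    return urljoin(BACKENDS[environment], "/health")

if __name__ == "__main__":
    # Network call happens only when run directly.
    with urlopen(health_url("production"), timeout=10) as resp:
        print(resp.status)
```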
- `app.py` - Main Flask application
- `test_mmm_async.py` - Local API testing utility
- `nginx/` - NGINX reverse proxy settings
- `dockerfile` - Container specifications
- `start.sh` - Container initialization
- `build.sh` - Build the container image
- `deploy.sh` - Deployment automation
- `environment.yml` - Development environment specifications
- `config.yaml` - Environment configuration settings
- `gpt-agent/gpt_prompt.md` - System instructions
- `gpt-agent/api_spec.json` - API specifications
- `gpt-agent/knowledge/` - Reference documentation
- `gpt-agent/privacy_policy.md` - Data handling guidelines
- `test-data/` - Example datasets
The application runs on Google Compute Engine (GCE) under the gpt-bayes
project, accessible at https://nextgen-mmm.pymc-labs.com
(production) and https://dev-nextgen-mmm.pymc-labs.com
(development).
Build and push the Docker image to Google Artifact Registry (GAR).
```bash
./build.sh production    # Build and publish to production
./build.sh development   # Build and publish to development
```
Once the Docker image is built and pushed to GAR, use `deploy.sh` to update the application. This script handles:
- Pulling the latest container image from Google Artifact Registry (GAR)
- Deploying it to the specified environment

```bash
./deploy.sh production   # Deploy the latest version to production
./deploy.sh development  # Deploy the latest version to development
```
Access the specified server:
```bash
# Production
gcloud compute ssh gpt-bayes --zone us-central1-a

# Development
gcloud compute ssh dev-gpt-bayes --zone us-central1-a
```
Container management commands:
```bash
# List containers
docker ps -a

# Monitor container logs
docker attach CONTAINER_ID

# Access container shell
docker exec -it CONTAINER_ID /bin/bash
```
Build and publish to Google Artifact Registry:

```bash
./build.sh production    # Build and publish to production
./build.sh development   # Build and publish to development
```

Note: this publishes a new container image to GAR but does not redeploy it; run `deploy.sh` to update the running environment.
View available Container-Optimized OS images:
```bash
gcloud compute images list --project cos-cloud --no-standard-images
```
Update specified container:
```bash
# Clear existing containers
gcloud compute ssh gpt-bayes --zone us-central1-a --command 'docker system prune -f -a'
gcloud compute ssh dev-gpt-bayes --zone us-central1-a --command 'docker system prune -f -a'

# Deploy new container (production)
gcloud compute instances update-container gpt-bayes \
    --zone=us-central1-a \
    --container-image=us-central1-docker.pkg.dev/bayes-gpt/gpt-bayes/gpt-bayes:latest

# Deploy new container (development)
gcloud compute instances update-container dev-gpt-bayes \
    --zone=us-central1-a \
    --container-image=us-central1-docker.pkg.dev/bayes-gpt/dev-gpt-bayes/dev-gpt-bayes:latest
```
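The two `update-container` invocations differ only in instance name and image path, so a small helper can assemble them per environment. This is an illustrative sketch, not part of `deploy.sh`; the instance names and image paths are taken verbatim from the commands above.

```python
import subprocess

# Instance name and image path per environment, from the gcloud commands above.
IMAGES = {
    "production": ("gpt-bayes",
                   "us-central1-docker.pkg.dev/bayes-gpt/gpt-bayes/gpt-bayes:latest"),
    "development": ("dev-gpt-bayes",
                    "us-central1-docker.pkg.dev/bayes-gpt/dev-gpt-bayes/dev-gpt-bayes:latest"),
}

def update_container_cmd(environment: str, zone: str = "us-central1-a") -> list[str]:
    """Assemble the gcloud update-container invocation for an environment."""
    instance, image = IMAGES[environment]
    return [
        "gcloud", "compute", "instances", "update-container", instance,
        f"--zone={zone}",
        f"--container-image={image}",
    ]

if __name__ == "__main__":
    # Actually run the deployment (requires gcloud credentials).
    subprocess.run(update_container_cmd("development"), check=True)
```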
Create new server instance:
```bash
# Production
gcloud compute instances create-with-container gpt-bayes \
    --machine-type e2-standard-4 \
    --boot-disk-size 200GB \
    --image cos-stable-117-18613-164-13 \
    --image-project cos-cloud \
    --zone us-central1-a \
    --container-image=us-central1-docker.pkg.dev/bayes-gpt/gpt-bayes/gpt-bayes:latest \
    --tags http-server,https-server,allow-tcp-5000

# Development
gcloud compute instances create-with-container dev-gpt-bayes \
    --machine-type e2-standard-4 \
    --boot-disk-size 20GB \
    --image cos-stable-117-18613-164-13 \
    --image-project cos-cloud \
    --zone us-central1-a \
    --container-image=us-central1-docker.pkg.dev/bayes-gpt/dev-gpt-bayes/dev-gpt-bayes:latest \
    --tags http-server,https-server,allow-tcp-5000
```
Deploy NGINX reverse proxy updates:
```bash
cd nginx
./deploy.sh production   # Deploy the latest version to production
./deploy.sh development  # Deploy the latest version to development
```
Update backend IP address:
- Navigate to `config.yaml`
- Modify the `ipAddress` directive with the new IP
- Example: `ipAddress: 35.208.203.115`
Create development environment:
```bash
# Using conda
conda env create -f environment.yml

# Using mamba (faster)
mamba env create -f environment.yml

# Activate environment
conda activate base
```
Launch the development stack:

- Start Redis:
  ```bash
  redis-server
  ```
- Start Celery worker (new terminal):
  ```bash
  celery -A app.celery worker --loglevel=info
  ```
- Start Flask (new terminal):
  ```bash
  python app.py --port 5001
  ```
- Run tests:
  ```bash
  # Test local instance
  python test_mmm_async.py local

  # Test production instance
  python test_mmm_async.py deployed
  ```
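The Flask app serves model fits asynchronously via Celery: a POST submits a job and returns a task id, and a second endpoint is polled for results. The client sketch below shows that pattern; the route names (`/run_mmm_async`, `/get_results/...`) and payload shape are assumptions for illustration, not the documented contract (see `gpt-agent/api_spec.json`).

```python
import json
import time
from urllib.request import Request, urlopen

BASE_URLS = {
    "local": "http://localhost:5001",
    "deployed": "https://nextgen-mmm.pymc-labs.com",
}

def build_fit_request(base_url: str, payload: dict) -> Request:
    """Build the job-submission request (route name is hypothetical)."""
    return Request(
        base_url + "/run_mmm_async",  # assumed route, not documented here
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def poll_results(base_url: str, task_id: str, interval: float = 5.0) -> dict:
    """Poll until the Celery task finishes (route name is hypothetical)."""
    while True:
        with urlopen(f"{base_url}/get_results/{task_id}") as resp:
            body = json.loads(resp.read())
        if body.get("status") != "pending":
            return body
        time.sleep(interval)
```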
The test suite:
- Generates sample MMM data
- Submits to specified API endpoint
- Monitors result generation
- Displays model analytics
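The first step above, sample data generation, can be sketched as follows. The column names and response function are illustrative only; the real schema used by `test_mmm_async.py` and the datasets in `test-data/` may differ.

```python
import random

def generate_sample_mmm_data(n_weeks: int = 104,
                             channels=("tv", "radio", "digital")):
    """Generate weekly media spend with a noisy sales response.

    Illustrative only: channel names and the toy response curve are
    assumptions, not the schema test_mmm_async.py actually uses.
    """
    rng = random.Random(42)  # fixed seed for reproducibility
    rows = []
    for week in range(n_weeks):
        spend = {ch: rng.uniform(0, 100) for ch in channels}
        # Toy response: baseline plus diminishing returns per channel.
        sales = 500 + sum(10 * (s ** 0.5) for s in spend.values()) + rng.gauss(0, 20)
        rows.append({"week": week, **spend, "sales": sales})
    return rows
```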