A modern, full-stack application for AI-powered video generation. Built with React, FastAPI, and Google Cloud Platform.
- AI Video Generation: Create videos using advanced AI models
- Real-time Job Processing: Monitor job progress with webhook notifications
- Cloud Storage: Google Cloud Storage integration for video management
- Modern UI: Built with React, TanStack Router, and Tailwind CSS
- Asynchronous Processing: Background job processing with status tracking
Vid-Creation Backend POC Implementation - System Architecture
The platform follows a microservices architecture with the following key components:
- Client - React frontend application
- Video Job Service - Manages job creation and metadata
- Video Job Processor - Handles video processing and AI model integration
- Google Cloud Storage (GCS) - Stores video files and assets
- Firestore - NoSQL database for job metadata and status tracking
- Job Submission: Client submits a job (prompt) + webhook URL to Video Job Service
- Job Processing: Video Job Service initiates processing and stores metadata in Firestore
- Video Processing: Video Job Processor handles the actual video generation
- Storage: Processed videos are uploaded to GCS Bucket
- Status Updates: Job status and metadata are updated in Firestore
- Notifications: Client receives notifications (Success/Failed) + asset URL
- Download: Client downloads video from GCS using signed URLs
- Framework: React with TypeScript
- Router: TanStack Router with file-based routing
- State Management: TanStack Query for server state
- Styling: Tailwind CSS with modern UI components
- Build Tool: Vite
- Framework: FastAPI (Python)
- Database: Google Cloud Firestore (NoSQL)
- Storage: Google Cloud Storage (GCS)
- Infrastructure: Pulumi for GCP deployment
├── frontend/                # React application
│   ├── src/
│   │   ├── components/      # Reusable UI components
│   │   ├── contexts/        # React contexts (auth, etc.)
│   │   ├── hooks/           # Custom React hooks
│   │   ├── routes/          # TanStack Router routes
│   │   ├── lib/             # Utility functions
│   │   └── types/           # TypeScript type definitions
│   ├── public/              # Static assets
│   └── package.json         # Frontend dependencies
├── backend/                 # FastAPI application
│   ├── src/
│   │   ├── api/             # API route handlers
│   │   ├── services/        # Business logic services
│   │   │   ├── job_service.py  # Video Job Service
│   │   │   └── processor.py    # Video Job Processor
│   │   ├── models/          # Data models
│   │   ├── repositories/    # Data access layer
│   │   └── schemas/         # Pydantic schemas
│   ├── infra/               # Infrastructure as Code (Pulumi)
│   └── pyproject.toml       # Backend dependencies
└── README.md                # This file
- React - UI framework
- TypeScript - Type safety
- TanStack Router - File-based routing
- TanStack Query - Server state management
- Tailwind CSS - Utility-first CSS
- Vite - Build tool and dev server
- FastAPI - Web framework
- Python 3.11+ - Programming language
- Pydantic - Data validation
- Google Cloud Firestore - NoSQL database for job metadata
- Google Cloud Storage - File storage for videos
- Pulumi - Infrastructure as Code
- Docker - Containerization
- Google Cloud Platform - Cloud infrastructure
- Pulumi - Infrastructure management
- Node.js 18+ and npm/bun
- Python 3.11+
- Google Cloud Platform account
- Clone the repository:
  git clone <repository-url>
  cd vid-creation-clean
- Frontend Setup:
  cd frontend
  bun install
- Backend Setup:
  cd backend
  python -m venv .venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  pip install -e .
- Environment Variables
  Create .env files in both the frontend and backend directories:
  Frontend (.env):
  VITE_API_URL=http://localhost:8000
  Backend (.env):
  GOOGLE_APPLICATION_CREDENTIALS=path/to/service-account-key.json
  GCP_STORAGE_BUCKET=your-storage-bucket
  GCP_PROJECT_ID=your-project-id
- Start the Backend:
  cd backend
  uvicorn src.main:app --reload --port 8000
- Start the Frontend:
  cd frontend
  bun dev
- Access the Application:
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- API Documentation: http://localhost:8000/docs
cd frontend
# Development server
bun dev
# Build for production
bun run build
# Run tests
bun test
# Preview production build
bun run preview
cd backend
# Development server
uvicorn src.main:app --reload
# Run tests
pytest
# Generate API types
python -m scripts.generate_types
cd backend/infra
# Deploy to development
pulumi stack select dev
pulumi up
# Deploy to staging
pulumi stack select stage
pulumi up
The API is documented using OpenAPI/Swagger. When running the backend, visit:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
- Jobs
  - GET /api/jobs - List user jobs
  - POST /api/jobs - Create a new video generation job
  - GET /api/jobs/{job_id} - Get job details
  - GET /api/jobs/{job_id}/asset-url - Get an asset download URL
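As a sketch, a client could prepare calls to the job endpoints like this. The endpoint paths come from the list above; the payload field names `prompt` and `webhook_url` are assumptions rather than a documented schema, and the base URL is the local default from the setup section.

```python
import json
from urllib.parse import urljoin

BASE_URL = "http://localhost:8000"  # VITE_API_URL default from the setup section


def create_job_request(prompt: str, webhook_url: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for POST /api/jobs (field names assumed)."""
    body = json.dumps({"prompt": prompt, "webhook_url": webhook_url}).encode()
    return urljoin(BASE_URL, "/api/jobs"), body


def asset_url_endpoint(job_id: str) -> str:
    """Endpoint that returns a signed download URL for a finished job."""
    return urljoin(BASE_URL, f"/api/jobs/{job_id}/asset-url")


url, body = create_job_request("a cat surfing", "https://example.com/hook")
print(url)                            # http://localhost:8000/api/jobs
print(asset_url_endpoint("job-123"))  # http://localhost:8000/api/jobs/job-123/asset-url
```

The request itself can then be sent with any HTTP client (e.g. httpx or requests) once the backend is running.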
- Webhooks
  - POST /api/webhooks/{job_id} - Webhook endpoint for job updates
  - POST /api/webhooks/test - Test webhook endpoint
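When processing finishes, POST /api/webhooks/{job_id} delivers the job status and signed URL to the client. A minimal consumer-side handler might look like the following sketch; the payload shape (a JSON object with `status` and `asset_url` fields) is an assumption, not a documented contract.

```python
def handle_webhook(job_id: str, payload: dict) -> str:
    """Decide the client's next action from a job notification (payload shape assumed)."""
    status = payload.get("status")
    if status == "success":
        asset_url = payload.get("asset_url")
        if not asset_url:
            raise ValueError(f"job {job_id}: success notification missing asset_url")
        return f"download:{asset_url}"  # client fetches the video via the signed URL
    if status == "failed":
        return "retry"  # e.g. resubmit the job or surface the error to the user
    return "ignore"  # intermediate or unknown statuses


action = handle_webhook(
    "job-123",
    {"status": "success", "asset_url": "https://storage.googleapis.com/bucket/job-123.mp4"},
)
print(action)  # download:https://storage.googleapis.com/bucket/job-123.mp4
```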
The platform supports AI-powered video generation with the following workflow:
- Job Creation: Client submits a job with prompt and webhook URL
- Job Processing: Video Job Service creates job and stores metadata in Firestore
- Video Processing: Video Job Processor handles AI model integration and video generation
- Storage: Generated videos are uploaded to Google Cloud Storage
- Status Updates: Job status and metadata are updated in Firestore
- Notifications: Client receives webhook notifications with job status and signed URLs
- Download: Client downloads videos using signed URLs from GCS
The backend includes a Dockerfile for containerized deployment. Here are the steps to build and deploy:
cd backend
docker build -t video-creation-backend .
docker run -p 8080:8080 \
-e GCP_PROJECT_ID=your-project-id \
-e GCP_STORAGE_BUCKET=your-storage-bucket \
-e GOOGLE_APPLICATION_CREDENTIALS=/app/credentials.json \
-v /path/to/service-account-key.json:/app/credentials.json \
video-creation-backend
# Tag the image for Google Container Registry
docker tag video-creation-backend gcr.io/YOUR_PROJECT_ID/video-creation-backend
# Push to Google Container Registry
docker push gcr.io/YOUR_PROJECT_ID/video-creation-backend
# Deploy to Cloud Run
gcloud run deploy video-creation-backend \
--image gcr.io/YOUR_PROJECT_ID/video-creation-backend \
--platform managed \
--region us-central1 \
--allow-unauthenticated \
--set-env-vars GCP_PROJECT_ID=YOUR_PROJECT_ID,GCP_STORAGE_BUCKET=YOUR_BUCKET_NAME
The application uses Pulumi for infrastructure management on Google Cloud Platform:
# Install Pulumi CLI
curl -fsSL https://get.pulumi.com | sh
# Install Python dependencies
cd backend/infra
pip install -e .
cd backend/infra
# Set your GCP project configuration
pulumi config set gcp_project_id YOUR_PROJECT_ID
pulumi config set gcp_region us-central1
pulumi config set frontend_prod_url https://your-frontend-url.com
# Deploy to development
pulumi stack select dev
pulumi up
# Deploy to staging
pulumi stack select stage
pulumi up
# Deploy to production
pulumi stack select prod
pulumi up
After deployment, Pulumi will output the necessary environment variables:
pulumi stack output GCP_PROJECT_ID
pulumi stack output GCP_STORAGE_BUCKET
pulumi stack output --show-secrets GCP_SERVICE_ACCOUNT_JSON
For production deployment, you'll need these environment variables:
GCP_PROJECT_ID=your-project-id
GCP_STORAGE_BUCKET=your-storage-bucket
GOOGLE_APPLICATION_CREDENTIALS=/app/credentials.json
REPLICATE_API_TOKEN=your_replicate_token # Optional: for AI video generation
- Google Cloud Run (Recommended) - Serverless, auto-scaling
- Google Kubernetes Engine (GKE) - For complex deployments
- Google Compute Engine - For full control
- Render - Alternative cloud platform
- Railway - Simple deployment platform
cd frontend
bun test
cd backend
pytest
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
For support and questions:
- Create an issue in the repository
- Check the API documentation at /docs
- Review the code comments and inline documentation
- Asynchronous Processing: Use Pub/Sub or RabbitMQ to handle jobs asynchronously
- CDN Integration: Implement a CDN to cache videos at different renditions (360p, 1080p, etc.)
- Chunked Uploads: Implement chunked uploads to GCS and store chunks
- Chunked Downloads: Implement chunked downloading from GCS to client
- Rate Limiting: Implement Rate Limiter and API Gateway
- Service Separation: Shift I/O logic to the service, and have processor focus on inference
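The asynchronous-processing idea above can be prototyped in-process with a worker queue before introducing Pub/Sub or RabbitMQ. This is a hypothetical sketch: the real design would replace the in-memory queue with a broker subscription and the stand-in processing step with actual inference.

```python
import queue
import threading


def worker(jobs: "queue.Queue", results: dict) -> None:
    """Drain jobs from the queue; in production this loop would consume a Pub/Sub subscription."""
    while True:
        job_id, prompt = jobs.get()
        if job_id is None:  # sentinel: shut the worker down
            jobs.task_done()
            break
        results[job_id] = f"processed:{prompt}"  # stand-in for AI video generation
        jobs.task_done()


jobs: "queue.Queue" = queue.Queue()
results: dict = {}
t = threading.Thread(target=worker, args=(jobs, results))
t.start()
jobs.put(("job-1", "a cat surfing"))
jobs.put(("job-2", "a dog skating"))
jobs.put((None, None))
jobs.join()
t.join()
print(results)  # {'job-1': 'processed:a cat surfing', 'job-2': 'processed:a dog skating'}
```

Because submission and processing are decoupled by the queue, the API can acknowledge a job immediately and let the webhook notify the client when work completes.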
- Enhanced video processing options
- Real-time collaboration features
- Mobile application
- Integration with additional AI models
- Performance optimizations
- Enhanced security features