# Infrastructure & Deployment

## Architecture

```
VPS (Ubuntu 24, 8 GB RAM)
├── Caddy — reverse proxy + auto SSL (native)
├── PostgreSQL — postgis/postgis:16-3.4 (Docker)
├── Forgejo — git server + CI/CD (Docker)
├── Forgejo Runner — executes CI/CD jobs (Docker)
├── app-staging — Next.js + Payload CMS (Docker)
└── app-test — Next.js + Payload CMS (Docker)
```
| URL | Port | Purpose |
|---|---|---|
| mutter-teresa.skick.app | 3001 | Client demo (staging) |
| mutter-teresa-test.skick.app | 3002 | Developer testing |
| git.skick.app | 3003 | Forgejo git server |
All app and database containers share the Docker network `church-website-net`.
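The URL-to-port mapping above corresponds to a Caddy configuration roughly like the following sketch. This is illustrative only — the actual Caddyfile is generated by the Ansible setup and may differ:

```
# Sketch of the reverse-proxy routing; Caddy obtains SSL certificates
# automatically for each site block.
mutter-teresa.skick.app {
    reverse_proxy 127.0.0.1:3001
}

mutter-teresa-test.skick.app {
    reverse_proxy 127.0.0.1:3002
}

git.skick.app {
    reverse_proxy 127.0.0.1:3003
}
```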
## Prerequisites
- Ansible installed locally (`pip install ansible` or `brew install ansible`)
- SSH access to the VPS (root or sudo user)
- DNS records pointing to the VPS IP:
  - `mutter-teresa.skick.app` → VPS IP
  - `mutter-teresa-test.skick.app` → VPS IP
  - `git.skick.app` → VPS IP
## Quick Start: First-Time Server Setup

### 1. Configure secrets

Create an encrypted vault from the example template:
```bash
cd infra/ansible
cp inventory/group_vars/all/vault.yml.example inventory/group_vars/all/vault.yml
ansible-vault encrypt inventory/group_vars/all/vault.yml
ansible-vault edit inventory/group_vars/all/vault.yml
```
Fill in all `CHANGE_ME` values:

- `vault_ansible_become_pass` — VPS root password
- `vault_postgres_root_password` — PostgreSQL root password
- `vault_db_password_staging` / `vault_db_password_test` — database passwords
- `vault_payload_secret_staging` / `vault_payload_secret_test` — Payload CMS secrets
- `vault_google_bucket` — Google Cloud Storage bucket name
- `vault_resend_api_key` — Resend email API key
- `vault_repo_url` — Forgejo repository URL (e.g., `ssh://git@git.skick.app:2222/org/church-website.git`)
### 2. Configure inventory

Edit `infra/ansible/inventory/test.yml`:

- Set `ansible_host` to your VPS IP address
- Adjust `ansible_user` and SSH key path if needed
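As a rough sketch, the relevant inventory fields might look like this — the host alias, IP, and key path below are placeholders, and the real file's structure may differ:

```yaml
# Hypothetical shape of inventory/test.yml (placeholder values)
all:
  hosts:
    vps:
      ansible_host: 203.0.113.10              # your VPS IP
      ansible_user: root
      ansible_ssh_private_key_file: ~/.ssh/id_ed25519
```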
### 3. Run the playbook

```bash
cd infra/ansible
ansible-playbook playbooks/setup.yml -i inventory/test.yml --ask-vault-pass
```
This will:
- Install Docker, configure firewall
- Start PostgreSQL with both databases
- Install and configure Caddy with SSL
- Start Forgejo and the CI/CD runner
- Clone the repo, build, and deploy both environments
### 4. Set up Forgejo

After the playbook completes:

- Visit https://git.skick.app and complete the initial Forgejo setup
- Create an organization and repository
- Add the VPS SSH key to the repository for pull access
- Register the Forgejo Runner:

  ```bash
  ssh root@YOUR_VPS_IP
  docker exec -it forgejo-runner forgejo-runner register \
    --instance https://git.skick.app \
    --token YOUR_RUNNER_TOKEN \
    --name local-runner \
    --labels ubuntu-latest:docker://node:22
  ```

- Push to the `staging` branch — CI/CD will deploy automatically
## Environment Variables

| Variable | Description | Build-time? |
|---|---|---|
| `DATABASE_URI` | PostgreSQL connection string | No |
| `PAYLOAD_SECRET` | Payload CMS encryption secret | No |
| `NEXT_PUBLIC_SERVER_URL` | Public URL of the app | Yes |
| `NEXT_PUBLIC_SITE_ID` | Site identifier (e.g., `chemnitz`) | Yes |
| `GOOGLE_BUCKET` | GCS bucket for media storage | No |
| `RESEND_API_KEY` | Resend API key for emails | No |
Variables marked "Build-time" are baked into the Docker image during `docker build` (via `--build-arg`). Changes to these require a rebuild.
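The baking step works roughly like this Dockerfile fragment — a sketch, not the project's actual Dockerfile, whose build stages may differ:

```dockerfile
# NEXT_PUBLIC_* values must exist at build time so Next.js can inline
# them into the client bundle; changing them at runtime has no effect.
ARG NEXT_PUBLIC_SERVER_URL
ARG NEXT_PUBLIC_SITE_ID
ENV NEXT_PUBLIC_SERVER_URL=$NEXT_PUBLIC_SERVER_URL
ENV NEXT_PUBLIC_SITE_ID=$NEXT_PUBLIC_SITE_ID
RUN npm run build
```

Invoked with, e.g., `docker build --build-arg NEXT_PUBLIC_SERVER_URL=https://mutter-teresa.skick.app ...`.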
## Manual Operations

### Check container logs

```bash
docker logs app-staging
docker logs app-test
docker logs postgres
docker logs forgejo
```
### Redeploy manually (without CI/CD)

```bash
cd /opt/church-website/repo
git pull origin staging
/opt/church-website/scripts/deploy.sh staging 3001
/opt/church-website/scripts/deploy.sh test 3002
```
### Run migrations manually

```bash
docker exec app-staging npx payload migrate
docker exec app-test npx payload migrate
```
### Database backup

```bash
# Backup staging database
docker exec postgres pg_dump -U church_website_staging church_website_staging > backup_staging_$(date +%Y%m%d).sql

# Backup test database
docker exec postgres pg_dump -U church_website_test church_website_test > backup_test_$(date +%Y%m%d).sql

# Backup all databases
docker exec postgres pg_dumpall -U postgres > backup_all_$(date +%Y%m%d).sql
```
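For rotation, a small helper can prune old dumps. This is a sketch, not part of the repo's `scripts/` — the `prune_backups` name, the backup directory, and the retention count are assumptions:

```shell
#!/usr/bin/env bash
# Hypothetical helper (not in this repo): keep the newest $2 backup_*.sql
# dumps in directory $1 and delete the rest.
prune_backups() {
  local dir="$1" keep="$2"
  # List dumps newest-first, skip the first $keep, remove the remainder.
  ls -1t "$dir"/backup_*.sql 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm -f
}

# Example usage: after taking a dump, keep only the last 7 backups.
# docker exec postgres pg_dumpall -U postgres > /opt/backups/backup_all_$(date +%Y%m%d).sql
# prune_backups /opt/backups 7
```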
### Database restore

```bash
# Restore staging database
cat backup_staging.sql | docker exec -i postgres psql -U church_website_staging church_website_staging
```
### Restart a single service

```bash
docker restart app-staging
docker restart app-test
docker restart postgres
```
### Deploy via Ansible (without CI/CD)

Use these playbooks to deploy from your local machine — no Forgejo runner needed.

```bash
cd infra/ansible

# Deploy both environments (git pull once, then build+deploy each sequentially)
ansible-playbook playbooks/deploy.yml --ask-vault-pass

# Deploy staging only
ansible-playbook playbooks/deploy-staging.yml --ask-vault-pass

# Deploy test only
ansible-playbook playbooks/deploy-test.yml --ask-vault-pass
```
Steps executed per environment:

- Pull latest code from the configured branch (`staging`)
- Build the app Docker image (bakes in `NEXT_PUBLIC_SERVER_URL` and `NEXT_PUBLIC_SITE_ID`)
- Build the migration image and run `npx payload migrate`
- Stop and remove the old container
- Start the new container
- Fix upload volume permissions
- Prune old Docker images
Deploy a specific branch:

```bash
ansible-playbook playbooks/deploy.yml --ask-vault-pass -e repo_branch=feature/my-branch
```
Note: The server must already be provisioned with `setup.yml` before deploying. The deploy playbooks only pull code and rebuild containers — they do not install Docker, Caddy, or PostgreSQL.
## Refresh Test from Staging

`copy-staging-to-test.yml` rebuilds the test environment as a clone of staging — useful when you want editors or developers to try out destructive changes against a realistic dataset without touching the staging client demo.

```bash
cd infra/ansible
ansible-playbook playbooks/copy-staging-to-test.yml --ask-vault-pass
```
What it does:

- Verifies the postgres container is up and the staging database exists
- Stops and removes the `app-test` container
- Drops `church_website_test`, recreates it, enables PostGIS, and pipes a `pg_dump` of staging into it
- Reassigns table/sequence/enum ownership in the test DB to `church_website_test`
- Replaces the `uploads-test-media` and `uploads-test-documents` Docker volumes with the contents of their staging counterparts
- Starts a new `app-test` container from the existing `church-website:test` image on port 3002
- Fixes upload volume permissions and waits for `http://127.0.0.1:3002` to return 2xx/3xx
Before running:

- The test image (`church-website:test`) must already exist on the VPS — this playbook does not rebuild it. Run `deploy-test.yml` first if the image is missing or stale.
- All connections to `church_website_test` are forcibly terminated. Anyone editing in the test admin will be kicked.
- The test DB and upload volumes are wiped — there is no rollback. Take a backup first if anything in test is worth keeping (see Database backup).
## CI/CD

The Forgejo Actions workflow (`.forgejo/workflows/deploy.yml`) triggers on push to the `staging` branch. It:
- Pulls the latest code on the VPS
- Builds a new Docker image for staging
- Stops the old container, starts the new one
- Runs database migrations
- Repeats for the test environment (sequentially, to save RAM)
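Sketched in Forgejo Actions syntax, one plausible shape of such a workflow is shown below. This is an illustration, not the repository's actual `.forgejo/workflows/deploy.yml`; it assumes the job can reach the VPS over SSH (`YOUR_VPS_IP` is a placeholder), and the script paths are taken from the Manual Operations section:

```yaml
# Sketch only — assumes an SSH key is available to the runner.
on:
  push:
    branches: [staging]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy staging, then test (sequential to save RAM)
        run: |
          ssh root@YOUR_VPS_IP '
            cd /opt/church-website/repo &&
            git pull origin staging &&
            /opt/church-website/scripts/deploy.sh staging 3001 &&
            /opt/church-website/scripts/deploy.sh test 3002
          '
```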
## Adding a New Environment

- Add a new entry to `app_environments` in the inventory file
- Add a new entry to `caddy_domains` with the new domain
- Add a new database entry to `databases`
- Run the playbook: `ansible-playbook playbooks/setup.yml -i inventory/test.yml`
- Update the deploy workflow to include the new environment
## Production Setup

- Copy and edit the production inventory: `cp infra/ansible/inventory/production.yml infra/ansible/inventory/my-production.yml`
- Fill in the production VPS IP, domain, and secrets
- Run the playbook (skip Forgejo): `ansible-playbook playbooks/setup.yml -i inventory/my-production.yml --ask-vault-pass`
- Set up a deploy workflow for production (triggered on tags/releases)
## Troubleshooting

### Build fails with OOM

The VPS has 4 GB RAM + 2 GB swap. Docker builds can peak at ~1.5 GB. If builds fail:

- Ensure only one build runs at a time (the deploy script is sequential)
- Check swap: `free -h`
- Increase swap: edit `swap_size_mb` in the inventory and re-run the playbook
### SSL certificate not working

- Ensure DNS records point to the VPS IP: `dig mutter-teresa.skick.app`
- Check Caddy logs: `journalctl -u caddy`
- Caddy auto-renews certificates — if stuck, restart: `systemctl restart caddy`
### Database connection refused

- Check PostgreSQL is running: `docker ps | grep postgres`
- Check the container is on the right network: `docker network inspect church-website-net`
- Test connection: `docker exec postgres psql -U postgres -l`
### Container won't start

- Check logs: `docker logs app-staging`
- Check if the port is in use: `ss -tlnp | grep 3001`
- Check the `.env` file: `cat /opt/church-website/envs/staging/.env`
## Local Development

For local development with PostgreSQL:

```bash
# Start PostgreSQL (from project root)
docker compose up -d

# Configure .env
DATABASE_URI=postgres://postgres:password@localhost:5432/church_website_dev

# Start dev server
npm run dev
```