# Infrastructure & Deployment

## Architecture

```
VPS (Ubuntu 24, 8 GB RAM)
├── Caddy — reverse proxy + auto SSL (native)
├── PostgreSQL — postgis/postgis:16-3.4 (Docker)
├── Forgejo — git server + CI/CD (Docker)
├── Forgejo Runner — executes CI/CD jobs (Docker)
├── app-staging — Next.js + Payload CMS (Docker)
└── app-test — Next.js + Payload CMS (Docker)
```

| URL | Port | Purpose |
|-----|------|---------|
| mutter-teresa.skick.app | 3001 | Client demo (staging) |
| mutter-teresa-test.skick.app | 3002 | Developer testing |
| git.skick.app | 3003 | Forgejo git server |

All app and database containers share the Docker network `church-website-net`.
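
For reference, the Caddy mapping implied by this table looks roughly like the following. This is a sketch only; the actual Caddyfile is generated by Ansible and may differ:

```caddyfile
mutter-teresa.skick.app {
    reverse_proxy localhost:3001
}

mutter-teresa-test.skick.app {
    reverse_proxy localhost:3002
}

git.skick.app {
    reverse_proxy localhost:3003
}
```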

---

## Prerequisites

- **Ansible** installed locally (`pip install ansible` or `brew install ansible`)
- **SSH access** to the VPS (root or sudo user)
- **DNS records** pointing to the VPS IP:
  - `mutter-teresa.skick.app` → VPS IP
  - `mutter-teresa-test.skick.app` → VPS IP
  - `git.skick.app` → VPS IP

---

## Quick Start: First-Time Server Setup

### 1. Configure secrets

Create an encrypted vault from the example template:

```bash
cd infra/ansible
cp inventory/group_vars/all/vault.yml.example inventory/group_vars/all/vault.yml
ansible-vault encrypt inventory/group_vars/all/vault.yml
ansible-vault edit inventory/group_vars/all/vault.yml
```

Fill in all `CHANGE_ME` values:

- `vault_ansible_become_pass` — VPS root password
- `vault_postgres_root_password` — PostgreSQL root password
- `vault_db_password_staging` / `vault_db_password_test` — database passwords
- `vault_payload_secret_staging` / `vault_payload_secret_test` — Payload CMS secrets
- `vault_google_bucket` — Google Cloud Storage bucket name
- `vault_resend_api_key` — Resend email API key
- `vault_repo_url` — Forgejo repository URL (e.g., `ssh://git@git.skick.app:2222/org/church-website.git`)
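
Strong values for the secrets and passwords above can be generated with `openssl` (one convenient option; any high-entropy string works):

```bash
# 32 random bytes, hex-encoded: a 64-character secret
openssl rand -hex 32
```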
### 2. Configure inventory

Edit `infra/ansible/inventory/test.yml`:

- Set `ansible_host` to your VPS IP address
- Adjust `ansible_user` and SSH key path if needed

### 3. Run the playbook

```bash
cd infra/ansible
ansible-playbook playbooks/setup.yml -i inventory/test.yml --ask-vault-pass
```

This will:

1. Install Docker, configure firewall
2. Start PostgreSQL with both databases
3. Install and configure Caddy with SSL
4. Start Forgejo and the CI/CD runner
5. Clone the repo, build, and deploy both environments
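
Before the first run against a live server, Ansible's built-in check mode can preview what would change. These are standard `ansible-playbook` flags, though tasks that depend on the results of earlier tasks may not simulate fully:

```bash
# Dry run: report what would change without applying anything
ansible-playbook playbooks/setup.yml -i inventory/test.yml --ask-vault-pass --check --diff
```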
### 4. Set up Forgejo

After the playbook completes:

1. Visit `https://git.skick.app` and complete the initial Forgejo setup
2. Create an organization and repository
3. Add the VPS SSH key to the repository for pull access
4. Register the Forgejo Runner:

   ```bash
   ssh root@YOUR_VPS_IP
   docker exec -it forgejo-runner forgejo-runner register \
     --instance https://git.skick.app \
     --token YOUR_RUNNER_TOKEN \
     --name local-runner \
     --labels ubuntu-latest:docker://node:22
   ```

5. Push to the `staging` branch — CI/CD will deploy automatically

---

## Environment Variables

| Variable | Description | Build-time? |
|----------|-------------|-------------|
| `DATABASE_URI` | PostgreSQL connection string | No |
| `PAYLOAD_SECRET` | Payload CMS encryption secret | No |
| `NEXT_PUBLIC_SERVER_URL` | Public URL of the app | Yes |
| `NEXT_PUBLIC_SITE_ID` | Site identifier (e.g., `chemnitz`) | Yes |
| `GOOGLE_BUCKET` | GCS bucket for media storage | No |
| `RESEND_API_KEY` | Resend API key for emails | No |

Variables marked "Build-time" are baked into the Docker image during `docker build` (via `--build-arg`). Changes to these require a rebuild.
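
For illustration, baking the build-time variables into an image looks like this. The invocation is hypothetical — the actual build is performed by the deploy script, and the `church-website:staging` tag is assumed by analogy with the `church-website:test` image mentioned below:

```bash
docker build \
  --build-arg NEXT_PUBLIC_SERVER_URL=https://mutter-teresa.skick.app \
  --build-arg NEXT_PUBLIC_SITE_ID=chemnitz \
  -t church-website:staging .
```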
---

## Manual Operations

### Check container logs

```bash
docker logs app-staging
docker logs app-test
docker logs postgres
docker logs forgejo
```
### Redeploy manually (without CI/CD)

```bash
cd /opt/church-website/repo
git pull origin staging
/opt/church-website/scripts/deploy.sh staging 3001
/opt/church-website/scripts/deploy.sh test 3002
```
### Run migrations manually

```bash
docker exec app-staging npx payload migrate
docker exec app-test npx payload migrate
```
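
To see which migrations have already been applied before running new ones, Payload's CLI also offers a status command (assuming the installed Payload version ships it):

```bash
docker exec app-staging npx payload migrate:status
```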
### Database backup

```bash
# Backup staging database
docker exec postgres pg_dump -U church_website_staging church_website_staging > backup_staging_$(date +%Y%m%d).sql

# Backup test database
docker exec postgres pg_dump -U church_website_test church_website_test > backup_test_$(date +%Y%m%d).sql

# Backup all databases
docker exec postgres pg_dumpall -U postgres > backup_all_$(date +%Y%m%d).sql
```
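
Because the filenames above embed the date, retention is easy to script. A minimal sketch, assuming backups are collected in one directory (the path in the example is illustrative):

```bash
#!/usr/bin/env bash
# Delete SQL backups older than a given number of days (default 14).
prune_backups() {
  local dir="$1" days="${2:-14}"
  find "$dir" -name 'backup_*.sql' -type f -mtime +"$days" -delete
}

# Example: prune_backups /opt/church-website/backups 14
```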
### Database restore

```bash
# Restore staging database
cat backup_staging.sql | docker exec -i postgres psql -U church_website_staging church_website_staging
```
### Restart a single service

```bash
docker restart app-staging
docker restart app-test
docker restart postgres
```

---

## Deploy via Ansible (without CI/CD)

Use these playbooks to deploy from your local machine — no Forgejo runner needed.

```bash
cd infra/ansible

# Deploy both environments (git pull once, then build+deploy each sequentially)
ansible-playbook playbooks/deploy.yml --ask-vault-pass

# Deploy staging only
ansible-playbook playbooks/deploy-staging.yml --ask-vault-pass

# Deploy test only
ansible-playbook playbooks/deploy-test.yml --ask-vault-pass
```
**Steps executed per environment:**

1. Pull latest code from the configured branch (`staging`)
2. Build app Docker image (bakes in `NEXT_PUBLIC_SERVER_URL` and `NEXT_PUBLIC_SITE_ID`)
3. Build migration image and run `npx payload migrate`
4. Stop and remove the old container
5. Start the new container
6. Fix upload volume permissions
7. Prune old Docker images
**Deploy a specific branch:**

```bash
ansible-playbook playbooks/deploy.yml --ask-vault-pass -e repo_branch=feature/my-branch
```

> **Note:** The server must already be provisioned with `setup.yml` before deploying. The deploy playbooks only pull code and rebuild containers — they do not install Docker, Caddy, or PostgreSQL.

---

## Refresh Test from Staging

`copy-staging-to-test.yml` rebuilds the test environment as a clone of staging — useful when you want editors or developers to try out destructive changes against a realistic dataset without touching the staging client demo.

```bash
cd infra/ansible
ansible-playbook playbooks/copy-staging-to-test.yml --ask-vault-pass
```
**What it does:**

1. Verifies the postgres container is up and the staging database exists
2. Stops and removes the `app-test` container
3. Drops `church_website_test`, recreates it, enables PostGIS, and pipes a `pg_dump` of staging into it
4. Reassigns table/sequence/enum ownership in the test DB to `church_website_test`
5. Replaces the `uploads-test-media` and `uploads-test-documents` Docker volumes with the contents of their staging counterparts
6. Starts a new `app-test` container from the existing `church-website:test` image on port 3002
7. Fixes upload volume permissions and waits for `http://127.0.0.1:3002` to return 2xx/3xx
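
The readiness wait in the last step can be approximated with a small curl loop; this is a sketch of the idea, not the playbook's actual task:

```bash
# Poll a URL until it answers with a 2xx/3xx status; give up after N tries.
wait_for_http() {
  url="$1"; tries="${2:-30}"
  while [ "$tries" -gt 0 ]; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url" || true)
    case "$code" in 2*|3*) return 0 ;; esac
    tries=$((tries - 1))
    sleep 2
  done
  return 1
}

# Example: wait_for_http http://127.0.0.1:3002 30
```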
**Before running:**

- The test image (`church-website:test`) must already exist on the VPS — this playbook does **not** rebuild it. Run `deploy-test.yml` first if the image is missing or stale.
- All connections to `church_website_test` are forcibly terminated. Anyone editing in the test admin will be kicked.
- The test DB and upload volumes are wiped — there is no rollback. Take a backup first if anything in test is worth keeping (see [Database backup](#database-backup)).

---

## CI/CD

The Forgejo Actions workflow (`.forgejo/workflows/deploy.yml`) triggers on push to the `staging` branch. It:

1. Pulls the latest code on the VPS
2. Builds a new Docker image for staging
3. Stops the old container, starts the new one
4. Runs database migrations
5. Repeats for the test environment (sequentially, to save RAM)

---

## Adding a New Environment

1. Add a new entry to `app_environments` in the inventory file
2. Add a new entry to `caddy_domains` with the new domain
3. Add a new database entry to `databases`
4. Run the playbook: `ansible-playbook playbooks/setup.yml -i inventory/test.yml`
5. Update the deploy workflow to include the new environment

---

## Production Setup

1. Copy and edit the production inventory:

   ```bash
   cp infra/ansible/inventory/production.yml infra/ansible/inventory/my-production.yml
   ```

2. Fill in the production VPS IP, domain, and secrets
3. Run the playbook (skip Forgejo):

   ```bash
   ansible-playbook playbooks/setup.yml -i inventory/my-production.yml --ask-vault-pass
   ```

4. Set up a deploy workflow for production (triggered on tags/releases)

---

## Troubleshooting

### Build fails with OOM

The VPS has 4 GB RAM + 2 GB swap. Docker builds can peak at ~1.5 GB. If builds fail:

- Ensure only one build runs at a time (deploy script is sequential)
- Check swap: `free -h`
- Increase swap: edit `swap_size_mb` in inventory and re-run playbook

### SSL certificate not working

- Ensure DNS records point to the VPS IP: `dig mutter-teresa.skick.app`
- Check Caddy logs: `journalctl -u caddy`
- Caddy auto-renews certificates — if stuck, restart: `systemctl restart caddy`

### Database connection refused

- Check PostgreSQL is running: `docker ps | grep postgres`
- Check the container is on the right network: `docker network inspect church-website-net`
- Test connection: `docker exec postgres psql -U postgres -l`

### Container won't start

- Check logs: `docker logs app-staging`
- Check if port is in use: `ss -tlnp | grep 3001`
- Check .env file: `cat /opt/church-website/envs/staging/.env`

---

## Local Development

For local development with PostgreSQL:

```bash
# Start PostgreSQL (from project root)
docker compose up -d

# Configure .env
DATABASE_URI=postgres://postgres:password@localhost:5432/church_website_dev

# Start dev server
npm run dev
```
|