feature: infrastructure for deployment

parent 3bf9b9fdc3
commit 36956d7daf
21 changed files with 1076 additions and 0 deletions

infra/README.md (new file, 290 lines)

# Infrastructure & Deployment

## Architecture

```
VPS (Ubuntu 24, 8 GB RAM)
├── Caddy — reverse proxy + auto SSL (native)
├── PostgreSQL — postgis/postgis:16-3.4 (Docker)
├── Forgejo — git server + CI/CD (Docker)
├── Forgejo Runner — executes CI/CD jobs (Docker)
├── app-staging — Next.js + Payload CMS (Docker)
└── app-test — Next.js + Payload CMS (Docker)
```

| URL | Port | Purpose |
|-----|------|---------|
| mutter-teresa.skick.app | 3001 | Client demo (staging) |
| mutter-teresa-test.skick.app | 3002 | Developer testing |
| git.skick.app | 3003 | Forgejo git server |

All app and database containers share the Docker network `church-website-net`.

---

## Prerequisites

- **Ansible** installed locally (`pip install ansible` or `brew install ansible`)
- **SSH access** to the VPS (root or a sudo user)
- **DNS records** pointing to the VPS IP:
  - `mutter-teresa.skick.app` → VPS IP
  - `mutter-teresa-test.skick.app` → VPS IP
  - `git.skick.app` → VPS IP

---

## Quick Start: First-Time Server Setup

### 1. Configure secrets

Create an encrypted vault from the example template:

```bash
cd infra/ansible
cp inventory/group_vars/all/vault.yml.example inventory/group_vars/all/vault.yml
ansible-vault encrypt inventory/group_vars/all/vault.yml
ansible-vault edit inventory/group_vars/all/vault.yml
```

Fill in all `CHANGE_ME` values:

- `vault_ansible_become_pass` — VPS root password
- `vault_postgres_root_password` — PostgreSQL root password
- `vault_db_password_staging` / `vault_db_password_test` — database passwords
- `vault_payload_secret_staging` / `vault_payload_secret_test` — Payload CMS secrets
- `vault_google_bucket` — Google Cloud Storage bucket name
- `vault_resend_api_key` — Resend email API key
- `vault_repo_url` — Forgejo repository URL (e.g., `ssh://git@git.skick.app:2222/org/church-website.git`)

### 2. Configure inventory

Edit `infra/ansible/inventory/test.yml`:

- Set `ansible_host` to your VPS IP address
- Adjust `ansible_user` and the SSH key path if needed

### 3. Run the playbook

```bash
cd infra/ansible
ansible-playbook playbooks/setup.yml -i inventory/test.yml --ask-vault-pass
```

This will:

1. Install Docker and configure the firewall
2. Start PostgreSQL with both databases
3. Install and configure Caddy with SSL
4. Start Forgejo and the CI/CD runner
5. Clone the repo, then build and deploy both environments

### 4. Set up Forgejo

After the playbook completes:

1. Visit `https://git.skick.app` and complete the initial Forgejo setup
2. Create an organization and repository
3. Add the VPS SSH key to the repository for pull access
4. Register the Forgejo Runner:

   ```bash
   ssh root@YOUR_VPS_IP
   docker exec -it forgejo-runner forgejo-runner register \
     --instance https://git.skick.app \
     --token YOUR_RUNNER_TOKEN \
     --name local-runner \
     --labels ubuntu-latest:docker://node:22
   ```

5. Push to the `staging` branch — CI/CD will deploy automatically

---

## Environment Variables

| Variable | Description | Build-time? |
|----------|-------------|-------------|
| `DATABASE_URI` | PostgreSQL connection string | No |
| `PAYLOAD_SECRET` | Payload CMS encryption secret | No |
| `NEXT_PUBLIC_SERVER_URL` | Public URL of the app | Yes |
| `NEXT_PUBLIC_SITE_ID` | Site identifier (e.g., `chemnitz`) | Yes |
| `GOOGLE_BUCKET` | GCS bucket for media storage | No |
| `RESEND_API_KEY` | Resend API key for emails | No |

Variables marked "Build-time" are baked into the Docker image during `docker build` (via `--build-arg`). Changes to these require a rebuild.
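
As a sketch of what this implies for the image build (the actual Dockerfile is not part of this commit; ARG names are assumed to match the table above), the build-time variables have to be declared as `ARG`s and exported before `npm run build`:

```dockerfile
# Hypothetical Dockerfile excerpt: Next.js inlines NEXT_PUBLIC_* values into
# the client bundle at build time, so they must arrive via --build-arg rather
# than as runtime-only environment variables.
ARG NEXT_PUBLIC_SERVER_URL
ARG NEXT_PUBLIC_SITE_ID
ENV NEXT_PUBLIC_SERVER_URL=$NEXT_PUBLIC_SERVER_URL
ENV NEXT_PUBLIC_SITE_ID=$NEXT_PUBLIC_SITE_ID
RUN npm run build
```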

---

## Manual Operations

### Check container logs

```bash
docker logs app-staging
docker logs app-test
docker logs postgres
docker logs forgejo
```

### Redeploy manually (without CI/CD)

```bash
cd /opt/church-website/repo
git pull origin staging
/opt/church-website/scripts/deploy.sh staging 3001
/opt/church-website/scripts/deploy.sh test 3002
```

### Run migrations manually

```bash
docker exec app-staging npx payload migrate
docker exec app-test npx payload migrate
```

### Database backup

```bash
# Back up the staging database
docker exec postgres pg_dump -U church_website_staging church_website_staging > backup_staging_$(date +%Y%m%d).sql

# Back up the test database
docker exec postgres pg_dump -U church_website_test church_website_test > backup_test_$(date +%Y%m%d).sql

# Back up all databases
docker exec postgres pg_dumpall -U postgres > backup_all_$(date +%Y%m%d).sql
```

### Database restore

```bash
# Restore the staging database
cat backup_staging.sql | docker exec -i postgres psql -U church_website_staging church_website_staging
```

### Restart a single service

```bash
docker restart app-staging
docker restart app-test
docker restart postgres
```

---

## Deploy via Ansible (without CI/CD)

Use the `deploy.yml` playbook to deploy from your local machine — no Forgejo runner or CI/CD pipeline needed. This is useful for hotfixes, CI outages, or production servers without Forgejo.

```bash
cd infra/ansible

# Deploy to the test/staging VPS
ansible-playbook playbooks/deploy.yml -i inventory/test.yml --ask-vault-pass

# Deploy to production
ansible-playbook playbooks/deploy.yml -i inventory/production.yml --ask-vault-pass
```

**What it does:**

1. Pulls the latest code from the configured branch (`repo_branch` in the inventory)
2. Runs `deploy.sh` for each environment (sequentially, to save RAM), which:
   - Builds the Docker app image with build-time env vars
   - Builds a migration image and runs `npx payload migrate`
   - Stops the old container, starts the new one
   - Prunes old Docker images

**Deploy a specific branch:**

```bash
ansible-playbook playbooks/deploy.yml -i inventory/test.yml --ask-vault-pass \
  -e repo_branch=feature/my-branch
```

**Deploy only one environment** (e.g., just staging):

```bash
ansible-playbook playbooks/deploy.yml -i inventory/test.yml --ask-vault-pass \
  -e '{"app_environments": [{"name": "staging", "port": 3001}]}'
```

> **Note:** The server must already be provisioned with `setup.yml` before using `deploy.yml`. The deploy playbook only pulls code and rebuilds containers — it does not install Docker, Caddy, or PostgreSQL.

---

## CI/CD

The Forgejo Actions workflow (`.forgejo/workflows/deploy.yml`) triggers on every push to the `staging` branch. It:

1. Pulls the latest code on the VPS
2. Builds a new Docker image for staging
3. Stops the old container, starts the new one
4. Runs database migrations
5. Repeats for the test environment (sequentially, to save RAM)
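
A minimal sketch of such a workflow (illustrative only: the trigger matches the description above, but the job layout and commands are assumptions; the real file lives at `.forgejo/workflows/deploy.yml` in the repository):

```yaml
# Illustrative Forgejo Actions workflow; the ubuntu-latest label matches the
# runner registered in step 4 of the quick start. Steps are assumptions.
on:
  push:
    branches: [staging]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy staging, then test (sequentially, to save RAM)
        run: |
          /opt/church-website/scripts/deploy.sh staging 3001
          /opt/church-website/scripts/deploy.sh test 3002
```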

---

## Adding a New Environment

1. Add a new entry to `app_environments` in the inventory file
2. Add a new entry to `caddy_domains` with the new domain
3. Add a new database entry to `databases`
4. Run the playbook: `ansible-playbook playbooks/setup.yml -i inventory/test.yml --ask-vault-pass`
5. Update the deploy workflow to include the new environment
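
For instance, a hypothetical `demo` environment would add one entry to each of the three inventory lists (all names, ports, and vault variables below are illustrative):

```yaml
# Illustrative additions to inventory/test.yml for a "demo" environment
databases:
  - name: church_website_demo
    user: church_website_demo
    password: "{{ vault_db_password_demo }}"

caddy_domains:
  - domain: mutter-teresa-demo.skick.app
    proxy_port: 3004

app_environments:
  - name: demo
    port: 3004
    domain: mutter-teresa-demo.skick.app
    db_name: church_website_demo
    db_user: church_website_demo
    db_password: "{{ vault_db_password_demo }}"
    payload_secret: "{{ vault_payload_secret_demo }}"
    site_id: chemnitz
    google_bucket: "{{ vault_google_bucket }}"
    resend_api_key: "{{ vault_resend_api_key }}"
```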

---

## Production Setup

1. Copy and edit the production inventory:

   ```bash
   cp infra/ansible/inventory/production.yml infra/ansible/inventory/my-production.yml
   ```

2. Fill in the production VPS IP, domain, and secrets
3. Run the playbook (skip Forgejo):

   ```bash
   ansible-playbook playbooks/setup.yml -i inventory/my-production.yml --ask-vault-pass
   ```

4. Set up a deploy workflow for production (triggered on tags/releases)

---

## Troubleshooting

### Build fails with OOM

The VPS has 4 GB RAM + 2 GB swap. Docker builds can peak at ~1.5 GB. If builds fail:

- Ensure only one build runs at a time (the deploy script is sequential)
- Check swap: `free -h`
- Increase swap: edit `swap_size_mb` in the inventory and re-run the playbook

### SSL certificate not working

- Ensure DNS records point to the VPS IP: `dig mutter-teresa.skick.app`
- Check Caddy logs: `journalctl -u caddy`
- Caddy auto-renews certificates — if it gets stuck, restart: `systemctl restart caddy`

### Database connection refused

- Check that PostgreSQL is running: `docker ps | grep postgres`
- Check that the container is on the right network: `docker network inspect church-website-net`
- Test the connection: `docker exec postgres psql -U postgres -l`

### Container won't start

- Check logs: `docker logs app-staging`
- Check whether the port is in use: `ss -tlnp | grep 3001`
- Check the `.env` file: `cat /opt/church-website/envs/staging/.env`

---

## Local Development

For local development with PostgreSQL:

```bash
# Start PostgreSQL (from the project root)
docker compose up -d

# Configure .env
DATABASE_URI=postgres://postgres:password@localhost:5432/church_website_dev

# Start the dev server
npm run dev
```

infra/ansible/ansible.cfg (new file, 10 lines)

[defaults]
inventory = inventory/
roles_path = roles/
host_key_checking = False
retry_files_enabled = False
remote_tmp = /tmp/.ansible/tmp

[privilege_escalation]
become = True
become_method = sudo

infra/ansible/inventory/group_vars/all/vars.yml (new file, 3 lines)

---
# Non-secret shared variables
# Secrets go in vault.yml (encrypted) in this same directory

infra/ansible/inventory/group_vars/all/vault.yml (new file, 45 lines)

$ANSIBLE_VAULT;1.1;AES256
[44 lines of AES256-encrypted vault ciphertext]

infra/ansible/inventory/group_vars/all/vault.yml.example (new file, 17 lines)

---
# Copy this file to vault.yml and encrypt it:
# cp vault.yml.example vault.yml
# ansible-vault encrypt vault.yml
# ansible-vault edit vault.yml

vault_ansible_become_pass: "CHANGE_ME"
vault_postgres_root_password: "CHANGE_ME"
vault_db_password_staging: "CHANGE_ME"
vault_db_password_test: "CHANGE_ME"
vault_db_password: "CHANGE_ME"
vault_payload_secret_staging: "CHANGE_ME"
vault_payload_secret_test: "CHANGE_ME"
vault_payload_secret: "CHANGE_ME"
vault_google_bucket: "CHANGE_ME"
vault_resend_api_key: "CHANGE_ME"
vault_repo_url: "ssh://git@git.skick.app:2222/org/church-website.git"

infra/ansible/inventory/production.yml (new file, 42 lines)

# Production inventory — fill in when ready
all:
  hosts:
    production-vps:
      ansible_host: YOUR_PRODUCTION_VPS_IP
      ansible_user: root
      ansible_ssh_private_key_file: ~/.ssh/id_ed25519

  vars:
    swap_size_mb: 2048
    docker_network: church-website-net

    postgres_container_name: postgres
    postgres_image: postgis/postgis:16-3.4
    postgres_volume: pgdata

    databases:
      - name: church_website
        user: church_website
        password: "{{ vault_db_password }}"

    caddy_domains:
      - domain: YOUR_PRODUCTION_DOMAIN
        proxy_port: 3001

    app_environments:
      - name: production
        port: 3001
        domain: YOUR_PRODUCTION_DOMAIN
        db_name: church_website
        db_user: church_website
        db_password: "{{ vault_db_password }}"
        payload_secret: "{{ vault_payload_secret }}"
        site_id: chemnitz
        google_bucket: "{{ vault_google_bucket }}"
        resend_api_key: "{{ vault_resend_api_key }}"

    repo_dir: /opt/church-website/repo
    envs_dir: /opt/church-website/envs
    scripts_dir: /opt/church-website/scripts
    repo_url: "{{ vault_repo_url }}"
    repo_branch: master

infra/ansible/inventory/test.yml (new file, 70 lines)

all:
  hosts:
    test-vps:
      ansible_host: 178.104.35.59
      ansible_user: root
      ansible_ssh_pass: "{{ vault_ansible_become_pass }}"
      #ansible_ssh_private_key_file: ~/.ssh/id_ed25519

  vars:
    # Docker
    docker_network: church-website-net

    # PostgreSQL
    postgres_container_name: postgres
    postgres_image: postgis/postgis:16-3.4
    postgres_volume: pgdata

    # Databases
    databases:
      - name: church_website_staging
        user: church_website_staging
        password: "{{ vault_db_password_staging }}"
      - name: church_website_test
        user: church_website_test
        password: "{{ vault_db_password_test }}"

    # Caddy
    caddy_domains:
      - domain: mutter-teresa.skick.app
        proxy_port: 3001
      - domain: mutter-teresa-test.skick.app
        proxy_port: 3002
      - domain: git.skick.app
        proxy_port: 3003

    # Forgejo
    forgejo_domain: git.skick.app
    forgejo_container_name: forgejo
    forgejo_port: 3003
    forgejo_ssh_port: 2222

    # App environments
    app_environments:
      - name: staging
        port: 3001
        domain: mutter-teresa.skick.app
        db_name: church_website_staging
        db_user: church_website_staging
        db_password: "{{ vault_db_password_staging }}"
        payload_secret: "{{ vault_payload_secret_staging }}"
        site_id: chemnitz
        google_bucket: "{{ vault_google_bucket }}"
        resend_api_key: "{{ vault_resend_api_key }}"
      - name: test
        port: 3002
        domain: mutter-teresa-test.skick.app
        db_name: church_website_test
        db_user: church_website_test
        db_password: "{{ vault_db_password_test }}"
        payload_secret: "{{ vault_payload_secret_test }}"
        site_id: chemnitz
        google_bucket: "{{ vault_google_bucket }}"
        resend_api_key: "{{ vault_resend_api_key }}"

    # Repo
    repo_dir: /opt/church-website/repo
    envs_dir: /opt/church-website/envs
    scripts_dir: /opt/church-website/scripts
    repo_url: "{{ vault_repo_url }}"
    repo_branch: staging

infra/ansible/playbooks/copy-staging-to-test.yml (new file, 154 lines)

---
- name: Copy staging data to test environment
  hosts: all
  become: true

  vars:
    staging_db: church_website_staging
    test_db: church_website_test
    test_db_user: church_website_test
    test_container: app-test
    test_port: 3002
    test_image: "church-website:test"

  tasks:
    # ── Phase 1: Pre-flight ───────────────────────────────────────────
    - name: Verify postgres container is running
      ansible.builtin.shell: docker ps --filter name=^{{ postgres_container_name }}$ --format '{{ '{{' }}.Status{{ '}}' }}'
      register: pg_status
      changed_when: false
      failed_when: "'Up' not in pg_status.stdout"

    - name: Verify staging database exists
      ansible.builtin.shell: >
        docker exec {{ postgres_container_name }}
        psql -U postgres -tAc "SELECT 1 FROM pg_database WHERE datname = '{{ staging_db }}'"
      register: staging_exists
      changed_when: false
      failed_when: "'1' not in staging_exists.stdout"

    # ── Phase 2: Stop test app ────────────────────────────────────────
    - name: Stop test container
      ansible.builtin.shell: docker stop {{ test_container }} 2>/dev/null || true
      changed_when: false

    - name: Remove test container
      ansible.builtin.shell: docker rm {{ test_container }} 2>/dev/null || true
      changed_when: false

    # ── Phase 3: Database copy ────────────────────────────────────────
    - name: Terminate connections to test database
      ansible.builtin.shell: >
        docker exec {{ postgres_container_name }}
        psql -U postgres -c
        "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = '{{ test_db }}' AND pid <> pg_backend_pid();"
      changed_when: false

    - name: Drop test database
      ansible.builtin.shell: >
        docker exec {{ postgres_container_name }}
        psql -U postgres -c "DROP DATABASE IF EXISTS {{ test_db }};"

    - name: Create test database
      ansible.builtin.shell: >
        docker exec {{ postgres_container_name }}
        psql -U postgres -c "CREATE DATABASE {{ test_db }} OWNER {{ test_db_user }};"

    - name: Enable PostGIS extension on test database
      ansible.builtin.shell: >
        docker exec {{ postgres_container_name }}
        psql -U postgres -d {{ test_db }} -c "CREATE EXTENSION IF NOT EXISTS postgis;"

    - name: Dump staging and restore into test
      ansible.builtin.shell: >
        docker exec {{ postgres_container_name }}
        bash -c "pg_dump -U postgres --no-owner --no-acl {{ staging_db }} | psql -U postgres -d {{ test_db }}"

    - name: Reassign ownership to test user
      ansible.builtin.shell: |
        docker exec {{ postgres_container_name }} psql -U postgres -d {{ test_db }} -c "
        DO \$\$
        DECLARE
          r RECORD;
        BEGIN
          FOR r IN SELECT tablename FROM pg_tables WHERE schemaname = 'public' LOOP
            EXECUTE 'ALTER TABLE public.' || quote_ident(r.tablename) || ' OWNER TO {{ test_db_user }}';
          END LOOP;
          FOR r IN SELECT sequencename FROM pg_sequences WHERE schemaname = 'public' LOOP
            EXECUTE 'ALTER SEQUENCE public.' || quote_ident(r.sequencename) || ' OWNER TO {{ test_db_user }}';
          END LOOP;
          FOR r IN SELECT typname FROM pg_type t
            WHERE t.typnamespace = 'public'::regnamespace
              AND t.typtype = 'e'
              AND NOT EXISTS (
                SELECT 1 FROM pg_depend d
                WHERE d.objid = t.oid AND d.deptype = 'e'
              )
          LOOP
            EXECUTE 'ALTER TYPE public.' || quote_ident(r.typname) || ' OWNER TO {{ test_db_user }}';
          END LOOP;
        END
        \$\$;
        "

    - name: Verify tables exist in test database
      ansible.builtin.shell: >
        docker exec {{ postgres_container_name }}
        psql -U postgres -d {{ test_db }} -tAc "SELECT count(*) FROM pg_tables WHERE schemaname = 'public';"
      register: table_count
      changed_when: false
      failed_when: "table_count.stdout | int < 1"

    # ── Phase 4: Volume copy ─────────────────────────────────────────
    - name: Copy media volume from staging to test
      ansible.builtin.shell: >
        docker run --rm
        -v uploads-staging-media:/source:ro
        -v uploads-test-media:/target
        alpine sh -c "rm -rf /target/* && cp -a /source/. /target/"

    - name: Copy documents volume from staging to test
      ansible.builtin.shell: >
        docker run --rm
        -v uploads-staging-documents:/source:ro
        -v uploads-test-documents:/target
        alpine sh -c "rm -rf /target/* && cp -a /source/. /target/"

    # ── Phase 5: Restart test app ─────────────────────────────────────
    - name: Start test container
      ansible.builtin.shell: >
        docker run -d
        --name {{ test_container }}
        --restart unless-stopped
        --network {{ docker_network }}
        --env-file {{ envs_dir }}/test/.env
        -v uploads-test-media:/app/media
        -v uploads-test-documents:/app/documents
        -p 127.0.0.1:{{ test_port }}:3000
        {{ test_image }}

    - name: Fix volume permissions
      ansible.builtin.shell: >
        docker exec -u 0 {{ test_container }}
        chown -R 1001:1001 /app/media /app/documents

    # ── Phase 6: Health check ─────────────────────────────────────────
    - name: Wait for test app to be healthy
      ansible.builtin.uri:
        url: "http://127.0.0.1:{{ test_port }}"
        method: GET
        status_code: [200, 301, 302]
      register: health
      retries: 10
      delay: 5
      until: health.status in [200, 301, 302]

    - name: Print summary
      ansible.builtin.debug:
        msg: |
          Staging → Test copy complete!
          - Database: {{ staging_db }} → {{ test_db }} ({{ table_count.stdout }} tables)
          - Media & documents volumes copied
          - Test app running on port {{ test_port }}
          - Health check: HTTP {{ health.status }}
          - URL: https://mutter-teresa-test.skick.app
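
Like the other playbooks, this one is run from `infra/ansible` against the test inventory (assuming the same vault setup described in the README):

```bash
cd infra/ansible
ansible-playbook playbooks/copy-staging-to-test.yml -i inventory/test.yml --ask-vault-pass
```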

infra/ansible/playbooks/deploy.yml (new file, 19 lines)

---
- name: Deploy app (rebuild + restart)
  hosts: all
  become: true

  tasks:
    - name: Pull latest code
      ansible.builtin.git:
        repo: "{{ repo_url }}"
        dest: "{{ repo_dir }}"
        version: "{{ repo_branch }}"
        force: true

    - name: Deploy each environment
      ansible.builtin.shell: |
        {{ scripts_dir }}/deploy.sh {{ item.name }} {{ item.port }}
      loop: "{{ app_environments }}"
      loop_control:
        label: "{{ item.name }}"

infra/ansible/playbooks/setup.yml (new file, 11 lines)

---
- name: Set up church-website server
  hosts: all
  become: true

  roles:
    - common
    - postgresql
    - caddy
    - forgejo
    - app

infra/ansible/roles/app/tasks/deploy.yml (new file, 9 lines)

---
- name: Deploy each environment (sequentially to save RAM)
  ansible.builtin.shell: |
    {{ scripts_dir }}/deploy.sh {{ item.name }} {{ item.port }}
  loop: "{{ app_environments }}"
  loop_control:
    label: "{{ item.name }}"
  register: deploy_result
  changed_when: true

infra/ansible/roles/app/tasks/main.yml (new file, 26 lines)

---
- name: Copy deploy script to server
  ansible.builtin.copy:
    src: "{{ playbook_dir }}/../../scripts/deploy.sh"
    dest: "{{ scripts_dir }}/deploy.sh"
    mode: "0755"

- name: Deploy .env files
  ansible.builtin.template:
    src: env.j2
    dest: "{{ envs_dir }}/{{ item.name }}/.env"
    mode: "0640"
  loop: "{{ app_environments }}"
  loop_control:
    label: "{{ item.name }}"

- name: Clone or update repository
  ansible.builtin.git:
    repo: "{{ repo_url }}"
    dest: "{{ repo_dir }}"
    version: "{{ repo_branch }}"
    force: true
    accept_hostkey: true

- name: Build and deploy
  ansible.builtin.include_tasks: deploy.yml

infra/ansible/roles/app/templates/env.j2 (new file, 6 lines)

DATABASE_URI=postgres://{{ item.db_user }}:{{ item.db_password }}@{{ postgres_container_name }}:5432/{{ item.db_name }}
PAYLOAD_SECRET={{ item.payload_secret }}
NEXT_PUBLIC_SERVER_URL=https://{{ item.domain }}
NEXT_PUBLIC_SITE_ID={{ item.site_id }}
GOOGLE_BUCKET={{ item.google_bucket }}
RESEND_API_KEY={{ item.resend_api_key }}
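
For the staging entry of the test inventory, this template renders to an `.env` of the following shape (vault secrets shown as placeholders, not real values):

```
DATABASE_URI=postgres://church_website_staging:<vault_db_password_staging>@postgres:5432/church_website_staging
PAYLOAD_SECRET=<vault_payload_secret_staging>
NEXT_PUBLIC_SERVER_URL=https://mutter-teresa.skick.app
NEXT_PUBLIC_SITE_ID=chemnitz
GOOGLE_BUCKET=<vault_google_bucket>
RESEND_API_KEY=<vault_resend_api_key>
```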

infra/ansible/roles/caddy/tasks/main.yml (new file, 43 lines)

---
- name: Install Caddy dependencies
  ansible.builtin.apt:
    name:
      - debian-keyring
      - debian-archive-keyring
      - apt-transport-https
      - curl
    state: present

- name: Add Caddy GPG key
  ansible.builtin.shell:
    cmd: curl -fsSL https://dl.cloudsmith.io/public/caddy/stable/gpg.key -o /etc/apt/keyrings/caddy-stable-archive-keyring.asc && chmod 644 /etc/apt/keyrings/caddy-stable-archive-keyring.asc
    creates: /etc/apt/keyrings/caddy-stable-archive-keyring.asc

- name: Add Caddy apt repository
  ansible.builtin.apt_repository:
    repo: "deb [signed-by=/etc/apt/keyrings/caddy-stable-archive-keyring.asc] https://dl.cloudsmith.io/public/caddy/stable/deb/ubuntu any-version main"
    state: present

- name: Install Caddy
  ansible.builtin.apt:
    name: caddy
    state: present
    update_cache: true

- name: Deploy Caddyfile
  ansible.builtin.template:
    src: Caddyfile.j2
    dest: /etc/caddy/Caddyfile
    mode: "0644"
  register: caddyfile_result

- name: Enable and start Caddy
  ansible.builtin.systemd:
    name: caddy
    enabled: true
    state: started

- name: Reload Caddy
  ansible.builtin.systemd:
    name: caddy
    state: reloaded
  when: caddyfile_result.changed

infra/ansible/roles/caddy/templates/Caddyfile.j2 (new file, 6 lines)

{% for site in caddy_domains %}
{{ site.domain }} {
    reverse_proxy localhost:{{ site.proxy_port }}
}

{% endfor %}
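
With the three `caddy_domains` entries from the test inventory, the rendered `/etc/caddy/Caddyfile` comes out as:

```
mutter-teresa.skick.app {
    reverse_proxy localhost:3001
}

mutter-teresa-test.skick.app {
    reverse_proxy localhost:3002
}

git.skick.app {
    reverse_proxy localhost:3003
}
```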

infra/ansible/roles/common/tasks/main.yml (new file, 110 lines)

---
- name: Update apt cache
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: 3600

- name: Install essential packages
  ansible.builtin.apt:
    name:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg
      - lsb-release
      - ufw
      - fail2ban
      - git
    state: present

# Firewall
- name: Configure UFW rules
  ansible.builtin.shell: |
    ufw allow 22/tcp
    ufw allow 80/tcp
    ufw allow 443/tcp
    ufw allow {{ forgejo_ssh_port | default(2222) }}/tcp
    ufw --force enable
    ufw default deny incoming
  changed_when: false

# Fail2ban
- name: Enable fail2ban
  ansible.builtin.systemd:
    name: fail2ban
    enabled: true
    state: started

# Docker
- name: Ensure keyrings directory exists
  ansible.builtin.file:
    path: /etc/apt/keyrings
    state: directory
    mode: "0755"

- name: Add Docker GPG key
  ansible.builtin.shell:
    cmd: curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc && chmod 644 /etc/apt/keyrings/docker.asc
    creates: /etc/apt/keyrings/docker.asc

- name: Add Docker apt repository
  ansible.builtin.apt_repository:
    repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
    state: present

- name: Install Docker
  ansible.builtin.apt:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-buildx-plugin
    state: present
    update_cache: true

- name: Start Docker
  ansible.builtin.systemd:
    name: docker
    enabled: true
    state: started

# Docker network
- name: Create Docker network
  ansible.builtin.shell: docker network inspect {{ docker_network }} >/dev/null 2>&1 || docker network create {{ docker_network }}
  changed_when: false

# SSH key (for cloning from Forgejo)
- name: Generate SSH key
  ansible.builtin.shell:
    cmd: ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N "" -q
    creates: /root/.ssh/id_ed25519

- name: Read SSH public key
  ansible.builtin.command: cat /root/.ssh/id_ed25519.pub
  register: ssh_public_key
  changed_when: false

- name: Show SSH public key
  ansible.builtin.debug:
    msg: "Add this SSH key to Forgejo (Settings > SSH Keys): {{ ssh_public_key.stdout }}"

# App directories
- name: Create app directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    mode: "0755"
  loop:
    - /opt/church-website
    - "{{ repo_dir }}"
    - "{{ envs_dir }}"
    - "{{ scripts_dir }}"

- name: Create environment directories
  ansible.builtin.file:
    path: "{{ envs_dir }}/{{ item.name }}"
    state: directory
    mode: "0750"
  loop: "{{ app_environments }}"
  loop_control:
    label: "{{ item.name }}"
|
||||
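The roles consume a handful of inventory variables (`docker_network`, `repo_dir`, `envs_dir`, `scripts_dir`, `app_environments`, plus the Caddy, Forgejo, and database settings used in the other roles). A sketch of the expected `group_vars` shape — the variable names are the ones the tasks actually reference, while the database entries and vault key names are hypothetical:

```yaml
# Illustrative only — real values live in inventory/group_vars/all/ (vault-encrypted where secret)
docker_network: church-website-net
repo_dir: /opt/church-website/repo
envs_dir: /opt/church-website/envs
scripts_dir: /opt/church-website/scripts

caddy_domains:
  - { domain: mutter-teresa.skick.app, proxy_port: 3001 }
  - { domain: mutter-teresa-test.skick.app, proxy_port: 3002 }
  - { domain: git.skick.app, proxy_port: 3003 }

forgejo_domain: git.skick.app
forgejo_port: 3003
forgejo_ssh_port: 2222
forgejo_container_name: forgejo

postgres_container_name: postgres
postgres_image: postgis/postgis:16-3.4
postgres_volume: postgres-data

app_environments:
  - { name: staging }
  - { name: test }

databases:
  - name: church_staging        # hypothetical entry
    user: church_staging
    password: "{{ vault_staging_db_password }}" # hypothetical vault key
```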
30
infra/ansible/roles/forgejo/tasks/main.yml
Normal file
@ -0,0 +1,30 @@
---
- name: Create Forgejo directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    mode: "0755"
  loop:
    - /opt/forgejo
    - /opt/forgejo/data
    - /opt/forgejo/runner

- name: Deploy Forgejo Docker Compose file
  ansible.builtin.template:
    src: docker-compose.forgejo.yml.j2
    dest: /opt/forgejo/docker-compose.yml
    mode: "0644"

- name: Start Forgejo services
  ansible.builtin.shell: docker compose up -d
  args:
    chdir: /opt/forgejo

- name: Wait for Forgejo to be ready
  ansible.builtin.uri:
    url: "http://localhost:{{ forgejo_port }}"
    status_code: 200
  register: forgejo_health
  retries: 15
  delay: 5
  until: forgejo_health.status == 200
41
infra/ansible/roles/forgejo/templates/docker-compose.forgejo.yml.j2
Normal file
@ -0,0 +1,41 @@
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:9
    container_name: {{ forgejo_container_name }}
    restart: unless-stopped
    networks:
      - {{ docker_network }}
    volumes:
      - ./data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "127.0.0.1:{{ forgejo_port }}:3000"
      - "{{ forgejo_ssh_port }}:22"
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - FORGEJO__server__ROOT_URL=https://{{ forgejo_domain }}
      - FORGEJO__server__SSH_DOMAIN={{ forgejo_domain }}
      - FORGEJO__server__SSH_PORT={{ forgejo_ssh_port }}
      - FORGEJO__actions__ENABLED=true

  runner:
    image: code.forgejo.org/forgejo/runner:6.2.2
    container_name: forgejo-runner
    command: forgejo-runner daemon
    restart: unless-stopped
    user: "0:0"
    networks:
      - {{ docker_network }}
    volumes:
      - ./runner:/data
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DOCKER_HOST=unix:///var/run/docker.sock
    depends_on:
      - forgejo

networks:
  {{ docker_network }}:
    external: true
63
infra/ansible/roles/postgresql/tasks/main.yml
Normal file
@ -0,0 +1,63 @@
---
- name: Create PostgreSQL init script directory
  ansible.builtin.file:
    path: /opt/church-website/postgres-init
    state: directory
    mode: "0755"

- name: Deploy database init script
  ansible.builtin.template:
    src: init-databases.sh.j2
    dest: /opt/church-website/postgres-init/init-databases.sh
    mode: "0755"

- name: Check if PostgreSQL container exists
  ansible.builtin.shell: docker ps -a --filter name=^{{ postgres_container_name }}$ --format '{{ '{{' }}.Status{{ '}}' }}'
  register: postgres_status
  changed_when: false

- name: Start PostgreSQL container
  ansible.builtin.shell: |
    docker run -d \
      --name {{ postgres_container_name }} \
      --restart unless-stopped \
      --network {{ docker_network }} \
      -v {{ postgres_volume }}:/var/lib/postgresql/data \
      -v /opt/church-website/postgres-init:/docker-entrypoint-initdb.d:ro \
      -e POSTGRES_USER=postgres \
      -e POSTGRES_PASSWORD={{ vault_postgres_root_password }} \
      -p 127.0.0.1:5432:5432 \
      {{ postgres_image }}
  when: postgres_status.stdout == ""

- name: Wait for PostgreSQL to be ready
  ansible.builtin.shell: docker exec {{ postgres_container_name }} pg_isready -U postgres
  register: pg_ready
  retries: 10
  delay: 3
  until: pg_ready.rc == 0
  changed_when: false

- name: Create databases and users
  ansible.builtin.shell: |
    docker exec {{ postgres_container_name }} psql -U postgres -c "
      DO \$\$
      BEGIN
        IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '{{ item.user }}') THEN
          CREATE ROLE {{ item.user }} WITH LOGIN PASSWORD '{{ item.password }}';
        END IF;
      END
      \$\$;
    "
    docker exec {{ postgres_container_name }} psql -U postgres -tc "SELECT 1 FROM pg_database WHERE datname = '{{ item.name }}'" | grep -q 1 || \
      docker exec {{ postgres_container_name }} psql -U postgres -c "CREATE DATABASE {{ item.name }} OWNER {{ item.user }}"
  loop: "{{ databases }}"
  loop_control:
    label: "{{ item.name }}"

- name: Enable PostGIS extension on each database
  ansible.builtin.shell: docker exec {{ postgres_container_name }} psql -U postgres -d {{ item.name }} -c "CREATE EXTENSION IF NOT EXISTS postgis;"
  loop: "{{ databases }}"
  loop_control:
    label: "{{ item.name }}"
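As a concrete illustration, for a hypothetical `databases` entry `{ name: church_staging, user: church_staging }` the role-creation step above renders to (password elided):

```sql
DO $$
BEGIN
    IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = 'church_staging') THEN
        CREATE ROLE church_staging WITH LOGIN PASSWORD '...';
    END IF;
END
$$;
```

The database itself is created outside the DO block because `CREATE DATABASE` cannot run inside a transaction or function; the `grep -q 1 ||` guard supplies the `IF NOT EXISTS` semantics PostgreSQL does not offer for databases.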
20
infra/ansible/roles/postgresql/templates/init-databases.sh.j2
Normal file
@ -0,0 +1,20 @@
#!/bin/bash
# This script runs on first PostgreSQL container start only
# (placed in /docker-entrypoint-initdb.d/)

set -e

{% for db in databases %}
echo "Creating database {{ db.name }} with user {{ db.user }}..."
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
CREATE USER {{ db.user }} WITH PASSWORD '{{ db.password }}';
CREATE DATABASE {{ db.name }} OWNER {{ db.user }};
GRANT ALL PRIVILEGES ON DATABASE {{ db.name }} TO {{ db.user }};
EOSQL

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" -d {{ db.name }} <<-EOSQL
CREATE EXTENSION IF NOT EXISTS postgis;
EOSQL

{% endfor %}
echo "Database initialization complete."
61
infra/scripts/deploy.sh
Executable file
@ -0,0 +1,61 @@
#!/bin/bash
set -euo pipefail

ENV_NAME=$1 # "staging" or "test"
APP_PORT=$2 # 3001 or 3002
REPO_DIR="/opt/church-website/repo"
ENV_DIR="/opt/church-website/envs/${ENV_NAME}"
CONTAINER_NAME="app-${ENV_NAME}"
IMAGE_NAME="church-website:${ENV_NAME}"
MIGRATE_IMAGE="church-website-migrate:${ENV_NAME}"
NETWORK_NAME="church-website-net"

if [ ! -f "${ENV_DIR}/.env" ]; then
  echo "Error: ${ENV_DIR}/.env not found"
  exit 1
fi

echo "==> Building app image ${IMAGE_NAME}..."
docker build \
  --build-arg NEXT_PUBLIC_SERVER_URL="$(grep NEXT_PUBLIC_SERVER_URL "${ENV_DIR}/.env" | cut -d= -f2-)" \
  --build-arg NEXT_PUBLIC_SITE_ID="$(grep NEXT_PUBLIC_SITE_ID "${ENV_DIR}/.env" | cut -d= -f2-)" \
  -t "${IMAGE_NAME}" \
  "${REPO_DIR}"

echo "==> Building migration image..."
docker build \
  --target builder \
  --build-arg NEXT_PUBLIC_SERVER_URL="$(grep NEXT_PUBLIC_SERVER_URL "${ENV_DIR}/.env" | cut -d= -f2-)" \
  --build-arg NEXT_PUBLIC_SITE_ID="$(grep NEXT_PUBLIC_SITE_ID "${ENV_DIR}/.env" | cut -d= -f2-)" \
  -t "${MIGRATE_IMAGE}" \
  "${REPO_DIR}"

echo "==> Running database migrations..."
docker run --rm \
  --network "${NETWORK_NAME}" \
  --env-file "${ENV_DIR}/.env" \
  "${MIGRATE_IMAGE}" \
  npx payload migrate

echo "==> Stopping old container..."
docker stop "${CONTAINER_NAME}" 2>/dev/null || true
docker rm "${CONTAINER_NAME}" 2>/dev/null || true

echo "==> Starting new container on port ${APP_PORT}..."
docker run -d \
  --name "${CONTAINER_NAME}" \
  --restart unless-stopped \
  --network "${NETWORK_NAME}" \
  --env-file "${ENV_DIR}/.env" \
  -v "uploads-${ENV_NAME}-media:/app/media" \
  -v "uploads-${ENV_NAME}-documents:/app/documents" \
  -p "127.0.0.1:${APP_PORT}:3000" \
  "${IMAGE_NAME}"

echo "==> Fixing volume permissions..."
docker exec -u 0 "${CONTAINER_NAME}" chown -R 1001:1001 /app/media /app/documents

echo "==> Cleaning up old images..."
docker image prune -f

echo "==> Done! ${ENV_NAME} deployed on port ${APP_PORT}"
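The script is invoked as `deploy.sh <env> <port>`, e.g. `deploy.sh staging 3001`. The `grep … | cut -d= -f2-` pipeline it uses to pull build args out of the env file can be sketched standalone (hypothetical sample `.env` under /tmp):

```shell
#!/bin/sh
# Hypothetical sample env file, mirroring the keys deploy.sh reads
cat > /tmp/sample.env <<'EOF'
NEXT_PUBLIC_SERVER_URL=https://mutter-teresa.skick.app
NEXT_PUBLIC_SITE_ID=staging
EOF

# cut -d= -f2- keeps everything after the FIRST '=', so values that
# themselves contain '=' (e.g. base64 secrets) survive intact
grep NEXT_PUBLIC_SERVER_URL /tmp/sample.env | cut -d= -f2-   # → https://mutter-teresa.skick.app
grep NEXT_PUBLIC_SITE_ID /tmp/sample.env | cut -d= -f2-      # → staging
```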