Compare commits


8 Commits

Author SHA1 Message Date
beatz174-bit 5c600d0af0 Merge pull request #21 from beatz174-bit/codex/split-compose-files-by-service
Split core and monitoring compose files into single-service compose files
2026-04-13 10:23:36 +10:00
beatz174-bit e3d463d511 Merge branch 'main' into codex/split-compose-files-by-service 2026-04-13 10:23:24 +10:00
beatz174-bit ff2d323309 Split core and prometheus compose files by service 2026-04-13 10:22:42 +10:00
beatz174-bit 47a5908430 Merge pull request #20 from beatz174-bit/codex/update-traefik-configuration-for-trusted-proxies
Restrict Traefik forwarded headers to trusted IPs, enable Authelia trustForwardHeader, and add traefik network subnet
2026-04-13 10:16:25 +10:00
beatz174-bit 8448f2bb94 Narrow trusted proxy CIDRs and pin Traefik subnet 2026-04-13 10:16:06 +10:00
beatz174-bit cfbefed2e3 Merge pull request #19 from beatz174-bit/codex/update-security_secrets_inventory.md
docs: align secrets inventory with current .env example
2026-04-13 09:55:30 +10:00
beatz174-bit 3b3f06a727 docs: align secrets inventory with example env keys 2026-04-13 09:55:14 +10:00
git 8c82830af8 modified: secrets/.env.secrets.example 2026-04-13 09:49:41 +10:00
17 changed files with 408 additions and 439 deletions
+37 -28
@@ -1,31 +1,40 @@
-# Credential Inventory (apps/, core/, monitoring/)
-## apps/
-- `apps/nextcloud/docker-compose.yml`
-  - `MYSQL_PASSWORD` (nextcloud-webapp) -> `MYSQL_PASSWORD_FILE` + Docker secret.
-  - `SMTP_PASSWORD` -> `SMTP_PASSWORD_FILE` + Docker secret.
-  - `REDIS_HOST_PASSWORD` -> `REDIS_HOST_PASSWORD_FILE` + Docker secret.
-  - `MYSQL_ROOT_PASSWORD`, `MYSQL_PASSWORD`, `NEXTCLOUD_ADMIN_PASSWORD` (nextcloud-db) -> `_FILE` variants + Docker secrets.
-  - Redis `--requirepass` inline value -> read from Docker secret at runtime.
-- `apps/passbolt/docker-compose.yml`
-  - `MYSQL_PASSWORD`, `DATASOURCES_DEFAULT_PASSWORD` -> `_FILE` variants + Docker secret.
-- `apps/gramps/docker-compose.yml`
-  - `POSTGRES_PASSWORD` -> `POSTGRES_PASSWORD_FILE` + Docker secret.
-  - `DB_URI` password + `INITIAL_ADMIN_PASSWORD` -> env references from non-committed secrets env file.
-## core/
-- `core/authelia/configuration.yml`
-  - `identity_validation.reset_password.jwt_secret` -> `${AUTHELIA_JWT_SECRET}`.
-  - `session.secret` -> `${AUTHELIA_SESSION_SECRET}`.
-  - `storage.encryption_key` -> `${AUTHELIA_STORAGE_ENCRYPTION_KEY}`.
-- `core/traefik/dynamic.yml`
-  - `crowdsecLapiKey` -> `${CROWDSEC_LAPI_KEY}`.
-## monitoring/
-- `monitoring/gotify/docker-compose.yml`
-  - `GOTIFY_DEFAULTUSER_PASS` -> `${GOTIFY_DEFAULTUSER_PASS}` from non-committed secrets env file.
-- `monitoring/prometheus/docker-compose.yml`
-  - `DOCKER_INFLUXDB_INIT_PASSWORD` -> `DOCKER_INFLUXDB_INIT_PASSWORD_FILE` + Docker secret.
-  - `PIHOLE_PASSWORD` -> `${PIHOLE_PASSWORD}` from non-committed secrets env file.
-- `monitoring/prometheus/prometheus.yml`
-  - Uptime Kuma basic_auth `password` -> `password_file` mounted from non-committed secret file.
+# Security Secrets Inventory
+
+This inventory is aligned with `secrets/.env.secrets.example` and documents only the values that are expected to be set in the non-committed secrets env file (`secrets/stack-secrets.env`).
+
+## Secrets expected in `secrets/.env.secrets.example`
+
+| Variable | Used by | Purpose / Notes |
+|---|---|---|
+| `NEXTCLOUD_DB_USER` | `apps/nextcloud/docker-compose.yml` | Nextcloud database username (non-secret identifier but environment-specific). |
+| `NEXTCLOUD_ADMIN_USER` | `apps/nextcloud/docker-compose.yml` | Initial Nextcloud admin username. |
+| `NEXTCLOUD_SMTP_FROM_ADDRESS` | `apps/nextcloud/docker-compose.yml` | SMTP sender local-part for outbound mail configuration. |
+| `NEXTCLOUD_SMTP_DOMAIN` | `apps/nextcloud/docker-compose.yml` | SMTP sender domain for outbound mail configuration. |
+| `NEXTCLOUD_SMTP_NAME` | `apps/nextcloud/docker-compose.yml` | Derived from address + domain in the example file. |
+| `PASSBOLT_DB_NAME` | `apps/passbolt/docker-compose.yml` | Passbolt database name. |
+| `PASSBOLT_DB_USER` | `apps/passbolt/docker-compose.yml` | Passbolt database username. |
+| `PASSBOLT_GPG_SERVER_KEY_FINGERPRINT` | `apps/passbolt/docker-compose.yml` | Passbolt server GPG key fingerprint. |
+| `GRAMPS_DB_NAME` | `apps/gramps/docker-compose.yml` | Gramps database name. |
+| `GRAMPS_DB_USER` | `apps/gramps/docker-compose.yml` | Gramps database username. |
+| `GRAMPS_DB_PASSWORD` | `apps/gramps/docker-compose.yml` | Gramps database password. |
+| `GRAMPS_INITIAL_ADMIN` | `apps/gramps/docker-compose.yml` | Gramps initial admin username/email (deployment-specific). |
+| `GRAMPS_INITIAL_ADMIN_PASSWORD` | `apps/gramps/docker-compose.yml` | Gramps initial admin password. |
+| `GRAMPS_DB_URI` | `apps/gramps/docker-compose.yml` | Derived connection string in the example file. |
+| `GOTIFY_DEFAULTUSER_NAME` | `monitoring/gotify/docker-compose.yml` | Gotify default username. |
+| `GOTIFY_DEFAULTUSER_PASS` | `monitoring/gotify/docker-compose.yml` | Gotify default user password. |
+| `INFLUXDB_INIT_USERNAME` | `monitoring/prometheus/docker-compose.yml` | InfluxDB initial username. |
+| `PIHOLE_PASSWORD` | `monitoring/prometheus/docker-compose.yml` | Exporter auth / Pi-hole integration password. |
+
+## Managed outside `.env.secrets.example`
+
+The following sensitive values are intentionally not duplicated in `secrets/.env.secrets.example` because they are provided via Docker secrets (`*_FILE`) or other mounted secret files:
+
+- Database/root passwords for Nextcloud, Passbolt, and supporting services that are wired through Docker secrets.
+- Redis runtime password (`--requirepass`) loaded from a Docker secret.
+- `DOCKER_INFLUXDB_INIT_PASSWORD` loaded from a Docker secret in monitoring.
+- Uptime Kuma basic auth password loaded via `password_file` in the Prometheus config.
+- Core stack secrets injected via env substitution in committed config files, such as:
+  - `AUTHELIA_JWT_SECRET`
+  - `AUTHELIA_SESSION_SECRET`
+  - `AUTHELIA_STORAGE_ENCRYPTION_KEY`
+  - `CROWDSEC_LAPI_KEY`
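The `*_FILE` + Docker secret pattern referenced throughout the inventory generally looks like this in a Compose file (a minimal sketch with a hypothetical `db` service; the service, secret, and file names are illustrative, not taken from this repo):

```yaml
services:
  db:
    image: mariadb:11
    environment:
      # The image's entrypoint reads the real password from this file at
      # startup, so the value never appears in committed YAML or in the
      # container's plain environment listing.
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
    secrets:
      - db_root_password

secrets:
  db_root_password:
    file: ./secrets/db_root_password.txt   # kept out of version control
```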
+30
@@ -0,0 +1,30 @@
services:
  authelia:
    profiles: ["core","all","traefik"]
    image: authelia/authelia
    restart: always
    build:
      context: ${PROJECT_ROOT}/core/authelia
    # env_file:
    #   - ${PROJECT_ROOT}/secrets/stack-secrets.env
    # environment:
    #   - AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET:${AUTHELIA_JWT_SECRET}
    #   - AUTHELIA_SESSION_SECRET:${AUTHELIA_SESSION_SECRET}
    #   - AUTHELIA_STORAGE_ENCRYPTION_KEY:${AUTHELIA_STORAGE_ENCRYPTION_KEY}
    volumes:
      - ${PROJECT_ROOT}/core/authelia:/config
    networks:
      # - reverse_proxy
      - traefik
    container_name: authelia
    labels:
      - traefik.enable=true
      - traefik.http.routers.authelia.rule=Host(`auth.lan.ddnsgeek.com`)
      - traefik.http.routers.authelia.entrypoints=websecure
      - traefik.http.routers.authelia.tls=true
      - traefik.http.routers.authelia.tls.certresolver=myresolver
      - io.portainer.accesscontrol.public
      - traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://auth.lan.ddnsgeek.com/
      - traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true
      - traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User,Remote-Groups
      - traefik.http.middlewares.authelia.forwardauth.maxResponseBodySize=2097152
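Because the `authelia` forwardauth middleware is declared via labels on the Authelia container itself, any other routed service can opt into authentication with a single extra label. A sketch, where `myapp` and its hostname are hypothetical placeholders:

```yaml
services:
  myapp:
    image: nginx:alpine   # placeholder workload
    networks:
      - traefik
    labels:
      - traefik.enable=true
      - traefik.http.routers.myapp.rule=Host(`myapp.lan.ddnsgeek.com`)
      - traefik.http.routers.myapp.entrypoints=websecure
      - traefik.http.routers.myapp.tls.certresolver=myresolver
      # Reuse the forwardauth middleware defined on the authelia service.
      - traefik.http.routers.myapp.middlewares=authelia
```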
+23
@@ -0,0 +1,23 @@
services:
  crowdsec:
    # image: crowdsecurity/crowdsec:latest
    profiles: ["core","all","traefik"]
    build: ${PROJECT_ROOT}/core/crowdsec
    container_name: crowdsec
    restart: always
    environment:
      - COLLECTIONS=crowdsecurity/traefik
      # - CROWDSEC_LAPI_KEY=${CROWDSEC_LAPI_KEY}
    volumes:
      - ${PROJECT_ROOT}/core/crowdsec/logs:/logs:ro
      - ${PROJECT_ROOT}/core/crowdsec/data:/var/lib/crowdsec/data
      - ${PROJECT_ROOT}/core/crowdsec/config:/etc/crowdsec
    networks:
      # - reverse_proxy
      - traefik
    healthcheck:
      test: ["CMD-SHELL", "cscli metrics || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 15s
+1 -132
@@ -1,132 +1 @@
-services:
-  traefik:
-    profiles: ["core","all","traefik"]
-    image: traefik:3
-    container_name: traefik
-    restart: always
-    read_only: true
-    hostname: traefik.lan.ddnsgeek.com
-    depends_on:
-      - docker-socket-proxy
-      - error-pages
-      - authelia
-      - crowdsec
-    ports:
-      - "80:80"
-      - "443:443"
-    build:
-      context: ${PROJECT_ROOT}/core
-    # env_file:
-    #   - ${PROJECT_ROOT}/secrets/stack-secrets.env
-    volumes:
-      - ${PROJECT_ROOT}/core/traefik/data/letsencrypt:/letsencrypt
-      - ${PROJECT_ROOT}/core/traefik/data/logs:/logs
-      - ${PROJECT_ROOT}/core/traefik/dynamic.yml:/etc/traefik/dynamic.yml:ro
-      - ${PROJECT_ROOT}/core/traefik/traefik.yml:/etc/traefik/traefik.yml:ro
-      - ${PROJECT_ROOT}/core/traefik/data/plugins:/plugins-storage
-    healthcheck:
-      test: traefik healthcheck --ping
-    labels:
-      - "traefik.enable=true"
-      - "traefik.http.routers.traefik.rule=Host(`traefik.lan.ddnsgeek.com`)"
-      - "traefik.http.routers.traefik.service=api@internal"
-      - "traefik.http.routers.traefik.entrypoints=websecure"
-      - "traefik.http.routers.traefik.tls.certresolver=myresolver"
-      - "traefik.http.routers.traefik.middlewares=authelia"
-      - "io.portainer.accesscontrol.public"
-      - "traefik.docker.network=core_traefik"
-      - "traefik.http.routers.traefik.observability.tracing=true"
-    networks:
-      # - reverse_proxy
-      # - prometheus_edge
-      - traefik
-  crowdsec:
-    # image: crowdsecurity/crowdsec:latest
-    profiles: ["core","all","traefik"]
-    build: ${PROJECT_ROOT}/core/crowdsec
-    container_name: crowdsec
-    restart: always
-    environment:
-      - COLLECTIONS=crowdsecurity/traefik
-      # - CROWDSEC_LAPI_KEY=${CROWDSEC_LAPI_KEY}
-    volumes:
-      - ${PROJECT_ROOT}/core/crowdsec/logs:/logs:ro
-      - ${PROJECT_ROOT}/core/crowdsec/data:/var/lib/crowdsec/data
-      - ${PROJECT_ROOT}/core/crowdsec/config:/etc/crowdsec
-    networks:
-      # - reverse_proxy
-      - traefik
-    healthcheck:
-      test: ["CMD-SHELL", "cscli metrics || exit 1"]
-      interval: 30s
-      timeout: 10s
-      retries: 3
-      start_period: 15s
-  error-pages:
-    profiles: ["core","all","traefik"]
-    image: tarampampam/error-pages:3
-    restart: always
-    container_name: error-pages
-    read_only: true
-    environment:
-      TEMPLATE_NAME: ${ERROR_PAGES_TEMPLATE_NAME}
-    networks:
-      # - reverse_proxy
-      - traefik
-    hostname: error-pages
-    labels:
-      - "traefik.enable=true"
-      # use as "fallback" for any NON-registered services (with priority below normal)
-      - "traefik.http.routers.error-pages-router.rule=HostRegexp(`{host:.+}`)"
-      # should say that all of your services work on https
-      - "traefik.http.routers.error-pages-router.entrypoints=web"
-      - "traefik.http.routers.error-pages-router.middlewares=error-pages-middleware"
-      # "errors" middleware settings
-      - "traefik.http.middlewares.error-pages-middleware.errors.status=400-599"
-      - "traefik.http.middlewares.error-pages-middleware.errors.service=error-pages-service"
-      - "traefik.http.middlewares.error-pages-middleware.errors.query=/{status}.html"
-      # define service properties
-      - "traefik.http.services.error-pages-service.loadbalancer.server.port=8080"
-      - "io.portainer.accesscontrol.public"
-  authelia:
-    profiles: ["core","all","traefik"]
-    image: authelia/authelia
-    restart: always
-    build:
-      context: ${PROJECT_ROOT}/core/authelia
-    # env_file:
-    #   - ${PROJECT_ROOT}/secrets/stack-secrets.env
-    # environment:
-    #   - AUTHELIA_IDENTITY_VALIDATION_RESET_PASSWORD_JWT_SECRET:${AUTHELIA_JWT_SECRET}
-    #   - AUTHELIA_SESSION_SECRET:${AUTHELIA_SESSION_SECRET}
-    #   - AUTHELIA_STORAGE_ENCRYPTION_KEY:${AUTHELIA_STORAGE_ENCRYPTION_KEY}
-    volumes:
-      - ${PROJECT_ROOT}/core/authelia:/config
-    networks:
-      # - reverse_proxy
-      - traefik
-    container_name: authelia
-    labels:
-      - traefik.enable=true
-      - traefik.http.routers.authelia.rule=Host(`auth.lan.ddnsgeek.com`)
-      - traefik.http.routers.authelia.entrypoints=websecure
-      - traefik.http.routers.authelia.tls=true
-      - traefik.http.routers.authelia.tls.certresolver=myresolver
-      - io.portainer.accesscontrol.public
-      - traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://auth.lan.ddnsgeek.com/
-      - traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true
-      - traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User,Remote-Groups
-      - traefik.http.middlewares.authelia.forwardauth.maxResponseBodySize=2097152
-#networks:
-#  reverse_proxy:
-#    driver: bridge
-#  prometheus_edge:
-#    external: true
+services: {}
+27
@@ -0,0 +1,27 @@
services:
  error-pages:
    profiles: ["core","all","traefik"]
    image: tarampampam/error-pages:3
    restart: always
    container_name: error-pages
    read_only: true
    environment:
      TEMPLATE_NAME: ${ERROR_PAGES_TEMPLATE_NAME}
    networks:
      # - reverse_proxy
      - traefik
    hostname: error-pages
    labels:
      - "traefik.enable=true"
      # use as "fallback" for any NON-registered services (with priority below normal)
      - "traefik.http.routers.error-pages-router.rule=HostRegexp(`{host:.+}`)"
      # should say that all of your services work on https
      - "traefik.http.routers.error-pages-router.entrypoints=web"
      - "traefik.http.routers.error-pages-router.middlewares=error-pages-middleware"
      # "errors" middleware settings
      - "traefik.http.middlewares.error-pages-middleware.errors.status=400-599"
      - "traefik.http.middlewares.error-pages-middleware.errors.service=error-pages-service"
      - "traefik.http.middlewares.error-pages-middleware.errors.query=/{status}.html"
      # define service properties
      - "traefik.http.services.error-pages-service.loadbalancer.server.port=8080"
      - "io.portainer.accesscontrol.public"
+48
@@ -0,0 +1,48 @@
services:
  traefik:
    profiles: ["core","all","traefik"]
    image: traefik:3
    container_name: traefik
    restart: always
    read_only: true
    hostname: traefik.lan.ddnsgeek.com
    depends_on:
      - docker-socket-proxy
      - error-pages
      - authelia
      - crowdsec
    ports:
      - "80:80"
      - "443:443"
    build:
      context: ${PROJECT_ROOT}/core
    # env_file:
    #   - ${PROJECT_ROOT}/secrets/stack-secrets.env
    volumes:
      - ${PROJECT_ROOT}/core/traefik/data/letsencrypt:/letsencrypt
      - ${PROJECT_ROOT}/core/traefik/data/logs:/logs
      - ${PROJECT_ROOT}/core/traefik/dynamic.yml:/etc/traefik/dynamic.yml:ro
      - ${PROJECT_ROOT}/core/traefik/traefik.yml:/etc/traefik/traefik.yml:ro
      - ${PROJECT_ROOT}/core/traefik/data/plugins:/plugins-storage
    healthcheck:
      test: traefik healthcheck --ping
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.rule=Host(`traefik.lan.ddnsgeek.com`)"
      - "traefik.http.routers.traefik.service=api@internal"
      - "traefik.http.routers.traefik.entrypoints=websecure"
      - "traefik.http.routers.traefik.tls.certresolver=myresolver"
      - "traefik.http.routers.traefik.middlewares=authelia"
      - "io.portainer.accesscontrol.public"
      - "traefik.docker.network=core_traefik"
      - "traefik.http.routers.traefik.observability.tracing=true"
    networks:
      # - reverse_proxy
      # - prometheus_edge
      - traefik
+15 -2
@@ -24,7 +24,16 @@ entryPoints:
   web:
     address: ":80"
     forwardedHeaders:
-      insecure: true
+      # Trust forwarding headers only from upstream proxies/LBs under our control.
+      # Network assumptions for this stack:
+      #   - 127.0.0.1/32: local host-side reverse-proxy hops
+      #   - 192.168.2.0/24: LAN edge proxies
+      #   - 172.21.0.0/16: pinned Docker subnet for the traefik bridge network
+      insecure: false
+      trustedIPs:
+        - "127.0.0.1/32"
+        - "192.168.2.0/24"
+        - "172.21.0.0/16"
     http:
       redirections:
         entryPoint:
@@ -34,7 +43,11 @@ entryPoints:
   websecure:
     address: ":443"
     forwardedHeaders:
-      insecure: true
+      insecure: false
+      trustedIPs:
+        - "127.0.0.1/32"
+        - "192.168.2.0/24"
+        - "172.21.0.0/16"
     http:
       middlewares:
         - default-chain@file
+3 -1
@@ -1,5 +1,7 @@
networks:
traefik:
driver: bridge
ipam:
config:
- subnet: 172.21.0.0/16
monitor:
@@ -0,0 +1,55 @@
services:
  docker-update-exporter:
    profiles: ["monitoring","all","prometheus-exporters"]
    build:
      context: ${PROJECT_ROOT}/monitoring/docker-exporter
    container_name: docker-update-exporter
    # volumes:
    #   - /var/run/docker.sock:/var/run/docker.sock
    #   - ${PROJECT_ROOT}/monitoring/docker-exporter/data:/data:rw
    #   - ${PROJECT_ROOT}/services-up.sh:/app/services-up.sh:ro
    environment:
      LOG_LEVEL: ${DOCKER_EXPORTER_LOG_LEVEL}
      DOCKER_HOST: ${DOCKER_SOCKET_PROXY_HOST}
    depends_on:
      - docker-socket-proxy
    volumes:
      - ~/.docker/config.json:/root/.docker/config.json:ro
      - ${PROJECT_ROOT}/monitoring/docker-exporter/data:/data:rw
      - ${PROJECT_ROOT}:/compose:ro
      # - ${PROJECT_ROOT}/default-environment.env:/compose/default-environment.env:ro
      # - ${PROJECT_ROOT}/default-network.yml:/compose/default-network.yml:ro
      # - ${PROJECT_ROOT}/core/docker-compose.yml:/compose/core/docker-compose.yml:ro
      # - ${PROJECT_ROOT}/monitoring/prometheus/docker-compose.yml:/compose/monitoring/prometheus/docker-compose.yml:ro
      # - ${PROJECT_ROOT}/monitoring/gotify/docker-compose.yml:/compose/monitoring/gotify/docker-compose.yml:ro
      # - ${PROJECT_ROOT}/monitoring/grafana/docker-compose.yml:/compose/monitoring/grafana/docker-compose.yml:ro
      # - ${PROJECT_ROOT}/monitoring/portainer/docker-compose.yml:/compose/monitoring/portainer/docker-compose.yml:ro
      # - ${PROJECT_ROOT}/monitoring/uptime-kuma/docker-compose.yml:/compose/monitoring/uptime-kuma/docker-compose.yml:>
      # - ${PROJECT_ROOT}/apps/gitea/docker-compose.yml:/compose/apps/gitea/docker-compose.yml:ro
      # - ${PROJECT_ROOT}/apps/gramps/docker-compose.yml:/compose/apps/gramps/docker-compose.yml:ro
      # - ${PROJECT_ROOT}/apps/nextcloud/docker-compose.yml:/compose/apps/nextcloud/docker-compose.yml:ro
      # - ${PROJECT_ROOT}/apps/passbolt/docker-compose.yml:/compose/apps/passbolt/docker-compose.yml:ro
      # - ${PROJECT_ROOT}/apps/searxng/docker-compose.yml:/compose/apps/searxng/docker-compose.yml:ro
      # - ${PROJECT_ROOT}/apps/shift-recorder/docker-compose.yml:/compose/apps/shift-recorder/docker-compose.yml:ro
      # - ${PROJECT_ROOT}/apps/stockfill/docker-compose.yml:/compose/apps/stockfill/docker-compose.yml:ro
      # - ${PROJECT_ROOT}/monitoring/node-red/docker-compose.yml:/compose/monitoring/node-red/docker-compose.yml:ro
      # - ${PROJECT_ROOT}/core/test/docker-compose.yml:/compose/core/test/docker-compose.yml:ro
    # ports:
    #   - "9105:9105"
    restart: unless-stopped
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true
    networks:
      # - edge
      - monitor
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:9105/metrics')"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
@@ -0,0 +1,47 @@
services:
  docker-socket-proxy:
    profiles: ["monitoring","all","prometheus","prometheus-exporters"]
    image: tecnativa/docker-socket-proxy:latest
    container_name: docker-socket-proxy
    hostname: docker-socket-proxy
    restart: unless-stopped
    environment:
      LOG_LEVEL: ${DOCKER_SOCKET_PROXY_LOG_LEVEL}
      DISTRIBUTION: 1
      CONTAINERS: 1
      EVENTS: 1
      IMAGES: 1
      INFO: 1
      NETWORKS: 1
      PING: 1
      POST: 1
      AUTH: 1
      EXEC: 1
      SYSTEM: 1
      SERVICES: 1
      SWARM: 1
      NODES: 1
      SECRETS: 1
      TASKS: 1
      VERSION: 1
      VOLUMES: 1
      ALLOW_START: 1 # for better security, set to 0
      ALLOW_STOP: 1 # for better security, set to 0
      ALLOW_RESTARTS: 1 # for better security, set to 0
      BUILD: 0
      COMMIT: 0
      CONFIGS: 0
      DELETE: 1
      DISABLE_IPV6: 0
      PLUGINS: 0
      SESSION: 0
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true
    networks:
      - monitor
      - traefik
@@ -0,0 +1,42 @@
services:
  influxdb:
    profiles: ["monitoring","all","prometheus"]
    image: influxdb:2.7
    container_name: influxdb
    restart: unless-stopped
    # env_file:
    #   - ${PROJECT_ROOT}/secrets/stack-secrets.env
    volumes:
      - ${PROJECT_ROOT}/monitoring/influxdb:/var/lib/influxdb2
    environment:
      DOCKER_INFLUXDB_INIT_MODE: ${INFLUXDB_INIT_MODE}
      DOCKER_INFLUXDB_INIT_USERNAME: ${INFLUXDB_INIT_USERNAME}
      DOCKER_INFLUXDB_INIT_PASSWORD_FILE: /run/secrets/influxdb_init_password
      DOCKER_INFLUXDB_INIT_ORG: ${INFLUXDB_INIT_ORG}
      DOCKER_INFLUXDB_INIT_BUCKET: ${INFLUXDB_INIT_BUCKET}
    secrets:
      - influxdb_init_password
    networks:
      # - edge
      # - traefik_reverse_proxy
      - traefik
      - monitor
    labels:
      - "traefik.http.routers.influxdb.rule=Host(`influxdb.lan.ddnsgeek.com`)"
      - "traefik.enable=true"
      - "traefik.http.routers.influxdb.entrypoints=websecure"
      - "traefik.http.routers.influxdb.tls.certresolver=myresolver"
      - "io.portainer.accesscontrol.public"
      - "traefik.http.services.influxdb.loadbalancer.server.port=8086"
      - "traefik.http.routers.influxdb.middlewares=authelia"
      - "traefik.docker.network=core_traefik"
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8086/health || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

secrets:
  influxdb_init_password:
    file: ${PROJECT_ROOT}/secrets/influxdb_init_password.txt
@@ -0,0 +1,23 @@
services:
  node-exporter:
    profiles: ["monitoring","all","prometheus-exporters"]
    image: prom/node-exporter:latest
    container_name: node-exporter
    pid: host
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - "--path.procfs=/host/proc"
      - "--path.sysfs=/host/sys"
      - "--path.rootfs=/rootfs"
    restart: unless-stopped
    networks:
      # - edge
      - monitor
    healthcheck:
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:9100/metrics"]
      interval: 30s
      timeout: 10s
      retries: 3
@@ -0,0 +1,17 @@
services:
  pihole-exporter:
    profiles: ["monitoring","all","prometheus-exporters"]
    image: ekofr/pihole-exporter:latest
    container_name: pihole-exporter
    # env_file:
    #   - ${PROJECT_ROOT}/secrets/stack-secrets.env
    environment:
      PIHOLE_HOSTNAME: ${PIHOLE_HOSTNAME}
      PIHOLE_PASSWORD: ${PIHOLE_PASSWORD}
      PORT: ${PIHOLE_EXPORTER_PORT}
    ports:
      - "${PIHOLE_EXPORTER_PORT}:${PIHOLE_EXPORTER_PORT}"
    restart: unless-stopped
    networks:
      # - edge
      - monitor
-253
@@ -1,53 +1,4 @@
-#version: "3.8"
 services:
-  docker-socket-proxy:
-    profiles: ["monitoring","all","prometheus","prometheus-exporters"]
-    image: tecnativa/docker-socket-proxy:latest
-    container_name: docker-socket-proxy
-    hostname: docker-socket-proxy
-    restart: unless-stopped
-    environment:
-      LOG_LEVEL: ${DOCKER_SOCKET_PROXY_LOG_LEVEL}
-      DISTRIBUTION: 1
-      CONTAINERS: 1
-      EVENTS: 1
-      IMAGES: 1
-      INFO: 1
-      NETWORKS: 1
-      PING: 1
-      POST: 1
-      AUTH: 1
-      EXEC: 1
-      SYSTEM: 1
-      SERVICES: 1
-      SWARM: 1
-      NODES: 1
-      SECRETS: 1
-      TASKS: 1
-      VERSION: 1
-      VOLUMES: 1
-      ALLOW_START: 1 # for better security, set to 0
-      ALLOW_STOP: 1 # for better security, set to 0
-      ALLOW_RESTARTS: 1 # for better security, set to 0
-      BUILD: 0
-      COMMIT: 0
-      CONFIGS: 0
-      DELETE: 1
-      DISABLE_IPV6: 0
-      PLUGINS: 0
-      SESSION: 0
-    volumes:
-      - /var/run/docker.sock:/var/run/docker.sock:ro
-    cap_drop:
-      - ALL
-    security_opt:
-      - no-new-privileges:true
-    networks:
-      - monitor
-      - traefik
   prometheus:
     profiles: ["monitoring","all","prometheus"]
     image: prom/prometheus:latest
@@ -94,207 +45,3 @@ services:
       timeout: 10s
       retries: 3
       start_period: 30s
-  # alertmanager:
-  #   image: prom/alertmanager:latest
-  #   container_name: alertmanager
-  #   command:
-  #     - "--config.file=/etc/alertmanager/alertmanager.yml"
-  #   volumes:
-  #     - ./alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml:ro
-  #   restart: unless-stopped
-  #   secrets:
-  #     - edge
-  #     - traefik_reverse_proxy
-  #   healthcheck:
-  #     test: ["CMD", "wget", "--spider", "-q", "http://localhost:9093/-/healthy"]
-  #     interval: 30s
-  #     timeout: 10s
-  #     retries: 3
-  #     start_period: 20s
-  #   labels:
-  #     - "traefik.http.routers.alertmanager.rule=Host(`alertmanager.lan.ddnsgeek.com`)"
-  #     - "traefik.enable=true"
-  #     - "traefik.http.routers.alertmanager.entrypoints=websecure"
-  #     - "traefik.http.routers.alertmanager.tls.certresolver=myresolver"
-  #     - "io.portainer.accesscontrol.public"
-  #     - "traefik.http.services.alertmanager.loadbalancer.server.port=9093"
-  #     - "traefik.http.routers.alertmanager.middlewares=authelia"
-  #     - "traefik.docker.network=traefik_reverse_proxy"
-  node-exporter:
-    profiles: ["monitoring","all","prometheus-exporters"]
-    image: prom/node-exporter:latest
-    container_name: node-exporter
-    pid: host
-    volumes:
-      - /proc:/host/proc:ro
-      - /sys:/host/sys:ro
-      - /:/rootfs:ro
-    command:
-      - "--path.procfs=/host/proc"
-      - "--path.sysfs=/host/sys"
-      - "--path.rootfs=/rootfs"
-    restart: unless-stopped
-    networks:
-      # - edge
-      - monitor
-    healthcheck:
-      test: ["CMD", "wget", "--spider", "-q", "http://localhost:9100/metrics"]
-      interval: 30s
-      timeout: 10s
-      retries: 3
-  influxdb:
-    profiles: ["monitoring","all","prometheus"]
-    image: influxdb:2.7
-    container_name: influxdb
-    restart: unless-stopped
-    # env_file:
-    #   - ${PROJECT_ROOT}/secrets/stack-secrets.env
-    volumes:
-      - ${PROJECT_ROOT}/monitoring/influxdb:/var/lib/influxdb2
-    environment:
-      DOCKER_INFLUXDB_INIT_MODE: ${INFLUXDB_INIT_MODE}
-      DOCKER_INFLUXDB_INIT_USERNAME: ${INFLUXDB_INIT_USERNAME}
-      DOCKER_INFLUXDB_INIT_PASSWORD_FILE: /run/secrets/influxdb_init_password
-      DOCKER_INFLUXDB_INIT_ORG: ${INFLUXDB_INIT_ORG}
-      DOCKER_INFLUXDB_INIT_BUCKET: ${INFLUXDB_INIT_BUCKET}
-    secrets:
-      - influxdb_init_password
-    networks:
-      # - edge
-      # - traefik_reverse_proxy
-      - traefik
-      - monitor
-    labels:
-      - "traefik.http.routers.influxdb.rule=Host(`influxdb.lan.ddnsgeek.com`)"
-      - "traefik.enable=true"
-      - "traefik.http.routers.influxdb.entrypoints=websecure"
-      - "traefik.http.routers.influxdb.tls.certresolver=myresolver"
-      - "io.portainer.accesscontrol.public"
-      - "traefik.http.services.influxdb.loadbalancer.server.port=8086"
-      - "traefik.http.routers.influxdb.middlewares=authelia"
-      - "traefik.docker.network=core_traefik"
-    healthcheck:
-      test: ["CMD-SHELL", "curl -f http://localhost:8086/health || exit 1"]
-      interval: 30s
-      timeout: 5s
-      retries: 3
-      start_period: 10s
-  telegraf:
-    profiles: ["monitoring","all","prometheus"]
-    image: telegraf:latest
-    container_name: telegraf
-    restart: unless-stopped
-    depends_on:
-      - docker-socket-proxy
-    # cap_drop:
-    #   - ALL
-    security_opt:
-      - no-new-privileges:true
-    volumes:
-      - ${PROJECT_ROOT}/monitoring/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro
-      - ${PROJECT_ROOT}/monitoring/node-red/data:/var/log/node-red:ro
-    networks:
-      # - edge
-      - monitor
-    healthcheck:
-      test: ["CMD-SHELL", "curl -f http://localhost:9273/metrics || exit 1"]
-      interval: 30s
-      timeout: 5s
-      retries: 3
-      start_period: 10s
-  docker-update-exporter:
-    profiles: ["monitoring","all","prometheus-exporters"]
-    build:
-      context: ${PROJECT_ROOT}/monitoring/docker-exporter
-    container_name: docker-update-exporter
-    # volumes:
-    #   - /var/run/docker.sock:/var/run/docker.sock
-    #   - ${PROJECT_ROOT}/monitoring/docker-exporter/data:/data:rw
-    #   - ${PROJECT_ROOT}/services-up.sh:/app/services-up.sh:ro
-    environment:
-      LOG_LEVEL: ${DOCKER_EXPORTER_LOG_LEVEL}
-      DOCKER_HOST: ${DOCKER_SOCKET_PROXY_HOST}
-    depends_on:
-      - docker-socket-proxy
-    volumes:
-      - ~/.docker/config.json:/root/.docker/config.json:ro
-      - ${PROJECT_ROOT}/monitoring/docker-exporter/data:/data:rw
-      - ${PROJECT_ROOT}:/compose:ro
-      # - ${PROJECT_ROOT}/default-environment.env:/compose/default-environment.env:ro
-      # - ${PROJECT_ROOT}/default-network.yml:/compose/default-network.yml:ro
-      # - ${PROJECT_ROOT}/core/docker-compose.yml:/compose/core/docker-compose.yml:ro
-      # - ${PROJECT_ROOT}/monitoring/prometheus/docker-compose.yml:/compose/monitoring/prometheus/docker-compose.yml:ro
-      # - ${PROJECT_ROOT}/monitoring/gotify/docker-compose.yml:/compose/monitoring/gotify/docker-compose.yml:ro
-      # - ${PROJECT_ROOT}/monitoring/grafana/docker-compose.yml:/compose/monitoring/grafana/docker-compose.yml:ro
-      # - ${PROJECT_ROOT}/monitoring/portainer/docker-compose.yml:/compose/monitoring/portainer/docker-compose.yml:ro
-      # - ${PROJECT_ROOT}/monitoring/uptime-kuma/docker-compose.yml:/compose/monitoring/uptime-kuma/docker-compose.yml:>
-      # - ${PROJECT_ROOT}/apps/gitea/docker-compose.yml:/compose/apps/gitea/docker-compose.yml:ro
-      # - ${PROJECT_ROOT}/apps/gramps/docker-compose.yml:/compose/apps/gramps/docker-compose.yml:ro
-      # - ${PROJECT_ROOT}/apps/nextcloud/docker-compose.yml:/compose/apps/nextcloud/docker-compose.yml:ro
-      # - ${PROJECT_ROOT}/apps/passbolt/docker-compose.yml:/compose/apps/passbolt/docker-compose.yml:ro
-      # - ${PROJECT_ROOT}/apps/searxng/docker-compose.yml:/compose/apps/searxng/docker-compose.yml:ro
-      # - ${PROJECT_ROOT}/apps/shift-recorder/docker-compose.yml:/compose/apps/shift-recorder/docker-compose.yml:ro
-      # - ${PROJECT_ROOT}/apps/stockfill/docker-compose.yml:/compose/apps/stockfill/docker-compose.yml:ro
-      # - ${PROJECT_ROOT}/monitoring/node-red/docker-compose.yml:/compose/monitoring/node-red/docker-compose.yml:ro
-      # - ${PROJECT_ROOT}/core/test/docker-compose.yml:/compose/core/test/docker-compose.yml:ro
-    # ports:
-    #   - "9105:9105"
-    restart: unless-stopped
-    cap_drop:
-      - ALL
-    security_opt:
-      - no-new-privileges:true
-    networks:
-      # - edge
-      - monitor
-    healthcheck:
-      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:9105/metrics')"]
-      interval: 30s
-      timeout: 5s
-      retries: 3
-      start_period: 10s
-  pihole-exporter:
-    profiles: ["monitoring","all","prometheus-exporters"]
-    image: ekofr/pihole-exporter:latest
-    container_name: pihole-exporter
-    # env_file:
-    #   - ${PROJECT_ROOT}/secrets/stack-secrets.env
-    environment:
-      PIHOLE_HOSTNAME: ${PIHOLE_HOSTNAME}
-      PIHOLE_PASSWORD: ${PIHOLE_PASSWORD}
-      PORT: ${PIHOLE_EXPORTER_PORT}
-    ports:
-      - "${PIHOLE_EXPORTER_PORT}:${PIHOLE_EXPORTER_PORT}"
-    restart: unless-stopped
-    networks:
-      # - edge
-      - monitor
-#networks:
-#  internal:
-#    internal: true
-#  edge:
-#    internal: false
-#  traefik_reverse_proxy:
-#    external: true
-secrets:
-  influxdb_init_password:
-    file: ${PROJECT_ROOT}/secrets/influxdb_init_password.txt
+24
@@ -0,0 +1,24 @@
services:
  telegraf:
    profiles: ["monitoring","all","prometheus"]
    image: telegraf:latest
    container_name: telegraf
    restart: unless-stopped
    depends_on:
      - docker-socket-proxy
    # cap_drop:
    #   - ALL
    security_opt:
      - no-new-privileges:true
    volumes:
      - ${PROJECT_ROOT}/monitoring/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro
      - ${PROJECT_ROOT}/monitoring/node-red/data:/var/log/node-red:ro
    networks:
      # - edge
      - monitor
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9273/metrics || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
+15 -23
@@ -1,35 +1,27 @@
# Copy to secrets/stack-secrets.env and set real values.
# Do NOT commit secrets/stack-secrets.env.
NEXTCLOUD_DB_NAME=nextcloud
NEXTCLOUD_DB_USER=nextcloud
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_SMTP_FROM_ADDRESS=mailuser
NEXTCLOUD_SMTP_DOMAIN=example.com
NEXTCLOUD_SMTP_NAME=mailuser@example.com
NEXTCLOUD_DB_USER=CHANGE_ME
NEXTCLOUD_ADMIN_USER=CHANGE_ME
NEXTCLOUD_SMTP_FROM_ADDRESS=CHANGE_ME
NEXTCLOUD_SMTP_DOMAIN=CHANGE_ME
NEXTCLOUD_SMTP_NAME=${NEXTCLOUD_SMTP_FROM_ADDRESS}@${NEXTCLOUD_SMTP_DOMAIN}
PASSBOLT_DB_NAME=passbolt
PASSBOLT_DB_USER=passbolt
GRAMPS_DB_NAME=gramps
GRAMPS_DB_USER=gramps
PASSBOLT_DB_NAME=CHANGE_ME
PASSBOLT_DB_USER=CHANGE_ME
PASSBOLT_GPG_SERVER_KEY_FINGERPRINT=CHANGE_ME
GRAMPS_DB_NAME=CHANGE_ME
GRAMPS_DB_USER=CHANGE_ME
GRAMPS_DB_PASSWORD=CHANGE_ME
GRAMPS_INITIAL_ADMIN=admin
GRAMPS_INITIAL_ADMIN=CHANGE_ME
GRAMPS_INITIAL_ADMIN_PASSWORD=CHANGE_ME
GRAMPS_DB_URI=postgresql://${GRAMPS_DB_USER}:${GRAMPS_DB_PASSWORD}@db:5432/${GRAMPS_DB_NAME}
GOTIFY_DEFAULTUSER_NAME=admin
GOTIFY_DEFAULTUSER_NAME=CHANGE_ME
GOTIFY_DEFAULTUSER_PASS=CHANGE_ME
INFLUXDB_INIT_USERNAME=admin
INFLUXDB_INIT_ORG=homelab
INFLUXDB_INIT_BUCKET=telemetry
INFLUXDB_INIT_USERNAME=CHANGE_ME
PIHOLE_HOSTNAME=pihole.example.com
PIHOLE_PASSWORD=CHANGE_ME
PROMETHEUS_KUMA_BASIC_AUTH_USERNAME=monitoring@example.com
AUTHELIA_JWT_SECRET=CHANGE_ME
AUTHELIA_SESSION_SECRET=CHANGE_ME
AUTHELIA_STORAGE_ENCRYPTION_KEY=CHANGE_ME
CROWDSEC_LAPI_KEY=CHANGE_ME
+1
@@ -16,6 +16,7 @@ while IFS= read -r file; do
   FILES+=(-f "$file")
 done < <(
   find "$PROJECT_ROOT/apps" "$PROJECT_ROOT/monitoring" "$PROJECT_ROOT/core" \
+    -maxdepth 2 \
    -type f \
     \( -name 'docker-compose.yml' -o -name 'docker-compose.yaml' \) \
     2>/dev/null \
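The added `-maxdepth 2` keeps the discovery to `<section>/<service>/docker-compose.yml` and stops `find` from descending into data directories. A runnable sketch of the same loop against a throwaway directory tree (all paths here are synthetic, created just for the demo):

```shell
#!/usr/bin/env bash
# Demo of the compose-file discovery loop using a temporary tree.
set -euo pipefail

PROJECT_ROOT="$(mktemp -d)"
mkdir -p "$PROJECT_ROOT/apps/nextcloud" "$PROJECT_ROOT/core" "$PROJECT_ROOT/monitoring/gotify"
touch "$PROJECT_ROOT/apps/nextcloud/docker-compose.yml" \
      "$PROJECT_ROOT/core/docker-compose.yml" \
      "$PROJECT_ROOT/monitoring/gotify/docker-compose.yaml"

FILES=()
while IFS= read -r file; do
  FILES+=(-f "$file")          # accumulate "-f <path>" pairs for docker compose
done < <(
  find "$PROJECT_ROOT/apps" "$PROJECT_ROOT/monitoring" "$PROJECT_ROOT/core" \
    -maxdepth 2 \
    -type f \
    \( -name 'docker-compose.yml' -o -name 'docker-compose.yaml' \) \
    2>/dev/null
)

# Three files found -> six array elements (-f plus path for each).
echo "${#FILES[@]}"
```

The `FILES` array would then typically be handed to a single combined invocation such as `docker compose "${FILES[@]}" up -d` (an assumption about how the rest of the script uses it).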