# Architecture Summary

## Overview
This stack uses Traefik v3 as the internet-facing ingress for application and operations UIs. Service routing is primarily label-driven from Docker Compose files, with a shared traefik bridge network for reverse-proxied traffic and a monitor network for internal telemetry components.
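The label-driven pattern can be sketched as follows; the service name, hostname, router name, and port below are illustrative assumptions, not values taken from the repository:

```yaml
# Hypothetical Compose service showing label-driven Traefik routing
# over the shared external "traefik" network.
services:
  nextcloud:
    image: nextcloud:stable
    networks:
      - traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nextcloud.rule=Host(`cloud.example.com`)"
      - "traefik.http.routers.nextcloud.entrypoints=websecure"
      - "traefik.http.routers.nextcloud.tls.certresolver=myresolver"
      - "traefik.http.services.nextcloud.loadbalancer.server.port=80"

networks:
  traefik:
    external: true   # shared bridge network created outside this file
```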
TLS is terminated at Traefik using the ACME HTTP challenge (`myresolver`), with additional hardening via:
- a default middleware chain (security headers, CrowdSec bouncer, error pages),
- Authelia forward-auth middleware on selected routes,
- an mTLS-enforcing TLS option (`mtls-private-admin`) on private-admin endpoints.
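A hedged sketch of how this hardening could look in a Traefik dynamic configuration (file provider). Apart from `myresolver` and `mtls-private-admin`, the middleware names, Authelia address, and CA path below are assumptions for illustration:

```yaml
# Hypothetical Traefik dynamic configuration sketching the layers above.
http:
  middlewares:
    default-chain:
      chain:
        middlewares:
          - security-headers
          - crowdsec-bouncer   # provided by the CrowdSec bouncer plugin,
                               # configured separately
          - error-pages
    security-headers:
      headers:
        stsSeconds: 31536000
        contentTypeNosniff: true
        browserXssFilter: true
    authelia:
      forwardAuth:
        address: "http://authelia:9091/api/verify?rd=https://auth.example.com/"
        authResponseHeaders:
          - Remote-User
          - Remote-Groups
          - Remote-Email
          - Remote-Name

tls:
  options:
    mtls-private-admin:
      clientAuth:
        caFiles:
          - /certs/private-admin-ca.pem
        clientAuthType: RequireAndVerifyClientCert
```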
## Network / Request Flow
```mermaid
flowchart LR
    C[Internet Client] -->|80/443| T[Traefik Ingress]
    T -->|HTTP->HTTPS redirect| T
    T -->|ACME HTTP challenge| LE[Let's Encrypt ACME]
    subgraph TraefikNet["Docker network: traefik (172.21.0.0/16)"]
        A[Authelia]
        CS[CrowdSec LAPI]
        EP[Error Pages]
        NC[Nextcloud]
        PB[Passbolt]
        GT[Gitea]
        GW[Gramps Web]
        SX[SearXNG]
        GF[Grafana]
        PR[Prometheus]
        NR[Node-RED]
        PT[Portainer]
        UK[Uptime Kuma]
        IF[InfluxDB]
        GO[Gotify]
    end
    T -->|forwardAuth for selected services| A
    T -->|plugin decisions| CS
    T -->|4xx/5xx middleware| EP
    T --> NC
    T --> PB
    T --> GT
    T --> GW
    T --> SX
    T --> GF
    T --> PR
    T --> NR
    T --> PT
    T --> UK
    T --> IF
    T --> GO
    subgraph MonitorNet["Docker network: monitor"]
        NE[Node Exporter]
        TE[Telegraf]
        DE[Docker Update Exporter]
        PE[Pi-hole Exporter]
        DSP[Docker Socket Proxy]
    end
    PR --> NE
    PR --> TE
    PR --> DE
    PR --> PE
    PR --> UK
    PR -->|remote scrape| RH[Remote Hosts]
    TE --> DSP
    NR --> DSP
    PT --> DSP
    T --> DSP
```
## Key Components
- Ingress & security plane: Traefik, Authelia, CrowdSec, Error Pages.
- User-facing applications: Nextcloud, Passbolt, Gitea, Gramps Web (Family Tree), SearXNG.
- Monitoring/ops: Prometheus, Grafana, InfluxDB, Node-RED, Uptime Kuma, Portainer, Gotify.
- Support plane: Docker Socket Proxy (shared Docker API gateway for Traefik/automation/ops tools).
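The support plane can be sketched as below; the image and permission flags follow the common `tecnativa/docker-socket-proxy` pattern and are assumptions here, not repository values:

```yaml
# Hypothetical docker-socket-proxy service acting as a restricted
# Docker API gateway on the monitor network.
services:
  docker-socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1   # allow read-only container listing (e.g., Traefik discovery)
      NETWORKS: 1
      POST: 0         # deny mutating Docker API calls
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - monitor

networks:
  monitor:
    external: true
```

Consumers such as Traefik then point at `tcp://docker-socket-proxy:2375` instead of mounting the Docker socket directly, narrowing the blast radius of a compromised container.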
## Remote Hosts Observed
Prometheus scrape targets indicate additional infrastructure outside the local Compose deployment, including hostnames for:
- `raspberrypi.tail13f623.ts.net`
- `pve.sweet.home`
- `pbs.sweet.home`
- `pihole`
- `server`
- `nix-cache`
- `kuma.lan.ddnsgeek.com`
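As a hedged illustration of how these hosts could appear in Prometheus configuration, the fragment below uses assumed ports and job assignments; only the hostnames come from the observed targets:

```yaml
# Hypothetical prometheus.yml fragment for remote scrape targets;
# every port and job mapping here is an assumption.
scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - raspberrypi.tail13f623.ts.net:9100  # reached over Tailscale
          - pihole:9100
  - job_name: proxmox-storage
    static_configs:
      - targets:
          - pve.sweet.home:9221
          - pbs.sweet.home:9221
```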
## Runtime Inventory Input
Prometheus runtime inventory snapshots are exported with `scripts/export_prometheus_inventory.py` and committed under `docs/runtime/`. The latest human-readable summary is in `docs/prometheus-inventory.md`.
These artifacts are an observed-runtime input for architecture diagrams and docs; they should be combined with repository configuration, not treated as the sole source of truth.
## Assumptions / Unknowns
The repository provides enough detail to infer container-level architecture, but not full Proxmox host/VM topology.
Unknowns (left intentionally as placeholders):
- Proxmox physical hosts: unknown from repo contents.
- VM/LXC inventory and placement: unknown from repo contents.
- Which services run on which Proxmox node(s): unknown from repo contents.
- Inter-host VLAN/subnet layout beyond Docker bridges: unknown from repo contents.
This section can be replaced with a concrete Proxmox topology once an inventory source is added (e.g., Terraform, an Ansible inventory, or a diagram export).
## Runtime visibility from Prometheus
Prometheus inventory provides observed runtime coverage of scrape targets. It complements (but does not replace) declared architecture in Compose files and static docs.
- Inventory timestamp: `2026-04-13T06:36:45Z`
- Observed jobs: 8
- Observed instances: 19
- Observed services (label-derived): 1
### Observed monitoring view
| job | targets | unhealthy |
|---|---|---|
| container-updates | 2 | 0 |
| kuma | 2 | 0 |
| node | 7 | 0 |
| pihole | 1 | 0 |
| prometheus | 1 | 0 |
| proxmox-storage | 2 | 0 |
| telegraf | 2 | 0 |
| traefik | 2 | 0 |
### Data sources
- `docs/runtime/prometheus-inventory.json` (normalized runtime export)
- Prometheus scrape metadata (`targets` + label sets)
- Existing repository architecture docs for declared topology
### Notes from inventory
- The `up` query indicates scrape success from the Prometheus perspective only.
- Use static repository architecture docs and deployment configs together with this runtime export for complete diagrams.
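The per-job counts in the table above can be derived from `up`. A hedged sketch of Prometheus recording rules that would precompute them (group and rule names are assumptions):

```yaml
# Hypothetical Prometheus rule file reproducing the per-job target counts.
groups:
  - name: inventory
    rules:
      - record: job:up:targets
        expr: count by (job) (up)
      - record: job:up:unhealthy
        # Jobs with no unhealthy targets produce no series here
        # (absent, rather than 0).
        expr: count by (job) (up == 0)
```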