docker/docs/architecture.md

# Architecture Summary

## Overview

This stack uses Traefik v3 as the internet-facing ingress for application and operations UIs. Service routing is primarily label-driven from Docker Compose files, with a shared `traefik` bridge network for reverse-proxied traffic and a separate `monitor` network for internal telemetry components.
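As an illustration of the label-driven pattern, a minimal Compose service might look like the sketch below; the service name, image, hostname, and entrypoint name are placeholders, not values taken from this repository:

```yaml
# Hypothetical example of label-driven Traefik routing.
# "whoami", "whoami.example.com", and "websecure" are placeholders.
services:
  whoami:
    image: traefik/whoami
    networks:
      - traefik          # shared reverse-proxy network
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=myresolver"

networks:
  traefik:
    external: true       # created outside this Compose file
```

With this pattern, adding or removing a routed service is a Compose-file change only; Traefik picks up the labels via the Docker provider without its own config being edited.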

TLS is terminated at Traefik using the ACME HTTP-01 challenge (certificate resolver `myresolver`), with additional hardening via:

- a default middleware chain (security headers, CrowdSec bouncer, error pages),
- Authelia forward-auth middleware on selected routes,
- mTLS TLS options (`mtls-private-admin`) on private-admin endpoints.
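A sketch of how these pieces could be wired in Traefik's configuration; apart from `myresolver` and `mtls-private-admin`, the entrypoint names, middleware names, email, and file paths are assumptions, not values from the repository:

```yaml
# Static configuration (sketch): HTTPS redirect and ACME HTTP-01 resolver.
entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"

certificatesResolvers:
  myresolver:
    acme:
      email: admin@example.com        # placeholder address
      storage: /letsencrypt/acme.json # placeholder path
      httpChallenge:
        entryPoint: web

---
# Dynamic configuration (sketch): default middleware chain and mTLS option.
http:
  middlewares:
    default-chain:
      chain:
        middlewares:
          - security-headers          # placeholder middleware names
          - crowdsec-bouncer
          - error-pages

tls:
  options:
    mtls-private-admin:
      clientAuth:
        caFiles:
          - /certs/internal-ca.pem    # placeholder CA path
        clientAuthType: RequireAndVerifyClientCert
```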

## Network / Request Flow

```mermaid
flowchart LR
  C[Internet Client] -->|80/443| T[Traefik Ingress]
  T -->|HTTP to HTTPS redirect| T
  T -->|ACME HTTP challenge| LE["Let's Encrypt ACME"]

  subgraph TraefikNet["Docker network: traefik (172.21.0.0/16)"]
    A[Authelia]
    CS[CrowdSec LAPI]
    EP[Error Pages]

    NC[Nextcloud]
    PB[Passbolt]
    GT[Gitea]
    GW[Gramps Web]
    SX[SearXNG]

    GF[Grafana]
    PR[Prometheus]
    NR[Node-RED]
    PT[Portainer]
    UK[Uptime Kuma]
    IF[InfluxDB]
    GO[Gotify]
  end

  T -->|forwardAuth for selected services| A
  T -->|plugin decisions| CS
  T -->|4xx/5xx middleware| EP

  T --> NC
  T --> PB
  T --> GT
  T --> GW
  T --> SX

  T --> GF
  T --> PR
  T --> NR
  T --> PT
  T --> UK
  T --> IF
  T --> GO

  subgraph MonitorNet["Docker network: monitor"]
    NE[Node Exporter]
    TE[Telegraf]
    DE[Docker Update Exporter]
    PE[Pi-hole Exporter]
    DSP[Docker Socket Proxy]
  end

  PR --> NE
  PR --> TE
  PR --> DE
  PR --> PE
  PR --> UK
  PR -->|remote scrape| RH[Remote Hosts]
  TE --> DSP
  NR --> DSP
  PT --> DSP
  T --> DSP
```

## Key Components

- Ingress & security plane: Traefik, Authelia, CrowdSec, Error Pages.
- User-facing applications: Nextcloud, Passbolt, Gitea, Gramps Web (Family Tree), SearXNG.
- Monitoring/ops: Prometheus, Grafana, InfluxDB, Node-RED, Uptime Kuma, Portainer, Gotify.
- Support plane: Docker Socket Proxy (shared Docker API gateway for Traefik/automation/ops tools).
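The Docker Socket Proxy pattern can be sketched as follows; the image and the specific permission variables are illustrative (they follow the common `tecnativa/docker-socket-proxy` conventions) and may not match this repository's actual settings:

```yaml
# Sketch: a restricted gateway in front of the Docker API, so that
# Traefik/Telegraf/Node-RED/Portainer never mount the socket directly.
services:
  docker-socket-proxy:
    image: tecnativa/docker-socket-proxy   # assumed image
    networks:
      - monitor
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      CONTAINERS: 1   # permit read-only container listing (e.g. for Traefik)
      POST: 0         # deny mutating API calls
```

Consumers then talk to `tcp://docker-socket-proxy:2375` instead of the raw socket, limiting the blast radius of a compromised ops tool.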

## Remote Hosts Observed

Prometheus scrape targets indicate additional infrastructure outside the local Compose deployment, including the following hostnames:

- `raspberrypi.tail13f623.ts.net`
- `pve.sweet.home`
- `pbs.sweet.home`
- `pihole`
- `server`
- `nix-cache`
- `kuma.lan.ddnsgeek.com`
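A hedged sketch of what the corresponding Prometheus scrape configuration might look like; the job name and ports are assumptions (9100 is merely the conventional node-exporter port), not values from the repository:

```yaml
# Sketch of remote-host scraping; job name and ports are assumed.
scrape_configs:
  - job_name: "remote-nodes"
    static_configs:
      - targets:
          - "raspberrypi.tail13f623.ts.net:9100"
          - "pve.sweet.home:9100"
          - "pbs.sweet.home:9100"
```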

## Runtime Inventory Input

Prometheus runtime inventory snapshots are exported with `scripts/export_prometheus_inventory.py` and committed under `docs/runtime/`. The latest human-readable summary is in `docs/prometheus-inventory.md`.

These artifacts are an observed-runtime input for architecture diagrams and docs; combine them with the repository configuration rather than treating them as the sole source of truth.

## Assumptions / Unknowns

The repository provides enough detail to infer container-level architecture, but not full Proxmox host/VM topology.

Unknowns (left intentionally as placeholders):

- Proxmox physical hosts: unknown from repo contents.
- VM/LXC inventory and placement: unknown from repo contents.
- Which services run on which Proxmox node(s): unknown from repo contents.
- Inter-host VLAN/subnet layout beyond Docker bridges: unknown from repo contents.

If you want, this section can be replaced with a concrete Proxmox topology once you add an inventory source (e.g., Terraform, Ansible inventory, or a diagram export).