docs: overhaul repo documentation and workflow guides

beatz174-bit
2026-04-21 09:28:55 +10:00
parent 020d6ecb79
commit c7dd9f2229
13 changed files with 539 additions and 174 deletions
+15 -47
@@ -2,13 +2,9 @@
## Overview
-This stack uses **Traefik v3** as the internet-facing ingress for application and operations UIs. Service routing is primarily label-driven from Docker Compose files, with a shared `traefik` bridge network for reverse-proxied traffic and a `monitor` network for internal telemetry components.
+This stack uses **Traefik v3** as internet-facing ingress for application and operations UIs. Service routing is label-driven from Docker Compose files, with shared Docker networks (`traefik`, `monitor`) connecting reverse-proxied and telemetry services.
-TLS is terminated at Traefik using ACME HTTP challenge (`myresolver`), with additional hardening via:
-- a default middleware chain (security headers, CrowdSec bouncer, error pages),
-- Authelia forward-auth middleware on selected routes,
-- mTLS TLS options (`mtls-private-admin`) on private-admin endpoints.
+TLS is terminated at Traefik (ACME HTTP challenge), with hardening via middleware chains, Authelia forward-auth for selected routes, CrowdSec integration, and mTLS options for private-admin paths.
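For reference, the label-driven routing described above typically looks like the following Compose sketch. The resolver name `myresolver` and TLS option name `mtls-private-admin` come from this doc; the service, router, and middleware names are illustrative placeholders, not taken from the repo.

```yaml
# Sketch only: router/middleware names other than `myresolver` and
# `mtls-private-admin` (both named in this doc) are assumptions.
services:
  example-app:
    image: example/app:latest        # placeholder image
    networks:
      - traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.example.rule=Host(`app.example.com`)"
      - "traefik.http.routers.example.entrypoints=websecure"
      - "traefik.http.routers.example.tls.certresolver=myresolver"
      # Authelia forward-auth middleware on selected routes:
      - "traefik.http.routers.example.middlewares=authelia@docker"
      # private-admin endpoints additionally select the mTLS TLS options:
      - "traefik.http.routers.example.tls.options=mtls-private-admin@file"

networks:
  traefik:
    external: true
```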
## Network / Request Flow
@@ -18,7 +14,7 @@ flowchart LR
T -->|HTTP->HTTPS redirect| T
T -->|ACME HTTP challenge| LE[Let's Encrypt ACME]
-subgraph TraefikNet["Docker network: traefik (172.21.0.0/16)"]
+subgraph TraefikNet[Docker network: traefik]
A[Authelia]
CS[CrowdSec LAPI]
EP[Error Pages]
@@ -76,51 +72,23 @@ flowchart LR
T --> DSP
```
-## Key Components
+## Key components
-- **Ingress & security plane:** Traefik, Authelia, CrowdSec, Error Pages.
-- **User-facing applications:** Nextcloud, Passbolt, Gitea, Gramps Web (Family Tree), SearXNG.
+- **Ingress/security plane:** Traefik, Authelia, CrowdSec, Error Pages.
+- **User-facing apps:** Nextcloud, Passbolt, Gitea, Gramps Web, SearXNG.
- **Monitoring/ops:** Prometheus, Grafana, InfluxDB, Node-RED, Uptime Kuma, Portainer, Gotify.
-- **Support plane:** Docker Socket Proxy (shared Docker API gateway for Traefik/automation/ops tools).
+- **Support plane:** Docker Socket Proxy for controlled Docker API access.
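The socket-proxy pattern in the support plane can be sketched as below. The `tecnativa/docker-socket-proxy` image and its permission env vars are a common implementation assumed here, not confirmed by this diff; Traefik then talks to the Docker API over TCP instead of mounting the socket directly.

```yaml
# Sketch, assuming the tecnativa/docker-socket-proxy image.
services:
  docker-socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: "1"   # permit read-only container listing (what Traefik needs)
      POST: "0"         # deny mutating Docker API calls
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik

  traefik:
    image: traefik:v3
    command:
      # point the Docker provider at the proxy rather than the raw socket
      - "--providers.docker.endpoint=tcp://docker-socket-proxy:2375"
    networks:
      - traefik

networks:
  traefik:
    external: true
```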
-## Remote Hosts Observed
+## Relationship to Terraform inventory
-Prometheus scrape targets indicate additional infrastructure outside the local Compose deployment, including hostnames for:
+Terraform in `infrastructure/terraform/` captures infrastructure inventory and reconciliation state for Proxmox VMs, physical host metadata, and selected Docker mirrors.
-- `raspberrypi.tail13f623.ts.net`
-- `pve.sweet.home`
-- `pbs.sweet.home`
-- `pihole`
-- `server`
-- `nix-cache`
-- `kuma.lan.ddnsgeek.com`
+Use architecture docs together with:
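The remote hostnames listed in the removed section would typically appear in Prometheus as static scrape targets, roughly as sketched below. The hostnames are from this doc; the job name and exporter port are illustrative assumptions, not taken from the repo's actual `prometheus.yml`.

```yaml
# Illustrative shape only: job name and port 9100 are assumptions.
scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - raspberrypi.tail13f623.ts.net:9100
          - pve.sweet.home:9100
          - pbs.sweet.home:9100
          - pihole:9100
```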
-## Runtime Inventory Input
+- [docs/source-of-truth.md](source-of-truth.md)
+- [docs/terraform-workflows.md](terraform-workflows.md)
+- [docs/infrastructure-inventory.md](infrastructure-inventory.md)
-Prometheus runtime inventory snapshots are exported with `scripts/export_prometheus_inventory.py` and committed under `docs/runtime/`. The latest human-readable summary is in [docs/prometheus-inventory.md](prometheus-inventory.md).
+## Notes on runtime vs declared state
-These artifacts are an observed-runtime input for architecture diagrams/docs and should be combined with repository configuration, not treated as sole source of truth.
-## Assumptions / Unknowns
-The repository provides enough detail to infer **container-level architecture**, but not full **Proxmox host/VM topology**.
-Unknowns (left intentionally as placeholders):
-- **Proxmox physical hosts:** _unknown from repo contents._
-- **VM/LXC inventory and placement:** _unknown from repo contents._
-- **Which services run on which Proxmox node(s):** _unknown from repo contents._
-- **Inter-host VLAN/subnet layout beyond Docker bridges:** _unknown from repo contents._
-If you want, this section can be replaced with a concrete Proxmox topology once you add an inventory source (e.g., Terraform, Ansible inventory, or a diagram export).
-### Data sources
-- Existing repository architecture docs for declared topology
-### Notes from inventory
-- The `up` query indicates scrape success from Prometheus perspective only.
-- Use static repository architecture docs and deployment configs with this runtime export for complete diagrams.
-<!-- END GENERATED PROMETHEUS SECTION -->
+Runtime scrape targets and health signals are useful observed-state inputs, but they do not replace declared config authority from Compose/Terraform sources.
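The old text's caveat that `up` only reflects scrape success from Prometheus's perspective is worth keeping operational: an alerting rule on `up` flags unreachable targets but says nothing about the service behind them. A minimal rules-file sketch (group, alert, and threshold names are illustrative assumptions):

```yaml
# Sketch of an alerting rule on the `up` metric mentioned in this doc;
# the group name, alert name, and 5m window are assumptions.
groups:
  - name: runtime-inventory
    rules:
      - alert: TargetDown
        expr: up == 0
        for: 5m
        annotations:
          summary: "Scrape target {{ $labels.instance }} is down (Prometheus-side view only)"
```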