Compare commits

...

15 Commits

Author SHA1 Message Date
beatz174-bit bd23339edd Merge pull request #56 from beatz174-bit/codex/add-gitea-actions-runner-with-socket-proxy
Add Gitea Actions runner and configuration (compose, env, README)
2026-05-12 14:04:22 +10:00
beatz174-bit f8091a5c76 Use DOCKER_SOCKET_PROXY_HOST for gitea-runner Docker host 2026-05-12 14:03:57 +10:00
beatz174-bit 6a2639f931 Merge pull request #55 from beatz174-bit/codex/improve-dynu-dns-inventory-correlation
Harden Dynu/Traefik DNS correlation parsing and validation
2026-04-21 14:11:36 +10:00
beatz174-bit fae5e119d1 Harden Dynu/Traefik DNS correlation and validation 2026-04-21 14:11:25 +10:00
git 872038d0c9 first succesful build of dynu inventory 2026-04-21 13:57:16 +10:00
beatz174-bit 7a6db9fcfd Merge pull request #54 from beatz174-bit/codex/fix-dynu-api-authentication-error
Handle Dynu API auth failures without Python traceback
2026-04-21 13:52:16 +10:00
beatz174-bit d6a8979d55 Handle Dynu API auth failures without traceback 2026-04-21 13:52:01 +10:00
beatz174-bit a9a8a708d3 Merge pull request #53 from beatz174-bit/codex/fix-dynu-api-authentication-issue
Auto-load secrets/dynu.env, harden Dynu credential handling, and update docs
2026-04-21 13:38:52 +10:00
beatz174-bit 749c0d500d Improve Dynu env handling and document secrets/dynu.env 2026-04-21 13:38:33 +10:00
beatz174-bit 8f112af65b Merge pull request #52 from beatz174-bit/codex/integrate-dynu-dns-in-read-only-mode
Add read-only Dynu DNS inventory and Traefik correlation scripts
2026-04-21 12:45:50 +10:00
beatz174-bit 580e9b9aed Add strict read-only Dynu DNS inventory integration 2026-04-21 12:31:52 +10:00
beatz174-bit c77db36865 Merge pull request #51 from beatz174-bit/codex/add-basic-ansible-foundation
Add phase-1 Ansible foundation and safe validation hooks
2026-04-21 12:10:07 +10:00
beatz174-bit e11dc22999 Add phase-1 Ansible foundation and validation scaffolding 2026-04-21 12:07:29 +10:00
beatz174-bit 862ddd42f8 Merge pull request #50 from beatz174-bit/codex/update-documentation-for-codex-setup
docs: document Codex setup and maintenance scripts
2026-04-21 11:51:18 +10:00
beatz174-bit d0e7e52150 docs: add codex setup and maintenance script guidance 2026-04-21 11:51:02 +10:00
26 changed files with 1369 additions and 10 deletions
+37 -1
View File
@@ -17,11 +17,19 @@ If you only read one section, read **[Source-of-truth boundaries](docs/source-of
- Docker environment composition and `services-up.sh`: [docs/docker-environment.md](docs/docker-environment.md)
- Terraform workflows (brownfield import/reconciliation): [docs/terraform-workflows.md](docs/terraform-workflows.md)
- Infrastructure inventory intent and outputs: [docs/infrastructure-inventory.md](docs/infrastructure-inventory.md)
- Dynu DNS read-only inventory workflow: [docs/dynu-dns-inventory.md](docs/dynu-dns-inventory.md)
- Ansible bootstrap workflows: [docs/ansible-workflows.md](docs/ansible-workflows.md)
- Deployment prerequisites and secrets setup: [docs/deployment-prerequisites.md](docs/deployment-prerequisites.md)
- Secrets inventory: [docs/security-secrets.md](docs/security-secrets.md)
Terraform subtrees:
Codex helper scripts:
- Initial Codex environment/bootstrap setup: [scripts/codex-setup.sh](scripts/codex-setup.sh)
- Codex environment maintenance/refresh: [scripts/codex-maintenance.sh](scripts/codex-maintenance.sh)
Infrastructure subtrees:
- Ansible foundation docs: [infrastructure/ansible/README.md](infrastructure/ansible/README.md)
- Terraform root docs: [infrastructure/terraform/README.md](infrastructure/terraform/README.md)
- Terraform Docker mirror: [infrastructure/terraform/docker/README.md](infrastructure/terraform/docker/README.md)
- Terraform Proxmox inventory: [infrastructure/terraform/proxmox/README.md](infrastructure/terraform/proxmox/README.md)
@@ -36,6 +44,13 @@ Terraform subtrees:
- `services-up.sh` composes the environment by discovering compose files and applying common env/network inputs.
- For service runtime behavior, start from Compose files and `services-up.sh` (not Terraform).
### Ansible (bootstrap foundation)
- Ansible under `infrastructure/ansible/` is a phase-1 foundation for inventory/configuration scaffolding.
- It supports safe validation (inventory parsing and playbook syntax checks) while hosts/devices are onboarded gradually.
- It does not replace Compose runtime authority or Terraform reconciliation authority at this stage.
### Terraform (inventory and reconciliation authority)
- Terraform under `infrastructure/terraform/` is used to codify and reconcile existing infrastructure.
@@ -103,3 +118,24 @@ flowchart TB
```
For request-flow and network detail, see [docs/architecture.md](docs/architecture.md).
---
## Codex setup and maintenance scripts
The repository includes helper scripts for Codex sessions that need local tooling and safe placeholder secret material for validation-only workflows:
- `scripts/codex-setup.sh`
- Installs baseline CLI dependencies (shell/yaml/terraform/ansible tooling).
- Prepares `secrets/stack-secrets.env` from templates and creates dummy file-based secret placeholders based on `secrets/inventory.json`.
- Installs/refreshes baseline Ansible collections when `infrastructure/ansible/collections/requirements.yml` is present.
- Runs safe Ansible bootstrap checks (version, inventory parse, playbook syntax check) without live connectivity operations.
- Prints installed tool versions for quick verification.
- `scripts/codex-maintenance.sh`
- Refreshes Python-based linting/automation tooling.
- Reconciles placeholder secret files against current `secrets/inventory.json` (creates missing, removes stale).
- Rebuilds `secrets/stack-secrets.env` with dummy values for compose-config validation.
- Refreshes Ansible collections and repeats safe inventory/syntax validation checks.
Both scripts are intended for local validation environments and should not be treated as production provisioning automation.
+55
View File
@@ -0,0 +1,55 @@
# Gitea
## Gitea Actions
Gitea Actions is enabled by setting:
- `GITEA__actions__ENABLED=true`
## Runner service
The repository includes a dedicated Gitea Actions runner service named:
- `gitea-runner`
The runner uses Docker through the existing Docker socket proxy:
- `DOCKER_HOST=tcp://docker-socket-proxy:2375`
The runner intentionally **does not** mount:
- `/var/run/docker.sock`
## Registration token
Generate a runner registration token from the Gitea UI:
- Site Administration → Actions → Runners
- or Repo → Settings → Actions → Runners
Put the token in your env/secrets file:
- `GITEA_RUNNER_REGISTRATION_TOKEN=...`
## Start the runner
- `./services-up.sh --profile gitea up -d gitea-runner`
- or `./services-up.sh --profile all up -d gitea-runner`
## Logs
- `docker logs -f gitea-runner`
## Labels
Common workflow label:
- `runs-on: ubuntu-latest`
This should match the configured labels, for example:
- `GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:20-bookworm,...`
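As an illustration, a minimal workflow targeting these labels might look like the following (the file path and steps are assumptions for demonstration, not files in this repository):

```yaml
# .gitea/workflows/ci.yml (hypothetical example)
name: CI
on: [push]
jobs:
  build:
    # must match a label configured in GITEA_RUNNER_LABELS
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ubuntu-latest maps to a node:20-bookworm image in the example labels above
      - run: node --version
```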
## Security note
The runner can control Docker through `docker-socket-proxy`. This is safer than mounting the raw Docker socket directly, but workflows still have meaningful control over Docker. Only trusted repositories/users should be allowed to run workflows on this runner.
+20
View File
@@ -9,6 +9,7 @@ services:
- USER_GID=${GITEA_USER_GID}
- GITEA__database__DB_TYPE=${GITEA_DB_TYPE}
- GITEA__server__ROOT_URL=${GITEA_ROOT_URL}
- GITEA__actions__ENABLED=true
volumes:
- ${PROJECT_ROOT}/apps/gitea/data:/data
networks:
@@ -31,6 +32,25 @@ services:
retries: 6
start_period: 120s
gitea-runner:
profiles: ["apps","all","gitea","ci"]
container_name: gitea-runner
image: gitea/act_runner:latest
restart: always
depends_on:
- gitea
- docker-socket-proxy
environment:
- GITEA_INSTANCE_URL=${GITEA_ROOT_URL}
- GITEA_RUNNER_REGISTRATION_TOKEN=${GITEA_RUNNER_REGISTRATION_TOKEN}
- GITEA_RUNNER_NAME=${GITEA_RUNNER_NAME}
- GITEA_RUNNER_LABELS=${GITEA_RUNNER_LABELS}
- DOCKER_HOST=${DOCKER_SOCKET_PROXY_HOST}
volumes:
- ${PROJECT_ROOT}/apps/gitea/runner-data:/data
networks:
- traefik
#volumes:
# gitea_data:
+5
View File
@@ -13,6 +13,11 @@ GITEA_USER_UID=1000
GITEA_USER_GID=1000
GITEA_DB_TYPE=sqlite3
GITEA_ROOT_URL=https://gitea.lan.ddnsgeek.com/
# Generate a token in Gitea: Site Administration → Actions → Runners
# or Repo → Settings → Actions → Runners
GITEA_RUNNER_REGISTRATION_TOKEN=
GITEA_RUNNER_NAME=docker-runner-01
GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:20-bookworm,ubuntu-22.04:docker://node:20-bookworm,linux:docker://node:20-bookworm,docker:docker://docker:cli
# Grafana
GRAFANA_ROOT_URL=https://grafana.lan.ddnsgeek.com/
+72
View File
@@ -0,0 +1,72 @@
# Ansible Workflows (Bootstrap / Phase 1)
Ansible is being introduced as a minimal, maintainable foundation for host/device inventory and future configuration workflows.
## Why introduce Ansible now
- The repository already has strong runtime and infrastructure boundaries (Compose + Terraform).
- A small Ansible baseline allows gradual host onboarding without forcing immediate large-scale automation.
- It enables safe validation workflows (`ansible-inventory --list`, playbook syntax checks) before real execution.
## What Ansible is for in this repository (right now)
- YAML inventory structure for hosts/devices to be onboarded over time.
- Group and host variable scaffolding for future incremental adoption.
- Validation-oriented starter playbook and local tooling checks.
## What Ansible is not for yet
- Replacing Docker Compose runtime authority.
- Replacing Terraform inventory/reconciliation authority.
- Becoming the current source of truth for NixOS host management.
- Becoming the current source of truth for all network automation.
## Directory layout
- `infrastructure/ansible/ansible.cfg`
- `infrastructure/ansible/inventory/hosts.yml`
- `infrastructure/ansible/inventory/group_vars/`
- `infrastructure/ansible/inventory/host_vars/`
- `infrastructure/ansible/playbooks/ping.yml`
- `infrastructure/ansible/collections/requirements.yml`
## Add a host (gradual onboarding)
1. Open `infrastructure/ansible/inventory/hosts.yml`.
2. Add the host under an appropriate group (`linux`, `network`, `virtualization`, or `nixos`).
3. Add non-sensitive defaults under group vars only when shared across hosts.
4. Add host-specific values in `inventory/host_vars/<hostname>.yml`.
5. Keep secrets out of committed files.
Example pattern:
```yaml
linux:
hosts:
my-host:
ansible_host: my-host.local
```
## Validation commands
Run from repository root:
```bash
ansible --version
ansible-lint --version
ansible-inventory -i infrastructure/ansible/inventory/hosts.yml --list
ansible-playbook -i infrastructure/ansible/inventory/hosts.yml infrastructure/ansible/playbooks/ping.yml --syntax-check
```
Install/update baseline collections:
```bash
ansible-galaxy collection install -r infrastructure/ansible/collections/requirements.yml -p infrastructure/ansible/collections
```
## Guardrails for future expansion
- Keep changes incremental (one host/group/playbook change at a time).
- Prefer simple playbooks before introducing roles.
- Add network-platform/NixOS-specific logic only when those boundaries are explicitly adopted.
- Keep documentation aligned with source-of-truth boundaries when Ansible authority evolves.
+139
View File
@@ -0,0 +1,139 @@
# Dynu DNS Read-Only Inventory
This repository includes a **read-only** Dynu DNS inventory workflow for `lan.ddnsgeek.com`.
> This integration is intentionally read-only. No Dynu mutations are permitted in this repo at this stage.
## Scope
- Fetch live DNS/domain data from Dynu using **GET requests only**.
- Correlate Dynu hostnames with Traefik `Host(...)` rules found in compose files.
- Generate local inventory artifacts for documentation.
## Safety Guard Rails
- Scripts fail unless `DYNU_READ_ONLY=true`.
- No Dynu write methods (`POST`, `PUT`, `PATCH`, `DELETE`) are implemented.
- No Terraform Dynu provider/resources/modules are introduced.
- No Ansible Dynu mutation tasks are introduced.
- API secrets are read from environment variables and are never logged.
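The read-only guard can be sketched as follows (illustrative only; the actual check lives in the scripts under `scripts/dynu/` and its exact implementation is assumed here):

```python
import os
import sys


def require_read_only() -> None:
    """Abort unless the DYNU_READ_ONLY guard is set to exactly 'true'.

    Sketch of the guard rail described above: any value other than the
    literal string 'true' (including unset) refuses to run.
    """
    if os.environ.get("DYNU_READ_ONLY") != "true":
        sys.exit("Refusing to run: DYNU_READ_ONLY must be exactly 'true'.")
```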
## Correlation logic
`scripts/dynu/correlate_dynu_with_traefik.py` uses compose files as the source of truth and parses them as YAML.
It supports both common label formats:
- list style:
```yaml
labels:
- "traefik.http.routers.app.rule=Host(`app.lan.ddnsgeek.com`)"
```
- map style:
```yaml
labels:
traefik.http.routers.app.rule: "Host(`app.lan.ddnsgeek.com`)"
```
The parser extracts hostnames from router rules such as:
- `` Host(`a`) ``
- `Host("a")`
- `Host('a')`
- multi-host rules (comma-delimited)
- combined expressions such as `Host(...) && PathPrefix(...)`
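A simplified sketch of that extraction logic (the real parser is `scripts/dynu/correlate_dunu_with_traefik.py`'s responsibility; this regex and helper are illustrative assumptions, not the actual implementation):

```python
import re

# Matches the argument list of each Host(...) call in a router rule.
HOST_RE = re.compile(r"Host\(([^)]*)\)")


def extract_hostnames(rule: str) -> list[str]:
    """Return hostnames from a Traefik router rule string.

    Handles backtick/double/single quoting, comma-delimited multi-host
    rules, and combined expressions like Host(...) && PathPrefix(...).
    """
    hosts = []
    for group in HOST_RE.findall(rule):
        for part in group.split(","):
            hosts.append(part.strip().strip("`\"'"))
    return hosts
```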
## Route metadata in inventory
Each discovered hostname mapping includes:
- fqdn
- compose service name
- compose file path
- stack area (`apps`, `monitoring`, `core`)
- router label key(s)
- raw router rule
- `uses_tls`
- `tls_options`
- `middlewares`
- `uses_mtls`
- `uses_authelia`
mTLS is metadata only and **never blocks mapping**.
## Validation model
The generated JSON/Markdown include a top-level `validation` section with:
- `allowed_unmapped_hostnames`
- `unexpected_unmapped_hostnames`
- `duplicate_hostnames`
- `ambiguous_hostnames`
- `validation_ok`
Current policy:
- `edge.lan.ddnsgeek.com` is the only allowed unmapped DNS hostname.
- every other `*.lan.ddnsgeek.com` DNS hostname should map to a compose/Traefik-discovered service.
Optional strict mode:
- Set `DYNU_ENFORCE_VALIDATION=true` to make the correlate script exit non-zero when unexpected unmapped hostnames exist.
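A sketch of what that strict-mode check could look like (behavior inferred from the description above; the real logic in `scripts/dynu/correlate_dynu_with_traefik.py` is assumed, not reproduced):

```python
import os
import sys


def enforce_validation(unexpected_unmapped: list[str]) -> None:
    # Exit non-zero only when strict mode is enabled AND violations exist;
    # otherwise the correlation run stays advisory.
    if os.environ.get("DYNU_ENFORCE_VALIDATION") == "true" and unexpected_unmapped:
        for fqdn in unexpected_unmapped:
            print(f"unexpected unmapped hostname: {fqdn}", file=sys.stderr)
        sys.exit(1)
```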
## Required Environment Variables
- `DYNU_API_KEY` (required for fetch)
- `DYNU_BASE_URL` (optional, defaults to `https://api.dynu.com`)
- `DYNU_READ_ONLY` (**must** be `true`)
Recommended local secrets file (not committed): `secrets/dynu.env`
```bash
DYNU_API_KEY=replace-with-real-api-key
DYNU_READ_ONLY=true
DYNU_BASE_URL=https://api.dynu.com
```
Notes:
- Keep values unquoted unless required by your shell.
- `scripts/dynu/build_dns_inventory.sh` will auto-load `secrets/dynu.env` when present.
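The auto-load step amounts to sourcing the env file with auto-export enabled when it exists; a minimal sketch of that behavior (assumed shape — see `scripts/dynu/build_dns_inventory.sh` for the actual wrapper):

```shell
# Auto-load secrets/dynu.env into the environment when present.
ENV_FILE="secrets/dynu.env"
if [ -f "$ENV_FILE" ]; then
  set -a          # export every variable assigned while sourcing
  . "$ENV_FILE"
  set +a
fi
```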
## Commands
Run directly:
```bash
DYNU_READ_ONLY=true DYNU_API_KEY=... python3 scripts/dynu/fetch_dynu_dns.py
DYNU_READ_ONLY=true python3 scripts/dynu/correlate_dynu_with_traefik.py
```
Or run the wrapper:
```bash
scripts/dynu/build_dns_inventory.sh
```
## Artifacts
- `data/dns/dynu_live.json` (generated, untracked by default due to repo `data/` ignore)
- `data/dns/dynu_traefik_inventory.json` (generated, untracked by default)
- `docs/generated/dns-inventory.md` (generated documentation artifact)
Because `data/` is gitignored in this repository, JSON outputs are intentionally local-only unless ignore behavior changes in the future.
## Ansible Wrapper (Read-Only)
A syntax-safe wrapper playbook is provided at:
- `infrastructure/ansible/playbooks/dns-inventory.yml`
It only executes the local read-only scripts and does not call write-capable Dynu APIs.
## Not Managed Yet
Dynu DNS records are **not** managed by Terraform or Ansible in this repository at this stage.
No configuration in this repository sends Dynu mutation requests.
+67
View File
@@ -0,0 +1,67 @@
# DNS Inventory (Dynu + Traefik)
> This integration is intentionally read-only. No Dynu mutations are permitted in this repo at this stage.
- Base domain: `lan.ddnsgeek.com`
- Dynu fetched at: `2026-04-21T03:55:09+00:00`
- Inventory generated at: `2026-04-21T04:08:43+00:00`
## Summary
- Traefik hostnames discovered: **15**
- Dynu hostnames discovered: **20**
- Mapped hostnames: **15**
- DNS-only hostnames: **5**
- Traefik-only hostnames: **0**
- Ambiguous hostnames: **0**
## Validation
- Validation ok: **false**
- Allowed unmapped hostnames: `edge.lan.ddnsgeek.com`
- Unexpected unmapped hostnames: **3**
- Duplicate hostnames: **1**
- Ambiguous hostnames: **0**
### Allowed unmapped hostnames
- `edge.lan.ddnsgeek.com`
### Unexpected unmapped hostnames
- `kuma.lan.ddnsgeek.com`
- `shifts.lan.ddnsgeek.com`
- `stockfill.lan.ddnsgeek.com`
### Duplicate hostnames
- `mtls-bridge.lan.ddnsgeek.com`
### Ambiguous hostnames
_None._
## Correlation
| Hostname | Status | Reasons | Service(s) | Route metadata | DNS records |
|---|---|---|---|---|---|
| `auth.lan.ddnsgeek.com` | `mapped` | `mapped` | core/authelia | authelia [tls=true, mtls=false, authelia=false, tls_options=-, middlewares=-] | A: |
| `edge.lan.ddnsgeek.com` | `allowed_unmapped` | `allowed_unmapped, dns_only` | - | - | A: |
| `familytree.lan.ddnsgeek.com` | `mapped` | `mapped` | apps/grampsweb | gramps [tls=true, mtls=false, authelia=false, tls_options=-, middlewares=-] | A: |
| `gitea.lan.ddnsgeek.com` | `mapped` | `mapped` | apps/gitea | gitea [tls=true, mtls=false, authelia=false, tls_options=-, middlewares=-] | A: |
| `gotify.lan.ddnsgeek.com` | `mapped` | `mapped` | monitoring/gotify | gotify [tls=true, mtls=true, authelia=false, tls_options=mtls-private-admin@file, middlewares=-] | A: |
| `grafana.lan.ddnsgeek.com` | `mapped` | `mapped` | monitoring/grafana | grafana [tls=true, mtls=true, authelia=false, tls_options=mtls-private-admin@file, middlewares=-] | A: |
| `influxdb.lan.ddnsgeek.com` | `mapped` | `mapped` | monitoring/influxdb | influxdb [tls=true, mtls=true, authelia=true, tls_options=mtls-private-admin@file, middlewares=authelia] | A: |
| `kuma.lan.ddnsgeek.com` | `unexpected_unmapped` | `unexpected_unmapped, dns_only` | - | - | A:120.155.63.223 |
| `lan.ddnsgeek.com` | `dns_only` | `dns_only` | - | - | SOA: |
| `monitor-kuma.lan.ddnsgeek.com` | `mapped` | `mapped` | monitoring/monitor-kuma | monitor [tls=true, mtls=true, authelia=false, tls_options=mtls-private-admin@file, middlewares=-] | A: |
| `mtls-bridge.lan.ddnsgeek.com` | `mapped` | `mapped` | monitoring/mtls-bridge | mtls-bridge [tls=true, mtls=true, authelia=false, tls_options=-, middlewares=mtls-bridge-auth,mtls-bridge-cors]<br>mtls-bridge-preflight [tls=true, mtls=true, authelia=false, tls_options=-, middlewares=mtls-bridge-cors] | A: |
| `nextcloud.lan.ddnsgeek.com` | `mapped` | `mapped` | apps/nextcloud-webapp | nextcloud [tls=true, mtls=false, authelia=false, tls_options=-, middlewares=nextcloud-dav,nextcloud-webfinger] | A: |
| `node-red.lan.ddnsgeek.com` | `mapped` | `mapped` | monitoring/node-red | node-red [tls=true, mtls=true, authelia=true, tls_options=mtls-private-admin@file, middlewares=authelia] | A: |
| `passbolt.lan.ddnsgeek.com` | `mapped` | `mapped` | apps/passbolt-webapp | passbolt [tls=true, mtls=false, authelia=false, tls_options=-, middlewares=-] | A: |
| `portainer.lan.ddnsgeek.com` | `mapped` | `mapped` | monitoring/portainer | portainer [tls=true, mtls=true, authelia=false, tls_options=mtls-private-admin@file, middlewares=-] | A: |
| `prometheus.lan.ddnsgeek.com` | `mapped` | `mapped` | monitoring/prometheus | prometheus [tls=true, mtls=true, authelia=true, tls_options=mtls-private-admin@file, middlewares=authelia] | A: |
| `searxng.lan.ddnsgeek.com` | `mapped` | `mapped` | apps/searxng-webapp | searxng [tls=true, mtls=false, authelia=false, tls_options=-, middlewares=-] | A: |
| `shifts.lan.ddnsgeek.com` | `unexpected_unmapped` | `unexpected_unmapped, dns_only` | - | - | A: |
| `stockfill.lan.ddnsgeek.com` | `unexpected_unmapped` | `unexpected_unmapped, dns_only` | - | - | A: |
| `traefik.lan.ddnsgeek.com` | `mapped` | `mapped` | core/traefik | traefik [tls=true, mtls=true, authelia=true, tls_options=mtls-private-admin@file, middlewares=authelia] | A: |
+21 -3
View File
@@ -25,14 +25,31 @@ This is currently the most structured host/VM inventory in the repo.
These resources should match existing running containers, not redefine runtime composition strategy.
### 3) Compose runtime definitions
### 3) Ansible bootstrap layer
`infrastructure/ansible/` provides an emerging inventory/configuration scaffold for hosts and devices.
Current scope is intentionally limited to structure, variables scaffolding, and safe validation workflows.
### 4) Compose runtime definitions
Compose files define intended service runtime composition, networking, labels, and integration.
### 4) Architecture docs
### 5) Architecture docs
`docs/architecture.md` provides a human-readable topology view based on repository configuration and observed runtime signals.
### 6) Dynu DNS read-only inventory
`scripts/dynu/` and `docs/dynu-dns-inventory.md` provide a strictly read-only DNS inventory workflow:
- fetch Dynu DNS data with GET-only API usage,
- correlate Dynu hostnames with Traefik `Host(...)` labels in Compose sources,
- generate local JSON and markdown artifacts for documentation pipelines.
Dynu write operations are intentionally blocked in this repository stage.
## Output shaping expectations
When adding Terraform outputs for documentation/tooling:
@@ -46,6 +63,7 @@ When adding Terraform outputs for documentation/tooling:
- No full generated inventory document pipeline is present yet.
- Some Terraform files still include generated boilerplate comments requiring ongoing cleanup.
- Ansible/NixOS operational layers are not yet implemented in a way that provides authoritative inventory in this repo.
- Ansible is currently a bootstrap inventory/configuration layer and is not authoritative for full operations yet.
- NixOS operational management is not yet implemented as an Ansible authority in this repo.
These limitations are expected for the current adoption stage.
+5
View File
@@ -8,6 +8,7 @@ This page explains where to find authoritative files quickly.
- `apps/` — user/business applications (Nextcloud, Passbolt, Gitea, Gramps, SearXNG).
- `monitoring/` — observability and operational tooling (Prometheus, Grafana, InfluxDB, Node-RED, etc.).
- `infrastructure/terraform/` — brownfield Terraform inventory/reconciliation layers.
- `infrastructure/ansible/` — phase-1 Ansible inventory/configuration scaffold and validation playbooks.
- `docs/` — repository-level architecture and workflow documentation.
- `archive/` — historical compose/config artifacts not part of active runtime composition.
- `secrets/` — local secret material and templates; never commit real values.
@@ -17,6 +18,8 @@ This page explains where to find authoritative files quickly.
- `services-up.sh` — runtime composition entrypoint for multi-compose environment.
- `default-network.yml` — shared docker network definitions used across compose files.
- `default-environment.env` — non-secret default env values for compose rendering.
- `scripts/codex-setup.sh` — Codex/bootstrap helper to install validation tooling and prepare dummy secret material.
- `scripts/codex-maintenance.sh` — Codex maintenance helper to refresh tooling, reconcile dummy secret material, and run safe Ansible validation checks.
- `docs/deployment-prerequisites.md` — prerequisite setup before runtime operations.
- `docs/security-secrets.md` — secrets documentation and inventory model.
@@ -35,3 +38,5 @@ This page explains where to find authoritative files quickly.
3. Read [docs/docker-environment.md](docker-environment.md).
4. Read [docs/terraform-workflows.md](terraform-workflows.md).
5. Only then edit Compose/Terraform files.
6. For Ansible bootstrap changes, run only inventory parsing and playbook syntax checks.
+6 -2
View File
@@ -10,6 +10,7 @@ For machine-readable inventory metadata, use [`../secrets/inventory.json`](../se
- Canonical example template: [`../secrets/.env.secrets.example`](../secrets/.env.secrets.example)
- Runtime-loaded secret env file (local, non-committed): `../secrets/stack-secrets.env`
- Dynu DNS inventory env file (local, non-committed): `../secrets/dynu.env`
- Docker secret files (local, non-committed): `../secrets/*.txt`
Treat the example template as the canonical shape for expected environment variables.
@@ -20,9 +21,11 @@ Treat the example template as the canonical shape for expected environment varia
- Document expected variable names and usage expectations.
2. **Local runtime env file (`stack-secrets.env`)**
- Holds local runtime secret values loaded during compose rendering.
3. **Local Docker secret files (`*.txt`)**
3. **Local Dynu env file (`dynu.env`)**
- Holds `DYNU_*` values used by read-only Dynu DNS inventory scripts.
4. **Local Docker secret files (`*.txt`)**
- Hold password/token material consumed via `*_FILE` style configuration.
4. **Externally managed secret inputs**
5. **Externally managed secret inputs**
- Some values are managed outside shared templates and provided through file mounts or environment substitution.
## Machine-readable inventory
@@ -41,6 +44,7 @@ Before running compose operations, follow [`./deployment-prerequisites.md`](./de
Never commit:
- `secrets/stack-secrets.env`
- `secrets/dynu.env`
- real `secrets/*.txt` secret files
- real Terraform `.tfvars` files containing credentials
- Terraform state files with sensitive runtime metadata
+12
View File
@@ -10,6 +10,7 @@ This repository has multiple layers. Knowing the authority for each layer preven
| Docker shared baseline inputs | `default-network.yml`, `default-environment.env`, `secrets/stack-secrets.env` | Shared network/env material applied during compose rendering. |
| Infrastructure inventory and reconciliation | Terraform under `infrastructure/terraform/` | Codified inventory of existing infrastructure and relationships, especially Proxmox VMs and selected Docker mirrors. |
| Secret policy and inventory | `docs/security-secrets.md` + `secrets/inventory.json` + local secret files in `secrets/` | What secrets exist, where they are expected, and what automation should parse. |
| Host/device configuration bootstrap (emerging) | Ansible under `infrastructure/ansible/` | Gradual inventory/configuration layer for hosts/devices; validation-first at current stage. |
## Practical meaning
@@ -29,6 +30,17 @@ Use Terraform when documenting/reconciling existing:
Do **not** treat Terraform as a full replacement for Compose operations in this repo.
### Ansible bootstrap decisions
Use Ansible under `infrastructure/ansible/` to build inventory and configuration structure incrementally.
At the current stage:
- Do **not** treat Ansible as replacement authority for Docker runtime operations.
- Do **not** treat Ansible as replacement authority for Terraform inventory/reconciliation.
- NixOS remains outside Ansible authority unless explicitly adopted in a later phase.
## Declared config vs observed/runtime state
- **Declared config**: files in this repository (Compose, Terraform, docs).
+48
View File
@@ -0,0 +1,48 @@
# Ansible Foundation (Phase 1)
This directory provides a minimal Ansible bootstrap for this repository.
## Purpose
- Establish a maintainable inventory/configuration foundation for hosts and devices.
- Support gradual host onboarding and validation workflows.
- Keep boundaries clear with existing Compose and Terraform authorities.
This is intentionally a **foundation stage**, not full production automation.
## Boundaries
- Docker runtime authority remains in Compose files and `services-up.sh`.
- Terraform remains the primary structured infrastructure inventory/reconciliation layer.
- Ansible here is a complementary configuration/inventory layer.
- NixOS and network gear management are not authoritative through Ansible yet.
## Structure
- `ansible.cfg` - local defaults for inventory, collections, and output behavior.
- `inventory/hosts.yml` - YAML inventory scaffold with starter groups.
- `inventory/group_vars/` - shared/group variables.
- `inventory/host_vars/` - per-host variables.
- `playbooks/ping.yml` - minimal syntax/connection test playbook.
- `playbooks/dns-inventory.yml` - local-only Dynu DNS read-only inventory wrapper.
- `collections/requirements.yml` - lightweight baseline collections.
- `roles/` - reserved for future incremental role adoption.
## Basic commands
Run from repository root:
```bash
ansible --version
ansible-lint --version
ansible-galaxy collection install -r infrastructure/ansible/collections/requirements.yml -p infrastructure/ansible/collections
ansible-inventory -i infrastructure/ansible/inventory/hosts.yml --list
ansible-playbook -i infrastructure/ansible/inventory/hosts.yml infrastructure/ansible/playbooks/ping.yml --syntax-check
ansible-playbook -i infrastructure/ansible/inventory/hosts.yml infrastructure/ansible/playbooks/dns-inventory.yml --syntax-check
```
## Secrets and safety
- Do not commit real credentials or private keys.
- Put sensitive per-host variables in local, untracked files or a future vault approach.
- Keep host and device entries factual; avoid speculative production entries.
+9
View File
@@ -0,0 +1,9 @@
[defaults]
inventory = ./inventory/hosts.yml
collections_path = ./collections
retry_files_enabled = False
stdout_callback = yaml
host_key_checking = True
[inventory]
enable_plugins = yaml
@@ -0,0 +1,4 @@
---
collections:
- name: ansible.posix
- name: community.general
@@ -0,0 +1,14 @@
---
# Bootstrap defaults for the Ansible foundation in this repository.
# Keep secrets and environment-specific auth details out of version control.
# Common interpreter hint for modern Linux hosts. Override per-host if needed.
ansible_python_interpreter: /usr/bin/python3
# Placeholders for future connection/auth settings:
# ansible_user: ""
# ansible_port: 22
# ansible_ssh_private_key_file: ""
# Add group-specific settings under inventory/group_vars/<group>.yml
# and host-specific settings under inventory/host_vars/<host>.yml.
@@ -0,0 +1,17 @@
---
all:
children:
linux:
hosts: {}
network:
hosts: {}
virtualization:
hosts: {}
nixos:
hosts: {}
examples:
hosts:
example-managed-host:
ansible_host: example-host.local
ansible_connection: ssh
# Example only: replace/remove before real operations.
@@ -0,0 +1,26 @@
---
# This integration is intentionally read-only.
# No Dynu mutations are permitted in this repo at this stage.
- name: Build Dynu DNS read-only inventory artifacts
hosts: localhost
connection: local
gather_facts: false
vars:
repo_root: "{{ playbook_dir }}/../../.."
tasks:
- name: Assert read-only guard variable is set
ansible.builtin.assert:
that:
- lookup('ansible.builtin.env', 'DYNU_READ_ONLY') == 'true'
fail_msg: "Refusing to run: DYNU_READ_ONLY must be exactly 'true'."
- name: Fetch Dynu DNS (GET-only script)
ansible.builtin.command: python3 scripts/dynu/fetch_dynu_dns.py
args:
chdir: "{{ repo_root }}"
- name: Correlate Dynu with Traefik and generate docs
ansible.builtin.command: python3 scripts/dynu/correlate_dynu_with_traefik.py
args:
chdir: "{{ repo_root }}"
@@ -0,0 +1,7 @@
---
- name: Basic inventory and connectivity check
hosts: all
gather_facts: false
tasks:
- name: Ping managed hosts
ansible.builtin.ping:
+34 -2
View File
@@ -41,7 +41,8 @@ dummy_value_for_key() {
local key="$1"
case "$key" in
*EMAIL* ) echo "dummy@example.com" ;;
*USER*|*USERNAME* ) echo "dummy-user" ;;
*DB_USER* ) echo "dummyuser" ;;
*USERNAME*|*USER* ) echo "dummy-user" ;;
*DOMAIN* ) echo "example.lan.ddnsgeek.com" ;;
*TZ ) echo "Australia/Brisbane" ;;
*URL* ) echo "https://example.lan.ddnsgeek.com" ;;
@@ -49,7 +50,6 @@ dummy_value_for_key() {
*PASSWORD*|*PASS*|*TOKEN*|*SECRET*|*KEY*|*JWT* ) echo "dummy-${key,,}" ;;
*FINGERPRINT* ) echo "0000000000000000000000000000000000000000" ;;
*DB_NAME* ) echo "dummydb" ;;
*DB_USER* ) echo "dummyuser" ;;
*NAME* ) echo "dummy-name" ;;
*ADDRESS* ) echo "dummy" ;;
* ) echo "dummy-value" ;;
@@ -120,3 +120,35 @@ reconcile_file_based_secrets
echo "== Dummy secret reconciliation complete =="
echo "stack env: $STACK_ENV"
jq -r '.file_based_secrets[].path' "$INVENTORY_JSON" | sed 's/^/file secret: /'
REPO_ROOT="${CODEX_REPO_DIR:-$PWD}"
ANSIBLE_DIR="$REPO_ROOT/infrastructure/ansible"
ANSIBLE_CONFIG="$ANSIBLE_DIR/ansible.cfg"
ANSIBLE_COLLECTIONS_REQ="$ANSIBLE_DIR/collections/requirements.yml"
ANSIBLE_INVENTORY="$ANSIBLE_DIR/inventory/hosts.yml"
ANSIBLE_PING_PLAYBOOK="$ANSIBLE_DIR/playbooks/ping.yml"
if [[ -f "$ANSIBLE_COLLECTIONS_REQ" ]]; then
echo "== Refresh Ansible collections (bootstrap) =="
ansible-galaxy collection install -r "$ANSIBLE_COLLECTIONS_REQ" -p "$ANSIBLE_DIR/collections" || true
fi
if command -v ansible >/dev/null 2>&1; then
echo "== Ansible bootstrap validation =="
ANSIBLE_CONFIG="$ANSIBLE_CONFIG" ansible --version | head -n 1 || true
if command -v ansible-lint >/dev/null 2>&1; then
ansible-lint --version || true
fi
if [[ -f "$ANSIBLE_INVENTORY" ]]; then
ANSIBLE_CONFIG="$ANSIBLE_CONFIG" \
ansible-inventory -i "$ANSIBLE_INVENTORY" --list > /dev/null || true
fi
if [[ -f "$ANSIBLE_PING_PLAYBOOK" && -f "$ANSIBLE_INVENTORY" ]]; then
ANSIBLE_CONFIG="$ANSIBLE_CONFIG" \
ansible-playbook -i "$ANSIBLE_INVENTORY" "$ANSIBLE_PING_PLAYBOOK" --syntax-check || true
fi
fi
@@ -104,7 +104,8 @@ dummy_value_for_key() {
local key="$1"
case "$key" in
*EMAIL* ) echo "dummy@example.com" ;;
*USER*|*USERNAME* ) echo "dummy-user" ;;
*DB_USER* ) echo "dummyuser" ;;
*USERNAME*|*USER* ) echo "dummy-user" ;;
*DOMAIN* ) echo "example.lan.ddnsgeek.com" ;;
*TZ ) echo "Australia/Brisbane" ;;
*URL* ) echo "https://example.lan.ddnsgeek.com" ;;
@@ -112,7 +113,6 @@ dummy_value_for_key() {
*PASSWORD*|*PASS*|*TOKEN*|*SECRET*|*KEY*|*JWT* ) echo "dummy-${key,,}" ;;
*FINGERPRINT* ) echo "0000000000000000000000000000000000000000" ;;
*DB_NAME* ) echo "dummydb" ;;
*DB_USER* ) echo "dummyuser" ;;
*NAME* ) echo "dummy-name" ;;
*ADDRESS* ) echo "dummy" ;;
* ) echo "dummy-value" ;;
@@ -152,6 +152,38 @@ ensure_dummy_secret_files() {
render_dummy_stack_env
ensure_dummy_secret_files
ANSIBLE_DIR="$REPO_ROOT/infrastructure/ansible"
ANSIBLE_CONFIG="$ANSIBLE_DIR/ansible.cfg"
ANSIBLE_COLLECTIONS_REQ="$ANSIBLE_DIR/collections/requirements.yml"
ANSIBLE_INVENTORY="$ANSIBLE_DIR/inventory/hosts.yml"
ANSIBLE_PING_PLAYBOOK="$ANSIBLE_DIR/playbooks/ping.yml"
if [[ -f "$ANSIBLE_COLLECTIONS_REQ" ]]; then
echo "== Ansible collections (bootstrap) =="
ansible-galaxy collection install -r "$ANSIBLE_COLLECTIONS_REQ" -p "$ANSIBLE_DIR/collections" || true
fi
if command -v ansible >/dev/null 2>&1; then
echo "== Ansible bootstrap validation =="
ANSIBLE_CONFIG="$ANSIBLE_CONFIG" ansible --version | head -n 1 || true
if command -v ansible-lint >/dev/null 2>&1; then
ansible-lint --version || true
else
echo "ansible-lint not available; skipping version check"
fi
if [[ -f "$ANSIBLE_INVENTORY" ]]; then
ANSIBLE_CONFIG="$ANSIBLE_CONFIG" \
ansible-inventory -i "$ANSIBLE_INVENTORY" --list > /dev/null || true
fi
if [[ -f "$ANSIBLE_PING_PLAYBOOK" && -f "$ANSIBLE_INVENTORY" ]]; then
ANSIBLE_CONFIG="$ANSIBLE_CONFIG" \
ansible-playbook -i "$ANSIBLE_INVENTORY" "$ANSIBLE_PING_PLAYBOOK" --syntax-check || true
fi
fi
echo
echo "== Installed versions =="
bash --version | head -n 1 || true
@@ -0,0 +1,26 @@
#!/usr/bin/env bash
set -euo pipefail
# This integration is intentionally read-only.
# No Dynu mutations are permitted in this repo at this stage.
# Optional convenience: auto-load local Dynu env file when variables are unset.
if [[ -f "secrets/dynu.env" ]]; then
set -a
# shellcheck source=/dev/null
source "secrets/dynu.env"
set +a
fi
if [[ "${DYNU_READ_ONLY:-}" != "true" ]]; then
echo "Refusing to run: DYNU_READ_ONLY must be exactly 'true'." >&2
exit 2
fi
if [[ -z "${DYNU_API_KEY:-}" ]]; then
echo "Missing DYNU_API_KEY. Set it in env or secrets/dynu.env." >&2
exit 2
fi
python3 scripts/dynu/fetch_dynu_dns.py
python3 scripts/dynu/correlate_dynu_with_traefik.py
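The `DYNU_READ_ONLY` guard in the wrapper above accepts only the exact lowercase string `true`. A minimal Python restatement of that check (illustrative only; the repo's actual guard lives in the bash wrapper):

```python
# Python restatement of the DYNU_READ_ONLY guard above: only the exact
# lowercase string "true" passes -- "TRUE", "yes", "1", or unset all fail.
def guard_allows(value):
    return value == "true"

for candidate in ("true", "TRUE", "yes", "1", "", None):
    print(repr(candidate), "allowed" if guard_allows(candidate) else "refused")
```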
@@ -0,0 +1,465 @@
#!/usr/bin/env python3
"""Correlate Dynu DNS data with Traefik host rules in compose sources.
This integration is intentionally read-only.
No Dynu mutations are permitted in this repo at this stage.
"""
from __future__ import annotations
import json
import os
import re
import sys
from collections import defaultdict
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, Iterable, List, Set
import yaml
BASE_DOMAIN = "lan.ddnsgeek.com"
ALLOWED_UNMAPPED_HOSTNAMES = ["edge.lan.ddnsgeek.com"]
DYN_DATA = Path("data/dns/dynu_live.json")
OUT_JSON = Path("data/dns/dynu_traefik_inventory.json")
OUT_MD = Path("docs/generated/dns-inventory.md")
HOST_CALL_RE = re.compile(r"Host\s*\(([^)]*)\)", re.IGNORECASE)
QUOTED_HOST_RE = re.compile(r"[`\"']([^`\"']+)[`\"']")
ROUTER_LABEL_RE = re.compile(r"^traefik\.http\.routers\.([^.]+)\.(.+)$")
class ReadOnlyError(RuntimeError):
pass
def require_read_only() -> None:
if os.environ.get("DYNU_READ_ONLY") != "true":
raise ReadOnlyError(
"Refusing to run: DYNU_READ_ONLY must be exactly 'true'. "
"This integration is intentionally read-only."
)
def compose_files(root: Path) -> List[Path]:
files: Set[Path] = set()
if (root / "default-network.yml").exists():
files.add(root / "default-network.yml")
for area in ("apps", "monitoring", "core"):
base = root / area
if not base.exists():
continue
for pattern in ("**/docker-compose.yml", "**/docker-compose.yaml"):
files.update(p for p in base.glob(pattern) if p.is_file())
return sorted(files)
def parse_hosts_from_rule(rule: str) -> List[str]:
hosts: Set[str] = set()
for call_fragment in HOST_CALL_RE.findall(rule):
quoted_hosts = QUOTED_HOST_RE.findall(call_fragment)
for host in quoted_hosts:
clean = host.strip().strip(".").lower()
if clean:
hosts.add(clean)
if not quoted_hosts:
for token in call_fragment.split(","):
clean = token.strip().strip(".`\"'").lower()
if clean:
hosts.add(clean)
return sorted(hosts)
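As a standalone sketch of the host-extraction behaviour above (the two regexes re-declared here so the snippet is self-contained; quoted hosts win, with a comma-split fallback for unquoted `Host(...)` arguments):

```python
import re

# Same patterns as in correlate_dynu_with_traefik.py above.
HOST_CALL_RE = re.compile(r"Host\s*\(([^)]*)\)", re.IGNORECASE)
QUOTED_HOST_RE = re.compile(r"[`\"']([^`\"']+)[`\"']")

def parse_hosts(rule: str) -> list:
    hosts = set()
    for fragment in HOST_CALL_RE.findall(rule):
        quoted = QUOTED_HOST_RE.findall(fragment)
        for host in quoted:
            clean = host.strip().strip(".").lower()
            if clean:
                hosts.add(clean)
        if not quoted:
            # Fallback: unquoted, possibly comma-separated host list.
            for token in fragment.split(","):
                clean = token.strip().strip(".`\"'").lower()
                if clean:
                    hosts.add(clean)
    return sorted(hosts)

print(parse_hosts("Host(`app.lan.ddnsgeek.com`) || Host(`Api.lan.ddnsgeek.com.`)"))
print(parse_hosts("Host(a.example.com, b.example.com)"))
```

Note that hostnames are lower-cased and trailing dots stripped, so `Api.lan.ddnsgeek.com.` and `api.lan.ddnsgeek.com` collapse to one entry.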
def load_env_defaults(repo_root: Path) -> Dict[str, str]:
env_values: Dict[str, str] = {}
for candidate in (repo_root / "default-environment.env", repo_root / ".env"):
if not candidate.exists():
continue
for line in candidate.read_text(encoding="utf-8").splitlines():
stripped = line.strip()
if not stripped or stripped.startswith("#") or "=" not in stripped:
continue
key, value = stripped.split("=", 1)
env_values[key.strip()] = value.strip().strip("'\"")
return env_values
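The `.env` parsing rules used by `load_env_defaults` above can be sketched in isolation (same logic, operating on a string instead of files: skip blanks and comments, split on the first `=`, strip surrounding quotes):

```python
# Standalone sketch of the .env parsing rules in load_env_defaults above.
def parse_env_text(text: str) -> dict:
    values = {}
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#") or "=" not in stripped:
            continue
        key, value = stripped.split("=", 1)
        values[key.strip()] = value.strip().strip("'\"")
    return values

sample = "# comment\nDOMAIN='lan.ddnsgeek.com'\nNOEQUALS\nURL=https://a=b\n"
print(parse_env_text(sample))
```

Splitting on the first `=` only is what keeps values containing `=` (such as URLs with query strings) intact.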
def resolve_rule_variables(rule: str, env_values: Dict[str, str]) -> str:
var_re = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")
def replacer(match: re.Match[str]) -> str:
key = match.group(1)
if key in os.environ:
return os.environ[key]
return env_values.get(key, match.group(0))
return var_re.sub(replacer, rule)
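A minimal re-implementation of the `${VAR}` resolution above, showing the precedence it applies: process environment first, then the file defaults, with unknown variables left untouched (the variable names here are illustrative):

```python
import os
import re

# Same substitution pattern as resolve_rule_variables above.
VAR_RE = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve(rule: str, env_values: dict) -> str:
    def replacer(match):
        key = match.group(1)
        if key in os.environ:          # live environment wins
            return os.environ[key]
        return env_values.get(key, match.group(0))  # else file default, else leave as-is
    return VAR_RE.sub(replacer, rule)

defaults = {"EXAMPLE_BASE_DOMAIN": "lan.ddnsgeek.com"}
print(resolve("Host(`app.${EXAMPLE_BASE_DOMAIN}`) && ${UNSET_VAR}", defaults))
```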
def normalize_labels(raw_labels: Any) -> Dict[str, str]:
labels: Dict[str, str] = {}
if isinstance(raw_labels, dict):
for key, value in raw_labels.items():
labels[str(key)] = "" if value is None else str(value)
return labels
if isinstance(raw_labels, list):
for item in raw_labels:
if isinstance(item, str) and "=" in item:
key, value = item.split("=", 1)
labels[key.strip()] = value.strip()
elif isinstance(item, str):
labels[item.strip()] = ""
return labels
return labels
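Compose allows labels as either a mapping or a `key=value` list; a self-contained sketch of how `normalize_labels` above flattens both shapes into one dict:

```python
# Sketch of the two compose label shapes accepted by normalize_labels above.
def normalize(raw):
    labels = {}
    if isinstance(raw, dict):
        # Mapping form: stringify keys/values, None becomes "".
        return {str(k): "" if v is None else str(v) for k, v in raw.items()}
    if isinstance(raw, list):
        # List form: split "key=value"; bare strings become empty-valued keys.
        for item in raw:
            if isinstance(item, str) and "=" in item:
                k, v = item.split("=", 1)
                labels[k.strip()] = v.strip()
            elif isinstance(item, str):
                labels[item.strip()] = ""
    return labels

print(normalize(["traefik.enable=true", "bare-flag"]))
print(normalize({"traefik.enable": True}))
```

One wrinkle worth noting: YAML booleans in mapping form stringify as `True`/`False`, while list form keeps the literal `true` text.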
def infer_stack(compose_file: Path) -> str:
parts = compose_file.parts
return parts[0] if parts else "unknown"
def boolish(value: str) -> bool:
return value.strip().lower() in {"1", "true", "yes", "on"}
def parse_middlewares(raw_value: str) -> List[str]:
return [item.strip() for item in raw_value.split(",") if item.strip()]
def extract_traefik_hosts(path: Path, env_values: Dict[str, str]) -> List[Dict[str, Any]]:
try:
payload = yaml.safe_load(path.read_text(encoding="utf-8")) or {}
except yaml.YAMLError as exc:
raise RuntimeError(f"Failed to parse compose YAML in {path}: {exc}") from exc
services = payload.get("services")
if not isinstance(services, dict):
return []
entries: List[Dict[str, Any]] = []
stack = infer_stack(path)
for service_name, service_payload in services.items():
if not isinstance(service_payload, dict):
continue
labels = normalize_labels(service_payload.get("labels"))
router_fields: Dict[str, Dict[str, str]] = defaultdict(dict)
for label_key, label_value in labels.items():
match = ROUTER_LABEL_RE.match(label_key)
if not match:
continue
router_name, field_name = match.groups()
router_fields[router_name][field_name] = label_value
for router_name, fields in router_fields.items():
rule = fields.get("rule", "")
if not rule:
continue
router_label_key = f"traefik.http.routers.{router_name}.rule"
middlewares = parse_middlewares(fields.get("middlewares", ""))
tls_options = fields.get("tls.options", "")
tls_enabled = boolish(fields.get("tls", "")) or bool(tls_options) or bool(fields.get("tls.certresolver", ""))
lowered_metadata = " ".join([tls_options, ",".join(middlewares)]).lower()
uses_mtls = "mtls" in lowered_metadata
uses_authelia = "authelia" in lowered_metadata
resolved_rule = resolve_rule_variables(rule, env_values)
for fqdn in parse_hosts_from_rule(resolved_rule):
entries.append(
{
"fqdn": fqdn,
"service": str(service_name),
"stack": stack,
"source_compose_file": str(path),
"router": router_name,
"router_label_keys": [router_label_key],
"raw_rule": rule,
"resolved_rule": resolved_rule,
"uses_tls": tls_enabled,
"tls_options": tls_options,
"middlewares": middlewares,
"uses_mtls": uses_mtls,
"uses_authelia": uses_authelia,
}
)
return entries
def load_dynu(path: Path) -> Dict[str, List[Dict[str, str]]]:
payload = json.loads(path.read_text(encoding="utf-8"))
if payload.get("base_domain") != BASE_DOMAIN:
raise RuntimeError(
f"Dynu JSON base_domain mismatch. Expected {BASE_DOMAIN}, got {payload.get('base_domain')}"
)
index: Dict[str, List[Dict[str, str]]] = defaultdict(list)
for domain in payload.get("domains", []):
for record in domain.get("records", []):
host = str(record.get("hostname", "")).strip(".").lower()
if host:
index[host].append(
{
"type": str(record.get("type", "")),
"value": str(record.get("value", "")),
"target": str(record.get("target") or ""),
"ttl": str(record.get("ttl") if record.get("ttl") is not None else ""),
}
)
for host in index:
index[host] = sorted(index[host], key=lambda x: (x["type"], x["value"], x["target"], x["ttl"]))
return index
def is_subdomain_of_base(fqdn: str) -> bool:
return fqdn.endswith(f".{BASE_DOMAIN}")
def summarize_reasons(
has_traefik: bool,
has_dns: bool,
is_allowed_unmapped: bool,
is_ambiguous: bool,
is_enforced_dns_subdomain: bool,
) -> List[str]:
reasons: List[str] = []
if has_traefik and has_dns:
reasons.append("mapped")
if has_dns and not has_traefik and is_allowed_unmapped:
reasons.append("allowed_unmapped")
if has_dns and not has_traefik and is_enforced_dns_subdomain and not is_allowed_unmapped:
reasons.append("unexpected_unmapped")
if has_dns and not has_traefik:
reasons.append("dns_only")
if has_traefik and not has_dns:
reasons.append("traefik_only")
if is_ambiguous:
reasons.append("duplicate_mapping")
reasons.append("ambiguous_mapping")
return reasons
def write_markdown(data: Dict[str, Any]) -> None:
inventory = data["inventory"]
lines = [
"# DNS Inventory (Dynu + Traefik)",
"",
"> This integration is intentionally read-only. No Dynu mutations are permitted in this repo at this stage.",
"",
f"- Base domain: `{data['base_domain']}`",
f"- Dynu fetched at: `{data['dynu_fetched_at']}`",
f"- Inventory generated at: `{data['generated_at']}`",
"",
"## Summary",
"",
f"- Traefik hostnames discovered: **{data['summary']['traefik_hostnames']}**",
f"- Dynu hostnames discovered: **{data['summary']['dynu_hostnames']}**",
f"- Mapped hostnames: **{data['summary']['mapped_hostnames']}**",
f"- DNS-only hostnames: **{data['summary']['dns_only_hostnames']}**",
f"- Traefik-only hostnames: **{data['summary']['traefik_only_hostnames']}**",
f"- Ambiguous hostnames: **{len(data['validation']['ambiguous_hostnames'])}**",
"",
"## Validation",
"",
f"- Validation ok: **{str(data['validation']['validation_ok']).lower()}**",
f"- Allowed unmapped hostnames: `{', '.join(data['validation']['allowed_unmapped_hostnames'])}`",
f"- Unexpected unmapped hostnames: **{len(data['validation']['unexpected_unmapped_hostnames'])}**",
f"- Duplicate hostnames: **{len(data['validation']['duplicate_hostnames'])}**",
f"- Ambiguous hostnames: **{len(data['validation']['ambiguous_hostnames'])}**",
"",
]
def bullet_list(title: str, values: Iterable[str]) -> None:
rows = list(values)
lines.extend([f"### {title}", ""])
if not rows:
lines.append("_None._")
else:
for value in rows:
lines.append(f"- `{value}`")
lines.append("")
bullet_list("Allowed unmapped hostnames", data["validation"]["allowed_unmapped_hostnames"])
bullet_list("Unexpected unmapped hostnames", data["validation"]["unexpected_unmapped_hostnames"])
bullet_list("Duplicate hostnames", data["validation"]["duplicate_hostnames"])
bullet_list("Ambiguous hostnames", data["validation"]["ambiguous_hostnames"])
lines.extend(
[
"## Correlation",
"",
"| Hostname | Status | Reasons | Service(s) | Route metadata | DNS records |",
"|---|---|---|---|---|---|",
]
)
for row in inventory:
services = sorted({f"{entry['stack']}/{entry['service']}" for entry in row["traefik_entries"]})
service_cell = ", ".join(services) if services else "-"
reason_cell = ", ".join(row["reasons"]) if row["reasons"] else "-"
route_chunks = []
for entry in row["traefik_entries"]:
middlewares = ",".join(entry.get("middlewares", [])) or "-"
route_chunks.append(
f"{entry['router']} [tls={str(entry['uses_tls']).lower()}, mtls={str(entry['uses_mtls']).lower()}, authelia={str(entry['uses_authelia']).lower()}, tls_options={entry.get('tls_options') or '-'}, middlewares={middlewares}]"
)
route_cell = "<br>".join(route_chunks) if route_chunks else "-"
dns_cell = ", ".join(f"{item['type']}:{item['value']}" for item in row["dynu_records"]) if row["dynu_records"] else "-"
lines.append(f"| `{row['fqdn']}` | `{row['status']}` | `{reason_cell}` | {service_cell} | {route_cell} | {dns_cell} |")
OUT_MD.parent.mkdir(parents=True, exist_ok=True)
OUT_MD.write_text("\n".join(lines) + "\n", encoding="utf-8")
def main() -> int:
try:
require_read_only()
except ReadOnlyError as exc:
print(str(exc), file=sys.stderr)
return 2
if not DYN_DATA.exists():
print(f"Missing {DYN_DATA}. Run fetch_dynu_dns.py first.", file=sys.stderr)
return 3
dyn_payload = json.loads(DYN_DATA.read_text(encoding="utf-8"))
dynu_index = load_dynu(DYN_DATA)
repo_root = Path(__file__).resolve().parents[2]
env_values = load_env_defaults(repo_root)
hosts: List[Dict[str, Any]] = []
for cf in compose_files(repo_root):
hosts.extend(extract_traefik_hosts(cf.relative_to(repo_root), env_values))
by_fqdn: Dict[str, List[Dict[str, Any]]] = defaultdict(list)
for entry in hosts:
if entry["fqdn"] == BASE_DOMAIN or is_subdomain_of_base(entry["fqdn"]):
by_fqdn[entry["fqdn"]].append(entry)
duplicate_hostnames = sorted(k for k, v in by_fqdn.items() if len(v) > 1)
combined_fqdns = sorted(set(by_fqdn.keys()) | set(dynu_index.keys()))
inventory = []
ambiguous_hostnames: List[str] = []
for fqdn in combined_fqdns:
traefik_entries = sorted(
by_fqdn.get(fqdn, []),
key=lambda x: (x["stack"], x["service"], x["source_compose_file"], x["router"]),
)
dns_records = dynu_index.get(fqdn, [])
is_allowed_unmapped = fqdn in ALLOWED_UNMAPPED_HOSTNAMES
has_traefik = bool(traefik_entries)
has_dns = bool(dns_records)
service_keys = {f"{item['stack']}/{item['service']}" for item in traefik_entries}
is_ambiguous = len(service_keys) > 1
if is_ambiguous:
ambiguous_hostnames.append(fqdn)
is_enforced_dns_subdomain = is_subdomain_of_base(fqdn)
if has_traefik and has_dns:
status = "mapped"
elif has_dns and is_allowed_unmapped:
status = "allowed_unmapped"
elif has_dns and not has_traefik and is_enforced_dns_subdomain:
status = "unexpected_unmapped"
elif has_dns and not has_traefik:
status = "dns_only"
else:
status = "traefik_only"
reasons = summarize_reasons(
has_traefik, has_dns, is_allowed_unmapped, is_ambiguous, is_enforced_dns_subdomain
)
inventory.append(
{
"fqdn": fqdn,
"status": status,
"reasons": reasons,
"duplicate": fqdn in duplicate_hostnames,
"traefik_entries": traefik_entries,
"dynu_records": dns_records,
}
)
subdomain_dns_hosts = sorted(host for host in dynu_index if is_subdomain_of_base(host))
unexpected_unmapped_hostnames = sorted(
host for host in subdomain_dns_hosts if host not in by_fqdn and host not in ALLOWED_UNMAPPED_HOSTNAMES
)
validation = {
"allowed_unmapped_hostnames": sorted(ALLOWED_UNMAPPED_HOSTNAMES),
"unexpected_unmapped_hostnames": unexpected_unmapped_hostnames,
"duplicate_hostnames": duplicate_hostnames,
"ambiguous_hostnames": sorted(set(ambiguous_hostnames)),
"validation_ok": len(unexpected_unmapped_hostnames) == 0,
}
dynu_rows = []
for fqdn in sorted(dynu_index.keys()):
for rec in dynu_index[fqdn]:
dynu_rows.append(
{
"hostname": fqdn,
"type": rec["type"],
"value": rec["value"],
"ttl": rec["ttl"],
}
)
output = {
"source": "dynu+traefik",
"read_only": True,
"base_domain": BASE_DOMAIN,
"dynu_fetched_at": dyn_payload.get("fetched_at"),
"generated_at": datetime.now(timezone.utc).replace(microsecond=0).isoformat(),
"summary": {
"traefik_hostnames": len(by_fqdn),
"dynu_hostnames": len(dynu_index),
"mapped_hostnames": sum(1 for x in inventory if x["status"] == "mapped"),
"dns_only_hostnames": sum(1 for x in inventory if "dns_only" in x["reasons"]),
"traefik_only_hostnames": sum(1 for x in inventory if x["status"] == "traefik_only"),
},
"validation": validation,
"inventory": inventory,
"dynu_records_table": dynu_rows,
}
OUT_JSON.parent.mkdir(parents=True, exist_ok=True)
OUT_JSON.write_text(json.dumps(output, indent=2, sort_keys=True) + "\n", encoding="utf-8")
write_markdown(output)
print(f"Wrote {OUT_JSON}")
print(f"Wrote {OUT_MD}")
if os.environ.get("DYNU_ENFORCE_VALIDATION") == "true" and not validation["validation_ok"]:
print(
"Validation failed: unexpected unmapped hostnames were found: "
+ ", ".join(validation["unexpected_unmapped_hostnames"]),
file=sys.stderr,
)
return 4
return 0
if __name__ == "__main__":
raise SystemExit(main())
@@ -0,0 +1,244 @@
#!/usr/bin/env python3
"""Fetch Dynu DNS inventory in strict read-only mode.
This integration is intentionally read-only.
No Dynu mutations are permitted in this repo at this stage.
"""
from __future__ import annotations
import json
import os
import sys
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, Iterable, List, Optional
from urllib.error import HTTPError, URLError
from urllib.parse import urlencode
from urllib.request import Request, urlopen
BASE_DOMAIN = "lan.ddnsgeek.com"
DEFAULT_BASE_URL = "https://api.dynu.com"
OUT_PATH = Path("data/dns/dynu_live.json")
class DynuReadOnlyError(RuntimeError):
"""Raised when read-only safety guardrails are not met."""
def require_read_only() -> None:
if os.environ.get("DYNU_READ_ONLY") != "true":
raise DynuReadOnlyError(
"Refusing to run: DYNU_READ_ONLY must be exactly 'true'. "
"This integration is intentionally read-only."
)
def get_json(base_url: str, api_key: str, path: str, query: Optional[Dict[str, Any]] = None) -> Any:
"""HTTP GET helper. Any non-GET usage is blocked by design."""
url = f"{base_url.rstrip('/')}{path}"
if query:
url = f"{url}?{urlencode(query)}"
req = Request(
url=url,
headers={
"accept": "application/json",
"API-Key": api_key,
},
method="GET",
)
try:
with urlopen(req, timeout=30) as resp:
body = resp.read().decode("utf-8")
except HTTPError as exc:
detail = exc.read().decode("utf-8", errors="replace")
hint = ""
if exc.code == 401:
hint = (
" Check DYNU_API_KEY from secrets/dynu.env, verify it is a valid Dynu API key, "
"and ensure DYNU_BASE_URL points to the Dynu API endpoint."
)
raise RuntimeError(f"Dynu API GET failed at {path}: HTTP {exc.code} {detail}.{hint}") from exc
except URLError as exc:
raise RuntimeError(f"Dynu API GET failed at {path}: {exc}") from exc
try:
return json.loads(body)
except json.JSONDecodeError as exc:
raise RuntimeError(f"Dynu API returned non-JSON response at {path}") from exc
def list_domains(base_url: str, api_key: str) -> List[Dict[str, Any]]:
first = get_json(base_url, api_key, "/v2/dns")
def extract_domains(payload: Any) -> List[Dict[str, Any]]:
if isinstance(payload, list):
return [x for x in payload if isinstance(x, dict)]
if isinstance(payload, dict):
for key in ("domains", "items", "data", "dnsDomains"):
val = payload.get(key)
if isinstance(val, list):
return [x for x in val if isinstance(x, dict)]
return []
domains = extract_domains(first)
if isinstance(first, dict):
page = first.get("pageNumber") or first.get("page") or 1
total = first.get("totalPages") or first.get("pages")
if isinstance(page, int) and isinstance(total, int) and total > page:
for p in range(page + 1, total + 1):
nxt = get_json(base_url, api_key, "/v2/dns", {"page": p})
domains.extend(extract_domains(nxt))
return domains
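The payload-shape tolerance inside `list_domains` above can be illustrated without any network access (the inner `extract_domains` repeated standalone; the accepted container keys are the ones the script guesses the Dynu API might use):

```python
# Standalone copy of the extract_domains helper from list_domains above:
# the /v2/dns response may be a bare list, or a dict keyed by one of
# several plausible container names; non-dict entries are dropped.
def extract_domains(payload):
    if isinstance(payload, list):
        return [x for x in payload if isinstance(x, dict)]
    if isinstance(payload, dict):
        for key in ("domains", "items", "data", "dnsDomains"):
            val = payload.get(key)
            if isinstance(val, list):
                return [x for x in val if isinstance(x, dict)]
    return []

print(extract_domains([{"id": 1}, "junk"]))
print(extract_domains({"dnsDomains": [{"id": 2}]}))
print(extract_domains({"unexpected": True}))
```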
def list_records(base_url: str, api_key: str, domain_id: Any) -> List[Dict[str, Any]]:
payload = get_json(base_url, api_key, f"/v2/dns/{domain_id}/record")
if isinstance(payload, list):
return [x for x in payload if isinstance(x, dict)]
if isinstance(payload, dict):
for key in ("dnsRecords", "records", "items", "data"):
val = payload.get(key)
if isinstance(val, list):
return [x for x in val if isinstance(x, dict)]
return []
def normalize_hostname(record: Dict[str, Any], domain_name: str) -> str:
node_name = record.get("nodeName") or record.get("hostname") or record.get("node") or ""
node_name = str(node_name).strip().strip(".")
base = domain_name.strip().strip(".")
if not node_name or node_name in {"@", base}:
return base
if node_name.endswith(base):
return node_name
return f"{node_name}.{base}"
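The hostname normalization above has three cases, shown here as a standalone sketch taking the node name directly instead of a record dict: empty or `@` nodes map to the apex, already-qualified names pass through, and bare node names get the base domain appended.

```python
# Standalone sketch of normalize_hostname above (node name passed
# directly rather than extracted from a Dynu record dict).
def normalize_hostname(node_name: str, base: str) -> str:
    node_name = node_name.strip().strip(".")
    base = base.strip().strip(".")
    if not node_name or node_name in {"@", base}:
        return base                     # apex record
    if node_name.endswith(base):
        return node_name                # already fully qualified
    return f"{node_name}.{base}"        # bare node name

base = "lan.ddnsgeek.com"
print(normalize_hostname("@", base))
print(normalize_hostname("edge", base))
print(normalize_hostname("edge.lan.ddnsgeek.com.", base))
```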
def normalize_records(records: Iterable[Dict[str, Any]], domain_name: str) -> List[Dict[str, Any]]:
normalized = []
for rec in records:
record_type = rec.get("recordType") or rec.get("type") or ""
value = (
rec.get("value")
or rec.get("ipv4Address")
or rec.get("ipv6Address")
or rec.get("host")
or rec.get("textData")
or ""
)
target = rec.get("host") or rec.get("target")
priority = rec.get("priority")
ttl = rec.get("ttl")
raw_subset = {
k: rec.get(k)
for k in (
"id",
"nodeName",
"recordType",
"state",
"group",
"host",
"ipv4Address",
"ipv6Address",
"textData",
"ttl",
"priority",
"weight",
"port",
)
if k in rec
}
normalized.append(
{
"id": rec.get("id"),
"hostname": normalize_hostname(rec, domain_name),
"type": str(record_type),
"value": str(value),
"target": target,
"ttl": ttl,
"priority": priority,
"raw": raw_subset,
}
)
return sorted(normalized, key=lambda x: (x["hostname"], x["type"], x["value"], str(x.get("id") or "")))
def main() -> int:
try:
require_read_only()
except DynuReadOnlyError as exc:
print(str(exc), file=sys.stderr)
return 2
api_key = os.environ.get("DYNU_API_KEY")
if not api_key:
print("Missing DYNU_API_KEY. Refusing to call Dynu API.", file=sys.stderr)
return 2
api_key = api_key.strip().strip("'").strip('"')
if not api_key:
print("DYNU_API_KEY is empty after trimming quotes/whitespace.", file=sys.stderr)
return 2
base_url = os.environ.get("DYNU_BASE_URL", DEFAULT_BASE_URL).strip().strip("'").strip('"')
try:
domains = list_domains(base_url, api_key)
except RuntimeError as exc:
print(str(exc), file=sys.stderr)
return 1
target = [d for d in domains if str(d.get("name", "")).strip(".").lower() == BASE_DOMAIN]
if not target:
print(
f"Could not find required domain '{BASE_DOMAIN}' in Dynu /v2/dns response.",
file=sys.stderr,
)
return 3
normalized_domains = []
for d in target:
domain_id = d.get("id")
if domain_id is None:
print(f"Domain entry for {BASE_DOMAIN} is missing 'id'; cannot fetch records.", file=sys.stderr)
return 4
try:
records = list_records(base_url, api_key, domain_id)
except RuntimeError as exc:
print(str(exc), file=sys.stderr)
return 1
normalized_domains.append(
{
"name": str(d.get("name", BASE_DOMAIN)).strip().strip("."),
"id": domain_id,
"records": normalize_records(records, str(d.get("name", BASE_DOMAIN))),
}
)
normalized_domains.sort(key=lambda x: (x["name"], str(x.get("id") or "")))
output = {
"source": "dynu",
"read_only": True,
"fetched_at": datetime.now(timezone.utc).replace(microsecond=0).isoformat(),
"base_domain": BASE_DOMAIN,
"domains": normalized_domains,
}
OUT_PATH.parent.mkdir(parents=True, exist_ok=True)
OUT_PATH.write_text(json.dumps(output, indent=2, sort_keys=True) + "\n", encoding="utf-8")
print(f"Wrote {OUT_PATH}")
return 0
if __name__ == "__main__":
raise SystemExit(main())
@@ -2,6 +2,7 @@
"scope_and_authority": {
"canonical_example_template": "secrets/.env.secrets.example",
"runtime_loaded_secret_env_file": "secrets/stack-secrets.env",
"dns_inventory_secret_env_file": "secrets/dynu.env",
"docker_secret_files_pattern": "secrets/*.txt"
},
"env_template_variables": [
@@ -140,6 +141,7 @@
],
"commit_safety_rules": [
"Never commit secrets/stack-secrets.env.",
"Never commit secrets/dynu.env.",
"Never commit real secrets/*.txt files.",
"Never commit real Terraform .tfvars containing credentials.",
"Never commit Terraform state files with sensitive runtime metadata."