Compare commits


13 Commits

Author SHA1 Message Date
git f0aa9941d8 Merge branch 'main' of https://github.com/beatz174-bit/docker 2026-04-21 11:34:21 +10:00
git 6789451bd6 moved codex environment scripts to repo 2026-04-21 11:33:50 +10:00
beatz174-bit b8d9e62954 Merge pull request #49 from beatz174-bit/codex/refactor-secret-inventory-and-documentation-structure
Refactor secrets inventory into docs and JSON source of truth
2026-04-21 11:25:04 +10:00
beatz174-bit 9f36dabcdc Refactor secrets inventory into docs + machine-readable JSON 2026-04-21 11:24:45 +10:00
beatz174-bit 3c78e9c140 Merge pull request #48 from beatz174-bit/codex/remove-unsupported-labels-from-containers
terraform(docker): remove unsupported docker_container labels arg and ignore label drift
2026-04-21 11:20:17 +10:00
beatz174-bit 40a976f712 terraform: remove docker_container labels args and ignore label drift 2026-04-21 11:20:08 +10:00
beatz174-bit 451be4ab0d Update AGENTS.md 2026-04-21 11:00:31 +10:00
beatz174-bit b848d6b807 Merge pull request #46 from beatz174-bit/codex/complete-terraform-documentation-for-docker
Document full Docker compose container inventory in Terraform docker layer
2026-04-21 09:46:46 +10:00
beatz174-bit 4695839df4 Manage container labels and expose labels in inventory output 2026-04-21 09:43:38 +10:00
beatz174-bit cb92ebc70e Fix Terraform container address and traefik runtime wiring 2026-04-21 09:43:33 +10:00
beatz174-bit 5c3bc0317c Merge pull request #47 from beatz174-bit/codex/perform-documentation-overhaul-for-repo
docs: refresh and expand repository documentation model
2026-04-21 09:35:33 +10:00
beatz174-bit c7dd9f2229 docs: overhaul repo documentation and workflow guides 2026-04-21 09:28:55 +10:00
beatz174-bit 7258d150ad Document full Docker compose container inventory in Terraform 2026-04-21 09:26:36 +10:00
52 changed files with 2128 additions and 330 deletions
+1
@@ -24,6 +24,7 @@ monitoring/influxdb/*
!monitoring/influxdb/docker-compose.yml
secrets/*
!secrets/.env.secrets.example
+!secrets/inventory.json
!.env.example
core/traefik/certs/*
!core/traefik/certs/.gitkeep
+6
@@ -49,6 +49,12 @@ Do not run:
- `terraform apply`
- `terraform destroy`
+If `terraform init` fails because access to `registry.terraform.io` is forbidden, do not summarize the error vaguely. Report the exact stderr. Continue with:
+- `terraform fmt -check -recursive`
+- static review of changed `.tf` files
+Only run `terraform validate` when provider installation is available locally or registry access succeeds.
## Ansible rules
Allowed:
-48
@@ -1,48 +0,0 @@
# Deployment prerequisites (required)
Before running `docker compose up`, you **must** provision runtime secrets.
## 1) Create non-committed secret files
```bash
cp secrets/.env.secrets.example secrets/stack-secrets.env
chmod 600 secrets/stack-secrets.env
```
Create these Docker secret files (all ignored by git):
- `secrets/nextcloud_db_root_password.txt`
- `secrets/nextcloud_db_password.txt`
- `secrets/nextcloud_admin_password.txt`
- `secrets/nextcloud_smtp_password.txt`
- `secrets/nextcloud_redis_password.txt`
- `secrets/passbolt_db_password.txt`
- `secrets/influxdb_init_password.txt`
- `secrets/prometheus_kuma_basic_auth_password.txt`
Recommended permissions:
```bash
chmod 600 secrets/*.txt
```
## 2) Rotate previously committed credentials
These values were previously hardcoded and must be rotated in upstream systems immediately:
- Database credentials (Nextcloud, Passbolt, InfluxDB).
- Nextcloud SMTP app password.
- Authelia reset JWT secret, session secret, storage encryption key.
- Traefik CrowdSec LAPI key.
- Gotify admin password.
- Prometheus Uptime Kuma basic-auth password.
## 3) Start stack
After secrets are provisioned:
```bash
docker compose -f core/docker-compose.yml up -d
docker compose -f monitoring/prometheus/docker-compose.yml up -d
docker compose -f apps/nextcloud/docker-compose.yml up -d
```
+63 -4
@@ -1,8 +1,67 @@
-# Docker + Traefik Homelab Stack
+# Homelab Docker + Terraform Inventory Repository
-This repository defines a multi-compose Docker environment with Traefik as ingress, app workloads, and a monitoring/alerting plane.
+This repository is both:
-## High-Level Architecture
+1. **operational** (Docker Compose application/runtime definition), and
2. **documentary/inventory-oriented** (Terraform capture of Proxmox VMs, host metadata, and selected Docker objects).
If you only read one section, read **[Source-of-truth boundaries](docs/source-of-truth.md)** first.
---
## Quick navigation
- Architecture overview: [docs/architecture.md](docs/architecture.md)
- Repository layout: [docs/repo-structure.md](docs/repo-structure.md)
- Source-of-truth boundaries and guardrails: [docs/source-of-truth.md](docs/source-of-truth.md)
- Docker environment composition and `services-up.sh`: [docs/docker-environment.md](docs/docker-environment.md)
- Terraform workflows (brownfield import/reconciliation): [docs/terraform-workflows.md](docs/terraform-workflows.md)
- Infrastructure inventory intent and outputs: [docs/infrastructure-inventory.md](docs/infrastructure-inventory.md)
- Deployment prerequisites and secrets setup: [docs/deployment-prerequisites.md](docs/deployment-prerequisites.md)
- Secrets inventory: [docs/security-secrets.md](docs/security-secrets.md)
Terraform subtrees:
- Terraform root docs: [infrastructure/terraform/README.md](infrastructure/terraform/README.md)
- Terraform Docker mirror: [infrastructure/terraform/docker/README.md](infrastructure/terraform/docker/README.md)
- Terraform Proxmox inventory: [infrastructure/terraform/proxmox/README.md](infrastructure/terraform/proxmox/README.md)
---
## Operating model
### Docker Compose (runtime authority)
- Compose files under `core/`, `apps/`, and `monitoring/` describe runtime services.
- `services-up.sh` composes the environment by discovering compose files and applying common env/network inputs.
- For service runtime behavior, start from Compose files and `services-up.sh` (not Terraform).
### Terraform (inventory and reconciliation authority)
- Terraform under `infrastructure/terraform/` is used to codify and reconcile existing infrastructure.
- Current repo usage emphasizes **brownfield import-first workflows** and safe reconciliation.
- Terraform captures:
- Proxmox VM configuration for existing VMs.
- Physical host metadata in locals/outputs.
- Documentation-oriented Docker container mirroring (limited, selective).
Terraform here is **not** a replacement for Docker Compose deployment.
---
## Guardrails
- Do not run destructive Terraform commands casually.
- Do not treat generated Terraform config as final without manual review.
- Do not commit real secrets, credentials, or local state.
- Keep one-resource-per-file patterns where already established in Terraform subdirectories.
- Prefer shaping outputs for documentation/tooling consumption over dumping raw provider objects.
See [docs/source-of-truth.md](docs/source-of-truth.md) and [docs/terraform-workflows.md](docs/terraform-workflows.md) for concrete do/don't guidance.
---
## High-level architecture
```mermaid
flowchart TB
@@ -43,4 +102,4 @@ flowchart TB
Prometheus --> Gotify
```
-For a request-flow/network view and architecture notes, see [docs/architecture.md](docs/architecture.md).
+For request-flow and network detail, see [docs/architecture.md](docs/architecture.md).
-37
@@ -1,37 +0,0 @@
# Security Secrets Inventory
This inventory is aligned with `secrets/.env.secrets.example` and documents only the values that are expected to be set in the non-committed secrets env file (`secrets/stack-secrets.env`).
## Secrets expected in `secrets/.env.secrets.example`
| Variable | Used by | Purpose / Notes |
|---|---|---|
| `NEXTCLOUD_DB_USER` | `apps/nextcloud/docker-compose.yml` | Nextcloud database username (non-secret identifier but environment-specific). |
| `NEXTCLOUD_ADMIN_USER` | `apps/nextcloud/docker-compose.yml` | Initial Nextcloud admin username. |
| `NEXTCLOUD_SMTP_FROM_ADDRESS` | `apps/nextcloud/docker-compose.yml` | SMTP sender local-part for outbound mail configuration. |
| `NEXTCLOUD_SMTP_DOMAIN` | `apps/nextcloud/docker-compose.yml` | SMTP sender domain for outbound mail configuration. |
| `NEXTCLOUD_SMTP_NAME` | `apps/nextcloud/docker-compose.yml` | Derived from address + domain in the example file. |
| `PASSBOLT_DB_NAME` | `apps/passbolt/docker-compose.yml` | Passbolt database name. |
| `PASSBOLT_DB_USER` | `apps/passbolt/docker-compose.yml` | Passbolt database username. |
| `PASSBOLT_GPG_SERVER_KEY_FINGERPRINT` | `apps/passbolt/docker-compose.yml` | Passbolt server GPG key fingerprint. |
| `GRAMPSWEB_SECRET_KEY` | `apps/gramps/docker-compose.yml` | Secret key used by Gramps Web for session/security signing. |
| `GRAMPSWEB_EMAIL_HOST_USER` | `apps/gramps/docker-compose.yml` | SMTP username for Gramps outbound email. |
| `GRAMPSWEB_EMAIL_HOST_PASSWORD` | `apps/gramps/docker-compose.yml` | SMTP password for Gramps outbound email. |
| `GOTIFY_DEFAULTUSER_NAME` | `monitoring/gotify/docker-compose.yml` | Gotify default username. |
| `GOTIFY_DEFAULTUSER_PASS` | `monitoring/gotify/docker-compose.yml` | Gotify default user password. |
| `INFLUXDB_INIT_USERNAME` | `monitoring/prometheus/docker-compose.yml` | InfluxDB initial username. |
| `PIHOLE_PASSWORD` | `monitoring/prometheus/docker-compose.yml` | Exporter auth / Pi-hole integration password. |
## Managed outside `.env.secrets.example`
The following sensitive values are intentionally not duplicated in `secrets/.env.secrets.example` because they are provided via Docker secrets (`*_FILE`) or other mounted secret files:
- Database/root passwords for Nextcloud, Passbolt, and supporting services that are wired through Docker secrets.
- Redis runtime password (`--requirepass`) loaded from a Docker secret.
- `DOCKER_INFLUXDB_INIT_PASSWORD` loaded from Docker secret in monitoring.
- Uptime Kuma basic auth password loaded via `password_file` in Prometheus config.
- Core stack secrets injected via env substitution in committed config files, such as:
- `AUTHELIA_JWT_SECRET`
- `AUTHELIA_SESSION_SECRET`
- `AUTHELIA_STORAGE_ENCRYPTION_KEY`
- `CROWDSEC_LAPI_KEY`
+15 -47
@@ -2,13 +2,9 @@
## Overview
-This stack uses **Traefik v3** as the internet-facing ingress for application and operations UIs. Service routing is primarily label-driven from Docker Compose files, with a shared `traefik` bridge network for reverse-proxied traffic and a `monitor` network for internal telemetry components.
+This stack uses **Traefik v3** as internet-facing ingress for application and operations UIs. Service routing is label-driven from Docker Compose files, with shared Docker networks (`traefik`, `monitor`) connecting reverse-proxied and telemetry services.
-TLS is terminated at Traefik using ACME HTTP challenge (`myresolver`), with additional hardening via:
+TLS is terminated at Traefik (ACME HTTP challenge), with hardening via middleware chains, Authelia forward-auth for selected routes, CrowdSec integration, and mTLS options for private-admin paths.
-- a default middleware chain (security headers, CrowdSec bouncer, error pages),
-- Authelia forward-auth middleware on selected routes,
-- mTLS TLS options (`mtls-private-admin`) on private-admin endpoints.
## Network / Request Flow
@@ -18,7 +14,7 @@ flowchart LR
T -->|HTTP->HTTPS redirect| T
T -->|ACME HTTP challenge| LE[Let's Encrypt ACME]
-subgraph TraefikNet["Docker network: traefik (172.21.0.0/16)"]
+subgraph TraefikNet[Docker network: traefik]
A[Authelia] A[Authelia]
CS[CrowdSec LAPI] CS[CrowdSec LAPI]
EP[Error Pages] EP[Error Pages]
@@ -76,51 +72,23 @@ flowchart LR
T --> DSP
```
-## Key Components
+## Key components
-- **Ingress & security plane:** Traefik, Authelia, CrowdSec, Error Pages.
+- **Ingress/security plane:** Traefik, Authelia, CrowdSec, Error Pages.
-- **User-facing applications:** Nextcloud, Passbolt, Gitea, Gramps Web (Family Tree), SearXNG.
+- **User-facing apps:** Nextcloud, Passbolt, Gitea, Gramps Web, SearXNG.
- **Monitoring/ops:** Prometheus, Grafana, InfluxDB, Node-RED, Uptime Kuma, Portainer, Gotify.
-- **Support plane:** Docker Socket Proxy (shared Docker API gateway for Traefik/automation/ops tools).
+- **Support plane:** Docker Socket Proxy for controlled Docker API access.
-## Remote Hosts Observed
+## Relationship to Terraform inventory
-Prometheus scrape targets indicate additional infrastructure outside the local Compose deployment, including hostnames for:
+Terraform in `infrastructure/terraform/` captures infrastructure inventory and reconciliation state for Proxmox VMs, physical host metadata, and selected Docker mirrors.
-- `raspberrypi.tail13f623.ts.net`
+Use architecture docs together with:
-- `pve.sweet.home`
-- `pbs.sweet.home`
-- `pihole`
-- `server`
-- `nix-cache`
-- `kuma.lan.ddnsgeek.com`
-## Runtime Inventory Input
+- [docs/source-of-truth.md](source-of-truth.md)
+- [docs/terraform-workflows.md](terraform-workflows.md)
+- [docs/infrastructure-inventory.md](infrastructure-inventory.md)
-Prometheus runtime inventory snapshots are exported with `scripts/export_prometheus_inventory.py` and committed under `docs/runtime/`. The latest human-readable summary is in [docs/prometheus-inventory.md](prometheus-inventory.md).
+## Notes on runtime vs declared state
-These artifacts are an observed-runtime input for architecture diagrams/docs and should be combined with repository configuration, not treated as sole source of truth.
+Runtime scrape targets and health signals are useful observed-state inputs, but they do not replace declared config authority from Compose/Terraform sources.
-## Assumptions / Unknowns
-The repository provides enough detail to infer **container-level architecture**, but not full **Proxmox host/VM topology**.
-Unknowns (left intentionally as placeholders):
-- **Proxmox physical hosts:** _unknown from repo contents._
-- **VM/LXC inventory and placement:** _unknown from repo contents._
-- **Which services run on which Proxmox node(s):** _unknown from repo contents._
-- **Inter-host VLAN/subnet layout beyond Docker bridges:** _unknown from repo contents._
-If you want, this section can be replaced with a concrete Proxmox topology once you add an inventory source (e.g., Terraform, Ansible inventory, or a diagram export).
-### Data sources
-- Existing repository architecture docs for declared topology
-### Notes from inventory
-- The `up` query indicates scrape success from Prometheus perspective only.
-- Use static repository architecture docs and deployment configs with this runtime export for complete diagrams.
-<!-- END GENERATED PROMETHEUS SECTION -->
+49
@@ -0,0 +1,49 @@
# Deployment Prerequisites
Before running compose operations, provision local secret material.
## 1) Create non-committed secret env file
```bash
cp secrets/.env.secrets.example secrets/stack-secrets.env
chmod 600 secrets/stack-secrets.env
```
## 2) Create required Docker secret files
All files below are expected locally and are gitignored:
- `secrets/nextcloud_db_root_password.txt`
- `secrets/nextcloud_db_password.txt`
- `secrets/nextcloud_admin_password.txt`
- `secrets/nextcloud_smtp_password.txt`
- `secrets/nextcloud_redis_password.txt`
- `secrets/passbolt_db_password.txt`
- `secrets/influxdb_init_password.txt`
- `secrets/prometheus_kuma_basic_auth_password.txt`
Recommended permissions:
```bash
chmod 600 secrets/*.txt
```
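The repository does not prescribe how the secret values themselves are produced. As one hedged example, random values for the files listed above could be generated like this (file names taken from the list; the generation method is an assumption, not repo policy):

```bash
# One possible way to create the secret files listed above with random
# values; run from the repository root. Not prescribed by this repo.
mkdir -p secrets
umask 077  # new files are created without group/other permissions
for name in nextcloud_db_root_password nextcloud_db_password \
            nextcloud_admin_password nextcloud_smtp_password \
            nextcloud_redis_password passbolt_db_password \
            influxdb_init_password prometheus_kuma_basic_auth_password; do
  # 32 random bytes, base64-encoded, stripped of shell-awkward characters
  head -c 32 /dev/urandom | base64 | tr -d '=+/\n' > "secrets/${name}.txt"
done
```

Any generation method works as long as the resulting files stay local and mode 600.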
## 3) Validate composed configuration
Use the repository composition entrypoint:
```bash
./services-up.sh --profile all config
```
This confirms compose rendering with shared env/network inputs before any runtime operation.
## 4) Rotate previously committed credentials
If migrating from older states where secrets were committed, rotate upstream values immediately (DB credentials, app passwords, auth keys, and API tokens).
## Related docs
- [`./security-secrets.md`](./security-secrets.md)
- [`./docker-environment.md`](./docker-environment.md)
- [`./source-of-truth.md`](./source-of-truth.md)
+61
@@ -0,0 +1,61 @@
# Docker Environment Composition
This repo uses multi-file Docker Compose with a wrapper script as the composition entrypoint.
## Composition source of truth
`services-up.sh` is the composition authority for this repository.
It:
1. discovers compose files under `apps/`, `monitoring/`, and `core/`,
2. prepends shared baseline files,
3. applies `default-environment.env` and `secrets/stack-secrets.env`,
4. invokes `docker compose` with a stable project name.
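The discovery-and-compose pattern in the steps above can be sketched roughly as follows. This is a simplified illustration, not the script itself: the real `services-up.sh` is authoritative, and the `homelab` project name is hypothetical.

```bash
# Simplified sketch of the composition pattern; prints the final
# docker compose command instead of executing it.
set -euo pipefail

args=(-f default-network.yml)   # shared baseline file is prepended first
while IFS= read -r f; do
  args+=(-f "$f")               # each discovered compose file
done < <(find core apps monitoring -name 'docker-compose.y*ml' 2>/dev/null | sort)

# "homelab" is a hypothetical project name used only for illustration.
echo docker compose --project-name homelab \
  --env-file default-environment.env \
  --env-file secrets/stack-secrets.env \
  "${args[@]}" config
```

Running the printed command by hand should approximate what `./services-up.sh --profile all config` assembles, though flag details may differ.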
Because of this, when validating or understanding runtime composition, prefer running:
```bash
./services-up.sh --profile all config
```
## Inputs used by `services-up.sh`
- `default-network.yml`
- discovered `docker-compose.yml` / `docker-compose.yaml` files under `core/`, `apps/`, `monitoring/`
- `default-environment.env`
- `secrets/stack-secrets.env` (local, not committed)
## Typical workflows
### Validate final composed model
```bash
./services-up.sh --profile all config
```
Use this to review merged services, networks, volumes, profiles, and environment substitution.
### Validate one compose file directly
```bash
docker compose -f apps/nextcloud/docker-compose.yml config
```
Use this when focused on one service family.
### Deployment prerequisites
Before runtime operations, follow [deployment-prerequisites.md](deployment-prerequisites.md) to create required local secret files.
## What not to do
- Do not treat archived compose files in `archive/` as active runtime definitions.
- Do not hardcode secrets in committed compose files.
- Do not bypass `services-up.sh` when trying to understand full active composition.
## Related docs
- [docs/source-of-truth.md](source-of-truth.md)
- [docs/repo-structure.md](repo-structure.md)
- [docs/architecture.md](architecture.md)
+36
@@ -0,0 +1,36 @@
# Documentation Strategy
This repository's documentation should help both humans and Codex agents make safe, accurate changes.
## Principles
1. **Authority first**: identify authoritative files clearly (Compose + `services-up.sh` for runtime; Terraform for structured inventory/reconciliation).
2. **Task-oriented docs**: include practical workflow steps, not only conceptual text.
3. **No speculation**: document implemented behavior, not aspirational designs without code evidence.
4. **Cross-linking**: root README and topic docs should point to each other for discoverability.
5. **Safety clarity**: explicitly note what should not be committed or applied casually.
## Documentation quality checklist
Before merging doc changes, check:
- Is this statement verifiable from current repo files?
- Does this conflict with existing docs?
- Does this clarify source-of-truth boundaries?
- Does this improve a real workflow for contributors/Codex?
- Are sensitive details excluded?
## Recommended update cadence
Update docs when any of these change:
- `services-up.sh` composition behavior,
- major Compose directory or profile structure,
- Terraform workflow conventions,
- inventory output shapes,
- secret handling conventions.
## Audience-specific outcomes
- Humans should quickly understand how to operate the repo safely.
- Future Codex runs should quickly identify authoritative files, guardrails, and reconciliation workflows.
+51
@@ -0,0 +1,51 @@
# Infrastructure Inventory Model
This repository treats infrastructure inventory as first-class documentation.
## Intent
The goal is not only deployment configuration, but also a maintainable map of:
- what hosts/VMs exist,
- how they are identified,
- what selected runtime objects are mirrored into Terraform,
- what outputs can be consumed by docs and future tooling.
## Current inventory sources
### 1) Terraform Proxmox layer
`infrastructure/terraform/proxmox/` contains imported/reconciled VM resources and local metadata for physical hosts.
This is currently the most structured host/VM inventory in the repo.
### 2) Terraform Docker layer
`infrastructure/terraform/docker/` contains selective Docker container resources used as documentation-oriented mirrors.
These resources should match existing running containers, not redefine runtime composition strategy.
### 3) Compose runtime definitions
Compose files define intended service runtime composition, networking, labels, and integration.
### 4) Architecture docs
`docs/architecture.md` provides a human-readable topology view based on repository configuration and observed runtime signals.
## Output shaping expectations
When adding Terraform outputs for documentation/tooling:
- prefer concise inventory maps/lists,
- include stable identifiers and roles,
- avoid raw giant provider objects where possible,
- include descriptions so future consumers understand intent.
## Limitations today
- No full generated inventory document pipeline is present yet.
- Some Terraform files still include generated boilerplate comments requiring ongoing cleanup.
- Ansible/NixOS operational layers are not yet implemented in a way that provides authoritative inventory in this repo.
These limitations are expected for the current adoption stage.
+37
@@ -0,0 +1,37 @@
# Repository Structure
This page explains where to find authoritative files quickly.
## Top-level directories
- `core/` — core platform/security services (Traefik, Authelia, CrowdSec, error pages).
- `apps/` — user/business applications (Nextcloud, Passbolt, Gitea, Gramps, SearXNG).
- `monitoring/` — observability and operational tooling (Prometheus, Grafana, InfluxDB, Node-RED, etc.).
- `infrastructure/terraform/` — brownfield Terraform inventory/reconciliation layers.
- `docs/` — repository-level architecture and workflow documentation.
- `archive/` — historical compose/config artifacts not part of active runtime composition.
- `secrets/` — local secret material and templates; never commit real values.
## Key top-level files
- `services-up.sh` — runtime composition entrypoint for multi-compose environment.
- `default-network.yml` — shared docker network definitions used across compose files.
- `default-environment.env` — non-secret default env values for compose rendering.
- `docs/deployment-prerequisites.md` — prerequisite setup before runtime operations.
- `docs/security-secrets.md` — secrets documentation and inventory model.
## Terraform layout
- `infrastructure/terraform/README.md` — Terraform purpose and boundaries in this repo.
- `infrastructure/terraform/proxmox/` — imported/reconciled Proxmox VM resources and host metadata.
- `infrastructure/terraform/docker/` — selective Docker container documentation mirrors.
- `infrastructure/terraform/bootstrap/` — backend/provider bootstrap scaffolding.
- `infrastructure/terraform/scripts/reconcile_from_plan.sh` — helper for `terraform plan -generate-config-out` reconciliation workflow.
## Fast path for future Codex runs
1. Read [README.md](../README.md).
2. Read [docs/source-of-truth.md](source-of-truth.md).
3. Read [docs/docker-environment.md](docker-environment.md).
4. Read [docs/terraform-workflows.md](terraform-workflows.md).
5. Only then edit Compose/Terraform files.
+52
@@ -0,0 +1,52 @@
# Security Secrets
## Overview
This page explains how secret material is organized in this repository and where to find both human-readable and machine-readable references.
For machine-readable inventory metadata, use [`../secrets/inventory.json`](../secrets/inventory.json).
## Scope and authority
- Canonical example template: [`../secrets/.env.secrets.example`](../secrets/.env.secrets.example)
- Runtime-loaded secret env file (local, non-committed): `../secrets/stack-secrets.env`
- Docker secret files (local, non-committed): `../secrets/*.txt`
Treat the example template as the canonical shape for expected environment variables.
## Secret material types
1. **Template variables in `.env.secrets.example`**
- Document expected variable names and usage expectations.
2. **Local runtime env file (`stack-secrets.env`)**
- Holds local runtime secret values loaded during compose rendering.
3. **Local Docker secret files (`*.txt`)**
- Hold password/token material consumed via `*_FILE` style configuration.
4. **Externally managed secret inputs**
- Some values are managed outside shared templates and provided through file mounts or environment substitution.
## Machine-readable inventory
- Primary automation source: [`../secrets/inventory.json`](../secrets/inventory.json)
- Human guidance source: this page
Automation should parse `secrets/inventory.json` directly rather than scraping Markdown tables.
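For illustration only, parsing such a JSON inventory might look like the sketch below. The sample object and its `secrets`/`name`/`kind` fields are hypothetical; the real schema is defined by `secrets/inventory.json` itself.

```bash
# Illustration only: the actual schema of secrets/inventory.json is
# defined in the repo; this sample object and its fields are hypothetical.
sample='{"secrets": [{"name": "example_password", "kind": "docker-secret-file"}]}'
printf '%s' "$sample" | python3 -c '
import json, sys

data = json.load(sys.stdin)
for entry in data.get("secrets", []):
    # One line per secret entry, for downstream tooling.
    print(entry["name"], entry.get("kind", "unknown"))
'
```

The point is structural: tooling reads the JSON directly instead of scraping Markdown tables.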
## Setup and deployment prerequisites
Before running compose operations, follow [`./deployment-prerequisites.md`](./deployment-prerequisites.md).
## Commit safety rules
Never commit:
- `secrets/stack-secrets.env`
- real `secrets/*.txt` secret files
- real Terraform `.tfvars` files containing credentials
- Terraform state files with sensitive runtime metadata
## Related docs
- [`./deployment-prerequisites.md`](./deployment-prerequisites.md)
- [`./docker-environment.md`](./docker-environment.md)
- [`./source-of-truth.md`](./source-of-truth.md)
+47
@@ -0,0 +1,47 @@
# Source-of-Truth Boundaries
This repository has multiple layers. Knowing the authority for each layer prevents accidental drift.
## Boundary summary
| Layer | Primary authority | Purpose |
|---|---|---|
| Application/runtime container composition | `services-up.sh` + Compose files under `core/`, `apps/`, `monitoring/` | What runs in the Docker environment and how services are wired. |
| Docker shared baseline inputs | `default-network.yml`, `default-environment.env`, `secrets/stack-secrets.env` | Shared network/env material applied during compose rendering. |
| Infrastructure inventory and reconciliation | Terraform under `infrastructure/terraform/` | Codified inventory of existing infrastructure and relationships, especially Proxmox VMs and selected Docker mirrors. |
| Secret policy and inventory | `docs/security-secrets.md` + `secrets/inventory.json` + local secret files in `secrets/` | What secrets exist, where they are expected, and what automation should parse. |
## Practical meaning
### Docker runtime decisions
Change Compose files and `services-up.sh` when changing runtime behavior.
Do **not** assume Terraform Docker resources are the deployment source for day-to-day service runtime.
### Infrastructure inventory decisions
Use Terraform when documenting/reconciling existing:
- Proxmox VM config and identifiers.
- Physical host metadata.
- Select Docker container details that are intentionally mirrored.
Do **not** treat Terraform as a full replacement for Compose operations in this repo.
## Declared config vs observed/runtime state
- **Declared config**: files in this repository (Compose, Terraform, docs).
- **Observed/runtime state**: live Docker/Proxmox reality and Terraform state snapshots.
Brownfield workflows reconcile these two safely and incrementally.
## Guardrails for contributors and Codex
- Do not mass-import or mass-reconcile everything at once.
- Keep imports/reconciliation scoped to one object (or small set) at a time.
- Keep `ignore_changes` surgical and justified.
- Prefer shaped outputs (inventory-ready) over raw provider object dumps.
- Do not commit `.tfstate`, real `.tfvars`, or real secret files.
See [docs/terraform-workflows.md](terraform-workflows.md) for step-by-step procedures.
+78
@@ -0,0 +1,78 @@
# Terraform Workflows (Brownfield / Reconciliation)
Terraform in this repository is primarily used for **importing and reconciling existing infrastructure**.
This is a brownfield workflow: real infrastructure exists first, then code/state are brought into alignment.
## Core workflow pattern
1. Define/import one existing object.
2. Inspect current provider state.
3. Reconcile hand-maintained `.tf` configuration.
4. Use targeted `ignore_changes` only when necessary.
5. Iterate until plan is sane/no-op for intended scope.
6. Avoid casual apply operations.
## Docker mirror workflow (documentation-oriented)
Directory: `infrastructure/terraform/docker/`
Use when intentionally mirroring selected running containers as structured documentation.
### Steps
1. Add minimal `docker_container` resource block (or uncomment/import-ready block).
2. Add `import {}` block or run `terraform import` for the container.
3. Run plan and inspect generated/state values.
4. Keep only meaningful, maintainable arguments in hand-edited files.
5. Use generated files as draft input, not final truth.
6. Re-run plan until intended scope is clean.
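Steps 1-3 above might look like the following commands. The resource address and file name are hypothetical, and the container ID placeholder must be filled in for a real container:

```bash
# Hypothetical resource address and file name; replace the ID placeholder
# with the real container ID (e.g. docker inspect -f '{{.Id}}' traefik).
cat > import_traefik.tf <<'EOF'
import {
  to = docker_container.traefik
  id = "<container-id>"
}
EOF

# Then (step 3) inspect what Terraform would generate for it:
#   terraform plan -generate-config-out=zz_generated_from_plan.auto.tf
```

Keep only the meaningful arguments from the generated draft, per steps 4-5.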
## Proxmox VM workflow
Directory: `infrastructure/terraform/proxmox/`
Use for existing Proxmox VMs and metadata reconciliation.
### Steps
1. Add/import one VM resource at a time.
2. Confirm provider import ID format and vm/node mapping.
3. Inspect with `terraform state show` / plan output.
4. Move useful arguments into stable hand-maintained files.
5. Keep lifecycle ignore rules narrow and explicit.
6. Iterate per VM until plan stabilizes.
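As a hedged sketch of steps 1-3: the resource type and import ID format depend on which Proxmox provider the repo uses, so the address and `pve/100` ID below are illustrative only.

```bash
# Illustrative only: confirm the provider's documented resource type and
# import ID format (here assumed to be <node>/<vmid>) before running.
terraform import proxmox_virtual_environment_vm.example pve/100
terraform state show proxmox_virtual_environment_vm.example
```

The `state show` output is what gets reconciled into the hand-maintained files in step 4.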
## Physical host metadata workflow
Physical host metadata currently lives in Proxmox Terraform locals/outputs and is used as documentation inventory context.
When updating:
1. update locals with factual host metadata,
2. ensure outputs remain documentation-friendly,
3. avoid leaking sensitive internal data not needed for repository goals.
## Generated config guidance
`infrastructure/terraform/scripts/reconcile_from_plan.sh` can generate Terraform draft configuration via `-generate-config-out`.
Treat generated files as:
- a starting point,
- reviewed manually,
- reduced to meaningful attributes,
- reformatted and split into maintainable files.
## Safety reminders
- Do not commit `.tfstate*` or real `.tfvars`.
- Do not commit credentials.
- Do not run `terraform apply`/`destroy` casually.
- Keep changes incremental and reviewable.
## Related docs
- [docs/source-of-truth.md](source-of-truth.md)
- [docs/infrastructure-inventory.md](infrastructure-inventory.md)
- [infrastructure/terraform/README.md](../infrastructure/terraform/README.md)
+41 -35
@@ -1,54 +1,60 @@
# Terraform in This Repository

Terraform here is used as a **structured inventory + reconciliation layer** for existing infrastructure.
It does **not** replace Docker Compose as runtime deployment authority.

## What Terraform is currently used for

- Proxmox VM import/reconciliation for existing VMs.
- Physical host metadata represented in Terraform locals/outputs.
- Select Docker container mirror resources for documentation-oriented tracking.
- Outputs that can support documentation and later downstream tooling.

## What Terraform is not used for (today)

- Replacing `services-up.sh` / Compose for day-to-day app runtime orchestration.
- Broad, immediate greenfield provisioning of the whole stack.
- Casual `apply` operations across all infrastructure.

## Directory map

- `proxmox/` — imported/reconciled VM resources and host metadata outputs.
- `docker/` — selective Docker container import/mirror resources.
- `bootstrap/` — backend/provider bootstrap scaffolding.
- `modules/` — placeholder module directories for future stable abstractions.
- `scripts/reconcile_from_plan.sh` — helper to convert generated plan config into reviewable draft files.

## Brownfield workflow standard

1. Import one existing object.
2. Inspect state/plan output.
3. Reconcile hand-maintained Terraform code.
4. Keep `ignore_changes` narrowly scoped.
5. Iterate to a no-op/sane plan for the intended scope.
6. Avoid casual apply.

See detailed steps in [../../docs/terraform-workflows.md](../../docs/terraform-workflows.md).

## Safe validation commands

From Terraform directories, the preferred checks are:

```bash
terraform fmt -check -recursive
terraform init -backend=false -input=false
terraform validate
```

## Secrets and state safety

- Do not commit `.tfstate*`.
- Do not commit real `.tfvars` values.
- Keep credentials in local, untracked inputs only.

## Related docs

- [../../docs/source-of-truth.md](../../docs/source-of-truth.md)
- [../../docs/infrastructure-inventory.md](../../docs/infrastructure-inventory.md)
- [docker/README.md](docker/README.md)
- [proxmox/README.md](proxmox/README.md)
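The brownfield steps above can be sketched with Terraform 1.5+ declarative `import {}` blocks; this is a minimal illustration, and `docker_container.example_app` plus the container name are hypothetical rather than resources in this repo:

```hcl
# Hypothetical import-first sketch (names are illustrative, not from this repo).
import {
  # Address of the hand-maintained resource the state entry should bind to.
  to = docker_container.example_app

  # Container ID or name as reported by `docker ps`.
  id = "example-app"
}

# Draft config for review can then be generated instead of hand-typed:
#   terraform plan -generate-config-out=zz_generated_from_plan.auto.tf
```

The generated file is a draft: reduce it to meaningful, stable arguments before any apply, as the workflow standard above requires.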
+29 -34
@@ -1,48 +1,43 @@
# Terraform Docker Mirror Layer

This directory tracks selected existing Docker containers in Terraform for inventory/documentation purposes.

## Purpose

- Mirror specific running containers as Terraform resources.
- Reconcile imported state into maintainable code.
- Produce structured outputs/reminders that support documentation workflows.

## Boundary with Docker Compose

Docker Compose + `services-up.sh` remain the runtime composition authority.
Terraform resources here are **not** the primary day-to-day deployment mechanism for app services.

## Current contents

- `main.tf` — import-first workflow notes and minimal scaffolding.
- `searxng-webapp.tf` — generated/reconciled example container resource.
- `outputs.tf` — documentation-oriented reminders/outputs.
- `terraform.tfvars.example` — safe template for local values.

## Import/reconciliation workflow

1. Start with one existing container.
2. Import with an `import {}` block or `terraform import`.
3. Inspect state / generated config.
4. Reduce generated attributes to meaningful, stable arguments.
5. Keep lifecycle `ignore_changes` narrow and justified.
6. Iterate until the plan is clean for the intended resource.

## Guardrails

- Do not attempt to mirror all containers in one pass.
- Do not commit local state or real credentials.
- Treat generated config as draft input that needs review.

## Related docs

- [../README.md](../README.md)
- [../../../docs/source-of-truth.md](../../../docs/source-of-truth.md)
- [../../../docs/terraform-workflows.md](../../../docs/terraform-workflows.md)
@@ -0,0 +1,14 @@
resource "docker_container" "authelia" {
name = local.docker_containers["authelia"].container_name
image = local.docker_containers["authelia"].image
restart = local.docker_containers["authelia"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,613 @@
locals {
docker_containers = {
"authelia" = {
terraform_resource = "docker_container.authelia"
compose_project = "core"
compose_service = "authelia"
compose_file = "core/authelia/docker-compose.yml"
container_name = "authelia"
image = "authelia/authelia"
image_source = "declared_image"
restart_policy = "always"
network_mode = null
networks = ["traefik"]
mounts = ["bind:/home/nixos/docker/core/authelia->/config"]
published_ports = []
build_context = "/home/nixos/docker/core/authelia"
build_dockerfile = "Dockerfile"
useful_labels = {
"traefik.enable" = "true"
"traefik.http.middlewares.authelia.forwardauth.address" = "http://authelia:9091/api/verify?rd=https://auth.lan.ddnsgeek.com/"
"traefik.http.middlewares.authelia.forwardauth.authResponseHeaders" = "Remote-User,Remote-Groups"
"traefik.http.middlewares.authelia.forwardauth.maxResponseBodySize" = "2097152"
"traefik.http.middlewares.authelia.forwardauth.trustForwardHeader" = "true"
"traefik.http.routers.authelia.entrypoints" = "websecure"
"traefik.http.routers.authelia.rule" = "Host(`auth.lan.ddnsgeek.com`)"
"traefik.http.routers.authelia.tls" = "true"
"traefik.http.routers.authelia.tls.certresolver" = "myresolver"
}
}
"crowdsec" = {
terraform_resource = "docker_container.crowdsec"
compose_project = "core"
compose_service = "crowdsec"
compose_file = "core/crowdsec/docker-compose.yml"
container_name = "crowdsec"
image = "core-crowdsec"
image_source = "compose_build_inferred"
restart_policy = "always"
network_mode = null
networks = ["traefik"]
mounts = ["bind:/home/nixos/docker/core/crowdsec/logs->/logs:ro", "bind:/home/nixos/docker/core/crowdsec/data->/var/lib/crowdsec/data", "bind:/home/nixos/docker/core/crowdsec/config->/etc/crowdsec"]
published_ports = []
build_context = "/home/nixos/docker/core/crowdsec"
build_dockerfile = "Dockerfile"
useful_labels = {}
}
"docker-socket-proxy" = {
terraform_resource = "docker_container.docker_socket_proxy"
compose_project = "core"
compose_service = "docker-socket-proxy"
compose_file = "monitoring/docker-socket-proxy/docker-compose.yml"
container_name = "docker-socket-proxy"
image = "tecnativa/docker-socket-proxy:latest"
image_source = "declared_image"
restart_policy = "unless-stopped"
network_mode = null
networks = ["monitor", "traefik"]
mounts = ["bind:/var/run/docker.sock->/var/run/docker.sock:ro"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {}
}
"docker-update-exporter" = {
terraform_resource = "docker_container.docker_update_exporter"
compose_project = "core"
compose_service = "docker-update-exporter"
compose_file = "monitoring/docker-exporter/docker-compose.yml"
container_name = "docker-update-exporter"
image = "core-docker-update-exporter"
image_source = "compose_build_inferred"
restart_policy = "unless-stopped"
network_mode = null
networks = ["monitor"]
mounts = ["bind:/root/.docker/config.json->/root/.docker/config.json:ro", "bind:/home/nixos/docker/monitoring/docker-exporter/data->/data", "bind:/home/nixos/docker->/compose:ro"]
published_ports = []
build_context = "/home/nixos/docker/monitoring/docker-exporter"
build_dockerfile = "Dockerfile"
useful_labels = {}
}
"error-pages" = {
terraform_resource = "docker_container.error_pages"
compose_project = "core"
compose_service = "error-pages"
compose_file = "core/error-pages/docker-compose.yml"
container_name = "error-pages"
image = "tarampampam/error-pages:3"
image_source = "declared_image"
restart_policy = "always"
network_mode = null
networks = ["traefik"]
mounts = []
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {
"traefik.enable" = "true"
"traefik.http.middlewares.error-pages-middleware.errors.query" = "/{status}.html"
"traefik.http.middlewares.error-pages-middleware.errors.service" = "error-pages-service"
"traefik.http.middlewares.error-pages-middleware.errors.status" = "400-599"
"traefik.http.routers.error-pages-router.entrypoints" = "web"
"traefik.http.routers.error-pages-router.middlewares" = "error-pages-middleware"
"traefik.http.routers.error-pages-router.rule" = "HostRegexp(`{host:.+}`)"
"traefik.http.services.error-pages-service.loadbalancer.server.port" = "8080"
}
}
"gitea" = {
terraform_resource = "docker_container.gitea"
compose_project = "core"
compose_service = "gitea"
compose_file = "apps/gitea/docker-compose.yml"
container_name = "gitea"
image = "gitea/gitea:latest"
image_source = "declared_image"
restart_policy = "always"
network_mode = null
networks = ["traefik"]
mounts = ["bind:/home/nixos/docker/apps/gitea/data->/data"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {
"traefik.docker.network" = "core_traefik"
"traefik.enable" = "true"
"traefik.http.routers.gitea.entrypoints" = "websecure"
"traefik.http.routers.gitea.rule" = "Host(`gitea.lan.ddnsgeek.com`)"
"traefik.http.routers.gitea.tls" = "true"
"traefik.http.routers.gitea.tls.certresolver" = "myresolver"
"traefik.http.services.gitea.loadbalancer.server.port" = "3000"
}
}
"gotify" = {
terraform_resource = "docker_container.gotify"
compose_project = "core"
compose_service = "gotify"
compose_file = "monitoring/gotify/docker-compose.yml"
container_name = "gotify"
image = "gotify/server:latest"
image_source = "declared_image"
restart_policy = "always"
network_mode = null
networks = ["traefik"]
mounts = ["bind:/home/nixos/docker/monitoring/gotify/data->/app/data"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {
"traefik.docker.network" = "core_traefik"
"traefik.enable" = "true"
"traefik.http.routers.gotify.entrypoints" = "websecure"
"traefik.http.routers.gotify.rule" = "Host(`gotify.lan.ddnsgeek.com`)"
"traefik.http.routers.gotify.tls.certresolver" = "myresolver"
"traefik.http.routers.gotify.tls.options" = "mtls-private-admin@file"
"traefik.http.services.gotify.loadbalancer.server.port" = "80"
}
}
"grafana" = {
terraform_resource = "docker_container.grafana"
compose_project = "core"
compose_service = "grafana"
compose_file = "monitoring/grafana/docker-compose.yml"
container_name = "grafana"
image = "grafana/grafana:latest"
image_source = "declared_image"
restart_policy = "unless-stopped"
network_mode = null
networks = ["monitor", "traefik"]
mounts = ["bind:/home/nixos/docker/monitoring/grafana/data->/var/lib/grafana"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {
"traefik.docker.network" = "core_traefik"
"traefik.enable" = "true"
"traefik.http.routers.grafana.entrypoints" = "websecure"
"traefik.http.routers.grafana.rule" = "Host(`grafana.lan.ddnsgeek.com`)"
"traefik.http.routers.grafana.tls.certresolver" = "myresolver"
"traefik.http.routers.grafana.tls.options" = "mtls-private-admin@file"
"traefik.http.services.grafana.loadbalancer.server.port" = "3000"
}
}
"gramps-redis" = {
terraform_resource = "docker_container.gramps_redis"
compose_project = "core"
compose_service = "gramps-redis"
compose_file = "apps/gramps/docker-compose.yml"
container_name = "gramps-redis"
image = "valkey/valkey:8-alpine"
image_source = "declared_image"
restart_policy = "always"
network_mode = null
networks = ["gramps"]
mounts = []
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {}
}
"gramps-web" = {
terraform_resource = "docker_container.gramps_web"
compose_project = "core"
compose_service = "grampsweb"
compose_file = "apps/gramps/docker-compose.yml"
container_name = "gramps-web"
image = "ghcr.io/gramps-project/grampsweb:latest"
image_source = "declared_image"
restart_policy = "always"
network_mode = null
networks = ["gramps", "traefik"]
mounts = ["bind:/home/nixos/docker/apps/gramps/data/users->/app/users", "bind:/home/nixos/docker/apps/gramps/data/index->/app/indexdir", "bind:/home/nixos/docker/apps/gramps/data/thumbnail_cache->/app/thumbnail_cache", "bind:/home/nixos/docker/apps/gramps/data/cache->/app/cache", "bind:/home/nixos/docker/apps/gramps/data/secret->/app/secret", "bind:/home/nixos/docker/apps/gramps/data/db->/root/.gramps/grampsdb", "bind:/home/nixos/docker/apps/gramps/data/media->/app/media", "bind:/home/nixos/docker/apps/gramps/data/tmp->/tmp"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {
"traefik.docker.network" = "core_traefik"
"traefik.enable" = "true"
"traefik.http.routers.gramps.entrypoints" = "websecure"
"traefik.http.routers.gramps.rule" = "Host(`familytree.lan.ddnsgeek.com`)"
"traefik.http.routers.gramps.tls.certresolver" = "myresolver"
"traefik.http.services.gramps.loadbalancer.server.port" = "5000"
}
}
"gramps-web-celery" = {
terraform_resource = "docker_container.gramps_web_celery"
compose_project = "core"
compose_service = "grampsweb_celery"
compose_file = "apps/gramps/docker-compose.yml"
container_name = "gramps-web-celery"
image = "ghcr.io/gramps-project/grampsweb:latest"
image_source = "declared_image"
restart_policy = "always"
network_mode = null
networks = ["gramps"]
mounts = ["bind:/home/nixos/docker/apps/gramps/data/users->/app/users", "bind:/home/nixos/docker/apps/gramps/data/index->/app/indexdir", "bind:/home/nixos/docker/apps/gramps/data/thumbnail_cache->/app/thumbnail_cache", "bind:/home/nixos/docker/apps/gramps/data/cache->/app/cache", "bind:/home/nixos/docker/apps/gramps/data/secret->/app/secret", "bind:/home/nixos/docker/apps/gramps/data/db->/root/.gramps/grampsdb", "bind:/home/nixos/docker/apps/gramps/data/media->/app/media", "bind:/home/nixos/docker/apps/gramps/data/tmp->/tmp"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {}
}
"influxdb" = {
terraform_resource = "docker_container.influxdb"
compose_project = "core"
compose_service = "influxdb"
compose_file = "monitoring/influxdb/docker-compose.yml"
container_name = "influxdb"
image = "influxdb:2.7"
image_source = "declared_image"
restart_policy = "unless-stopped"
network_mode = null
networks = ["monitor", "traefik"]
mounts = ["bind:/home/nixos/docker/monitoring/influxdb->/var/lib/influxdb2"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {
"traefik.docker.network" = "core_traefik"
"traefik.enable" = "true"
"traefik.http.routers.influxdb.entrypoints" = "websecure"
"traefik.http.routers.influxdb.middlewares" = "authelia"
"traefik.http.routers.influxdb.rule" = "Host(`influxdb.lan.ddnsgeek.com`)"
"traefik.http.routers.influxdb.tls.certresolver" = "myresolver"
"traefik.http.routers.influxdb.tls.options" = "mtls-private-admin@file"
"traefik.http.services.influxdb.loadbalancer.server.port" = "8086"
}
}
"monitor-kuma" = {
terraform_resource = "docker_container.monitor_kuma"
compose_project = "core"
compose_service = "monitor-kuma"
compose_file = "monitoring/uptime-kuma/docker-compose.yml"
container_name = "monitor-kuma"
image = "louislam/uptime-kuma:2.1.1"
image_source = "declared_image"
restart_policy = "always"
network_mode = null
networks = ["monitor", "traefik"]
mounts = ["bind:/home/nixos/docker/monitoring/uptime-kuma/data->/app/data"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {
"traefik.docker.network" = "core_traefik"
"traefik.enable" = "true"
"traefik.http.routers.monitor.entrypoints" = "websecure"
"traefik.http.routers.monitor.rule" = "Host(`monitor-kuma.lan.ddnsgeek.com`)"
"traefik.http.routers.monitor.tls" = "true"
"traefik.http.routers.monitor.tls.certresolver" = "myresolver"
"traefik.http.routers.monitor.tls.options" = "mtls-private-admin@file"
"traefik.http.services.monitor.loadbalancer.server.port" = "3001"
}
}
"mtls-bridge" = {
terraform_resource = "docker_container.mtls_bridge"
compose_project = "core"
compose_service = "mtls-bridge"
compose_file = "monitoring/mtls-bridge/docker-compose.yml"
container_name = "mtls-bridge"
image = "core-mtls-bridge"
image_source = "compose_build_inferred"
restart_policy = "unless-stopped"
network_mode = null
networks = ["monitor", "traefik"]
mounts = ["bind:/home/nixos/docker/core/traefik/certs->/certs:ro"]
published_ports = []
build_context = "/home/nixos/docker/monitoring/mtls-bridge"
build_dockerfile = "Dockerfile"
useful_labels = {
"traefik.docker.network" = "core_traefik"
"traefik.enable" = "true"
"traefik.http.middlewares.mtls-bridge-auth.basicauth.users" = ""
"traefik.http.middlewares.mtls-bridge-cors.headers.accesscontrolallowcredentials" = "true"
"traefik.http.middlewares.mtls-bridge-cors.headers.accesscontrolallowheaders" = "authorization,content-type,x-grafana-action,x-grafana-device-id"
"traefik.http.middlewares.mtls-bridge-cors.headers.accesscontrolallowmethods" = "GET,POST,PUT,PATCH,DELETE,OPTIONS"
"traefik.http.middlewares.mtls-bridge-cors.headers.accesscontrolalloworiginlist" = "https://grafana.lan.ddnsgeek.com"
"traefik.http.middlewares.mtls-bridge-cors.headers.addvaryheader" = "true"
"traefik.http.routers.mtls-bridge-preflight.entrypoints" = "websecure"
"traefik.http.routers.mtls-bridge-preflight.middlewares" = "mtls-bridge-cors"
"traefik.http.routers.mtls-bridge-preflight.priority" = "100"
"traefik.http.routers.mtls-bridge-preflight.rule" = "Host(`mtls-bridge.lan.ddnsgeek.com`) && Method(`OPTIONS`)"
"traefik.http.routers.mtls-bridge-preflight.service" = "mtls-bridge"
"traefik.http.routers.mtls-bridge-preflight.tls.certresolver" = "myresolver"
"traefik.http.routers.mtls-bridge.entrypoints" = "websecure"
"traefik.http.routers.mtls-bridge.middlewares" = "mtls-bridge-auth,mtls-bridge-cors"
"traefik.http.routers.mtls-bridge.rule" = "Host(`mtls-bridge.lan.ddnsgeek.com`)"
"traefik.http.routers.mtls-bridge.tls.certresolver" = "myresolver"
"traefik.http.services.mtls-bridge.loadbalancer.server.port" = "8080"
}
}
"nextcloud-db" = {
terraform_resource = "docker_container.nextcloud_db"
compose_project = "core"
compose_service = "nextcloud-db"
compose_file = "apps/nextcloud/docker-compose.yml"
container_name = "nextcloud-db"
image = "mariadb:11.4"
image_source = "declared_image"
restart_policy = "always"
network_mode = null
networks = ["nextcloud"]
mounts = ["bind:/home/nixos/docker/apps/nextcloud/database->/var/lib/mysql"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {}
}
"nextcloud-redis" = {
terraform_resource = "docker_container.nextcloud_redis"
compose_project = "core"
compose_service = "nextcloud-redis"
compose_file = "apps/nextcloud/docker-compose.yml"
container_name = "nextcloud-redis"
image = "redis"
image_source = "declared_image"
restart_policy = "always"
network_mode = null
networks = ["nextcloud"]
mounts = ["bind:/home/nixos/docker/apps/nextcloud/data/redis->/data"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {}
}
"nextcloud-webapp" = {
terraform_resource = "docker_container.nextcloud_webapp"
compose_project = "core"
compose_service = "nextcloud-webapp"
compose_file = "apps/nextcloud/docker-compose.yml"
container_name = "nextcloud-webapp"
image = "core-nextcloud-webapp"
image_source = "compose_build_inferred"
restart_policy = "always"
network_mode = null
networks = ["nextcloud", "traefik"]
mounts = ["bind:/home/nixos/docker/apps/nextcloud/data->/var/www/html/data", "bind:/home/nixos/docker/apps/nextcloud/config->/var/www/html/config", "tmpfs:->/tmp:exec"]
published_ports = []
build_context = "/home/nixos/docker/apps/nextcloud"
build_dockerfile = "Dockerfile"
useful_labels = {
"traefik.docker.network" = "core_traefik"
"traefik.enable" = "true"
"traefik.http.middlewares.nextcloud-dav.replacepathregex.regex" = "^/.well-known/ca(l|rd)dav"
"traefik.http.middlewares.nextcloud-dav.replacepathregex.replacement" = "/remote.php/dav/"
"traefik.http.middlewares.nextcloud-nodeinfo.replacepathregex.regex" = "^/.well-known/nodeinfo"
"traefik.http.middlewares.nextcloud-nodeinfo.replacepathregex.replacement" = "/nextcloud/index.php/.well-known/nodeinfo/"
"traefik.http.middlewares.nextcloud-webfinger.redirectregex.permanent" = "true"
"traefik.http.middlewares.nextcloud-webfinger.redirectregex.regex" = "https://(.*)/.well-known/webfinger"
"traefik.http.middlewares.nextcloud-webfinger.redirectregex.replacement" = "https://$${1}/nextcloud/index.php/.well-known/webfinger"
"traefik.http.routers.nextcloud.entrypoints" = "websecure"
"traefik.http.routers.nextcloud.middlewares" = "nextcloud-dav, nextcloud-webfinger"
"traefik.http.routers.nextcloud.rule" = "Host(`nextcloud.lan.ddnsgeek.com`)"
"traefik.http.routers.nextcloud.tls.certresolver" = "myresolver"
}
}
"node-exporter" = {
terraform_resource = "docker_container.node_exporter"
compose_project = "core"
compose_service = "node-exporter"
compose_file = "monitoring/node-exporter/docker-compose.yml"
container_name = "node-exporter"
image = "prom/node-exporter:latest"
image_source = "declared_image"
restart_policy = "unless-stopped"
network_mode = null
networks = ["monitor"]
mounts = ["bind:/proc->/host/proc:ro", "bind:/sys->/host/sys:ro", "bind:/->/rootfs:ro"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {}
}
"node-red" = {
terraform_resource = "docker_container.node_red"
compose_project = "core"
compose_service = "node-red"
compose_file = "monitoring/node-red/docker-compose.yml"
container_name = "node-red"
image = "core-node-red"
image_source = "compose_build_inferred"
restart_policy = "unless-stopped"
network_mode = null
networks = ["monitor", "traefik"]
mounts = ["bind:/home/nixos/docker/monitoring/node-red/data->/data", "bind:/home/nixos/docker->/compose/docker:ro", "bind:/home/nixos/raspi->/compose/raspi:ro"]
published_ports = []
build_context = "/home/nixos/docker/monitoring/node-red"
build_dockerfile = "Dockerfile"
useful_labels = {
"traefik.docker.network" = "core_traefik"
"traefik.enable" = "true"
"traefik.http.routers.node-red.entrypoints" = "websecure"
"traefik.http.routers.node-red.middlewares" = "authelia"
"traefik.http.routers.node-red.rule" = "Host(`node-red.lan.ddnsgeek.com`)"
"traefik.http.routers.node-red.tls.certresolver" = "myresolver"
"traefik.http.routers.node-red.tls.options" = "mtls-private-admin@file"
"traefik.http.services.node-red.loadbalancer.server.port" = "1880"
}
}
"passbolt-db" = {
terraform_resource = "docker_container.passbolt_db"
compose_project = "core"
compose_service = "passbolt-db"
compose_file = "apps/passbolt/docker-compose.yml"
container_name = "passbolt-db"
image = "mariadb:12"
image_source = "declared_image"
restart_policy = "always"
network_mode = null
networks = ["passbolt"]
mounts = ["bind:/home/nixos/docker/apps/passbolt/data/database->/var/lib/mysql"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {}
}
"passbolt-webapp" = {
terraform_resource = "docker_container.passbolt_webapp"
compose_project = "core"
compose_service = "passbolt-webapp"
compose_file = "apps/passbolt/docker-compose.yml"
container_name = "passbolt-webapp"
image = "passbolt/passbolt:latest-ce"
image_source = "declared_image"
restart_policy = "always"
network_mode = null
networks = ["passbolt", "traefik"]
mounts = ["bind:/home/nixos/docker/apps/passbolt/data/gpg->/etc/passbolt/gpg", "bind:/home/nixos/docker/apps/passbolt/data/jwt->/etc/passbolt/jwt"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {
"traefik.docker.network" = "core_traefik"
"traefik.enable" = "true"
"traefik.http.routers.passbolt.entrypoints" = "websecure"
"traefik.http.routers.passbolt.rule" = "Host(`passbolt.lan.ddnsgeek.com`)"
"traefik.http.routers.passbolt.tls.certresolver" = "myresolver"
}
}
"pihole-exporter" = {
terraform_resource = "docker_container.pihole_exporter"
compose_project = "core"
compose_service = "pihole-exporter"
compose_file = "monitoring/pihole-exporter/docker-compose.yml"
container_name = "pihole-exporter"
image = "ekofr/pihole-exporter:latest"
image_source = "declared_image"
restart_policy = "unless-stopped"
network_mode = null
networks = ["monitor"]
mounts = []
published_ports = ["9617:9617/tcp"]
build_context = null
build_dockerfile = null
useful_labels = {}
}
"portainer" = {
terraform_resource = "docker_container.portainer"
compose_project = "core"
compose_service = "portainer"
compose_file = "monitoring/portainer/docker-compose.yml"
container_name = "portainer"
image = "portainer/portainer-ce:latest"
image_source = "declared_image"
restart_policy = "unless-stopped"
network_mode = null
networks = ["traefik"]
mounts = ["bind:/home/nixos/docker/monitoring/portainer/data->/data"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {
"traefik.enable" = "true"
"traefik.http.routers.portainer.entrypoints" = "websecure"
"traefik.http.routers.portainer.rule" = "Host(`portainer.lan.ddnsgeek.com`)"
"traefik.http.routers.portainer.tls" = "true"
"traefik.http.routers.portainer.tls.certresolver" = "myresolver"
"traefik.http.routers.portainer.tls.options" = "mtls-private-admin@file"
"traefik.http.services.portainer.loadbalancer.server.port" = "9000"
}
}
"prometheus" = {
terraform_resource = "docker_container.prometheus"
compose_project = "core"
compose_service = "prometheus"
compose_file = "monitoring/prometheus/docker-compose.yml"
container_name = "prometheus"
image = "prom/prometheus:latest"
image_source = "declared_image"
restart_policy = "unless-stopped"
network_mode = null
networks = ["monitor", "traefik"]
mounts = ["bind:/home/nixos/docker/monitoring/prometheus/prometheus.yml->/etc/prometheus/prometheus.yml:ro", "bind:/home/nixos/docker/monitoring/prometheus/data->/prometheus", "bind:/home/nixos/docker/monitoring/prometheus/rules->/etc/prometheus/rules:ro", "bind:/home/nixos/docker/secrets/prometheus_kuma_basic_auth_password.txt->/run/secrets/prometheus_kuma_basic_auth_password:ro"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {
"traefik.docker.network" = "core_traefik"
"traefik.enable" = "true"
"traefik.http.routers.prometheus.entrypoints" = "websecure"
"traefik.http.routers.prometheus.middlewares" = "authelia"
"traefik.http.routers.prometheus.rule" = "Host(`prometheus.lan.ddnsgeek.com`)"
"traefik.http.routers.prometheus.tls.certresolver" = "myresolver"
"traefik.http.routers.prometheus.tls.options" = "mtls-private-admin@file"
"traefik.http.services.prometheus.loadbalancer.server.port" = "9090"
}
}
"searxng-webapp" = {
terraform_resource = "docker_container.searxng-webapp"
compose_project = "core"
compose_service = "searxng-webapp"
compose_file = "apps/searxng/docker-compose.yml"
container_name = "searxng-webapp"
image = "searxng/searxng"
image_source = "declared_image"
restart_policy = "always"
network_mode = null
networks = ["traefik"]
mounts = []
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {
"traefik.enable" = "true"
"traefik.http.routers.searxng.entrypoints" = "websecure"
"traefik.http.routers.searxng.rule" = "Host(`searxng.lan.ddnsgeek.com`)"
"traefik.http.routers.searxng.tls.certresolver" = "myresolver"
"traefik.http.services.searxng.loadbalancer.server.port" = "8080"
}
}
"telegraf" = {
terraform_resource = "docker_container.telegraf"
compose_project = "core"
compose_service = "telegraf"
compose_file = "monitoring/telegraf/docker-compose.yml"
container_name = "telegraf"
image = "telegraf:latest"
image_source = "declared_image"
restart_policy = "unless-stopped"
network_mode = null
networks = ["monitor"]
mounts = ["bind:/home/nixos/docker/monitoring/telegraf/telegraf.conf->/etc/telegraf/telegraf.conf:ro", "bind:/home/nixos/docker/monitoring/node-red/data->/var/log/node-red:ro"]
published_ports = []
build_context = null
build_dockerfile = null
useful_labels = {}
}
"traefik" = {
terraform_resource = "docker_container.traefik"
compose_project = "core"
compose_service = "traefik"
compose_file = "core/traefik/docker-compose.yml"
container_name = "traefik"
image = "traefik:3"
image_source = "declared_image"
restart_policy = "always"
network_mode = null
networks = ["traefik"]
mounts = ["bind:/home/nixos/docker/core/traefik/data/letsencrypt->/letsencrypt", "bind:/home/nixos/docker/core/traefik/data/logs->/logs", "bind:/home/nixos/docker/core/traefik/certs->/etc/traefik/certs:ro", "bind:/home/nixos/docker/core/traefik/dynamic.yml->/etc/traefik/dynamic.yml:ro", "bind:/home/nixos/docker/core/traefik/traefik.yml->/etc/traefik/traefik.yml:ro", "bind:/home/nixos/docker/core/traefik/data/plugins->/plugins-storage"]
published_ports = ["80:80/tcp", "443:443/tcp"]
build_context = "/home/nixos/docker/core"
build_dockerfile = "Dockerfile"
useful_labels = {
"traefik.docker.network" = "core_traefik"
"traefik.enable" = "true"
"traefik.http.routers.traefik.entrypoints" = "websecure"
"traefik.http.routers.traefik.middlewares" = "authelia"
"traefik.http.routers.traefik.observability.tracing" = "true"
"traefik.http.routers.traefik.rule" = "Host(`traefik.lan.ddnsgeek.com`)"
"traefik.http.routers.traefik.service" = "api@internal"
"traefik.http.routers.traefik.tls.certresolver" = "myresolver"
"traefik.http.routers.traefik.tls.options" = "mtls-private-admin@file"
}
}
}
}
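As a hedged example of the documentation-oriented outputs this inventory can feed (the output below is an illustration, not an existing file in the repo), the locals map can be projected into a compact lookup:

```hcl
# Hypothetical output deriving name -> image from the inventory locals above.
output "container_images" {
  description = "Declared image for each tracked container, for documentation tooling."
  value = {
    for name, c in local.docker_containers : name => c.image
  }
}
```

A `terraform output -json container_images` call could then supply downstream documentation tooling without re-parsing the compose files.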
@@ -0,0 +1,14 @@
resource "docker_container" "crowdsec" {
name = local.docker_containers["crowdsec"].container_name
image = local.docker_containers["crowdsec"].image
restart = local.docker_containers["crowdsec"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "docker_socket_proxy" {
name = local.docker_containers["docker-socket-proxy"].container_name
image = local.docker_containers["docker-socket-proxy"].image
restart = local.docker_containers["docker-socket-proxy"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "docker_update_exporter" {
name = local.docker_containers["docker-update-exporter"].container_name
image = local.docker_containers["docker-update-exporter"].image
restart = local.docker_containers["docker-update-exporter"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "error_pages" {
name = local.docker_containers["error-pages"].container_name
image = local.docker_containers["error-pages"].image
restart = local.docker_containers["error-pages"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "gitea" {
name = local.docker_containers["gitea"].container_name
image = local.docker_containers["gitea"].image
restart = local.docker_containers["gitea"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "gotify" {
name = local.docker_containers["gotify"].container_name
image = local.docker_containers["gotify"].image
restart = local.docker_containers["gotify"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "grafana" {
name = local.docker_containers["grafana"].container_name
image = local.docker_containers["grafana"].image
restart = local.docker_containers["grafana"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "gramps_redis" {
name = local.docker_containers["gramps-redis"].container_name
image = local.docker_containers["gramps-redis"].image
restart = local.docker_containers["gramps-redis"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "gramps_web_celery" {
name = local.docker_containers["gramps-web-celery"].container_name
image = local.docker_containers["gramps-web-celery"].image
restart = local.docker_containers["gramps-web-celery"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "gramps_web" {
name = local.docker_containers["gramps-web"].container_name
image = local.docker_containers["gramps-web"].image
restart = local.docker_containers["gramps-web"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "influxdb" {
name = local.docker_containers["influxdb"].container_name
image = local.docker_containers["influxdb"].image
restart = local.docker_containers["influxdb"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -1,26 +1,2 @@
-# Docker Terraform workflow in this repo:
-# 1) Add a minimal resource block for ONE existing container.
-# 2) Import that live container into state:
-#    terraform import docker_container.<name> <container_id_or_name>
-# 3) Inspect imported arguments:
-#    terraform state show docker_container.<name>
-# 4) Copy required arguments into this file and refine.
-# 5) Repeat until terraform plan shows no unintended changes.
-# Example skeleton for future imported containers (intentionally commented):
-# resource "docker_container" "example_service" {
-#   name  = "existing-container-name"
-#   image = "repo/image:tag"
-#
-#   # Add additional arguments based on `terraform state show` output.
-#   # Keep values aligned with the live container so plan is a no-op.
-# }
-#resource "docker_container" "searxng-webapp" {
-#  name  = "searxng-webapp"
-#  image = "searxng/searxng"
-#}
-#import {
-#  to = docker_container.searxng-webapp
-#  id = "5e755fc8478a3d088be12a1bb26df78e2f1990c56e1f7671f0cbf9761330092b"
-#}
+# Docker container resources are split into one file per container.
+# See container-catalog.tf for documentation-oriented metadata used by outputs.
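The import-first loop documented in the removed comments above can be sketched as a short bash session. The container name `gitea` is a hypothetical stand-in, and the Terraform commands are printed rather than executed so the sketch runs safely outside a real workspace.

```shell
#!/usr/bin/env bash
# Import-first reconciliation loop from the workflow comments, as a dry-run
# sketch. "gitea" is a hypothetical example; commands are echoed, not run.
cmds=(
  'terraform import docker_container.gitea gitea'
  'terraform state show docker_container.gitea'
  'terraform plan'
)
for cmd in "${cmds[@]}"; do
  # Each echoed step mirrors steps 2, 3, and 5 of the removed workflow.
  printf '+ %s\n' "$cmd"
done
```

In practice the loop repeats per container until `terraform plan` reports no unintended changes.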
@@ -0,0 +1,14 @@
resource "docker_container" "monitor_kuma" {
name = local.docker_containers["monitor-kuma"].container_name
image = local.docker_containers["monitor-kuma"].image
restart = local.docker_containers["monitor-kuma"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "mtls_bridge" {
name = local.docker_containers["mtls-bridge"].container_name
image = local.docker_containers["mtls-bridge"].image
restart = local.docker_containers["mtls-bridge"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "nextcloud_db" {
name = local.docker_containers["nextcloud-db"].container_name
image = local.docker_containers["nextcloud-db"].image
restart = local.docker_containers["nextcloud-db"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "nextcloud_redis" {
name = local.docker_containers["nextcloud-redis"].container_name
image = local.docker_containers["nextcloud-redis"].image
restart = local.docker_containers["nextcloud-redis"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "nextcloud_webapp" {
name = local.docker_containers["nextcloud-webapp"].container_name
image = local.docker_containers["nextcloud-webapp"].image
restart = local.docker_containers["nextcloud-webapp"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "node_exporter" {
name = local.docker_containers["node-exporter"].container_name
image = local.docker_containers["node-exporter"].image
restart = local.docker_containers["node-exporter"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "node_red" {
name = local.docker_containers["node-red"].container_name
image = local.docker_containers["node-red"].image
restart = local.docker_containers["node-red"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -3,17 +3,36 @@ output "docker_host_in_use" {
   value = var.docker_host
 }
-output "managed_container_names" {
-  description = "Names of containers intentionally tracked in Terraform configuration."
-  value       = var.managed_container_names
-}
-output "import_reconciliation_steps" {
-  description = "Short reminder of the safe import-first workflow."
-  value = [
-    "Create one docker_container block for an existing container.",
-    "Run terraform import for that block.",
-    "Run terraform state show and copy required arguments.",
-    "Refine config until terraform plan has no unintended changes.",
-  ]
-}
+output "docker_containers" {
+  description = "Documentation-shaped inventory of Docker containers managed via services-up.sh compose sources."
+  value       = local.docker_containers
+}
+output "docker_inventory" {
+  description = "Compact Docker inventory suitable for export and merging into broader infrastructure docs."
+  value = {
+    compose_project = "core"
+    container_count = length(local.docker_containers)
+    containers = {
+      for key, container in local.docker_containers : key => {
+        compose_service = container.compose_service
+        compose_file    = container.compose_file
+        container_name  = container.container_name
+        image           = container.image
+        image_source    = container.image_source
+        build_context   = container.build_context
+        network_mode    = container.network_mode
+        networks        = container.networks
+        published_ports = container.published_ports
+        mounts          = container.mounts
+        restart_policy  = container.restart_policy
+        labels          = container.useful_labels
+      }
+    }
+  }
+}
+output "managed_container_names" {
+  description = "Names of containers intentionally tracked in Terraform documentation resources."
+  value       = sort(keys(local.docker_containers))
+}
@@ -0,0 +1,14 @@
resource "docker_container" "passbolt_db" {
name = local.docker_containers["passbolt-db"].container_name
image = local.docker_containers["passbolt-db"].image
restart = local.docker_containers["passbolt-db"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "passbolt_webapp" {
name = local.docker_containers["passbolt-webapp"].container_name
image = local.docker_containers["passbolt-webapp"].image
restart = local.docker_containers["passbolt-webapp"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "pihole_exporter" {
name = local.docker_containers["pihole-exporter"].container_name
image = local.docker_containers["pihole-exporter"].image
restart = local.docker_containers["pihole-exporter"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "portainer" {
name = local.docker_containers["portainer"].container_name
image = local.docker_containers["portainer"].image
restart = local.docker_containers["portainer"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,14 @@
resource "docker_container" "prometheus" {
name = local.docker_containers["prometheus"].container_name
image = local.docker_containers["prometheus"].image
restart = local.docker_containers["prometheus"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -1,54 +1,14 @@
-# -----------------------------------------------------------------------------
-# AUTO-GENERATED BY reconcile_from_plan.sh
-# Generated: 2026-04-14T10:53:00Z
-# Source: terraform plan -generate-config-out
-# Review carefully before apply.
-# -----------------------------------------------------------------------------
-# __generated__ by Terraform
-# Please review these resources and move them into your main configuration files.
-# __generated__ by Terraform from "5e755fc8478a3d088be12a1bb26df78e2f1990c56e1f7671f0cbf9761330092b"
 resource "docker_container" "searxng-webapp" {
-  entrypoint   = ["/usr/local/searxng/entrypoint.sh"]
-  hostname     = "searxng.lan.ddnsgeek.com"
-  image        = "sha256:6a9a175cd122c005abe2dc15d7cbfcd5109619e9dcccb511c34be244e10f49bc"
-  must_run     = true
-  name         = "searxng-webapp"
-  network_mode = "core_traefik"
-  read_only    = true
-  restart      = "always"
-  tmpfs = {
-    "/run" = ""
-    "/tmp" = ""
-    "/var" = ""
-  }
-  wait         = false
-  wait_timeout = 60
-  working_dir  = "/usr/local/searxng"
-  healthcheck {
-    interval     = "20s"
-    retries      = 8
-    start_period = "30s"
-    test         = ["CMD-SHELL", "python3 -c \"import urllib.request,sys; r=urllib.request.urlopen('http://127.0.0.1:8080/', timeout=3); sys.exit(0 if 200<=r.status<400 else 1)\""]
-    timeout      = "5s"
-  }
-  mounts {
-    read_only = false
-    source    = "2255bde19ed136d348d29ada3d274eb3dbcb8aede13b246bbc9bac19fa38b37d"
-    target    = "/var/cache/searxng"
-    type      = "volume"
-  }
-  mounts {
-    read_only = false
-    source    = "e7a1475c1265b7d1c15f7c4da10e93461f6f1bcf50fe8030131a6398509e2e48"
-    target    = "/etc/searxng"
-    type      = "volume"
-  }
+  name    = local.docker_containers["searxng-webapp"].container_name
+  image   = local.docker_containers["searxng-webapp"].image
+  restart = local.docker_containers["searxng-webapp"].restart_policy
   lifecycle {
     ignore_changes = [
       env,
+      labels,
     ]
   }
 }
@@ -0,0 +1,14 @@
resource "docker_container" "telegraf" {
name = local.docker_containers["telegraf"].container_name
image = local.docker_containers["telegraf"].image
restart = local.docker_containers["telegraf"].restart_policy
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -0,0 +1,69 @@
resource "docker_container" "traefik" {
name = local.docker_containers["traefik"].container_name
image = local.docker_containers["traefik"].image
restart = local.docker_containers["traefik"].restart_policy
network_mode = "core_traefik"
ports {
internal = 80
external = 80
protocol = "tcp"
}
ports {
internal = 443
external = 443
protocol = "tcp"
}
mounts {
type = "bind"
source = "/home/nixos/docker/core/traefik/data/letsencrypt"
target = "/letsencrypt"
read_only = false
}
mounts {
type = "bind"
source = "/home/nixos/docker/core/traefik/data/logs"
target = "/logs"
read_only = false
}
mounts {
type = "bind"
source = "/home/nixos/docker/core/traefik/certs"
target = "/etc/traefik/certs"
read_only = true
}
mounts {
type = "bind"
source = "/home/nixos/docker/core/traefik/dynamic.yml"
target = "/etc/traefik/dynamic.yml"
read_only = true
}
mounts {
type = "bind"
source = "/home/nixos/docker/core/traefik/traefik.yml"
target = "/etc/traefik/traefik.yml"
read_only = true
}
mounts {
type = "bind"
source = "/home/nixos/docker/core/traefik/data/plugins"
target = "/plugins-storage"
read_only = false
}
lifecycle {
ignore_changes = [
env,
labels,
]
}
}
@@ -3,9 +3,3 @@ variable "docker_host" {
   type    = string
   default = "unix:///var/run/docker.sock"
 }
-variable "managed_container_names" {
-  description = "Human-maintained list of containers intentionally tracked in Terraform docs/outputs."
-  type        = list(string)
-  default     = ["searxng-webapp"]
-}
@@ -1,36 +1,44 @@
-# Proxmox Terraform scaffold
+# Terraform Proxmox Inventory Layer
-This directory is a **placeholder scaffold** for future Proxmox Terraform adoption.
+This directory codifies existing Proxmox infrastructure using an import-first reconciliation model.
-## What this directory is for
+## Purpose
-- Prepare provider/version/variable structure now.
-- Delay real Proxmox resource management until import strategy is validated.
+- Track existing Proxmox VMs in Terraform.
+- Reconcile imported VM configuration into maintainable, explicit files.
+- Represent physical host metadata as structured Terraform locals/outputs.
+- Support documentation inventory and future downstream tooling.
-## Initialize
-From this directory:
-```bash
-terraform init
-```
-## Current status
+## Current repository status
-- No live Proxmox resources are defined yet.
-- Provider auth variables are placeholders only.
-- Import IDs and resource schemas must be verified against provider docs before adding resources.
+This directory already contains imported/reconciled VM resources (for example `docker`, `server-nixos`, `nix-cache`, `pbs`, `pihole`) plus host metadata locals/outputs.
+This means it is no longer just a scaffold; treat it as active infrastructure inventory code.
-## Future safe workflow
+## Workflow standard (brownfield)
-1. Add one resource block for an existing VM/object.
-2. Import it with the provider-specific ID format.
-3. Inspect with `terraform state show`.
-4. Reconcile `.tf` arguments until `terraform plan` is clean.
-5. Repeat incrementally.
+1. Import one existing VM at a time.
+2. Confirm provider-specific import ID format.
+3. Inspect state/plan details.
+4. Keep hand-maintained `.tf` files focused and readable.
+5. Use `ignore_changes` only where drift noise is unavoidable.
+6. Stop when plan is sane/no-op for intended scope.
+## File organization expectations
+- Prefer one-resource-per-file patterns when practical.
+- Keep shared metadata in `locals`/outputs with clear descriptions.
+- Keep generated comments/config under ongoing cleanup rather than assuming generated output is final.
 ## Safety notes
-- Do not commit real credentials in `.tf` files or tracked `.tfvars`.
-- State files are ignored by the Terraform-level `.gitignore`.
-- Do not run `terraform apply` until plan is intentionally no-op for existing resources.
+- Do not run broad applies casually.
+- Do not commit real credentials or `.tfstate*`.
+- Keep changes incremental and reviewable.
+## Related docs
+- [../README.md](../README.md)
+- [../../../docs/source-of-truth.md](../../../docs/source-of-truth.md)
+- [../../../docs/terraform-workflows.md](../../../docs/terraform-workflows.md)
+- [../../../docs/infrastructure-inventory.md](../../../docs/infrastructure-inventory.md)
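Step 6 of the brownfield workflow ("stop when plan is sane/no-op") can be checked mechanically with `terraform plan -detailed-exitcode`, which exits 0 for a no-op plan, 1 on error, and 2 when changes are pending. The sketch below echoes the commands rather than executing them, so it runs safely outside a real workspace; the resource address and import ID format are placeholders that must be verified against the Proxmox provider's documentation.

```shell
#!/usr/bin/env bash
# Placeholder address and import ID: confirm both in the provider docs first.
vm_address='proxmox_vm.docker'
vm_id='pve/qemu/100'
# Echo each step instead of running it (sketch only, no Terraform required).
printf '+ terraform import %s %s\n' "$vm_address" "$vm_id"
printf '+ terraform state show %s\n' "$vm_address"
# -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes pending.
printf '+ terraform plan -detailed-exitcode\n'
```

Wiring the exit-code check into CI gives an objective "plan is clean" signal per imported VM.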
@@ -0,0 +1,122 @@
#!/usr/bin/env bash
set -euo pipefail
echo "== Refresh Python tools =="
python3 -m pip install --break-system-packages --upgrade \
pip \
yamllint \
ansible \
ansible-lint
echo "== Sanity check installed tools =="
for cmd in docker terraform tflint ansible-lint yamllint yq jq shellcheck; do
if command -v "$cmd" >/dev/null 2>&1; then
echo "OK: $cmd -> $(command -v "$cmd")"
else
echo "MISSING: $cmd"
fi
done
echo "== Reconcile dummy secret material =="
REPO_ROOT="${CODEX_REPO_DIR:-$PWD}"
SECRETS_DIR="$REPO_ROOT/secrets"
INVENTORY_JSON="$SECRETS_DIR/inventory.json"
EXAMPLE_ENV="$SECRETS_DIR/.env.secrets.example"
STACK_ENV="$SECRETS_DIR/stack-secrets.env"
if [[ ! -f "$INVENTORY_JSON" ]]; then
echo "Missing inventory file: $INVENTORY_JSON"
exit 1
fi
if [[ ! -f "$EXAMPLE_ENV" ]]; then
echo "Missing example env file: $EXAMPLE_ENV"
exit 1
fi
mkdir -p "$SECRETS_DIR"
dummy_value_for_key() {
local key="$1"
case "$key" in
*EMAIL* ) echo "dummy@example.com" ;;
*USER*|*USERNAME* ) echo "dummy-user" ;;
*DOMAIN* ) echo "example.lan.ddnsgeek.com" ;;
*TZ ) echo "Australia/Brisbane" ;;
*URL* ) echo "https://example.lan.ddnsgeek.com" ;;
*PORT* ) echo "1234" ;;
*PASSWORD*|*PASS*|*TOKEN*|*SECRET*|*KEY*|*JWT* ) echo "dummy-${key,,}" ;;
*FINGERPRINT* ) echo "0000000000000000000000000000000000000000" ;;
*DB_NAME* ) echo "dummydb" ;;
*DB_USER* ) echo "dummyuser" ;;
*NAME* ) echo "dummy-name" ;;
*ADDRESS* ) echo "dummy" ;;
* ) echo "dummy-value" ;;
esac
}
rebuild_dummy_stack_env() {
local tmp
tmp="$(mktemp)"
cp "$EXAMPLE_ENV" "$tmp"
while IFS= read -r var; do
[[ -z "$var" ]] && continue
dummy="$(dummy_value_for_key "$var")"
if grep -Eq "^[[:space:]]*${var}=" "$tmp"; then
sed -i "s|^[[:space:]]*${var}=.*|${var}=${dummy}|" "$tmp"
else
printf '%s=%s\n' "$var" "$dummy" >> "$tmp"
fi
done < <(jq -r '.env_template_variables[].variable' "$INVENTORY_JSON")
mv "$tmp" "$STACK_ENV"
chmod 600 "$STACK_ENV" || true
echo "Updated $STACK_ENV"
}
reconcile_file_based_secrets() {
local wanted existing relpath abspath
wanted="$(mktemp)"
existing="$(mktemp)"
jq -r '.file_based_secrets[].path' "$INVENTORY_JSON" | sort -u > "$wanted"
find "$SECRETS_DIR" -maxdepth 1 -type f -name '*.txt' -printf '%P\n' \
| sed "s#^#secrets/#" \
| sort -u > "$existing"
# Create missing listed files
while IFS= read -r relpath; do
[[ -z "$relpath" ]] && continue
abspath="$REPO_ROOT/$relpath"
if [[ ! -f "$abspath" ]]; then
mkdir -p "$(dirname "$abspath")"
printf 'dummy-secret\n' > "$abspath"
chmod 600 "$abspath" || true
echo "Created $relpath"
fi
done < <(comm -23 "$wanted" "$existing")
# Remove stale files no longer listed in inventory.json
while IFS= read -r relpath; do
[[ -z "$relpath" ]] && continue
abspath="$REPO_ROOT/$relpath"
if [[ -f "$abspath" ]]; then
rm -f "$abspath"
echo "Removed stale $relpath"
fi
done < <(comm -13 "$wanted" "$existing")
rm -f "$wanted" "$existing"
}
rebuild_dummy_stack_env
reconcile_file_based_secrets
echo "== Dummy secret reconciliation complete =="
echo "stack env: $STACK_ENV"
jq -r '.file_based_secrets[].path' "$INVENTORY_JSON" | sed 's/^/file secret: /'
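An abridged, standalone restatement of the `dummy_value_for_key` mapping above is handy for spot-checking which branch a variable hits. Note that the `case` arms match in order: a name containing both `USER` and `PASS` (for example `GOTIFY_DEFAULTUSER_PASS`) takes the earlier `*USER*` arm, not the password arm.

```shell
#!/usr/bin/env bash
# Abridged version of the case ladder from the setup/maintenance scripts;
# the DOMAIN/TZ/URL/PORT arms are omitted here for brevity.
dummy_value_for_key() {
  local key="$1"
  case "$key" in
    *EMAIL* ) echo "dummy@example.com" ;;
    *USER*|*USERNAME* ) echo "dummy-user" ;;
    *PASSWORD*|*PASS*|*TOKEN*|*SECRET*|*KEY*|*JWT* ) echo "dummy-${key,,}" ;;
    * ) echo "dummy-value" ;;
  esac
}
dummy_value_for_key PIHOLE_PASSWORD         # prints dummy-pihole_password
dummy_value_for_key GOTIFY_DEFAULTUSER_PASS # earlier *USER* arm wins: dummy-user
```

Because of this ordering, renaming a password variable so it no longer contains `USER` is the simplest way to make it land in the password arm.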
@@ -0,0 +1,174 @@
#!/usr/bin/env bash
set -euo pipefail
export DEBIAN_FRONTEND=noninteractive
echo "== Base packages =="
if command -v apt-get >/dev/null 2>&1; then
apt-get update
apt-get install -y \
bash \
ca-certificates \
curl \
git \
jq \
unzip \
wget \
python3 \
python3-pip \
python3-venv \
shellcheck
else
echo "This script currently expects an apt-based environment."
exit 1
fi
echo "== yq =="
if ! command -v yq >/dev/null 2>&1; then
YQ_VERSION="v4.44.3"
ARCH="$(dpkg --print-architecture)"
case "$ARCH" in
amd64) YQ_ARCH="amd64" ;;
arm64) YQ_ARCH="arm64" ;;
*) echo "Unsupported architecture: $ARCH"; exit 1 ;;
esac
wget -qO /usr/local/bin/yq "https://github.com/mikefarah/yq/releases/download/${YQ_VERSION}/yq_linux_${YQ_ARCH}"
chmod +x /usr/local/bin/yq
fi
echo "== Docker CLI + Compose plugin =="
if ! command -v docker >/dev/null 2>&1; then
apt-get install -y docker.io docker-compose-v2 || true
fi
echo "== Python tooling =="
python3 -m pip install --break-system-packages --upgrade pip
python3 -m pip install --break-system-packages \
yamllint \
ansible \
ansible-lint
echo "== Terraform =="
if ! command -v terraform >/dev/null 2>&1; then
TF_VERSION="1.8.5"
ARCH="$(dpkg --print-architecture)"
case "$ARCH" in
amd64) TF_ARCH="amd64" ;;
arm64) TF_ARCH="arm64" ;;
*) echo "Unsupported architecture: $ARCH"; exit 1 ;;
esac
wget -qO /tmp/terraform.zip \
"https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_${TF_ARCH}.zip"
unzip -o /tmp/terraform.zip -d /tmp
install /tmp/terraform /usr/local/bin/terraform
fi
echo "== TFLint =="
if ! command -v tflint >/dev/null 2>&1; then
TFLINT_VERSION="v0.56.0"
ARCH="$(dpkg --print-architecture)"
case "$ARCH" in
amd64) TFLINT_ARCH="amd64" ;;
arm64) TFLINT_ARCH="arm64" ;;
*) echo "Unsupported architecture: $ARCH"; exit 1 ;;
esac
wget -qO /tmp/tflint.zip \
"https://github.com/terraform-linters/tflint/releases/download/${TFLINT_VERSION}/tflint_linux_${TFLINT_ARCH}.zip"
unzip -o /tmp/tflint.zip -d /tmp
install /tmp/tflint /usr/local/bin/tflint
fi
echo "== Dummy secret material for compose validation =="
REPO_ROOT="${CODEX_REPO_DIR:-$PWD}"
SECRETS_DIR="$REPO_ROOT/secrets"
INVENTORY_JSON="$SECRETS_DIR/inventory.json"
EXAMPLE_ENV="$SECRETS_DIR/.env.secrets.example"
STACK_ENV="$SECRETS_DIR/stack-secrets.env"
if [[ ! -f "$INVENTORY_JSON" ]]; then
echo "Missing inventory file: $INVENTORY_JSON"
exit 1
fi
if [[ ! -f "$EXAMPLE_ENV" ]]; then
echo "Missing example env file: $EXAMPLE_ENV"
exit 1
fi
mkdir -p "$SECRETS_DIR"
dummy_value_for_key() {
local key="$1"
case "$key" in
*EMAIL* ) echo "dummy@example.com" ;;
*USER*|*USERNAME* ) echo "dummy-user" ;;
*DOMAIN* ) echo "example.lan.ddnsgeek.com" ;;
*TZ ) echo "Australia/Brisbane" ;;
*URL* ) echo "https://example.lan.ddnsgeek.com" ;;
*PORT* ) echo "1234" ;;
*PASSWORD*|*PASS*|*TOKEN*|*SECRET*|*KEY*|*JWT* ) echo "dummy-${key,,}" ;;
*FINGERPRINT* ) echo "0000000000000000000000000000000000000000" ;;
*DB_NAME* ) echo "dummydb" ;;
*DB_USER* ) echo "dummyuser" ;;
*NAME* ) echo "dummy-name" ;;
*ADDRESS* ) echo "dummy" ;;
* ) echo "dummy-value" ;;
esac
}
render_dummy_stack_env() {
cp "$EXAMPLE_ENV" "$STACK_ENV.tmp"
while IFS= read -r var; do
[[ -z "$var" ]] && continue
dummy="$(dummy_value_for_key "$var")"
if grep -Eq "^[[:space:]]*${var}=" "$STACK_ENV.tmp"; then
sed -i "s|^[[:space:]]*${var}=.*|${var}=${dummy}|" "$STACK_ENV.tmp"
else
printf '%s=%s\n' "$var" "$dummy" >> "$STACK_ENV.tmp"
fi
done < <(jq -r '.env_template_variables[].variable' "$INVENTORY_JSON")
mv "$STACK_ENV.tmp" "$STACK_ENV"
chmod 600 "$STACK_ENV" || true
}
ensure_dummy_secret_files() {
jq -r '.file_based_secrets[].path' "$INVENTORY_JSON" | while IFS= read -r relpath; do
[[ -z "$relpath" ]] && continue
abspath="$REPO_ROOT/$relpath"
mkdir -p "$(dirname "$abspath")"
if [[ ! -f "$abspath" ]]; then
printf 'dummy-secret\n' > "$abspath"
fi
chmod 600 "$abspath" || true
done
}
render_dummy_stack_env
ensure_dummy_secret_files
echo
echo "== Installed versions =="
bash --version | head -n 1 || true
git --version || true
docker --version || true
docker compose version || true
python3 --version || true
ansible --version | head -n 1 || true
ansible-lint --version || true
terraform version | head -n 1 || true
tflint --version || true
shellcheck --version | head -n 1 || true
yamllint --version || true
yq --version || true
jq --version || true
echo
echo "== Dummy secret files prepared =="
echo "$STACK_ENV"
jq -r '.file_based_secrets[].path' "$INVENTORY_JSON" || true
@@ -0,0 +1,152 @@
{
"scope_and_authority": {
"canonical_example_template": "secrets/.env.secrets.example",
"runtime_loaded_secret_env_file": "secrets/stack-secrets.env",
"docker_secret_files_pattern": "secrets/*.txt"
},
"env_template_variables": [
{
"variable": "NEXTCLOUD_DB_USER",
"used_by": "apps/nextcloud/docker-compose.yml",
"purpose": "Nextcloud database username (non-secret identifier but environment-specific)."
},
{
"variable": "NEXTCLOUD_ADMIN_USER",
"used_by": "apps/nextcloud/docker-compose.yml",
"purpose": "Initial Nextcloud admin username."
},
{
"variable": "NEXTCLOUD_SMTP_FROM_ADDRESS",
"used_by": "apps/nextcloud/docker-compose.yml",
"purpose": "SMTP sender local-part for outbound mail configuration."
},
{
"variable": "NEXTCLOUD_SMTP_DOMAIN",
"used_by": "apps/nextcloud/docker-compose.yml",
"purpose": "SMTP sender domain for outbound mail configuration."
},
{
"variable": "NEXTCLOUD_SMTP_NAME",
"used_by": "apps/nextcloud/docker-compose.yml",
"purpose": "SMTP display/sender name derived from address + domain in the example file."
},
{
"variable": "PASSBOLT_DB_NAME",
"used_by": "apps/passbolt/docker-compose.yml",
"purpose": "Passbolt database name."
},
{
"variable": "PASSBOLT_DB_USER",
"used_by": "apps/passbolt/docker-compose.yml",
"purpose": "Passbolt database username."
},
{
"variable": "PASSBOLT_GPG_SERVER_KEY_FINGERPRINT",
"used_by": "apps/passbolt/docker-compose.yml",
"purpose": "Passbolt server GPG key fingerprint."
},
{
"variable": "GRAMPSWEB_SECRET_KEY",
"used_by": "apps/gramps/docker-compose.yml",
"purpose": "Secret key used by Gramps Web for session/security signing."
},
{
"variable": "GRAMPSWEB_EMAIL_HOST_USER",
"used_by": "apps/gramps/docker-compose.yml",
"purpose": "SMTP username for Gramps outbound email."
},
{
"variable": "GRAMPSWEB_EMAIL_HOST_PASSWORD",
"used_by": "apps/gramps/docker-compose.yml",
"purpose": "SMTP password for Gramps outbound email."
},
{
"variable": "GOTIFY_DEFAULTUSER_NAME",
"used_by": "monitoring/gotify/docker-compose.yml",
"purpose": "Gotify default username."
},
{
"variable": "GOTIFY_DEFAULTUSER_PASS",
"used_by": "monitoring/gotify/docker-compose.yml",
"purpose": "Gotify default user password."
},
{
"variable": "INFLUXDB_INIT_USERNAME",
"used_by": "monitoring/prometheus/docker-compose.yml",
"purpose": "InfluxDB initial username."
},
{
"variable": "PIHOLE_PASSWORD",
"used_by": "monitoring/prometheus/docker-compose.yml",
"purpose": "Exporter auth / Pi-hole integration password."
}
],
"file_based_secrets": [
{
"path": "secrets/nextcloud_db_root_password.txt",
"purpose": "Nextcloud MariaDB root password file.",
"managed_by": "local_file",
"committed": false
},
{
"path": "secrets/nextcloud_db_password.txt",
"purpose": "Nextcloud MariaDB application user password file.",
"managed_by": "local_file",
"committed": false
},
{
"path": "secrets/nextcloud_admin_password.txt",
"purpose": "Initial Nextcloud admin password file.",
"managed_by": "local_file",
"committed": false
},
{
"path": "secrets/nextcloud_smtp_password.txt",
"purpose": "Nextcloud SMTP account password file.",
"managed_by": "local_file",
"committed": false
},
{
"path": "secrets/nextcloud_redis_password.txt",
"purpose": "Nextcloud Redis runtime password file.",
"managed_by": "local_file",
"committed": false
},
{
"path": "secrets/passbolt_db_password.txt",
"purpose": "Passbolt database user password file.",
"managed_by": "local_file",
"committed": false
},
{
"path": "secrets/influxdb_init_password.txt",
"purpose": "InfluxDB initialization password file.",
"managed_by": "local_file",
"committed": false
},
{
"path": "secrets/prometheus_kuma_basic_auth_password.txt",
"purpose": "Uptime Kuma Prometheus scrape basic-auth password file.",
"managed_by": "local_file",
"committed": false
}
],
"externally_managed_secrets": [
"Database/root passwords for Nextcloud, Passbolt, and supporting services are provided via Docker secret files.",
"Redis runtime password is loaded from a Docker secret file.",
"DOCKER_INFLUXDB_INIT_PASSWORD is loaded from a Docker secret in monitoring.",
"Uptime Kuma basic-auth password is loaded via password_file in Prometheus configuration.",
"Core stack secret values (for example Authelia and CrowdSec values) are injected via environment substitution."
],
"commit_safety_rules": [
"Never commit secrets/stack-secrets.env.",
"Never commit real secrets/*.txt files.",
"Never commit real Terraform .tfvars containing credentials.",
"Never commit Terraform state files with sensitive runtime metadata."
],
"related_docs": [
"docs/security-secrets.md",
"docs/deployment-prerequisites.md",
"docs/source-of-truth.md"
]
}
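The two `jq` queries that `setup.sh` and `maintenance.sh` run against this file double as a quick schema sanity check. Here they run against an inline, abridged sample mirroring the structure above, so the check works without a full checkout.

```shell
#!/usr/bin/env bash
# Abridged sample mirroring secrets/inventory.json's structure.
cat > /tmp/inventory-sample.json <<'EOF'
{
  "env_template_variables": [
    { "variable": "PIHOLE_PASSWORD", "used_by": "monitoring/prometheus/docker-compose.yml" }
  ],
  "file_based_secrets": [
    { "path": "secrets/influxdb_init_password.txt", "managed_by": "local_file", "committed": false }
  ]
}
EOF
# Both queries must succeed for the reconciliation scripts to work:
jq -r '.env_template_variables[].variable' /tmp/inventory-sample.json  # PIHOLE_PASSWORD
jq -r '.file_based_secrets[].path' /tmp/inventory-sample.json          # secrets/influxdb_init_password.txt
```

Running the same queries against the real `secrets/inventory.json` before committing catches shape regressions early.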
@@ -1,13 +1,13 @@
22:10:52 INFO: === Update started: 2026-04-20 22:10:52 === 10:17:35 INFO: === Update started: 2026-04-21 10:17:35 ===
22:10:52 WARNING: Skipping traefik (directory does not exist) 10:17:35 WARNING: Skipping traefik (directory does not exist)
22:10:52 WARNING: Skipping nextcloud (directory does not exist) 10:17:35 WARNING: Skipping nextcloud (directory does not exist)
22:10:52 WARNING: Skipping passbolt (directory does not exist) 10:17:35 WARNING: Skipping passbolt (directory does not exist)
22:10:52 WARNING: Skipping searxng (directory does not exist) 10:17:35 WARNING: Skipping searxng (directory does not exist)
22:10:52 WARNING: Skipping gitea (directory does not exist) 10:17:35 WARNING: Skipping gitea (directory does not exist)
22:10:52 WARNING: Skipping gotify (directory does not exist) 10:17:35 WARNING: Skipping gotify (directory does not exist)
22:10:52 WARNING: Skipping grafana (directory does not exist) 10:17:35 WARNING: Skipping grafana (directory does not exist)
22:10:52 WARNING: Skipping gramps (directory does not exist) 10:17:35 WARNING: Skipping gramps (directory does not exist)
22:10:52 WARNING: Skipping portainer (directory does not exist) 10:17:35 WARNING: Skipping portainer (directory does not exist)
22:10:52 WARNING: Skipping prometheus (directory does not exist) 10:17:35 WARNING: Skipping prometheus (directory does not exist)
22:10:52 WARNING: Skipping uptime-kuma (directory does not exist) 10:17:35 WARNING: Skipping uptime-kuma (directory does not exist)
22:10:52 INFO: Pruning unused containers, images, networks, and volumes... 10:17:35 INFO: Pruning unused containers, images, networks, and volumes...