# Dynu Terraform Layer (Brownfield DNS Reconciliation)

This Terraform root is for **Dynu DNS brownfield reconciliation**. The intended pattern is:

1. Import the existing root domain object.
2. Read inventory through `data.dynu_dns_records.root`.
3. Generate reviewable `dynu_dns_record` resources and import commands.
4. Import every existing DNS record into matching Terraform resources.
5. Use `terraform plan` as the reconciliation check before any apply.

## Provider behavior to keep in mind

- Source: `beatz174-bit/dynu`
- `dynu_domain` import requires a **numeric Dynu domain ID**.
- Importing `dynu_domain` imports only the root domain object.
  - It **does not** import DNS records/subdomains.
- `dynu_dns_record` imports require a composite `<domain_id>/<record_id>` ID.

## Variables

- `dynu_root_domain` (default: `lan.ddnsgeek.com`)
- `dynu_api_key` (sensitive)
- `dynu_username` / `dynu_password` (optional)

## Safe validation commands

```bash
cd infrastructure/terraform/dynu
terraform fmt -check -recursive
terraform init -backend=false -input=false
terraform validate
python3 -m py_compile scripts/generate-brownfield-records.py
```

## Brownfield workflow

```bash
cd infrastructure/terraform/dynu
terraform init
terraform import dynu_domain.lan_ddnsgeek_com '<numeric-domain-id>'
terraform apply -refresh-only
terraform output -json dynu_dns_records > /tmp/dynu-records.json
python3 scripts/generate-brownfield-records.py --dry-run
python3 scripts/generate-brownfield-records.py --overwrite
# Review generated/dynu_dns_records.generated.tf
# Review generated/import-dynu-dns-records.sh
bash generated/import-dynu-dns-records.sh
terraform plan
```

## What each component means

- `data.dynu_dns_records.root`: read-only live inventory from Dynu.
- `generated/dynu_dns_records.generated.tf`: generated management-intent resources; includes `prevent_destroy = true` on each record.
- `generated/import-dynu-dns-records.sh`: imports each discovered record to its generated `dynu_dns_record` address using a `<domain_id>/<record_id>` ID.
- `terraform plan` after imports: reconciliation checkpoint. Any create/update/delete must be reviewed manually before apply.

## Generated artifacts

The helper script writes these files under `generated/`:

- `generated/dynu_dns_records_inventory.json`
- `generated/dynu_dns_records.generated.tf`
- `generated/import-dynu-dns-records.sh`

These are generated outputs meant for operator review before use in production.

### Generator output selection (interactive + automation)

The brownfield generator defaults to Terraform output `dynu_dns_records`:

```bash
python3 scripts/generate-brownfield-records.py --dry-run
```

If the default output is missing or unusable and stdin is interactive, the script shows a picker of available Terraform outputs and indicates which ones are usable for DNS imports.

```bash
# Interactive mode: choose from available Terraform outputs
python3 scripts/generate-brownfield-records.py --dry-run

# Non-interactive mode: specify output explicitly
python3 scripts/generate-brownfield-records.py \
  --records-output dynu_dns_inventory \
  --dry-run

# Disable menu and fail fast
python3 scripts/generate-brownfield-records.py \
  --no-interactive \
  --dry-run

# Use saved terraform output JSON and choose interactively
terraform output -json > generated/terraform-output.json
python3 scripts/generate-brownfield-records.py \
  --from-file generated/terraform-output.json \
  --dry-run
```

Notes:

- The menu shows Terraform outputs currently stored in state.
- If newly added outputs do not appear, run:

  ```bash
  terraform apply -refresh-only
  ```

- The selected output must contain real Dynu provider record fields:
  - `id`
  - `domain_id`
  - `hostname`
  - `record_type`

## Troubleshooting

### Plan shows a large wall of `+` values under outputs

Cause: Terraform is planning to save **new output values** to state (for example, live records from `data.dynu_dns_records.root`). This does not by itself create DNS records.

How to verify:

- Output-only changes appear under `Changes to Outputs`.
- Real DNS changes appear as `dynu_dns_record` resource create/update/delete actions.

Use:

```bash
terraform apply -refresh-only
```

to persist refreshed data source and output values only.

### Error: `There is no function named "regexreplace"`

Cause: `regexreplace` is not a Terraform function. Resource-name slugification should not be implemented in Terraform HCL for this workflow.

Fix:

- Keep `inventory.tf` focused on reading live records via `data.dynu_dns_records.root`.
- Keep Terraform outputs simple (for example, `<domain_id>/<record_id>` mappings).
- Let `scripts/generate-brownfield-records.py` generate Terraform-safe resource names with Python `tf_name(record)`.

### Error: missing Terraform output `dynu_dns_records`

Cause: The helper script reads `terraform output -json` and expects an output named `dynu_dns_records`.

Fix:

```bash
cd infrastructure/terraform/dynu
terraform init
terraform apply -refresh-only
terraform output -json | jq 'keys'
```

Confirm `dynu_dns_records` appears in the key list. If it does not, check that the Terraform config contains:

```hcl
data "dynu_dns_records" "root" {
  hostname = var.dynu_root_domain
}

output "dynu_dns_records" {
  value = data.dynu_dns_records.root.records
}
```

Then rerun:

```bash
python3 scripts/generate-brownfield-records.py --dry-run
```
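For intuition, here is a minimal sketch of what a slugifier like the generator's `tf_name(record)` could look like. This is an illustration only, not the real script's implementation: it assumes the record dict carries the `hostname` and `record_type` fields listed earlier, and the actual naming rules in `scripts/generate-brownfield-records.py` may differ.

```python
import re


def tf_name(record):
    """Illustrative slugifier in the spirit of the generator's tf_name(record).

    Lowercases hostname + record type, conservatively replaces any
    character outside [a-z0-9_] with an underscore, and prefixes an
    underscore if the result starts with a digit (Terraform resource
    names must not begin with a digit).
    """
    raw = f"{record['hostname']}_{record['record_type']}".lower()
    slug = re.sub(r"[^a-z0-9_]", "_", raw)
    if slug and slug[0].isdigit():
        slug = "_" + slug
    return slug
```

Doing this in Python keeps `inventory.tf` free of name-mangling logic, which is exactly why the `regexreplace` workaround above is unnecessary.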