reconciler PoC

Prox 2026-03-06 13:21:08 +02:00
parent cd6e8ea120
commit b4f3ce7870
38 changed files with 4286 additions and 237 deletions

View File

@ -1,12 +1,16 @@
# Beads - AI-Native Issue Tracking
Welcome to Beads! This repository uses **Beads** for issue tracking - a modern,
AI-native tool designed to live directly in your codebase alongside your code.
## What is Beads?
Beads is issue tracking that lives in your repo, making it perfect for AI coding
agents and developers who want their issues close to their code. No web UI
required - everything works through the CLI and integrates seamlessly with git.
**Learn more:**
[github.com/steveyegge/beads](https://github.com/steveyegge/beads)
## Quick Start
@ -33,6 +37,7 @@ bd dolt push
### Working with Issues
Issues in Beads are:
- **Git-native**: Stored in `.beads/issues.jsonl` and synced like code
- **AI-friendly**: CLI-first design works perfectly with AI coding agents
- **Branch-aware**: Issues can follow your branch workflow
@ -41,16 +46,19 @@ Issues in Beads are:
## Why Beads?
✨ **AI-Native Design**
- Built specifically for AI-assisted development workflows
- CLI-first interface works seamlessly with AI coding agents
- No context switching to web UIs
🚀 **Developer Focused**
- Issues live in your repo, right next to your code
- Works offline, syncs when you push
- Fast, lightweight, and stays out of your way
🔧 **Git Integration**
- Automatic sync with git commits
- Branch-aware issue tracking
- Intelligent JSONL merge resolution
@ -72,10 +80,12 @@ bd create "Try out Beads"
## Learn More
- **Documentation**:
[github.com/steveyegge/beads/docs](https://github.com/steveyegge/beads/tree/main/docs)
- **Quick Start Guide**: Run `bd quickstart`
- **Examples**:
[github.com/steveyegge/beads/examples](https://github.com/steveyegge/beads/tree/main/examples)
---
_Beads: Issue tracking that moves at the speed of thought_

View File

@ -4,4 +4,4 @@
"backend": "dolt",
"dolt_mode": "server",
"dolt_database": "beads_netbird-gitops"
}

View File

@ -3,7 +3,7 @@ name: Dry Run
on:
pull_request:
paths:
- "netbird.json"
jobs:
dry-run:

View File

@ -5,7 +5,7 @@ on:
branches:
- main
paths:
- "netbird.json"
jobs:
reconcile:

View File

@ -3,7 +3,7 @@ name: Release
on:
push:
tags:
- "v*"
jobs:
build:

View File

@ -1,6 +1,7 @@
# Agent Instructions
This project uses **bd** (beads) for issue tracking. Run `bd onboard` to get
started.
## Quick Reference
@ -13,9 +14,11 @@ bd sync # Sync with git
```
<!-- BEGIN BEADS INTEGRATION -->
## Issue Tracking with bd (beads)
**IMPORTANT**: This project uses **bd (beads)** for ALL issue tracking. Do NOT
use markdown TODOs, task lists, or other tracking methods.
### Why bd?
@ -101,11 +104,13 @@ For more details, see README.md and docs/QUICKSTART.md.
## Landing the Plane (Session Completion)
**When ending a work session**, you MUST complete ALL steps below. Work is NOT
complete until `git push` succeeds.
**MANDATORY WORKFLOW:**
1. **File issues for remaining work** - Create issues for anything that needs
follow-up
2. **Run quality gates** (if code changed) - Tests, linters, builds
3. **Update issue status** - Close finished work, update in-progress items
4. **PUSH TO REMOTE** - This is MANDATORY:
@ -120,6 +125,7 @@ For more details, see README.md and docs/QUICKSTART.md.
7. **Hand off** - Provide context for next session
**CRITICAL RULES:**
- Work is NOT complete until `git push` succeeds
- NEVER stop before pushing - that leaves work stranded locally
- NEVER say "ready to push when you are" - YOU must push

View File

@ -7,7 +7,8 @@
"test": "deno test --allow-net --allow-read --allow-write --allow-env",
"check": "deno check src/main.ts",
"lint": "deno lint",
"fmt": "deno fmt"
"fmt": "deno fmt",
"export": "deno run --allow-net --allow-env src/main.ts --export"
},
"imports": {
"@std/assert": "jsr:@std/assert@^1.0.0",

View File

@ -1,8 +1,13 @@
NETBIRD_API_URL=https://netbird.example.com/api
NETBIRD_API_TOKEN=
# Set to false to run without Gitea integration (standalone mode).
# When false, GITEA_URL/TOKEN/REPO are not required.
GITEA_ENABLED=true
GITEA_URL=https://gitea.example.com
GITEA_TOKEN=
GITEA_REPO=BlastPilot/netbird-gitops
RECONCILER_TOKEN=
POLL_INTERVAL_SECONDS=30
PORT=8080

View File

@ -1,25 +1,31 @@
# NetBird Reconciler — Design Document
> **Status:** Approved
> **Author:** @prox
> **Date:** 2026-03-03
> **Proposal:** NetBird GitOps Proposal (rev2)
## Overview
A dedicated backend service that provides declarative GitOps-driven
reconciliation for NetBird VPN configuration. Engineers declare desired state in
`netbird.json`; the reconciler computes diffs and applies changes with
all-or-nothing semantics.
**Repo:** `BlastPilot/netbird-gitops` (service code + state file in one repo)
**Runtime:** TypeScript / Deno
**Deployment:** Docker Compose on the NetBird VPS, behind Traefik
## Architecture
The reconciler has two responsibilities:
1. **Reconciliation API** — Called by Gitea Actions CI on PR events. Accepts
desired state (`netbird.json`), fetches actual state from NetBird API,
computes a diff, and either returns a plan (dry-run) or applies changes.
2. **Event Poller** — Background loop polling NetBird `/api/events` every 30s to
detect peer enrollments. When a peer enrolls via a known setup key, the
poller renames it, assigns it to the correct group, and commits
`enrolled: true` back to git via Gitea API.
### Data Flow
@ -41,12 +47,17 @@ Event poller (background):
The existing enrollment pipeline in `blastpilot-public` changes:
- **Before:** `handleApproval()` creates `peers/enrollment-{N}.json`,
`handlePRMerge()` calls NetBird API directly to create setup keys, emails PDF.
- **After:** `handleApproval()` modifies `netbird.json` (adds setup key + group
entries) and creates PR. Key creation is handled by the reconciler on merge.
Key delivery starts as manual (engineer downloads encrypted artifact), with
automation added later.
## State File Format
`netbird.json` at repo root. All resources referenced by name, never by NetBird
ID.
```json
{
@ -96,6 +107,7 @@ The existing enrollment pipeline in `blastpilot-public` changes:
```
**Conventions:**
- Setup key name = expected peer hostname
- `enrolled: false` — setup key should exist, peer hasn't connected yet
- `enrolled: true` — peer detected, renamed, assigned to groups
@ -109,24 +121,37 @@ All endpoints authenticated via `Authorization: Bearer <token>`.
### `POST /reconcile`
**Query params:** `dry_run=true|false` (default: false)
**Request body:** Contents of `netbird.json`
Behavior:
1. Fetch actual state from NetBird API (groups, setup keys, peers, policies,
routes, DNS)
2. Process pending enrollments from event poller state
3. Compute diff between desired and actual
4. If `dry_run=true`: return plan without applying
5. If `dry_run=false`: execute in dependency order — groups, setup keys, peers,
policies, routes. Abort on first failure.
Response:
```json
{
"status": "applied | planned | error",
"operations": [
{ "type": "create_group", "name": "pilots", "status": "success" },
{ "type": "create_setup_key", "name": "Pilot-hawk-72", "status": "success" },
{ "type": "create_policy", "name": "pilots-to-gs", "status": "failed", "error": "..." }
{
"type": "create_setup_key",
"name": "Pilot-hawk-72",
"status": "success"
},
{
"type": "create_policy",
"name": "pilots-to-gs",
"status": "failed",
"error": "..."
}
],
"created_keys": {
"Pilot-hawk-72": "XXXXXX-XXXXXX-XXXXXX"
@ -135,16 +160,23 @@ Response:
}
```
`created_keys` only contains keys created in this run. CI uses this for
encrypted artifacts.
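For illustration, the CI side of this endpoint can be sketched as a small TypeScript helper. This is not part of the service; `postReconcile` is a hypothetical name, and the injected `fetchFn` is there only so the sketch can run without a live reconciler:

```typescript
// Hypothetical CI-side helper: POST the netbird.json contents to /reconcile
// and return the plan/result. Endpoint shape follows the design above.
type FetchFn = typeof fetch;

export interface ReconcileResponse {
  status: "applied" | "planned" | "error";
  operations: Array<
    { type: string; name: string; status: string; error?: string }
  >;
  created_keys?: Record<string, string>;
}

export async function postReconcile(
  baseUrl: string,
  token: string,
  desiredState: unknown,
  dryRun: boolean,
  fetchFn: FetchFn = fetch,
): Promise<ReconcileResponse> {
  // dry_run=true returns a plan; dry_run=false applies in dependency order
  const res = await fetchFn(`${baseUrl}/reconcile?dry_run=${dryRun}`, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(desiredState),
  });
  if (!res.ok) throw new Error(`reconcile request failed: ${res.status}`);
  return await res.json() as ReconcileResponse;
}
```

CI would call this with `dryRun: true` on PRs and `dryRun: false` on merge, then read `created_keys` from the result.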
### `POST /sync-events`
Forces the event poller to process pending events immediately. Returns detected
enrollments.
```json
{
"enrollments": [
{ "setup_key_name": "GS-hawk-72", "peer_id": "abc123", "renamed": true, "groups_assigned": true }
{
"setup_key_name": "GS-hawk-72",
"peer_id": "abc123",
"renamed": true,
"groups_assigned": true
}
]
}
```
@ -156,22 +188,29 @@ No auth. Returns service status for Docker healthcheck.
## Event Poller
**Mechanism:**
- Polls `GET /api/events` every 30 seconds (configurable via
`POLL_INTERVAL_SECONDS`)
- Persists `last_event_timestamp` to `/data/poller-state.json` (Docker volume)
- Loads last-known `netbird.json` desired state on startup and after each
reconcile
**Enrollment detection:**
1. Filter events for `peer.setupkey.add` activity
2. Extract `setup_key_name` from event metadata
3. Look up in desired state — if found and `enrolled: false`:
- Rename peer to match setup key name via `PUT /api/peers/{id}`
- Assign peer to groups from `setup_keys[name].auto_groups`
- Commit `enrolled: true` to git via Gitea API (optimistic concurrency with
SHA check)
- Commit message: `chore: mark {key_name} as enrolled [automated]`
4. If not found: log warning (unknown peer enrolled outside GitOps)
**Edge cases:**
- Race with reconcile: if reconcile is in progress, enrollment processing queues
until complete
- Duplicate events: idempotent — skip if peer already renamed and enrolled
- Unknown peers: logged but not touched
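The detection steps and edge cases above can be condensed into a pure decision function (a sketch with hypothetical names; the real poller additionally performs the rename/group API calls and the git commit):

```typescript
// Sketch of the enrollment decision: given a setup key name from an event
// and the desired state, decide what the poller should do. Idempotency and
// the unknown-peer rule fall out of the two guard clauses.
interface SetupKeyState {
  enrolled: boolean;
  auto_groups: string[];
}

type EnrollmentAction =
  | { kind: "process"; rename_to: string; assign_groups: string[] }
  | { kind: "skip"; reason: "already_enrolled" | "unknown_key" };

export function decideEnrollment(
  setupKeyName: string,
  desiredKeys: Record<string, SetupKeyState>,
): EnrollmentAction {
  const key = desiredKeys[setupKeyName];
  // Unknown key: peer enrolled outside GitOps — log a warning, never touch it
  if (!key) return { kind: "skip", reason: "unknown_key" };
  // Duplicate event: peer already renamed and enrolled — idempotent skip
  if (key.enrolled) return { kind: "skip", reason: "already_enrolled" };
  // Known key, not yet enrolled: rename peer to the key name, assign groups
  return {
    kind: "process",
    rename_to: setupKeyName,
    assign_groups: key.auto_groups,
  };
}
```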
@ -182,10 +221,11 @@ No auth. Returns service status for Docker healthcheck.
```yaml
on:
pull_request:
paths: ["netbird.json"]
```
Steps:
1. Checkout PR branch
2. `POST /reconcile?dry_run=true` with `netbird.json`
3. Format response as markdown table
@ -197,25 +237,27 @@ Steps:
on:
push:
branches: [main]
paths: ["netbird.json"]
```
Steps:
1. Checkout repo
2. `POST /sync-events` — process pending enrollments
3. `POST /reconcile` with `netbird.json`
4. If `created_keys` non-empty: encrypt with `age`, upload as Gitea Actions
artifact
5. Pull latest (poller may have committed)
6. On failure: job fails, engineer investigates
### Gitea Secrets
| Secret | Purpose |
| ------------------ | ------------------------------------ |
| `RECONCILER_URL` | Reconciler service URL |
| `RECONCILER_TOKEN` | Bearer token for CI auth |
| `AGE_PUBLIC_KEY` | Encrypts setup key artifacts |
| `GITEA_TOKEN` | PR comment posting (achilles-ci-bot) |
## Deployment
@ -248,20 +290,21 @@ services:
### Environment Variables
| Variable | Required | Description |
| ----------------------- | -------- | ------------------------------- |
| `NETBIRD_API_URL` | yes | NetBird management API base URL |
| `NETBIRD_API_TOKEN` | yes | NetBird API token |
| `GITEA_URL` | yes | Gitea instance URL |
| `GITEA_TOKEN` | yes | Gitea API token for commits |
| `GITEA_REPO` | yes | `owner/repo` for netbird-gitops |
| `RECONCILER_TOKEN` | yes | Bearer token for CI auth |
| `POLL_INTERVAL_SECONDS` | no | Poll interval (default: 30) |
| `PORT` | no | Listen port (default: 8080) |
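A minimal sketch of how `src/config.ts` might consume this table (Gitea variables omitted for brevity; `env` is injected so the sketch is testable without touching real environment variables — the actual module may differ):

```typescript
// Hypothetical config loader for the environment variables above.
export interface Config {
  netbirdApiUrl: string;
  netbirdApiToken: string;
  reconcilerToken: string;
  pollIntervalSeconds: number;
  port: number;
}

export function loadConfig(env: (key: string) => string | undefined): Config {
  // Required vars fail fast at startup with a clear error
  const required = (key: string): string => {
    const value = env(key);
    if (!value) throw new Error(`missing required env var: ${key}`);
    return value;
  };
  return {
    netbirdApiUrl: required("NETBIRD_API_URL"),
    netbirdApiToken: required("NETBIRD_API_TOKEN"),
    reconcilerToken: required("RECONCILER_TOKEN"),
    // Optional vars fall back to the defaults from the table
    pollIntervalSeconds: Number(env("POLL_INTERVAL_SECONDS") ?? "30"),
    port: Number(env("PORT") ?? "8080"),
  };
}
```

In the service itself, `env` would be `Deno.env.get` (which requires `--allow-env`, as in the `deno.json` tasks).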
### Container Image Build
Tag-triggered CI (`v*`) in netbird-gitops:
1. `deno compile` to single binary
2. Docker build (`FROM denoland/deno:distroless`)
3. Push to Gitea container registry
@ -269,29 +312,35 @@ Tag-triggered CI (`v*`) in netbird-gitops:
## Error Handling & Rollback
**Validation phase (before mutations):**
- Parse and validate `netbird.json` schema
- Fetch all actual state
- Compute diff and verify all operations are possible
- If validation fails: return error, no mutations
**Apply phase:**
- Execute in dependency order (groups -> keys -> peers -> policies -> routes)
- On any failure: abort immediately, return partial results
- No automatic rollback — git revert is the rollback mechanism
**Why no automatic rollback:**
- Partial rollback is harder to get right than partial apply
- Git history provides clear, auditable rollback path
- `git revert` + re-reconcile converges to correct state
- Reconciler is idempotent — running twice with same state is safe
**Recovery pattern:**
1. Reconcile fails mid-apply
2. CI job fails, engineer notified
3. Engineer either forward-fixes `netbird.json` or `git revert`s the merge
commit
4. New push triggers reconcile, converging to correct state
**Logging:**
- Structured JSON logs
- Every NetBird API call logged (method, path, status)
- Every state mutation logged (before/after)
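A one-object-per-line JSON logger along these lines would satisfy the API-call requirement (illustrative sketch; field names are not prescribed by this design):

```typescript
// Sketch of structured JSON logging for NetBird API calls.
// Emits one JSON object per line; logs method/path/status only —
// never request bodies or tokens.
export function logApiCall(
  method: string,
  path: string,
  status: number,
): string {
  const line = JSON.stringify({
    ts: new Date().toISOString(),
    level: status < 400 ? "info" : "error",
    msg: "netbird_api_call",
    method,
    path,
    status,
  });
  console.log(line);
  return line;
}
```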
@ -299,11 +348,11 @@ Tag-triggered CI (`v*`) in netbird-gitops:
## Resources Managed
| Resource | NetBird API | Create | Update | Delete |
| ---------- | ---------------------- | ---------------- | -------------------- | ------ |
| Groups | `/api/groups` | yes | yes (peers) | yes |
| Setup Keys | `/api/setup-keys` | yes | no (immutable) | yes |
| Peers | `/api/peers` | no (self-enroll) | yes (rename, groups) | yes |
| Policies | `/api/policies` | yes | yes | yes |
| Routes | `/api/routes` | yes | yes | yes |
| DNS | `/api/dns/nameservers` | yes | yes | yes |
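The table above implies an operation vocabulary and a resource-level apply order, sketched here with hypothetical names (the real `src/reconcile/operations.ts` may differ in naming and detail fields):

```typescript
// Hypothetical subset of the operation union implied by the table:
// setup keys get no update op (immutable), peers get no create op
// (they self-enroll).
export type Operation =
  | { type: "create_group"; name: string }
  | { type: "update_group"; name: string; details?: Record<string, unknown> }
  | { type: "delete_group"; name: string; details: { id: string } }
  | { type: "create_setup_key"; name: string }
  | { type: "delete_setup_key"; name: string; details: { id: number } }
  | { type: "update_peer"; name: string; details?: Record<string, unknown> }
  | { type: "delete_peer"; name: string; details: { id: string } };

// Dependency order from the design: groups -> keys -> peers -> policies
// -> routes (DNS last here as an assumption).
export const RESOURCE_ORDER = [
  "group",
  "setup_key",
  "peer",
  "policy",
  "route",
  "dns",
] as const;
```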

View File

@ -1,18 +1,27 @@
# NetBird Reconciler Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to
> implement this plan task-by-task.
**Goal:** Build a Deno-based HTTP service that reconciles NetBird VPN
configuration from a declarative `netbird.json` state file, with event-driven
peer enrollment detection and Gitea Actions CI integration.
**Architecture:** Three-layer design. A NetBird API client wraps all management
API calls. A reconciliation engine diffs desired vs actual state and produces an
ordered operation plan. An HTTP server exposes `/reconcile`, `/sync-events`, and
`/health` endpoints. A background event poller detects peer enrollments and
commits state updates via Gitea API.
**Tech Stack:** Deno 2.x, Zod (schema validation), Deno standard library (HTTP
server), Docker
---
## Task 0: Scaffold project structure
**Files:**
- Create: `deno.json`
- Create: `src/main.ts`
- Create: `src/config.ts`
@ -108,8 +117,7 @@ ENTRYPOINT ["reconciler"]
**Step 6: Verify project compiles**
Run: `deno check src/main.ts`
Expected: no errors
**Step 7: Commit**
@ -122,13 +130,15 @@ feat: scaffold netbird-reconciler project
## Task 1: NetBird API client — types and base client
**Files:**
- Create: `src/netbird/types.ts`
- Create: `src/netbird/client.ts`
- Create: `src/netbird/client.test.ts`
**Step 1: Define NetBird API response types in `src/netbird/types.ts`**
These types model the NetBird Management API responses. Only the fields we need for reconciliation are included.
These types model the NetBird Management API responses. Only the fields we need
for reconciliation are included.
```typescript
/** Group as returned by GET /api/groups */
@ -242,7 +252,8 @@ export interface NbEvent {
**Step 2: Write test for base HTTP client in `src/netbird/client.test.ts`**
Test the client can be constructed and makes authenticated requests. Use a mock
fetch pattern: inject a fetch function so tests don't hit a real API.
```typescript
import { assertEquals } from "jsr:@std/assert";
@ -255,8 +266,8 @@ function mockFetch(
const url = typeof input === "string"
? input
: input instanceof URL
? input.toString()
: input.url;
const method = init?.method ?? "GET";
const key = `${method} ${url}`;
const resp = responses.get(key);
@ -271,13 +282,24 @@ function mockFetch(
}
Deno.test("NetbirdClient.listGroups sends auth header and parses response", async () => {
const groups = [{
id: "g1",
name: "pilots",
peers_count: 1,
peers: [],
issued: "api",
}];
const client = new NetbirdClient(
"https://nb.example.com/api",
"test-token",
mockFetch(
new Map([
["GET https://nb.example.com/api/groups", {
status: 200,
body: groups,
}],
]),
),
);
const result = await client.listGroups();
assertEquals(result.length, 1);
@ -288,9 +310,14 @@ Deno.test("NetbirdClient throws on non-2xx response", async () => {
const client = new NetbirdClient(
"https://nb.example.com/api",
"test-token",
mockFetch(
new Map([
["GET https://nb.example.com/api/groups", {
status: 401,
body: { message: "unauthorized" },
}],
]),
),
);
try {
await client.listGroups();
@ -303,8 +330,8 @@ Deno.test("NetbirdClient throws on non-2xx response", async () => {
**Step 3: Run tests to verify they fail**
Run: `deno test src/netbird/client.test.ts`
Expected: FAIL — `NetbirdClient` not found
**Step 4: Implement base client in `src/netbird/client.ts`**
@ -340,7 +367,11 @@ export class NetbirdClient {
private readonly fetchFn: FetchFn = fetch,
) {}
private async request<T>(
method: string,
path: string,
body?: unknown,
): Promise<T> {
const url = `${this.baseUrl}${path}`;
const headers: Record<string, string> = {
"Authorization": `Token ${this.token}`,
@ -371,7 +402,11 @@ export class NetbirdClient {
return this.request("POST", "/groups", { name, peers });
}
updateGroup(
id: string,
name: string,
peers: string[] = [],
): Promise<NbGroup> {
return this.request("PUT", `/groups/${id}`, { name, peers });
}
@ -546,8 +581,7 @@ export class NetbirdClient {
**Step 5: Run tests**
Run: `deno test src/netbird/client.test.ts`
Expected: PASS
**Step 6: Commit**
@ -560,6 +594,7 @@ feat: add NetBird API client with types and tests
## Task 2: State file schema and validation
**Files:**
- Create: `src/state/schema.ts`
- Create: `src/state/schema.test.ts`
@ -626,7 +661,9 @@ export const DesiredStateSchema = z.object({
routes: z.record(z.string(), RouteSchema).default({}),
dns: z
.object({
nameserver_groups: z.record(z.string(), DnsNameserverGroupSchema).default(
{},
),
})
.default({}),
});
@ -775,7 +812,8 @@ Deno.test("DesiredStateSchema parses valid state", () => {
Deno.test("DesiredStateSchema rejects invalid setup key type", () => {
const invalid = structuredClone(VALID_STATE);
(invalid.setup_keys["GS-hawk-72"] as Record<string, unknown>).type = "invalid";
(invalid.setup_keys["GS-hawk-72"] as Record<string, unknown>).type =
"invalid";
assertThrows(() => DesiredStateSchema.parse(invalid));
});
@ -806,8 +844,7 @@ Deno.test("validateCrossReferences catches peer without setup key", () => {
**Step 3: Run tests**
Run: `deno test src/state/schema.test.ts`
Expected: PASS
**Step 4: Commit**
@ -820,17 +857,28 @@ feat: add desired state schema with Zod validation and cross-reference checks
## Task 3: Actual state fetcher
**Files:**
- Create: `src/state/actual.ts`
- Create: `src/state/actual.test.ts`
The actual state fetcher calls all NetBird list endpoints and normalizes the
results into a structure that can be compared with the desired state. The key
job is building bidirectional name<->ID mappings.
**Step 1: Write test in `src/state/actual.test.ts`**
```typescript
import { assertEquals } from "jsr:@std/assert";
import { fetchActualState } from "./actual.ts";
import type {
NbDnsNameserverGroup,
NbEvent,
NbGroup,
NbPeer,
NbPolicy,
NbRoute,
NbSetupKey,
} from "../netbird/types.ts";
/** Minimal mock NetBird client that returns predetermined data */
function mockClient(data: {
@ -859,9 +907,17 @@ Deno.test("fetchActualState builds name-to-id maps", async () => {
],
setupKeys: [
{
id: 1,
name: "Pilot-hawk-72",
type: "one-off",
key: "masked",
expires: "2026-04-01T00:00:00Z",
valid: true,
revoked: false,
used_times: 0,
state: "valid",
auto_groups: ["g1"],
usage_limit: 1,
},
],
}));
@ -920,7 +976,9 @@ type ClientLike = Pick<
| "listDnsNameserverGroups"
>;
export async function fetchActualState(
client: ClientLike,
): Promise<ActualState> {
const [groups, setupKeys, peers, policies, routes, dns] = await Promise.all([
client.listGroups(),
client.listSetupKeys(),
@ -956,8 +1014,7 @@ export async function fetchActualState(client: ClientLike): Promise<ActualState>
**Step 4: Run tests**
Run: `deno test src/state/actual.test.ts`
Expected: PASS
**Step 5: Commit**
@ -970,11 +1027,13 @@ feat: add actual state fetcher with name/ID indexing
## Task 4: Diff engine — compute operations
**Files:**
- Create: `src/reconcile/diff.ts`
- Create: `src/reconcile/operations.ts`
- Create: `src/reconcile/diff.test.ts`
This is the core reconciliation logic. It compares desired state against actual
state and produces an ordered list of operations.
**Step 1: Define operation types in `src/reconcile/operations.ts`**
@ -1048,12 +1107,20 @@ import type { ActualState } from "../state/actual.ts";
function emptyActual(): ActualState {
return {
groups: [],
groupsByName: new Map(),
groupsById: new Map(),
setupKeys: [],
setupKeysByName: new Map(),
peers: [],
peersByName: new Map(),
peersById: new Map(),
policies: [],
policiesByName: new Map(),
routes: [],
routesByNetworkId: new Map(),
dns: [],
dnsByName: new Map(),
};
}
@ -1061,8 +1128,11 @@ const DESIRED: DesiredState = {
groups: { pilots: { peers: ["Pilot-hawk-72"] } },
setup_keys: {
"Pilot-hawk-72": {
type: "one-off", expires_in: 604800, usage_limit: 1,
auto_groups: ["pilots"], enrolled: false,
type: "one-off",
expires_in: 604800,
usage_limit: 1,
auto_groups: ["pilots"],
enrolled: false,
},
},
policies: {},
@ -1080,14 +1150,25 @@ Deno.test("computeDiff against empty actual produces create ops", () => {
Deno.test("computeDiff with matching state produces no ops", () => {
const actual = emptyActual();
actual.groupsByName.set("pilots", {
id: "g1", name: "pilots", peers_count: 1,
peers: [{ id: "p1", name: "Pilot-hawk-72" }], issued: "api",
id: "g1",
name: "pilots",
peers_count: 1,
peers: [{ id: "p1", name: "Pilot-hawk-72" }],
issued: "api",
});
actual.groups = [actual.groupsByName.get("pilots")!];
actual.setupKeysByName.set("Pilot-hawk-72", {
id: 1,
name: "Pilot-hawk-72",
type: "one-off",
key: "masked",
expires: "2026-04-01T00:00:00Z",
valid: true,
revoked: false,
used_times: 0,
state: "valid",
auto_groups: ["g1"],
usage_limit: 1,
});
actual.setupKeys = [actual.setupKeysByName.get("Pilot-hawk-72")!];
const ops = computeDiff(DESIRED, actual);
@ -1101,7 +1182,8 @@ Run: `deno test src/reconcile/diff.test.ts`
**Step 4: Implement `src/reconcile/diff.ts`**
This is a large module. The diff compares each resource type and produces
operations.
```typescript
import type { DesiredState } from "../state/schema.ts";
@ -1139,7 +1221,11 @@ export function computeDiff(
// Groups in actual but not in desired — delete (only API-issued, not system groups)
for (const group of actual.groups) {
if (!desiredGroupNames.has(group.name) && group.issued === "api") {
ops.push({ type: "delete_group", name: group.name, details: { id: group.id } });
ops.push({
type: "delete_group",
name: group.name,
details: { id: group.id },
});
}
}
@ -1167,7 +1253,11 @@ export function computeDiff(
}
for (const key of actual.setupKeys) {
if (!desiredKeyNames.has(key.name)) {
ops.push({ type: "delete_setup_key", name: key.name, details: { id: key.id } });
ops.push({
type: "delete_setup_key",
name: key.name,
details: { id: key.id },
});
}
}
@ -1234,7 +1324,11 @@ export function computeDiff(
}
for (const policy of actual.policies) {
if (!desiredPolicyNames.has(policy.name)) {
ops.push({ type: "delete_policy", name: policy.name, details: { id: policy.id } });
ops.push({
type: "delete_policy",
name: policy.name,
details: { id: policy.id },
});
}
}
@ -1304,7 +1398,8 @@ export function computeDiff(
} else {
const needsUpdate = existing.enabled !== dns.enabled ||
existing.primary !== dns.primary ||
JSON.stringify(existing.nameservers) !==
JSON.stringify(dns.nameservers);
if (needsUpdate) {
ops.push({
type: "update_dns",
@ -1329,8 +1424,7 @@ export function computeDiff(
**Step 5: Run tests**
Run: `deno test src/reconcile/diff.test.ts`
Expected: PASS
**Step 6: Commit**
@ -1343,10 +1437,13 @@ feat: add diff engine computing operations from desired vs actual state
## Task 5: Operation executor
**Files:**
- Create: `src/reconcile/executor.ts`
- Create: `src/reconcile/executor.test.ts`
The executor takes a list of operations and applies them against the NetBird
API. It resolves names to IDs (since the diff uses names but the API needs IDs),
executes in order, and aborts on first failure.
**Step 1: Write test in `src/reconcile/executor.test.ts`**
@ -1363,13 +1460,23 @@ Deno.test("executor calls createGroup for create_group op", async () => {
const mockClient = {
createGroup: (name: string) => {
calls.push(`createGroup:${name}`);
return Promise.resolve({
id: "new-g1",
name,
peers_count: 0,
peers: [],
issued: "api" as const,
});
},
};
const ops: Operation[] = [
{ type: "create_group", name: "pilots" },
];
const results = await executeOperations(
ops,
mockClient as never,
emptyActual(),
);
assertEquals(calls, ["createGroup:pilots"]);
assertEquals(results[0].status, "success");
});
{ type: "create_group", name: "pilots" },
{ type: "create_setup_key", name: "key1" },
];
const results = await executeOperations(
ops,
mockClient as never,
emptyActual(),
);
assertEquals(results[0].status, "failed");
assertEquals(results.length, 1); // second op never executed
});
function emptyActual(): ActualState {
return {
groups: [],
groupsByName: new Map(),
groupsById: new Map(),
setupKeys: [],
setupKeysByName: new Map(),
peers: [],
peersByName: new Map(),
peersById: new Map(),
policies: [],
policiesByName: new Map(),
routes: [],
routesByNetworkId: new Map(),
dns: [],
dnsByName: new Map(),
};
}
```
**Step 3: Implement `src/reconcile/executor.ts`**
The executor is a large switch/case that dispatches each operation type to the
correct client method. It needs the actual state to resolve group name -> ID for
policies/routes, and it tracks newly created group IDs to use in subsequent
operations.
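The name-to-ID resolution it needs could be sketched like this (hypothetical helper names; the real logic lives inside `executor.ts` below):

```typescript
// Hypothetical sketch of the name -> ID resolution the executor performs.
// `actualByName` mirrors actual.groupsByName; `createdIds` accumulates IDs of
// groups created earlier in the same run, so later ops can reference them.
function makeGroupResolver(
  actualByName: Map<string, { id: string }>,
  createdIds: Map<string, string>,
) {
  return function resolveGroupIds(names: string[]): string[] {
    return names.map((name) => {
      const id = actualByName.get(name)?.id ?? createdIds.get(name);
      if (!id) {
        throw new Error(`cannot resolve group "${name}" to an ID`);
      }
      return id;
    });
  };
}
```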
```typescript
import type { NetbirdClient } from "../netbird/client.ts";
case "create_route": {
const d = op.details!;
const peerGroupIds = resolveGroupIds(d.peer_groups as string[]);
const distGroupIds = resolveGroupIds(
d.distribution_groups as string[],
);
await client.createRoute({
description: (d.description as string) ?? "",
network_id: op.name,
case "update_route": {
const d = op.details!;
const peerGroupIds = resolveGroupIds(d.peer_groups as string[]);
const distGroupIds = resolveGroupIds(
d.distribution_groups as string[],
);
await client.updateRoute(d.id as string, {
description: (d.description as string) ?? "",
network_id: op.name,
await client.createDnsNameserverGroup({
name: op.name,
description: (d.description as string) ?? "",
nameservers: d.nameservers as Array<
{ ip: string; ns_type: string; port: number }
>,
enabled: d.enabled as boolean,
groups: groupIds,
primary: d.primary as boolean,
domains: d.domains as string[],
search_domains_enabled: (d.search_domains_enabled as boolean) ??
false,
});
break;
}
await client.updateDnsNameserverGroup(d.id as string, {
name: op.name,
description: (d.description as string) ?? "",
nameservers: d.nameservers as Array<
{ ip: string; ns_type: string; port: number }
>,
enabled: d.enabled as boolean,
groups: groupIds,
primary: d.primary as boolean,
domains: d.domains as string[],
search_domains_enabled: (d.search_domains_enabled as boolean) ??
false,
});
break;
}
}
```
Note: The `createdKeys` map is local to `executeOperations` right now. Refactor
the return type to include it:
```typescript
export interface ExecutionResult {
}
```
Update the function signature and return accordingly. The test should verify
`createdKeys` is populated when a setup key is created.
**Step 4: Run tests**
Run: `deno test src/reconcile/executor.test.ts`
Expected: PASS
**Step 5: Commit**
feat: add operation executor with abort-on-failure semantics
## Task 6: Event poller
**Files:**
- Create: `src/poller/poller.ts`
- Create: `src/poller/poller.test.ts`
**Step 4: Run tests**
Run: `deno test src/poller/poller.test.ts`
Expected: PASS
**Step 5: Commit**
feat: add enrollment event detection from NetBird audit events
## Task 7: Gitea API client (for state commits)
**Files:**
- Create: `src/gitea/client.ts`
- Create: `src/gitea/client.test.ts`
const url = typeof input === "string"
? input
: input instanceof URL
? input.toString()
: input.url;
const method = init?.method ?? "GET";
const key = `${method} ${url}`;
const resp = responses.get(key);
"https://gitea.example.com",
"test-token",
"BlastPilot/netbird-gitops",
    mockFetch(
      new Map([
        [
          "GET https://gitea.example.com/api/v1/repos/BlastPilot/netbird-gitops/contents/netbird.json?ref=main",
          {
            status: 200,
            body: {
              content: btoa('{"groups":{}}'),
              sha: "abc123",
            },
          },
        ],
      ]),
    ),
  );
const result = await client.getFileContent("netbird.json", "main");
assertEquals(result.sha, "abc123");
});
if (!resp.ok) {
const text = await resp.text().catch(() => "");
throw new Error(
`Gitea API ${method} ${path} returned ${resp.status}: ${text}`,
);
}
return resp.json() as Promise<T>;
}
**Step 4: Run tests**
Run: `deno test src/gitea/client.test.ts`
Expected: PASS
**Step 5: Commit**
feat: add Gitea API client for state commits and PR comments
## Task 8: Poller background loop with Gitea commit
**Files:**
- Create: `src/poller/loop.ts`
- Create: `src/poller/state.ts`
This task wires the enrollment detection from Task 6 to the Gitea client from
Task 7, creating the full background polling loop.
**Step 1: Create `src/poller/state.ts`** — persists poll state to disk
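A minimal sketch of what that module might look like (assumed shape and function names; storage is injected here to keep the sketch runtime-agnostic, whereas the real module would read and write a JSON file on disk):

```typescript
// Hypothetical sketch of the poll-state shape: remember the timestamp of the
// last processed audit event so a restart does not re-handle old enrollments.
interface PollState {
  lastEventTimestamp: string; // ISO 8601
}

function loadPollState(read: () => string | null): PollState {
  const raw = read();
  if (raw === null) {
    // First run: start from the epoch so all existing events are considered.
    return { lastEventTimestamp: new Date(0).toISOString() };
  }
  return JSON.parse(raw) as PollState;
}

function savePollState(state: PollState, write: (text: string) => void): void {
  write(JSON.stringify(state));
}
```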
export async function pollOnce(ctx: PollerContext): Promise<void> {
if (ctx.reconcileInProgress.value) {
console.log(
JSON.stringify({ msg: "poll_deferred", reason: "reconcile_in_progress" }),
);
return;
}
feat: add poller background loop with Gitea state commit
## Task 9: HTTP server
**Files:**
- Modify: `src/main.ts`
- Create: `src/server.ts`
**Step 1: Create `src/server.ts`**
This is the HTTP server with three endpoints. It wires together all the
components.
```typescript
import type { Config } from "./config.ts";
import { fetchActualState } from "./state/actual.ts";
import { computeDiff } from "./reconcile/diff.ts";
import { executeOperations } from "./reconcile/executor.ts";
import { type PollerContext, pollOnce } from "./poller/loop.ts";
export interface ServerContext {
config: Config;
type: op.type,
name: op.name,
})),
summary: summarize(
operations.map((op) => ({ ...op, status: "pending" })),
),
});
}
): { created: number; updated: number; deleted: number; failed: number } {
let created = 0, updated = 0, deleted = 0, failed = 0;
for (const r of results) {
if (r.status === "failed") {
failed++;
continue;
}
if (r.type.startsWith("create_")) created++;
else if (r.type.startsWith("update_") || r.type === "rename_peer") {
updated++;
} else if (r.type.startsWith("delete_")) deleted++;
}
return { created, updated, deleted, failed };
}
**Step 3: Verify compilation**
Run: `deno check src/main.ts`
Expected: no errors
**Step 4: Commit**
feat: add HTTP server with /reconcile, /sync-events, /health endpoints
## Task 10: CI workflow files
**Files:**
- Create: `.gitea/workflows/dry-run.yml`
- Create: `.gitea/workflows/reconcile.yml`
- Create: `.gitea/workflows/release.yml`
name: Dry Run
on:
pull_request:
paths:
- "netbird.json"
jobs:
dry-run:
branches:
- main
paths:
- "netbird.json"
jobs:
reconcile:
name: Release
on:
push:
tags:
- "v*"
jobs:
build:
feat: add Gitea Actions CI workflows for dry-run, reconcile, and release
## Task 11: Seed `netbird.json` with initial state
**Files:**
- Create: `netbird.json`
**Step 1: Create the initial state file**
This should reflect the current BlastPilot NetBird configuration. Start minimal
— populate with actual groups/policies after deploying the reconciler and
importing existing state.
```json
{
}
```

feat: add empty netbird.json state file
## Task 12: Docker Compose deployment config
**Files:**
- Create: `deploy/docker-compose.yml`
- Create: `deploy/.env.example`
feat: add Docker Compose deployment config
## Task 13: Integration test with mock NetBird server
**Files:**
- Create: `src/integration.test.ts`
Write an end-to-end test that starts the HTTP server, posts a reconcile request
with a known desired state against a mock NetBird API, and verifies the correct
API calls were made.
**Step 1: Write integration test**
const mockFetch: typeof fetch = async (input, init) => {
const url = typeof input === "string"
? input
: input instanceof URL
? input.toString()
: input.url;
const path = url.replace("https://nb.test/api", "");
const method = init?.method ?? "GET";
calls.push({
if (method === "POST" && path === "/groups") {
const body = JSON.parse(init?.body as string);
return new Response(
JSON.stringify({
id: `g-${body.name}`,
name: body.name,
peers_count: 0,
peers: [],
issued: "api",
}),
{ status: 201, headers: { "Content-Type": "application/json" } },
);
}
if (method === "POST" && path === "/setup-keys") {
const body = JSON.parse(init?.body as string);
return new Response(
JSON.stringify({
id: 1,
name: body.name,
key: "TEST-KEY-12345",
state: "valid",
}),
{ status: 201, headers: { "Content-Type": "application/json" } },
);
}
const key = `${method} ${path}`;
const resp = responses.get(key);
if (!resp) {
return new Response(JSON.stringify({ message: "not found" }), {
status: 404,
});
}
return new Response(JSON.stringify(resp.body), {
status: resp.status,
**Step 2: Run tests**
Run: `deno test src/integration.test.ts`
Expected: PASS
**Step 3: Commit**
test: add integration tests for reconcile HTTP endpoint
## Task 14: Update blastpilot-public enrollment pipeline
**Files:**
- Modify: `../blastpilot-public/api/src/services/enrollment-pipeline.ts`
- Modify: `../blastpilot-public/api/src/services/netbird.ts`
This task modifies the enrollment pipeline to write to `netbird.json` in the
`netbird-gitops` repo instead of creating `peers/enrollment-{N}.json` files.
**Step 1: Update `handleApproval()` in enrollment-pipeline.ts**
Change from creating a standalone peer JSON file to modifying `netbird.json`:
- Fetch current `netbird.json` from `netbird-gitops` repo via Gitea API
- Add setup key entries for GS and Pilot
- Add peer references to appropriate groups
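The state transformation in the middle of those steps could be sketched as follows (hypothetical helper and key-naming scheme modeled on the seed state in this repo; the Gitea fetch and commit steps around it are elided):

```typescript
// Hypothetical sketch: add GS/Pilot setup keys for an approved enrollment to
// the desired state. Field shapes follow the seed netbird.json in this repo.
interface DesiredState {
  groups: Record<string, { peers: string[] }>;
  setup_keys: Record<string, {
    type: string;
    expires_in: number;
    usage_limit: number;
    auto_groups: string[];
    enrolled: boolean;
  }>;
}

function addEnrollmentKeys(
  state: DesiredState,
  callsign: string,
): DesiredState {
  // Deep-copy so the caller's copy of netbird.json is left untouched.
  const next: DesiredState = JSON.parse(JSON.stringify(state));
  const entries: Array<[string, string[]]> = [
    [`GS-${callsign}`, ["ground-stations"]],
    [`Pilot-${callsign}`, ["pilots"]],
  ];
  for (const [keyName, autoGroups] of entries) {
    next.setup_keys[keyName] = {
      type: "one-off",
      expires_in: 604800, // 7 days
      usage_limit: 1,
      auto_groups: autoGroups,
      enrolled: false,
    };
  }
  return next;
}
```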
**Step 2: Remove direct NetBird API calls from `handlePRMerge()`**
The reconciler now handles key creation. `handlePRMerge()` should be simplified
or removed (key delivery is manual for now).
**Step 3: Update tests**
# Reconciler PoC Validation — Design Document
> **Status:** Draft **Author:** @prox **Date:** 2026-03-06 **Depends on:**
> [NetBird Reconciler Design](2026-03-03-netbird-reconciler-design.md)
## Goal
Validate the reconciler end-to-end on a fresh, isolated NetBird instance before
pointing it at production. Prove that:
1. Declaring state in `netbird.json` → reconcile → resources appear in NetBird.
2. Event poller detects peer enrollment and renames the peer.
3. State export from a live NetBird instance produces a valid `netbird.json`.
## Scope
### In scope
- Deploy a self-contained stack on VPS-A (`vps-a.networkmonitor.cc`): fresh
NetBird, Caddy, Gitea, and reconciler — all via Docker Compose.
- `GITEA_ENABLED` feature flag so the reconciler works without Gitea
integration.
- State export tool: `GET /export` endpoint + `--export` CLI flag.
- Core reconcile: groups, setup keys, policies created via `/reconcile`.
- Event poller: detect enrollment, rename peer — with or without Gitea
commit-back.
### Out of scope (deferred)
- Enrollment pipeline integration (docs site → Gitea PR).
- CI workflows (dry-run on PR, reconcile on merge).
- Production deployment to real NetBird environments.
- Key encryption with `age` / artifact upload.
## Architecture
```
VPS-A (vps-a.networkmonitor.cc)
├── Caddy (reverse proxy, HTTPS, ACME)
│ ├── / → NetBird Dashboard
│ ├── /api → NetBird Management API
│ ├── /signalexchange → Signal (gRPC)
│ ├── /relay → Relay
│ └── /reconciler/* → Reconciler HTTP API
├── NetBird Management (config, IdP, API)
├── NetBird Signal (gRPC peer coordination)
├── NetBird Relay (data relay for NATed peers)
├── Coturn (STUN/TURN)
├── Gitea (hosts netbird-gitops repo)
└── Reconciler (reconcile API + event poller)
```
All containers share a single Docker Compose stack with a common network. Caddy
terminates TLS and routes by path prefix.
## Changes to Reconciler
### 1. Feature Flag: `GITEA_ENABLED`
New environment variable. Default: `true` (backward compatible).
**When `GITEA_ENABLED=false`:**
| Component | Behavior |
| ----------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| Config validation | Skip `GITEA_*` env var requirements |
| Startup | Don't create Gitea client |
| `POST /reconcile` | Works normally — accepts `netbird.json` from request body, applies to NetBird API |
| Event poller | Still runs. Detects `peer.setupkey.add` events, renames peers. Skips commit-back of `enrolled: true`. Logs enrollment instead. |
| `GET /export` | Works normally — no Gitea dependency |
**When `GITEA_ENABLED=true`:** Current behavior, unchanged.
**Affected files:**
- `src/config.ts` — conditional Gitea env var validation
- `src/main.ts` — conditional Gitea client creation, pass flag to poller
- `src/poller/loop.ts` — guard commit-back behind flag
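A sketch of the conditional validation (assumed names, not the actual `src/config.ts` API):

```typescript
// Hypothetical sketch of conditional Gitea config validation. The env lookup
// is injected so the sketch is testable without touching the real environment.
type Env = (name: string) => string | undefined;

function resolveGiteaConfig(env: Env): { enabled: boolean; token?: string } {
  // Default is true for backward compatibility; only the literal string
  // "false" disables the integration.
  const enabled = env("GITEA_ENABLED") !== "false";
  if (!enabled) return { enabled };
  const token = env("GITEA_TOKEN");
  if (token === undefined || token.length === 0) {
    throw new Error("GITEA_TOKEN is required when GITEA_ENABLED=true");
  }
  return { enabled, token };
}
```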
### 2. State Export
New module: `src/export.ts`
Transforms `ActualState` (from `src/state/actual.ts`) into a valid
`netbird.json` conforming to `DesiredStateSchema`.
**Mapping logic:**
| NetBird resource | Export strategy |
| --------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| Groups | Map ID → name. Skip auto-generated groups (`All`, `ch-` prefixed). Peer refs mapped to setup key names where possible, otherwise peer hostname. |
| Setup keys | Export with current config. Set `enrolled: true` if `used_times >= usage_limit`, else `false`. |
| Policies | Map source/destination group IDs → names. Include port rules. |
| Routes | Map group IDs → names, include network CIDRs. |
| DNS nameserver groups | Map group refs → names. |
**Interfaces:**
```
GET /export
→ 200: { state: <netbird.json content>, meta: { exported_at, source_url, groups_count, ... } }
CLI: deno run src/main.ts --export --netbird-api-url <url> --netbird-api-token <token>
→ stdout: netbird.json content
```
The CLI mode is standalone — it creates a NetBird client, fetches state,
exports, and exits. No HTTP server started.
**Affected files:**
- `src/export.ts` — new: transformation logic
- `src/server.ts` — new endpoint: `GET /export`
- `src/main.ts` — new CLI flag: `--export`
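The group-mapping step of the table above could be sketched as (hypothetical helpers; the real transformation in `src/export.ts` covers all resource types):

```typescript
// Hypothetical sketch: drop auto-generated groups and rewrite API group-ID
// references back to names for netbird.json.
interface ApiGroup {
  id: string;
  name: string;
}

function exportableGroupsById(groups: ApiGroup[]): Map<string, ApiGroup> {
  const byId = new Map<string, ApiGroup>();
  for (const g of groups) {
    if (g.name === "All" || g.name.startsWith("ch-")) continue; // auto-generated
    byId.set(g.id, g);
  }
  return byId;
}

function groupIdsToNames(ids: string[], byId: Map<string, ApiGroup>): string[] {
  // References to skipped (auto-generated) groups are dropped, not errors.
  return ids.flatMap((id) => {
    const g = byId.get(id);
    return g ? [g.name] : [];
  });
}
```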
### 3. No Structural Changes
The reconcile engine (`diff.ts`, `executor.ts`), NetBird client, and state
schema remain unchanged. The export tool and feature flag are additive.
## Ansible Playbook
Location: `poc/ansible/` within this repo.
```
poc/
ansible/
inventory.yml
playbook.yml
group_vars/
all/
vars.yml # domain, ports, non-secret config
vault.yml # secrets (gitignored)
vault.yml.example # template for secrets
templates/
docker-compose.yml.j2
management.json.j2 # NetBird management config (embedded IdP)
Caddyfile.j2
dashboard.env.j2
relay.env.j2
turnserver.conf.j2
reconciler.env.j2
gitea.env.j2
```
**Playbook tasks:**
1. Install Docker + Docker Compose (if not present)
2. Create working directory structure
3. Template all config files
4. Pull images, `docker compose up -d`
5. Wait for Gitea to be ready
6. Create Gitea admin user + `BlastPilot` org + `netbird-gitops` repo via API
7. Seed `netbird.json` into the repo with initial test state
**Key config decisions:**
- **Caddy** for reverse proxy (proven in existing PoC templates).
- **Embedded IdP** for NetBird (no external OAuth — same as existing PoC).
- **Secrets auto-generated** at deploy time (NetBird encryption key, TURN
secret, relay secret). Printed to stdout for operator reference.
- Reconciler env vars templated from `vault.yml` (NetBird API token, Gitea
token).
**SSH key:** `~/.ssh/hetzner` (same as docs site deployment).
**Deploy command:** `ansible-playbook -i inventory.yml playbook.yml`
## Test netbird.json
The seed state for validation:
```json
{
"groups": {
"ground-stations": { "peers": [] },
"pilots": { "peers": [] }
},
"setup_keys": {
"GS-TestHawk-1": {
"type": "one-off",
"expires_in": 604800,
"usage_limit": 1,
"auto_groups": ["ground-stations"],
"enrolled": false
},
"Pilot-TestHawk-1": {
"type": "one-off",
"expires_in": 604800,
"usage_limit": 1,
"auto_groups": ["pilots"],
"enrolled": false
}
},
"policies": {
"pilots-to-gs": {
"enabled": true,
"sources": ["pilots"],
"destinations": ["ground-stations"],
"bidirectional": true
}
},
"routes": {},
"dns": { "nameserver_groups": {} }
}
```
This creates two groups, two one-off setup keys, and a bidirectional policy
between pilots and ground stations. Minimal but sufficient to validate the full
reconcile + enrollment flow.
## Validation Plan
### Phase 1 — Deploy
1. Wipe VPS-A (or just `docker compose down -v` if redeploying).
2. Run playbook → full stack up.
3. Access NetBird dashboard at `https://vps-a.networkmonitor.cc` — verify clean
state (only default "All" group).
4. Access Gitea at `https://vps-a.networkmonitor.cc/gitea` (or dedicated port) —
verify `BlastPilot/netbird-gitops` repo exists with seed `netbird.json`.
### Phase 2 — Reconcile
5. `curl -X POST https://vps-a.networkmonitor.cc/reconciler/reconcile?dry_run=true -d @netbird.json`
→ Verify plan shows: create 2 groups, 2 setup keys, 1 policy.
6. `curl -X POST https://vps-a.networkmonitor.cc/reconciler/reconcile -d @netbird.json`
→ Verify response includes `created_keys` with actual key values.
7. Open NetBird dashboard → verify groups, setup keys, and policy exist.
8. `curl https://vps-a.networkmonitor.cc/reconciler/export` → Compare exported
state with input. Verify round-trip consistency.
### Phase 3 — Enrollment
9. Copy a setup key value from step 6 response.
10. On a test machine: `netbird up --setup-key <key>`.
11. Check NetBird dashboard: peer appears, gets auto-renamed by poller, placed
in correct group.
12. Check reconciler logs: enrollment event detected, peer renamed, log entry
written (no Gitea commit since `GITEA_ENABLED=false` for initial test).
### Phase 4 — State Export (against real instance)
13. Run CLI export against `dev.netbird.achilles-rnd.cc`:
```
deno run src/main.ts --export \
--netbird-api-url https://dev.netbird.achilles-rnd.cc/api \
--netbird-api-token <token>
```
14. Review output — validates we can bootstrap GitOps from existing environment.
15. Optionally: dry-run reconcile the exported state against the same instance —
should produce an empty plan (no changes needed).
## Success Criteria
- [ ] Reconcile creates all declared resources in NetBird.
- [ ] Dry-run returns accurate plan without side effects.
- [ ] Export produces valid `netbird.json` from a live instance.
- [ ] Export → dry-run round-trip yields empty plan (idempotent).
- [ ] Poller detects enrollment and renames peer within 30s.
- [ ] Reconciler starts and operates correctly with `GITEA_ENABLED=false`.
- [ ] Reconciler starts and operates correctly with `GITEA_ENABLED=true` +
Gitea.
## Risks
| Risk | Mitigation |
| ------------------------------------------------------------- | -------------------------------------------------------------------------- |
| NetBird Management API behavior differs from docs | Testing against real instance; reconciler has comprehensive error handling |
| Export misses edge cases in resource mapping | Validate with dry-run round-trip (export → reconcile → empty plan) |
| Poller misses events during 30s poll interval | Acceptable for PoC; production can tune interval or add webhook trigger |
| VPS-A resources (2 vCPU, 4GB RAM) insufficient for full stack | Monitor; NetBird + Gitea are lightweight individually |
poc/README.md
# NetBird GitOps Reconciler — PoC Deployment
Ansible playbook that deploys a self-contained stack on VPS-A for end-to-end
testing of the NetBird GitOps reconciler.
## Stack overview
| Component | Purpose |
| ---------- | ------------------------------------------- |
| Caddy | TLS termination, reverse proxy |
| NetBird | Management, Signal, Relay, Dashboard, TURN |
| Reconciler | Declarative config → NetBird API reconciler |
| Gitea | Local Git server (optional, off by default) |
All services run as Docker containers on a single VPS, connected via a `netbird`
Docker bridge network. Caddy handles ACME certificates automatically.
## Prerequisites
- SSH access to `46.225.220.61` (root, key-based)
- DNS A record: `vps-a.networkmonitor.cc` → `46.225.220.61`
- `rsync` installed locally (used to sync reconciler source)
- Ansible 2.15+ with `community.general` and `ansible.posix` collections
Install collections if needed:
```bash
ansible-galaxy collection install community.general ansible.posix
```
## Setup
### 1. Create vault file
```bash
cd poc/ansible
cp group_vars/all/vault.yml.example group_vars/all/vault.yml
```
For the first deploy, leave all values as empty strings — the playbook
auto-generates NetBird secrets and a reconciler token.
### 2. Deploy
```bash
cd poc/ansible
ansible-playbook -i inventory.yml playbook.yml
```
The playbook will:
1. Generate secrets (encryption key, TURN password, relay secret, reconciler
token)
2. Install Docker if not present
3. Configure UFW firewall
4. Rsync the reconciler source code to VPS-A
5. Template all config files
6. Build the reconciler Docker image on VPS-A
7. Pull NetBird/Gitea images and start all services
8. Run health checks and print a summary with generated secrets
**Save the generated secrets** printed at the end into `vault.yml` so subsequent
runs are idempotent.
### 3. Create NetBird admin + API token
1. Open `https://vps-a.networkmonitor.cc` in a browser
2. Create the first admin account (embedded IdP — no external OAuth)
3. Go to **Settings → Personal Access Tokens → Generate**
4. Copy the token into `vault.yml` as `vault_netbird_api_token`
5. Re-run the playbook:
```bash
ansible-playbook -i inventory.yml playbook.yml
```
The reconciler will now start successfully with a valid API token.
### 4. (Optional) Enable Gitea
To enable Gitea-backed GitOps polling:
1. Open `http://vps-a.networkmonitor.cc:3000` and complete the install wizard
2. Create an admin account (user: `blastpilot`)
3. Create org `BlastPilot` and repo `netbird-gitops`
4. Push `netbird.json` to the repo
5. Generate a Gitea API token (Settings → Applications)
6. In `vars.yml`, set `gitea_enabled: "true"`
7. In `vault.yml`, fill in `vault_gitea_token` and `vault_gitea_admin_password`
8. Re-run the playbook
## Testing
All commands below assume you have the reconciler token. Replace `<TOKEN>` with
the value of `vault_reconciler_token`.
### Health check
```bash
curl https://vps-a.networkmonitor.cc/reconciler/health
```
### Dry-run reconcile
```bash
curl -X POST \
-H "Authorization: Bearer <TOKEN>" \
-H "Content-Type: application/json" \
-d @ansible/files/netbird-seed.json \
'https://vps-a.networkmonitor.cc/reconciler/reconcile?dry_run=true'
```
### Apply reconcile
```bash
curl -X POST \
-H "Authorization: Bearer <TOKEN>" \
-H "Content-Type: application/json" \
-d @ansible/files/netbird-seed.json \
'https://vps-a.networkmonitor.cc/reconciler/reconcile'
```
### Export current state
```bash
curl -H "Authorization: Bearer <TOKEN>" \
'https://vps-a.networkmonitor.cc/reconciler/export'
```
### Enroll a peer
Use a setup key from the reconcile response (`created_keys` field):
```bash
sudo netbird up --management-url https://vps-a.networkmonitor.cc --setup-key <KEY>
```
## Teardown
Remove all containers and volumes:
```bash
ssh root@46.225.220.61 "cd /opt/netbird-poc && docker compose down -v"
```
## File structure
```
poc/
ansible/
inventory.yml # VPS-A host definition
playbook.yml # Main deployment playbook
.gitignore # Excludes vault.yml
group_vars/
all/
vars.yml # Non-secret config (versions, ports, etc.)
vault.yml.example # Secret template — copy to vault.yml
templates/
docker-compose.yml.j2 # All services (NetBird + Gitea + Reconciler)
management.json.j2 # NetBird management config
Caddyfile.j2 # Caddy reverse proxy with reconciler route
dashboard.env.j2 # NetBird dashboard env
relay.env.j2 # NetBird relay env
turnserver.conf.j2 # TURN server config
reconciler.env.j2 # Reconciler env
files/
netbird-seed.json # Example desired state for testing
README.md # This file
```
poc/ansible/.gitignore
group_vars/all/vault.yml
{
"groups": {
"ground-stations": { "peers": [] },
"pilots": { "peers": [] }
},
"setup_keys": {
"GS-TestHawk-1": {
"type": "one-off",
"expires_in": 604800,
"usage_limit": 1,
"auto_groups": ["ground-stations"],
"enrolled": false
},
"Pilot-TestHawk-1": {
"type": "one-off",
"expires_in": 604800,
"usage_limit": 1,
"auto_groups": ["pilots"],
"enrolled": false
}
},
"policies": {
"pilots-to-gs": {
"enabled": true,
"sources": ["pilots"],
"destinations": ["ground-stations"],
"bidirectional": true
}
},
"routes": {},
"dns": { "nameserver_groups": {} }
}
---
# Non-secret configuration for the NetBird + Reconciler PoC stack.
netbird_domain: vps-a.networkmonitor.cc
# --- Component versions (match existing PoC) ---
netbird_version: "0.63.0"
dashboard_version: "v2.27.1"
caddy_version: "2.10.2"
coturn_version: "4.8.0-r0"
# --- Reconciler ---
reconciler_port: 8080
# --- Gitea (standalone mode by default) ---
# String "false" because it goes into an env var verbatim.
gitea_enabled: "false"
gitea_version: "1.23"
gitea_http_port: 3000
gitea_ssh_port: 2222
gitea_admin_user: "blastpilot"
gitea_org_name: "BlastPilot"
gitea_repo_name: "netbird-gitops"
# --- Paths ---
base_dir: /opt/netbird-poc
---
# Copy to vault.yml and fill in values.
# This file is gitignored — do NOT commit real secrets.
# Auto-generated by playbook if empty (leave as empty string):
vault_encryption_key: ""
vault_turn_password: ""
vault_relay_secret: ""
# Reconciler auth token (generate: openssl rand -hex 32):
vault_reconciler_token: ""
# Gitea admin password:
vault_gitea_admin_password: ""
# NetBird API token (created via dashboard after first deploy):
vault_netbird_api_token: ""
# Gitea API token (created via Gitea UI after first deploy):
vault_gitea_token: ""
---
all:
children:
poc_servers:
hosts:
netbird-poc-a:
ansible_host: 46.225.220.61
ansible_ssh_private_key_file: ~/.ssh/id_ed25519
ansible_user: root
poc/ansible/playbook.yml
---
# =============================================================================
# NetBird + Reconciler + Gitea — PoC Deployment
# =============================================================================
# Deploys a self-contained stack on VPS-A for end-to-end reconciler testing.
#
# Prerequisites:
# - SSH access to VPS-A (46.225.220.61)
# - DNS A record: vps-a.networkmonitor.cc -> 46.225.220.61
# - rsync installed locally and on VPS-A
# - poc/ansible/group_vars/all/vault.yml (copy from vault.yml.example)
#
# Run:
# cd poc/ansible
# ansible-playbook -i inventory.yml playbook.yml
# =============================================================================
- name: Deploy NetBird + Reconciler PoC on VPS-A
hosts: poc_servers
become: true
tasks:
# =========================================================================
# 1. Generate secrets (if vault values are empty)
# =========================================================================
# vault_* vars come from group_vars/all/vault.yml. When left as empty
# strings, the playbook auto-generates values. On subsequent runs with
# filled-in vault.yml, the provided values are used instead.
- name: Generate encryption key (if not provided)
ansible.builtin.shell: openssl rand -base64 32
register: _gen_encryption_key
changed_when: false
when: vault_encryption_key | default('') | length == 0
- name: Generate TURN password (if not provided)
ansible.builtin.shell: openssl rand -hex 32
register: _gen_turn_password
changed_when: false
when: vault_turn_password | default('') | length == 0
- name: Generate relay secret (if not provided)
ansible.builtin.shell: openssl rand -hex 32
register: _gen_relay_secret
changed_when: false
when: vault_relay_secret | default('') | length == 0
- name: Generate reconciler token (if not provided)
ansible.builtin.shell: openssl rand -hex 32
register: _gen_reconciler_token
changed_when: false
when: vault_reconciler_token | default('') | length == 0
- name: Set effective secrets
ansible.builtin.set_fact:
vault_encryption_key: "{{ vault_encryption_key if (vault_encryption_key | default('') | length > 0) else _gen_encryption_key.stdout }}"
vault_turn_password: "{{ vault_turn_password if (vault_turn_password | default('') | length > 0) else _gen_turn_password.stdout }}"
vault_relay_secret: "{{ vault_relay_secret if (vault_relay_secret | default('') | length > 0) else _gen_relay_secret.stdout }}"
vault_reconciler_token: "{{ vault_reconciler_token if (vault_reconciler_token | default('') | length > 0) else _gen_reconciler_token.stdout }}"
# =========================================================================
# 2. Install Docker
# =========================================================================
- name: Update apt cache
ansible.builtin.apt:
update_cache: true
cache_valid_time: 3600
- name: Install prerequisites
ansible.builtin.apt:
name:
- apt-transport-https
- ca-certificates
- curl
- gnupg
- lsb-release
- jq
- rsync
state: present
- name: Check if Docker is installed
ansible.builtin.command: docker --version
register: _docker_check
changed_when: false
failed_when: false
- name: Create keyrings directory
ansible.builtin.file:
path: /etc/apt/keyrings
state: directory
mode: "0755"
when: _docker_check.rc != 0
- name: Add Docker GPG key
ansible.builtin.shell: |
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
args:
creates: /etc/apt/keyrings/docker.gpg
when: _docker_check.rc != 0
- name: Add Docker repository
ansible.builtin.apt_repository:
repo: >-
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg]
https://download.docker.com/linux/ubuntu
{{ ansible_distribution_release }} stable
state: present
filename: docker
when: _docker_check.rc != 0
- name: Install Docker packages
ansible.builtin.apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-compose-plugin
state: present
update_cache: true
when: _docker_check.rc != 0
- name: Start and enable Docker
ansible.builtin.systemd:
name: docker
state: started
enabled: true
# =========================================================================
# 3. UFW firewall
# =========================================================================
- name: Install UFW
ansible.builtin.apt:
name: ufw
state: present
- name: Allow SSH
community.general.ufw:
rule: allow
port: "22"
proto: tcp
- name: Allow HTTP (ACME + Caddy)
community.general.ufw:
rule: allow
port: "80"
proto: tcp
- name: Allow HTTPS
community.general.ufw:
rule: allow
port: "443"
proto: tcp
- name: Allow TURN UDP
community.general.ufw:
rule: allow
port: "3478"
proto: udp
- name: Allow TURN TCP
community.general.ufw:
rule: allow
port: "3478"
proto: tcp
- name: Allow Gitea HTTP
community.general.ufw:
rule: allow
port: "{{ gitea_http_port | string }}"
proto: tcp
- name: Enable UFW (default deny incoming)
community.general.ufw:
state: enabled
policy: deny
# =========================================================================
# 4. Create directories
# =========================================================================
- name: Create base directory
ansible.builtin.file:
path: "{{ base_dir }}"
state: directory
mode: "0755"
- name: Create config directory
ansible.builtin.file:
path: "{{ base_dir }}/config"
state: directory
mode: "0755"
- name: Create reconciler source directory
ansible.builtin.file:
path: "{{ base_dir }}/reconciler-src"
state: directory
mode: "0755"
# =========================================================================
# 5. Sync reconciler source code
# =========================================================================
# Uses rsync to copy the project root (minus junk) to VPS-A so that
# `docker compose build` can build the reconciler image on the server.
- name: Sync reconciler source to VPS-A
ansible.posix.synchronize:
src: "{{ playbook_dir }}/../../"
dest: "{{ base_dir }}/reconciler-src/"
delete: true
rsync_opts:
- "--exclude=.git"
- "--exclude=node_modules"
- "--exclude=poc"
- "--exclude=data"
- "--exclude=deploy"
- "--exclude=.beads"
      # synchronize invokes rsync from the control node as the connecting
      # SSH user; running it under become would break the transfer, so
      # force become: false for this task.
become: false
# =========================================================================
# 6. Template configs
# =========================================================================
- name: Deploy docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ base_dir }}/docker-compose.yml"
mode: "0644"
register: _compose_changed
- name: Deploy management.json
ansible.builtin.template:
src: management.json.j2
dest: "{{ base_dir }}/config/management.json"
mode: "0644"
register: _management_changed
- name: Deploy Caddyfile
ansible.builtin.template:
src: Caddyfile.j2
dest: "{{ base_dir }}/config/Caddyfile"
mode: "0644"
- name: Deploy dashboard.env
ansible.builtin.template:
src: dashboard.env.j2
dest: "{{ base_dir }}/config/dashboard.env"
mode: "0640"
- name: Deploy relay.env
ansible.builtin.template:
src: relay.env.j2
dest: "{{ base_dir }}/config/relay.env"
mode: "0640"
- name: Deploy turnserver.conf
ansible.builtin.template:
src: turnserver.conf.j2
dest: "{{ base_dir }}/config/turnserver.conf"
mode: "0644"
- name: Deploy reconciler.env
ansible.builtin.template:
src: reconciler.env.j2
dest: "{{ base_dir }}/config/reconciler.env"
mode: "0640"
register: _reconciler_env_changed
# =========================================================================
# 7. Docker Compose — pull, build, up
# =========================================================================
- name: Pull Docker images
ansible.builtin.command:
cmd: docker compose pull --ignore-buildable
chdir: "{{ base_dir }}"
changed_when: true
- name: Build reconciler image
ansible.builtin.command:
cmd: docker compose build reconciler
chdir: "{{ base_dir }}"
changed_when: true
- name: Start all services
ansible.builtin.command:
cmd: docker compose up -d
chdir: "{{ base_dir }}"
changed_when: true
# =========================================================================
# 8. Health checks
# =========================================================================
- name: Wait for management container to be running
ansible.builtin.command:
cmd: docker compose ps management --format json
chdir: "{{ base_dir }}"
register: _mgmt_status
until: "'running' in _mgmt_status.stdout and 'restarting' not in _mgmt_status.stdout"
retries: 15
delay: 5
changed_when: false
- name: Wait for Caddy / HTTPS to respond
ansible.builtin.uri:
url: "https://{{ netbird_domain }}"
method: GET
status_code: 200
validate_certs: false
register: _caddy_check
until: _caddy_check.status == 200
retries: 12
delay: 5
- name: Check reconciler health (may fail if API token not yet configured)
ansible.builtin.uri:
url: "http://127.0.0.1:{{ reconciler_port }}/health"
method: GET
status_code: 200
register: _reconciler_check
failed_when: false
changed_when: false
# =========================================================================
# 9. Summary
# =========================================================================
- name: Note about NetBird API token
ansible.builtin.debug:
msg: >-
The reconciler needs a NetBird API token to function.
If vault_netbird_api_token is empty, the reconciler will crash-loop
until you create an admin account via the dashboard, generate an
API token, add it to vault.yml, and re-run the playbook.
when: vault_netbird_api_token | default('') | length == 0
- name: Deployment summary
ansible.builtin.debug:
msg: |
============================================================
NetBird + Reconciler PoC deployed on VPS-A
============================================================
Dashboard: https://{{ netbird_domain }}
Gitea: http://{{ netbird_domain }}:{{ gitea_http_port }}
Reconciler: https://{{ netbird_domain }}/reconciler/health
Reconciler status: {{ 'healthy' if (_reconciler_check.status | default(0)) == 200 else 'NOT YET READY (see note above)' }}
Generated secrets (save these to vault.yml for idempotent re-runs):
vault_encryption_key: {{ vault_encryption_key }}
vault_turn_password: {{ vault_turn_password }}
vault_relay_secret: {{ vault_relay_secret }}
vault_reconciler_token: {{ vault_reconciler_token }}
Next steps:
1. Open the dashboard and create an admin account
2. Go to Settings > API > generate a Personal Access Token
3. Put the token in vault.yml as vault_netbird_api_token
4. Re-run: ansible-playbook -i inventory.yml playbook.yml
============================================================

View File

@ -0,0 +1,46 @@
{
servers :80,:443 {
protocols h1 h2c h2 h3
}
email vlad.stus@gmail.com
}
(security_headers) {
header * {
Strict-Transport-Security "max-age=3600; includeSubDomains; preload"
X-Content-Type-Options "nosniff"
X-Frame-Options "SAMEORIGIN"
X-XSS-Protection "1; mode=block"
-Server
Referrer-Policy strict-origin-when-cross-origin
}
}
{{ netbird_domain }} {
import security_headers
# Embedded IdP OAuth2 endpoints
reverse_proxy /oauth2/* management:80
reverse_proxy /.well-known/openid-configuration management:80
reverse_proxy /.well-known/jwks.json management:80
# NetBird Relay
reverse_proxy /relay* relay:80
# NetBird Signal (gRPC)
reverse_proxy /signalexchange.SignalExchange/* h2c://signal:10000
# NetBird Management API (gRPC)
reverse_proxy /management.ManagementService/* h2c://management:80
# NetBird Management REST API
reverse_proxy /api/* management:80
# Reconciler API (strip /reconciler prefix before proxying)
handle_path /reconciler/* {
reverse_proxy reconciler:{{ reconciler_port }}
}
# NetBird Dashboard (catch-all must be last)
reverse_proxy /* dashboard:80
}
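The `handle_path` block above strips the matched `/reconciler` prefix before proxying, which is why the playbook summary points at `https://<domain>/reconciler/health` while the reconciler itself serves plain `/health`. A plain-TypeScript sketch of that rewrite (illustrative only; `stripPrefix` is a hypothetical helper, Caddy performs this internally):

```typescript
// Sketch of Caddy's handle_path semantics for the /reconciler/* matcher:
// the matched prefix is removed before the request reaches the upstream.
function stripPrefix(path: string, prefix: string): string {
  if (path === prefix || path.startsWith(prefix + "/")) {
    const rest = path.slice(prefix.length);
    return rest === "" ? "/" : rest; // bare prefix maps to the root path
  }
  return path; // non-matching paths pass through unchanged
}
```

So `/reconciler/health` is forwarded as `/health`, and unrelated routes such as `/api/*` are untouched.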

View File

@ -0,0 +1,12 @@
NETBIRD_MGMT_API_ENDPOINT=https://{{ netbird_domain }}
NETBIRD_MGMT_GRPC_API_ENDPOINT=https://{{ netbird_domain }}
AUTH_AUDIENCE=netbird-dashboard
AUTH_CLIENT_ID=netbird-dashboard
AUTH_CLIENT_SECRET=
AUTH_AUTHORITY=https://{{ netbird_domain }}/oauth2
USE_AUTH0=false
AUTH_SUPPORTED_SCOPES=openid profile email groups
AUTH_REDIRECT_URI=/nb-auth
AUTH_SILENT_REDIRECT_URI=/nb-silent-auth
NGINX_SSL_PORT=443
LETSENCRYPT_DOMAIN=none

View File

@ -0,0 +1,145 @@
services:
# ---------------------------------------------------------------------------
# NetBird core services (proven config from existing PoC)
# ---------------------------------------------------------------------------
caddy:
image: caddy:{{ caddy_version }}
restart: unless-stopped
networks:
- netbird
ports:
- "80:80"
- "443:443"
volumes:
- ./config/Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
- caddy_config:/config
logging:
driver: json-file
options:
max-size: "500m"
max-file: "2"
dashboard:
image: netbirdio/dashboard:{{ dashboard_version }}
restart: unless-stopped
networks:
- netbird
env_file:
- ./config/dashboard.env
logging:
driver: json-file
options:
max-size: "500m"
max-file: "2"
signal:
image: netbirdio/signal:{{ netbird_version }}
restart: unless-stopped
networks:
- netbird
logging:
driver: json-file
options:
max-size: "500m"
max-file: "2"
relay:
image: netbirdio/relay:{{ netbird_version }}
restart: unless-stopped
networks:
- netbird
env_file:
- ./config/relay.env
logging:
driver: json-file
options:
max-size: "500m"
max-file: "2"
management:
image: netbirdio/management:{{ netbird_version }}
restart: unless-stopped
networks:
- netbird
volumes:
- netbird_management:/var/lib/netbird
- ./config/management.json:/etc/netbird/management.json
logging:
driver: json-file
options:
max-size: "500m"
max-file: "2"
coturn:
image: coturn/coturn:{{ coturn_version }}
restart: unless-stopped
networks:
- netbird
ports:
- "3478:3478/udp"
- "3478:3478/tcp"
volumes:
- ./config/turnserver.conf:/etc/coturn/turnserver.conf:ro
logging:
driver: json-file
options:
max-size: "500m"
max-file: "2"
# ---------------------------------------------------------------------------
# Gitea (local Git server for GitOps source-of-truth)
# ---------------------------------------------------------------------------
gitea:
image: gitea/gitea:{{ gitea_version }}
restart: unless-stopped
networks:
- netbird
environment:
- GITEA__server__DOMAIN={{ netbird_domain }}
- GITEA__server__ROOT_URL=http://{{ netbird_domain }}:{{ gitea_http_port }}
- GITEA__database__DB_TYPE=sqlite3
volumes:
- gitea_data:/data
ports:
- "{{ gitea_http_port }}:3000"
- "{{ gitea_ssh_port }}:22"
logging:
driver: json-file
options:
max-size: "50m"
max-file: "2"
# ---------------------------------------------------------------------------
# NetBird GitOps Reconciler
# ---------------------------------------------------------------------------
reconciler:
build:
context: {{ base_dir }}/reconciler-src
dockerfile: Dockerfile
restart: unless-stopped
networks:
- netbird
env_file:
- ./config/reconciler.env
volumes:
- reconciler_data:/data
ports:
- "127.0.0.1:{{ reconciler_port }}:{{ reconciler_port }}"
depends_on:
- management
logging:
driver: json-file
options:
max-size: "50m"
max-file: "2"
networks:
netbird:
volumes:
netbird_management:
caddy_data:
caddy_config:
gitea_data:
reconciler_data:

View File

@ -0,0 +1,49 @@
{
"Stuns": [
{
"Proto": "udp",
"URI": "stun:{{ netbird_domain }}:3478"
}
],
"TURNConfig": {
"Turns": [
{
"Proto": "udp",
"URI": "turn:{{ netbird_domain }}:3478",
"Username": "netbird",
"Password": "{{ vault_turn_password }}"
}
],
"TimeBasedCredentials": false
},
"Relay": {
"Addresses": [
"rels://{{ netbird_domain }}:443/relay"
],
"CredentialsTTL": "168h",
"Secret": "{{ vault_relay_secret }}"
},
"Signal": {
"Proto": "https",
"URI": "{{ netbird_domain }}:443"
},
"Datadir": "/var/lib/netbird",
"DataStoreEncryptionKey": "{{ vault_encryption_key }}",
"StoreConfig": {
"Engine": "sqlite"
},
"HttpConfig": {
"Address": "0.0.0.0:80"
},
"IdpManagerConfig": {
"ManagerType": "none"
},
"EmbeddedIdP": {
"Enabled": true,
"Issuer": "https://{{ netbird_domain }}/oauth2",
"DashboardRedirectURIs": [
"https://{{ netbird_domain }}/nb-auth",
"https://{{ netbird_domain }}/nb-silent-auth"
]
}
}

View File

@ -0,0 +1,12 @@
NETBIRD_API_URL=http://management:80/api
NETBIRD_API_TOKEN={{ vault_netbird_api_token }}
RECONCILER_TOKEN={{ vault_reconciler_token }}
GITEA_ENABLED={{ gitea_enabled }}
{% if gitea_enabled == "true" %}
GITEA_URL=http://gitea:{{ gitea_http_port }}
GITEA_TOKEN={{ vault_gitea_token }}
GITEA_REPO={{ gitea_org_name }}/{{ gitea_repo_name }}
{% endif %}
POLL_INTERVAL_SECONDS=30
PORT={{ reconciler_port }}
DATA_DIR=/data

View File

@ -0,0 +1,2 @@
NB_RELAY_ADDRESS=rels://{{ netbird_domain }}:443/relay
NB_RELAY_SECRET={{ vault_relay_secret }}

View File

@ -0,0 +1,10 @@
listening-port=3478
relay-ip={{ ansible_default_ipv4.address }}
min-port=10000
max-port=20000
fingerprint
use-auth-secret
static-auth-secret={{ vault_turn_password }}
realm={{ netbird_domain }}
cert=/data/cert.pem
pkey=/data/key.pem

View File

@ -1,23 +1,65 @@
import { z } from "zod";
/**
* Base fields required regardless of Gitea integration mode.
*/
const BaseSchema = z.object({
netbirdApiUrl: z.string().url(),
netbirdApiToken: z.string().min(1),
giteaEnabled: z.string().default("true").transform((v) => v === "true"),
reconcilerToken: z.string().min(1),
pollIntervalSeconds: z.coerce.number().int().positive().default(30),
port: z.coerce.number().int().positive().default(8080),
dataDir: z.string().default("/data"),
});
const GiteaFieldsSchema = z.object({
giteaUrl: z.string().url(),
giteaToken: z.string().min(1),
giteaRepo: z.string().regex(/^[^/]+\/[^/]+$/), // owner/repo
});
/**
* When giteaEnabled=true, Gitea fields are validated and required.
* When giteaEnabled=false, Gitea fields are ignored.
*
* The output type always includes the optional Gitea fields so
* downstream code can check `config.giteaEnabled` and narrow.
*/
const ConfigSchema = BaseSchema.merge(GiteaFieldsSchema.partial())
.superRefine((data, ctx) => {
if (!data.giteaEnabled) return;
if (!data.giteaUrl) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: "GITEA_URL is required when GITEA_ENABLED=true",
path: ["giteaUrl"],
});
}
if (!data.giteaToken) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: "GITEA_TOKEN is required when GITEA_ENABLED=true",
path: ["giteaToken"],
});
}
if (!data.giteaRepo) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: "GITEA_REPO is required when GITEA_ENABLED=true",
path: ["giteaRepo"],
});
}
});
export type Config = z.infer<typeof ConfigSchema>;
export function loadConfig(): Config {
return ConfigSchema.parse({
netbirdApiUrl: Deno.env.get("NETBIRD_API_URL"),
netbirdApiToken: Deno.env.get("NETBIRD_API_TOKEN"),
giteaEnabled: Deno.env.get("GITEA_ENABLED"),
giteaUrl: Deno.env.get("GITEA_URL"),
giteaToken: Deno.env.get("GITEA_TOKEN"),
giteaRepo: Deno.env.get("GITEA_REPO"),

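The conditional requirement that `config.ts` expresses with `superRefine` can be sketched without zod as a plain check. The names here (`RawEnv`, `validateGiteaFields`) are hypothetical and not part of the repo:

```typescript
// Sketch of "require Gitea fields only when GITEA_ENABLED=true".
interface RawEnv {
  giteaEnabled: boolean;
  giteaUrl?: string;
  giteaToken?: string;
  giteaRepo?: string;
}

function validateGiteaFields(env: RawEnv): string[] {
  if (!env.giteaEnabled) return []; // standalone mode: fields are ignored
  const errors: string[] = [];
  if (!env.giteaUrl) errors.push("GITEA_URL is required when GITEA_ENABLED=true");
  if (!env.giteaToken) errors.push("GITEA_TOKEN is required when GITEA_ENABLED=true");
  if (!/^[^/]+\/[^/]+$/.test(env.giteaRepo ?? "")) {
    errors.push("GITEA_REPO must look like owner/repo");
  }
  return errors;
}
```

The real schema does the same thing, but via `superRefine` so that the Zod error report carries per-field paths.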
523
src/export.test.ts Normal file
View File

@ -0,0 +1,523 @@
import { assertEquals } from "@std/assert";
import { exportState } from "./export.ts";
import type { ActualState } from "./state/actual.ts";
import type {
NbDnsNameserverGroup,
NbGroup,
NbPeer,
NbPolicy,
NbRoute,
NbSetupKey,
} from "./netbird/types.ts";
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
/** Builds a minimal ActualState with indexed maps from raw arrays. */
function buildActualState(data: {
groups?: NbGroup[];
setupKeys?: NbSetupKey[];
peers?: NbPeer[];
policies?: NbPolicy[];
routes?: NbRoute[];
dns?: NbDnsNameserverGroup[];
}): ActualState {
const groups = data.groups ?? [];
const setupKeys = data.setupKeys ?? [];
const peers = data.peers ?? [];
const policies = data.policies ?? [];
const routes = data.routes ?? [];
const dns = data.dns ?? [];
return {
groups,
groupsByName: new Map(groups.map((g) => [g.name, g])),
groupsById: new Map(groups.map((g) => [g.id, g])),
setupKeys,
setupKeysByName: new Map(setupKeys.map((k) => [k.name, k])),
peers,
peersByName: new Map(peers.map((p) => [p.name, p])),
peersById: new Map(peers.map((p) => [p.id, p])),
policies,
policiesByName: new Map(policies.map((p) => [p.name, p])),
routes,
routesByNetworkId: new Map(routes.map((r) => [r.network_id, r])),
dns,
dnsByName: new Map(dns.map((d) => [d.name, d])),
};
}
function makeGroup(
overrides: Partial<NbGroup> & Pick<NbGroup, "id" | "name">,
): NbGroup {
return {
peers_count: 0,
peers: [],
issued: "api",
...overrides,
};
}
function makeSetupKey(
overrides: Partial<NbSetupKey> & Pick<NbSetupKey, "name">,
): NbSetupKey {
return {
id: 1,
type: "one-off",
key: "NBSK-masked",
expires: "2027-01-01T00:00:00Z",
valid: true,
revoked: false,
used_times: 0,
state: "valid",
auto_groups: [],
usage_limit: 1,
...overrides,
};
}
// ---------------------------------------------------------------------------
// Tests: Normal state with groups, keys, policy
// ---------------------------------------------------------------------------
Deno.test("exportState: normal state with groups, keys, and policy", () => {
const actual = buildActualState({
groups: [
makeGroup({
id: "g-pilots",
name: "pilots",
peers: [{ id: "p1", name: "Pilot-hawk-72" }],
}),
makeGroup({ id: "g-vehicles", name: "vehicles" }),
],
setupKeys: [
makeSetupKey({
name: "Pilot-hawk-72",
auto_groups: ["g-pilots"],
used_times: 1,
usage_limit: 1,
}),
],
policies: [
{
id: "pol1",
name: "allow-pilot-vehicle",
description: "pilot to vehicle",
enabled: true,
rules: [
{
name: "rule1",
description: "",
enabled: true,
action: "accept",
bidirectional: true,
protocol: "all",
sources: ["g-pilots"],
destinations: ["g-vehicles"],
},
],
},
],
});
const exported = exportState(actual);
// Groups exported with correct peer mapping
assertEquals(Object.keys(exported.groups), ["pilots", "vehicles"]);
assertEquals(exported.groups["pilots"].peers, ["Pilot-hawk-72"]);
assertEquals(exported.groups["vehicles"].peers, []);
// Setup key with auto_groups resolved to names
assertEquals(Object.keys(exported.setup_keys), ["Pilot-hawk-72"]);
assertEquals(exported.setup_keys["Pilot-hawk-72"].auto_groups, ["pilots"]);
assertEquals(exported.setup_keys["Pilot-hawk-72"].enrolled, true);
assertEquals(exported.setup_keys["Pilot-hawk-72"].type, "one-off");
assertEquals(exported.setup_keys["Pilot-hawk-72"].expires_in, 604800);
// Policy with source/destination resolved
assertEquals(Object.keys(exported.policies), ["allow-pilot-vehicle"]);
assertEquals(exported.policies["allow-pilot-vehicle"].sources, ["pilots"]);
assertEquals(exported.policies["allow-pilot-vehicle"].destinations, [
"vehicles",
]);
assertEquals(exported.policies["allow-pilot-vehicle"].bidirectional, true);
assertEquals(exported.policies["allow-pilot-vehicle"].protocol, "all");
assertEquals(exported.policies["allow-pilot-vehicle"].action, "accept");
});
// ---------------------------------------------------------------------------
// Tests: Empty state (only "All" group)
// ---------------------------------------------------------------------------
Deno.test("exportState: empty state with only All group produces empty export", () => {
const actual = buildActualState({
groups: [
makeGroup({ id: "g-all", name: "All", issued: "jwt" }),
],
});
const exported = exportState(actual);
assertEquals(Object.keys(exported.groups).length, 0);
assertEquals(Object.keys(exported.setup_keys).length, 0);
assertEquals(Object.keys(exported.policies).length, 0);
assertEquals(Object.keys(exported.routes).length, 0);
assertEquals(Object.keys(exported.dns.nameserver_groups).length, 0);
});
// ---------------------------------------------------------------------------
// Tests: auto_groups ID-to-name mapping
// ---------------------------------------------------------------------------
Deno.test("exportState: auto_groups IDs are resolved to group names", () => {
const actual = buildActualState({
groups: [
makeGroup({ id: "g-alpha", name: "alpha" }),
makeGroup({ id: "g-beta", name: "beta" }),
],
setupKeys: [
makeSetupKey({
name: "key-1",
auto_groups: ["g-alpha", "g-beta"],
}),
],
});
const exported = exportState(actual);
assertEquals(exported.setup_keys["key-1"].auto_groups, ["alpha", "beta"]);
});
Deno.test("exportState: auto_groups with unresolvable IDs are dropped", () => {
const actual = buildActualState({
groups: [
makeGroup({ id: "g-alpha", name: "alpha" }),
],
setupKeys: [
makeSetupKey({
name: "key-1",
auto_groups: ["g-alpha", "g-nonexistent"],
}),
],
});
const exported = exportState(actual);
assertEquals(exported.setup_keys["key-1"].auto_groups, ["alpha"]);
});
// ---------------------------------------------------------------------------
// Tests: Enrolled detection
// ---------------------------------------------------------------------------
Deno.test("exportState: enrolled detection — used key is enrolled", () => {
const actual = buildActualState({
setupKeys: [
makeSetupKey({ name: "used-key", used_times: 1, usage_limit: 1 }),
],
});
assertEquals(exportState(actual).setup_keys["used-key"].enrolled, true);
});
Deno.test("exportState: enrolled detection — unused key is not enrolled", () => {
const actual = buildActualState({
setupKeys: [
makeSetupKey({ name: "fresh-key", used_times: 0, usage_limit: 1 }),
],
});
assertEquals(exportState(actual).setup_keys["fresh-key"].enrolled, false);
});
Deno.test("exportState: enrolled detection — unlimited reusable is never enrolled", () => {
const actual = buildActualState({
setupKeys: [
makeSetupKey({
name: "reusable-key",
type: "reusable",
used_times: 50,
usage_limit: 0,
}),
],
});
assertEquals(
exportState(actual).setup_keys["reusable-key"].enrolled,
false,
);
});
Deno.test("exportState: enrolled detection — partially used is not enrolled", () => {
const actual = buildActualState({
setupKeys: [
makeSetupKey({
name: "partial-key",
type: "reusable",
used_times: 2,
usage_limit: 5,
}),
],
});
assertEquals(
exportState(actual).setup_keys["partial-key"].enrolled,
false,
);
});
// ---------------------------------------------------------------------------
// Tests: System groups excluded
// ---------------------------------------------------------------------------
Deno.test("exportState: system groups are excluded", () => {
const actual = buildActualState({
groups: [
makeGroup({ id: "g-all", name: "All", issued: "jwt" }),
makeGroup({ id: "g-jwt", name: "jwt-group", issued: "jwt" }),
makeGroup({
id: "g-int",
name: "integration-group",
issued: "integration",
}),
makeGroup({ id: "g-api", name: "user-group", issued: "api" }),
],
});
const exported = exportState(actual);
const groupNames = Object.keys(exported.groups);
assertEquals(groupNames, ["user-group"]);
});
Deno.test("exportState: All group with api issued is still excluded", () => {
const actual = buildActualState({
groups: [
makeGroup({ id: "g-all", name: "All", issued: "api" }),
makeGroup({ id: "g-user", name: "my-group", issued: "api" }),
],
});
const exported = exportState(actual);
assertEquals(Object.keys(exported.groups), ["my-group"]);
});
// ---------------------------------------------------------------------------
// Tests: Group peers filter by setup key name
// ---------------------------------------------------------------------------
Deno.test("exportState: group peers only include names matching setup keys", () => {
const actual = buildActualState({
groups: [
makeGroup({
id: "g1",
name: "ops",
peers: [
{ id: "p1", name: "Pilot-hawk-72" },
{ id: "p2", name: "random-peer-no-key" },
],
}),
],
setupKeys: [
makeSetupKey({ name: "Pilot-hawk-72" }),
],
});
const exported = exportState(actual);
assertEquals(exported.groups["ops"].peers, ["Pilot-hawk-72"]);
});
// ---------------------------------------------------------------------------
// Tests: Policies
// ---------------------------------------------------------------------------
Deno.test("exportState: policies with empty rules are skipped", () => {
const actual = buildActualState({
policies: [
{
id: "pol1",
name: "empty-policy",
description: "no rules",
enabled: true,
rules: [],
},
],
});
assertEquals(Object.keys(exportState(actual).policies).length, 0);
});
Deno.test("exportState: policy sources/destinations as {id,name} objects are resolved", () => {
const actual = buildActualState({
groups: [
makeGroup({ id: "g-src", name: "source-group" }),
makeGroup({ id: "g-dst", name: "dest-group" }),
],
policies: [
{
id: "pol1",
name: "object-refs",
description: "",
enabled: true,
rules: [
{
name: "r1",
description: "",
enabled: true,
action: "accept",
bidirectional: false,
protocol: "tcp",
ports: ["443", "8080"],
sources: [{ id: "g-src", name: "source-group" }],
destinations: [{ id: "g-dst", name: "dest-group" }],
},
],
},
],
});
const exported = exportState(actual);
assertEquals(exported.policies["object-refs"].sources, ["source-group"]);
assertEquals(exported.policies["object-refs"].destinations, ["dest-group"]);
assertEquals(exported.policies["object-refs"].protocol, "tcp");
assertEquals(exported.policies["object-refs"].ports, ["443", "8080"]);
assertEquals(exported.policies["object-refs"].bidirectional, false);
});
Deno.test("exportState: policy without ports omits the ports field", () => {
const actual = buildActualState({
groups: [
makeGroup({ id: "g1", name: "g" }),
],
policies: [
{
id: "pol1",
name: "no-ports",
description: "",
enabled: true,
rules: [
{
name: "r1",
description: "",
enabled: true,
action: "accept",
bidirectional: true,
protocol: "all",
sources: ["g1"],
destinations: ["g1"],
},
],
},
],
});
const exported = exportState(actual);
assertEquals(exported.policies["no-ports"].ports, undefined);
});
// ---------------------------------------------------------------------------
// Tests: Routes
// ---------------------------------------------------------------------------
Deno.test("exportState: routes keyed by network_id with IDs resolved", () => {
const actual = buildActualState({
groups: [
makeGroup({ id: "g-pg", name: "peer-group" }),
makeGroup({ id: "g-dist", name: "dist-group" }),
],
routes: [
{
id: "r1",
description: "LAN route",
network_id: "lan-net",
enabled: true,
peer_groups: ["g-pg"],
network: "10.0.0.0/24",
metric: 100,
masquerade: true,
groups: ["g-dist"],
keep_route: false,
},
],
});
const exported = exportState(actual);
assertEquals(Object.keys(exported.routes), ["lan-net"]);
assertEquals(exported.routes["lan-net"].peer_groups, ["peer-group"]);
assertEquals(exported.routes["lan-net"].distribution_groups, ["dist-group"]);
assertEquals(exported.routes["lan-net"].network, "10.0.0.0/24");
assertEquals(exported.routes["lan-net"].metric, 100);
assertEquals(exported.routes["lan-net"].masquerade, true);
assertEquals(exported.routes["lan-net"].enabled, true);
assertEquals(exported.routes["lan-net"].keep_route, false);
});
Deno.test("exportState: route with domains and no network", () => {
const actual = buildActualState({
groups: [
makeGroup({ id: "g1", name: "grp" }),
],
routes: [
{
id: "r2",
description: "DNS route",
network_id: "dns-route",
enabled: true,
peer_groups: ["g1"],
domains: ["example.com"],
metric: 9999,
masquerade: false,
groups: ["g1"],
keep_route: true,
},
],
});
const exported = exportState(actual);
assertEquals(exported.routes["dns-route"].domains, ["example.com"]);
assertEquals(exported.routes["dns-route"].network, undefined);
});
// ---------------------------------------------------------------------------
// Tests: DNS
// ---------------------------------------------------------------------------
Deno.test("exportState: DNS nameserver groups with IDs resolved", () => {
const actual = buildActualState({
groups: [
makeGroup({ id: "g-dns", name: "dns-group" }),
],
dns: [
{
id: "d1",
name: "internal-dns",
description: "internal resolver",
nameservers: [{ ip: "1.1.1.1", ns_type: "udp", port: 53 }],
enabled: true,
groups: ["g-dns"],
primary: true,
domains: ["internal."],
search_domains_enabled: false,
},
],
});
const exported = exportState(actual);
assertEquals(Object.keys(exported.dns.nameserver_groups), ["internal-dns"]);
const ns = exported.dns.nameserver_groups["internal-dns"];
assertEquals(ns.groups, ["dns-group"]);
assertEquals(ns.nameservers, [{ ip: "1.1.1.1", ns_type: "udp", port: 53 }]);
assertEquals(ns.primary, true);
assertEquals(ns.domains, ["internal."]);
assertEquals(ns.search_domains_enabled, false);
assertEquals(ns.enabled, true);
assertEquals(ns.description, "internal resolver");
});

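The four enrolled-detection cases above reduce to one rule: a key counts as enrolled only when it has a finite usage limit that is fully consumed. A plain-TypeScript sketch of that rule, inferred from the test expectations rather than copied from the implementation:

```typescript
// Inferred rule: "enrolled" means the key's finite usage budget is exhausted.
// usage_limit === 0 denotes "unlimited", which can never be exhausted.
function isEnrolled(usedTimes: number, usageLimit: number): boolean {
  return usageLimit > 0 && usedTimes >= usageLimit;
}
```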
243
src/export.ts Normal file
View File

@ -0,0 +1,243 @@
import type { ActualState } from "./state/actual.ts";
import type { DesiredState } from "./state/schema.ts";
/**
* Default expires_in value (seconds). The NetBird API does not return the
 * original `expires_in` that was used to create a setup key, only the
* absolute `expires` timestamp. We fall back to 7 days as a reasonable
* default so the exported config round-trips cleanly.
*/
const DEFAULT_EXPIRES_IN = 604800;
/**
* Transforms live NetBird state (as fetched from the API) into the
* declarative `netbird.json` format (`DesiredState`).
*
* This is the inverse of the reconciliation flow: given what's actually
* deployed, produce a config file that would recreate it. Useful for
* bootstrapping gitops from an existing NetBird account.
*
* Filtering rules:
* - Groups: system-managed groups (issued !== "api" or name "All") are
* excluded. Peer lists only include peers whose name matches a known
* setup key (since the desired-state schema maps peers to setup keys).
* - Setup keys: all exported. `auto_groups` IDs resolved to names.
* `enrolled` derived from usage counters.
* - Policies: empty-rules policies skipped. Source/destination IDs
* resolved to group names via the first rule.
* - Routes: keyed by `network_id`. Peer groups and distribution groups
* resolved from IDs to names.
* - DNS: group IDs resolved to names.
*/
export function exportState(actual: ActualState): DesiredState {
const idToName = buildIdToNameMap(actual);
const setupKeyNames = new Set(actual.setupKeys.map((k) => k.name));
return {
groups: exportGroups(actual, setupKeyNames, idToName),
setup_keys: exportSetupKeys(actual, idToName),
policies: exportPolicies(actual, idToName),
routes: exportRoutes(actual, idToName),
dns: {
nameserver_groups: exportDns(actual, idToName),
},
};
}
// ---------------------------------------------------------------------------
// Internal helpers
// ---------------------------------------------------------------------------
/**
* Builds a group ID to group name lookup from the full groups list.
* Used throughout to translate opaque IDs in API responses back to
* human-readable names for the config file.
*/
function buildIdToNameMap(actual: ActualState): Map<string, string> {
return new Map(actual.groups.map((g) => [g.id, g.name]));
}
/** Resolves an array of group IDs to group names, dropping any unresolvable IDs. */
function resolveIds(ids: string[], idToName: Map<string, string>): string[] {
return ids
.map((id) => idToName.get(id))
.filter((name): name is string => name !== undefined);
}
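As a minimal standalone sketch of the drop-unresolvable behavior (the `resolve` helper and the sample map are illustrative, not part of the module):

```typescript
// Mirrors resolveIds above, with the lookup map closed over locally so
// the snippet runs on its own.
const idToName = new Map<string, string>([["g1", "pilots"], ["g2", "ops"]]);

function resolve(ids: string[]): string[] {
  return ids
    .map((id) => idToName.get(id))
    .filter((name): name is string => name !== undefined);
}

// Unknown IDs are silently dropped rather than raising an error:
console.log(resolve(["g1", "g-unknown", "g2"])); // ["pilots", "ops"]
```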
/**
* Extracts the group ID from a policy source/destination entry.
* The NetBird API returns these as either plain string IDs or
* `{ id, name }` objects depending on the API version.
*/
function extractGroupId(entry: string | { id: string; name: string }): string {
return typeof entry === "string" ? entry : entry.id;
}
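A quick self-contained illustration of that normalization (the `toId` helper is a local stand-in for `extractGroupId`):

```typescript
// Policy sources/destinations arrive either as bare ID strings or as
// { id, name } objects; both normalize to the bare ID.
type GroupRef = string | { id: string; name: string };

const toId = (entry: GroupRef): string =>
  typeof entry === "string" ? entry : entry.id;

const refs: GroupRef[] = ["g1", { id: "g2", name: "ops" }];
console.log(refs.map(toId)); // ["g1", "g2"]
```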
// ---------------------------------------------------------------------------
// Groups
// ---------------------------------------------------------------------------
function exportGroups(
actual: ActualState,
setupKeyNames: Set<string>,
_idToName: Map<string, string>,
): DesiredState["groups"] {
const result: DesiredState["groups"] = {};
for (const group of actual.groups) {
if (isSystemGroup(group.name, group.issued)) continue;
// Only include peers whose name matches a known setup key, since
// the desired-state schema models peers as setup-key references.
const peers = group.peers
.map((p) => p.name)
.filter((name) => setupKeyNames.has(name));
result[group.name] = { peers };
}
return result;
}
function isSystemGroup(name: string, issued: string): boolean {
return name === "All" || issued !== "api";
}
// ---------------------------------------------------------------------------
// Setup Keys
// ---------------------------------------------------------------------------
function exportSetupKeys(
actual: ActualState,
idToName: Map<string, string>,
): DesiredState["setup_keys"] {
const result: DesiredState["setup_keys"] = {};
for (const key of actual.setupKeys) {
result[key.name] = {
type: key.type,
expires_in: DEFAULT_EXPIRES_IN,
usage_limit: key.usage_limit,
auto_groups: resolveIds(key.auto_groups, idToName),
enrolled: isEnrolled(key.used_times, key.usage_limit),
};
}
return result;
}
/**
* A setup key is considered "enrolled" when it has been fully consumed:
* `used_times >= usage_limit` with a finite limit. Keys with
* `usage_limit === 0` (unlimited reusable) are never enrolled.
*/
function isEnrolled(usedTimes: number, usageLimit: number): boolean {
if (usageLimit === 0) return false;
return usedTimes >= usageLimit;
}
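The enrollment rule can be spot-checked with a few cases (the predicate is re-stated locally as `enrolled` so the snippet runs on its own):

```typescript
// A key is "enrolled" only when a finite usage limit is fully consumed.
function enrolled(usedTimes: number, usageLimit: number): boolean {
  if (usageLimit === 0) return false; // unlimited reusable key
  return usedTimes >= usageLimit;
}

console.log(enrolled(1, 1)); // true  (one-off key, used once)
console.log(enrolled(0, 1)); // false (not used yet)
console.log(enrolled(500, 0)); // false (unlimited keys are never "enrolled")
```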
// ---------------------------------------------------------------------------
// Policies
// ---------------------------------------------------------------------------
function exportPolicies(
actual: ActualState,
idToName: Map<string, string>,
): DesiredState["policies"] {
const result: DesiredState["policies"] = {};
for (const policy of actual.policies) {
if (policy.rules.length === 0) continue;
const rule = policy.rules[0];
const sources = resolveIds(
rule.sources.map(extractGroupId),
idToName,
);
const destinations = resolveIds(
rule.destinations.map(extractGroupId),
idToName,
);
const entry: DesiredState["policies"][string] = {
description: policy.description,
enabled: policy.enabled,
sources,
destinations,
bidirectional: rule.bidirectional,
protocol: rule.protocol,
action: rule.action,
};
if (rule.ports && rule.ports.length > 0) {
entry.ports = rule.ports;
}
result[policy.name] = entry;
}
return result;
}
// ---------------------------------------------------------------------------
// Routes
// ---------------------------------------------------------------------------
function exportRoutes(
actual: ActualState,
idToName: Map<string, string>,
): DesiredState["routes"] {
const result: DesiredState["routes"] = {};
for (const route of actual.routes) {
const entry: DesiredState["routes"][string] = {
description: route.description,
peer_groups: resolveIds(route.peer_groups ?? [], idToName),
metric: route.metric,
masquerade: route.masquerade,
distribution_groups: resolveIds(route.groups, idToName),
enabled: route.enabled,
keep_route: route.keep_route,
};
if (route.network) {
entry.network = route.network;
}
if (route.domains && route.domains.length > 0) {
entry.domains = route.domains;
}
result[route.network_id] = entry;
}
return result;
}
// ---------------------------------------------------------------------------
// DNS
// ---------------------------------------------------------------------------
function exportDns(
actual: ActualState,
idToName: Map<string, string>,
): DesiredState["dns"]["nameserver_groups"] {
const result: DesiredState["dns"]["nameserver_groups"] = {};
for (const ns of actual.dns) {
result[ns.name] = {
description: ns.description,
nameservers: ns.nameservers.map((s) => ({
ip: s.ip,
ns_type: s.ns_type,
port: s.port,
})),
enabled: ns.enabled,
groups: resolveIds(ns.groups, idToName),
primary: ns.primary,
domains: ns.domains,
search_domains_enabled: ns.search_domains_enabled,
};
}
return result;
}

@@ -8,8 +8,8 @@ function mockFetch(
const url = typeof input === "string"
? input
: input instanceof URL
? input.toString()
: input.url;
? input.toString()
: input.url;
const method = init?.method ?? "GET";
const key = `${method} ${url}`;
const resp = responses.get(key);

@@ -92,6 +92,7 @@ function createMockFetch(calls: ApiCall[]) {
const MOCK_CONFIG: Config = {
netbirdApiUrl: MOCK_BASE,
netbirdApiToken: "nb-test-token",
giteaEnabled: true,
giteaUrl: "https://gitea.test",
giteaToken: "gitea-test-token",
giteaRepo: "org/repo",
@@ -143,6 +144,32 @@ function authedRequest(path: string, body?: unknown): Request {
});
}
// -----------------------------------------------------------------------------
// Standalone mode (giteaEnabled: false, gitea: null)
// -----------------------------------------------------------------------------
const STANDALONE_CONFIG: Config = {
netbirdApiUrl: MOCK_BASE,
netbirdApiToken: "nb-test-token",
giteaEnabled: false,
reconcilerToken: "secret",
pollIntervalSeconds: 30,
port: 8080,
dataDir: "/data",
};
function buildStandaloneHandler(calls: ApiCall[]) {
const mockFetch = createMockFetch(calls);
const netbird = new NetbirdClient(MOCK_BASE, "nb-test-token", mockFetch);
return createHandler({
config: STANDALONE_CONFIG,
netbird,
gitea: null,
reconcileInProgress: { value: false },
});
}
// -----------------------------------------------------------------------------
// Tests
// -----------------------------------------------------------------------------
@@ -223,3 +250,187 @@ Deno.test("GET /health returns ok", async () => {
const json = await resp.json();
assertEquals(json.status, "ok");
});
// -----------------------------------------------------------------------------
// Standalone mode tests (giteaEnabled: false, gitea: null)
// -----------------------------------------------------------------------------
Deno.test("standalone: POST /reconcile?dry_run=true returns planned operations", async () => {
const calls: ApiCall[] = [];
const handler = buildStandaloneHandler(calls);
const resp = await handler(
authedRequest("/reconcile?dry_run=true", DESIRED_STATE),
);
assertEquals(resp.status, 200);
const json = await resp.json();
assertEquals(json.status, "planned");
const opTypes = json.operations.map((op: { type: string }) => op.type);
assertEquals(opTypes.includes("create_group"), true);
assertEquals(opTypes.includes("create_setup_key"), true);
});
Deno.test("standalone: POST /reconcile apply creates resources", async () => {
const calls: ApiCall[] = [];
const handler = buildStandaloneHandler(calls);
const resp = await handler(authedRequest("/reconcile", DESIRED_STATE));
assertEquals(resp.status, 200);
const json = await resp.json();
assertEquals(json.status, "applied");
assertExists(json.created_keys["Pilot-hawk-72"]);
assertEquals(json.created_keys["Pilot-hawk-72"], TEST_KEY_VALUE);
});
Deno.test("standalone: GET /health returns ok with gitea=null", async () => {
const calls: ApiCall[] = [];
const handler = buildStandaloneHandler(calls);
const req = new Request("http://localhost:8080/health", { method: "GET" });
const resp = await handler(req);
assertEquals(resp.status, 200);
const json = await resp.json();
assertEquals(json.status, "ok");
});
Deno.test("standalone: POST /sync-events does not crash with gitea=null", async () => {
const calls: ApiCall[] = [];
const handler = buildStandaloneHandler(calls);
const resp = await handler(authedRequest("/sync-events"));
assertEquals(resp.status, 200);
const json = await resp.json();
assertEquals(json.status, "synced");
});
// -----------------------------------------------------------------------------
// /export endpoint tests
// -----------------------------------------------------------------------------
/**
* Creates a mock fetch that returns pre-seeded data for GET list endpoints.
* This lets us verify the export endpoint transforms API data correctly.
*/
function createExportMockFetch(calls: ApiCall[]) {
return async (
input: string | URL | Request,
init?: RequestInit,
): Promise<Response> => {
const url = typeof input === "string" ? input : input.toString();
const method = init?.method ?? "GET";
const path = url.replace(MOCK_BASE, "");
calls.push({ method, path });
if (method === "GET" && path === "/groups") {
return Response.json([
{
id: "g-pilots",
name: "pilots",
peers_count: 1,
peers: [{ id: "p1", name: "Pilot-hawk-72" }],
issued: "api",
},
{
id: "g-all",
name: "All",
peers_count: 0,
peers: [],
issued: "jwt",
},
]);
}
if (method === "GET" && path === "/setup-keys") {
return Response.json([
{
id: 1,
name: "Pilot-hawk-72",
type: "one-off",
key: "NBSK-masked",
expires: "2027-01-01T00:00:00Z",
valid: true,
revoked: false,
used_times: 1,
state: "overused",
auto_groups: ["g-pilots"],
usage_limit: 1,
},
]);
}
if (method === "GET" && path === "/peers") return Response.json([]);
if (method === "GET" && path === "/policies") return Response.json([]);
if (method === "GET" && path === "/routes") return Response.json([]);
if (method === "GET" && path === "/dns/nameservers") {
return Response.json([]);
}
if (method === "GET" && path === "/events/audit") {
return Response.json([]);
}
return Response.json({ error: "mock: unhandled route" }, { status: 500 });
};
}
function buildExportHandler(calls: ApiCall[]) {
const mockFetch = createExportMockFetch(calls);
const netbird = new NetbirdClient(MOCK_BASE, "nb-test-token", mockFetch);
return createHandler({
config: MOCK_CONFIG,
netbird,
gitea: {} as GiteaClient,
reconcileInProgress: { value: false },
});
}
Deno.test("GET /export returns 200 with state and meta", async () => {
const calls: ApiCall[] = [];
const handler = buildExportHandler(calls);
const req = new Request("http://localhost:8080/export", {
method: "GET",
headers: { "Authorization": "Bearer secret" },
});
const resp = await handler(req);
assertEquals(resp.status, 200);
const json = await resp.json();
assertEquals(json.status, "ok");
// State should contain exported groups (minus "All")
assertEquals(Object.keys(json.state.groups), ["pilots"]);
assertEquals(json.state.groups["pilots"].peers, ["Pilot-hawk-72"]);
// Setup key auto_groups should be resolved from IDs to names
assertEquals(json.state.setup_keys["Pilot-hawk-72"].auto_groups, ["pilots"]);
assertEquals(json.state.setup_keys["Pilot-hawk-72"].enrolled, true);
// Meta should be present
assertExists(json.meta.exported_at);
assertEquals(json.meta.source_url, MOCK_BASE);
assertEquals(json.meta.groups_count, 1);
assertEquals(json.meta.setup_keys_count, 1);
assertEquals(json.meta.policies_count, 0);
assertEquals(json.meta.routes_count, 0);
assertEquals(json.meta.dns_count, 0);
});
Deno.test("GET /export requires auth — 401 without bearer", async () => {
const calls: ApiCall[] = [];
const handler = buildExportHandler(calls);
const req = new Request("http://localhost:8080/export", { method: "GET" });
const resp = await handler(req);
assertEquals(resp.status, 401);
const json = await resp.json();
assertEquals(json.error, "unauthorized");
});

@@ -4,6 +4,42 @@ import { NetbirdClient } from "./netbird/client.ts";
import { GiteaClient } from "./gitea/client.ts";
import { createHandler } from "./server.ts";
import { startPollerLoop } from "./poller/loop.ts";
import { fetchActualState } from "./state/actual.ts";
import { exportState } from "./export.ts";
// ---------------------------------------------------------------------------
// --export: one-shot state export, prints to stdout and exits
// ---------------------------------------------------------------------------
function getCliArg(name: string): string | undefined {
const idx = Deno.args.indexOf(name);
if (idx === -1 || idx + 1 >= Deno.args.length) return undefined;
return Deno.args[idx + 1];
}
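The flag parsing above is purely positional: a flag's value is whatever argv token follows it. A runnable sketch with a plain array standing in for `Deno.args` (the `getArg` name is illustrative):

```typescript
// Same lookup logic as getCliArg, parameterized on the args array.
function getArg(args: string[], name: string): string | undefined {
  const idx = args.indexOf(name);
  if (idx === -1 || idx + 1 >= args.length) return undefined;
  return args[idx + 1];
}

const argv = ["--export", "--netbird-api-url", "https://nb.example.test"];
console.log(getArg(argv, "--netbird-api-url")); // "https://nb.example.test"
console.log(getArg(argv, "--netbird-api-token")); // undefined (flag absent)
// Note: asking for a bare flag returns the next token blindly:
console.log(getArg(argv, "--export")); // "--netbird-api-url"
```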
if (Deno.args.includes("--export")) {
const apiUrl = getCliArg("--netbird-api-url") ??
Deno.env.get("NETBIRD_API_URL");
const apiToken = getCliArg("--netbird-api-token") ??
Deno.env.get("NETBIRD_API_TOKEN");
if (!apiUrl || !apiToken) {
console.error(
"Usage: --export --netbird-api-url <url> --netbird-api-token <token>",
);
Deno.exit(1);
}
const client = new NetbirdClient(apiUrl, apiToken);
const actual = await fetchActualState(client);
const state = exportState(actual);
console.log(JSON.stringify(state, null, 2));
Deno.exit(0);
}
// ---------------------------------------------------------------------------
// Normal server mode
// ---------------------------------------------------------------------------
let config;
try {
@@ -19,11 +55,9 @@ try {
}
const netbird = new NetbirdClient(config.netbirdApiUrl, config.netbirdApiToken);
const gitea = new GiteaClient(
config.giteaUrl,
config.giteaToken,
config.giteaRepo,
);
const gitea = config.giteaEnabled
? new GiteaClient(config.giteaUrl!, config.giteaToken!, config.giteaRepo!)
: null;
const reconcileInProgress = { value: false };
// Start background poller
@@ -36,7 +70,13 @@ const pollerAbort = startPollerLoop({
// Start HTTP server
const handler = createHandler({ config, netbird, gitea, reconcileInProgress });
console.log(JSON.stringify({ msg: "starting", port: config.port }));
console.log(
JSON.stringify({
msg: "starting",
port: config.port,
gitea_enabled: config.giteaEnabled,
}),
);
Deno.serve({ port: config.port, handler });
// Graceful shutdown

@@ -8,8 +8,8 @@ function mockFetch(
const url = typeof input === "string"
? input
: input instanceof URL
? input.toString()
: input.url;
? input.toString()
: input.url;
const method = init?.method ?? "GET";
const key = `${method} ${url}`;
const resp = responses.get(key);

@@ -2,12 +2,10 @@ import type { Config } from "../config.ts";
import type { GiteaClient } from "../gitea/client.ts";
import { GiteaApiError } from "../gitea/client.ts";
import type { NetbirdClient } from "../netbird/client.ts";
import type { NbSetupKey } from "../netbird/types.ts";
import { DesiredStateSchema } from "../state/schema.ts";
import type { DesiredState, SetupKeyConfig } from "../state/schema.ts";
import {
type EnrollmentDetection,
processEnrollmentEvents,
} from "./poller.ts";
import { type EnrollmentDetection, processEnrollmentEvents } from "./poller.ts";
import { loadPollerState, savePollerState } from "./state.ts";
// -----------------------------------------------------------------------------
@@ -17,7 +15,7 @@ import { loadPollerState, savePollerState } from "./state.ts";
export interface PollerContext {
config: Config;
netbird: NetbirdClient;
gitea: GiteaClient;
gitea: GiteaClient | null;
reconcileInProgress: { value: boolean };
}
@@ -26,21 +24,42 @@ export interface PollerContext {
// -----------------------------------------------------------------------------
/**
* Runs one poll cycle: fetch desired state from Gitea, check for new
* enrollment events in NetBird, rename enrolled peers, and commit
* `enrolled: true` back to the repo.
* Runs one poll cycle. Two modes:
*
* **Gitea mode** (gitea != null): fetch desired state from the repo, detect
* enrollments via events, rename peers, commit `enrolled: true` back.
*
* **Standalone mode** (gitea == null): query NetBird setup keys directly,
* treat unused valid keys as "unenrolled", detect enrollments via events,
* rename peers, log (no commit).
*
* Skips entirely when a reconcile is in progress to avoid racing with
* the webhook-triggered reconciliation path.
*/
export async function pollOnce(ctx: PollerContext): Promise<void> {
const { config, netbird, gitea, reconcileInProgress } = ctx;
if (reconcileInProgress.value) {
console.log(JSON.stringify({ msg: "poll_skipped", reason: "reconcile_in_progress" }));
if (ctx.reconcileInProgress.value) {
console.log(
JSON.stringify({ msg: "poll_skipped", reason: "reconcile_in_progress" }),
);
return;
}
if (ctx.gitea) {
await pollOnceGitea(ctx, ctx.gitea);
} else {
await pollOnceStandalone(ctx);
}
}
// -----------------------------------------------------------------------------
// Gitea mode — full GitOps cycle
// -----------------------------------------------------------------------------
async function pollOnceGitea(
ctx: PollerContext,
gitea: GiteaClient,
): Promise<void> {
const { config, netbird } = ctx;
const pollerState = await loadPollerState(config.dataDir);
// Fetch current desired state from Gitea (main branch)
@@ -93,6 +112,7 @@ export async function pollOnce(ctx: PollerContext): Promise<void> {
for (const enrollment of enrollments) {
await processEnrollment(
ctx,
gitea,
enrollment,
currentDesired,
currentSha,
@@ -114,7 +134,110 @@ }
}
// -----------------------------------------------------------------------------
// Process a single enrollment
// Standalone mode — no Gitea, keys come from NetBird API directly
// -----------------------------------------------------------------------------
/**
* In standalone mode there is no git repo to read from. Instead we query
* NetBird's setup keys API and consider any key that is valid, not revoked,
* and has remaining uses as "unenrolled". Enrollment detection and peer
* renaming still happen, but there is no commit-back step.
*
* Idempotency relies on `lastEventTimestamp` persisted in `dataDir`.
* If poller state is lost (data dir wiped, container restart without a
* persistent volume), historical enrollments will be replayed: peers
* get renamed again, which is harmless but produces noisy logs.
*/
async function pollOnceStandalone(ctx: PollerContext): Promise<void> {
const { config, netbird } = ctx;
const pollerState = await loadPollerState(config.dataDir);
const setupKeys = await netbird.listSetupKeys();
const unenrolledKeys = new Set<string>();
for (const key of setupKeys) {
if (isUnenrolledKey(key)) {
unenrolledKeys.add(key.name);
}
}
if (unenrolledKeys.size === 0) {
console.log(
JSON.stringify({ msg: "poll_no_unenrolled_keys", mode: "standalone" }),
);
return;
}
const events = await netbird.listEvents();
const enrollments = processEnrollmentEvents(
events,
unenrolledKeys,
pollerState.lastEventTimestamp,
);
if (enrollments.length === 0) {
return;
}
console.log(JSON.stringify({
msg: "poll_enrollments_detected",
count: enrollments.length,
mode: "standalone",
}));
let latestTimestamp = pollerState.lastEventTimestamp;
for (const enrollment of enrollments) {
await processEnrollmentStandalone(netbird, enrollment);
if (!latestTimestamp || enrollment.timestamp > latestTimestamp) {
latestTimestamp = enrollment.timestamp;
}
}
await savePollerState(config.dataDir, {
lastEventTimestamp: latestTimestamp,
});
}
/** A setup key is "unenrolled" if it is valid, not revoked, and has remaining uses. */
function isUnenrolledKey(key: NbSetupKey): boolean {
// usage_limit === 0 means unlimited reusable key
return key.valid && !key.revoked &&
(key.usage_limit === 0 || key.used_times < key.usage_limit);
}
/**
* Standalone enrollment: rename the peer and log. No Gitea commit.
*/
async function processEnrollmentStandalone(
netbird: NetbirdClient,
enrollment: EnrollmentDetection,
): Promise<void> {
const { setupKeyName, peerId, peerHostname } = enrollment;
try {
await netbird.updatePeer(peerId, { name: setupKeyName });
console.log(JSON.stringify({
msg: "peer_renamed",
peer_id: peerId,
from: peerHostname,
to: setupKeyName,
mode: "standalone",
note: "gitea commit skipped in standalone mode",
}));
} catch (err) {
console.error(JSON.stringify({
msg: "peer_rename_failed",
peer_id: peerId,
error: err instanceof Error ? err.message : String(err),
}));
return;
}
}
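The `lastEventTimestamp` high-water mark above relies on ISO-8601 timestamps comparing lexicographically in chronological order. A self-contained sketch of the loop (variable names are illustrative):

```typescript
// Track the newest event timestamp seen, mirroring the update loop in
// pollOnceStandalone. ISO-8601 strings sort correctly as plain strings.
const eventTimestamps = [
  "2026-01-01T00:00:00Z",
  "2026-01-03T00:00:00Z",
  "2026-01-02T00:00:00Z",
];

let latest: string | undefined;
for (const ts of eventTimestamps) {
  if (!latest || ts > latest) latest = ts;
}
console.log(latest); // "2026-01-03T00:00:00Z"
```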
// -----------------------------------------------------------------------------
// Process a single enrollment (Gitea mode)
// -----------------------------------------------------------------------------
/**
@@ -126,12 +249,13 @@ export async function pollOnce(ctx: PollerContext): Promise<void> {
*/
async function processEnrollment(
ctx: PollerContext,
gitea: GiteaClient,
enrollment: EnrollmentDetection,
desired: DesiredState,
fileSha: string,
onCommit: (newSha: string, newDesired: DesiredState) => void,
): Promise<void> {
const { netbird, gitea } = ctx;
const { netbird } = ctx;
const { setupKeyName, peerId, peerHostname } = enrollment;
// Rename the peer to match the setup key name

@@ -51,8 +51,7 @@ Deno.test("executor calls createGroup for create_group op", async () => {
Deno.test("executor aborts on first failure", async () => {
const mockClient = {
createGroup: () => Promise.reject(new Error("API down")),
createSetupKey: () =>
Promise.resolve({ id: 1, key: "k", name: "key1" }),
createSetupKey: () => Promise.resolve({ id: 1, key: "k", name: "key1" }),
};
const ops: Operation[] = [
{ type: "create_group", name: "pilots" },

@@ -341,7 +341,9 @@ async function executeSingle(
case "update_dns": {
const existing = actual.dnsByName.get(op.name);
if (!existing) {
throw new Error(`dns nameserver group "${op.name}" not found for update`);
throw new Error(
`dns nameserver group "${op.name}" not found for update`,
);
}
const groupIds = d.groups
? ctx.resolveGroupIds(d.groups as string[])

@@ -1,20 +1,18 @@ import type { Config } from "./config.ts";
import type { Config } from "./config.ts";
import type { NetbirdClient } from "./netbird/client.ts";
import type { GiteaClient } from "./gitea/client.ts";
import {
DesiredStateSchema,
validateCrossReferences,
} from "./state/schema.ts";
import { DesiredStateSchema, validateCrossReferences } from "./state/schema.ts";
import { fetchActualState } from "./state/actual.ts";
import { computeDiff } from "./reconcile/diff.ts";
import { executeOperations } from "./reconcile/executor.ts";
import type { OperationResult } from "./reconcile/operations.ts";
import { pollOnce, type PollerContext } from "./poller/loop.ts";
import { type PollerContext, pollOnce } from "./poller/loop.ts";
import { exportState } from "./export.ts";
export interface ServerContext {
config: Config;
netbird: NetbirdClient;
gitea: GiteaClient;
gitea: GiteaClient | null;
reconcileInProgress: { value: boolean };
}
@@ -43,6 +41,10 @@ export function createHandler(
return handleSyncEvents(ctx);
}
if (url.pathname === "/export" && req.method === "GET") {
return handleExport(ctx);
}
return Response.json({ error: "not found" }, { status: 404 });
};
}
@@ -198,6 +200,50 @@ async function handleSyncEvents(ctx: ServerContext): Promise<Response> {
}
}
// -----------------------------------------------------------------------------
// /export
// -----------------------------------------------------------------------------
/**
* Fetches the current NetBird state and transforms it into the declarative
* `netbird.json` format. Useful for bootstrapping gitops from an existing
* account or inspecting what the reconciler "sees".
*/
async function handleExport(ctx: ServerContext): Promise<Response> {
try {
const actual = await fetchActualState(ctx.netbird);
const state = exportState(actual);
return Response.json({
status: "ok",
state,
meta: {
exported_at: new Date().toISOString(),
source_url: ctx.config.netbirdApiUrl,
groups_count: Object.keys(state.groups).length,
setup_keys_count: Object.keys(state.setup_keys).length,
policies_count: Object.keys(state.policies).length,
routes_count: Object.keys(state.routes).length,
dns_count: Object.keys(state.dns.nameserver_groups).length,
},
});
} catch (err) {
console.error(
JSON.stringify({
msg: "export_error",
error: err instanceof Error ? err.message : String(err),
}),
);
return Response.json(
{
status: "error",
error: err instanceof Error ? err.message : String(err),
},
{ status: 500 },
);
}
}
// -----------------------------------------------------------------------------
// Helpers
// -----------------------------------------------------------------------------

@@ -71,36 +71,64 @@ Deno.test("fetchActualState indexes all resource types", async () => {
const actual = await fetchActualState(
mockClient({
groups: [
{ id: "g1", name: "ops", peers_count: 1, peers: [{ id: "p1", name: "drone-1" }], issued: "api" },
{
id: "g1",
name: "ops",
peers_count: 1,
peers: [{ id: "p1", name: "drone-1" }],
issued: "api",
},
],
peers: [
{
id: "p1", name: "drone-1", ip: "100.64.0.1", connected: true,
hostname: "drone-1", os: "linux", version: "0.28.0",
groups: [{ id: "g1", name: "ops" }], last_seen: "2026-03-01T00:00:00Z",
dns_label: "drone-1", login_expiration_enabled: false,
ssh_enabled: false, inactivity_expiration_enabled: false,
id: "p1",
name: "drone-1",
ip: "100.64.0.1",
connected: true,
hostname: "drone-1",
os: "linux",
version: "0.28.0",
groups: [{ id: "g1", name: "ops" }],
last_seen: "2026-03-01T00:00:00Z",
dns_label: "drone-1",
login_expiration_enabled: false,
ssh_enabled: false,
inactivity_expiration_enabled: false,
},
],
policies: [
{
id: "pol1", name: "allow-ops", description: "ops traffic",
enabled: true, rules: [],
id: "pol1",
name: "allow-ops",
description: "ops traffic",
enabled: true,
rules: [],
},
],
routes: [
{
id: "r1", description: "lan", network_id: "lan-net",
enabled: true, network: "10.0.0.0/24", metric: 100,
masquerade: true, groups: ["g1"], keep_route: false,
id: "r1",
description: "lan",
network_id: "lan-net",
enabled: true,
network: "10.0.0.0/24",
metric: 100,
masquerade: true,
groups: ["g1"],
keep_route: false,
},
],
dns: [
{
id: "d1", name: "internal-dns", description: "internal",
id: "d1",
name: "internal-dns",
description: "internal",
nameservers: [{ ip: "1.1.1.1", ns_type: "udp", port: 53 }],
enabled: true, groups: ["g1"], primary: true,
domains: ["internal."], search_domains_enabled: false,
enabled: true,
groups: ["g1"],
primary: true,
domains: ["internal."],
search_domains_enabled: false,
},
],
}),