Compare commits
No commits in common. "0ba78a3c416185b92f80faede01c0a9e69a0ebb2" and "28eabc270e33dffd9a7042d35ab8fc689b42c38f" have entirely different histories.
0ba78a3c41
...
28eabc270e
54 .beads/.gitignore vendored Normal file
@@ -0,0 +1,54 @@
# Dolt database (managed by Dolt, not git)
dolt/
dolt-access.lock

# Runtime files
bd.sock
bd.sock.startlock
sync-state.json
last-touched

# Local version tracking (prevents upgrade notification spam after git ops)
.local_version

# Worktree redirect file (contains relative path to main repo's .beads/)
# Must not be committed as paths would be wrong in other clones
redirect

# Sync state (local-only, per-machine)
# These files are machine-specific and should not be shared across clones
.sync.lock
.jsonl.lock
sync_base.jsonl
export-state/

# Ephemeral store (SQLite - wisps/molecules, intentionally not versioned)
ephemeral.sqlite3
ephemeral.sqlite3-journal
ephemeral.sqlite3-wal
ephemeral.sqlite3-shm

# Legacy files (from pre-Dolt versions)
*.db
*.db?*
*.db-journal
*.db-wal
*.db-shm
db.sqlite
bd.db
daemon.lock
daemon.log
daemon-*.log.gz
daemon.pid
beads.base.jsonl
beads.base.meta.json
beads.left.jsonl
beads.left.meta.json
beads.right.jsonl
beads.right.meta.json

# NOTE: Do NOT add negation patterns (e.g., !issues.jsonl) here.
# They would override fork protection in .git/info/exclude, allowing
# contributors to accidentally commit upstream issue databases.
# The JSONL files (issues.jsonl, interactions.jsonl) and config files
# are tracked by git by default since no pattern above ignores them.
91 .beads/README.md Normal file
@@ -0,0 +1,91 @@
# Beads - AI-Native Issue Tracking

Welcome to Beads! This repository uses **Beads** for issue tracking - a modern,
AI-native tool designed to live directly in your codebase alongside your code.

## What is Beads?

Beads is issue tracking that lives in your repo, making it perfect for AI coding
agents and developers who want their issues close to their code. No web UI
required - everything works through the CLI and integrates seamlessly with git.

**Learn more:**
[github.com/steveyegge/beads](https://github.com/steveyegge/beads)

## Quick Start

### Essential Commands

```bash
# Create new issues
bd create "Add user authentication"

# View all issues
bd list

# View issue details
bd show <issue-id>

# Update issue status
bd update <issue-id> --status in_progress
bd update <issue-id> --status done

# Sync with Dolt remote
bd dolt push
```

### Working with Issues

Issues in Beads are:

- **Git-native**: Stored in `.beads/issues.jsonl` and synced like code
- **AI-friendly**: CLI-first design works perfectly with AI coding agents
- **Branch-aware**: Issues can follow your branch workflow
- **Always in sync**: Auto-syncs with your commits

## Why Beads?

✨ **AI-Native Design**

- Built specifically for AI-assisted development workflows
- CLI-first interface works seamlessly with AI coding agents
- No context switching to web UIs

🚀 **Developer Focused**

- Issues live in your repo, right next to your code
- Works offline, syncs when you push
- Fast, lightweight, and stays out of your way

🔧 **Git Integration**

- Automatic sync with git commits
- Branch-aware issue tracking
- Intelligent JSONL merge resolution

## Get Started with Beads

Try Beads in your own projects:

```bash
# Install Beads
curl -sSL https://raw.githubusercontent.com/steveyegge/beads/main/scripts/install.sh | bash

# Initialize in your repo
bd init

# Create your first issue
bd create "Try out Beads"
```

## Learn More

- **Documentation**:
  [github.com/steveyegge/beads/docs](https://github.com/steveyegge/beads/tree/main/docs)
- **Quick Start Guide**: Run `bd quickstart`
- **Examples**:
  [github.com/steveyegge/beads/examples](https://github.com/steveyegge/beads/tree/main/examples)

---

_Beads: Issue tracking that moves at the speed of thought_ ⚡
42 .beads/config.yaml Normal file
@@ -0,0 +1,42 @@
# Beads Configuration File
# This file configures default behavior for all bd commands in this repository
# All settings can also be set via environment variables (BD_* prefix)
# or overridden with command-line flags

# Issue prefix for this repository (used by bd init)
# If not set, bd init will auto-detect from directory name
# Example: issue-prefix: "myproject" creates issues like "myproject-1", "myproject-2", etc.
# issue-prefix: ""

# Use no-db mode: load from JSONL, write back after each command
# When true, bd will use .beads/issues.jsonl as the source of truth
# instead of the Dolt database
# no-db: false

# Enable JSON output by default
# json: false

# Default actor for audit trails (overridden by BD_ACTOR or --actor)
# actor: ""

# Export events (audit trail) to .beads/events.jsonl on each flush/sync
# When enabled, new events are appended incrementally using a high-water mark.
# Use 'bd export --events' to trigger manually regardless of this setting.
# events-export: false

# Multi-repo configuration (experimental - bd-307)
# Allows hydrating from multiple repositories and routing writes to the correct JSONL
# repos:
#   primary: "."        # Primary repo (where this database lives)
#   additional:         # Additional repos to hydrate from (read-only)
#     - ~/beads-planning  # Personal planning repo
#     - ~/work-planning   # Work planning repo

# Integration settings (access with 'bd config get/set')
# These are stored in the database, not in this file:
# - jira.url
# - jira.project
# - linear.url
# - linear.api-key
# - github.org
# - github.repo
17 .beads/hooks/post-checkout Executable file
@@ -0,0 +1,17 @@
#!/usr/bin/env sh
# bd-shim v1
# bd-hooks-version: 0.56.1
#
# bd (beads) post-checkout hook - thin shim
#
# This shim delegates to 'bd hooks run post-checkout' which contains
# the actual hook logic. This pattern ensures hook behavior is always
# in sync with the installed bd version - no manual updates needed.

# Check if bd is available
if ! command -v bd >/dev/null 2>&1; then
  # Silently skip - post-checkout is called frequently
  exit 0
fi

exec bd hooks run post-checkout "$@"
19 .beads/hooks/post-merge Executable file
@@ -0,0 +1,19 @@
#!/usr/bin/env sh
# bd-shim v1
# bd-hooks-version: 0.56.1
#
# bd (beads) post-merge hook - thin shim
#
# This shim delegates to 'bd hooks run post-merge' which contains
# the actual hook logic. This pattern ensures hook behavior is always
# in sync with the installed bd version - no manual updates needed.

# Check if bd is available
if ! command -v bd >/dev/null 2>&1; then
  echo "Warning: bd command not found in PATH, skipping post-merge hook" >&2
  echo "  Install bd: brew install beads" >&2
  echo "  Or add bd to your PATH" >&2
  exit 0
fi

exec bd hooks run post-merge "$@"
19 .beads/hooks/pre-commit Executable file
@@ -0,0 +1,19 @@
#!/usr/bin/env sh
# bd-shim v2
# bd-hooks-version: 0.56.1
#
# bd (beads) pre-commit hook — thin shim
#
# Delegates to 'bd hooks run pre-commit' which contains the actual hook
# logic. This pattern ensures hook behavior is always in sync with the
# installed bd version — no manual updates needed.

# Check if bd is available
if ! command -v bd >/dev/null 2>&1; then
  echo "Warning: bd command not found in PATH, skipping pre-commit hook" >&2
  echo "  Install bd: brew install beads" >&2
  echo "  Or add bd to your PATH" >&2
  exit 0
fi

exec bd hooks run pre-commit "$@"
19 .beads/hooks/pre-push Executable file
@@ -0,0 +1,19 @@
#!/usr/bin/env sh
# bd-shim v1
# bd-hooks-version: 0.56.1
#
# bd (beads) pre-push hook - thin shim
#
# This shim delegates to 'bd hooks run pre-push' which contains
# the actual hook logic. This pattern ensures hook behavior is always
# in sync with the installed bd version - no manual updates needed.

# Check if bd is available
if ! command -v bd >/dev/null 2>&1; then
  echo "Warning: bd command not found in PATH, skipping pre-push hook" >&2
  echo "  Install bd: brew install beads" >&2
  echo "  Or add bd to your PATH" >&2
  exit 0
fi

exec bd hooks run pre-push "$@"
24 .beads/hooks/prepare-commit-msg Executable file
@@ -0,0 +1,24 @@
#!/usr/bin/env sh
# bd-shim v1
# bd-hooks-version: 0.48.0
#
# bd (beads) prepare-commit-msg hook - thin shim
#
# This shim delegates to 'bd hooks run prepare-commit-msg' which contains
# the actual hook logic. This pattern ensures hook behavior is always
# in sync with the installed bd version - no manual updates needed.
#
# Arguments:
#   $1 = path to the commit message file
#   $2 = source of commit message (message, template, merge, squash, commit)
#   $3 = commit SHA-1 (if -c, -C, or --amend)

# Check if bd is available
if ! command -v bd >/dev/null 2>&1; then
  echo "Warning: bd command not found in PATH, skipping prepare-commit-msg hook" >&2
  echo "  Install bd: brew install beads" >&2
  echo "  Or add bd to your PATH" >&2
  exit 0
fi

exec bd hooks run prepare-commit-msg "$@"
0 .beads/interactions.jsonl Normal file
7 .beads/metadata.json Normal file
@@ -0,0 +1,7 @@
{
  "database": "dolt",
  "jsonl_export": "issues.jsonl",
  "backend": "dolt",
  "dolt_mode": "server",
  "dolt_database": "beads_netbird-gitops"
}
62 .gitea/workflows/dry-run.yml Normal file
@@ -0,0 +1,62 @@
name: Dry Run

on:
  pull_request:
    paths:
      - "netbird.json"

jobs:
  dry-run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Run dry-run reconcile
        id: plan
        run: |
          RESPONSE=$(curl -sf \
            -X POST \
            -H "Authorization: Bearer ${{ secrets.RECONCILER_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d @netbird.json \
            "${{ secrets.RECONCILER_URL }}/reconcile?dry_run=true")
          echo "response<<EOF" >> "$GITHUB_OUTPUT"
          echo "$RESPONSE" >> "$GITHUB_OUTPUT"
          echo "EOF" >> "$GITHUB_OUTPUT"

      - name: Format plan as markdown
        id: format
        run: |
          cat <<'SCRIPT' > format.py
          import json, sys
          data = json.loads(sys.stdin.read())
          ops = data.get("operations", [])
          summary = data.get("summary", {})
          lines = ["## NetBird Reconciliation Plan\n"]
          if not ops:
              lines.append("No changes detected.\n")
          else:
              lines.append("| Operation | Name |")
              lines.append("|-----------|------|")
              for op in ops:
                  lines.append(f"| `{op['type']}` | {op['name']} |")
              lines.append("")
          s = summary
          lines.append(f"**Summary:** {s.get('created',0)} create, {s.get('updated',0)} update, {s.get('deleted',0)} delete")
          print("\n".join(lines))
          SCRIPT
          COMMENT=$(echo '${{ steps.plan.outputs.response }}' | python3 format.py)
          echo "comment<<EOF" >> "$GITHUB_OUTPUT"
          echo "$COMMENT" >> "$GITHUB_OUTPUT"
          echo "EOF" >> "$GITHUB_OUTPUT"

      - name: Post PR comment
        env:
          GITEA_TOKEN: ${{ secrets.GITEA_TOKEN }}
        run: |
          curl -sf \
            -X POST \
            -H "Authorization: token ${GITEA_TOKEN}" \
            -H "Content-Type: application/json" \
            -d "{\"body\": $(echo '${{ steps.format.outputs.comment }}' | jq -Rs .)}" \
            "${{ secrets.GITEA_URL }}/api/v1/repos/${{ github.repository }}/issues/${{ github.event.pull_request.number }}/comments"
64 .gitea/workflows/reconcile.yml Normal file
@@ -0,0 +1,64 @@
name: Reconcile

on:
  push:
    branches:
      - main
    paths:
      - "netbird.json"

jobs:
  reconcile:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Sync events
        run: |
          curl -sf \
            -X POST \
            -H "Authorization: Bearer ${{ secrets.RECONCILER_TOKEN }}" \
            "${{ secrets.RECONCILER_URL }}/sync-events"

      - name: Pull latest (poller may have committed)
        run: git pull --rebase

      - name: Apply reconcile
        id: reconcile
        run: |
          RESPONSE=$(curl -sf \
            -X POST \
            -H "Authorization: Bearer ${{ secrets.RECONCILER_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d @netbird.json \
            "${{ secrets.RECONCILER_URL }}/reconcile")
          echo "response<<EOF" >> "$GITHUB_OUTPUT"
          echo "$RESPONSE" >> "$GITHUB_OUTPUT"
          echo "EOF" >> "$GITHUB_OUTPUT"

          STATUS=$(echo "$RESPONSE" | jq -r '.status')
          if [ "$STATUS" = "error" ]; then
            echo "Reconcile failed"
            echo "$RESPONSE" | jq .
            exit 1
          fi

      - name: Encrypt and upload setup keys
        if: success()
        run: |
          KEYS=$(echo '${{ steps.reconcile.outputs.response }}' | jq -r '.created_keys // empty')
          if [ -n "$KEYS" ] && [ "$KEYS" != "{}" ] && [ "$KEYS" != "null" ]; then
            echo "$KEYS" | age -r "${{ secrets.AGE_PUBLIC_KEY }}" -o setup-keys.age
            echo "Setup keys encrypted to setup-keys.age"
          else
            echo "No new keys created"
            exit 0
          fi

      - name: Upload artifact
        if: success()
        uses: actions/upload-artifact@v4
        with:
          name: setup-keys
          path: setup-keys.age
          if-no-files-found: ignore
29 .gitea/workflows/release.yml Normal file
@@ -0,0 +1,29 @@
name: Release

on:
  push:
    tags:
      - "v*"

jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: denoland/deno:debian
    steps:
      - uses: actions/checkout@v4

      - name: Compile
        run: deno compile --allow-net --allow-read --allow-write --allow-env --output reconciler src/main.ts

      - name: Build Docker image
        run: |
          docker build -t ${{ secrets.GITEA_URL }}/blastpilot/netbird-reconciler:${{ github.ref_name }} .
          docker tag ${{ secrets.GITEA_URL }}/blastpilot/netbird-reconciler:${{ github.ref_name }} \
            ${{ secrets.GITEA_URL }}/blastpilot/netbird-reconciler:latest

      - name: Push Docker image
        run: |
          echo "${{ secrets.PACKAGE_TOKEN }}" | docker login ${{ secrets.GITEA_URL }} -u achilles-ci-bot --password-stdin
          docker push ${{ secrets.GITEA_URL }}/blastpilot/netbird-reconciler:${{ github.ref_name }}
          docker push ${{ secrets.GITEA_URL }}/blastpilot/netbird-reconciler:latest
4 .gitignore vendored Normal file
@@ -0,0 +1,4 @@
/data/
*.log
.env*
!.env.example
132 AGENTS.md Normal file
@@ -0,0 +1,132 @@
# Agent Instructions

This project uses **bd** (beads) for issue tracking. Run `bd onboard` to get
started.

## Quick Reference

```bash
bd ready                              # Find available work
bd show <id>                          # View issue details
bd update <id> --status in_progress   # Claim work
bd close <id>                         # Complete work
bd sync                               # Sync with git
```

<!-- BEGIN BEADS INTEGRATION -->

## Issue Tracking with bd (beads)

**IMPORTANT**: This project uses **bd (beads)** for ALL issue tracking. Do NOT
use markdown TODOs, task lists, or other tracking methods.

### Why bd?

- Dependency-aware: Track blockers and relationships between issues
- Git-friendly: Auto-syncs to JSONL for version control
- Agent-optimized: JSON output, ready work detection, discovered-from links
- Prevents duplicate tracking systems and confusion

### Quick Start

**Check for ready work:**

```bash
bd ready --json
```

**Create new issues:**

```bash
bd create "Issue title" --description="Detailed context" -t bug|feature|task -p 0-4 --json
bd create "Issue title" --description="What this issue is about" -p 1 --deps discovered-from:bd-123 --json
```

**Claim and update:**

```bash
bd update bd-42 --status in_progress --json
bd update bd-42 --priority 1 --json
```

**Complete work:**

```bash
bd close bd-42 --reason "Completed" --json
```

### Issue Types

- `bug` - Something broken
- `feature` - New functionality
- `task` - Work item (tests, docs, refactoring)
- `epic` - Large feature with subtasks
- `chore` - Maintenance (dependencies, tooling)

### Priorities

- `0` - Critical (security, data loss, broken builds)
- `1` - High (major features, important bugs)
- `2` - Medium (default, nice-to-have)
- `3` - Low (polish, optimization)
- `4` - Backlog (future ideas)

### Workflow for AI Agents

1. **Check ready work**: `bd ready` shows unblocked issues
2. **Claim your task**: `bd update <id> --status in_progress`
3. **Work on it**: Implement, test, document
4. **Discover new work?** Create linked issue:
   - `bd create "Found bug" --description="Details about what was found" -p 1 --deps discovered-from:<parent-id>`
5. **Complete**: `bd close <id> --reason "Done"`

### Auto-Sync

bd automatically syncs with git:

- Exports to `.beads/issues.jsonl` after changes (5s debounce)
- Imports from JSONL when newer (e.g., after `git pull`)
- No manual export/import needed!

### Important Rules

- ✅ Use bd for ALL task tracking
- ✅ Always use `--json` flag for programmatic use
- ✅ Link discovered work with `discovered-from` dependencies
- ✅ Check `bd ready` before asking "what should I work on?"
- ❌ Do NOT create markdown TODO lists
- ❌ Do NOT use external issue trackers
- ❌ Do NOT duplicate tracking systems

For more details, see README.md and docs/QUICKSTART.md.

<!-- END BEADS INTEGRATION -->

## Landing the Plane (Session Completion)

**When ending a work session**, you MUST complete ALL steps below. Work is NOT
complete until `git push` succeeds.

**MANDATORY WORKFLOW:**

1. **File issues for remaining work** - Create issues for anything that needs
   follow-up
2. **Run quality gates** (if code changed) - Tests, linters, builds
3. **Update issue status** - Close finished work, update in-progress items
4. **PUSH TO REMOTE** - This is MANDATORY:
   ```bash
   git pull --rebase
   bd sync
   git push
   git status  # MUST show "up to date with origin"
   ```
5. **Clean up** - Clear stashes, prune remote branches
6. **Verify** - All changes committed AND pushed
7. **Hand off** - Provide context for next session

**CRITICAL RULES:**

- Work is NOT complete until `git push` succeeds
- NEVER stop before pushing - that leaves work stranded locally
- NEVER say "ready to push when you are" - YOU must push
- If push fails, resolve and retry until it succeeds
10 Dockerfile Normal file
@@ -0,0 +1,10 @@
FROM denoland/deno:2.6.5 AS builder
WORKDIR /app
COPY deno.json .
COPY deno.lock .
COPY src/ src/
RUN deno compile --allow-net --allow-read --allow-write --allow-env --output reconciler src/main.ts

FROM gcr.io/distroless/cc-debian12:nonroot
COPY --from=builder /app/reconciler /usr/local/bin/reconciler
ENTRYPOINT ["reconciler"]
21 deno.json Normal file
@@ -0,0 +1,21 @@
{
  "name": "@blastpilot/netbird-reconciler",
  "version": "0.1.0",
  "tasks": {
    "dev": "deno run --allow-net --allow-read --allow-write --allow-env --watch src/main.ts",
    "start": "deno run --allow-net --allow-read --allow-write --allow-env src/main.ts",
    "test": "deno test --allow-net --allow-read --allow-write --allow-env",
    "check": "deno check src/main.ts",
    "lint": "deno lint",
    "fmt": "deno fmt",
    "export": "deno run --allow-net --allow-env src/main.ts --export"
  },
  "imports": {
    "@std/assert": "jsr:@std/assert@^1.0.0",
    "@std/path": "jsr:@std/path@^1.0.0",
    "zod": "npm:zod@^3.23.0"
  },
  "compilerOptions": {
    "strict": true
  }
}
39 deno.lock generated Normal file
@@ -0,0 +1,39 @@
{
  "version": "5",
  "specifiers": {
    "jsr:@std/assert@*": "1.0.19",
    "jsr:@std/assert@1": "1.0.19",
    "jsr:@std/internal@^1.0.12": "1.0.12",
    "jsr:@std/path@1": "1.1.4",
    "npm:zod@^3.23.0": "3.25.76"
  },
  "jsr": {
    "@std/assert@1.0.19": {
      "integrity": "eaada96ee120cb980bc47e040f82814d786fe8162ecc53c91d8df60b8755991e",
      "dependencies": [
        "jsr:@std/internal"
      ]
    },
    "@std/internal@1.0.12": {
      "integrity": "972a634fd5bc34b242024402972cd5143eac68d8dffaca5eaa4dba30ce17b027"
    },
    "@std/path@1.1.4": {
      "integrity": "1d2d43f39efb1b42f0b1882a25486647cb851481862dc7313390b2bb044314b5",
      "dependencies": [
        "jsr:@std/internal"
      ]
    }
  },
  "npm": {
    "zod@3.25.76": {
      "integrity": "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="
    }
  },
  "workspace": {
    "dependencies": [
      "jsr:@std/assert@1",
      "jsr:@std/path@1",
      "npm:zod@^3.23.0"
    ]
  }
}
14 deploy/.env.example Normal file
@@ -0,0 +1,14 @@
NETBIRD_API_URL=https://netbird.example.com/api
NETBIRD_API_TOKEN=

# Set to false to run without Gitea integration (standalone mode).
# When false, GITEA_URL/TOKEN/REPO are not required.
GITEA_ENABLED=true
GITEA_URL=https://gitea.example.com
GITEA_TOKEN=
GITEA_REPO=BlastPilot/netbird-gitops

RECONCILER_TOKEN=
POLL_INTERVAL_SECONDS=30
PORT=8080
DATA_DIR=/data
21 deploy/docker-compose.yml Normal file
@@ -0,0 +1,21 @@
services:
  netbird-reconciler:
    image: gitea.internal/blastpilot/netbird-reconciler:latest
    restart: unless-stopped
    env_file: .env
    volumes:
      - reconciler-data:/data
    ports:
      - "127.0.0.1:8080:8080"
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.reconciler.rule=Host(`reconciler.internal`)"
      - "traefik.http.services.reconciler.loadbalancer.server.port=8080"

volumes:
  reconciler-data:
358 docs/plans/2026-03-03-netbird-reconciler-design.md Normal file
@@ -0,0 +1,358 @@
# NetBird Reconciler — Design Document

> **Status:** Approved **Author:** @prox **Date:** 2026-03-03 **Proposal:**
> NetBird GitOps Proposal (rev2)

## Overview

A dedicated backend service that provides declarative GitOps-driven
reconciliation for NetBird VPN configuration. Engineers declare desired state in
`netbird.json`; the reconciler computes diffs and applies changes with
all-or-nothing semantics.

**Repo:** `BlastPilot/netbird-gitops` (service code + state file in one repo)
**Runtime:** TypeScript / Deno **Deployment:** Docker Compose on the NetBird
VPS, behind Traefik

## Architecture

The reconciler has two responsibilities:

1. **Reconciliation API** — Called by Gitea Actions CI on PR events. Accepts
   desired state (`netbird.json`), fetches actual state from NetBird API,
   computes a diff, and either returns a plan (dry-run) or applies changes.

2. **Event Poller** — Background loop polling NetBird `/api/events` every 30s to
   detect peer enrollments. When a peer enrolls via a known setup key, the
   poller renames it, assigns it to the correct group, and commits
   `enrolled: true` back to git via Gitea API.

### Data Flow

```
Engineer -> PR to netbird-gitops (edit netbird.json)
  -> CI: dry-run -> reconciler -> plan posted as PR comment
  -> PR merged -> CI: apply -> reconciler -> mutations to NetBird API
  -> response with created_keys
  -> CI: encrypt keys with age, upload artifact

Event poller (background):
  -> polls NetBird /api/events
  -> detects peer enrollment (peer.setupkey.add)
  -> renames peer, assigns groups
  -> commits enrolled:true via Gitea API
```
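
The poller's detection step can be sketched as a pure filter over fetched events. This is an illustrative sketch only: the `peer.setupkey.add` activity code comes from the data flow above, but the field names on the event object are assumptions, not the actual NetBird API payload.

```typescript
// Hypothetical, simplified event shape; the real NetBird /api/events
// payload differs. Field names here are assumptions for illustration.
interface NetBirdEvent {
  activity: string; // e.g. "peer.setupkey.add"
  peerId: string;
  setupKeyName: string; // name of the setup key the peer enrolled with
}

// Given events seen since the last poll and the setup-key names declared
// in netbird.json, return the enrollments the poller should act on.
function detectEnrollments(
  events: NetBirdEvent[],
  declaredKeys: Set<string>,
): NetBirdEvent[] {
  return events.filter(
    (e) => e.activity === "peer.setupkey.add" && declaredKeys.has(e.setupKeyName),
  );
}
```

Each returned event would then drive the rename, group assignment, and the `enrolled: true` commit shown in the data flow.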

### Integration with Enrollment Pipeline

The existing enrollment pipeline in `blastpilot-public` changes:

- **Before:** `handleApproval()` creates `peers/enrollment-{N}.json`,
  `handlePRMerge()` calls NetBird API directly to create setup keys, emails PDF.
- **After:** `handleApproval()` modifies `netbird.json` (adds setup key + group
  entries) and creates PR. Key creation is handled by the reconciler on merge.
  Key delivery starts as manual (engineer downloads encrypted artifact), with
  automation added later.

## State File Format

`netbird.json` at repo root. All resources referenced by name, never by NetBird
ID.

```json
{
  "groups": {
    "pilots": { "peers": ["Pilot-hawk-72"] },
    "ground-stations": { "peers": ["GS-hawk-72"] },
    "commanders": { "peers": [] }
  },
  "setup_keys": {
    "GS-hawk-72": {
      "type": "one-off",
      "expires_in": 604800,
      "usage_limit": 1,
      "auto_groups": ["ground-stations"],
      "enrolled": true
    },
    "Pilot-hawk-72": {
      "type": "one-off",
      "expires_in": 604800,
      "usage_limit": 1,
      "auto_groups": ["pilots"],
      "enrolled": false
    }
  },
  "policies": {
    "pilots-to-gs": {
      "description": "Allow pilots to reach ground stations",
      "enabled": true,
      "sources": ["pilots"],
      "destinations": ["ground-stations"],
      "bidirectional": true,
      "protocol": "ALL"
    }
  },
  "routes": {
    "gs-local-network": {
      "description": "Route to GS local subnet",
      "network": "192.168.1.0/24",
      "peer_groups": ["ground-stations"],
      "enabled": true
    }
  },
  "dns": {
    "nameserver_groups": {}
  }
}
```
|
||||||
|
|
||||||
|
**Conventions:**
|
||||||
|
|
||||||
|
- Setup key name = expected peer hostname
|
||||||
|
- `enrolled: false` — setup key should exist, peer hasn't connected yet
|
||||||
|
- `enrolled: true` — peer detected, renamed, assigned to groups
|
||||||
|
- Groups reference peers by setup key name (becomes peer hostname after rename)
|
||||||
|
- Policies reference groups by name
|
||||||
|
- Reconciler maintains internal name-to-ID mapping fetched at plan time
|
||||||
|
|
||||||
|
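The name-to-ID convention implies a small lookup layer inside the reconciler. A minimal sketch, assuming hypothetical types and helper names (not the actual implementation):

```typescript
// Hypothetical shape of a NetBird group; only the fields needed here.
interface NbGroup {
  id: string;
  name: string;
}

// Build a name -> ID index from the groups fetched at plan time.
function buildGroupIndex(groups: NbGroup[]): Map<string, string> {
  const index = new Map<string, string>();
  for (const g of groups) index.set(g.name, g.id);
  return index;
}

// Resolve group names referenced in netbird.json to NetBird IDs,
// failing the plan if a referenced group does not exist yet.
function resolveGroupIds(names: string[], index: Map<string, string>): string[] {
  return names.map((name) => {
    const id = index.get(name);
    if (id === undefined) throw new Error(`unknown group: ${name}`);
    return id;
  });
}
```

Because `netbird.json` never stores IDs, this index is rebuilt on every plan, so IDs can never go stale inside the repo.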
## API Endpoints

All endpoints authenticated via `Authorization: Bearer <token>`.

### `POST /reconcile`

**Query params:** `dry_run=true|false` (default: false)

**Request body:** Contents of `netbird.json`

Behavior:

1. Fetch actual state from NetBird API (groups, setup keys, peers, policies,
   routes, DNS)
2. Process pending enrollments from event poller state
3. Compute diff between desired and actual
4. If `dry_run=true`: return plan without applying
5. If `dry_run=false`: execute in dependency order — groups, setup keys, peers,
   policies, routes. Abort on first failure.

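The apply step can be sketched as a dependency-ordered loop that stops at the first failure (illustrative types and operation names, not the actual reconciler code):

```typescript
// Operation kinds in dependency order; illustrative subset.
type OpType = "create_group" | "create_setup_key" | "create_policy" | "create_route";

interface Operation {
  type: OpType;
  name: string;
  run: () => Promise<void>; // the underlying NetBird API call
}

interface OpResult {
  type: OpType;
  name: string;
  status: "success" | "failed";
  error?: string;
}

const APPLY_ORDER: OpType[] = [
  "create_group",
  "create_setup_key",
  "create_policy",
  "create_route",
];

// Execute operations in dependency order; abort on the first failure and
// return partial results (git revert is the rollback mechanism).
async function applyInOrder(ops: Operation[]): Promise<OpResult[]> {
  const sorted = [...ops].sort(
    (a, b) => APPLY_ORDER.indexOf(a.type) - APPLY_ORDER.indexOf(b.type),
  );
  const results: OpResult[] = [];
  for (const op of sorted) {
    try {
      await op.run();
      results.push({ type: op.type, name: op.name, status: "success" });
    } catch (err) {
      results.push({ type: op.type, name: op.name, status: "failed", error: String(err) });
      break;
    }
  }
  return results;
}
```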
Response:

```json
{
  "status": "applied | planned | error",
  "operations": [
    { "type": "create_group", "name": "pilots", "status": "success" },
    {
      "type": "create_setup_key",
      "name": "Pilot-hawk-72",
      "status": "success"
    },
    {
      "type": "create_policy",
      "name": "pilots-to-gs",
      "status": "failed",
      "error": "..."
    }
  ],
  "created_keys": {
    "Pilot-hawk-72": "XXXXXX-XXXXXX-XXXXXX"
  },
  "summary": { "created": 3, "updated": 1, "deleted": 0, "failed": 0 }
}
```

`created_keys` only contains keys created in this run. CI uses this for
encrypted artifacts.

### `POST /sync-events`

Forces the event poller to process pending events immediately. Returns detected
enrollments.

```json
{
  "enrollments": [
    {
      "setup_key_name": "GS-hawk-72",
      "peer_id": "abc123",
      "renamed": true,
      "groups_assigned": true
    }
  ]
}
```

### `GET /health`

No auth. Returns service status for Docker healthcheck.

## Event Poller

**Mechanism:**

- Polls `GET /api/events` every 30 seconds (configurable via
  `POLL_INTERVAL_SECONDS`)
- Persists `last_event_timestamp` to `/data/poller-state.json` (Docker volume)
- Loads last-known `netbird.json` desired state on startup and after each
  reconcile

**Enrollment detection:**

1. Filter events for `peer.setupkey.add` activity
2. Extract `setup_key_name` from event metadata
3. Look up in desired state — if found and `enrolled: false`:
   - Rename peer to match setup key name via `PUT /api/peers/{id}`
   - Assign peer to groups from `setup_keys[name].auto_groups`
   - Commit `enrolled: true` to git via Gitea API (optimistic concurrency with
     SHA check)
   - Commit message: `chore: mark {key_name} as enrolled [automated]`
4. If not found: log warning (unknown peer enrolled outside GitOps)

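The detection steps above can be sketched as follows; the event shape and metadata field names are assumptions based on this design, not the real NetBird events API:

```typescript
// Assumed event shape; field names follow the design above.
interface NbEvent {
  activity_code: string;
  meta: Record<string, string | undefined>;
}

interface DesiredKey {
  enrolled: boolean;
  auto_groups: string[];
}

// Return the setup key names whose enrollment should now be processed:
// known keys with enrolled: false. Already-enrolled keys are skipped
// (idempotency for duplicate events); unknown peers are only logged.
function detectEnrollments(
  events: NbEvent[],
  desiredKeys: Record<string, DesiredKey>,
): string[] {
  const pending: string[] = [];
  for (const ev of events) {
    if (ev.activity_code !== "peer.setupkey.add") continue;
    const keyName = ev.meta["setup_key_name"];
    if (keyName === undefined) continue;
    const desired = desiredKeys[keyName];
    if (desired === undefined) {
      console.warn(`unknown peer enrolled outside GitOps: ${keyName}`);
      continue;
    }
    if (!desired.enrolled && !pending.includes(keyName)) pending.push(keyName);
  }
  return pending;
}
```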
**Edge cases:**

- Race with reconcile: if reconcile is in progress, enrollment processing queues
  until complete
- Duplicate events: idempotent — skip if peer already renamed and enrolled
- Unknown peers: logged but not touched

## CI Workflows

### `dry-run.yml` — On PR open/update

```yaml
on:
  pull_request:
    paths: ["netbird.json"]
```

Steps:

1. Checkout PR branch
2. `POST /reconcile?dry_run=true` with `netbird.json`
3. Format response as markdown table
4. Post/update PR comment via Gitea API

### `reconcile.yml` — On push to main

```yaml
on:
  push:
    branches: [main]
    paths: ["netbird.json"]
```

Steps:

1. Checkout repo
2. `POST /sync-events` — process pending enrollments
3. `POST /reconcile` with `netbird.json`
4. If `created_keys` non-empty: encrypt with `age`, upload as Gitea Actions
   artifact
5. Pull latest (poller may have committed)
6. On failure: job fails, engineer investigates

### Gitea Secrets

| Secret             | Purpose                              |
| ------------------ | ------------------------------------ |
| `RECONCILER_URL`   | Reconciler service URL               |
| `RECONCILER_TOKEN` | Bearer token for CI auth             |
| `AGE_PUBLIC_KEY`   | Encrypts setup key artifacts         |
| `GITEA_TOKEN`      | PR comment posting (achilles-ci-bot) |

## Deployment

Docker Compose on the NetBird VPS:

```yaml
services:
  netbird-reconciler:
    image: gitea.internal/blastpilot/netbird-reconciler:latest
    restart: unless-stopped
    environment:
      NETBIRD_API_URL: "https://netbird.example.com/api"
      NETBIRD_API_TOKEN: "${NETBIRD_API_TOKEN}"
      GITEA_URL: "https://gitea.example.com"
      GITEA_TOKEN: "${GITEA_TOKEN}"
      GITEA_REPO: "BlastPilot/netbird-gitops"
      RECONCILER_TOKEN: "${RECONCILER_TOKEN}"
      POLL_INTERVAL_SECONDS: "30"
      PORT: "8080"
    volumes:
      - reconciler-data:/data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.reconciler.rule=Host(`reconciler.internal`)"

volumes:
  reconciler-data: # declare the named volume referenced above
```

### Environment Variables

| Variable                | Required | Description                     |
| ----------------------- | -------- | ------------------------------- |
| `NETBIRD_API_URL`       | yes      | NetBird management API base URL |
| `NETBIRD_API_TOKEN`     | yes      | NetBird API token               |
| `GITEA_URL`             | yes      | Gitea instance URL              |
| `GITEA_TOKEN`           | yes      | Gitea API token for commits     |
| `GITEA_REPO`            | yes      | `owner/repo` for netbird-gitops |
| `RECONCILER_TOKEN`      | yes      | Bearer token for CI auth        |
| `POLL_INTERVAL_SECONDS` | no       | Poll interval (default: 30)     |
| `PORT`                  | no       | Listen port (default: 8080)     |

### Container Image Build

Tag-triggered CI (`v*`) in netbird-gitops:

1. `deno compile` to single binary
2. Docker build (`FROM denoland/deno:distroless`)
3. Push to Gitea container registry

## Error Handling & Rollback

**Validation phase (before mutations):**

- Parse and validate `netbird.json` schema
- Fetch all actual state
- Compute diff and verify all operations are possible
- If validation fails: return error, no mutations

**Apply phase:**

- Execute in dependency order (groups -> keys -> peers -> policies -> routes)
- On any failure: abort immediately, return partial results
- No automatic rollback — git revert is the rollback mechanism

**Why no automatic rollback:**

- Partial rollback is harder to get right than partial apply
- Git history provides clear, auditable rollback path
- `git revert` + re-reconcile converges to correct state
- Reconciler is idempotent — running twice with same state is safe

**Recovery pattern:**

1. Reconcile fails mid-apply
2. CI job fails, engineer notified
3. Engineer either forward-fixes `netbird.json` or `git revert`s the merge
   commit
4. New push triggers reconcile, converging to correct state

**Logging:**

- Structured JSON logs
- Every NetBird API call logged (method, path, status)
- Every state mutation logged (before/after)
- Event poller logs each event processed

## Resources Managed

| Resource   | NetBird API            | Create           | Update               | Delete |
| ---------- | ---------------------- | ---------------- | -------------------- | ------ |
| Groups     | `/api/groups`          | yes              | yes (peers)          | yes    |
| Setup Keys | `/api/setup-keys`      | yes              | no (immutable)       | yes    |
| Peers      | `/api/peers`           | no (self-enroll) | yes (rename, groups) | yes    |
| Policies   | `/api/policies`        | yes              | yes                  | yes    |
| Routes     | `/api/routes`          | yes              | yes                  | yes    |
| DNS        | `/api/dns/nameservers` | yes              | yes                  | yes    |
3170
docs/plans/2026-03-03-netbird-reconciler-implementation.md
Normal file
File diff suppressed because it is too large
1375
docs/plans/2026-03-06-reconciler-poc-implementation.md
Normal file
File diff suppressed because it is too large
272
docs/plans/2026-03-06-reconciler-poc-validation.md
Normal file
@ -0,0 +1,272 @@
# Reconciler PoC Validation — Design Document

> **Status:** Draft
> **Author:** @prox
> **Date:** 2026-03-06
> **Depends on:** [NetBird Reconciler Design](2026-03-03-netbird-reconciler-design.md)

## Goal

Validate the reconciler end-to-end on a fresh, isolated NetBird instance before
pointing it at production. Prove that:

1. Declaring state in `netbird.json` → reconcile → resources appear in NetBird.
2. Event poller detects peer enrollment and renames the peer.
3. State export from a live NetBird instance produces a valid `netbird.json`.

## Scope

### In scope

- Deploy a self-contained stack on VPS-A (`vps-a.networkmonitor.cc`): fresh
  NetBird, Caddy, Gitea, and reconciler — all via Docker Compose.
- `GITEA_ENABLED` feature flag so the reconciler works without Gitea
  integration.
- State export tool: `GET /export` endpoint + `--export` CLI flag.
- Core reconcile: groups, setup keys, policies created via `/reconcile`.
- Event poller: detect enrollment, rename peer — with or without Gitea
  commit-back.

### Out of scope (deferred)

- Enrollment pipeline integration (docs site → Gitea PR).
- CI workflows (dry-run on PR, reconcile on merge).
- Production deployment to real NetBird environments.
- Key encryption with `age` / artifact upload.

## Architecture

```
VPS-A (vps-a.networkmonitor.cc)
├── Caddy (reverse proxy, HTTPS, ACME)
│   ├── /               → NetBird Dashboard
│   ├── /api            → NetBird Management API
│   ├── /signalexchange → Signal (gRPC)
│   ├── /relay          → Relay
│   └── /reconciler/*   → Reconciler HTTP API
├── NetBird Management (config, IdP, API)
├── NetBird Signal (gRPC peer coordination)
├── NetBird Relay (data relay for NATed peers)
├── Coturn (STUN/TURN)
├── Gitea (hosts netbird-gitops repo)
└── Reconciler (reconcile API + event poller)
```

All containers share a single Docker Compose stack with a common network. Caddy
terminates TLS and routes by path prefix.

## Changes to Reconciler

### 1. Feature Flag: `GITEA_ENABLED`

New environment variable. Default: `true` (backward compatible).

**When `GITEA_ENABLED=false`:**

| Component         | Behavior                                                                                                                       |
| ----------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| Config validation | Skip `GITEA_*` env var requirements                                                                                            |
| Startup           | Don't create Gitea client                                                                                                      |
| `POST /reconcile` | Works normally — accepts `netbird.json` from request body, applies to NetBird API                                              |
| Event poller      | Still runs. Detects `peer.setupkey.add` events, renames peers. Skips commit-back of `enrolled: true`. Logs enrollment instead. |
| `GET /export`     | Works normally — no Gitea dependency                                                                                           |

**When `GITEA_ENABLED=true`:** Current behavior, unchanged.

**Affected files:**

- `src/config.ts` — conditional Gitea env var validation
- `src/main.ts` — conditional Gitea client creation, pass flag to poller
- `src/poller/loop.ts` — guard commit-back behind flag

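The conditional validation in `src/config.ts` could look roughly like this (an illustrative sketch, not the real config loader):

```typescript
// Illustrative config shape; the real src/config.ts may differ.
interface ReconcilerConfig {
  giteaEnabled: boolean;
  giteaUrl?: string;
  giteaToken?: string;
}

// GITEA_* variables are only required when the flag is on.
// Default is "true" for backward compatibility, as specified above.
function loadConfig(env: Record<string, string | undefined>): ReconcilerConfig {
  const giteaEnabled = (env.GITEA_ENABLED ?? "true") === "true";
  if (giteaEnabled && (!env.GITEA_URL || !env.GITEA_TOKEN)) {
    throw new Error("GITEA_URL and GITEA_TOKEN are required when GITEA_ENABLED=true");
  }
  return { giteaEnabled, giteaUrl: env.GITEA_URL, giteaToken: env.GITEA_TOKEN };
}
```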
### 2. State Export

New module: `src/export.ts`

Transforms `ActualState` (from `src/state/actual.ts`) into a valid
`netbird.json` conforming to `DesiredStateSchema`.

**Mapping logic:**

| NetBird resource      | Export strategy                                                                                                                                 |
| --------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| Groups                | Map ID → name. Skip auto-generated groups (`All`, `ch-` prefixed). Peer refs mapped to setup key names where possible, otherwise peer hostname. |
| Setup keys            | Export with current config. Set `enrolled: true` if `used_times >= usage_limit`, else `false`.                                                  |
| Policies              | Map source/destination group IDs → names. Include port rules.                                                                                   |
| Routes                | Map group IDs → names, include network CIDRs.                                                                                                   |
| DNS nameserver groups | Map group refs → names.                                                                                                                         |

**Interfaces:**

```
GET /export
→ 200: { state: <netbird.json content>, meta: { exported_at, source_url, groups_count, ... } }

CLI: deno run src/main.ts --export --netbird-api-url <url> --netbird-api-token <token>
→ stdout: netbird.json content
```

The CLI mode is standalone — it creates a NetBird client, fetches state,
exports, and exits. No HTTP server started.

**Affected files:**

- `src/export.ts` — new: transformation logic
- `src/server.ts` — new endpoint: `GET /export`
- `src/main.ts` — new CLI flag: `--export`

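The `enrolled` rule from the setup-key row can be sketched as follows; the extra guard for unlimited keys (`usage_limit` of 0) is an assumption not stated in the table:

```typescript
// Minimal setup-key shape; field names follow the NetBird API as used above.
interface NbSetupKey {
  name: string;
  used_times: number;
  usage_limit: number;
}

// Export rule from the mapping table: enrolled once the usage limit is
// exhausted. The usage_limit > 0 guard for unlimited keys is an assumption.
function isEnrolled(key: NbSetupKey): boolean {
  return key.usage_limit > 0 && key.used_times >= key.usage_limit;
}
```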
### 3. No Structural Changes

The reconcile engine (`diff.ts`, `executor.ts`), NetBird client, and state
schema remain unchanged. The export tool and feature flag are additive.

## Ansible Playbook

Location: `poc/ansible/` within this repo.

```
poc/
  ansible/
    inventory.yml
    playbook.yml
    group_vars/
      all/
        vars.yml           # domain, ports, non-secret config
        vault.yml          # secrets (gitignored)
        vault.yml.example  # template for secrets
    templates/
      docker-compose.yml.j2
      management.json.j2   # NetBird management config (embedded IdP)
      Caddyfile.j2
      dashboard.env.j2
      relay.env.j2
      turnserver.conf.j2
      reconciler.env.j2
      gitea.env.j2
```

**Playbook tasks:**

1. Install Docker + Docker Compose (if not present)
2. Create working directory structure
3. Template all config files
4. Pull images, `docker compose up -d`
5. Wait for Gitea to be ready
6. Create Gitea admin user + `BlastPilot` org + `netbird-gitops` repo via API
7. Seed `netbird.json` into the repo with initial test state

**Key config decisions:**

- **Caddy** for reverse proxy (proven in existing PoC templates).
- **Embedded IdP** for NetBird (no external OAuth — same as existing PoC).
- **Secrets auto-generated** at deploy time (NetBird encryption key, TURN
  secret, relay secret). Printed to stdout for operator reference.
- Reconciler env vars templated from `vault.yml` (NetBird API token, Gitea
  token).

**SSH key:** `~/.ssh/hetzner` (same as docs site deployment).

**Deploy command:** `ansible-playbook -i inventory.yml playbook.yml`

## Test netbird.json

The seed state for validation:

```json
{
  "groups": {
    "ground-stations": { "peers": [] },
    "pilots": { "peers": [] }
  },
  "setup_keys": {
    "GS-TestHawk-1": {
      "type": "one-off",
      "expires_in": 604800,
      "usage_limit": 1,
      "auto_groups": ["ground-stations"],
      "enrolled": false
    },
    "Pilot-TestHawk-1": {
      "type": "one-off",
      "expires_in": 604800,
      "usage_limit": 1,
      "auto_groups": ["pilots"],
      "enrolled": false
    }
  },
  "policies": {
    "pilots-to-gs": {
      "enabled": true,
      "sources": ["pilots"],
      "destinations": ["ground-stations"],
      "bidirectional": true
    }
  },
  "routes": {},
  "dns": { "nameserver_groups": {} }
}
```

This creates two groups, two one-off setup keys, and a bidirectional policy
between pilots and ground stations. Minimal but sufficient to validate the full
reconcile + enrollment flow.

## Validation Plan

### Phase 1 — Deploy

1. Wipe VPS-A (or just `docker compose down -v` if redeploying).
2. Run playbook → full stack up.
3. Access NetBird dashboard at `https://vps-a.networkmonitor.cc` — verify clean
   state (only default "All" group).
4. Access Gitea at `https://vps-a.networkmonitor.cc/gitea` (or dedicated port) —
   verify `BlastPilot/netbird-gitops` repo exists with seed `netbird.json`.

### Phase 2 — Reconcile

5. `curl -X POST https://vps-a.networkmonitor.cc/reconciler/reconcile?dry_run=true -d @netbird.json`
   → Verify plan shows: create 2 groups, 2 setup keys, 1 policy.
6. `curl -X POST https://vps-a.networkmonitor.cc/reconciler/reconcile -d @netbird.json`
   → Verify response includes `created_keys` with actual key values.
7. Open NetBird dashboard → verify groups, setup keys, and policy exist.
8. `curl https://vps-a.networkmonitor.cc/reconciler/export` → Compare exported
   state with input. Verify round-trip consistency.

### Phase 3 — Enrollment

9. Copy a setup key value from step 6 response.
10. On a test machine: `netbird up --setup-key <key>`.
11. Check NetBird dashboard: peer appears, gets auto-renamed by poller, placed
    in correct group.
12. Check reconciler logs: enrollment event detected, peer renamed, log entry
    written (no Gitea commit since `GITEA_ENABLED=false` for initial test).

### Phase 4 — State Export (against real instance)

13. Run CLI export against `dev.netbird.achilles-rnd.cc`:
    ```
    deno run src/main.ts --export \
      --netbird-api-url https://dev.netbird.achilles-rnd.cc/api \
      --netbird-api-token <token>
    ```
14. Review output — validates we can bootstrap GitOps from existing environment.
15. Optionally: dry-run reconcile the exported state against the same instance —
    should produce an empty plan (no changes needed).

## Success Criteria

- [ ] Reconcile creates all declared resources in NetBird.
- [ ] Dry-run returns accurate plan without side effects.
- [ ] Export produces valid `netbird.json` from a live instance.
- [ ] Export → dry-run round-trip yields empty plan (idempotent).
- [ ] Poller detects enrollment and renames peer within 30s.
- [ ] Reconciler starts and operates correctly with `GITEA_ENABLED=false`.
- [ ] Reconciler starts and operates correctly with `GITEA_ENABLED=true` +
      Gitea.

## Risks

| Risk                                                          | Mitigation                                                                 |
| ------------------------------------------------------------- | -------------------------------------------------------------------------- |
| NetBird Management API behavior differs from docs             | Testing against real instance; reconciler has comprehensive error handling |
| Export misses edge cases in resource mapping                  | Validate with dry-run round-trip (export → reconcile → empty plan)         |
| Poller misses events during 30s poll interval                 | Acceptable for PoC; production can tune interval or add webhook trigger    |
| VPS-A resources (2 vCPU, 4GB RAM) insufficient for full stack | Monitor; NetBird + Gitea are lightweight individually                      |
555
docs/plans/2026-03-06-schema-expansion.md
Normal file
@ -0,0 +1,555 @@
# Schema Expansion: Full NetBird State Coverage

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to
> implement this plan task-by-task.

**Goal:** Expand the reconciler schema and export to cover all NetBird resource
types: posture checks, networks (with resources and routers), peers, users, and
resource-backed policies.

**Architecture:** Each new resource type follows the existing pattern: add NB
types → add schema → add to ActualState → add client methods → add diff logic →
add executor handlers → add export → add tests. Policies are extended to support
`destination_resource` as an alternative to `destinations`. The "All" group gets
hardcoded exclusion from deletion.

**Tech Stack:** Deno 2.x, TypeScript, Zod, injectable fetch for testing.

---

### Task 1: Fix "All" group hardcoded exclusion + policy null-safety

**Files:**

- Modify: `src/reconcile/diff.ts:66-70` (add "All" name check)
- Modify: `src/reconcile/diff.ts:138-145` (null-safety for destinations)
- Modify: `src/reconcile/diff.test.ts` (add test for "All" exclusion with
  `issued: "api"`)

The diff already filters `issued === "api"` but "All" has `issued: "api"` in
real environments. Add explicit name exclusion. Also guard against `null`
destinations in policy rules (resource-backed policies).

**Changes to `src/reconcile/diff.ts`:**

In `diffGroups`, line 67, change:

```typescript
if (!desiredNames.has(group.name) && group.issued === "api") {
```

to:

```typescript
if (!desiredNames.has(group.name) && group.issued === "api" && group.name !== "All") {
```

In `diffPolicies`, around line 143, wrap destinations extraction:

```typescript
const actualDests = extractGroupNames(
  existing.rules.flatMap((r) => r.destinations ?? []),
  actual,
).sort();
```

Add test: `computeDiff does not delete "All" group even when issued is "api"`.

Run: `deno task test`

---

### Task 2: Add posture check and network types to `src/netbird/types.ts`

**Files:**

- Modify: `src/netbird/types.ts`

Add these interfaces after the existing types:

```typescript
/** Posture check as returned by GET /api/posture-checks */
export interface NbPostureCheck {
  id: string;
  name: string;
  description: string;
  checks: Record<string, unknown>;
}

/** Network as returned by GET /api/networks */
export interface NbNetwork {
  id: string;
  name: string;
  description: string;
  resources: string[];
  routers: string[];
  policies: string[];
  routing_peers_count: number;
}

/** Network resource as returned by GET /api/networks/{id}/resources */
export interface NbNetworkResource {
  id: string;
  name: string;
  description: string;
  type: "host" | "subnet" | "domain";
  address: string;
  enabled: boolean;
  groups: Array<
    { id: string; name: string; peers_count: number; resources_count: number }
  >;
}

/** Network router as returned by GET /api/networks/{id}/routers */
export interface NbNetworkRouter {
  id: string;
  peer: string | null;
  peer_groups: string[] | null;
  metric: number;
  masquerade: boolean;
  enabled: boolean;
}

/** User as returned by GET /api/users */
export interface NbUser {
  id: string;
  name: string;
  email: string;
  role: "owner" | "admin" | "user";
  status: "active" | "invited" | "blocked";
  auto_groups: string[];
  is_service_user: boolean;
}
```

Also add `destinationResource` and `source_posture_checks` to `NbPolicy`:

```typescript
export interface NbPolicy {
  id: string;
  name: string;
  description: string;
  enabled: boolean;
  rules: NbPolicyRule[];
  source_posture_checks: string[]; // posture check IDs
}
```

And add to `NbPolicyRule`:

```typescript
export interface NbPolicyRule {
  // ... existing fields ...
  destinationResource?: { id: string; type: string } | null;
}
```

Run: `deno task check`

---

### Task 3: Add client methods for new resource types

**Files:**

- Modify: `src/netbird/client.ts`

Add sections for:

**Posture Checks:**

```typescript
listPostureChecks(): Promise<NbPostureCheck[]>
createPostureCheck(data: Omit<NbPostureCheck, "id">): Promise<NbPostureCheck>
updatePostureCheck(id: string, data: Omit<NbPostureCheck, "id">): Promise<NbPostureCheck>
deletePostureCheck(id: string): Promise<void>
```

**Networks:**

```typescript
listNetworks(): Promise<NbNetwork[]>
createNetwork(data: { name: string; description?: string }): Promise<NbNetwork>
updateNetwork(id: string, data: { name: string; description?: string }): Promise<NbNetwork>
deleteNetwork(id: string): Promise<void>
```

**Network Resources (nested under network):**

```typescript
listNetworkResources(networkId: string): Promise<NbNetworkResource[]>
createNetworkResource(networkId: string, data: { name: string; description?: string; address: string; enabled: boolean; groups: string[] }): Promise<NbNetworkResource>
updateNetworkResource(networkId: string, resourceId: string, data: { name: string; description?: string; address: string; enabled: boolean; groups: string[] }): Promise<NbNetworkResource>
deleteNetworkResource(networkId: string, resourceId: string): Promise<void>
```

**Network Routers:**

```typescript
listNetworkRouters(networkId: string): Promise<NbNetworkRouter[]>
createNetworkRouter(networkId: string, data: Omit<NbNetworkRouter, "id">): Promise<NbNetworkRouter>
updateNetworkRouter(networkId: string, routerId: string, data: Omit<NbNetworkRouter, "id">): Promise<NbNetworkRouter>
deleteNetworkRouter(networkId: string, routerId: string): Promise<void>
```

**Users:**

```typescript
listUsers(): Promise<NbUser[]>
createUser(data: { email: string; name?: string; role: string; auto_groups: string[]; is_service_user: boolean }): Promise<NbUser>
updateUser(id: string, data: { name?: string; role?: string; auto_groups?: string[] }): Promise<NbUser>
deleteUser(id: string): Promise<void>
```

Run: `deno task check`

---

### Task 4: Expand ActualState with new resource collections

**Files:**

- Modify: `src/state/actual.ts`

Add to `ActualState` interface:

```typescript
postureChecks: NbPostureCheck[];
postureChecksByName: Map<string, NbPostureCheck>;
networks: NbNetwork[];
networksByName: Map<string, NbNetwork>;
networkResources: Map<string, NbNetworkResource[]>; // networkId -> resources
networkRouters: Map<string, NbNetworkRouter[]>; // networkId -> routers
users: NbUser[];
usersByEmail: Map<string, NbUser>;
```

Expand `ClientLike` to include:

```typescript
| "listPostureChecks"
| "listNetworks"
| "listNetworkResources"
| "listNetworkRouters"
| "listUsers"
```

In `fetchActualState`: fetch posture checks, networks, and users in the initial
`Promise.all`. Then, for each network, fetch its resources and routers in a
second parallel batch.
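The two-batch shape can be sketched as follows; the types here are trimmed stand-ins, not the real `ActualState` or client:

```typescript
// Sketch of the two-phase fetch: independent collections first, then
// per-network children, each phase parallelized with Promise.all.
interface Network { id: string; name: string }

interface ClientSketch {
  listNetworks(): Promise<Network[]>;
  listNetworkResources(networkId: string): Promise<string[]>;
  listNetworkRouters(networkId: string): Promise<string[]>;
  listUsers(): Promise<string[]>;
}

async function fetchActualStateSketch(client: ClientSketch) {
  // Batch 1: top-level collections that don't depend on each other.
  const [networks, users] = await Promise.all([
    client.listNetworks(),
    client.listUsers(),
  ]);

  // Batch 2: children of each network, also in parallel.
  const perNetwork = await Promise.all(
    networks.map(async (n) => ({
      id: n.id,
      resources: await client.listNetworkResources(n.id),
      routers: await client.listNetworkRouters(n.id),
    })),
  );

  const networkResources = new Map<string, string[]>();
  const networkRouters = new Map<string, string[]>();
  for (const p of perNetwork) {
    networkResources.set(p.id, p.resources);
    networkRouters.set(p.id, p.routers);
  }
  return { networks, users, networkResources, networkRouters };
}
```

The second batch cannot start earlier because the resource and router endpoints are keyed by network ID, which is only known after `listNetworks` resolves.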
Run: `deno task check`

---

### Task 5: Expand the Zod schema with new resource types

**Files:**

- Modify: `src/state/schema.ts`

Add schemas:

```typescript
export const PostureCheckSchema = z.object({
  description: z.string().default(""),
  checks: z.record(z.string(), z.unknown()),
});

export const NetworkResourceSchema = z.object({
  name: z.string(),
  description: z.string().default(""),
  type: z.enum(["host", "subnet", "domain"]),
  address: z.string(),
  enabled: z.boolean().default(true),
  groups: z.array(z.string()),
});

export const NetworkRouterSchema = z.object({
  peer: z.string().optional(),
  peer_groups: z.array(z.string()).optional(),
  metric: z.number().int().min(1).max(9999).default(9999),
  masquerade: z.boolean().default(true),
  enabled: z.boolean().default(true),
});

export const NetworkSchema = z.object({
  description: z.string().default(""),
  resources: z.array(NetworkResourceSchema).default([]),
  routers: z.array(NetworkRouterSchema).default([]),
});

export const PeerSchema = z.object({
  groups: z.array(z.string()),
  login_expiration_enabled: z.boolean().default(false),
  inactivity_expiration_enabled: z.boolean().default(false),
  ssh_enabled: z.boolean().default(false),
});

export const UserSchema = z.object({
  name: z.string(),
  role: z.enum(["owner", "admin", "user"]),
  auto_groups: z.array(z.string()).default([]),
});
```

Extend `PolicySchema` to support `destination_resource`:

```typescript
export const DestinationResourceSchema = z.object({
  id: z.string(), // resource name, resolved at reconcile time
  type: z.string(),
});

export const PolicySchema = z.object({
  description: z.string().default(""),
  enabled: z.boolean(),
  sources: z.array(z.string()),
  destinations: z.array(z.string()).default([]),
  destination_resource: DestinationResourceSchema.optional(),
  bidirectional: z.boolean(),
  protocol: z.enum(["tcp", "udp", "icmp", "all"]).default("all"),
  action: z.enum(["accept", "drop"]).default("accept"),
  ports: z.array(z.string()).optional(),
  source_posture_checks: z.array(z.string()).default([]),
});
```

Add to `DesiredStateSchema`:

```typescript
export const DesiredStateSchema = z.object({
  groups: z.record(z.string(), GroupSchema),
  setup_keys: z.record(z.string(), SetupKeySchema),
  policies: z.record(z.string(), PolicySchema).default({}),
  posture_checks: z.record(z.string(), PostureCheckSchema).default({}),
  networks: z.record(z.string(), NetworkSchema).default({}),
  peers: z.record(z.string(), PeerSchema).default({}),
  users: z.record(z.string(), UserSchema).default({}),
  routes: z.record(z.string(), RouteSchema).default({}),
  dns: z.object({
    nameserver_groups: z.record(z.string(), DnsNameserverGroupSchema).default(
      {},
    ),
  }).default({ nameserver_groups: {} }),
});
```

Update `validateCrossReferences` to also check:

- Peer groups reference existing groups
- User auto_groups reference existing groups
- Network resource groups reference existing groups
- Policy source_posture_checks reference existing posture checks
- Policy destination_resource.id references an existing network resource name
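One of these checks, sketched with trimmed types (the real `validateCrossReferences` operates on the parsed `DesiredState`): every entry in a user's `auto_groups` must name a group defined in the same file.

```typescript
// Cross-reference sketch: report user auto_groups that don't resolve to a
// declared group. Types are reduced to the fields the check needs.
interface DesiredSketch {
  groups: Record<string, unknown>;
  users: Record<string, { auto_groups: string[] }>;
}

function findDanglingAutoGroups(state: DesiredSketch): string[] {
  const errors: string[] = [];
  for (const [email, user] of Object.entries(state.users)) {
    for (const g of user.auto_groups) {
      if (!(g in state.groups)) {
        errors.push(`user ${email}: auto_groups references unknown group "${g}"`);
      }
    }
  }
  return errors;
}
```

The other four checks follow the same pattern, swapping in the relevant collections (peers/groups, resources/groups, policies/posture checks, policies/resource names).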

Run: `deno task check`

---

### Task 6: Add operations for new resource types

**Files:**

- Modify: `src/reconcile/operations.ts`

Add to `OperationType`:

```typescript
| "create_posture_check" | "update_posture_check" | "delete_posture_check"
| "create_network" | "update_network" | "delete_network"
| "create_network_resource" | "update_network_resource" | "delete_network_resource"
| "create_network_router" | "update_network_router" | "delete_network_router"
| "create_user" | "update_user" | "delete_user"
| "update_peer"
```

Update `EXECUTION_ORDER` — networks must be created before resources/routers,
posture checks before policies that reference them:

```typescript
export const EXECUTION_ORDER: OperationType[] = [
  "create_posture_check",
  "update_posture_check",
  "create_group",
  "update_group",
  "create_setup_key",
  "rename_peer",
  "update_peer_groups",
  "update_peer",
  "create_network",
  "update_network",
  "create_network_resource",
  "update_network_resource",
  "create_network_router",
  "update_network_router",
  "create_user",
  "update_user",
  "create_policy",
  "update_policy",
  "create_route",
  "update_route",
  "create_dns",
  "update_dns",
  // Deletions in reverse dependency order
  "delete_dns",
  "delete_route",
  "delete_policy",
  "delete_user",
  "delete_network_router",
  "delete_network_resource",
  "delete_network",
  "delete_peer",
  "delete_setup_key",
  "delete_posture_check",
  "delete_group",
];
```
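How the order is applied is not shown above; a plausible sketch, with `ORDER` standing in for the real `EXECUTION_ORDER`, is a stable sort of the diff's operations by their index in the list:

```typescript
// Order operations for execution: creations before dependents, deletions in
// reverse dependency order. ORDER is a trimmed stand-in for EXECUTION_ORDER.
const ORDER = [
  "create_posture_check",
  "create_network",
  "create_network_resource",
  "create_policy",
  "delete_policy",
  "delete_network_resource",
  "delete_network",
  "delete_posture_check",
] as const;

type OpType = (typeof ORDER)[number];

function sortOps<T extends { type: OpType }>(ops: T[]): T[] {
  // Array.prototype.sort is stable, so operations of the same type keep
  // their diff-emission order.
  return [...ops].sort((a, b) => ORDER.indexOf(a.type) - ORDER.indexOf(b.type));
}
```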

Run: `deno task check`

---

### Task 7: Add diff logic for new resource types

**Files:**

- Modify: `src/reconcile/diff.ts`

Add `diffPostureChecks`, `diffNetworks`, `diffPeers`, and `diffUsers` functions,
and call them from `computeDiff`.

**Posture checks:** Compare by name. Create if missing. Update if the `checks`
object or description changed (deep JSON compare). Delete if not in desired.

**Networks:** Compare by name. Create the network if missing. For each network,
diff resources and routers:

- Resources: match by name within the network. Create/update/delete.
- Routers: match by peer name (or peer_group). Create/update/delete.

**Peers:** Compare by name. Only update operations (never create/delete).
Compare `groups` (excluding "All"), `login_expiration_enabled`,
`inactivity_expiration_enabled`, and `ssh_enabled`.

**Users:** Compare by email. Create if missing. Update if role or auto_groups
changed. Delete if not in desired (but never delete the "owner" role).

**Policies update:** Handle `destination_resource` — when present, skip the
group-based destination comparison. Handle `source_posture_checks`.
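The users diff described above can be sketched as follows (shapes simplified; this is illustrative, not the real `diffUsers`):

```typescript
// Users diff sketch: keyed by email, create/update/delete, and never emit a
// delete for the owner account.
interface UserSpec { role: string; auto_groups: string[] }
interface ActualUser { id: string; email: string; role: string; auto_groups: string[] }

function diffUsersSketch(
  desired: Record<string, UserSpec>,
  actual: ActualUser[],
) {
  const byEmail = new Map(actual.map((u) => [u.email, u]));
  const creates: string[] = [];
  const updates: string[] = [];
  const deletes: string[] = [];

  for (const [email, spec] of Object.entries(desired)) {
    const existing = byEmail.get(email);
    if (!existing) {
      creates.push(email);
    } else if (
      existing.role !== spec.role ||
      // Order-insensitive comparison of the group lists.
      JSON.stringify([...existing.auto_groups].sort()) !==
        JSON.stringify([...spec.auto_groups].sort())
    ) {
      updates.push(email);
    }
  }
  for (const u of actual) {
    if (!(u.email in desired) && u.role !== "owner") deletes.push(u.email);
  }
  return { creates, updates, deletes };
}
```

`diffPostureChecks` and the per-network resource/router diffs follow the same create/update/delete skeleton with their own keys and comparison fields.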
Run: `deno task check`

---

### Task 8: Add executor handlers for new operations

**Files:**

- Modify: `src/reconcile/executor.ts`

Add `case` handlers in `executeSingle` for all new operation types. Network
operations need special handling: resources and routers reference the network
ID, which may be newly created. Track `createdNetworkIds` similar to
`createdGroupIds`.

Posture check operations: create/update/delete via client methods. Track
`createdPostureCheckIds`.

User operations: resolve `auto_groups` names to IDs.

Network resource operations: resolve `groups` names to IDs.

Network router operations: resolve `peer` name to peer ID, or `peer_groups`
names to group IDs.

Update `ExecutorClient` type to include all new client methods.
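The `createdNetworkIds` pattern can be sketched as follows (operation and client shapes are simplified stand-ins for the real executor types):

```typescript
// ID-tracking sketch: when create_network runs, record the new ID so a later
// create_network_resource in the same run can resolve the network by name
// even though it wasn't in the fetched ActualState.
type Op =
  | { type: "create_network"; name: string }
  | { type: "create_network_resource"; networkName: string; resourceName: string };

async function executeSketch(
  ops: Op[],
  client: {
    createNetwork(name: string): Promise<{ id: string }>;
    createNetworkResource(networkId: string, name: string): Promise<void>;
  },
  knownNetworkIds: Map<string, string>, // name -> id, from ActualState
) {
  const createdNetworkIds = new Map<string, string>();
  for (const op of ops) {
    if (op.type === "create_network") {
      const { id } = await client.createNetwork(op.name);
      createdNetworkIds.set(op.name, id);
    } else {
      // Look in ActualState first, then in the networks created this run.
      const id = knownNetworkIds.get(op.networkName) ??
        createdNetworkIds.get(op.networkName);
      if (!id) throw new Error(`unknown network: ${op.networkName}`);
      await client.createNetworkResource(id, op.resourceName);
    }
  }
  return createdNetworkIds;
}
```

This only works because `EXECUTION_ORDER` guarantees `create_network` runs before `create_network_resource`; `createdPostureCheckIds` serves the same purpose for policies.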
Run: `deno task check`

---

### Task 9: Update export to cover new resource types

**Files:**

- Modify: `src/export.ts`

Add `exportPostureChecks`, `exportNetworks`, `exportPeers`, and `exportUsers`
functions.

**Posture checks:** Keyed by name. Pass through the `checks` object as-is.
Include `description`.

**Networks:** Keyed by name. For each network, fetch resources and routers from
ActualState maps. Resources: resolve group IDs to names. Routers: resolve peer
ID to peer name (via `actual.peersById`), resolve peer_group IDs to group names.

**Peers:** Keyed by peer name. Include groups (resolved to names, excluding
"All"), `login_expiration_enabled`, `inactivity_expiration_enabled`,
`ssh_enabled`.

**Users:** Keyed by email. Include name, role, auto_groups (resolved to names).

**Policies:** Handle `destinationResource` — resolve resource ID to resource
name. Include `source_posture_checks` resolved to posture check names.

Update the `exportState` return to include all new sections.
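The recurring ID-to-name resolution can be sketched as follows; the lookup map stands in for the ActualState indexes, and dropping "All" matches the peer-export rule above:

```typescript
// Export helper sketch: turn a list of group IDs into group names, silently
// dropping IDs that no longer resolve and the implicit "All" group.
function resolveGroupNames(
  groupIds: string[],
  groupsById: Map<string, { name: string }>,
): string[] {
  return groupIds
    .map((id) => groupsById.get(id)?.name)
    .filter((name): name is string => name !== undefined && name !== "All");
}
```

The same shape applies to peer IDs, peer_group IDs, resource IDs, and posture check IDs, each against its own `*ById` map.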
Run: `deno task check`

---

### Task 10: Export the three environments to state/*.json

Run the export against all three production NetBird instances:

```bash
mkdir -p state
deno task export -- --netbird-api-url https://dev.netbird.achilles-rnd.cc/api --netbird-api-token <DEV_TOKEN> > state/dev.json
deno task export -- --netbird-api-url https://achilles-rnd.cc/api --netbird-api-token <PROD_TOKEN> > state/prod.json
deno task export -- --netbird-api-url https://ext.netbird.achilles-rnd.cc/api --netbird-api-token <EXT_TOKEN> > state/ext.json
```

Verify each file parses with the updated schema. Visually inspect for
completeness against the dashboards.

---

### Task 11: Update tests

**Files:**

- Modify: `src/reconcile/diff.test.ts` — tests for new diff functions
- Modify: `src/reconcile/executor.test.ts` — tests for new executor cases
- Modify: `src/export.test.ts` — tests for new export functions
- Modify: `src/state/schema.test.ts` — tests for new schema validation
- Modify: `src/state/actual.test.ts` — tests for expanded fetchActualState
- Modify: `src/integration.test.ts` — update mock data to include new resource
  types

All existing tests must continue to pass. New tests should cover:

- Posture check CRUD diff/execute
- Network with resources and routers diff/execute
- Peer update diff (group changes, setting changes)
- User CRUD diff/execute
- Policy with destination_resource (export and diff)
- Policy with source_posture_checks (export and diff)
- Export of all new resource types

Run: `deno task test` — all tests must pass.

---

### Task 12: Final verification

Run the full quality gate:

```bash
deno task check     # type check
deno fmt --check    # formatting
deno task test      # all tests
```

All must pass.

netbird.json (new file, 9 lines)
@@ -0,0 +1,9 @@
{
  "groups": {},
  "setup_keys": {},
  "policies": {},
  "routes": {},
  "dns": {
    "nameserver_groups": {}
  }
}

poc/README.md (new file, 177 lines)
@@ -0,0 +1,177 @@
# NetBird GitOps Reconciler — PoC Deployment

Ansible playbook that deploys a self-contained stack on VPS-A for end-to-end
testing of the NetBird GitOps reconciler.

## Stack overview

| Component    | Purpose                                     |
| ------------ | ------------------------------------------- |
| Caddy        | TLS termination, reverse proxy              |
| NetBird      | Management, Signal, Relay, Dashboard, TURN  |
| Reconciler   | Declarative config → NetBird API reconciler |
| Gitea        | Git server for GitOps source-of-truth       |
| Gitea Runner | Executes CI workflows (Actions)             |

All services run as Docker containers on a single VPS, connected via a `netbird`
Docker bridge network. Caddy handles ACME certificates automatically.
## Prerequisites

- SSH access to `46.225.220.61` (root, key-based)
- DNS A records:
  - `vps-a.networkmonitor.cc` → `46.225.220.61`
  - `gitea.vps-a.networkmonitor.cc` → `46.225.220.61`
- `rsync` installed locally (used to sync reconciler source)
- Ansible 2.15+ with `community.general` and `ansible.posix` collections

Install collections if needed:

```bash
ansible-galaxy collection install community.general ansible.posix
```

## Deployment (multi-phase)

The deployment is intentionally multi-phase because some tokens can only be
obtained after services are running.

### Phase 1: Initial deploy

```bash
cd poc/ansible
cp group_vars/all/vault.yml.example group_vars/all/vault.yml
ansible-playbook -i inventory.yml playbook.yml
```

The playbook will:

1. Generate secrets (encryption key, TURN password, relay secret, reconciler
   token)
2. Install Docker, configure UFW
3. Rsync the reconciler source code and build the Docker image
4. Template configs and start all services
5. Skip the Gitea Actions runner (no token yet)
6. Print a summary with generated secrets

**Save the generated secrets** into `vault.yml` so subsequent runs are
idempotent.
### Phase 2: Create NetBird admin + API token

1. Open `https://vps-a.networkmonitor.cc` in a browser
2. Create the first admin account (embedded IdP — no external OAuth)
3. Go to **Settings → Personal Access Tokens → Generate**
4. Copy the token into `vault.yml` as `vault_netbird_api_token`
### Phase 3: Set up Gitea

1. Open `https://gitea.vps-a.networkmonitor.cc` and complete the install wizard
2. Create an admin account (user: `blastpilot`) → save the password as
   `vault_gitea_admin_password`
3. Create org `BlastPilot` and repo `netbird-gitops`
4. Generate a Gitea API token (**Settings → Applications**) → `vault_gitea_token`
5. Go to **Site Administration → Actions → Runners** → copy the runner
   registration token → `vault_gitea_runner_token`
### Phase 4: Re-deploy with all tokens

```bash
ansible-playbook -i inventory.yml playbook.yml
```

This run will:

- Start the reconciler with a valid NetBird API token
- Register and start the Gitea Actions runner
- Wire the reconciler to poll Gitea for `netbird.json` changes
### Phase 5: Push code and test CI

```bash
cd /path/to/netbird-gitops
git remote add poc git@gitea.vps-a.networkmonitor.cc:BlastPilot/netbird-gitops.git
git push poc main
```

Then configure Gitea repo secrets (Settings → Actions → Secrets):

- `RECONCILER_TOKEN` — the reconciler bearer token
- `RECONCILER_URL` — `https://vps-a.networkmonitor.cc/reconciler`
- `GITEA_TOKEN` — the same Gitea API token

Create a branch, modify `netbird.json`, open a PR — the dry-run workflow should
post a plan as a PR comment.
## Testing

Replace `<TOKEN>` with `vault_reconciler_token`.

### Health check

```bash
curl https://vps-a.networkmonitor.cc/reconciler/health
```

### Dry-run reconcile

```bash
curl -X POST \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d @ansible/files/netbird-seed.json \
  'https://vps-a.networkmonitor.cc/reconciler/reconcile?dry_run=true'
```
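The same call can be made from code, e.g. in a CI step. This hypothetical helper only builds the request (URL, headers, body) so it can be paired with `fetch()`:

```typescript
// Build the reconcile request for a given reconciler base URL. dryRun=true
// appends the ?dry_run=true query flag shown in the curl example.
function buildReconcileRequest(
  baseUrl: string,
  token: string,
  desiredState: unknown,
  dryRun: boolean,
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  const url = `${baseUrl}/reconcile${dryRun ? "?dry_run=true" : ""}`;
  return {
    url,
    init: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(desiredState),
    },
  };
}
```

Usage would be `const { url, init } = buildReconcileRequest(base, token, state, true); await fetch(url, init);`.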
### Apply reconcile

```bash
curl -X POST \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d @ansible/files/netbird-seed.json \
  'https://vps-a.networkmonitor.cc/reconciler/reconcile'
```

### Export current state

```bash
curl -H "Authorization: Bearer <TOKEN>" \
  'https://vps-a.networkmonitor.cc/reconciler/export'
```
## Teardown

Remove all containers and volumes:

```bash
ssh root@46.225.220.61 "cd /opt/netbird-poc && docker compose down -v"
```

Stop the runner:

```bash
ssh root@46.225.220.61 "systemctl stop gitea-runner && systemctl disable gitea-runner"
```
## File structure

```
poc/
  ansible/
    inventory.yml               # VPS-A host definition
    playbook.yml                # Main deployment playbook
    .gitignore                  # Excludes vault.yml
    group_vars/
      all/
        vars.yml                # Non-secret config (versions, ports, etc.)
        vault.yml.example       # Secret template — copy to vault.yml
    templates/
      docker-compose.yml.j2     # All services (NetBird + Gitea + Reconciler)
      management.json.j2        # NetBird management config
      Caddyfile.j2              # Caddy reverse proxy (NetBird + Gitea)
      dashboard.env.j2          # NetBird dashboard env
      relay.env.j2              # NetBird relay env
      turnserver.conf.j2        # TURN server config
      reconciler.env.j2         # Reconciler env
    files/
      netbird-seed.json         # Example desired state for testing
  README.md                     # This file
```

poc/ansible/.gitignore (new file, vendored, 1 line)
@@ -0,0 +1 @@
group_vars/all/vault.yml

poc/ansible/files/netbird-seed.json (new file, 32 lines)
@@ -0,0 +1,32 @@
{
  "groups": {
    "ground-stations": { "peers": [] },
    "pilots": { "peers": [] }
  },
  "setup_keys": {
    "GS-TestHawk-1": {
      "type": "one-off",
      "expires_in": 604800,
      "usage_limit": 1,
      "auto_groups": ["ground-stations"],
      "enrolled": false
    },
    "Pilot-TestHawk-1": {
      "type": "one-off",
      "expires_in": 604800,
      "usage_limit": 1,
      "auto_groups": ["pilots"],
      "enrolled": false
    }
  },
  "policies": {
    "pilots-to-gs": {
      "enabled": true,
      "sources": ["pilots"],
      "destinations": ["ground-stations"],
      "bidirectional": true
    }
  },
  "routes": {},
  "dns": { "nameserver_groups": {} }
}

poc/ansible/group_vars/all/vars.yml (new file, 32 lines)
@@ -0,0 +1,32 @@
---
# Non-secret configuration for the NetBird + Reconciler PoC stack.

netbird_domain: vps-a.networkmonitor.cc

# --- Component versions (match existing PoC) ---
netbird_version: "0.63.0"
dashboard_version: "v2.27.1"
caddy_version: "2.10.2"
coturn_version: "4.8.0-r0"

# --- Reconciler ---
reconciler_port: 8080

# --- Gitea ---
gitea_enabled: "true"
gitea_version: "1.23"
gitea_domain: "gitea.vps-a.networkmonitor.cc"
gitea_http_port: 3000
gitea_ssh_port: 2222
gitea_admin_user: "blastpilot"
gitea_org_name: "BlastPilot"
gitea_repo_name: "netbird-gitops"

# --- Gitea Actions Runner ---
gitea_runner_version: "0.2.11"
gitea_runner_dir: "/opt/gitea-runner"
gitea_runner_name: "poc-runner"
gitea_runner_labels: "ubuntu-latest:docker://node:20-bookworm,ubuntu-22.04:docker://ubuntu:22.04"

# --- Paths ---
base_dir: /opt/netbird-poc

poc/ansible/group_vars/all/vault.yml.example (new file, 24 lines)
@@ -0,0 +1,24 @@
---
# Copy to vault.yml and fill in values.
# This file is gitignored — do NOT commit real secrets.

# Auto-generated by playbook if empty (leave as empty string):
vault_encryption_key: ""
vault_turn_password: ""
vault_relay_secret: ""

# Reconciler auth token (generate: openssl rand -hex 32):
vault_reconciler_token: ""

# Gitea admin password:
vault_gitea_admin_password: ""

# NetBird API token (created via dashboard after first deploy):
vault_netbird_api_token: ""

# Gitea API token (created via Gitea UI after first deploy):
vault_gitea_token: ""

# Gitea Actions runner registration token
# (get from Gitea: Site Administration → Actions → Runners):
vault_gitea_runner_token: ""

poc/ansible/inventory.yml (new file, 9 lines)
@@ -0,0 +1,9 @@
---
all:
  children:
    poc_servers:
      hosts:
        netbird-poc-a:
          ansible_host: 46.225.220.61
          ansible_ssh_private_key_file: ~/.ssh/hetzner
          ansible_user: root

poc/ansible/playbook.yml (new file, 451 lines)
@@ -0,0 +1,451 @@
---
# =============================================================================
# NetBird + Reconciler + Gitea — PoC Deployment
# =============================================================================
# Deploys a self-contained stack on VPS-A for end-to-end reconciler testing.
#
# Prerequisites:
# - SSH access to VPS-A (46.225.220.61)
# - DNS A record: vps-a.networkmonitor.cc -> 46.225.220.61
# - rsync installed locally and on VPS-A
# - poc/ansible/group_vars/all/vault.yml (copy from vault.yml.example)
#
# Run:
#   cd poc/ansible
#   ansible-playbook -i inventory.yml playbook.yml
# =============================================================================

- name: Deploy NetBird + Reconciler PoC on VPS-A
  hosts: poc_servers
  become: true

  tasks:
    # =========================================================================
    # 1. Generate secrets (if vault values are empty)
    # =========================================================================
    # vault_* vars come from group_vars/all/vault.yml. When left as empty
    # strings, the playbook auto-generates values. On subsequent runs with
    # filled-in vault.yml, the provided values are used instead.
    - name: Generate encryption key (if not provided)
      ansible.builtin.shell: openssl rand -base64 32
      register: _gen_encryption_key
      changed_when: false
      when: vault_encryption_key | default('') | length == 0

    - name: Generate TURN password (if not provided)
      ansible.builtin.shell: openssl rand -hex 32
      register: _gen_turn_password
      changed_when: false
      when: vault_turn_password | default('') | length == 0

    - name: Generate relay secret (if not provided)
      ansible.builtin.shell: openssl rand -hex 32
      register: _gen_relay_secret
      changed_when: false
      when: vault_relay_secret | default('') | length == 0

    - name: Generate reconciler token (if not provided)
      ansible.builtin.shell: openssl rand -hex 32
      register: _gen_reconciler_token
      changed_when: false
      when: vault_reconciler_token | default('') | length == 0

    - name: Set effective secrets
      ansible.builtin.set_fact:
        vault_encryption_key: "{{ vault_encryption_key if (vault_encryption_key | default('') | length > 0) else _gen_encryption_key.stdout }}"
        vault_turn_password: "{{ vault_turn_password if (vault_turn_password | default('') | length > 0) else _gen_turn_password.stdout }}"
        vault_relay_secret: "{{ vault_relay_secret if (vault_relay_secret | default('') | length > 0) else _gen_relay_secret.stdout }}"
        vault_reconciler_token: "{{ vault_reconciler_token if (vault_reconciler_token | default('') | length > 0) else _gen_reconciler_token.stdout }}"

    # =========================================================================
    # 2. Install Docker
    # =========================================================================

    - name: Update apt cache
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600

    - name: Install prerequisites
      ansible.builtin.apt:
        name:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg
          - lsb-release
          - jq
          - rsync
        state: present

    - name: Check if Docker is installed
      ansible.builtin.command: docker --version
      register: _docker_check
      changed_when: false
      failed_when: false

    - name: Create keyrings directory
      ansible.builtin.file:
        path: /etc/apt/keyrings
        state: directory
        mode: "0755"
      when: _docker_check.rc != 0

    - name: Add Docker GPG key
      ansible.builtin.shell: |
        curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
          | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
        chmod a+r /etc/apt/keyrings/docker.gpg
      args:
        creates: /etc/apt/keyrings/docker.gpg
      when: _docker_check.rc != 0

    - name: Add Docker repository
      ansible.builtin.apt_repository:
        repo: >-
          deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg]
          https://download.docker.com/linux/ubuntu
          {{ ansible_distribution_release }} stable
        state: present
        filename: docker
      when: _docker_check.rc != 0

    - name: Install Docker packages
      ansible.builtin.apt:
        name:
          - docker-ce
          - docker-ce-cli
          - containerd.io
          - docker-compose-plugin
        state: present
        update_cache: true
      when: _docker_check.rc != 0

    - name: Start and enable Docker
      ansible.builtin.systemd:
        name: docker
        state: started
        enabled: true

    # =========================================================================
    # 3. UFW firewall
    # =========================================================================

    - name: Install UFW
      ansible.builtin.apt:
        name: ufw
        state: present

    - name: Allow SSH
      community.general.ufw:
        rule: allow
        port: "22"
        proto: tcp

    - name: Allow HTTP (ACME + Caddy)
      community.general.ufw:
        rule: allow
        port: "80"
        proto: tcp

    - name: Allow HTTPS
      community.general.ufw:
        rule: allow
        port: "443"
        proto: tcp

    - name: Allow TURN UDP
      community.general.ufw:
        rule: allow
        port: "3478"
        proto: udp

    - name: Allow TURN TCP
      community.general.ufw:
        rule: allow
        port: "3478"
        proto: tcp

    - name: Allow Gitea SSH
      community.general.ufw:
        rule: allow
        port: "{{ gitea_ssh_port | string }}"
        proto: tcp

    - name: Enable UFW (default deny incoming)
      community.general.ufw:
        state: enabled
        policy: deny

    # =========================================================================
    # 4. Create directories
    # =========================================================================

    - name: Create base directory
      ansible.builtin.file:
        path: "{{ base_dir }}"
        state: directory
        mode: "0755"

    - name: Create config directory
      ansible.builtin.file:
        path: "{{ base_dir }}/config"
        state: directory
        mode: "0755"

    - name: Create reconciler source directory
      ansible.builtin.file:
        path: "{{ base_dir }}/reconciler-src"
        state: directory
        mode: "0755"

    # =========================================================================
    # 5. Sync reconciler source code
    # =========================================================================
    # Uses rsync to copy the project root (minus junk) to VPS-A so that
    # `docker compose build` can build the reconciler image on the server.

    - name: Sync reconciler source to VPS-A
      ansible.posix.synchronize:
        src: "{{ playbook_dir }}/../../"
        dest: "{{ base_dir }}/reconciler-src/"
        delete: true
        rsync_opts:
          - "--exclude=.git"
          - "--exclude=node_modules"
          - "--exclude=poc"
          - "--exclude=data"
          - "--exclude=deploy"
          - "--exclude=.beads"
      # synchronize runs as the connecting user, not become. We need to
      # set become: false so it uses the SSH user directly for rsync.
      become: false

# =========================================================================
|
||||||
|
# 6. Template configs
|
||||||
|
# =========================================================================
|
||||||
|
|
||||||
|
- name: Deploy docker-compose.yml
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: docker-compose.yml.j2
|
||||||
|
dest: "{{ base_dir }}/docker-compose.yml"
|
||||||
|
mode: "0644"
|
||||||
|
register: _compose_changed
|
||||||
|
|
||||||
|
- name: Deploy management.json
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: management.json.j2
|
||||||
|
dest: "{{ base_dir }}/config/management.json"
|
||||||
|
mode: "0644"
|
||||||
|
register: _management_changed
|
||||||
|
|
||||||
|
- name: Deploy Caddyfile
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: Caddyfile.j2
|
||||||
|
dest: "{{ base_dir }}/config/Caddyfile"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
- name: Deploy dashboard.env
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: dashboard.env.j2
|
||||||
|
dest: "{{ base_dir }}/config/dashboard.env"
|
||||||
|
mode: "0640"
|
||||||
|
|
||||||
|
- name: Deploy relay.env
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: relay.env.j2
|
||||||
|
dest: "{{ base_dir }}/config/relay.env"
|
||||||
|
mode: "0640"
|
||||||
|
|
||||||
|
- name: Deploy turnserver.conf
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: turnserver.conf.j2
|
||||||
|
dest: "{{ base_dir }}/config/turnserver.conf"
|
||||||
|
mode: "0644"
|
||||||
|
|
||||||
|
- name: Deploy reconciler.env
|
||||||
|
ansible.builtin.template:
|
||||||
|
src: reconciler.env.j2
|
||||||
|
dest: "{{ base_dir }}/config/reconciler.env"
|
||||||
|
mode: "0640"
|
||||||
|
register: _reconciler_env_changed
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# 7. Docker Compose — pull, build, up
|
||||||
|
# =========================================================================
|
||||||
|
|
||||||
|
- name: Pull Docker images
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose pull --ignore-buildable
|
||||||
|
chdir: "{{ base_dir }}"
|
||||||
|
changed_when: true
|
||||||
|
|
||||||
|
- name: Build reconciler image
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose build reconciler
|
||||||
|
chdir: "{{ base_dir }}"
|
||||||
|
changed_when: true
|
||||||
|
|
||||||
|
- name: Start all services
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose up -d
|
||||||
|
chdir: "{{ base_dir }}"
|
||||||
|
changed_when: true
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# 8. Health checks
|
||||||
|
# =========================================================================
|
||||||
|
|
||||||
|
- name: Wait for management container to be running
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: docker compose ps management --format json
|
||||||
|
chdir: "{{ base_dir }}"
|
||||||
|
register: _mgmt_status
|
||||||
|
until: "'running' in _mgmt_status.stdout and 'restarting' not in _mgmt_status.stdout"
|
||||||
|
retries: 15
|
||||||
|
delay: 5
|
||||||
|
changed_when: false
|
||||||
|
|
||||||
|
- name: Wait for Caddy / HTTPS to respond
|
||||||
|
ansible.builtin.uri:
|
||||||
|
url: "https://{{ netbird_domain }}"
|
||||||
|
method: GET
|
||||||
|
status_code: 200
|
||||||
|
validate_certs: false
|
||||||
|
register: _caddy_check
|
||||||
|
until: _caddy_check.status == 200
|
||||||
|
retries: 12
|
||||||
|
delay: 5
|
||||||
|
|
||||||
|
- name: Check reconciler health (may fail if API token not yet configured)
|
||||||
|
ansible.builtin.uri:
|
||||||
|
url: "http://127.0.0.1:{{ reconciler_port }}/health"
|
||||||
|
method: GET
|
||||||
|
status_code: 200
|
||||||
|
register: _reconciler_check
|
||||||
|
failed_when: false
|
||||||
|
changed_when: false
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# 9. Gitea Actions Runner
|
||||||
|
# =========================================================================
|
||||||
|
# The runner needs Gitea to be up and a registration token.
|
||||||
|
# On first deploy, skip this (vault_gitea_runner_token is empty).
|
||||||
|
# After Gitea is running, get the token from Site Administration →
|
||||||
|
# Actions → Runners, add it to vault.yml, and re-run.
|
||||||
|
|
||||||
|
- name: Create runner directory
|
||||||
|
ansible.builtin.file:
|
||||||
|
path: "{{ gitea_runner_dir }}"
|
||||||
|
state: directory
|
||||||
|
mode: "0755"
|
||||||
|
when: vault_gitea_runner_token | default('') | length > 0
|
||||||
|
|
||||||
|
- name: Download act_runner binary
|
||||||
|
ansible.builtin.get_url:
|
||||||
|
url: "https://gitea.com/gitea/act_runner/releases/download/v{{ gitea_runner_version }}/act_runner-{{ gitea_runner_version }}-linux-amd64"
|
||||||
|
dest: "{{ gitea_runner_dir }}/act_runner"
|
||||||
|
mode: "0755"
|
||||||
|
when: vault_gitea_runner_token | default('') | length > 0
|
||||||
|
|
||||||
|
- name: Check if runner is already registered
|
||||||
|
ansible.builtin.stat:
|
||||||
|
path: "{{ gitea_runner_dir }}/.runner"
|
||||||
|
register: _runner_config
|
||||||
|
when: vault_gitea_runner_token | default('') | length > 0
|
||||||
|
|
||||||
|
- name: Register runner with Gitea
|
||||||
|
ansible.builtin.command:
|
||||||
|
cmd: >-
|
||||||
|
{{ gitea_runner_dir }}/act_runner register
|
||||||
|
--instance https://{{ gitea_domain }}
|
||||||
|
--token {{ vault_gitea_runner_token }}
|
||||||
|
--name {{ gitea_runner_name }}
|
||||||
|
--labels {{ gitea_runner_labels }}
|
||||||
|
--no-interactive
|
||||||
|
chdir: "{{ gitea_runner_dir }}"
|
||||||
|
when:
|
||||||
|
- vault_gitea_runner_token | default('') | length > 0
|
||||||
|
- not (_runner_config.stat.exists | default(false))
|
||||||
|
|
||||||
|
- name: Create systemd service for runner
|
||||||
|
ansible.builtin.copy:
|
||||||
|
dest: /etc/systemd/system/gitea-runner.service
|
||||||
|
mode: "0644"
|
||||||
|
content: |
|
||||||
|
[Unit]
|
||||||
|
Description=Gitea Actions Runner
|
||||||
|
After=network.target docker.service
|
||||||
|
Requires=docker.service
|
||||||
|
|
||||||
|
[Service]
|
||||||
|
Type=simple
|
||||||
|
User=root
|
||||||
|
WorkingDirectory={{ gitea_runner_dir }}
|
||||||
|
ExecStart={{ gitea_runner_dir }}/act_runner daemon
|
||||||
|
Restart=always
|
||||||
|
RestartSec=10
|
||||||
|
|
||||||
|
[Install]
|
||||||
|
WantedBy=multi-user.target
|
||||||
|
when: vault_gitea_runner_token | default('') | length > 0
|
||||||
|
|
||||||
|
- name: Start and enable runner service
|
||||||
|
ansible.builtin.systemd:
|
||||||
|
name: gitea-runner
|
||||||
|
daemon_reload: true
|
||||||
|
state: started
|
||||||
|
enabled: true
|
||||||
|
when: vault_gitea_runner_token | default('') | length > 0
|
||||||
|
|
||||||
|
- name: Skip runner (no token provided)
|
||||||
|
ansible.builtin.debug:
|
||||||
|
msg: >-
|
||||||
|
Skipping Gitea Actions runner — vault_gitea_runner_token is empty.
|
||||||
|
After Gitea is running, get the token from
|
||||||
|
https://{{ gitea_domain }}/-/admin/actions/runners
|
||||||
|
and add it to vault.yml.
|
||||||
|
when: vault_gitea_runner_token | default('') | length == 0
|
||||||
|
|
||||||
|
# =========================================================================
|
||||||
|
# 10. Summary
|
||||||
|
# =========================================================================
|
||||||
|
|
||||||
|
- name: Note about NetBird API token
|
||||||
|
ansible.builtin.debug:
|
||||||
|
msg: >-
|
||||||
|
The reconciler needs a NetBird API token to function.
|
||||||
|
If vault_netbird_api_token is empty, the reconciler will crash-loop
|
||||||
|
until you create an admin account via the dashboard, generate an
|
||||||
|
API token, add it to vault.yml, and re-run the playbook.
|
||||||
|
when: vault_netbird_api_token | default('') | length == 0
|
||||||
|
|
||||||
|
- name: Deployment summary
|
||||||
|
ansible.builtin.debug:
|
||||||
|
msg: |
|
||||||
|
============================================================
|
||||||
|
NetBird + Reconciler + Gitea PoC deployed on VPS-A
|
||||||
|
============================================================
|
||||||
|
|
||||||
|
Dashboard: https://{{ netbird_domain }}
|
||||||
|
Gitea: https://{{ gitea_domain }}
|
||||||
|
Gitea SSH: ssh://git@{{ gitea_domain }}:{{ gitea_ssh_port }}
|
||||||
|
Reconciler: https://{{ netbird_domain }}/reconciler/health
|
||||||
|
|
||||||
|
Reconciler status: {{ 'healthy' if (_reconciler_check.status | default(0)) == 200 else 'NOT YET READY (see note above)' }}
|
||||||
|
|
||||||
|
Generated secrets (save these to vault.yml for idempotent re-runs):
|
||||||
|
vault_encryption_key: {{ vault_encryption_key }}
|
||||||
|
vault_turn_password: {{ vault_turn_password }}
|
||||||
|
vault_relay_secret: {{ vault_relay_secret }}
|
||||||
|
vault_reconciler_token: {{ vault_reconciler_token }}
|
||||||
|
|
||||||
|
Next steps:
|
||||||
|
1. Open the dashboard and create an admin account
|
||||||
|
2. Go to Settings > API > generate a Personal Access Token
|
||||||
|
3. Put the token in vault.yml as vault_netbird_api_token
|
||||||
|
4. Open Gitea, complete install wizard, create org + repo
|
||||||
|
5. Go to Site Administration > Actions > Runners, copy token
|
||||||
|
6. Put the token in vault.yml as vault_gitea_runner_token
|
||||||
|
7. Re-run: ansible-playbook -i inventory.yml playbook.yml
|
||||||
|
============================================================
|
||||||
54
poc/ansible/templates/Caddyfile.j2
Normal file
@@ -0,0 +1,54 @@
{
    servers :80,:443 {
        protocols h1 h2c h2 h3
    }
    email vlad.stus@gmail.com
}

(security_headers) {
    header * {
        Strict-Transport-Security "max-age=3600; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "SAMEORIGIN"
        X-XSS-Protection "1; mode=block"
        -Server
        Referrer-Policy strict-origin-when-cross-origin
    }
}

{{ netbird_domain }} {
    import security_headers

    # Embedded IdP OAuth2 endpoints
    reverse_proxy /oauth2/* management:80
    reverse_proxy /.well-known/openid-configuration management:80
    reverse_proxy /.well-known/jwks.json management:80

    # NetBird Relay
    reverse_proxy /relay* relay:80

    # NetBird Signal (gRPC)
    reverse_proxy /signalexchange.SignalExchange/* h2c://signal:10000

    # NetBird Management API (gRPC)
    reverse_proxy /management.ManagementService/* h2c://management:80

    # NetBird Management REST API
    reverse_proxy /api/* management:80

    # Reconciler API (strip /reconciler prefix before proxying)
    handle_path /reconciler/* {
        reverse_proxy reconciler:{{ reconciler_port }}
    }

    # NetBird Dashboard (catch-all — must be last)
    reverse_proxy /* dashboard:80
}

# =============================================================================
# Gitea
# =============================================================================
{{ gitea_domain }} {
    import security_headers
    reverse_proxy gitea:{{ gitea_http_port }}
}
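The `handle_path /reconciler/*` block above strips the matched prefix before proxying, so a request to `/reconciler/health` reaches the reconciler as `/health` while the later catch-all still serves the dashboard. A minimal TypeScript sketch of that rewrite rule (an illustrative model, not Caddy's implementation; `stripPrefix` is a hypothetical helper):

```typescript
// Illustrative model of Caddy's handle_path prefix stripping:
// the matched prefix is removed, and the remainder is the path
// the upstream service (here, the reconciler) actually sees.
function stripPrefix(path: string, prefix: string): string | null {
  if (!path.startsWith(prefix)) return null; // no match; later routes apply
  const rest = path.slice(prefix.length);
  // Caddy normalizes the stripped path to start with "/"
  return rest.startsWith("/") ? rest : "/" + rest;
}
```

This is why the reconciler can listen on plain `/health` and `/export` routes without knowing it is mounted under `/reconciler`.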
12
poc/ansible/templates/dashboard.env.j2
Normal file
@@ -0,0 +1,12 @@
NETBIRD_MGMT_API_ENDPOINT=https://{{ netbird_domain }}
NETBIRD_MGMT_GRPC_API_ENDPOINT=https://{{ netbird_domain }}
AUTH_AUDIENCE=netbird-dashboard
AUTH_CLIENT_ID=netbird-dashboard
AUTH_CLIENT_SECRET=
AUTH_AUTHORITY=https://{{ netbird_domain }}/oauth2
USE_AUTH0=false
AUTH_SUPPORTED_SCOPES=openid profile email groups
AUTH_REDIRECT_URI=/nb-auth
AUTH_SILENT_REDIRECT_URI=/nb-silent-auth
NGINX_SSL_PORT=443
LETSENCRYPT_DOMAIN=none
146
poc/ansible/templates/docker-compose.yml.j2
Normal file
@@ -0,0 +1,146 @@
services:
  # ---------------------------------------------------------------------------
  # NetBird core services (proven config from existing PoC)
  # ---------------------------------------------------------------------------
  caddy:
    image: caddy:{{ caddy_version }}
    restart: unless-stopped
    networks:
      - netbird
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./config/Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data
      - caddy_config:/config
    logging:
      driver: json-file
      options:
        max-size: "500m"
        max-file: "2"

  dashboard:
    image: netbirdio/dashboard:{{ dashboard_version }}
    restart: unless-stopped
    networks:
      - netbird
    env_file:
      - ./config/dashboard.env
    logging:
      driver: json-file
      options:
        max-size: "500m"
        max-file: "2"

  signal:
    image: netbirdio/signal:{{ netbird_version }}
    restart: unless-stopped
    networks:
      - netbird
    logging:
      driver: json-file
      options:
        max-size: "500m"
        max-file: "2"

  relay:
    image: netbirdio/relay:{{ netbird_version }}
    restart: unless-stopped
    networks:
      - netbird
    env_file:
      - ./config/relay.env
    logging:
      driver: json-file
      options:
        max-size: "500m"
        max-file: "2"

  management:
    image: netbirdio/management:{{ netbird_version }}
    restart: unless-stopped
    networks:
      - netbird
    volumes:
      - netbird_management:/var/lib/netbird
      - ./config/management.json:/etc/netbird/management.json
    logging:
      driver: json-file
      options:
        max-size: "500m"
        max-file: "2"

  coturn:
    image: coturn/coturn:{{ coturn_version }}
    restart: unless-stopped
    networks:
      - netbird
    ports:
      - "3478:3478/udp"
      - "3478:3478/tcp"
    volumes:
      - ./config/turnserver.conf:/etc/coturn/turnserver.conf:ro
    logging:
      driver: json-file
      options:
        max-size: "500m"
        max-file: "2"

  # ---------------------------------------------------------------------------
  # Gitea (local Git server for GitOps source-of-truth)
  # ---------------------------------------------------------------------------
  gitea:
    image: gitea/gitea:{{ gitea_version }}
    restart: unless-stopped
    networks:
      - netbird
    environment:
      - GITEA__server__DOMAIN={{ gitea_domain }}
      - GITEA__server__ROOT_URL=https://{{ gitea_domain }}
      - GITEA__server__SSH_DOMAIN={{ gitea_domain }}
      - GITEA__database__DB_TYPE=sqlite3
      - GITEA__actions__ENABLED=true
    volumes:
      - gitea_data:/data
    ports:
      - "{{ gitea_ssh_port }}:22"
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "2"

  # ---------------------------------------------------------------------------
  # NetBird GitOps Reconciler
  # ---------------------------------------------------------------------------
  reconciler:
    build:
      context: {{ base_dir }}/reconciler-src
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - netbird
    env_file:
      - ./config/reconciler.env
    volumes:
      - reconciler_data:/data
    ports:
      - "127.0.0.1:{{ reconciler_port }}:{{ reconciler_port }}"
    depends_on:
      - management
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "2"

networks:
  netbird:

volumes:
  netbird_management:
  caddy_data:
  caddy_config:
  gitea_data:
  reconciler_data:
49
poc/ansible/templates/management.json.j2
Normal file
@@ -0,0 +1,49 @@
{
  "Stuns": [
    {
      "Proto": "udp",
      "URI": "stun:{{ netbird_domain }}:3478"
    }
  ],
  "TURNConfig": {
    "Turns": [
      {
        "Proto": "udp",
        "URI": "turn:{{ netbird_domain }}:3478",
        "Username": "netbird",
        "Password": "{{ vault_turn_password }}"
      }
    ],
    "TimeBasedCredentials": false
  },
  "Relay": {
    "Addresses": [
      "rels://{{ netbird_domain }}:443/relay"
    ],
    "CredentialsTTL": "168h",
    "Secret": "{{ vault_relay_secret }}"
  },
  "Signal": {
    "Proto": "https",
    "URI": "{{ netbird_domain }}:443"
  },
  "Datadir": "/var/lib/netbird",
  "DataStoreEncryptionKey": "{{ vault_encryption_key }}",
  "StoreConfig": {
    "Engine": "sqlite"
  },
  "HttpConfig": {
    "Address": "0.0.0.0:80"
  },
  "IdpManagerConfig": {
    "ManagerType": "none"
  },
  "EmbeddedIdP": {
    "Enabled": true,
    "Issuer": "https://{{ netbird_domain }}/oauth2",
    "DashboardRedirectURIs": [
      "https://{{ netbird_domain }}/nb-auth",
      "https://{{ netbird_domain }}/nb-silent-auth"
    ]
  }
}
10
poc/ansible/templates/reconciler.env.j2
Normal file
@@ -0,0 +1,10 @@
NETBIRD_API_URL=http://management:80/api
NETBIRD_API_TOKEN={{ vault_netbird_api_token }}
RECONCILER_TOKEN={{ vault_reconciler_token }}
GITEA_ENABLED={{ gitea_enabled }}
GITEA_URL=http://gitea:{{ gitea_http_port }}
GITEA_TOKEN={{ vault_gitea_token }}
GITEA_REPO={{ gitea_org_name }}/{{ gitea_repo_name }}
POLL_INTERVAL_SECONDS=30
PORT={{ reconciler_port }}
DATA_DIR=/data
2
poc/ansible/templates/relay.env.j2
Normal file
@@ -0,0 +1,2 @@
NB_RELAY_ADDRESS=rels://{{ netbird_domain }}:443/relay
NB_RELAY_SECRET={{ vault_relay_secret }}
10
poc/ansible/templates/turnserver.conf.j2
Normal file
@@ -0,0 +1,10 @@
listening-port=3478
relay-ip={{ ansible_default_ipv4.address }}
min-port=10000
max-port=20000
fingerprint
use-auth-secret
static-auth-secret={{ vault_turn_password }}
realm={{ netbird_domain }}
cert=/data/cert.pem
pkey=/data/key.pem
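Note that `use-auth-secret` makes coturn expect the standard TURN REST credential scheme: the client's username is an expiry timestamp and its password is `base64(HMAC-SHA1(static-auth-secret, username))`, even though `management.json` above sets `TimeBasedCredentials: false` and a static password. For reference, a sketch of how a time-based credential would be derived from the shared secret (illustrative; `turnRestCredential` is a hypothetical helper, not part of this repo):

```typescript
// Sketch of the TURN REST credential scheme that coturn's
// use-auth-secret implements: username = expiry unix timestamp,
// password = base64(HMAC-SHA1(static-auth-secret, username)).
import { createHmac } from "node:crypto";

function turnRestCredential(secret: string, ttlSeconds: number) {
  const username = String(Math.floor(Date.now() / 1000) + ttlSeconds);
  const password = createHmac("sha1", secret)
    .update(username)
    .digest("base64");
  return { username, password };
}
```

If the static `Username`/`Password` pair in `management.json` is kept, coturn would need `lt-cred-mech` instead; with `use-auth-secret`, credentials like the above are what it validates.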
71
src/config.ts
Normal file
@@ -0,0 +1,71 @@
import { z } from "zod";

/**
 * Base fields required regardless of Gitea integration mode.
 */
const BaseSchema = z.object({
  netbirdApiUrl: z.string().url(),
  netbirdApiToken: z.string().min(1),
  giteaEnabled: z.string().default("true").transform((v) => v === "true"),
  reconcilerToken: z.string().min(1),
  pollIntervalSeconds: z.coerce.number().int().positive().default(30),
  port: z.coerce.number().int().positive().default(8080),
  dataDir: z.string().default("/data"),
});

const GiteaFieldsSchema = z.object({
  giteaUrl: z.string().url(),
  giteaToken: z.string().min(1),
  giteaRepo: z.string().regex(/^[^/]+\/[^/]+$/), // owner/repo
});

/**
 * When giteaEnabled=true, Gitea fields are validated and required.
 * When giteaEnabled=false, Gitea fields are ignored.
 *
 * The output type always includes the optional Gitea fields so
 * downstream code can check `config.giteaEnabled` and narrow.
 */
const ConfigSchema = BaseSchema.merge(GiteaFieldsSchema.partial())
  .superRefine((data, ctx) => {
    if (!data.giteaEnabled) return;

    if (!data.giteaUrl) {
      ctx.addIssue({
        code: z.ZodIssueCode.custom,
        message: "GITEA_URL is required when GITEA_ENABLED=true",
        path: ["giteaUrl"],
      });
    }
    if (!data.giteaToken) {
      ctx.addIssue({
        code: z.ZodIssueCode.custom,
        message: "GITEA_TOKEN is required when GITEA_ENABLED=true",
        path: ["giteaToken"],
      });
    }
    if (!data.giteaRepo) {
      ctx.addIssue({
        code: z.ZodIssueCode.custom,
        message: "GITEA_REPO is required when GITEA_ENABLED=true",
        path: ["giteaRepo"],
      });
    }
  });

export type Config = z.infer<typeof ConfigSchema>;

export function loadConfig(): Config {
  return ConfigSchema.parse({
    netbirdApiUrl: Deno.env.get("NETBIRD_API_URL"),
    netbirdApiToken: Deno.env.get("NETBIRD_API_TOKEN"),
    giteaEnabled: Deno.env.get("GITEA_ENABLED"),
    giteaUrl: Deno.env.get("GITEA_URL"),
    giteaToken: Deno.env.get("GITEA_TOKEN"),
    giteaRepo: Deno.env.get("GITEA_REPO"),
    reconcilerToken: Deno.env.get("RECONCILER_TOKEN"),
    pollIntervalSeconds: Deno.env.get("POLL_INTERVAL_SECONDS"),
    port: Deno.env.get("PORT"),
    dataDir: Deno.env.get("DATA_DIR"),
  });
}
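The conditional-requirement contract that `src/config.ts` builds with zod's `superRefine` can be summarized without the library. This stand-in (plain TypeScript, hypothetical `validateGiteaConfig` name) shows the rule downstream code relies on: Gitea fields are mandatory only when `GITEA_ENABLED=true`, and `GITEA_REPO` must be `owner/repo`:

```typescript
// Simplified, dependency-free sketch of the zod superRefine logic in
// src/config.ts. Names here are illustrative, not part of the repo.
interface GiteaConfig {
  giteaEnabled: boolean;
  giteaUrl?: string;
  giteaToken?: string;
  giteaRepo?: string;
}

function validateGiteaConfig(env: Record<string, string | undefined>): GiteaConfig {
  const giteaEnabled = (env.GITEA_ENABLED ?? "true") === "true";
  const cfg: GiteaConfig = {
    giteaEnabled,
    giteaUrl: env.GITEA_URL,
    giteaToken: env.GITEA_TOKEN,
    giteaRepo: env.GITEA_REPO,
  };
  if (giteaEnabled) {
    // Each Gitea field becomes required only in this branch.
    for (const key of ["giteaUrl", "giteaToken", "giteaRepo"] as const) {
      if (!cfg[key]) throw new Error(`${key} is required when GITEA_ENABLED=true`);
    }
    if (!/^[^/]+\/[^/]+$/.test(cfg.giteaRepo!)) {
      throw new Error("giteaRepo must be in owner/repo form");
    }
  }
  return cfg;
}
```

With `GITEA_ENABLED=false` the Gitea fields pass through unvalidated, which is why callers must check `config.giteaEnabled` before using them.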
548
src/export.test.ts
Normal file
@@ -0,0 +1,548 @@
import { assertEquals } from "@std/assert";
import { exportState } from "./export.ts";
import type { ActualState } from "./state/actual.ts";
import type {
  NbDnsNameserverGroup,
  NbGroup,
  NbNetwork,
  NbNetworkResource,
  NbNetworkRouter,
  NbPeer,
  NbPolicy,
  NbPostureCheck,
  NbRoute,
  NbSetupKey,
  NbUser,
} from "./netbird/types.ts";

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

/** Builds a minimal ActualState with indexed maps from raw arrays. */
function buildActualState(data: {
  groups?: NbGroup[];
  setupKeys?: NbSetupKey[];
  peers?: NbPeer[];
  policies?: NbPolicy[];
  routes?: NbRoute[];
  dns?: NbDnsNameserverGroup[];
  postureChecks?: NbPostureCheck[];
  networks?: NbNetwork[];
  networkResources?: Map<string, NbNetworkResource[]>;
  networkRouters?: Map<string, NbNetworkRouter[]>;
  users?: NbUser[];
}): ActualState {
  const groups = data.groups ?? [];
  const setupKeys = data.setupKeys ?? [];
  const peers = data.peers ?? [];
  const policies = data.policies ?? [];
  const routes = data.routes ?? [];
  const dns = data.dns ?? [];
  const postureChecks = data.postureChecks ?? [];
  const networks = data.networks ?? [];
  const users = data.users ?? [];

  return {
    groups,
    groupsByName: new Map(groups.map((g) => [g.name, g])),
    groupsById: new Map(groups.map((g) => [g.id, g])),
    setupKeys,
    setupKeysByName: new Map(setupKeys.map((k) => [k.name, k])),
    peers,
    peersByName: new Map(peers.map((p) => [p.name, p])),
    peersById: new Map(peers.map((p) => [p.id, p])),
    policies,
    policiesByName: new Map(policies.map((p) => [p.name, p])),
    routes,
    routesByNetworkId: new Map(routes.map((r) => [r.network_id, r])),
    dns,
    dnsByName: new Map(dns.map((d) => [d.name, d])),
    postureChecks,
    postureChecksByName: new Map(postureChecks.map((pc) => [pc.name, pc])),
    networks,
    networksByName: new Map(networks.map((n) => [n.name, n])),
    networkResources: data.networkResources ?? new Map(),
    networkRouters: data.networkRouters ?? new Map(),
    users,
    usersByEmail: new Map(users.map((u) => [u.email, u])),
  };
}

function makeGroup(
  overrides: Partial<NbGroup> & Pick<NbGroup, "id" | "name">,
): NbGroup {
  return {
    peers_count: 0,
    peers: [],
    issued: "api",
    ...overrides,
  };
}

function makeSetupKey(
  overrides: Partial<NbSetupKey> & Pick<NbSetupKey, "name">,
): NbSetupKey {
  return {
    id: 1,
    type: "one-off",
    key: "NBSK-masked",
    expires: "2027-01-01T00:00:00Z",
    valid: true,
    revoked: false,
    used_times: 0,
    state: "valid",
    auto_groups: [],
    usage_limit: 1,
    ...overrides,
  };
}

// ---------------------------------------------------------------------------
// Tests: Normal state with groups, keys, policy
// ---------------------------------------------------------------------------

Deno.test("exportState: normal state with groups, keys, and policy", () => {
  const actual = buildActualState({
    groups: [
      makeGroup({
        id: "g-pilots",
        name: "pilots",
        peers: [{ id: "p1", name: "Pilot-hawk-72" }],
      }),
      makeGroup({ id: "g-vehicles", name: "vehicles" }),
    ],
    setupKeys: [
      makeSetupKey({
        name: "Pilot-hawk-72",
        auto_groups: ["g-pilots"],
        used_times: 1,
        usage_limit: 1,
      }),
    ],
    policies: [
      {
        id: "pol1",
        name: "allow-pilot-vehicle",
        description: "pilot to vehicle",
        enabled: true,
        source_posture_checks: [],
        rules: [
          {
            name: "rule1",
            description: "",
            enabled: true,
            action: "accept",
            bidirectional: true,
            protocol: "all",
            sources: ["g-pilots"],
            destinations: ["g-vehicles"],
          },
        ],
      },
    ],
  });

  const exported = exportState(actual);

  // Groups exported with correct peer mapping
  assertEquals(Object.keys(exported.groups), ["pilots", "vehicles"]);
  assertEquals(exported.groups["pilots"].peers, ["Pilot-hawk-72"]);
  assertEquals(exported.groups["vehicles"].peers, []);

  // Setup key with auto_groups resolved to names
  assertEquals(Object.keys(exported.setup_keys), ["Pilot-hawk-72"]);
  assertEquals(exported.setup_keys["Pilot-hawk-72"].auto_groups, ["pilots"]);
  assertEquals(exported.setup_keys["Pilot-hawk-72"].enrolled, true);
  assertEquals(exported.setup_keys["Pilot-hawk-72"].type, "one-off");
  assertEquals(exported.setup_keys["Pilot-hawk-72"].expires_in, 604800);

  // Policy with source/destination resolved
  assertEquals(Object.keys(exported.policies), ["allow-pilot-vehicle"]);
  assertEquals(exported.policies["allow-pilot-vehicle"].sources, ["pilots"]);
  assertEquals(exported.policies["allow-pilot-vehicle"].destinations, [
    "vehicles",
  ]);
  assertEquals(exported.policies["allow-pilot-vehicle"].bidirectional, true);
  assertEquals(exported.policies["allow-pilot-vehicle"].protocol, "all");
  assertEquals(exported.policies["allow-pilot-vehicle"].action, "accept");
});

// ---------------------------------------------------------------------------
// Tests: Empty state (only "All" group)
// ---------------------------------------------------------------------------

Deno.test("exportState: empty state with only All group produces empty export", () => {
  const actual = buildActualState({
    groups: [
      makeGroup({ id: "g-all", name: "All", issued: "jwt" }),
    ],
  });

  const exported = exportState(actual);

  assertEquals(Object.keys(exported.groups).length, 0);
  assertEquals(Object.keys(exported.setup_keys).length, 0);
  assertEquals(Object.keys(exported.policies).length, 0);
  assertEquals(Object.keys(exported.routes).length, 0);
  assertEquals(Object.keys(exported.dns.nameserver_groups).length, 0);
});

// ---------------------------------------------------------------------------
// Tests: auto_groups ID-to-name mapping
// ---------------------------------------------------------------------------

Deno.test("exportState: auto_groups IDs are resolved to group names", () => {
  const actual = buildActualState({
    groups: [
      makeGroup({ id: "g-alpha", name: "alpha" }),
      makeGroup({ id: "g-beta", name: "beta" }),
    ],
    setupKeys: [
      makeSetupKey({
        name: "key-1",
        auto_groups: ["g-alpha", "g-beta"],
      }),
    ],
  });

  const exported = exportState(actual);

  assertEquals(exported.setup_keys["key-1"].auto_groups, ["alpha", "beta"]);
});

Deno.test("exportState: auto_groups with unresolvable IDs are dropped", () => {
  const actual = buildActualState({
    groups: [
      makeGroup({ id: "g-alpha", name: "alpha" }),
    ],
    setupKeys: [
      makeSetupKey({
        name: "key-1",
        auto_groups: ["g-alpha", "g-nonexistent"],
      }),
    ],
  });

  const exported = exportState(actual);

  assertEquals(exported.setup_keys["key-1"].auto_groups, ["alpha"]);
});

// ---------------------------------------------------------------------------
// Tests: Enrolled detection
// ---------------------------------------------------------------------------

Deno.test("exportState: enrolled detection — used key is enrolled", () => {
  const actual = buildActualState({
    setupKeys: [
      makeSetupKey({ name: "used-key", used_times: 1, usage_limit: 1 }),
    ],
  });

  assertEquals(exportState(actual).setup_keys["used-key"].enrolled, true);
});

Deno.test("exportState: enrolled detection — unused key is not enrolled", () => {
  const actual = buildActualState({
    setupKeys: [
      makeSetupKey({ name: "fresh-key", used_times: 0, usage_limit: 1 }),
    ],
  });

  assertEquals(exportState(actual).setup_keys["fresh-key"].enrolled, false);
});

Deno.test("exportState: enrolled detection — unlimited reusable is never enrolled", () => {
  const actual = buildActualState({
    setupKeys: [
      makeSetupKey({
        name: "reusable-key",
        type: "reusable",
        used_times: 50,
        usage_limit: 0,
      }),
    ],
  });

  assertEquals(
|
||||||
|
exportState(actual).setup_keys["reusable-key"].enrolled,
|
||||||
|
false,
|
||||||
|
);
|
||||||
|
});
|
||||||
|
|
||||||
|
Deno.test("exportState: enrolled detection — partially used is not enrolled", () => {
|
||||||
|
const actual = buildActualState({
|
||||||
|
setupKeys: [
|
||||||
|
makeSetupKey({
|
||||||
|
name: "partial-key",
|
||||||
|
type: "reusable",
|
||||||
|
used_times: 2,
|
||||||
|
usage_limit: 5,
|
||||||
|
}),
|
||||||
|
],
|
||||||
|
});
|
||||||
|
|
||||||
|
assertEquals(
|
||||||
|
exportState(actual).setup_keys["partial-key"].enrolled,
|
||||||
|
false,
|
||||||
|
);
|
||||||
|
});
|
||||||
|
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
// Tests: System groups excluded
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
Deno.test("exportState: system groups are excluded", () => {
|
||||||
|
const actual = buildActualState({
|
||||||
|
groups: [
|
||||||
|
makeGroup({ id: "g-all", name: "All", issued: "jwt" }),
|
||||||
|
makeGroup({ id: "g-jwt", name: "jwt-group", issued: "jwt" }),
|
||||||
|
makeGroup({
|
||||||
|
id: "g-int",
|
||||||
|
name: "integration-group",
|
||||||
|
issued: "integration",
|
||||||
|
}),
|
||||||
|
makeGroup({ id: "g-api", name: "user-group", issued: "api" }),
|
||||||
|
],
|
||||||
|
});
|
||||||
|
|
||||||
|
const exported = exportState(actual);
|
||||||
|
const groupNames = Object.keys(exported.groups);
|
||||||
|
|
||||||
|
assertEquals(groupNames, ["user-group"]);
|
||||||
|
});
|
||||||
|
|
||||||
|
Deno.test("exportState: All group with api issued is still excluded", () => {
|
||||||
|
const actual = buildActualState({
|
||||||
|
groups: [
|
||||||
|
makeGroup({ id: "g-all", name: "All", issued: "api" }),
|
||||||
|
makeGroup({ id: "g-user", name: "my-group", issued: "api" }),
|
||||||
|
],
|
||||||
|
});
|
||||||
|
|
||||||
|
const exported = exportState(actual);
|
||||||
|
|
||||||
|
assertEquals(Object.keys(exported.groups), ["my-group"]);
|
||||||
|
});
|
||||||
|
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
// Tests: Group peers filter by setup key name
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
Deno.test("exportState: group peers only include names matching setup keys", () => {
|
||||||
|
const actual = buildActualState({
|
||||||
|
groups: [
|
||||||
|
makeGroup({
|
||||||
|
id: "g1",
|
||||||
|
name: "ops",
|
||||||
|
peers: [
|
||||||
|
{ id: "p1", name: "Pilot-hawk-72" },
|
||||||
|
{ id: "p2", name: "random-peer-no-key" },
|
||||||
|
],
|
||||||
|
}),
|
||||||
|
],
|
||||||
|
setupKeys: [
|
||||||
|
makeSetupKey({ name: "Pilot-hawk-72" }),
|
||||||
|
],
|
||||||
|
});
|
||||||
|
|
||||||
|
const exported = exportState(actual);
|
||||||
|
|
||||||
|
assertEquals(exported.groups["ops"].peers, ["Pilot-hawk-72"]);
|
||||||
|
});
|
||||||
|
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
// Tests: Policies
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
Deno.test("exportState: policies with empty rules are skipped", () => {
|
||||||
|
const actual = buildActualState({
|
||||||
|
policies: [
|
||||||
|
{
|
||||||
|
id: "pol1",
|
||||||
|
name: "empty-policy",
|
||||||
|
description: "no rules",
|
||||||
|
enabled: true,
|
||||||
|
source_posture_checks: [],
|
||||||
|
rules: [],
|
||||||
|
},
|
||||||
|
],
|
||||||
|
});
|
||||||
|
|
||||||
|
assertEquals(Object.keys(exportState(actual).policies).length, 0);
|
||||||
|
});
|
||||||
|
|
||||||
|
Deno.test("exportState: policy sources/destinations as {id,name} objects are resolved", () => {
|
||||||
|
const actual = buildActualState({
|
||||||
|
groups: [
|
||||||
|
makeGroup({ id: "g-src", name: "source-group" }),
|
||||||
|
makeGroup({ id: "g-dst", name: "dest-group" }),
|
||||||
|
],
|
||||||
|
policies: [
|
||||||
|
{
|
||||||
|
id: "pol1",
|
||||||
|
name: "object-refs",
|
||||||
|
description: "",
|
||||||
|
enabled: true,
|
||||||
|
source_posture_checks: [],
|
||||||
|
rules: [
|
||||||
|
{
|
||||||
|
name: "r1",
|
||||||
|
description: "",
|
||||||
|
enabled: true,
|
||||||
|
action: "accept",
|
||||||
|
bidirectional: false,
|
||||||
|
protocol: "tcp",
|
||||||
|
ports: ["443", "8080"],
|
||||||
|
sources: [{ id: "g-src", name: "source-group" }],
|
||||||
|
destinations: [{ id: "g-dst", name: "dest-group" }],
|
||||||
|
},
|
||||||
|
],
|
||||||
|
},
|
||||||
|
],
|
||||||
|
});
|
||||||
|
|
||||||
|
const exported = exportState(actual);
|
||||||
|
|
||||||
|
assertEquals(exported.policies["object-refs"].sources, ["source-group"]);
|
||||||
|
assertEquals(exported.policies["object-refs"].destinations, ["dest-group"]);
|
||||||
|
assertEquals(exported.policies["object-refs"].protocol, "tcp");
|
||||||
|
assertEquals(exported.policies["object-refs"].ports, ["443", "8080"]);
|
||||||
|
assertEquals(exported.policies["object-refs"].bidirectional, false);
|
||||||
|
});
|
||||||
|
|
||||||
|
Deno.test("exportState: policy without ports omits the ports field", () => {
|
||||||
|
const actual = buildActualState({
|
||||||
|
groups: [
|
||||||
|
makeGroup({ id: "g1", name: "g" }),
|
||||||
|
],
|
||||||
|
policies: [
|
||||||
|
{
|
||||||
|
id: "pol1",
|
||||||
|
name: "no-ports",
|
||||||
|
description: "",
|
||||||
|
enabled: true,
|
||||||
|
source_posture_checks: [],
|
||||||
|
rules: [
|
||||||
|
{
|
||||||
|
name: "r1",
|
||||||
|
description: "",
|
||||||
|
enabled: true,
|
||||||
|
action: "accept",
|
||||||
|
bidirectional: true,
|
||||||
|
protocol: "all",
|
||||||
|
sources: ["g1"],
|
||||||
|
destinations: ["g1"],
|
||||||
|
},
|
||||||
|
],
|
||||||
|
},
|
||||||
|
],
|
||||||
|
});
|
||||||
|
|
||||||
|
const exported = exportState(actual);
|
||||||
|
|
||||||
|
assertEquals(exported.policies["no-ports"].ports, undefined);
|
||||||
|
});
|
||||||
|
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
// Tests: Routes
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
Deno.test("exportState: routes keyed by network_id with IDs resolved", () => {
|
||||||
|
const actual = buildActualState({
|
||||||
|
groups: [
|
||||||
|
makeGroup({ id: "g-pg", name: "peer-group" }),
|
||||||
|
makeGroup({ id: "g-dist", name: "dist-group" }),
|
||||||
|
],
|
||||||
|
routes: [
|
||||||
|
{
|
||||||
|
id: "r1",
|
||||||
|
description: "LAN route",
|
||||||
|
network_id: "lan-net",
|
||||||
|
enabled: true,
|
||||||
|
peer_groups: ["g-pg"],
|
||||||
|
network: "10.0.0.0/24",
|
||||||
|
metric: 100,
|
||||||
|
masquerade: true,
|
||||||
|
groups: ["g-dist"],
|
||||||
|
keep_route: false,
|
||||||
|
},
|
||||||
|
],
|
||||||
|
});
|
||||||
|
|
||||||
|
const exported = exportState(actual);
|
||||||
|
|
||||||
|
assertEquals(Object.keys(exported.routes), ["lan-net"]);
|
||||||
|
assertEquals(exported.routes["lan-net"].peer_groups, ["peer-group"]);
|
||||||
|
assertEquals(exported.routes["lan-net"].distribution_groups, ["dist-group"]);
|
||||||
|
assertEquals(exported.routes["lan-net"].network, "10.0.0.0/24");
|
||||||
|
assertEquals(exported.routes["lan-net"].metric, 100);
|
||||||
|
assertEquals(exported.routes["lan-net"].masquerade, true);
|
||||||
|
assertEquals(exported.routes["lan-net"].enabled, true);
|
||||||
|
assertEquals(exported.routes["lan-net"].keep_route, false);
|
||||||
|
});
|
||||||
|
|
||||||
|
Deno.test("exportState: route with domains and no network", () => {
|
||||||
|
const actual = buildActualState({
|
||||||
|
groups: [
|
||||||
|
makeGroup({ id: "g1", name: "grp" }),
|
||||||
|
],
|
||||||
|
routes: [
|
||||||
|
{
|
||||||
|
id: "r2",
|
||||||
|
description: "DNS route",
|
||||||
|
network_id: "dns-route",
|
||||||
|
enabled: true,
|
||||||
|
peer_groups: ["g1"],
|
||||||
|
domains: ["example.com"],
|
||||||
|
metric: 9999,
|
||||||
|
masquerade: false,
|
||||||
|
groups: ["g1"],
|
||||||
|
keep_route: true,
|
||||||
|
},
|
||||||
|
],
|
||||||
|
});
|
||||||
|
|
||||||
|
const exported = exportState(actual);
|
||||||
|
|
||||||
|
assertEquals(exported.routes["dns-route"].domains, ["example.com"]);
|
||||||
|
assertEquals(exported.routes["dns-route"].network, undefined);
|
||||||
|
});
|
||||||
|
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
// Tests: DNS
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
Deno.test("exportState: DNS nameserver groups with IDs resolved", () => {
|
||||||
|
const actual = buildActualState({
|
||||||
|
groups: [
|
||||||
|
makeGroup({ id: "g-dns", name: "dns-group" }),
|
||||||
|
],
|
||||||
|
dns: [
|
||||||
|
{
|
||||||
|
id: "d1",
|
||||||
|
name: "internal-dns",
|
||||||
|
description: "internal resolver",
|
||||||
|
nameservers: [{ ip: "1.1.1.1", ns_type: "udp", port: 53 }],
|
||||||
|
enabled: true,
|
||||||
|
groups: ["g-dns"],
|
||||||
|
primary: true,
|
||||||
|
domains: ["internal."],
|
||||||
|
search_domains_enabled: false,
|
||||||
|
},
|
||||||
|
],
|
||||||
|
});
|
||||||
|
|
||||||
|
const exported = exportState(actual);
|
||||||
|
|
||||||
|
assertEquals(Object.keys(exported.dns.nameserver_groups), ["internal-dns"]);
|
||||||
|
const ns = exported.dns.nameserver_groups["internal-dns"];
|
||||||
|
assertEquals(ns.groups, ["dns-group"]);
|
||||||
|
assertEquals(ns.nameservers, [{ ip: "1.1.1.1", ns_type: "udp", port: 53 }]);
|
||||||
|
assertEquals(ns.primary, true);
|
||||||
|
assertEquals(ns.domains, ["internal."]);
|
||||||
|
assertEquals(ns.search_domains_enabled, false);
|
||||||
|
assertEquals(ns.enabled, true);
|
||||||
|
assertEquals(ns.description, "internal resolver");
|
||||||
|
});
|
||||||
402
src/export.ts
Normal file
@ -0,0 +1,402 @@
import type { ActualState } from "./state/actual.ts";
import type { DesiredState } from "./state/schema.ts";

/**
 * Default expires_in value (seconds). The NetBird API does not return the
 * original `expires_in` that was used to create a setup key — only the
 * absolute `expires` timestamp. We fall back to 7 days as a reasonable
 * default so the exported config round-trips cleanly.
 */
const DEFAULT_EXPIRES_IN = 604800;

/**
 * Transforms live NetBird state (as fetched from the API) into the
 * declarative `netbird.json` format (`DesiredState`).
 *
 * This is the inverse of the reconciliation flow: given what's actually
 * deployed, produce a config file that would recreate it. Useful for
 * bootstrapping gitops from an existing NetBird account.
 *
 * Filtering rules:
 * - Groups: system-managed groups (issued !== "api" or name "All") are
 *   excluded. Peer lists only include peers whose name matches a known
 *   setup key (since the desired-state schema maps peers to setup keys).
 * - Setup keys: all exported. `auto_groups` IDs resolved to names.
 *   `enrolled` derived from usage counters.
 * - Policies: empty-rules policies skipped. Source/destination IDs
 *   resolved to group names via the first rule.
 * - Routes: keyed by `network_id`. Peer groups and distribution groups
 *   resolved from IDs to names.
 * - DNS: group IDs resolved to names.
 * - Posture checks: keyed by name, checks object passed through.
 * - Networks: keyed by name, resources and routers resolved.
 * - Peers: keyed by name, groups resolved (excluding "All").
 * - Users: keyed by email, auto_groups resolved.
 */
export function exportState(actual: ActualState): DesiredState {
  const idToName = buildIdToNameMap(actual);
  const setupKeyNames = new Set(actual.setupKeys.map((k) => k.name));

  // Build resource ID → name map from all network resources
  const resourceIdToName = new Map<string, string>();
  for (const resources of actual.networkResources.values()) {
    for (const res of resources) {
      resourceIdToName.set(res.id, res.name);
    }
  }

  // Build posture check ID → name map
  const postureCheckIdToName = new Map<string, string>(
    actual.postureChecks.map((pc) => [pc.id, pc.name]),
  );

  return {
    groups: exportGroups(actual, setupKeyNames, idToName),
    setup_keys: exportSetupKeys(actual, idToName),
    policies: exportPolicies(
      actual,
      idToName,
      resourceIdToName,
      postureCheckIdToName,
    ),
    posture_checks: exportPostureChecks(actual),
    networks: exportNetworks(actual, idToName),
    peers: exportPeers(actual, idToName),
    users: exportUsers(actual, idToName),
    routes: exportRoutes(actual, idToName),
    dns: {
      nameserver_groups: exportDns(actual, idToName),
    },
  };
}

// ---------------------------------------------------------------------------
// Internal helpers
// ---------------------------------------------------------------------------

/**
 * Builds a group ID → group name lookup from the full groups list.
 * Used throughout to translate opaque IDs in API responses back to
 * human-readable names for the config file.
 */
function buildIdToNameMap(actual: ActualState): Map<string, string> {
  return new Map(actual.groups.map((g) => [g.id, g.name]));
}

/** Resolves an array of group IDs to group names, dropping any unresolvable IDs. */
function resolveIds(ids: string[], idToName: Map<string, string>): string[] {
  return ids
    .map((id) => idToName.get(id))
    .filter((name): name is string => name !== undefined);
}

/**
 * Extracts the group ID from a policy source/destination entry.
 * The NetBird API returns these as either plain string IDs or
 * `{ id, name }` objects depending on the API version.
 */
function extractGroupId(entry: string | { id: string; name: string }): string {
  return typeof entry === "string" ? entry : entry.id;
}

// ---------------------------------------------------------------------------
// Groups
// ---------------------------------------------------------------------------

function exportGroups(
  actual: ActualState,
  setupKeyNames: Set<string>,
  _idToName: Map<string, string>,
): DesiredState["groups"] {
  const result: DesiredState["groups"] = {};

  for (const group of actual.groups) {
    if (isSystemGroup(group.name, group.issued)) continue;

    // Only include peers whose name matches a known setup key, since
    // the desired-state schema models peers as setup-key references.
    const peers = (group.peers ?? [])
      .map((p) => p.name)
      .filter((name) => setupKeyNames.has(name));

    result[group.name] = { peers };
  }

  return result;
}

function isSystemGroup(name: string, issued: string): boolean {
  return name === "All" || issued !== "api";
}

// ---------------------------------------------------------------------------
// Setup Keys
// ---------------------------------------------------------------------------

function exportSetupKeys(
  actual: ActualState,
  idToName: Map<string, string>,
): DesiredState["setup_keys"] {
  const result: DesiredState["setup_keys"] = {};

  for (const key of actual.setupKeys) {
    result[key.name] = {
      type: key.type,
      expires_in: DEFAULT_EXPIRES_IN,
      usage_limit: key.usage_limit,
      auto_groups: resolveIds(key.auto_groups, idToName),
      enrolled: isEnrolled(key.used_times, key.usage_limit),
    };
  }

  return result;
}

/**
 * A setup key is considered "enrolled" when it has been fully consumed:
 * `used_times >= usage_limit` with a finite limit. Keys with
 * `usage_limit === 0` (unlimited reusable) are never enrolled.
 */
function isEnrolled(usedTimes: number, usageLimit: number): boolean {
  if (usageLimit === 0) return false;
  return usedTimes >= usageLimit;
}

// ---------------------------------------------------------------------------
// Policies
// ---------------------------------------------------------------------------

function exportPolicies(
  actual: ActualState,
  idToName: Map<string, string>,
  resourceIdToName: Map<string, string>,
  postureCheckIdToName: Map<string, string>,
): DesiredState["policies"] {
  const result: DesiredState["policies"] = {};

  for (const policy of actual.policies) {
    if (policy.rules.length === 0) continue;

    const rule = policy.rules[0];
    const sources = resolveIds(
      (rule.sources ?? []).map(extractGroupId),
      idToName,
    );

    const entry: DesiredState["policies"][string] = {
      description: policy.description,
      enabled: policy.enabled,
      sources,
      destinations: [],
      bidirectional: rule.bidirectional,
      protocol: rule.protocol,
      action: rule.action,
      source_posture_checks: resolveIds(
        policy.source_posture_checks ?? [],
        postureCheckIdToName,
      ),
    };

    // Handle destination_resource vs group-based destinations
    if (rule.destinationResource) {
      const resourceName = resourceIdToName.get(
        rule.destinationResource.id,
      );
      entry.destination_resource = {
        id: resourceName ?? rule.destinationResource.id,
        type: rule.destinationResource.type,
      };
    } else {
      entry.destinations = resolveIds(
        (rule.destinations ?? []).map(extractGroupId),
        idToName,
      );
    }

    if (rule.ports && rule.ports.length > 0) {
      entry.ports = rule.ports;
    }

    result[policy.name] = entry;
  }

  return result;
}

// ---------------------------------------------------------------------------
// Posture Checks
// ---------------------------------------------------------------------------

function exportPostureChecks(
  actual: ActualState,
): DesiredState["posture_checks"] {
  const result: DesiredState["posture_checks"] = {};

  for (const pc of actual.postureChecks) {
    result[pc.name] = {
      description: pc.description,
      checks: pc.checks,
    };
  }

  return result;
}

// ---------------------------------------------------------------------------
// Networks
// ---------------------------------------------------------------------------

function exportNetworks(
  actual: ActualState,
  idToName: Map<string, string>,
): DesiredState["networks"] {
  const result: DesiredState["networks"] = {};

  for (const network of actual.networks) {
    const resources = actual.networkResources.get(network.id) ?? [];
    const routers = actual.networkRouters.get(network.id) ?? [];

    result[network.name] = {
      description: network.description,
      resources: resources.map((res) => ({
        name: res.name,
        description: res.description,
        type: res.type,
        address: res.address,
        enabled: res.enabled,
        groups: res.groups.map((g) => {
          // Resource groups are objects with id/name — use the idToName map
          // for consistency, falling back to the embedded name.
          return idToName.get(g.id) ?? g.name;
        }),
      })),
      routers: routers.map((router) => {
        const entry: DesiredState["networks"][string]["routers"][number] = {
          metric: router.metric,
          masquerade: router.masquerade,
          enabled: router.enabled,
        };
        if (router.peer) {
          const peer = actual.peersById.get(router.peer);
          entry.peer = peer ? peer.name : router.peer;
        }
        if (router.peer_groups && router.peer_groups.length > 0) {
          entry.peer_groups = resolveIds(router.peer_groups, idToName);
        }
        return entry;
      }),
    };
  }

  return result;
}

// ---------------------------------------------------------------------------
// Peers
// ---------------------------------------------------------------------------

function exportPeers(
  actual: ActualState,
  idToName: Map<string, string>,
): DesiredState["peers"] {
  const result: DesiredState["peers"] = {};

  for (const peer of actual.peers) {
    const groups = peer.groups
      .filter((g) => g.name !== "All")
      .map((g) => idToName.get(g.id) ?? g.name);

    result[peer.name] = {
      groups,
      login_expiration_enabled: peer.login_expiration_enabled,
      inactivity_expiration_enabled: peer.inactivity_expiration_enabled,
      ssh_enabled: peer.ssh_enabled,
    };
  }

  return result;
}

// ---------------------------------------------------------------------------
// Users
// ---------------------------------------------------------------------------

function exportUsers(
  actual: ActualState,
  idToName: Map<string, string>,
): DesiredState["users"] {
  const result: DesiredState["users"] = {};

  for (const user of actual.users) {
    result[user.email] = {
      name: user.name,
      role: user.role,
      auto_groups: resolveIds(user.auto_groups, idToName),
    };
  }

  return result;
}

// ---------------------------------------------------------------------------
// Routes
// ---------------------------------------------------------------------------

function exportRoutes(
  actual: ActualState,
  idToName: Map<string, string>,
): DesiredState["routes"] {
  const result: DesiredState["routes"] = {};

  for (const route of actual.routes) {
    const entry: DesiredState["routes"][string] = {
      description: route.description,
      peer_groups: resolveIds(route.peer_groups ?? [], idToName),
      metric: route.metric,
      masquerade: route.masquerade,
      distribution_groups: resolveIds(route.groups, idToName),
      enabled: route.enabled,
      keep_route: route.keep_route,
    };

    if (route.network) {
      entry.network = route.network;
    }
    if (route.domains && route.domains.length > 0) {
      entry.domains = route.domains;
    }

    result[route.network_id] = entry;
  }

  return result;
}

// ---------------------------------------------------------------------------
// DNS
// ---------------------------------------------------------------------------

function exportDns(
  actual: ActualState,
  idToName: Map<string, string>,
): DesiredState["dns"]["nameserver_groups"] {
  const result: DesiredState["dns"]["nameserver_groups"] = {};

  for (const ns of actual.dns) {
    result[ns.name] = {
      description: ns.description,
      nameservers: ns.nameservers.map((s) => ({
        ip: s.ip,
        ns_type: s.ns_type,
        port: s.port,
      })),
      enabled: ns.enabled,
      groups: resolveIds(ns.groups, idToName),
      primary: ns.primary,
      domains: ns.domains,
      search_domains_enabled: ns.search_domains_enabled,
    };
  }

  return result;
}
192
src/gitea/client.test.ts
Normal file
@ -0,0 +1,192 @@
import { assertEquals } from "@std/assert";
import { type FetchFn, GiteaApiError, GiteaClient } from "./client.ts";

function mockFetch(
  responses: Map<string, { status: number; body: unknown }>,
): FetchFn {
  return (input: string | URL | Request, init?: RequestInit) => {
    const url = typeof input === "string"
      ? input
      : input instanceof URL
      ? input.toString()
      : input.url;
    const method = init?.method ?? "GET";
    const key = `${method} ${url}`;
    const resp = responses.get(key);
    if (!resp) throw new Error(`Unmocked request: ${key}`);
    return Promise.resolve(
      new Response(JSON.stringify(resp.body), {
        status: resp.status,
        headers: { "Content-Type": "application/json" },
      }),
    );
  };
}

Deno.test("GiteaClient.getFileContent fetches and decodes base64 content with SHA", async () => {
  const client = new GiteaClient(
    "https://gitea.example.com",
    "test-token",
    "BlastPilot/netbird-gitops",
    mockFetch(
      new Map([
        [
          "GET https://gitea.example.com/api/v1/repos/BlastPilot/netbird-gitops/contents/netbird.json?ref=main",
          {
            status: 200,
            body: {
              content: btoa('{"groups":{}}'),
              sha: "abc123",
            },
          },
        ],
      ]),
    ),
  );
  const result = await client.getFileContent("netbird.json", "main");
  assertEquals(result.sha, "abc123");
  assertEquals(result.content, '{"groups":{}}');
});

Deno.test("GiteaClient.updateFile sends PUT with base64 content and SHA", async () => {
  let capturedMethod: string | undefined;
  let capturedBody: string | undefined;
  let capturedUrl: string | undefined;

  const fakeFetch: FetchFn = (
    input: string | URL | Request,
    init?: RequestInit,
  ) => {
    capturedUrl = typeof input === "string" ? input : input.toString();
    capturedMethod = init?.method;
    capturedBody = init?.body as string;
    return Promise.resolve(
      new Response(JSON.stringify({ content: {} }), {
        status: 200,
        headers: { "Content-Type": "application/json" },
      }),
    );
  };

  const client = new GiteaClient(
    "https://gitea.example.com",
    "test-token",
    "BlastPilot/netbird-gitops",
    fakeFetch,
  );

  await client.updateFile(
    "netbird.json",
    '{"groups":{}}',
    "abc123",
    "chore: update enrolled state",
    "main",
  );

  assertEquals(capturedMethod, "PUT");
  assertEquals(
    capturedUrl,
    "https://gitea.example.com/api/v1/repos/BlastPilot/netbird-gitops/contents/netbird.json",
  );

  const parsed = JSON.parse(capturedBody!);
  assertEquals(parsed.sha, "abc123");
  assertEquals(parsed.branch, "main");
  assertEquals(parsed.message, "chore: update enrolled state");
  assertEquals(atob(parsed.content), '{"groups":{}}');
});

Deno.test("GiteaClient.postIssueComment sends POST with body", async () => {
  let capturedMethod: string | undefined;
  let capturedBody: string | undefined;
  let capturedUrl: string | undefined;

  const fakeFetch: FetchFn = (
    input: string | URL | Request,
    init?: RequestInit,
  ) => {
    capturedUrl = typeof input === "string" ? input : input.toString();
    capturedMethod = init?.method;
    capturedBody = init?.body as string;
    return Promise.resolve(
      new Response(JSON.stringify({ id: 1 }), {
        status: 201,
        headers: { "Content-Type": "application/json" },
      }),
    );
  };

  const client = new GiteaClient(
    "https://gitea.example.com",
    "test-token",
    "BlastPilot/netbird-gitops",
    fakeFetch,
  );

  await client.postIssueComment(42, "Reconciliation complete.");

  assertEquals(capturedMethod, "POST");
  assertEquals(
    capturedUrl,
    "https://gitea.example.com/api/v1/repos/BlastPilot/netbird-gitops/issues/42/comments",
  );
  assertEquals(JSON.parse(capturedBody!), { body: "Reconciliation complete." });
});

Deno.test("GiteaClient throws GiteaApiError on non-2xx response", async () => {
  const client = new GiteaClient(
    "https://gitea.example.com",
    "test-token",
    "BlastPilot/netbird-gitops",
    mockFetch(
      new Map([
        [
          "GET https://gitea.example.com/api/v1/repos/BlastPilot/netbird-gitops/contents/netbird.json?ref=main",
          {
            status: 404,
            body: { message: "not found" },
          },
        ],
      ]),
    ),
  );

  try {
    await client.getFileContent("netbird.json", "main");
    throw new Error("Should have thrown");
  } catch (e) {
    assertEquals(e instanceof GiteaApiError, true);
    assertEquals((e as GiteaApiError).status, 404);
    assertEquals((e as Error).message.includes("404"), true);
  }
});

Deno.test("GiteaClient sends lowercase 'token' auth header (Gitea convention)", async () => {
  let capturedHeaders: Headers | undefined;
  const fakeFetch: FetchFn = (
    _input: string | URL | Request,
|
init?: RequestInit,
|
||||||
|
) => {
|
||||||
|
capturedHeaders = new Headers(init?.headers);
|
||||||
|
return Promise.resolve(
|
||||||
|
new Response(JSON.stringify({ content: btoa("{}"), sha: "x" }), {
|
||||||
|
status: 200,
|
||||||
|
headers: { "Content-Type": "application/json" },
|
||||||
|
}),
|
||||||
|
);
|
||||||
|
};
|
||||||
|
|
||||||
|
const client = new GiteaClient(
|
||||||
|
"https://gitea.example.com",
|
||||||
|
"my-secret-token",
|
||||||
|
"BlastPilot/netbird-gitops",
|
||||||
|
fakeFetch,
|
||||||
|
);
|
||||||
|
await client.getFileContent("netbird.json", "main");
|
||||||
|
|
||||||
|
assertEquals(
|
||||||
|
capturedHeaders?.get("Authorization"),
|
||||||
|
"token my-secret-token",
|
||||||
|
);
|
||||||
|
assertEquals(capturedHeaders?.get("Accept"), "application/json");
|
||||||
|
});
|
||||||
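The tests above round-trip file content through the global `btoa`/`atob`, which only handle Latin-1; `btoa` throws on any code point above U+00FF. If the committed JSON ever contains non-ASCII text, a UTF-8-aware wrapper is needed. A minimal sketch (assumption: these helpers are not part of this repo; `TextEncoder`/`TextDecoder` and `btoa`/`atob` are globals in Deno and modern Node):

```typescript
// UTF-8-safe base64 helpers. Plain btoa("✓") would throw, because btoa
// expects a Latin-1 "binary string"; encoding to UTF-8 bytes first avoids it.
function encodeBase64Utf8(s: string): string {
  const bytes = new TextEncoder().encode(s);
  let bin = "";
  for (const b of bytes) bin += String.fromCharCode(b);
  return btoa(bin);
}

function decodeBase64Utf8(b64: string): string {
  const bin = atob(b64);
  const bytes = Uint8Array.from(bin, (c) => c.charCodeAt(0));
  return new TextDecoder().decode(bytes);
}
```

For pure-ASCII content (as in the fixtures above) this behaves identically to `btoa`/`atob`, so switching costs nothing.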
140 src/gitea/client.ts Normal file
@@ -0,0 +1,140 @@
/** Narrowed fetch signature used for dependency injection. */
export type FetchFn = (
  input: string | URL | Request,
  init?: RequestInit,
) => Promise<Response>;

/** Thrown when the Gitea API returns a non-2xx status. */
export class GiteaApiError extends Error {
  constructor(
    public readonly status: number,
    public readonly method: string,
    public readonly path: string,
    public readonly body: unknown,
  ) {
    super(`Gitea API error: ${method} ${path} returned ${status}`);
    this.name = "GiteaApiError";
  }
}

/**
 * Thin HTTP client for the Gitea API.
 *
 * Used by the event poller to commit `enrolled: true` state changes
 * back to the git repo, and by CI workflows via PR comments.
 *
 * Auth uses `token <token>` (lowercase — Gitea convention, distinct
 * from NetBird's uppercase `Token`).
 *
 * Accepts an injectable fetch function so callers (and tests) can swap
 * the transport without touching the client logic.
 */
export class GiteaClient {
  constructor(
    private readonly baseUrl: string,
    private readonly token: string,
    private readonly repo: string, // "owner/repo"
    private readonly fetchFn: FetchFn = fetch,
  ) {}

  // ---------------------------------------------------------------------------
  // Internal
  // ---------------------------------------------------------------------------

  private async request<T>(
    method: string,
    path: string,
    body?: unknown,
  ): Promise<T> {
    const url = `${this.baseUrl}/api/v1${path}`;
    const headers: Record<string, string> = {
      "Authorization": `token ${this.token}`,
      "Accept": "application/json",
    };
    if (body !== undefined) {
      headers["Content-Type"] = "application/json";
    }

    const resp = await this.fetchFn(url, {
      method,
      headers,
      body: body !== undefined ? JSON.stringify(body) : undefined,
    });

    if (!resp.ok) {
      const text = await resp.text().catch(() => "");
      let errorBody: unknown;
      try {
        errorBody = JSON.parse(text);
      } catch {
        errorBody = text;
      }
      throw new GiteaApiError(resp.status, method, path, errorBody);
    }

    // 204 No Content — nothing to parse
    if (resp.status === 204) {
      return undefined as T;
    }

    return (await resp.json()) as T;
  }

  // ---------------------------------------------------------------------------
  // Repository Contents
  // ---------------------------------------------------------------------------

  /** Get file content and SHA for optimistic concurrency. */
  async getFileContent(
    path: string,
    ref: string,
  ): Promise<{ content: string; sha: string }> {
    const data = await this.request<{ content: string; sha: string }>(
      "GET",
      `/repos/${this.repo}/contents/${path}?ref=${ref}`,
    );
    return {
      content: atob(data.content),
      sha: data.sha,
    };
  }

  /**
   * Update file with optimistic concurrency (SHA check).
   *
   * The SHA from getFileContent acts as a CAS token — the update will be
   * rejected by Gitea if the file has been modified since we read it.
   * This prevents race conditions when the poller and CI both try to
   * update netbird.json.
   */
  async updateFile(
    path: string,
    content: string,
    sha: string,
    message: string,
    branch: string,
  ): Promise<void> {
    await this.request("PUT", `/repos/${this.repo}/contents/${path}`, {
      content: btoa(content),
      sha,
      message,
      branch,
    });
  }

  // ---------------------------------------------------------------------------
  // Issue Comments (used for PR status reporting)
  // ---------------------------------------------------------------------------

  /** Post a comment on an issue or pull request. */
  async postIssueComment(
    issueNumber: number,
    body: string,
  ): Promise<void> {
    await this.request(
      "POST",
      `/repos/${this.repo}/issues/${issueNumber}/comments`,
      { body },
    );
  }
}
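The CAS semantics documented on `updateFile` imply a read-modify-write retry loop at the call site: re-fetch the SHA and reapply the mutation whenever Gitea rejects a stale token. A sketch of that loop under stated assumptions: `FileStore`, `ConflictError`, and `updateWithRetry` are illustrative names, not part of this repo, and mapping Gitea's conflict response to a distinct error type is assumed to happen in the caller.

```typescript
// Hypothetical subset of GiteaClient that the retry loop needs.
interface FileStore {
  getFileContent(path: string, ref: string): Promise<{ content: string; sha: string }>;
  updateFile(path: string, content: string, sha: string, message: string, branch: string): Promise<void>;
}

// Illustrative error: assume the caller wraps a stale-SHA rejection in this.
class ConflictError extends Error {
  constructor(public readonly status: number) {
    super(`conflict: ${status}`);
  }
}

async function updateWithRetry(
  store: FileStore,
  path: string,
  branch: string,
  message: string,
  mutate: (current: string) => string,
  maxAttempts = 3,
): Promise<string> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    // Re-read on every attempt so the SHA (CAS token) is fresh.
    const { content, sha } = await store.getFileContent(path, branch);
    const next = mutate(content);
    try {
      await store.updateFile(path, next, sha, message, branch);
      return next;
    } catch (err) {
      // Only retry on a stale-SHA conflict; rethrow anything else.
      if (err instanceof ConflictError) {
        lastErr = err;
        continue;
      }
      throw err;
    }
  }
  throw lastErr;
}
```

Because `mutate` is reapplied to the freshly read content on every attempt, a concurrent writer's changes are preserved rather than clobbered.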
441 src/integration.test.ts Normal file
@@ -0,0 +1,441 @@
import { assertEquals, assertExists } from "@std/assert";
import { createHandler } from "./server.ts";
import { NetbirdClient } from "./netbird/client.ts";
import type { GiteaClient } from "./gitea/client.ts";
import type { Config } from "./config.ts";

// -----------------------------------------------------------------------------
// Mock NetBird API
// -----------------------------------------------------------------------------

interface ApiCall {
  method: string;
  path: string;
  body?: Record<string, unknown>;
}

const MOCK_BASE = "https://nb.test/api";
const TEST_KEY_VALUE = "NBSK-test-key-value-abc123";

function createMockFetch(calls: ApiCall[]) {
  let groupCounter = 0;

  return async (
    input: string | URL | Request,
    init?: RequestInit,
  ): Promise<Response> => {
    const url = typeof input === "string" ? input : input.toString();
    const method = init?.method ?? "GET";
    const path = url.replace(MOCK_BASE, "");
    const body = init?.body ? JSON.parse(init.body as string) : undefined;

    calls.push({ method, path, body });

    // Route GET list endpoints — return empty arrays
    if (method === "GET") {
      return Response.json([]);
    }

    // POST /groups — return a created group
    if (method === "POST" && path === "/groups") {
      groupCounter++;
      return Response.json({
        id: `mock-group-${groupCounter}`,
        name: body.name,
        peers_count: 0,
        peers: [],
        issued: "api",
      });
    }

    // POST /setup-keys — return a created key
    if (method === "POST" && path === "/setup-keys") {
      return Response.json({
        id: 1,
        name: body.name,
        type: body.type,
        key: TEST_KEY_VALUE,
        expires: "2027-01-01T00:00:00Z",
        valid: true,
        revoked: false,
        used_times: 0,
        state: "valid",
        auto_groups: body.auto_groups ?? [],
        usage_limit: body.usage_limit ?? 0,
      });
    }

    // POST /policies — return a created policy
    if (method === "POST" && path === "/policies") {
      return Response.json({
        id: "mock-policy-1",
        name: body.name,
        description: body.description ?? "",
        enabled: body.enabled ?? true,
        rules: body.rules ?? [],
      });
    }

    // DELETE — 204 No Content
    if (method === "DELETE") {
      return new Response(null, { status: 204 });
    }

    return Response.json({ error: "mock: unhandled route" }, { status: 500 });
  };
}

// -----------------------------------------------------------------------------
// Test fixtures
// -----------------------------------------------------------------------------

const MOCK_CONFIG: Config = {
  netbirdApiUrl: MOCK_BASE,
  netbirdApiToken: "nb-test-token",
  giteaEnabled: true,
  giteaUrl: "https://gitea.test",
  giteaToken: "gitea-test-token",
  giteaRepo: "org/repo",
  reconcilerToken: "secret",
  pollIntervalSeconds: 30,
  port: 8080,
  dataDir: "/data",
};

/** Desired state with one group and one setup key referencing it. */
const DESIRED_STATE = {
  groups: {
    pilots: { peers: [] },
  },
  setup_keys: {
    "Pilot-hawk-72": {
      type: "one-off",
      expires_in: 604800,
      usage_limit: 1,
      auto_groups: ["pilots"],
      enrolled: false,
    },
  },
};

function buildHandler(calls: ApiCall[]) {
  const mockFetch = createMockFetch(calls);
  const netbird = new NetbirdClient(MOCK_BASE, "nb-test-token", mockFetch);

  // The GiteaClient is not exercised in reconcile tests — stub it out.
  const gitea = {} as GiteaClient;

  return createHandler({
    config: MOCK_CONFIG,
    netbird,
    gitea,
    reconcileInProgress: { value: false },
  });
}

function authedRequest(path: string, body?: unknown): Request {
  return new Request(`http://localhost:8080${path}`, {
    method: "POST",
    headers: {
      "Authorization": "Bearer secret",
      "Content-Type": "application/json",
    },
    body: body !== undefined ? JSON.stringify(body) : undefined,
  });
}

// -----------------------------------------------------------------------------
// Standalone mode (giteaEnabled: false, gitea: null)
// -----------------------------------------------------------------------------

const STANDALONE_CONFIG: Config = {
  netbirdApiUrl: MOCK_BASE,
  netbirdApiToken: "nb-test-token",
  giteaEnabled: false,
  reconcilerToken: "secret",
  pollIntervalSeconds: 30,
  port: 8080,
  dataDir: "/data",
};

function buildStandaloneHandler(calls: ApiCall[]) {
  const mockFetch = createMockFetch(calls);
  const netbird = new NetbirdClient(MOCK_BASE, "nb-test-token", mockFetch);

  return createHandler({
    config: STANDALONE_CONFIG,
    netbird,
    gitea: null,
    reconcileInProgress: { value: false },
  });
}

// -----------------------------------------------------------------------------
// Tests
// -----------------------------------------------------------------------------

Deno.test("POST /reconcile?dry_run=true returns planned operations", async () => {
  const calls: ApiCall[] = [];
  const handler = buildHandler(calls);

  const resp = await handler(
    authedRequest("/reconcile?dry_run=true", DESIRED_STATE),
  );
  assertEquals(resp.status, 200);

  const json = await resp.json();
  assertEquals(json.status, "planned");

  const opTypes = json.operations.map((op: { type: string }) => op.type);
  assertEquals(opTypes.includes("create_group"), true);
  assertEquals(opTypes.includes("create_setup_key"), true);
});

Deno.test("POST /reconcile apply creates resources and returns keys", async () => {
  const calls: ApiCall[] = [];
  const handler = buildHandler(calls);

  const resp = await handler(authedRequest("/reconcile", DESIRED_STATE));
  assertEquals(resp.status, 200);

  const json = await resp.json();
  assertEquals(json.status, "applied");

  // The created setup key's raw value should be in the response
  assertExists(json.created_keys["Pilot-hawk-72"]);
  assertEquals(json.created_keys["Pilot-hawk-72"], TEST_KEY_VALUE);

  // Verify mock API received the expected POST calls
  const postGroups = calls.filter(
    (c) => c.method === "POST" && c.path === "/groups",
  );
  assertEquals(postGroups.length, 1);
  assertEquals(postGroups[0].body?.name, "pilots");

  const postKeys = calls.filter(
    (c) => c.method === "POST" && c.path === "/setup-keys",
  );
  assertEquals(postKeys.length, 1);
  assertEquals(postKeys[0].body?.name, "Pilot-hawk-72");
});

Deno.test("POST /reconcile rejects unauthorized requests", async () => {
  const calls: ApiCall[] = [];
  const handler = buildHandler(calls);

  const req = new Request("http://localhost:8080/reconcile", {
    method: "POST",
    headers: {
      "Authorization": "Bearer wrong-token",
      "Content-Type": "application/json",
    },
    body: JSON.stringify(DESIRED_STATE),
  });

  const resp = await handler(req);
  assertEquals(resp.status, 401);

  const json = await resp.json();
  assertEquals(json.error, "unauthorized");
});

Deno.test("GET /health returns ok", async () => {
  const calls: ApiCall[] = [];
  const handler = buildHandler(calls);

  const req = new Request("http://localhost:8080/health", { method: "GET" });
  const resp = await handler(req);
  assertEquals(resp.status, 200);

  const json = await resp.json();
  assertEquals(json.status, "ok");
});

// -----------------------------------------------------------------------------
// Standalone mode tests (giteaEnabled: false, gitea: null)
// -----------------------------------------------------------------------------

Deno.test("standalone: POST /reconcile?dry_run=true returns planned operations", async () => {
  const calls: ApiCall[] = [];
  const handler = buildStandaloneHandler(calls);

  const resp = await handler(
    authedRequest("/reconcile?dry_run=true", DESIRED_STATE),
  );
  assertEquals(resp.status, 200);

  const json = await resp.json();
  assertEquals(json.status, "planned");

  const opTypes = json.operations.map((op: { type: string }) => op.type);
  assertEquals(opTypes.includes("create_group"), true);
  assertEquals(opTypes.includes("create_setup_key"), true);
});

Deno.test("standalone: POST /reconcile apply creates resources", async () => {
  const calls: ApiCall[] = [];
  const handler = buildStandaloneHandler(calls);

  const resp = await handler(authedRequest("/reconcile", DESIRED_STATE));
  assertEquals(resp.status, 200);

  const json = await resp.json();
  assertEquals(json.status, "applied");

  assertExists(json.created_keys["Pilot-hawk-72"]);
  assertEquals(json.created_keys["Pilot-hawk-72"], TEST_KEY_VALUE);
});

Deno.test("standalone: GET /health returns ok with gitea=null", async () => {
  const calls: ApiCall[] = [];
  const handler = buildStandaloneHandler(calls);

  const req = new Request("http://localhost:8080/health", { method: "GET" });
  const resp = await handler(req);
  assertEquals(resp.status, 200);

  const json = await resp.json();
  assertEquals(json.status, "ok");
});

Deno.test("standalone: POST /sync-events does not crash with gitea=null", async () => {
  const calls: ApiCall[] = [];
  const handler = buildStandaloneHandler(calls);

  const resp = await handler(authedRequest("/sync-events"));
  assertEquals(resp.status, 200);

  const json = await resp.json();
  assertEquals(json.status, "synced");
});

// -----------------------------------------------------------------------------
// /export endpoint tests
// -----------------------------------------------------------------------------

/**
 * Creates a mock fetch that returns pre-seeded data for GET list endpoints.
 * This lets us verify the export endpoint transforms API data correctly.
 */
function createExportMockFetch(calls: ApiCall[]) {
  return async (
    input: string | URL | Request,
    init?: RequestInit,
  ): Promise<Response> => {
    const url = typeof input === "string" ? input : input.toString();
    const method = init?.method ?? "GET";
    const path = url.replace(MOCK_BASE, "");

    calls.push({ method, path });

    if (method === "GET" && path === "/groups") {
      return Response.json([
        {
          id: "g-pilots",
          name: "pilots",
          peers_count: 1,
          peers: [{ id: "p1", name: "Pilot-hawk-72" }],
          issued: "api",
        },
        {
          id: "g-all",
          name: "All",
          peers_count: 0,
          peers: [],
          issued: "jwt",
        },
      ]);
    }
    if (method === "GET" && path === "/setup-keys") {
      return Response.json([
        {
          id: 1,
          name: "Pilot-hawk-72",
          type: "one-off",
          key: "NBSK-masked",
          expires: "2027-01-01T00:00:00Z",
          valid: true,
          revoked: false,
          used_times: 1,
          state: "overused",
          auto_groups: ["g-pilots"],
          usage_limit: 1,
        },
      ]);
    }
    if (method === "GET" && path === "/peers") return Response.json([]);
    if (method === "GET" && path === "/policies") return Response.json([]);
    if (method === "GET" && path === "/routes") return Response.json([]);
    if (method === "GET" && path === "/dns/nameservers") {
      return Response.json([]);
    }
    if (method === "GET" && path === "/posture-checks") {
      return Response.json([]);
    }
    if (method === "GET" && path === "/networks") return Response.json([]);
    if (method === "GET" && path === "/users") return Response.json([]);
    if (method === "GET" && path === "/events/audit") {
      return Response.json([]);
    }

    return Response.json({ error: "mock: unhandled route" }, { status: 500 });
  };
}

function buildExportHandler(calls: ApiCall[]) {
  const mockFetch = createExportMockFetch(calls);
  const netbird = new NetbirdClient(MOCK_BASE, "nb-test-token", mockFetch);

  return createHandler({
    config: MOCK_CONFIG,
    netbird,
    gitea: {} as GiteaClient,
    reconcileInProgress: { value: false },
  });
}

Deno.test("GET /export returns 200 with state and meta", async () => {
  const calls: ApiCall[] = [];
  const handler = buildExportHandler(calls);

  const req = new Request("http://localhost:8080/export", {
    method: "GET",
    headers: { "Authorization": "Bearer secret" },
  });

  const resp = await handler(req);
  assertEquals(resp.status, 200);

  const json = await resp.json();
  assertEquals(json.status, "ok");

  // State should contain exported groups (minus "All")
  assertEquals(Object.keys(json.state.groups), ["pilots"]);
  assertEquals(json.state.groups["pilots"].peers, ["Pilot-hawk-72"]);

  // Setup key auto_groups should be resolved from IDs to names
  assertEquals(json.state.setup_keys["Pilot-hawk-72"].auto_groups, ["pilots"]);
  assertEquals(json.state.setup_keys["Pilot-hawk-72"].enrolled, true);

  // Meta should be present
  assertExists(json.meta.exported_at);
  assertEquals(json.meta.source_url, MOCK_BASE);
  assertEquals(json.meta.groups_count, 1);
  assertEquals(json.meta.setup_keys_count, 1);
  assertEquals(json.meta.policies_count, 0);
  assertEquals(json.meta.routes_count, 0);
  assertEquals(json.meta.dns_count, 0);
});

Deno.test("GET /export requires auth — 401 without bearer", async () => {
  const calls: ApiCall[] = [];
  const handler = buildExportHandler(calls);

  const req = new Request("http://localhost:8080/export", { method: "GET" });

  const resp = await handler(req);
  assertEquals(resp.status, 401);

  const json = await resp.json();
  assertEquals(json.error, "unauthorized");
});
87 src/main.ts Normal file
@@ -0,0 +1,87 @@
import { ZodError } from "zod";
import { loadConfig } from "./config.ts";
import { NetbirdClient } from "./netbird/client.ts";
import { GiteaClient } from "./gitea/client.ts";
import { createHandler } from "./server.ts";
import { startPollerLoop } from "./poller/loop.ts";
import { fetchActualState } from "./state/actual.ts";
import { exportState } from "./export.ts";

// ---------------------------------------------------------------------------
// --export: one-shot state export, prints to stdout and exits
// ---------------------------------------------------------------------------

function getCliArg(name: string): string | undefined {
  const idx = Deno.args.indexOf(name);
  if (idx === -1 || idx + 1 >= Deno.args.length) return undefined;
  return Deno.args[idx + 1];
}

if (Deno.args.includes("--export")) {
  const apiUrl = getCliArg("--netbird-api-url") ??
    Deno.env.get("NETBIRD_API_URL");
  const apiToken = getCliArg("--netbird-api-token") ??
    Deno.env.get("NETBIRD_API_TOKEN");

  if (!apiUrl || !apiToken) {
    console.error(
      "Usage: --export --netbird-api-url <url> --netbird-api-token <token>",
    );
    Deno.exit(1);
  }

  const client = new NetbirdClient(apiUrl, apiToken);
  const actual = await fetchActualState(client);
  const state = exportState(actual);
  console.log(JSON.stringify(state, null, 2));
  Deno.exit(0);
}

// ---------------------------------------------------------------------------
// Normal server mode
// ---------------------------------------------------------------------------

let config;
try {
  config = loadConfig();
} catch (err) {
  if (err instanceof ZodError) {
    console.error(
      JSON.stringify({ msg: "invalid config", issues: err.issues }),
    );
    Deno.exit(1);
  }
  throw err;
}

const netbird = new NetbirdClient(config.netbirdApiUrl, config.netbirdApiToken);
const gitea = config.giteaEnabled
  ? new GiteaClient(config.giteaUrl!, config.giteaToken!, config.giteaRepo!)
  : null;
const reconcileInProgress = { value: false };

// Start background poller
const pollerAbort = startPollerLoop({
  config,
  netbird,
  gitea,
  reconcileInProgress,
});

// Start HTTP server
const handler = createHandler({ config, netbird, gitea, reconcileInProgress });
console.log(
  JSON.stringify({
    msg: "starting",
    port: config.port,
    gitea_enabled: config.giteaEnabled,
  }),
);
Deno.serve({ port: config.port, handler });

// Graceful shutdown
Deno.addSignalListener("SIGTERM", () => {
  console.log(JSON.stringify({ msg: "shutting_down" }));
  pollerAbort.abort();
  Deno.exit(0);
});
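The `startPollerLoop` / `pollerAbort.abort()` pairing above implies an abortable timer loop. The real implementation lives in `./poller/loop.ts` and is not shown in this diff; a hedged sketch of the general shape, with illustrative names and error handling that are assumptions rather than the project's actual code:

```typescript
// Sketch of an abortable poll loop: run `tick` every `intervalMs` until the
// returned controller is aborted. The abort listener wakes the sleep early
// so shutdown does not wait out a full interval.
function startLoop(
  tick: () => Promise<void>,
  intervalMs: number,
): AbortController {
  const ac = new AbortController();
  (async () => {
    while (!ac.signal.aborted) {
      try {
        await tick();
      } catch (err) {
        // A failed poll should not kill the loop; log and keep going.
        console.error("poll failed:", err);
      }
      await new Promise<void>((resolve) => {
        if (ac.signal.aborted) return resolve();
        const t = setTimeout(resolve, intervalMs);
        ac.signal.addEventListener("abort", () => {
          clearTimeout(t);
          resolve();
        }, { once: true });
      });
    }
  })();
  return ac;
}
```

Note the SIGTERM handler above calls `Deno.exit(0)` right after `abort()`, so in-flight requests and the final poll iteration are cut off; whether that is acceptable depends on the deployment.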
147 src/netbird/client.test.ts Normal file
@@ -0,0 +1,147 @@
|
|||||||
|
import { assertEquals } from "@std/assert";
|
||||||
|
import { type FetchFn, NetbirdApiError, NetbirdClient } from "./client.ts";
|
||||||
|
|
||||||
|
function mockFetch(
|
||||||
|
responses: Map<string, { status: number; body: unknown }>,
|
||||||
|
): FetchFn {
|
||||||
|
return (input: string | URL | Request, init?: RequestInit) => {
|
||||||
|
const url = typeof input === "string"
|
||||||
|
? input
|
||||||
|
: input instanceof URL
|
||||||
|
? input.toString()
|
||||||
|
: input.url;
|
||||||
|
const method = init?.method ?? "GET";
|
||||||
|
const key = `${method} ${url}`;
|
||||||
|
const resp = responses.get(key);
|
||||||
|
if (!resp) throw new Error(`Unmocked request: ${key}`);
|
||||||
|
return Promise.resolve(
|
||||||
|
new Response(JSON.stringify(resp.body), {
|
||||||
|
status: resp.status,
|
||||||
|
headers: { "Content-Type": "application/json" },
|
||||||
|
}),
|
||||||
|
);
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
Deno.test("NetbirdClient.listGroups sends auth header and parses response", async () => {
|
||||||
|
const groups = [{
|
||||||
|
id: "g1",
|
||||||
|
name: "pilots",
|
||||||
|
peers_count: 1,
|
||||||
|
peers: [],
|
||||||
|
issued: "api",
|
||||||
  }];
  const client = new NetbirdClient(
    "https://nb.example.com/api",
    "test-token",
    mockFetch(
      new Map([
        ["GET https://nb.example.com/api/groups", {
          status: 200,
          body: groups,
        }],
      ]),
    ),
  );
  const result = await client.listGroups();
  assertEquals(result.length, 1);
  assertEquals(result[0].name, "pilots");
});

Deno.test("NetbirdClient throws on non-2xx response", async () => {
  const client = new NetbirdClient(
    "https://nb.example.com/api",
    "test-token",
    mockFetch(
      new Map([
        ["GET https://nb.example.com/api/groups", {
          status: 401,
          body: { message: "unauthorized" },
        }],
      ]),
    ),
  );
  try {
    await client.listGroups();
    throw new Error("Should have thrown");
  } catch (e) {
    assertEquals(e instanceof NetbirdApiError, true);
    assertEquals((e as NetbirdApiError).status, 401);
    assertEquals((e as Error).message.includes("401"), true);
  }
});

Deno.test("NetbirdClient sends correct auth header", async () => {
  let capturedHeaders: Headers | undefined;
  const fakeFetch: FetchFn = (
    _input: string | URL | Request,
    init?: RequestInit,
  ) => {
    capturedHeaders = new Headers(init?.headers);
    return Promise.resolve(
      new Response(JSON.stringify([]), {
        status: 200,
        headers: { "Content-Type": "application/json" },
      }),
    );
  };

  const client = new NetbirdClient(
    "https://nb.example.com/api",
    "my-secret-token",
    fakeFetch,
  );
  await client.listGroups();

  assertEquals(capturedHeaders?.get("Authorization"), "Token my-secret-token");
  assertEquals(capturedHeaders?.get("Accept"), "application/json");
});

Deno.test("NetbirdClient.deleteGroup handles 204 No Content", async () => {
  const fakeFetch: FetchFn = () => {
    return Promise.resolve(new Response(null, { status: 204 }));
  };

  const client = new NetbirdClient(
    "https://nb.example.com/api",
    "test-token",
    fakeFetch,
  );
  // Should not throw — 204 is a success with no body
  await client.deleteGroup("g1");
});

Deno.test("NetbirdClient.createGroup sends POST with body", async () => {
  let capturedMethod: string | undefined;
  let capturedBody: string | undefined;
  const fakeFetch: FetchFn = (
    _input: string | URL | Request,
    init?: RequestInit,
  ) => {
    capturedMethod = init?.method;
    capturedBody = init?.body as string;
    return Promise.resolve(
      new Response(
        JSON.stringify({
          id: "g2",
          name: "drones",
          peers_count: 0,
          peers: [],
          issued: "api",
        }),
        { status: 200, headers: { "Content-Type": "application/json" } },
      ),
    );
  };

  const client = new NetbirdClient(
    "https://nb.example.com/api",
    "test-token",
    fakeFetch,
  );
  const result = await client.createGroup({ name: "drones" });

  assertEquals(capturedMethod, "POST");
  assertEquals(JSON.parse(capturedBody!), { name: "drones" });
  assertEquals(result.name, "drones");
});
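The tests above call a `mockFetch` helper that builds a `FetchFn` from a `Map` keyed by `"METHOD url"`, but the helper itself is not part of this diff. A minimal sketch of what such a helper could look like, consistent with how the tests use it (the `CannedResponse` shape and the 404 fallback for unrouted requests are assumptions, not code from the repo):

```typescript
type FetchFn = (
  input: string | URL | Request,
  init?: RequestInit,
) => Promise<Response>;

interface CannedResponse {
  status: number;
  body?: unknown;
}

// Hypothetical test helper: look up "METHOD url" in the route map and
// return a canned JSON Response; unrouted requests get a 404 so the
// test fails loudly instead of hanging on a real network call.
function mockFetch(routes: Map<string, CannedResponse>): FetchFn {
  return (input, init) => {
    const url = typeof input === "string"
      ? input
      : input instanceof URL
      ? input.toString()
      : input.url;
    const method = init?.method ?? "GET";
    const canned = routes.get(`${method} ${url}`);
    if (!canned) {
      return Promise.resolve(new Response("not found", { status: 404 }));
    }
    return Promise.resolve(
      new Response(
        canned.body !== undefined ? JSON.stringify(canned.body) : null,
        {
          status: canned.status,
          headers: { "Content-Type": "application/json" },
        },
      ),
    );
  };
}
```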
397 src/netbird/client.ts Normal file
@@ -0,0 +1,397 @@
import type {
  NbDnsNameserverGroup,
  NbEvent,
  NbGroup,
  NbNetwork,
  NbNetworkResource,
  NbNetworkRouter,
  NbPeer,
  NbPolicy,
  NbPostureCheck,
  NbRoute,
  NbSetupKey,
  NbUser,
} from "./types.ts";

/** Narrowed fetch signature used for dependency injection. */
export type FetchFn = (
  input: string | URL | Request,
  init?: RequestInit,
) => Promise<Response>;

/** Thrown when the NetBird API returns a non-2xx status. */
export class NetbirdApiError extends Error {
  constructor(
    public readonly status: number,
    public readonly method: string,
    public readonly path: string,
    public readonly body: unknown,
  ) {
    super(`NetBird API error: ${method} ${path} returned ${status}`);
    this.name = "NetbirdApiError";
  }
}

/**
 * Thin HTTP client for the NetBird Management API.
 *
 * Accepts an injectable fetch function so callers (and tests) can swap
 * the transport without touching the client logic.
 */
export class NetbirdClient {
  constructor(
    private readonly baseUrl: string,
    private readonly token: string,
    private readonly fetchFn: FetchFn = fetch,
  ) {}

  // ---------------------------------------------------------------------------
  // Internal
  // ---------------------------------------------------------------------------

  private async request<T>(
    method: string,
    path: string,
    body?: unknown,
  ): Promise<T> {
    const url = `${this.baseUrl}${path}`;
    const headers: Record<string, string> = {
      "Authorization": `Token ${this.token}`,
      "Accept": "application/json",
    };
    if (body !== undefined) {
      headers["Content-Type"] = "application/json";
    }

    const resp = await this.fetchFn(url, {
      method,
      headers,
      body: body !== undefined ? JSON.stringify(body) : undefined,
    });

    if (!resp.ok) {
      const text = await resp.text();
      let errorBody: unknown;
      try {
        errorBody = JSON.parse(text);
      } catch {
        errorBody = text;
      }
      throw new NetbirdApiError(resp.status, method, path, errorBody);
    }

    // 204 No Content — nothing to parse
    if (resp.status === 204) {
      return undefined as T;
    }

    return (await resp.json()) as T;
  }

  // ---------------------------------------------------------------------------
  // Groups
  // ---------------------------------------------------------------------------

  listGroups(): Promise<NbGroup[]> {
    return this.request("GET", "/groups");
  }

  createGroup(data: { name: string; peers?: string[] }): Promise<NbGroup> {
    return this.request("POST", "/groups", data);
  }

  updateGroup(
    id: string,
    data: { name?: string; peers?: string[] },
  ): Promise<NbGroup> {
    return this.request("PUT", `/groups/${id}`, data);
  }

  deleteGroup(id: string): Promise<void> {
    return this.request("DELETE", `/groups/${id}`);
  }

  // ---------------------------------------------------------------------------
  // Setup Keys
  // ---------------------------------------------------------------------------

  listSetupKeys(): Promise<NbSetupKey[]> {
    return this.request("GET", "/setup-keys");
  }

  createSetupKey(data: {
    name: string;
    type: "one-off" | "reusable";
    expires_in: number;
    auto_groups?: string[];
    usage_limit?: number;
  }): Promise<NbSetupKey> {
    return this.request("POST", "/setup-keys", data);
  }

  deleteSetupKey(id: number | string): Promise<void> {
    return this.request("DELETE", `/setup-keys/${id}`);
  }

  // ---------------------------------------------------------------------------
  // Peers
  // ---------------------------------------------------------------------------

  listPeers(): Promise<NbPeer[]> {
    return this.request("GET", "/peers");
  }

  updatePeer(
    id: string,
    data: {
      name?: string;
      ssh_enabled?: boolean;
      login_expiration_enabled?: boolean;
      inactivity_expiration_enabled?: boolean;
    },
  ): Promise<NbPeer> {
    return this.request("PUT", `/peers/${id}`, data);
  }

  deletePeer(id: string): Promise<void> {
    return this.request("DELETE", `/peers/${id}`);
  }

  // ---------------------------------------------------------------------------
  // Policies
  // ---------------------------------------------------------------------------

  listPolicies(): Promise<NbPolicy[]> {
    return this.request("GET", "/policies");
  }

  createPolicy(data: Omit<NbPolicy, "id">): Promise<NbPolicy> {
    return this.request("POST", "/policies", data);
  }

  updatePolicy(id: string, data: Omit<NbPolicy, "id">): Promise<NbPolicy> {
    return this.request("PUT", `/policies/${id}`, data);
  }

  deletePolicy(id: string): Promise<void> {
    return this.request("DELETE", `/policies/${id}`);
  }

  // ---------------------------------------------------------------------------
  // Routes
  // ---------------------------------------------------------------------------

  listRoutes(): Promise<NbRoute[]> {
    return this.request("GET", "/routes");
  }

  createRoute(data: Omit<NbRoute, "id">): Promise<NbRoute> {
    return this.request("POST", "/routes", data);
  }

  updateRoute(id: string, data: Omit<NbRoute, "id">): Promise<NbRoute> {
    return this.request("PUT", `/routes/${id}`, data);
  }

  deleteRoute(id: string): Promise<void> {
    return this.request("DELETE", `/routes/${id}`);
  }

  // ---------------------------------------------------------------------------
  // DNS Nameserver Groups
  // ---------------------------------------------------------------------------

  listDnsNameserverGroups(): Promise<NbDnsNameserverGroup[]> {
    return this.request("GET", "/dns/nameservers");
  }

  createDnsNameserverGroup(
    data: Omit<NbDnsNameserverGroup, "id">,
  ): Promise<NbDnsNameserverGroup> {
    return this.request("POST", "/dns/nameservers", data);
  }

  updateDnsNameserverGroup(
    id: string,
    data: Omit<NbDnsNameserverGroup, "id">,
  ): Promise<NbDnsNameserverGroup> {
    return this.request("PUT", `/dns/nameservers/${id}`, data);
  }

  deleteDnsNameserverGroup(id: string): Promise<void> {
    return this.request("DELETE", `/dns/nameservers/${id}`);
  }

  // ---------------------------------------------------------------------------
  // Events
  // ---------------------------------------------------------------------------

  listEvents(): Promise<NbEvent[]> {
    return this.request("GET", "/events/audit");
  }

  // ---------------------------------------------------------------------------
  // Posture Checks
  // ---------------------------------------------------------------------------

  listPostureChecks(): Promise<NbPostureCheck[]> {
    return this.request("GET", "/posture-checks");
  }

  createPostureCheck(
    data: Omit<NbPostureCheck, "id">,
  ): Promise<NbPostureCheck> {
    return this.request("POST", "/posture-checks", data);
  }

  updatePostureCheck(
    id: string,
    data: Omit<NbPostureCheck, "id">,
  ): Promise<NbPostureCheck> {
    return this.request("PUT", `/posture-checks/${id}`, data);
  }

  deletePostureCheck(id: string): Promise<void> {
    return this.request("DELETE", `/posture-checks/${id}`);
  }

  // ---------------------------------------------------------------------------
  // Networks
  // ---------------------------------------------------------------------------

  listNetworks(): Promise<NbNetwork[]> {
    return this.request("GET", "/networks");
  }

  createNetwork(
    data: { name: string; description?: string },
  ): Promise<NbNetwork> {
    return this.request("POST", "/networks", data);
  }

  updateNetwork(
    id: string,
    data: { name: string; description?: string },
  ): Promise<NbNetwork> {
    return this.request("PUT", `/networks/${id}`, data);
  }

  deleteNetwork(id: string): Promise<void> {
    return this.request("DELETE", `/networks/${id}`);
  }

  // ---------------------------------------------------------------------------
  // Network Resources
  // ---------------------------------------------------------------------------

  listNetworkResources(networkId: string): Promise<NbNetworkResource[]> {
    return this.request("GET", `/networks/${networkId}/resources`);
  }

  createNetworkResource(
    networkId: string,
    data: {
      name: string;
      description?: string;
      address: string;
      enabled: boolean;
      groups: string[];
    },
  ): Promise<NbNetworkResource> {
    return this.request("POST", `/networks/${networkId}/resources`, data);
  }

  updateNetworkResource(
    networkId: string,
    resourceId: string,
    data: {
      name: string;
      description?: string;
      address: string;
      enabled: boolean;
      groups: string[];
    },
  ): Promise<NbNetworkResource> {
    return this.request(
      "PUT",
      `/networks/${networkId}/resources/${resourceId}`,
      data,
    );
  }

  deleteNetworkResource(
    networkId: string,
    resourceId: string,
  ): Promise<void> {
    return this.request(
      "DELETE",
      `/networks/${networkId}/resources/${resourceId}`,
    );
  }

  // ---------------------------------------------------------------------------
  // Network Routers
  // ---------------------------------------------------------------------------

  listNetworkRouters(networkId: string): Promise<NbNetworkRouter[]> {
    return this.request("GET", `/networks/${networkId}/routers`);
  }

  createNetworkRouter(
    networkId: string,
    data: Omit<NbNetworkRouter, "id">,
  ): Promise<NbNetworkRouter> {
    return this.request("POST", `/networks/${networkId}/routers`, data);
  }

  updateNetworkRouter(
    networkId: string,
    routerId: string,
    data: Omit<NbNetworkRouter, "id">,
  ): Promise<NbNetworkRouter> {
    return this.request(
      "PUT",
      `/networks/${networkId}/routers/${routerId}`,
      data,
    );
  }

  deleteNetworkRouter(
    networkId: string,
    routerId: string,
  ): Promise<void> {
    return this.request(
      "DELETE",
      `/networks/${networkId}/routers/${routerId}`,
    );
  }

  // ---------------------------------------------------------------------------
  // Users
  // ---------------------------------------------------------------------------

  listUsers(): Promise<NbUser[]> {
    return this.request("GET", "/users");
  }

  createUser(data: {
    email: string;
    name?: string;
    role: string;
    auto_groups: string[];
    is_service_user: boolean;
  }): Promise<NbUser> {
    return this.request("POST", "/users", data);
  }

  updateUser(
    id: string,
    data: { name?: string; role?: string; auto_groups?: string[] },
  ): Promise<NbUser> {
    return this.request("PUT", `/users/${id}`, data);
  }

  deleteUser(id: string): Promise<void> {
    return this.request("DELETE", `/users/${id}`);
  }
}
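The heart of the client is the `request<T>` helper's response handling: parse a structured JSON error body when possible, fall back to plain text (proxies and gateways often return non-JSON errors), and treat 204 as success-with-no-body. That decision tree can be sketched as a standalone function; the `decodeResponse` name and the `Object.assign` error shape are illustrative assumptions, not repo code:

```typescript
// Sketch of the non-2xx / 204 / JSON decision tree from request<T>,
// extracted into a standalone helper for clarity.
async function decodeResponse<T>(resp: Response): Promise<T> {
  if (!resp.ok) {
    const text = await resp.text();
    let errorBody: unknown;
    try {
      errorBody = JSON.parse(text); // structured API error
    } catch {
      errorBody = text; // plain-text error from a proxy or gateway
    }
    throw Object.assign(new Error(`HTTP ${resp.status}`), {
      status: resp.status,
      body: errorBody,
    });
  }
  if (resp.status === 204) {
    return undefined as T; // No Content: nothing to parse
  }
  return (await resp.json()) as T;
}
```

Reading the body exactly once matters here: a `Response` body is a stream, so the error path consumes it as text and only then attempts `JSON.parse`, rather than calling `resp.json()` and `resp.text()` in sequence.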
165 src/netbird/types.ts Normal file
@@ -0,0 +1,165 @@
/** Group as returned by GET /api/groups */
export interface NbGroup {
  id: string;
  name: string;
  peers_count: number;
  peers: Array<{ id: string; name: string }> | null;
  issued: "api" | "jwt" | "integration";
}

/** Setup key as returned by GET /api/setup-keys */
export interface NbSetupKey {
  id: number;
  name: string;
  type: "one-off" | "reusable";
  key: string;
  expires: string;
  valid: boolean;
  revoked: boolean;
  used_times: number;
  state: "valid" | "expired" | "revoked" | "overused";
  auto_groups: string[];
  usage_limit: number;
}

/** Peer as returned by GET /api/peers */
export interface NbPeer {
  id: string;
  name: string;
  ip: string;
  connected: boolean;
  hostname: string;
  os: string;
  version: string;
  groups: Array<{ id: string; name: string }>;
  last_seen: string;
  dns_label: string;
  login_expiration_enabled: boolean;
  ssh_enabled: boolean;
  inactivity_expiration_enabled: boolean;
}

/** Policy as returned by GET /api/policies */
export interface NbPolicy {
  id: string;
  name: string;
  description: string;
  enabled: boolean;
  rules: NbPolicyRule[];
  source_posture_checks: string[];
}

export interface NbPolicyRule {
  id?: string;
  name: string;
  description: string;
  enabled: boolean;
  action: "accept" | "drop";
  bidirectional: boolean;
  protocol: "tcp" | "udp" | "icmp" | "all";
  ports?: string[];
  sources: Array<string | { id: string; name: string }> | null;
  destinations: Array<string | { id: string; name: string }> | null;
  destinationResource?: { id: string; type: string } | null;
}

/** Route as returned by GET /api/routes */
export interface NbRoute {
  id: string;
  description: string;
  network_id: string;
  enabled: boolean;
  peer?: string;
  peer_groups?: string[];
  network?: string;
  domains?: string[];
  metric: number;
  masquerade: boolean;
  groups: string[];
  keep_route: boolean;
}

/** DNS nameserver group as returned by GET /api/dns/nameservers */
export interface NbDnsNameserverGroup {
  id: string;
  name: string;
  description: string;
  nameservers: Array<{
    ip: string;
    ns_type: string;
    port: number;
  }>;
  enabled: boolean;
  groups: string[];
  primary: boolean;
  domains: string[];
  search_domains_enabled: boolean;
}

/** Posture check as returned by GET /api/posture-checks */
export interface NbPostureCheck {
  id: string;
  name: string;
  description: string;
  checks: Record<string, unknown>;
}

/** Network as returned by GET /api/networks */
export interface NbNetwork {
  id: string;
  name: string;
  description: string;
  resources: string[];
  routers: string[];
  policies: string[];
  routing_peers_count: number;
}

/** Network resource as returned by GET /api/networks/{id}/resources */
export interface NbNetworkResource {
  id: string;
  name: string;
  description: string;
  type: "host" | "subnet" | "domain";
  address: string;
  enabled: boolean;
  groups: Array<{
    id: string;
    name: string;
    peers_count: number;
    resources_count: number;
  }>;
}

/** Network router as returned by GET /api/networks/{id}/routers */
export interface NbNetworkRouter {
  id: string;
  peer: string | null;
  peer_groups: string[] | null;
  metric: number;
  masquerade: boolean;
  enabled: boolean;
}

/** User as returned by GET /api/users */
export interface NbUser {
  id: string;
  name: string;
  email: string;
  role: "owner" | "admin" | "user";
  status: "active" | "invited" | "blocked";
  auto_groups: string[];
  is_service_user: boolean;
}

/** Audit event as returned by GET /api/events/audit */
export interface NbEvent {
  id: number;
  timestamp: string;
  activity: string;
  activity_code: string;
  initiator_id: string;
  initiator_name: string;
  target_id: string;
  meta: Record<string, string>;
}
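One subtlety in these types: `NbPolicyRule.sources` and `.destinations` mix plain group-ID strings with `{ id, name }` objects, and either list may be `null`. Consumers usually want a flat ID list; a hedged sketch of such a normalizer (the `toGroupIds` helper name is mine, not the repo's):

```typescript
type GroupRef = string | { id: string; name: string };

// Collapse NetBird's mixed group references (bare IDs or {id, name}
// objects, possibly null) into a flat list of group IDs.
function toGroupIds(refs: GroupRef[] | null | undefined): string[] {
  if (!refs) return [];
  return refs.map((r) => (typeof r === "string" ? r : r.id));
}
```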
361 src/poller/loop.ts Normal file
@@ -0,0 +1,361 @@
import type { Config } from "../config.ts";
import type { GiteaClient } from "../gitea/client.ts";
import { GiteaApiError } from "../gitea/client.ts";
import type { NetbirdClient } from "../netbird/client.ts";
import type { NbSetupKey } from "../netbird/types.ts";
import { DesiredStateSchema } from "../state/schema.ts";
import type { DesiredState, SetupKeyConfig } from "../state/schema.ts";
import { type EnrollmentDetection, processEnrollmentEvents } from "./poller.ts";
import { loadPollerState, savePollerState } from "./state.ts";

// -----------------------------------------------------------------------------
// Context
// -----------------------------------------------------------------------------

export interface PollerContext {
  config: Config;
  netbird: NetbirdClient;
  gitea: GiteaClient | null;
  reconcileInProgress: { value: boolean };
}

// -----------------------------------------------------------------------------
// Single poll iteration
// -----------------------------------------------------------------------------

/**
 * Runs one poll cycle. Two modes:
 *
 * **Gitea mode** (gitea != null): fetch desired state from the repo, detect
 * enrollments via events, rename peers, commit `enrolled: true` back.
 *
 * **Standalone mode** (gitea == null): query NetBird setup keys directly,
 * treat unused valid keys as "unenrolled", detect enrollments via events,
 * rename peers, log (no commit).
 *
 * Skips entirely when a reconcile is in progress to avoid racing with
 * the webhook-triggered reconciliation path.
 */
export async function pollOnce(ctx: PollerContext): Promise<void> {
  if (ctx.reconcileInProgress.value) {
    console.log(
      JSON.stringify({ msg: "poll_skipped", reason: "reconcile_in_progress" }),
    );
    return;
  }

  if (ctx.gitea) {
    await pollOnceGitea(ctx, ctx.gitea);
  } else {
    await pollOnceStandalone(ctx);
  }
}

// -----------------------------------------------------------------------------
// Gitea mode — full GitOps cycle
// -----------------------------------------------------------------------------

async function pollOnceGitea(
  ctx: PollerContext,
  gitea: GiteaClient,
): Promise<void> {
  const { config, netbird } = ctx;
  const pollerState = await loadPollerState(config.dataDir);

  // Fetch current desired state from Gitea (main branch)
  const file = await gitea.getFileContent("netbird.json", "main");
  const desired: DesiredState = DesiredStateSchema.parse(
    JSON.parse(file.content),
  );

  // Build set of unenrolled setup key names
  const unenrolledKeys = new Set<string>();
  for (
    const [name, key] of Object.entries(desired.setup_keys) as [
      string,
      SetupKeyConfig,
    ][]
  ) {
    if (!key.enrolled) {
      unenrolledKeys.add(name);
    }
  }

  if (unenrolledKeys.size === 0) {
    console.log(JSON.stringify({ msg: "poll_no_unenrolled_keys" }));
    return;
  }

  // Fetch events and detect enrollments
  const events = await netbird.listEvents();
  const enrollments = processEnrollmentEvents(
    events,
    unenrolledKeys,
    pollerState.lastEventTimestamp,
  );

  if (enrollments.length === 0) {
    return;
  }

  console.log(JSON.stringify({
    msg: "poll_enrollments_detected",
    count: enrollments.length,
  }));

  // Process each enrollment: rename peer, commit enrolled:true
  let latestTimestamp = pollerState.lastEventTimestamp;
  // Track the file SHA across iterations — each successful commit changes it
  let currentSha = file.sha;
  let currentDesired = desired;

  for (const enrollment of enrollments) {
    await processEnrollment(
      ctx,
      gitea,
      enrollment,
      currentDesired,
      currentSha,
      (newSha, newDesired) => {
        currentSha = newSha;
        currentDesired = newDesired;
      },
    );

    if (!latestTimestamp || enrollment.timestamp > latestTimestamp) {
      latestTimestamp = enrollment.timestamp;
    }
  }

  // Persist the latest event timestamp so next poll skips processed events
  await savePollerState(config.dataDir, {
    lastEventTimestamp: latestTimestamp,
  });
}

// -----------------------------------------------------------------------------
// Standalone mode — no Gitea, keys come from NetBird API directly
// -----------------------------------------------------------------------------

/**
 * In standalone mode there is no git repo to read from. Instead we query
 * NetBird's setup keys API and consider any key that is valid, not revoked,
 * and has remaining uses as "unenrolled". Enrollment detection and peer
 * renaming still happen, but there is no commit-back step.
 *
 * Idempotency relies on `lastEventTimestamp` persisted in `dataDir`.
 * If poller state is lost (data dir wiped, container restart without a
 * persistent volume), historical enrollments will be replayed — peers
 * get renamed again, which is harmless but produces noisy logs.
 */
async function pollOnceStandalone(ctx: PollerContext): Promise<void> {
  const { config, netbird } = ctx;
  const pollerState = await loadPollerState(config.dataDir);

  const setupKeys = await netbird.listSetupKeys();
  const unenrolledKeys = new Set<string>();

  for (const key of setupKeys) {
    if (isUnenrolledKey(key)) {
      unenrolledKeys.add(key.name);
    }
  }

  if (unenrolledKeys.size === 0) {
    console.log(
      JSON.stringify({ msg: "poll_no_unenrolled_keys", mode: "standalone" }),
    );
    return;
  }

  const events = await netbird.listEvents();
  const enrollments = processEnrollmentEvents(
    events,
    unenrolledKeys,
    pollerState.lastEventTimestamp,
  );

  if (enrollments.length === 0) {
    return;
  }

  console.log(JSON.stringify({
    msg: "poll_enrollments_detected",
    count: enrollments.length,
    mode: "standalone",
  }));

  let latestTimestamp = pollerState.lastEventTimestamp;

  for (const enrollment of enrollments) {
    await processEnrollmentStandalone(netbird, enrollment);

    if (!latestTimestamp || enrollment.timestamp > latestTimestamp) {
      latestTimestamp = enrollment.timestamp;
    }
  }

  await savePollerState(config.dataDir, {
    lastEventTimestamp: latestTimestamp,
  });
}

/** A setup key is "unenrolled" if it is valid, not revoked, and has remaining uses. */
function isUnenrolledKey(key: NbSetupKey): boolean {
  // usage_limit === 0 means unlimited reusable key
  return key.valid && !key.revoked &&
    (key.usage_limit === 0 || key.used_times < key.usage_limit);
}
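The unenrolled-key predicate is easy to check in isolation; a sketch that re-declares a trimmed key shape for self-containment (a field subset of `NbSetupKey`, with the same unlimited-when-zero convention):

```typescript
// Trimmed-down key shape: only the fields the predicate inspects.
interface KeyLike {
  valid: boolean;
  revoked: boolean;
  used_times: number;
  usage_limit: number; // 0 means unlimited reusable key
}

// Same logic as isUnenrolledKey above: valid, not revoked, and either
// unlimited or still under its usage limit.
function isUnenrolledKeyLike(key: KeyLike): boolean {
  return key.valid && !key.revoked &&
    (key.usage_limit === 0 || key.used_times < key.usage_limit);
}
```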
/**
 * Standalone enrollment: rename the peer and log. No Gitea commit.
 */
async function processEnrollmentStandalone(
  netbird: NetbirdClient,
  enrollment: EnrollmentDetection,
): Promise<void> {
  const { setupKeyName, peerId, peerHostname } = enrollment;

  try {
    await netbird.updatePeer(peerId, { name: setupKeyName });
    console.log(JSON.stringify({
      msg: "peer_renamed",
      peer_id: peerId,
      from: peerHostname,
      to: setupKeyName,
      mode: "standalone",
      note: "gitea commit skipped in standalone mode",
    }));
  } catch (err) {
    console.error(JSON.stringify({
      msg: "peer_rename_failed",
      peer_id: peerId,
      error: err instanceof Error ? err.message : String(err),
    }));
    return;
  }
}

// -----------------------------------------------------------------------------
// Process a single enrollment (Gitea mode)
// -----------------------------------------------------------------------------

/**
 * Renames a peer to match its setup key name, then commits
 * `enrolled: true` to the repo via Gitea's optimistic concurrency.
 *
 * If the file SHA has changed (409 conflict), logs a warning and moves
 * on — the next poll cycle will retry with fresh state.
 */
async function processEnrollment(
  ctx: PollerContext,
  gitea: GiteaClient,
  enrollment: EnrollmentDetection,
  desired: DesiredState,
  fileSha: string,
  onCommit: (newSha: string, newDesired: DesiredState) => void,
): Promise<void> {
  const { netbird } = ctx;
  const { setupKeyName, peerId, peerHostname } = enrollment;

  // Rename the peer to match the setup key name
  try {
    await netbird.updatePeer(peerId, { name: setupKeyName });
    console.log(JSON.stringify({
      msg: "peer_renamed",
      peer_id: peerId,
      from: peerHostname,
      to: setupKeyName,
    }));
  } catch (err) {
    console.error(JSON.stringify({
      msg: "peer_rename_failed",
      peer_id: peerId,
      error: err instanceof Error ? err.message : String(err),
    }));
    return;
  }

  // Mark enrolled:true in desired state and commit via Gitea
  const updated = structuredClone(desired);
  updated.setup_keys[setupKeyName].enrolled = true;
  const content = JSON.stringify(updated, null, 2);

  try {
    await gitea.updateFile(
      "netbird.json",
      content,
      fileSha,
      `chore: mark ${setupKeyName} as enrolled`,
      "main",
    );

    // Fetch the new SHA for subsequent commits in this poll cycle.
    // The updateFile response from Gitea doesn't return the new blob SHA
    // in a convenient form, so we re-read it.
    const freshFile = await gitea.getFileContent("netbird.json", "main");
    onCommit(freshFile.sha, updated);

    console.log(JSON.stringify({
      msg: "enrollment_committed",
      setup_key: setupKeyName,
    }));
  } catch (err) {
    if (err instanceof GiteaApiError && err.status === 409) {
      // SHA mismatch — file was modified between read and write.
      // Will be retried on next poll cycle with fresh state.
      console.warn(JSON.stringify({
        msg: "enrollment_commit_conflict",
        setup_key: setupKeyName,
      }));
      return;
    }
    console.error(JSON.stringify({
      msg: "enrollment_commit_failed",
      setup_key: setupKeyName,
      error: err instanceof Error ? err.message : String(err),
    }));
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// -----------------------------------------------------------------------------
|
||||||
|
// Background loop
|
||||||
|
// -----------------------------------------------------------------------------
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Starts the poller background loop on a configurable interval.
|
||||||
|
*
|
||||||
|
* Returns an AbortController — call `.abort()` for graceful shutdown.
|
||||||
|
* Errors in individual poll cycles are caught and logged without
|
||||||
|
* crashing the process.
|
||||||
|
*/
|
||||||
|
export function startPollerLoop(ctx: PollerContext): AbortController {
|
||||||
|
const controller = new AbortController();
|
||||||
|
const intervalMs = ctx.config.pollIntervalSeconds * 1000;
|
||||||
|
|
||||||
|
const run = async () => {
|
||||||
|
if (controller.signal.aborted) return;
|
||||||
|
try {
|
||||||
|
await pollOnce(ctx);
|
||||||
|
} catch (err) {
|
||||||
|
console.error(JSON.stringify({
|
||||||
|
msg: "poll_error",
|
||||||
|
error: err instanceof Error ? err.message : String(err),
|
||||||
|
}));
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
const id = setInterval(run, intervalMs);
|
||||||
|
|
||||||
|
controller.signal.addEventListener("abort", () => {
|
||||||
|
clearInterval(id);
|
||||||
|
console.log(JSON.stringify({ msg: "poller_stopped" }));
|
||||||
|
});
|
||||||
|
|
||||||
|
console.log(JSON.stringify({
|
||||||
|
msg: "poller_started",
|
||||||
|
interval_seconds: ctx.config.pollIntervalSeconds,
|
||||||
|
}));
|
||||||
|
|
||||||
|
return controller;
|
||||||
|
}
|
||||||
78
src/poller/poller.test.ts
Normal file
@@ -0,0 +1,78 @@
import { assertEquals } from "@std/assert";
import type { NbEvent } from "../netbird/types.ts";
import { processEnrollmentEvents } from "./poller.ts";

/** Helper: build an NbEvent with sensible defaults, overridable per-field. */
function makeEvent(overrides: Partial<NbEvent> = {}): NbEvent {
  return {
    id: 1,
    timestamp: "2025-06-01T12:00:00Z",
    activity: "peer setup key added",
    activity_code: "peer.setupkey.add",
    initiator_id: "init-1",
    initiator_name: "admin",
    target_id: "peer-1",
    meta: { setup_key: "drone-key", name: "drone-01" },
    ...overrides,
  };
}

Deno.test("processEnrollmentEvents detects peer.setupkey.add", () => {
  const events: NbEvent[] = [
    makeEvent(), // enrollment — should match
    makeEvent({
      id: 2,
      activity_code: "peer.login",
      target_id: "peer-2",
    }), // not an enrollment
    makeEvent({
      id: 3,
      activity_code: "group.add",
      target_id: "group-1",
    }), // unrelated event
  ];

  const known = new Set(["drone-key"]);
  const result = processEnrollmentEvents(events, known, null);

  assertEquals(result.length, 1);
  assertEquals(result[0].setupKeyName, "drone-key");
  assertEquals(result[0].peerId, "peer-1");
  assertEquals(result[0].peerHostname, "drone-01");
  assertEquals(result[0].timestamp, "2025-06-01T12:00:00Z");
});

Deno.test("processEnrollmentEvents filters by lastTimestamp", () => {
  const events: NbEvent[] = [
    makeEvent({ id: 1, timestamp: "2025-06-01T10:00:00Z", target_id: "p1" }),
    makeEvent({ id: 2, timestamp: "2025-06-01T12:00:00Z", target_id: "p2" }),
    makeEvent({ id: 3, timestamp: "2025-06-01T14:00:00Z", target_id: "p3" }),
  ];

  const known = new Set(["drone-key"]);

  // Only events strictly after the watermark should pass.
  const result = processEnrollmentEvents(
    events,
    known,
    "2025-06-01T12:00:00Z",
  );

  assertEquals(result.length, 1);
  assertEquals(result[0].peerId, "p3");
  assertEquals(result[0].timestamp, "2025-06-01T14:00:00Z");
});

Deno.test("processEnrollmentEvents ignores unknown keys", () => {
  const events: NbEvent[] = [
    makeEvent({
      meta: { setup_key: "rogue-key", name: "rogue-host" },
      target_id: "peer-x",
    }),
  ];

  const known = new Set(["drone-key", "gcs-key"]);
  const result = processEnrollmentEvents(events, known, null);

  assertEquals(result.length, 0);
});
40
src/poller/poller.ts
Normal file
@@ -0,0 +1,40 @@
import type { NbEvent } from "../netbird/types.ts";

export interface EnrollmentDetection {
  setupKeyName: string;
  peerId: string;
  peerHostname: string;
  timestamp: string;
}

/**
 * Filters enrollment events from the full event list.
 * Returns enrollments for peers that enrolled using a known setup key
 * and whose timestamp is after lastTimestamp (if provided).
 */
export function processEnrollmentEvents(
  events: NbEvent[],
  knownKeyNames: Set<string>,
  lastTimestamp: string | null,
): EnrollmentDetection[] {
  return events
    .filter((e) => {
      if (e.activity_code !== "peer.setupkey.add") return false;
      if (lastTimestamp && e.timestamp <= lastTimestamp) return false;
      if (!knownKeyNames.has(e.meta.setup_key)) {
        console.log(JSON.stringify({
          msg: "unknown_enrollment",
          setup_key: e.meta.setup_key,
          peer_id: e.target_id,
        }));
        return false;
      }
      return true;
    })
    .map((e) => ({
      setupKeyName: e.meta.setup_key,
      peerId: e.target_id,
      peerHostname: e.meta.name,
      timestamp: e.timestamp,
    }));
}
24
src/poller/state.ts
Normal file
@@ -0,0 +1,24 @@
import { join } from "@std/path";

export interface PollerState {
  lastEventTimestamp: string | null;
}

export async function loadPollerState(dataDir: string): Promise<PollerState> {
  const path = join(dataDir, "poller-state.json");
  try {
    const text = await Deno.readTextFile(path);
    return JSON.parse(text) as PollerState;
  } catch {
    return { lastEventTimestamp: null };
  }
}

export async function savePollerState(
  dataDir: string,
  state: PollerState,
): Promise<void> {
  const path = join(dataDir, "poller-state.json");
  await Deno.mkdir(dataDir, { recursive: true });
  await Deno.writeTextFile(path, JSON.stringify(state, null, 2));
}
336
src/reconcile/diff.test.ts
Normal file
@@ -0,0 +1,336 @@
import { assertEquals } from "@std/assert";
import { computeDiff } from "./diff.ts";
import type { DesiredState } from "../state/schema.ts";
import type { ActualState } from "../state/actual.ts";
import type { NbGroup, NbSetupKey } from "../netbird/types.ts";

function emptyActual(): ActualState {
  return {
    groups: [],
    groupsByName: new Map(),
    groupsById: new Map(),
    setupKeys: [],
    setupKeysByName: new Map(),
    peers: [],
    peersByName: new Map(),
    peersById: new Map(),
    policies: [],
    policiesByName: new Map(),
    routes: [],
    routesByNetworkId: new Map(),
    dns: [],
    dnsByName: new Map(),
    postureChecks: [],
    postureChecksByName: new Map(),
    networks: [],
    networksByName: new Map(),
    networkResources: new Map(),
    networkRouters: new Map(),
    users: [],
    usersByEmail: new Map(),
  };
}

/** Builds a minimal DesiredState with defaults for all required sections. */
function desiredState(
  overrides: Partial<DesiredState> = {},
): DesiredState {
  return {
    groups: {},
    setup_keys: {},
    policies: {},
    routes: {},
    dns: { nameserver_groups: {} },
    posture_checks: {},
    networks: {},
    peers: {},
    users: {},
    ...overrides,
  };
}

const DESIRED: DesiredState = desiredState({
  groups: { pilots: { peers: ["Pilot-hawk-72"] } },
  setup_keys: {
    "Pilot-hawk-72": {
      type: "one-off",
      expires_in: 604800,
      usage_limit: 1,
      auto_groups: ["pilots"],
      enrolled: false,
    },
  },
});

Deno.test("computeDiff against empty actual produces create ops", () => {
  const ops = computeDiff(DESIRED, emptyActual());
  const types = ops.map((o) => o.type);
  assertEquals(types.includes("create_group"), true);
  assertEquals(types.includes("create_setup_key"), true);
});

Deno.test("computeDiff with matching state produces no ops", () => {
  const actual = emptyActual();

  const group: NbGroup = {
    id: "g1",
    name: "pilots",
    peers_count: 1,
    peers: [{ id: "p1", name: "Pilot-hawk-72" }],
    issued: "api",
  };
  actual.groupsByName.set("pilots", group);
  actual.groups = [group];

  const key: NbSetupKey = {
    id: 1,
    name: "Pilot-hawk-72",
    type: "one-off",
    key: "masked",
    expires: "2026-04-01T00:00:00Z",
    valid: true,
    revoked: false,
    used_times: 0,
    state: "valid",
    auto_groups: ["g1"],
    usage_limit: 1,
  };
  actual.setupKeysByName.set("Pilot-hawk-72", key);
  actual.setupKeys = [key];

  const ops = computeDiff(DESIRED, actual);
  assertEquals(ops.length, 0);
});

Deno.test("computeDiff does not delete system groups", () => {
  const desired = desiredState();
  const actual = emptyActual();

  const jwtGroup: NbGroup = {
    id: "g-jwt",
    name: "All",
    peers_count: 5,
    peers: [],
    issued: "jwt",
  };
  actual.groupsByName.set("All", jwtGroup);
  actual.groups = [jwtGroup];

  const ops = computeDiff(desired, actual);
  assertEquals(ops.length, 0);
});

Deno.test("computeDiff deletes api-issued groups not in desired", () => {
  const desired = desiredState();
  const actual = emptyActual();

  const staleGroup: NbGroup = {
    id: "g-old",
    name: "stale-group",
    peers_count: 0,
    peers: [],
    issued: "api",
  };
  actual.groupsByName.set("stale-group", staleGroup);
  actual.groups = [staleGroup];

  const ops = computeDiff(desired, actual);
  assertEquals(ops.length, 1);
  assertEquals(ops[0].type, "delete_group");
  assertEquals(ops[0].name, "stale-group");
});

Deno.test("computeDiff detects group peer membership change", () => {
  const actual = emptyActual();

  const group: NbGroup = {
    id: "g1",
    name: "pilots",
    peers_count: 0,
    peers: [], // No peers currently
    issued: "api",
  };
  actual.groupsByName.set("pilots", group);
  actual.groups = [group];

  // Desired has a peer in the group, actual has none
  const desired = desiredState({
    groups: { pilots: { peers: ["Pilot-hawk-72"] } },
  });
  const ops = computeDiff(desired, actual);
  const updateOps = ops.filter((o) => o.type === "update_group");
  assertEquals(updateOps.length, 1);
  assertEquals(updateOps[0].name, "pilots");
});

Deno.test("computeDiff skips enrolled setup keys", () => {
  const desired = desiredState({
    setup_keys: {
      "Already-enrolled": {
        type: "one-off",
        expires_in: 604800,
        usage_limit: 1,
        auto_groups: [],
        enrolled: true,
      },
    },
  });
  const ops = computeDiff(desired, emptyActual());
  const createKeyOps = ops.filter((o) => o.type === "create_setup_key");
  assertEquals(createKeyOps.length, 0);
});

Deno.test("computeDiff creates policy when not in actual", () => {
  const desired = desiredState({
    policies: {
      "allow-pilots": {
        description: "Allow pilot traffic",
        enabled: true,
        sources: ["pilots"],
        destinations: ["pilots"],
        bidirectional: true,
        protocol: "all",
        action: "accept",
        source_posture_checks: [],
      },
    },
  });
  const ops = computeDiff(desired, emptyActual());
  const policyOps = ops.filter((o) => o.type === "create_policy");
  assertEquals(policyOps.length, 1);
  assertEquals(policyOps[0].name, "allow-pilots");
});

Deno.test("computeDiff detects policy enabled change", () => {
  const actual = emptyActual();

  const group: NbGroup = {
    id: "g1",
    name: "pilots",
    peers_count: 0,
    peers: [],
    issued: "api",
  };
  actual.groupsByName.set("pilots", group);
  actual.groupsById.set("g1", group);
  actual.groups = [group];

  actual.policiesByName.set("allow-pilots", {
    id: "pol-1",
    name: "allow-pilots",
    description: "Allow pilot traffic",
    enabled: true, // currently enabled
    source_posture_checks: [],
    rules: [{
      name: "allow-pilots",
      description: "",
      enabled: true,
      action: "accept",
      bidirectional: true,
      protocol: "all",
      sources: [{ id: "g1", name: "pilots" }],
      destinations: [{ id: "g1", name: "pilots" }],
    }],
  });
  actual.policies = [actual.policiesByName.get("allow-pilots")!];

  const desired = desiredState({
    groups: { pilots: { peers: [] } },
    policies: {
      "allow-pilots": {
        description: "Allow pilot traffic",
        enabled: false, // desired: disabled
        sources: ["pilots"],
        destinations: ["pilots"],
        bidirectional: true,
        protocol: "all",
        action: "accept",
        source_posture_checks: [],
      },
    },
  });
  const ops = computeDiff(desired, actual);
  const updateOps = ops.filter((o) => o.type === "update_policy");
  assertEquals(updateOps.length, 1);
  assertEquals(updateOps[0].name, "allow-pilots");
});

Deno.test("computeDiff creates route when not in actual", () => {
  const desired = desiredState({
    routes: {
      "vpn-exit": {
        description: "VPN exit route",
        network: "0.0.0.0/0",
        peer_groups: ["pilots"],
        metric: 9999,
        masquerade: true,
        distribution_groups: ["pilots"],
        enabled: true,
        keep_route: true,
      },
    },
  });
  const ops = computeDiff(desired, emptyActual());
  const routeOps = ops.filter((o) => o.type === "create_route");
  assertEquals(routeOps.length, 1);
  assertEquals(routeOps[0].name, "vpn-exit");
});

Deno.test("computeDiff creates dns when not in actual", () => {
  const desired = desiredState({
    dns: {
      nameserver_groups: {
        "cloudflare": {
          description: "Cloudflare DNS",
          nameservers: [{ ip: "1.1.1.1", ns_type: "udp", port: 53 }],
          enabled: true,
          groups: ["pilots"],
          primary: true,
          domains: [],
          search_domains_enabled: false,
        },
      },
    },
  });
  const ops = computeDiff(desired, emptyActual());
  const dnsOps = ops.filter((o) => o.type === "create_dns");
  assertEquals(dnsOps.length, 1);
  assertEquals(dnsOps[0].name, "cloudflare");
});

Deno.test("computeDiff operations are sorted by EXECUTION_ORDER", () => {
  // Desired state that produces creates for multiple resource types
  const desired = desiredState({
    groups: { pilots: { peers: [] } },
    setup_keys: {
      "new-key": {
        type: "one-off",
        expires_in: 604800,
        usage_limit: 1,
        auto_groups: ["pilots"],
        enrolled: false,
      },
    },
    policies: {
      "test-policy": {
        description: "",
        enabled: true,
        sources: ["pilots"],
        destinations: ["pilots"],
        bidirectional: true,
        protocol: "all",
        action: "accept",
        source_posture_checks: [],
      },
    },
  });
  const ops = computeDiff(desired, emptyActual());

  // create_group must come before create_setup_key, which must come before
  // create_policy — matching EXECUTION_ORDER
  const groupIdx = ops.findIndex((o) => o.type === "create_group");
  const keyIdx = ops.findIndex((o) => o.type === "create_setup_key");
  const policyIdx = ops.findIndex((o) => o.type === "create_policy");
  assertEquals(groupIdx < keyIdx, true);
  assertEquals(keyIdx < policyIdx, true);
});
808
src/reconcile/diff.ts
Normal file
@@ -0,0 +1,808 @@
import type { DesiredState } from "../state/schema.ts";
import type { ActualState } from "../state/actual.ts";
import type { NbPolicyRule } from "../netbird/types.ts";
import { EXECUTION_ORDER, type Operation } from "./operations.ts";

/**
 * Compares desired state against actual state and returns an ordered list of
 * operations needed to reconcile the two. Operations are sorted by
 * EXECUTION_ORDER so that creates happen before updates, and deletions happen
 * in reverse dependency order.
 */
export function computeDiff(
  desired: DesiredState,
  actual: ActualState,
): Operation[] {
  const ops: Operation[] = [];

  diffPostureChecks(desired, actual, ops);
  diffGroups(desired, actual, ops);
  diffSetupKeys(desired, actual, ops);
  diffNetworks(desired, actual, ops);
  diffPeers(desired, actual, ops);
  diffUsers(desired, actual, ops);
  diffPolicies(desired, actual, ops);
  diffRoutes(desired, actual, ops);
  diffDns(desired, actual, ops);

  return sortByExecutionOrder(ops);
}

// ---------------------------------------------------------------------------
// Posture Checks
// ---------------------------------------------------------------------------

function diffPostureChecks(
  desired: DesiredState,
  actual: ActualState,
  ops: Operation[],
): void {
  const desiredNames = new Set(Object.keys(desired.posture_checks));

  for (const [name, config] of Object.entries(desired.posture_checks)) {
    const existing = actual.postureChecksByName.get(name);
    if (!existing) {
      ops.push({
        type: "create_posture_check",
        name,
        details: {
          description: config.description,
          checks: config.checks,
        },
      });
      continue;
    }

    if (
      existing.description !== config.description ||
      JSON.stringify(existing.checks) !== JSON.stringify(config.checks)
    ) {
      ops.push({
        type: "update_posture_check",
        name,
        details: {
          description: config.description,
          checks: config.checks,
        },
      });
    }
  }

  for (const pc of actual.postureChecks) {
    if (!desiredNames.has(pc.name)) {
      ops.push({ type: "delete_posture_check", name: pc.name });
    }
  }
}

// ---------------------------------------------------------------------------
// Groups
// ---------------------------------------------------------------------------

function diffGroups(
  desired: DesiredState,
  actual: ActualState,
  ops: Operation[],
): void {
  const desiredNames = new Set(Object.keys(desired.groups));

  for (const [name, config] of Object.entries(desired.groups)) {
    const existing = actual.groupsByName.get(name);
    if (!existing) {
      ops.push({
        type: "create_group",
        name,
        details: { peers: config.peers },
      });
      continue;
    }

    // Compare peer membership by name (sorted for stable comparison)
    const actualPeerNames = (existing.peers ?? []).map((p) => p.name).sort();
    const desiredPeerNames = [...config.peers].sort();
    if (!arraysEqual(actualPeerNames, desiredPeerNames)) {
      ops.push({
        type: "update_group",
        name,
        details: {
          desired_peers: desiredPeerNames,
          actual_peers: actualPeerNames,
        },
      });
    }
  }

  // Delete groups that exist in actual but not in desired.
  // Only delete API-issued groups — system and JWT groups are managed externally.
  for (const group of actual.groups) {
    if (
      !desiredNames.has(group.name) && group.issued === "api" &&
      group.name !== "All"
    ) {
      ops.push({ type: "delete_group", name: group.name });
    }
  }
}

// ---------------------------------------------------------------------------
// Setup Keys
// ---------------------------------------------------------------------------

function diffSetupKeys(
  desired: DesiredState,
  actual: ActualState,
  ops: Operation[],
): void {
  const desiredNames = new Set(Object.keys(desired.setup_keys));

  for (const [name, config] of Object.entries(desired.setup_keys)) {
    const existing = actual.setupKeysByName.get(name);
    if (!existing && !config.enrolled) {
      ops.push({
        type: "create_setup_key",
        name,
        details: {
          type: config.type,
          auto_groups: config.auto_groups,
          usage_limit: config.usage_limit,
          expires_in: config.expires_in,
        },
      });
    }
    // Setup keys are immutable — no update path.
  }

  // Delete keys that exist in actual but not in desired.
  for (const key of actual.setupKeys) {
    if (!desiredNames.has(key.name)) {
      ops.push({ type: "delete_setup_key", name: key.name });
    }
  }
}

// ---------------------------------------------------------------------------
// Networks (including resources and routers)
// ---------------------------------------------------------------------------

function diffNetworks(
  desired: DesiredState,
  actual: ActualState,
  ops: Operation[],
): void {
  const desiredNames = new Set(Object.keys(desired.networks));

  for (const [name, config] of Object.entries(desired.networks)) {
    const existing = actual.networksByName.get(name);

    if (!existing) {
      ops.push({
        type: "create_network",
        name,
        details: { description: config.description },
      });

      // All resources and routers under a new network are creates
      for (const res of config.resources) {
        ops.push({
          type: "create_network_resource",
          name: res.name,
          details: {
            network_name: name,
            description: res.description,
            type: res.type,
            address: res.address,
            enabled: res.enabled,
            groups: res.groups,
          },
        });
      }
      for (const router of config.routers) {
        ops.push({
          type: "create_network_router",
          name: routerKey(router),
          details: {
            network_name: name,
            peer: router.peer,
            peer_groups: router.peer_groups,
            metric: router.metric,
            masquerade: router.masquerade,
            enabled: router.enabled,
          },
        });
      }
      continue;
    }

    // Network exists — check for description change
    if (existing.description !== config.description) {
      ops.push({
        type: "update_network",
        name,
        details: { description: config.description },
      });
    }

    // Diff resources within this network
    const actualResources = actual.networkResources.get(existing.id) ?? [];
    diffNetworkResources(name, config.resources, actualResources, actual, ops);

    // Diff routers within this network
    const actualRouters = actual.networkRouters.get(existing.id) ?? [];
    diffNetworkRouters(name, config.routers, actualRouters, actual, ops);
  }

  // Delete networks not in desired (this also implicitly removes their resources/routers)
  for (const network of actual.networks) {
    if (!desiredNames.has(network.name)) {
      // Delete routers and resources first (execution order handles this,
      // but we still emit the ops)
      const routers = actual.networkRouters.get(network.id) ?? [];
      for (const router of routers) {
        ops.push({
          type: "delete_network_router",
          name: actualRouterKey(router, actual),
          details: { network_name: network.name, router_id: router.id },
        });
      }
      const resources = actual.networkResources.get(network.id) ?? [];
      for (const res of resources) {
        ops.push({
          type: "delete_network_resource",
          name: res.name,
          details: { network_name: network.name, resource_id: res.id },
        });
      }
      ops.push({ type: "delete_network", name: network.name });
    }
  }
}

function diffNetworkResources(
  networkName: string,
  desiredResources: DesiredState["networks"][string]["resources"],
  actualResources: ActualState["networkResources"] extends Map<
    string,
    infer V
  > ? V
    : never,
  actual: ActualState,
  ops: Operation[],
): void {
  const actualByName = new Map(actualResources.map((r) => [r.name, r]));
  const desiredNames = new Set(desiredResources.map((r) => r.name));

  for (const res of desiredResources) {
    const existing = actualByName.get(res.name);
    if (!existing) {
      ops.push({
        type: "create_network_resource",
        name: res.name,
        details: {
          network_name: networkName,
          description: res.description,
          type: res.type,
          address: res.address,
          enabled: res.enabled,
          groups: res.groups,
        },
      });
      continue;
    }

    // Compare fields: resolve actual group names for comparison
    const actualGroupNames = existing.groups.map((g) => g.name).sort();
    const desiredGroupNames = [...res.groups].sort();

    if (
      existing.description !== res.description ||
      existing.type !== res.type ||
      existing.address !== res.address ||
      existing.enabled !== res.enabled ||
      !arraysEqual(actualGroupNames, desiredGroupNames)
    ) {
      ops.push({
        type: "update_network_resource",
        name: res.name,
        details: {
          network_name: networkName,
          resource_id: existing.id,
          description: res.description,
          type: res.type,
          address: res.address,
          enabled: res.enabled,
          groups: res.groups,
        },
      });
    }
  }

  // Delete resources not in desired
  for (const res of actualResources) {
    if (!desiredNames.has(res.name)) {
      ops.push({
        type: "delete_network_resource",
        name: res.name,
        details: { network_name: networkName, resource_id: res.id },
      });
    }
  }
}

function diffNetworkRouters(
  networkName: string,
  desiredRouters: DesiredState["networks"][string]["routers"],
  actualRouters: ActualState["networkRouters"] extends Map<string, infer V> ? V
    : never,
  actual: ActualState,
  ops: Operation[],
): void {
  // Match routers by their key (peer name or serialized peer_groups)
  const actualByKey = new Map(
    actualRouters.map((r) => [actualRouterKey(r, actual), r]),
  );
  const desiredKeys = new Set(desiredRouters.map((r) => routerKey(r)));

  for (const router of desiredRouters) {
    const key = routerKey(router);
    const existing = actualByKey.get(key);
    if (!existing) {
      ops.push({
        type: "create_network_router",
        name: key,
        details: {
          network_name: networkName,
          peer: router.peer,
          peer_groups: router.peer_groups,
          metric: router.metric,
          masquerade: router.masquerade,
|
||||||
|
enabled: router.enabled,
|
||||||
|
},
|
||||||
|
});
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Compare mutable fields
|
||||||
|
if (
|
||||||
|
existing.metric !== router.metric ||
|
||||||
|
existing.masquerade !== router.masquerade ||
|
||||||
|
existing.enabled !== router.enabled
|
||||||
|
) {
|
||||||
|
ops.push({
|
||||||
|
type: "update_network_router",
|
||||||
|
name: key,
|
||||||
|
details: {
|
||||||
|
network_name: networkName,
|
||||||
|
router_id: existing.id,
|
||||||
|
peer: router.peer,
|
||||||
|
peer_groups: router.peer_groups,
|
||||||
|
metric: router.metric,
|
||||||
|
masquerade: router.masquerade,
|
||||||
|
enabled: router.enabled,
|
||||||
|
},
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Delete routers not in desired
|
||||||
|
for (const router of actualRouters) {
|
||||||
|
const key = actualRouterKey(router, actual);
|
||||||
|
if (!desiredKeys.has(key)) {
|
||||||
|
ops.push({
|
||||||
|
type: "delete_network_router",
|
||||||
|
name: key,
|
||||||
|
details: { network_name: networkName, router_id: router.id },
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Generates a stable key for a desired router config.
|
||||||
|
* Uses the peer name if set, otherwise serializes peer_groups sorted.
|
||||||
|
*/
|
||||||
|
function routerKey(
|
||||||
|
router: { peer?: string; peer_groups?: string[] },
|
||||||
|
): string {
|
||||||
|
if (router.peer) return `peer:${router.peer}`;
|
||||||
|
return `groups:${[...(router.peer_groups ?? [])].sort().join(",")}`;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Generates a stable key for an actual router, resolving peer ID to name.
|
||||||
|
*/
|
||||||
|
function actualRouterKey(
|
||||||
|
router: { peer: string | null; peer_groups: string[] | null },
|
||||||
|
actual: ActualState,
|
||||||
|
): string {
|
||||||
|
if (router.peer) {
|
||||||
|
const peer = actual.peersById.get(router.peer);
|
||||||
|
return `peer:${peer ? peer.name : router.peer}`;
|
||||||
|
}
|
||||||
|
// peer_groups on actual routers are group IDs — resolve to names
|
||||||
|
const groupNames = (router.peer_groups ?? [])
|
||||||
|
.map((id) => {
|
||||||
|
const g = actual.groupsById.get(id);
|
||||||
|
return g ? g.name : id;
|
||||||
|
})
|
||||||
|
.sort();
|
||||||
|
return `groups:${groupNames.join(",")}`;
|
||||||
|
}
|
||||||
|
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
// Peers
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
function diffPeers(
|
||||||
|
desired: DesiredState,
|
||||||
|
actual: ActualState,
|
||||||
|
ops: Operation[],
|
||||||
|
): void {
|
||||||
|
for (const [name, config] of Object.entries(desired.peers)) {
|
||||||
|
const existing = actual.peersByName.get(name);
|
||||||
|
if (!existing) continue; // Never create or delete peers
|
||||||
|
|
||||||
|
let changed = false;
|
||||||
|
|
||||||
|
// Compare groups (excluding "All"), resolve actual peer group names
|
||||||
|
const actualGroupNames = existing.groups
|
||||||
|
.map((g) => g.name)
|
||||||
|
.filter((n) => n !== "All")
|
||||||
|
.sort();
|
||||||
|
const desiredGroupNames = [...config.groups].sort();
|
||||||
|
if (!arraysEqual(actualGroupNames, desiredGroupNames)) {
|
||||||
|
changed = true;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (
|
||||||
|
existing.login_expiration_enabled !== config.login_expiration_enabled ||
|
||||||
|
existing.inactivity_expiration_enabled !==
|
||||||
|
config.inactivity_expiration_enabled ||
|
||||||
|
existing.ssh_enabled !== config.ssh_enabled
|
||||||
|
) {
|
||||||
|
changed = true;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (changed) {
|
||||||
|
ops.push({
|
||||||
|
type: "update_peer",
|
||||||
|
name,
|
||||||
|
details: {
|
||||||
|
groups: config.groups,
|
||||||
|
login_expiration_enabled: config.login_expiration_enabled,
|
||||||
|
inactivity_expiration_enabled: config.inactivity_expiration_enabled,
|
||||||
|
ssh_enabled: config.ssh_enabled,
|
||||||
|
},
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
// Users
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
function diffUsers(
|
||||||
|
desired: DesiredState,
|
||||||
|
actual: ActualState,
|
||||||
|
ops: Operation[],
|
||||||
|
): void {
|
||||||
|
const desiredEmails = new Set(Object.keys(desired.users));
|
||||||
|
|
||||||
|
for (const [email, config] of Object.entries(desired.users)) {
|
||||||
|
const existing = actual.usersByEmail.get(email);
|
||||||
|
if (!existing) {
|
||||||
|
ops.push({
|
||||||
|
type: "create_user",
|
||||||
|
name: email,
|
||||||
|
details: {
|
||||||
|
email,
|
||||||
|
name: config.name,
|
||||||
|
role: config.role,
|
||||||
|
auto_groups: config.auto_groups,
|
||||||
|
},
|
||||||
|
});
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Compare role and auto_groups
|
||||||
|
const actualAutoGroupNames = resolveIds(
|
||||||
|
existing.auto_groups,
|
||||||
|
actual,
|
||||||
|
).sort();
|
||||||
|
const desiredAutoGroupNames = [...config.auto_groups].sort();
|
||||||
|
|
||||||
|
if (
|
||||||
|
existing.role !== config.role ||
|
||||||
|
!arraysEqual(actualAutoGroupNames, desiredAutoGroupNames)
|
||||||
|
) {
|
||||||
|
ops.push({
|
||||||
|
type: "update_user",
|
||||||
|
name: email,
|
||||||
|
details: {
|
||||||
|
name: config.name,
|
||||||
|
role: config.role,
|
||||||
|
auto_groups: config.auto_groups,
|
||||||
|
},
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Delete users not in desired, but NEVER delete owners
|
||||||
|
for (const user of actual.users) {
|
||||||
|
if (!desiredEmails.has(user.email) && user.role !== "owner") {
|
||||||
|
ops.push({ type: "delete_user", name: user.email });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/** Resolves group IDs to group names using actual state. */
|
||||||
|
function resolveIds(ids: string[], actual: ActualState): string[] {
|
||||||
|
return ids.map((id) => {
|
||||||
|
const group = actual.groupsById.get(id);
|
||||||
|
return group ? group.name : id;
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
// Policies
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
function diffPolicies(
|
||||||
|
desired: DesiredState,
|
||||||
|
actual: ActualState,
|
||||||
|
ops: Operation[],
|
||||||
|
): void {
|
||||||
|
const desiredNames = new Set(Object.keys(desired.policies));
|
||||||
|
|
||||||
|
for (const [name, config] of Object.entries(desired.policies)) {
|
||||||
|
const existing = actual.policiesByName.get(name);
|
||||||
|
if (!existing) {
|
||||||
|
ops.push({
|
||||||
|
type: "create_policy",
|
||||||
|
name,
|
||||||
|
details: {
|
||||||
|
enabled: config.enabled,
|
||||||
|
sources: config.sources,
|
||||||
|
destinations: config.destinations,
|
||||||
|
destination_resource: config.destination_resource,
|
||||||
|
source_posture_checks: config.source_posture_checks,
|
||||||
|
},
|
||||||
|
});
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Extract group names from actual rules for comparison.
|
||||||
|
const actualSources = extractGroupNames(
|
||||||
|
existing.rules.flatMap((r) => r.sources ?? []),
|
||||||
|
actual,
|
||||||
|
).sort();
|
||||||
|
const desiredSources = [...config.sources].sort();
|
||||||
|
|
||||||
|
let destsChanged = false;
|
||||||
|
if (config.destination_resource) {
|
||||||
|
// When desired has destination_resource, compare against actual rule's destinationResource
|
||||||
|
const actualDestRes = existing.rules[0]?.destinationResource;
|
||||||
|
if (
|
||||||
|
!actualDestRes ||
|
||||||
|
actualDestRes.id !== config.destination_resource.id ||
|
||||||
|
actualDestRes.type !== config.destination_resource.type
|
||||||
|
) {
|
||||||
|
destsChanged = true;
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
// Standard group-based destination comparison
|
||||||
|
const actualDests = extractGroupNames(
|
||||||
|
existing.rules.flatMap((r) => r.destinations ?? []),
|
||||||
|
actual,
|
||||||
|
).sort();
|
||||||
|
const desiredDests = [...config.destinations].sort();
|
||||||
|
destsChanged = !arraysEqual(actualDests, desiredDests);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Compare source_posture_checks
|
||||||
|
const actualPostureChecks = [
|
||||||
|
...(existing.source_posture_checks ?? []),
|
||||||
|
].sort();
|
||||||
|
const desiredPostureChecks = [...config.source_posture_checks].sort();
|
||||||
|
const postureChecksChanged = !arraysEqual(
|
||||||
|
actualPostureChecks,
|
||||||
|
desiredPostureChecks,
|
||||||
|
);
|
||||||
|
|
||||||
|
if (
|
||||||
|
existing.enabled !== config.enabled ||
|
||||||
|
!arraysEqual(actualSources, desiredSources) ||
|
||||||
|
destsChanged ||
|
||||||
|
postureChecksChanged
|
||||||
|
) {
|
||||||
|
ops.push({
|
||||||
|
type: "update_policy",
|
||||||
|
name,
|
||||||
|
details: {
|
||||||
|
enabled: config.enabled,
|
||||||
|
sources: config.sources,
|
||||||
|
destinations: config.destinations,
|
||||||
|
destination_resource: config.destination_resource,
|
||||||
|
source_posture_checks: config.source_posture_checks,
|
||||||
|
},
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
for (const policy of actual.policies) {
|
||||||
|
if (!desiredNames.has(policy.name)) {
|
||||||
|
ops.push({ type: "delete_policy", name: policy.name });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Policy rule sources/destinations can be either plain group ID strings or
|
||||||
|
* `{id, name}` objects. This helper normalizes them to group names, falling
|
||||||
|
* back to the ID if the group is unknown (defensive).
|
||||||
|
*/
|
||||||
|
function extractGroupNames(
|
||||||
|
refs: NonNullable<NbPolicyRule["sources"]>,
|
||||||
|
actual: ActualState,
|
||||||
|
): string[] {
|
||||||
|
return refs.map((ref) => {
|
||||||
|
if (typeof ref === "object" && ref !== null) {
|
||||||
|
return ref.name;
|
||||||
|
}
|
||||||
|
// Plain string — it's a group ID. Look up the name.
|
||||||
|
const group = actual.groupsById.get(ref);
|
||||||
|
return group ? group.name : ref;
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
// Routes
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
function diffRoutes(
|
||||||
|
desired: DesiredState,
|
||||||
|
actual: ActualState,
|
||||||
|
ops: Operation[],
|
||||||
|
): void {
|
||||||
|
const desiredIds = new Set(Object.keys(desired.routes));
|
||||||
|
|
||||||
|
for (const [networkId, config] of Object.entries(desired.routes)) {
|
||||||
|
const existing = actual.routesByNetworkId.get(networkId);
|
||||||
|
if (!existing) {
|
||||||
|
ops.push({
|
||||||
|
type: "create_route",
|
||||||
|
name: networkId,
|
||||||
|
details: {
|
||||||
|
network: config.network,
|
||||||
|
domains: config.domains,
|
||||||
|
enabled: config.enabled,
|
||||||
|
description: config.description,
|
||||||
|
},
|
||||||
|
});
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (
|
||||||
|
existing.enabled !== config.enabled ||
|
||||||
|
existing.description !== config.description ||
|
||||||
|
existing.network !== config.network
|
||||||
|
) {
|
||||||
|
ops.push({
|
||||||
|
type: "update_route",
|
||||||
|
name: networkId,
|
||||||
|
details: {
|
||||||
|
enabled: config.enabled,
|
||||||
|
description: config.description,
|
||||||
|
network: config.network,
|
||||||
|
},
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
for (const route of actual.routes) {
|
||||||
|
if (!desiredIds.has(route.network_id)) {
|
||||||
|
ops.push({ type: "delete_route", name: route.network_id });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
// DNS Nameserver Groups
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
function diffDns(
|
||||||
|
desired: DesiredState,
|
||||||
|
actual: ActualState,
|
||||||
|
ops: Operation[],
|
||||||
|
): void {
|
||||||
|
const desiredNames = new Set(
|
||||||
|
Object.keys(desired.dns.nameserver_groups),
|
||||||
|
);
|
||||||
|
|
||||||
|
for (
|
||||||
|
const [name, config] of Object.entries(desired.dns.nameserver_groups)
|
||||||
|
) {
|
||||||
|
const existing = actual.dnsByName.get(name);
|
||||||
|
if (!existing) {
|
||||||
|
ops.push({
|
||||||
|
type: "create_dns",
|
||||||
|
name,
|
||||||
|
details: {
|
||||||
|
enabled: config.enabled,
|
||||||
|
primary: config.primary,
|
||||||
|
nameservers: config.nameservers,
|
||||||
|
},
|
||||||
|
});
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
const nsChanged = !nameserversEqual(
|
||||||
|
existing.nameservers,
|
||||||
|
config.nameservers,
|
||||||
|
);
|
||||||
|
|
||||||
|
if (
|
||||||
|
existing.enabled !== config.enabled ||
|
||||||
|
existing.primary !== config.primary ||
|
||||||
|
nsChanged
|
||||||
|
) {
|
||||||
|
ops.push({
|
||||||
|
type: "update_dns",
|
||||||
|
name,
|
||||||
|
details: {
|
||||||
|
enabled: config.enabled,
|
||||||
|
primary: config.primary,
|
||||||
|
nameservers: config.nameservers,
|
||||||
|
},
|
||||||
|
});
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
for (const ns of actual.dns) {
|
||||||
|
if (!desiredNames.has(ns.name)) {
|
||||||
|
ops.push({ type: "delete_dns", name: ns.name });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
// Helpers
|
||||||
|
// ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
function arraysEqual(a: string[], b: string[]): boolean {
|
||||||
|
if (a.length !== b.length) return false;
|
||||||
|
for (let i = 0; i < a.length; i++) {
|
||||||
|
if (a[i] !== b[i]) return false;
|
||||||
|
}
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Deep-compares two nameserver arrays by ip, ns_type, and port.
|
||||||
|
* Order-sensitive — the API preserves insertion order.
|
||||||
|
*/
|
||||||
|
function nameserversEqual(
|
||||||
|
a: Array<{ ip: string; ns_type: string; port: number }>,
|
||||||
|
b: Array<{ ip: string; ns_type: string; port: number }>,
|
||||||
|
): boolean {
|
||||||
|
if (a.length !== b.length) return false;
|
||||||
|
for (let i = 0; i < a.length; i++) {
|
||||||
|
if (
|
||||||
|
a[i].ip !== b[i].ip ||
|
||||||
|
a[i].ns_type !== b[i].ns_type ||
|
||||||
|
a[i].port !== b[i].port
|
||||||
|
) {
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
function sortByExecutionOrder(ops: Operation[]): Operation[] {
|
||||||
|
const orderIndex = new Map(EXECUTION_ORDER.map((t, i) => [t, i]));
|
||||||
|
return ops.sort((a, b) => {
|
||||||
|
const ai = orderIndex.get(a.type) ?? Number.MAX_SAFE_INTEGER;
|
||||||
|
const bi = orderIndex.get(b.type) ?? Number.MAX_SAFE_INTEGER;
|
||||||
|
return ai - bi;
|
||||||
|
});
|
||||||
|
}
|
||||||
235
src/reconcile/executor.test.ts
Normal file
@@ -0,0 +1,235 @@
import { assertEquals } from "@std/assert";
import { executeOperations } from "./executor.ts";
import type { Operation } from "./operations.ts";
import type { ActualState } from "../state/actual.ts";

function emptyActual(): ActualState {
  return {
    groups: [],
    groupsByName: new Map(),
    groupsById: new Map(),
    setupKeys: [],
    setupKeysByName: new Map(),
    peers: [],
    peersByName: new Map(),
    peersById: new Map(),
    policies: [],
    policiesByName: new Map(),
    routes: [],
    routesByNetworkId: new Map(),
    dns: [],
    dnsByName: new Map(),
    postureChecks: [],
    postureChecksByName: new Map(),
    networks: [],
    networksByName: new Map(),
    networkResources: new Map(),
    networkRouters: new Map(),
    users: [],
    usersByEmail: new Map(),
  };
}

Deno.test("executor calls createGroup for create_group op", async () => {
  const calls: string[] = [];
  const mockClient = {
    createGroup: (data: { name: string }) => {
      calls.push(`createGroup:${data.name}`);
      return Promise.resolve({
        id: "new-g1",
        name: data.name,
        peers_count: 0,
        peers: [],
        issued: "api" as const,
      });
    },
  };
  const ops: Operation[] = [
    { type: "create_group", name: "pilots" },
  ];
  const { results } = await executeOperations(
    ops,
    mockClient as never,
    emptyActual(),
  );
  assertEquals(calls, ["createGroup:pilots"]);
  assertEquals(results[0].status, "success");
});

Deno.test("executor aborts on first failure", async () => {
  const mockClient = {
    createGroup: () => Promise.reject(new Error("API down")),
    createSetupKey: () => Promise.resolve({ id: 1, key: "k", name: "key1" }),
  };
  const ops: Operation[] = [
    { type: "create_group", name: "pilots" },
    { type: "create_setup_key", name: "key1" },
  ];
  const { results } = await executeOperations(
    ops,
    mockClient as never,
    emptyActual(),
  );
  assertEquals(results[0].status, "failed");
  assertEquals(results.length, 1); // second op never executed
});

Deno.test("executor tracks created group IDs for setup key auto_groups", async () => {
  const calls: Array<{ method: string; data: unknown }> = [];
  const mockClient = {
    createGroup: (data: { name: string }) => {
      calls.push({ method: "createGroup", data });
      return Promise.resolve({
        id: "new-g1",
        name: data.name,
        peers_count: 0,
        peers: [],
        issued: "api" as const,
      });
    },
    createSetupKey: (data: Record<string, unknown>) => {
      calls.push({ method: "createSetupKey", data });
      return Promise.resolve({
        id: 1,
        name: data.name,
        key: "raw-key-123",
        type: data.type,
        expires: "2026-04-01T00:00:00Z",
        valid: true,
        revoked: false,
        used_times: 0,
        state: "valid" as const,
        auto_groups: data.auto_groups,
        usage_limit: data.usage_limit,
      });
    },
  };
  const ops: Operation[] = [
    { type: "create_group", name: "pilots" },
    {
      type: "create_setup_key",
      name: "key1",
      details: {
        type: "one-off",
        auto_groups: ["pilots"],
        usage_limit: 1,
        expires_in: 604800,
      },
    },
  ];
  const { results, createdKeys } = await executeOperations(
    ops,
    mockClient as never,
    emptyActual(),
  );
  assertEquals(results.length, 2);
  assertEquals(results[0].status, "success");
  assertEquals(results[1].status, "success");
  // The setup key call should have resolved "pilots" -> "new-g1"
  const setupKeyCall = calls.find((c) => c.method === "createSetupKey");
  assertEquals(
    (setupKeyCall?.data as Record<string, unknown>).auto_groups,
    ["new-g1"],
  );
  // Created keys map stores the raw key
  assertEquals(createdKeys.get("key1"), "raw-key-123");
});

Deno.test("executor resolves group IDs from actual state", async () => {
  const calls: Array<{ method: string; data: unknown }> = [];
  const actual = emptyActual();
  actual.groupsByName.set("pilots", {
    id: "existing-g1",
    name: "pilots",
    peers_count: 0,
    peers: [],
    issued: "api",
  });

  const mockClient = {
    createSetupKey: (data: Record<string, unknown>) => {
      calls.push({ method: "createSetupKey", data });
      return Promise.resolve({
        id: 1,
        name: data.name,
        key: "raw-key-456",
        type: data.type,
        expires: "2026-04-01T00:00:00Z",
        valid: true,
        revoked: false,
        used_times: 0,
        state: "valid" as const,
        auto_groups: data.auto_groups,
        usage_limit: data.usage_limit,
      });
    },
  };
  const ops: Operation[] = [
    {
      type: "create_setup_key",
      name: "key1",
      details: {
        type: "one-off",
        auto_groups: ["pilots"],
        usage_limit: 1,
        expires_in: 604800,
      },
    },
  ];
  const { results } = await executeOperations(
    ops,
    mockClient as never,
    actual,
  );
  assertEquals(results[0].status, "success");
  const setupKeyCall = calls.find((c) => c.method === "createSetupKey");
  assertEquals(
    (setupKeyCall?.data as Record<string, unknown>).auto_groups,
    ["existing-g1"],
  );
});

Deno.test("executor deletes group by resolving ID from actual", async () => {
  const calls: string[] = [];
  const actual = emptyActual();
  actual.groupsByName.set("stale-group", {
    id: "g-old",
    name: "stale-group",
    peers_count: 0,
    peers: [],
    issued: "api",
  });

  const mockClient = {
    deleteGroup: (id: string) => {
      calls.push(`deleteGroup:${id}`);
      return Promise.resolve();
    },
  };
  const ops: Operation[] = [
    { type: "delete_group", name: "stale-group" },
  ];
  const { results } = await executeOperations(
    ops,
    mockClient as never,
    actual,
  );
  assertEquals(calls, ["deleteGroup:g-old"]);
  assertEquals(results[0].status, "success");
});

Deno.test("executor stores error message on failure", async () => {
  const mockClient = {
    createGroup: () => Promise.reject(new Error("rate limited")),
  };
  const ops: Operation[] = [
    { type: "create_group", name: "pilots" },
  ];
  const { results } = await executeOperations(
    ops,
    mockClient as never,
    emptyActual(),
  );
  assertEquals(results[0].status, "failed");
  assertEquals(results[0].error, "rate limited");
});
679
src/reconcile/executor.ts
Normal file
@@ -0,0 +1,679 @@
|
|||||||
|
import type { NetbirdClient } from "../netbird/client.ts";
|
||||||
|
import type { ActualState } from "../state/actual.ts";
|
||||||
|
import type { Operation, OperationResult } from "./operations.ts";
|
||||||
|
|
||||||
|
export interface ExecutionResult {
|
||||||
|
results: OperationResult[];
|
||||||
|
createdKeys: Map<string, string>;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Subset of NetbirdClient methods the executor actually calls.
|
||||||
|
*
|
||||||
|
* Using a structural pick keeps tests simple — callers can pass a partial
|
||||||
|
* mock that satisfies only the methods their operations need.
|
||||||
|
*/
|
||||||
|
type ExecutorClient = Pick<
|
||||||
|
NetbirdClient,
|
||||||
|
| "createGroup"
|
||||||
|
| "updateGroup"
|
||||||
|
| "deleteGroup"
|
||||||
|
| "createSetupKey"
|
||||||
|
| "deleteSetupKey"
|
||||||
|
| "updatePeer"
|
||||||
|
| "deletePeer"
|
||||||
|
| "createPolicy"
|
||||||
|
| "updatePolicy"
|
||||||
|
| "deletePolicy"
|
||||||
|
| "createRoute"
|
||||||
|
| "updateRoute"
|
||||||
|
| "deleteRoute"
|
||||||
|
| "createDnsNameserverGroup"
|
||||||
|
| "updateDnsNameserverGroup"
|
||||||
|
| "deleteDnsNameserverGroup"
|
||||||
|
| "createPostureCheck"
|
||||||
|
| "updatePostureCheck"
|
||||||
|
| "deletePostureCheck"
|
||||||
|
| "createNetwork"
|
||||||
|
| "updateNetwork"
|
||||||
|
| "deleteNetwork"
|
||||||
|
| "createNetworkResource"
|
||||||
|
| "updateNetworkResource"
|
||||||
|
| "deleteNetworkResource"
|
||||||
|
| "createNetworkRouter"
|
||||||
|
| "updateNetworkRouter"
|
||||||
|
| "deleteNetworkRouter"
|
||||||
|
| "createUser"
|
||||||
|
| "updateUser"
|
||||||
|
| "deleteUser"
|
||||||
|
>;
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Executes a list of operations against the NetBird API, aborting on the
|
||||||
|
* first failure. Resolves names to IDs using the provided actual state and
|
||||||
|
* tracks newly created resource IDs for cross-referencing within the same run.
|
||||||
|
*
|
||||||
|
* Returns both the per-operation results and a map of created setup key
|
||||||
|
* names to their raw key values (needed for enrollment output).
|
||||||
|
*/
|
||||||
|
export async function executeOperations(
|
||||||
|
ops: Operation[],
|
||||||
|
client: ExecutorClient,
|
||||||
|
actual: ActualState,
|
||||||
|
): Promise<ExecutionResult> {
|
||||||
|
const results: OperationResult[] = [];
|
||||||
|
const createdGroupIds = new Map<string, string>();
|
||||||
|
const createdPostureCheckIds = new Map<string, string>();
|
||||||
|
const createdNetworkIds = new Map<string, string>();
|
||||||
|
const createdKeys = new Map<string, string>();
|
||||||
|
|
||||||
|
function resolveGroupId(name: string): string {
|
||||||
|
const created = createdGroupIds.get(name);
|
||||||
|
if (created) return created;
|
||||||
|
const existing = actual.groupsByName.get(name);
|
||||||
|
if (existing) return existing.id;
|
||||||
|
throw new Error(`group "${name}" not found`);
|
||||||
|
}
|
||||||
|
|
||||||
|
function resolveGroupIds(names: string[]): string[] {
|
||||||
|
return names.map(resolveGroupId);
|
||||||
|
}
|
||||||
|
|
||||||
|
function resolvePeerIds(names: string[]): string[] {
|
||||||
|
return names.map((name) => {
|
||||||
|
const peer = actual.peersByName.get(name);
|
||||||
|
if (peer) return peer.id;
|
||||||
|
throw new Error(`peer "${name}" not found`);
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
function resolvePeerId(name: string): string {
|
||||||
|
const peer = actual.peersByName.get(name);
|
||||||
|
if (peer) return peer.id;
|
||||||
|
throw new Error(`peer "${name}" not found`);
|
||||||
|
}
|
||||||
|
|
||||||
|
function resolvePostureCheckId(name: string): string {
|
||||||
|
const created = createdPostureCheckIds.get(name);
|
||||||
|
if (created) return created;
|
||||||
|
const existing = actual.postureChecksByName.get(name);
|
||||||
|
if (existing) return existing.id;
|
||||||
|
throw new Error(`posture check "${name}" not found`);
|
||||||
|
}
|
||||||
|
|
||||||
|
function resolveNetworkId(name: string): string {
|
||||||
|
const created = createdNetworkIds.get(name);
|
||||||
|
if (created) return created;
|
||||||
|
    const existing = actual.networksByName.get(name);
    if (existing) return existing.id;
    throw new Error(`network "${name}" not found`);
  }

  for (const op of ops) {
    try {
      await executeSingle(op, client, actual, {
        createdGroupIds,
        createdPostureCheckIds,
        createdNetworkIds,
        createdKeys,
        resolveGroupId,
        resolveGroupIds,
        resolvePeerIds,
        resolvePeerId,
        resolvePostureCheckId,
        resolveNetworkId,
      });
      results.push({ ...op, status: "success" });
    } catch (err) {
      results.push({
        ...op,
        status: "failed",
        error: err instanceof Error ? err.message : String(err),
      });
      break;
    }
  }

  return { results, createdKeys };
}

// ---------------------------------------------------------------------------
// Internal dispatch
// ---------------------------------------------------------------------------

interface ExecutorContext {
  createdGroupIds: Map<string, string>;
  createdPostureCheckIds: Map<string, string>;
  createdNetworkIds: Map<string, string>;
  createdKeys: Map<string, string>;
  resolveGroupId: (name: string) => string;
  resolveGroupIds: (names: string[]) => string[];
  resolvePeerIds: (names: string[]) => string[];
  resolvePeerId: (name: string) => string;
  resolvePostureCheckId: (name: string) => string;
  resolveNetworkId: (name: string) => string;
}

async function executeSingle(
  op: Operation,
  client: ExecutorClient,
  actual: ActualState,
  ctx: ExecutorContext,
): Promise<void> {
  const d = op.details ?? {};

  switch (op.type) {
    // ----- Posture Checks -----
    case "create_posture_check": {
      const pc = await client.createPostureCheck({
        name: op.name,
        description: (d.description as string) ?? "",
        checks: (d.checks as Record<string, unknown>) ?? {},
      });
      ctx.createdPostureCheckIds.set(op.name, pc.id);
      break;
    }
    case "update_posture_check": {
      const existing = actual.postureChecksByName.get(op.name);
      if (!existing) {
        throw new Error(`posture check "${op.name}" not found for update`);
      }
      await client.updatePostureCheck(existing.id, {
        name: op.name,
        description: (d.description as string) ?? existing.description,
        checks: (d.checks as Record<string, unknown>) ?? existing.checks,
      });
      break;
    }
    case "delete_posture_check": {
      const existing = actual.postureChecksByName.get(op.name);
      if (!existing) {
        throw new Error(`posture check "${op.name}" not found for delete`);
      }
      await client.deletePostureCheck(existing.id);
      break;
    }

    // ----- Groups -----
    case "create_group": {
      const peerNames = d.peers as string[] | undefined;
      const peerIds = peerNames?.length ? ctx.resolvePeerIds(peerNames) : [];
      const group = await client.createGroup({
        name: op.name,
        peers: peerIds,
      });
      ctx.createdGroupIds.set(op.name, group.id);
      break;
    }
    case "update_group": {
      const existing = actual.groupsByName.get(op.name);
      if (!existing) {
        throw new Error(`group "${op.name}" not found for update`);
      }
      const desiredPeers = d.desired_peers as string[] | undefined;
      const peerIds = desiredPeers?.length
        ? ctx.resolvePeerIds(desiredPeers)
        : [];
      await client.updateGroup(existing.id, {
        name: op.name,
        peers: peerIds,
      });
      break;
    }
    case "delete_group": {
      const existing = actual.groupsByName.get(op.name);
      if (!existing) {
        throw new Error(`group "${op.name}" not found for delete`);
      }
      await client.deleteGroup(existing.id);
      break;
    }

    // ----- Setup Keys -----
    case "create_setup_key": {
      const autoGroupNames = d.auto_groups as string[] | undefined;
      const autoGroupIds = autoGroupNames?.length
        ? ctx.resolveGroupIds(autoGroupNames)
        : [];
      const key = await client.createSetupKey({
        name: op.name,
        type: (d.type as "one-off" | "reusable") ?? "one-off",
        expires_in: (d.expires_in as number) ?? 604800,
        auto_groups: autoGroupIds,
        usage_limit: d.usage_limit as number | undefined,
      });
      ctx.createdKeys.set(op.name, key.key);
      break;
    }
    case "delete_setup_key": {
      const existing = actual.setupKeysByName.get(op.name);
      if (!existing) {
        throw new Error(`setup key "${op.name}" not found for delete`);
      }
      await client.deleteSetupKey(existing.id);
      break;
    }

    // ----- Peers -----
    case "rename_peer": {
      const peerId = d.id as string;
      if (!peerId) throw new Error(`rename_peer missing details.id`);
      await client.updatePeer(peerId, { name: op.name });
      break;
    }
    case "update_peer_groups": {
      const peerId = d.id as string;
      if (!peerId) throw new Error(`update_peer_groups missing details.id`);
      await client.updatePeer(peerId, {
        name: d.name as string | undefined,
        ssh_enabled: d.ssh_enabled as boolean | undefined,
        login_expiration_enabled: d.login_expiration_enabled as
          | boolean
          | undefined,
      });
      break;
    }
    case "update_peer": {
      const peerId = ctx.resolvePeerId(op.name);
      await client.updatePeer(peerId, {
        login_expiration_enabled: d.login_expiration_enabled as
          | boolean
          | undefined,
        inactivity_expiration_enabled: d.inactivity_expiration_enabled as
          | boolean
          | undefined,
        ssh_enabled: d.ssh_enabled as boolean | undefined,
      });
      break;
    }
    case "delete_peer": {
      const peer = actual.peersByName.get(op.name);
      if (!peer) throw new Error(`peer "${op.name}" not found for delete`);
      await client.deletePeer(peer.id);
      break;
    }

    // ----- Networks -----
    case "create_network": {
      const network = await client.createNetwork({
        name: op.name,
        description: (d.description as string) ?? "",
      });
      ctx.createdNetworkIds.set(op.name, network.id);
      break;
    }
    case "update_network": {
      const existing = actual.networksByName.get(op.name);
      if (!existing) {
        throw new Error(`network "${op.name}" not found for update`);
      }
      await client.updateNetwork(existing.id, {
        name: op.name,
        description: (d.description as string) ?? existing.description,
      });
      break;
    }
    case "delete_network": {
      const existing = actual.networksByName.get(op.name);
      if (!existing) {
        throw new Error(`network "${op.name}" not found for delete`);
      }
      await client.deleteNetwork(existing.id);
      break;
    }

    // ----- Network Resources -----
    case "create_network_resource": {
      const networkName = d.network_name as string;
      if (!networkName) {
        throw new Error("create_network_resource missing network_name");
      }
      const networkId = ctx.resolveNetworkId(networkName);
      const groupIds = ctx.resolveGroupIds(d.groups as string[] ?? []);
      await client.createNetworkResource(networkId, {
        name: op.name,
        description: (d.description as string) ?? "",
        address: d.address as string,
        enabled: (d.enabled as boolean) ?? true,
        groups: groupIds,
      });
      break;
    }
    case "update_network_resource": {
      const networkName = d.network_name as string;
      if (!networkName) {
        throw new Error("update_network_resource missing network_name");
      }
      const networkId = ctx.resolveNetworkId(networkName);
      const resourceId = d.resource_id as string;
      if (!resourceId) {
        throw new Error("update_network_resource missing resource_id");
      }
      const groupIds = ctx.resolveGroupIds(d.groups as string[] ?? []);
      await client.updateNetworkResource(networkId, resourceId, {
        name: op.name,
        description: (d.description as string) ?? "",
        address: d.address as string,
        enabled: (d.enabled as boolean) ?? true,
        groups: groupIds,
      });
      break;
    }
    case "delete_network_resource": {
      const networkName = d.network_name as string;
      if (!networkName) {
        throw new Error("delete_network_resource missing network_name");
      }
      const networkId = ctx.resolveNetworkId(networkName);
      const resourceId = d.resource_id as string;
      if (!resourceId) {
        throw new Error("delete_network_resource missing resource_id");
      }
      await client.deleteNetworkResource(networkId, resourceId);
      break;
    }

    // ----- Network Routers -----
    case "create_network_router": {
      const networkName = d.network_name as string;
      if (!networkName) {
        throw new Error("create_network_router missing network_name");
      }
      const networkId = ctx.resolveNetworkId(networkName);
      const peer = d.peer ? ctx.resolvePeerId(d.peer as string) : null;
      const peerGroups = d.peer_groups
        ? ctx.resolveGroupIds(d.peer_groups as string[])
        : null;
      await client.createNetworkRouter(networkId, {
        peer,
        peer_groups: peerGroups,
        metric: (d.metric as number) ?? 9999,
        masquerade: (d.masquerade as boolean) ?? true,
        enabled: (d.enabled as boolean) ?? true,
      });
      break;
    }
    case "update_network_router": {
      const networkName = d.network_name as string;
      if (!networkName) {
        throw new Error("update_network_router missing network_name");
      }
      const networkId = ctx.resolveNetworkId(networkName);
      const routerId = d.router_id as string;
      if (!routerId) {
        throw new Error("update_network_router missing router_id");
      }
      const peer = d.peer ? ctx.resolvePeerId(d.peer as string) : null;
      const peerGroups = d.peer_groups
        ? ctx.resolveGroupIds(d.peer_groups as string[])
        : null;
      await client.updateNetworkRouter(networkId, routerId, {
        peer,
        peer_groups: peerGroups,
        metric: (d.metric as number) ?? 9999,
        masquerade: (d.masquerade as boolean) ?? true,
        enabled: (d.enabled as boolean) ?? true,
      });
      break;
    }
    case "delete_network_router": {
      const networkName = d.network_name as string;
      if (!networkName) {
        throw new Error("delete_network_router missing network_name");
      }
      const networkId = ctx.resolveNetworkId(networkName);
      const routerId = d.router_id as string;
      if (!routerId) {
        throw new Error("delete_network_router missing router_id");
      }
      await client.deleteNetworkRouter(networkId, routerId);
      break;
    }

    // ----- Users -----
    case "create_user": {
      const autoGroupIds = ctx.resolveGroupIds(
        d.auto_groups as string[] ?? [],
      );
      await client.createUser({
        email: d.email as string,
        name: d.name as string | undefined,
        role: d.role as string,
        auto_groups: autoGroupIds,
        is_service_user: false,
      });
      break;
    }
    case "update_user": {
      const existing = actual.usersByEmail.get(op.name);
      if (!existing) {
        throw new Error(`user "${op.name}" not found for update`);
      }
      const autoGroupIds = ctx.resolveGroupIds(
        d.auto_groups as string[] ?? [],
      );
      await client.updateUser(existing.id, {
        name: d.name as string | undefined,
        role: d.role as string | undefined,
        auto_groups: autoGroupIds,
      });
      break;
    }
    case "delete_user": {
      const existing = actual.usersByEmail.get(op.name);
      if (!existing) {
        throw new Error(`user "${op.name}" not found for delete`);
      }
      await client.deleteUser(existing.id);
      break;
    }

    // ----- Policies -----
    case "create_policy": {
      const sourceIds = ctx.resolveGroupIds(d.sources as string[] ?? []);
      const destResource = d.destination_resource as
        | { id: string; type: string }
        | undefined;
      const destIds = destResource
        ? []
        : ctx.resolveGroupIds(d.destinations as string[] ?? []);
      const postureCheckIds = (d.source_posture_checks as string[] ?? [])
        .map((name) => ctx.resolvePostureCheckId(name));
      const rule: Record<string, unknown> = {
        name: op.name,
        description: (d.description as string) ?? "",
        enabled: (d.enabled as boolean) ?? true,
        action: (d.action as string) ?? "accept",
        bidirectional: (d.bidirectional as boolean) ?? true,
        protocol: (d.protocol as string) ?? "all",
        ports: d.ports as string[] | undefined,
        sources: sourceIds,
        destinations: destIds,
      };
      if (destResource) {
        rule.destinationResource = destResource;
      }
      await client.createPolicy({
        name: op.name,
        description: (d.description as string) ?? "",
        enabled: (d.enabled as boolean) ?? true,
        source_posture_checks: postureCheckIds,
        rules: [
          rule as unknown as import("../netbird/types.ts").NbPolicyRule,
        ],
      });
      break;
    }
    case "update_policy": {
      const existing = actual.policiesByName.get(op.name);
      if (!existing) {
        throw new Error(`policy "${op.name}" not found for update`);
      }
      const sourceIds = ctx.resolveGroupIds(d.sources as string[] ?? []);
      const destResource = d.destination_resource as
        | { id: string; type: string }
        | undefined;
      const destIds = destResource
        ? []
        : ctx.resolveGroupIds(d.destinations as string[] ?? []);
      const postureCheckIds = (d.source_posture_checks as string[] ?? [])
        .map((name) => ctx.resolvePostureCheckId(name));
      const rule: Record<string, unknown> = {
        name: op.name,
        description: (d.description as string) ?? existing.description,
        enabled: (d.enabled as boolean) ?? existing.enabled,
        action: (d.action as string) ?? "accept",
        bidirectional: (d.bidirectional as boolean) ?? true,
        protocol: (d.protocol as string) ?? "all",
        ports: d.ports as string[] | undefined,
        sources: sourceIds,
        destinations: destIds,
      };
      if (destResource) {
        rule.destinationResource = destResource;
      }
      await client.updatePolicy(existing.id, {
        name: op.name,
        description: (d.description as string) ?? existing.description,
        enabled: (d.enabled as boolean) ?? existing.enabled,
        source_posture_checks: postureCheckIds,
        rules: [
          rule as unknown as import("../netbird/types.ts").NbPolicyRule,
        ],
      });
      break;
    }
    case "delete_policy": {
      const existing = actual.policiesByName.get(op.name);
      if (!existing) {
        throw new Error(`policy "${op.name}" not found for delete`);
      }
      await client.deletePolicy(existing.id);
      break;
    }

    // ----- Routes -----
    case "create_route": {
      const peerGroupIds = d.peer_groups
        ? ctx.resolveGroupIds(d.peer_groups as string[])
        : undefined;
      const distGroupIds = d.distribution_groups
        ? ctx.resolveGroupIds(d.distribution_groups as string[])
        : [];
      await client.createRoute({
        network_id: op.name,
        description: (d.description as string) ?? "",
        enabled: (d.enabled as boolean) ?? true,
        network: d.network as string | undefined,
        domains: d.domains as string[] | undefined,
        peer: d.peer as string | undefined,
        peer_groups: peerGroupIds,
        metric: (d.metric as number) ?? 9999,
        masquerade: (d.masquerade as boolean) ?? true,
        groups: distGroupIds,
        keep_route: (d.keep_route as boolean) ?? true,
      });
      break;
    }
    case "update_route": {
      const existing = actual.routesByNetworkId.get(op.name);
      if (!existing) {
        throw new Error(`route "${op.name}" not found for update`);
      }
      const peerGroupIds = d.peer_groups
        ? ctx.resolveGroupIds(d.peer_groups as string[])
        : existing.peer_groups;
      const distGroupIds = d.distribution_groups
        ? ctx.resolveGroupIds(d.distribution_groups as string[])
        : existing.groups;
      await client.updateRoute(existing.id, {
        network_id: op.name,
        description: (d.description as string) ?? existing.description,
        enabled: (d.enabled as boolean) ?? existing.enabled,
        network: (d.network as string | undefined) ?? existing.network,
        domains: (d.domains as string[] | undefined) ?? existing.domains,
        peer: (d.peer as string | undefined) ?? existing.peer,
        peer_groups: peerGroupIds,
        metric: (d.metric as number) ?? existing.metric,
        masquerade: (d.masquerade as boolean) ?? existing.masquerade,
        groups: distGroupIds,
        keep_route: (d.keep_route as boolean) ?? existing.keep_route,
      });
      break;
    }
    case "delete_route": {
      const existing = actual.routesByNetworkId.get(op.name);
      if (!existing) {
        throw new Error(`route "${op.name}" not found for delete`);
      }
      await client.deleteRoute(existing.id);
      break;
    }

    // ----- DNS Nameserver Groups -----
    case "create_dns": {
      const groupIds = d.groups
        ? ctx.resolveGroupIds(d.groups as string[])
        : [];
      await client.createDnsNameserverGroup({
        name: op.name,
        description: (d.description as string) ?? "",
        nameservers: (d.nameservers as Array<{
          ip: string;
          ns_type: string;
          port: number;
        }>) ?? [],
        enabled: (d.enabled as boolean) ?? true,
        groups: groupIds,
        primary: (d.primary as boolean) ?? false,
        domains: (d.domains as string[]) ?? [],
        search_domains_enabled: (d.search_domains_enabled as boolean) ?? false,
      });
      break;
    }
    case "update_dns": {
      const existing = actual.dnsByName.get(op.name);
      if (!existing) {
        throw new Error(
          `dns nameserver group "${op.name}" not found for update`,
        );
      }
      const groupIds = d.groups
        ? ctx.resolveGroupIds(d.groups as string[])
        : existing.groups;
      await client.updateDnsNameserverGroup(existing.id, {
        name: op.name,
        description: (d.description as string) ?? existing.description,
        nameservers: (d.nameservers as Array<{
          ip: string;
          ns_type: string;
          port: number;
        }>) ?? existing.nameservers,
        enabled: (d.enabled as boolean) ?? existing.enabled,
        groups: groupIds,
        primary: (d.primary as boolean) ?? existing.primary,
        domains: (d.domains as string[]) ?? existing.domains,
        search_domains_enabled: (d.search_domains_enabled as boolean) ??
          existing.search_domains_enabled,
      });
      break;
    }
    case "delete_dns": {
      const existing = actual.dnsByName.get(op.name);
      if (!existing) {
        throw new Error(
          `dns nameserver group "${op.name}" not found for delete`,
        );
      }
      await client.deleteDnsNameserverGroup(existing.id);
      break;
    }

    default: {
      // Exhaustiveness check — if a new OperationType is added but not
      // handled here, TypeScript will flag it at compile time.
      const _exhaustive: never = op.type;
      throw new Error(`unknown operation type: ${_exhaustive}`);
    }
  }
}
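The `default` arm of `executeSingle` relies on TypeScript's `never`-based exhaustiveness checking. A minimal standalone sketch of the same pattern; the `Shape`/`area` names are illustrative only, not from this codebase:

```typescript
type Shape = "circle" | "square";

// Mirrors the `default` arm of executeSingle: if a new variant is
// added to Shape but not handled in the switch, the assignment to
// `never` fails to type-check, surfacing the omission at compile time.
function area(kind: Shape, size: number): number {
  switch (kind) {
    case "circle":
      return Math.PI * size * size;
    case "square":
      return size * size;
    default: {
      const _exhaustive: never = kind;
      throw new Error(`unknown shape: ${_exhaustive}`);
    }
  }
}
```

At runtime the `default` arm is unreachable for well-typed callers; it only fires if an untyped value sneaks past the compiler.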
84	src/reconcile/operations.ts	Normal file
@@ -0,0 +1,84 @@
export type OperationType =
  | "create_group"
  | "update_group"
  | "delete_group"
  | "create_setup_key"
  | "delete_setup_key"
  | "rename_peer"
  | "update_peer_groups"
  | "update_peer"
  | "delete_peer"
  | "create_policy"
  | "update_policy"
  | "delete_policy"
  | "create_route"
  | "update_route"
  | "delete_route"
  | "create_dns"
  | "update_dns"
  | "delete_dns"
  | "create_posture_check"
  | "update_posture_check"
  | "delete_posture_check"
  | "create_network"
  | "update_network"
  | "delete_network"
  | "create_network_resource"
  | "update_network_resource"
  | "delete_network_resource"
  | "create_network_router"
  | "update_network_router"
  | "delete_network_router"
  | "create_user"
  | "update_user"
  | "delete_user";

export interface Operation {
  type: OperationType;
  name: string;
  details?: Record<string, unknown>;
}

export interface OperationResult extends Operation {
  status: "success" | "failed" | "skipped";
  error?: string;
}

/** Order in which operation types must be executed */
export const EXECUTION_ORDER: OperationType[] = [
  // Creates: dependencies first
  "create_posture_check",
  "update_posture_check",
  "create_group",
  "update_group",
  "create_setup_key",
  "rename_peer",
  "update_peer_groups",
  "update_peer",
  "create_network",
  "update_network",
  "create_network_resource",
  "update_network_resource",
  "create_network_router",
  "update_network_router",
  "create_user",
  "update_user",
  "create_policy",
  "update_policy",
  "create_route",
  "update_route",
  "create_dns",
  "update_dns",
  // Deletions in reverse dependency order
  "delete_dns",
  "delete_route",
  "delete_policy",
  "delete_user",
  "delete_network_router",
  "delete_network_resource",
  "delete_network",
  "delete_peer",
  "delete_setup_key",
  "delete_posture_check",
  "delete_group",
];
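The diff does not show where `EXECUTION_ORDER` is consumed; a natural way to apply it is to sort a batch of operations by their index in the array. A hedged standalone sketch using a subset of the real operation types (the `sortOps` helper is an assumption, not code from this repository):

```typescript
// Subset of the real OperationType union, for illustration.
type OperationType = "create_group" | "create_policy" | "delete_group";

interface Operation {
  type: OperationType;
  name: string;
}

// Subset of EXECUTION_ORDER: creates before the matching deletes.
const EXECUTION_ORDER: OperationType[] = [
  "create_group",
  "create_policy",
  "delete_group",
];

// Hypothetical consumer: stable sort by position in EXECUTION_ORDER,
// so dependencies (e.g. groups a policy references) are created first.
function sortOps(ops: Operation[]): Operation[] {
  return [...ops].sort(
    (a, b) => EXECUTION_ORDER.indexOf(a.type) - EXECUTION_ORDER.indexOf(b.type),
  );
}

const sorted = sortOps([
  { type: "delete_group", name: "old" },
  { type: "create_group", name: "eng" },
]);
```

Copying the array before sorting keeps the caller's batch untouched, which matters if the unsorted plan is also reported back (as in the dry-run response).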
285	src/server.ts	Normal file
@@ -0,0 +1,285 @@
import type { Config } from "./config.ts";
import type { NetbirdClient } from "./netbird/client.ts";
import type { GiteaClient } from "./gitea/client.ts";
import { DesiredStateSchema, validateCrossReferences } from "./state/schema.ts";
import { fetchActualState } from "./state/actual.ts";
import { computeDiff } from "./reconcile/diff.ts";
import { executeOperations } from "./reconcile/executor.ts";
import type { OperationResult } from "./reconcile/operations.ts";
import { type PollerContext, pollOnce } from "./poller/loop.ts";
import { exportState } from "./export.ts";

export interface ServerContext {
  config: Config;
  netbird: NetbirdClient;
  gitea: GiteaClient | null;
  reconcileInProgress: { value: boolean };
}

export function createHandler(
  ctx: ServerContext,
): (req: Request) => Promise<Response> {
  return async (req: Request): Promise<Response> => {
    const url = new URL(req.url);

    // Health check — no auth required
    if (url.pathname === "/health" && req.method === "GET") {
      return Response.json({ status: "ok" });
    }

    // All other endpoints require bearer token auth
    const authHeader = req.headers.get("Authorization");
    if (authHeader !== `Bearer ${ctx.config.reconcilerToken}`) {
      return Response.json({ error: "unauthorized" }, { status: 401 });
    }

    if (url.pathname === "/reconcile" && req.method === "POST") {
      return handleReconcile(req, url, ctx);
    }

    if (url.pathname === "/sync-events" && req.method === "POST") {
      return handleSyncEvents(ctx);
    }

    if (url.pathname === "/export" && req.method === "GET") {
      return handleExport(ctx);
    }

    return Response.json({ error: "not found" }, { status: 404 });
  };
}

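The auth gate in `createHandler` is a plain string comparison of the raw header against `Bearer <token>`. A standalone sketch of that check using the web-standard `Request` type; the token value and URLs here are made up:

```typescript
// Illustrative stand-in for the bearer-token gate in createHandler.
const reconcilerToken = "example-token"; // hypothetical value

function authorized(req: Request): boolean {
  // Exact match against `Bearer <token>`, as in the handler above;
  // a missing header yields null and fails the comparison.
  return req.headers.get("Authorization") === `Bearer ${reconcilerToken}`;
}

const good = new Request("http://localhost/reconcile", {
  method: "POST",
  headers: { Authorization: "Bearer example-token" },
});
const bad = new Request("http://localhost/reconcile", { method: "POST" });
```

One consequence of the exact-match design: a lowercase `bearer` prefix or trailing whitespace is rejected, which keeps the check simple at the cost of strictness.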
// -----------------------------------------------------------------------------
// /reconcile
// -----------------------------------------------------------------------------

async function handleReconcile(
  req: Request,
  url: URL,
  ctx: ServerContext,
): Promise<Response> {
  const dryRun = url.searchParams.get("dry_run") === "true";

  // Parse and validate the desired state from the request body
  let body: unknown;
  try {
    body = await req.json();
  } catch {
    return Response.json(
      { status: "error", error: "invalid JSON body" },
      { status: 400 },
    );
  }

  const parseResult = DesiredStateSchema.safeParse(body);
  if (!parseResult.success) {
    return Response.json(
      {
        status: "error",
        error: "schema validation failed",
        issues: parseResult.error.issues,
      },
      { status: 400 },
    );
  }

  const desired = parseResult.data;

  // Cross-reference validation (e.g. group refs in policies exist)
  const crossRefErrors = validateCrossReferences(desired);
  if (crossRefErrors.length > 0) {
    return Response.json(
      {
        status: "error",
        error: "cross-reference validation failed",
        issues: crossRefErrors,
      },
      { status: 400 },
    );
  }

  ctx.reconcileInProgress.value = true;
  try {
    const actual = await fetchActualState(ctx.netbird);
    const ops = computeDiff(desired, actual);

    if (dryRun) {
      return Response.json({
        status: "planned",
        operations: ops.map((op) => ({
          type: op.type,
          name: op.name,
        })),
        summary: summarize(ops),
      });
    }

    if (ops.length === 0) {
      return Response.json({
        status: "applied",
        operations: [],
        created_keys: {},
        summary: { created: 0, updated: 0, deleted: 0, failed: 0 },
      });
    }

    const { results, createdKeys } = await executeOperations(
      ops,
      ctx.netbird,
      actual,
    );

    // Convert Map to plain object for JSON serialization
    const createdKeysObj: Record<string, string> = {};
    for (const [name, key] of createdKeys) {
      createdKeysObj[name] = key;
    }

    return Response.json({
      status: "applied",
      operations: results.map((r) => ({
        type: r.type,
        name: r.name,
        status: r.status,
      })),
      created_keys: createdKeysObj,
      summary: summarize(results),
    });
  } catch (err) {
    console.error(
      JSON.stringify({
        msg: "reconcile_error",
        error: err instanceof Error ? err.message : String(err),
      }),
    );
    return Response.json(
      {
        status: "error",
        error: err instanceof Error ? err.message : String(err),
      },
      { status: 500 },
    );
  } finally {
    ctx.reconcileInProgress.value = false;
  }
}

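The Map-to-plain-object loop in `handleReconcile` has a built-in equivalent, `Object.fromEntries`. A standalone sketch showing both forms; the key and value here are invented for illustration:

```typescript
// Hypothetical created-keys map, mirroring the one returned by
// executeOperations (setup-key name -> secret key value).
const createdKeys = new Map<string, string>([["eng-key", "nb-setup-key-123"]]);

// Explicit loop, as written in handleReconcile:
const createdKeysObj: Record<string, string> = {};
for (const [name, key] of createdKeys) {
  createdKeysObj[name] = key;
}

// Built-in equivalent: Maps are iterable as [key, value] pairs.
const viaFromEntries = Object.fromEntries(createdKeys);
```

Either form produces a plain object that `Response.json` can serialize; a `Map` itself serializes to `{}`, which is why the conversion is needed at all.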
// -----------------------------------------------------------------------------
// /sync-events
// -----------------------------------------------------------------------------

/**
 * Forces a single poll cycle. Temporarily clears reconcileInProgress so
 * pollOnce doesn't skip, then restores it afterward.
 */
async function handleSyncEvents(ctx: ServerContext): Promise<Response> {
  const pollerCtx: PollerContext = {
    config: ctx.config,
    netbird: ctx.netbird,
    gitea: ctx.gitea,
    reconcileInProgress: { value: false },
  };

  try {
    await pollOnce(pollerCtx);
    return Response.json({ status: "synced" });
  } catch (err) {
    console.error(
      JSON.stringify({
        msg: "sync_events_error",
        error: err instanceof Error ? err.message : String(err),
      }),
    );
    return Response.json(
      {
        status: "error",
        error: err instanceof Error ? err.message : String(err),
      },
      { status: 500 },
    );
  }
}

// -----------------------------------------------------------------------------
// /export
// -----------------------------------------------------------------------------

/**
 * Fetches the current NetBird state and transforms it into the declarative
 * `netbird.json` format. Useful for bootstrapping gitops from an existing
 * account or inspecting what the reconciler "sees".
 */
async function handleExport(ctx: ServerContext): Promise<Response> {
  try {
    const actual = await fetchActualState(ctx.netbird);
    const state = exportState(actual);

    return Response.json({
      status: "ok",
      state,
      meta: {
        exported_at: new Date().toISOString(),
        source_url: ctx.config.netbirdApiUrl,
        groups_count: Object.keys(state.groups).length,
        setup_keys_count: Object.keys(state.setup_keys).length,
        policies_count: Object.keys(state.policies).length,
        routes_count: Object.keys(state.routes).length,
        dns_count: Object.keys(state.dns.nameserver_groups).length,
      },
    });
  } catch (err) {
    console.error(
      JSON.stringify({
        msg: "export_error",
        error: err instanceof Error ? err.message : String(err),
      }),
    );
    return Response.json(
      {
        status: "error",
        error: err instanceof Error ? err.message : String(err),
      },
      { status: 500 },
    );
  }
}

// -----------------------------------------------------------------------------
|
||||||
|
// Helpers
|
||||||
|
// -----------------------------------------------------------------------------
|
||||||
|
|
||||||
|
interface Summary {
|
||||||
|
created: number;
|
||||||
|
updated: number;
|
||||||
|
deleted: number;
|
||||||
|
failed: number;
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Counts operations by category. Works on both raw Operation[] (for dry-run
|
||||||
|
* plans) and OperationResult[] (for executed results where failures are
|
||||||
|
* tallied separately).
|
||||||
|
*/
|
||||||
|
function summarize(ops: Array<{ type: string; status?: string }>): Summary {
|
||||||
|
let created = 0;
|
||||||
|
let updated = 0;
|
||||||
|
let deleted = 0;
|
||||||
|
let failed = 0;
|
||||||
|
|
||||||
|
for (const op of ops) {
|
||||||
|
if ((op as OperationResult).status === "failed") {
|
||||||
|
failed++;
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (op.type.startsWith("create_")) {
|
||||||
|
created++;
|
||||||
|
} else if (op.type.startsWith("update_") || op.type === "rename_peer") {
|
||||||
|
updated++;
|
||||||
|
} else if (op.type.startsWith("delete_")) {
|
||||||
|
deleted++;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return { created, updated, deleted, failed };
|
||||||
|
}
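The summarize helper can be exercised standalone. The sketch below inlines a simplified copy of its logic for illustration; the op objects are stand-ins, not the project's real Operation/OperationResult types, and the only assumption carried over is that executed results mark failures with a `status` field:

```typescript
// Simplified inline copy of the summarize logic above, for illustration only.
interface Summary {
  created: number;
  updated: number;
  deleted: number;
  failed: number;
}

function summarize(ops: Array<{ type: string; status?: string }>): Summary {
  let created = 0, updated = 0, deleted = 0, failed = 0;
  for (const op of ops) {
    // A failed result is counted once, regardless of its operation type.
    if (op.status === "failed") {
      failed++;
      continue;
    }
    if (op.type.startsWith("create_")) created++;
    else if (op.type.startsWith("update_") || op.type === "rename_peer") updated++;
    else if (op.type.startsWith("delete_")) deleted++;
  }
  return { created, updated, deleted, failed };
}

const plan = [
  { type: "create_group" },
  { type: "update_policy" },
  { type: "rename_peer" },
  { type: "delete_route", status: "failed" },
];
console.log(summarize(plan));
// { created: 1, updated: 2, deleted: 0, failed: 1 }
```

Note that `rename_peer` is the one operation counted as an update without carrying the `update_` prefix.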
171
src/state/actual.test.ts
Normal file
@@ -0,0 +1,171 @@
import { assertEquals } from "@std/assert";
import { fetchActualState } from "./actual.ts";
import type {
  NbDnsNameserverGroup,
  NbGroup,
  NbNetwork,
  NbNetworkResource,
  NbNetworkRouter,
  NbPeer,
  NbPolicy,
  NbPostureCheck,
  NbRoute,
  NbSetupKey,
  NbUser,
} from "../netbird/types.ts";

/** Minimal mock NetBird client that returns predetermined data */
function mockClient(data: {
  groups?: NbGroup[];
  setupKeys?: NbSetupKey[];
  peers?: NbPeer[];
  policies?: NbPolicy[];
  routes?: NbRoute[];
  dns?: NbDnsNameserverGroup[];
  postureChecks?: NbPostureCheck[];
  networks?: NbNetwork[];
  networkResources?: Map<string, NbNetworkResource[]>;
  networkRouters?: Map<string, NbNetworkRouter[]>;
  users?: NbUser[];
}) {
  return {
    listGroups: () => Promise.resolve(data.groups ?? []),
    listSetupKeys: () => Promise.resolve(data.setupKeys ?? []),
    listPeers: () => Promise.resolve(data.peers ?? []),
    listPolicies: () => Promise.resolve(data.policies ?? []),
    listRoutes: () => Promise.resolve(data.routes ?? []),
    listDnsNameserverGroups: () => Promise.resolve(data.dns ?? []),
    listPostureChecks: () => Promise.resolve(data.postureChecks ?? []),
    listNetworks: () => Promise.resolve(data.networks ?? []),
    listNetworkResources: (networkId: string) =>
      Promise.resolve(data.networkResources?.get(networkId) ?? []),
    listNetworkRouters: (networkId: string) =>
      Promise.resolve(data.networkRouters?.get(networkId) ?? []),
    listUsers: () => Promise.resolve(data.users ?? []),
  };
}

Deno.test("fetchActualState builds name-to-id maps", async () => {
  const actual = await fetchActualState(
    mockClient({
      groups: [
        { id: "g1", name: "pilots", peers_count: 0, peers: [], issued: "api" },
      ],
      setupKeys: [
        {
          id: 1,
          name: "Pilot-hawk-72",
          type: "one-off",
          key: "masked",
          expires: "2026-04-01T00:00:00Z",
          valid: true,
          revoked: false,
          used_times: 0,
          state: "valid",
          auto_groups: ["g1"],
          usage_limit: 1,
        },
      ],
    }),
  );
  assertEquals(actual.groupsByName.get("pilots")?.id, "g1");
  assertEquals(actual.setupKeysByName.get("Pilot-hawk-72")?.id, 1);
});

Deno.test("fetchActualState returns empty maps for empty input", async () => {
  const actual = await fetchActualState(mockClient({}));
  assertEquals(actual.groups.length, 0);
  assertEquals(actual.groupsByName.size, 0);
  assertEquals(actual.groupsById.size, 0);
  assertEquals(actual.setupKeys.length, 0);
  assertEquals(actual.peers.length, 0);
  assertEquals(actual.policies.length, 0);
  assertEquals(actual.routes.length, 0);
  assertEquals(actual.dns.length, 0);
});

Deno.test("fetchActualState indexes all resource types", async () => {
  const actual = await fetchActualState(
    mockClient({
      groups: [
        {
          id: "g1",
          name: "ops",
          peers_count: 1,
          peers: [{ id: "p1", name: "drone-1" }],
          issued: "api",
        },
      ],
      peers: [
        {
          id: "p1",
          name: "drone-1",
          ip: "100.64.0.1",
          connected: true,
          hostname: "drone-1",
          os: "linux",
          version: "0.28.0",
          groups: [{ id: "g1", name: "ops" }],
          last_seen: "2026-03-01T00:00:00Z",
          dns_label: "drone-1",
          login_expiration_enabled: false,
          ssh_enabled: false,
          inactivity_expiration_enabled: false,
        },
      ],
      policies: [
        {
          id: "pol1",
          name: "allow-ops",
          description: "ops traffic",
          enabled: true,
          source_posture_checks: [],
          rules: [],
        },
      ],
      routes: [
        {
          id: "r1",
          description: "lan",
          network_id: "lan-net",
          enabled: true,
          network: "10.0.0.0/24",
          metric: 100,
          masquerade: true,
          groups: ["g1"],
          keep_route: false,
        },
      ],
      dns: [
        {
          id: "d1",
          name: "internal-dns",
          description: "internal",
          nameservers: [{ ip: "1.1.1.1", ns_type: "udp", port: 53 }],
          enabled: true,
          groups: ["g1"],
          primary: true,
          domains: ["internal."],
          search_domains_enabled: false,
        },
      ],
    }),
  );

  // Groups indexed both ways
  assertEquals(actual.groupsByName.get("ops")?.id, "g1");
  assertEquals(actual.groupsById.get("g1")?.name, "ops");

  // Peers indexed both ways
  assertEquals(actual.peersByName.get("drone-1")?.id, "p1");
  assertEquals(actual.peersById.get("p1")?.name, "drone-1");

  // Policies by name
  assertEquals(actual.policiesByName.get("allow-ops")?.id, "pol1");

  // Routes by network_id
  assertEquals(actual.routesByNetworkId.get("lan-net")?.id, "r1");

  // DNS by name
  assertEquals(actual.dnsByName.get("internal-dns")?.id, "d1");
});
140
src/state/actual.ts
Normal file
@@ -0,0 +1,140 @@
import type { NetbirdClient } from "../netbird/client.ts";
import type {
  NbDnsNameserverGroup,
  NbGroup,
  NbNetwork,
  NbNetworkResource,
  NbNetworkRouter,
  NbPeer,
  NbPolicy,
  NbPostureCheck,
  NbRoute,
  NbSetupKey,
  NbUser,
} from "../netbird/types.ts";

/** Indexed view of all current NetBird state */
export interface ActualState {
  groups: NbGroup[];
  groupsByName: Map<string, NbGroup>;
  groupsById: Map<string, NbGroup>;
  setupKeys: NbSetupKey[];
  setupKeysByName: Map<string, NbSetupKey>;
  peers: NbPeer[];
  peersByName: Map<string, NbPeer>;
  peersById: Map<string, NbPeer>;
  policies: NbPolicy[];
  policiesByName: Map<string, NbPolicy>;
  routes: NbRoute[];
  routesByNetworkId: Map<string, NbRoute>;
  dns: NbDnsNameserverGroup[];
  dnsByName: Map<string, NbDnsNameserverGroup>;
  postureChecks: NbPostureCheck[];
  postureChecksByName: Map<string, NbPostureCheck>;
  networks: NbNetwork[];
  networksByName: Map<string, NbNetwork>;
  networkResources: Map<string, NbNetworkResource[]>; // keyed by network ID
  networkRouters: Map<string, NbNetworkRouter[]>; // keyed by network ID
  users: NbUser[];
  usersByEmail: Map<string, NbUser>;
}

/**
 * Subset of NetbirdClient needed for fetching state.
 *
 * Using a structural pick rather than the full class keeps this module
 * testable with plain object mocks and avoids pulling in fetch/auth deps.
 */
type ClientLike = Pick<
  NetbirdClient,
  | "listGroups"
  | "listSetupKeys"
  | "listPeers"
  | "listPolicies"
  | "listRoutes"
  | "listDnsNameserverGroups"
  | "listPostureChecks"
  | "listNetworks"
  | "listNetworkResources"
  | "listNetworkRouters"
  | "listUsers"
>;

/**
 * Fetches all resource collections from the NetBird API in parallel and
 * returns them with bidirectional name<->ID indexes for O(1) lookup
 * during diff/reconciliation.
 */
export async function fetchActualState(
  client: ClientLike,
): Promise<ActualState> {
  const [
    groups,
    setupKeys,
    peers,
    policies,
    routes,
    dns,
    postureChecks,
    networks,
    users,
  ] = await Promise.all([
    client.listGroups(),
    client.listSetupKeys(),
    client.listPeers(),
    client.listPolicies(),
    client.listRoutes(),
    client.listDnsNameserverGroups(),
    client.listPostureChecks(),
    client.listNetworks(),
    client.listUsers(),
  ]);

  // Fetch sub-resources for each network in parallel
  const [resourcesByNetwork, routersByNetwork] = await Promise.all([
    Promise.all(
      networks.map(async (n) => ({
        id: n.id,
        resources: await client.listNetworkResources(n.id),
      })),
    ),
    Promise.all(
      networks.map(async (n) => ({
        id: n.id,
        routers: await client.listNetworkRouters(n.id),
      })),
    ),
  ]);

  const networkResources = new Map<string, NbNetworkResource[]>(
    resourcesByNetwork.map((r) => [r.id, r.resources]),
  );
  const networkRouters = new Map<string, NbNetworkRouter[]>(
    routersByNetwork.map((r) => [r.id, r.routers]),
  );

  return {
    groups,
    groupsByName: new Map(groups.map((g) => [g.name, g])),
    groupsById: new Map(groups.map((g) => [g.id, g])),
    setupKeys,
    setupKeysByName: new Map(setupKeys.map((k) => [k.name, k])),
    peers,
    peersByName: new Map(peers.map((p) => [p.name, p])),
    peersById: new Map(peers.map((p) => [p.id, p])),
    policies,
    policiesByName: new Map(policies.map((p) => [p.name, p])),
    routes,
    routesByNetworkId: new Map(routes.map((r) => [r.network_id, r])),
    dns,
    dnsByName: new Map(dns.map((d) => [d.name, d])),
    postureChecks,
    postureChecksByName: new Map(postureChecks.map((pc) => [pc.name, pc])),
    networks,
    networksByName: new Map(networks.map((n) => [n.name, n])),
    networkResources,
    networkRouters,
    users,
    usersByEmail: new Map(users.map((u) => [u.email, u])),
  };
}
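The index-building pattern used throughout fetchActualState, one `Map` per lookup direction built with `new Map(array.map(...))`, can be seen in isolation. The record shape below is a stand-in for illustration, not the real NbGroup type:

```typescript
// Stand-in record type; the real code indexes NbGroup, NbPeer, etc.
interface Item {
  id: string;
  name: string;
}

const items: Item[] = [
  { id: "g1", name: "pilots" },
  { id: "g2", name: "ground-stations" },
];

// One pass per direction; later lookups during reconciliation are O(1).
const byName = new Map(items.map((i) => [i.name, i]));
const byId = new Map(items.map((i) => [i.id, i]));

console.log(byName.get("pilots")?.id); // "g1"
console.log(byId.get("g2")?.name); // "ground-stations"
```

Building both maps up front trades a little memory for constant-time resolution of either a name or an ID while diffing desired against actual state.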
199
src/state/schema.test.ts
Normal file
@@ -0,0 +1,199 @@
import { assertEquals, assertThrows } from "@std/assert";
import { DesiredStateSchema, validateCrossReferences } from "./schema.ts";

const VALID_STATE = {
  groups: {
    pilots: { peers: ["Pilot-hawk-72"] },
    "ground-stations": { peers: ["GS-hawk-72"] },
  },
  setup_keys: {
    "GS-hawk-72": {
      type: "one-off" as const,
      expires_in: 604800,
      usage_limit: 1,
      auto_groups: ["ground-stations"],
      enrolled: true,
    },
    "Pilot-hawk-72": {
      type: "one-off" as const,
      expires_in: 604800,
      usage_limit: 1,
      auto_groups: ["pilots"],
      enrolled: false,
    },
  },
  policies: {
    "pilots-to-gs": {
      description: "Allow pilots to reach ground stations",
      enabled: true,
      sources: ["pilots"],
      destinations: ["ground-stations"],
      bidirectional: true,
      protocol: "all" as const,
    },
  },
  routes: {},
  dns: { nameserver_groups: {} },
};

Deno.test("DesiredStateSchema parses a valid state", () => {
  const result = DesiredStateSchema.parse(VALID_STATE);
  assertEquals(Object.keys(result.groups).length, 2);
  assertEquals(Object.keys(result.setup_keys).length, 2);
  assertEquals(Object.keys(result.policies).length, 1);
  assertEquals(result.policies["pilots-to-gs"].action, "accept");
  assertEquals(result.policies["pilots-to-gs"].protocol, "all");
});

Deno.test("DesiredStateSchema applies defaults for optional top-level fields", () => {
  const minimal = {
    groups: { ops: { peers: [] } },
    setup_keys: {},
  };
  const result = DesiredStateSchema.parse(minimal);
  assertEquals(result.policies, {});
  assertEquals(result.routes, {});
  assertEquals(result.dns, { nameserver_groups: {} });
});

Deno.test("DesiredStateSchema rejects invalid setup key type", () => {
  const bad = {
    ...VALID_STATE,
    setup_keys: {
      "bad-key": {
        type: "permanent",
        expires_in: 100,
        usage_limit: 0,
        auto_groups: [],
        enrolled: false,
      },
    },
  };
  assertThrows(() => DesiredStateSchema.parse(bad));
});

Deno.test("DesiredStateSchema rejects negative expires_in", () => {
  const bad = {
    ...VALID_STATE,
    setup_keys: {
      "bad-key": {
        type: "one-off",
        expires_in: -1,
        usage_limit: 0,
        auto_groups: [],
        enrolled: false,
      },
    },
  };
  assertThrows(() => DesiredStateSchema.parse(bad));
});

Deno.test("DesiredStateSchema rejects route metric out of range", () => {
  const bad = {
    ...VALID_STATE,
    routes: {
      "bad-route": {
        peer_groups: ["pilots"],
        metric: 10000,
        distribution_groups: ["pilots"],
        enabled: true,
      },
    },
  };
  assertThrows(() => DesiredStateSchema.parse(bad));
});

Deno.test("validateCrossReferences passes for a valid state", () => {
  const state = DesiredStateSchema.parse(VALID_STATE);
  const errors = validateCrossReferences(state);
  assertEquals(errors, []);
});

Deno.test("validateCrossReferences catches missing group in policy source", () => {
  const bad = DesiredStateSchema.parse({
    ...VALID_STATE,
    policies: {
      broken: {
        enabled: true,
        sources: ["nonexistent"],
        destinations: ["pilots"],
        bidirectional: false,
      },
    },
  });
  const errors = validateCrossReferences(bad);
  assertEquals(errors.length, 1);
  assertEquals(errors[0].includes("nonexistent"), true);
  assertEquals(errors[0].includes("source"), true);
});

Deno.test("validateCrossReferences catches peer without matching setup key", () => {
  const bad = DesiredStateSchema.parse({
    groups: {
      pilots: { peers: ["ghost-peer"] },
    },
    setup_keys: {},
  });
  const errors = validateCrossReferences(bad);
  assertEquals(errors.length, 1);
  assertEquals(errors[0].includes("ghost-peer"), true);
  assertEquals(errors[0].includes("setup key"), true);
});

Deno.test("validateCrossReferences catches auto_group referencing nonexistent group", () => {
  const bad = DesiredStateSchema.parse({
    groups: {},
    setup_keys: {
      "some-key": {
        type: "reusable",
        expires_in: 3600,
        usage_limit: 0,
        auto_groups: ["phantom-group"],
        enrolled: false,
      },
    },
  });
  const errors = validateCrossReferences(bad);
  assertEquals(errors.length, 1);
  assertEquals(errors[0].includes("phantom-group"), true);
  assertEquals(errors[0].includes("auto_group"), true);
});

Deno.test("validateCrossReferences catches missing group in route peer_groups", () => {
  const bad = DesiredStateSchema.parse({
    groups: { ops: { peers: [] } },
    setup_keys: {},
    routes: {
      "bad-route": {
        peer_groups: ["ops", "missing"],
        distribution_groups: ["ops"],
        enabled: true,
      },
    },
  });
  const errors = validateCrossReferences(bad);
  assertEquals(errors.length, 1);
  assertEquals(errors[0].includes("missing"), true);
  assertEquals(errors[0].includes("peer_group"), true);
});

Deno.test("validateCrossReferences catches missing group in DNS nameserver group", () => {
  const bad = DesiredStateSchema.parse({
    groups: {},
    setup_keys: {},
    dns: {
      nameserver_groups: {
        "my-dns": {
          nameservers: [{ ip: "1.1.1.1" }],
          enabled: true,
          groups: ["ghost"],
          primary: true,
          domains: [],
        },
      },
    },
  });
  const errors = validateCrossReferences(bad);
  assertEquals(errors.length, 1);
  assertEquals(errors[0].includes("ghost"), true);
});
284
src/state/schema.ts
Normal file
@@ -0,0 +1,284 @@
import { z } from "zod";

// --- Leaf schemas ---

export const SetupKeySchema = z.object({
  type: z.enum(["one-off", "reusable"]),
  expires_in: z.number().int().positive(),
  usage_limit: z.number().int().nonnegative(),
  auto_groups: z.array(z.string()),
  enrolled: z.boolean(),
});

export const GroupSchema = z.object({
  peers: z.array(z.string()),
});

export const DestinationResourceSchema = z.object({
  id: z.string(),
  type: z.string(),
});

export const PolicySchema = z.object({
  description: z.string().default(""),
  enabled: z.boolean(),
  sources: z.array(z.string()),
  destinations: z.array(z.string()).default([]),
  bidirectional: z.boolean(),
  protocol: z.enum(["tcp", "udp", "icmp", "all"]).default("all"),
  action: z.enum(["accept", "drop"]).default("accept"),
  ports: z.array(z.string()).optional(),
  destination_resource: DestinationResourceSchema.optional(),
  source_posture_checks: z.array(z.string()).default([]),
});

export const RouteSchema = z.object({
  description: z.string().default(""),
  network: z.string().optional(),
  domains: z.array(z.string()).optional(),
  peer_groups: z.array(z.string()),
  metric: z.number().int().min(1).max(9999).default(9999),
  masquerade: z.boolean().default(true),
  distribution_groups: z.array(z.string()),
  enabled: z.boolean(),
  keep_route: z.boolean().default(true),
});

export const NameserverSchema = z.object({
  ip: z.string(),
  ns_type: z.string().default("udp"),
  port: z.number().int().default(53),
});

export const DnsNameserverGroupSchema = z.object({
  description: z.string().default(""),
  nameservers: z.array(NameserverSchema).min(1).max(3),
  enabled: z.boolean(),
  groups: z.array(z.string()),
  primary: z.boolean(),
  domains: z.array(z.string()),
  search_domains_enabled: z.boolean().default(false),
});

export const PostureCheckSchema = z.object({
  description: z.string().default(""),
  checks: z.record(z.string(), z.unknown()),
});

export const NetworkResourceSchema = z.object({
  name: z.string(),
  description: z.string().default(""),
  type: z.enum(["host", "subnet", "domain"]),
  address: z.string(),
  enabled: z.boolean().default(true),
  groups: z.array(z.string()),
});

export const NetworkRouterSchema = z.object({
  peer: z.string().optional(),
  peer_groups: z.array(z.string()).optional(),
  metric: z.number().int().min(1).max(9999).default(9999),
  masquerade: z.boolean().default(true),
  enabled: z.boolean().default(true),
});

export const NetworkSchema = z.object({
  description: z.string().default(""),
  resources: z.array(NetworkResourceSchema).default([]),
  routers: z.array(NetworkRouterSchema).default([]),
});

export const PeerSchema = z.object({
  groups: z.array(z.string()),
  login_expiration_enabled: z.boolean().default(false),
  inactivity_expiration_enabled: z.boolean().default(false),
  ssh_enabled: z.boolean().default(false),
});

export const UserSchema = z.object({
  name: z.string(),
  role: z.enum(["owner", "admin", "user"]),
  auto_groups: z.array(z.string()).default([]),
});

// --- Top-level schema ---

export const DesiredStateSchema = z.object({
  groups: z.record(z.string(), GroupSchema),
  setup_keys: z.record(z.string(), SetupKeySchema),
  policies: z.record(z.string(), PolicySchema).default({}),
  routes: z.record(z.string(), RouteSchema).default({}),
  dns: z.object({
    nameserver_groups: z.record(z.string(), DnsNameserverGroupSchema)
      .default({}),
  }).default({ nameserver_groups: {} }),
  posture_checks: z.record(z.string(), PostureCheckSchema).default({}),
  networks: z.record(z.string(), NetworkSchema).default({}),
  peers: z.record(z.string(), PeerSchema).default({}),
  users: z.record(z.string(), UserSchema).default({}),
});

// --- Inferred types ---

export type DesiredState = z.infer<typeof DesiredStateSchema>;
export type SetupKeyConfig = z.infer<typeof SetupKeySchema>;
export type GroupConfig = z.infer<typeof GroupSchema>;
export type PolicyConfig = z.infer<typeof PolicySchema>;
export type RouteConfig = z.infer<typeof RouteSchema>;
export type DnsNameserverGroupConfig = z.infer<typeof DnsNameserverGroupSchema>;
export type PostureCheckConfig = z.infer<typeof PostureCheckSchema>;
export type NetworkConfig = z.infer<typeof NetworkSchema>;
export type NetworkResourceConfig = z.infer<typeof NetworkResourceSchema>;
export type NetworkRouterConfig = z.infer<typeof NetworkRouterSchema>;
export type PeerConfig = z.infer<typeof PeerSchema>;
export type UserConfig = z.infer<typeof UserSchema>;
export type DestinationResourceConfig = z.infer<
  typeof DestinationResourceSchema
>;

// --- Cross-reference validation ---

/**
 * Validates that all cross-references within a parsed DesiredState are
 * consistent. Returns an array of human-readable error strings — an empty
 * array means the state is internally consistent.
 *
 * Checks performed:
 * 1. Every peer listed in a group corresponds to an existing setup key.
 * 2. Every auto_group on a setup key references an existing group.
 * 3. Every source/destination in a policy references an existing group.
 * 4. Every peer_group and distribution_group in a route references an
 *    existing group.
 * 5. Every group in a DNS nameserver group references an existing group.
 * 6. Every group in a peer config references an existing group.
 * 7. Every auto_group on a user references an existing group.
 * 8. Every group on a network resource references an existing group.
 * 9. Every source_posture_check in a policy references an existing posture check.
 */
export function validateCrossReferences(state: DesiredState): string[] {
  const errors: string[] = [];
  const groupNames = new Set(Object.keys(state.groups));
  const setupKeyNames = new Set(Object.keys(state.setup_keys));
  const postureCheckNames = new Set(Object.keys(state.posture_checks));

  // 1. Peers in groups must reference existing setup keys
  for (const [groupName, group] of Object.entries(state.groups)) {
    for (const peer of group.peers) {
      if (!setupKeyNames.has(peer)) {
        errors.push(
          `group "${groupName}": peer "${peer}" does not match any setup key`,
        );
      }
    }
  }

  // 2. auto_groups on setup keys must reference existing groups
  for (const [keyName, key] of Object.entries(state.setup_keys)) {
    for (const ag of key.auto_groups) {
      if (!groupNames.has(ag)) {
        errors.push(
          `setup_key "${keyName}": auto_group "${ag}" does not match any group`,
        );
      }
    }
  }

  // 3. Policy sources and destinations must reference existing groups
  for (const [policyName, policy] of Object.entries(state.policies)) {
    for (const src of policy.sources) {
      if (!groupNames.has(src)) {
        errors.push(
          `policy "${policyName}": source "${src}" does not match any group`,
        );
      }
    }
    for (const dst of policy.destinations) {
      if (!groupNames.has(dst)) {
        errors.push(
          `policy "${policyName}": destination "${dst}" does not match any group`,
        );
      }
    }
  }

  // 4. Route peer_groups and distribution_groups must reference existing groups
  for (const [routeName, route] of Object.entries(state.routes)) {
    for (const pg of route.peer_groups) {
      if (!groupNames.has(pg)) {
        errors.push(
          `route "${routeName}": peer_group "${pg}" does not match any group`,
        );
      }
    }
    for (const dg of route.distribution_groups) {
      if (!groupNames.has(dg)) {
        errors.push(
          `route "${routeName}": distribution_group "${dg}" does not match any group`,
        );
      }
    }
  }

  // 5. DNS nameserver group references must match existing groups
  for (
    const [nsGroupName, nsGroup] of Object.entries(
      state.dns.nameserver_groups,
    )
  ) {
    for (const g of nsGroup.groups) {
      if (!groupNames.has(g)) {
        errors.push(
          `dns.nameserver_groups "${nsGroupName}": group "${g}" does not match any group`,
        );
      }
    }
  }

  // 6. Peer groups must reference existing groups
  for (const [peerName, peer] of Object.entries(state.peers)) {
    for (const g of peer.groups) {
      if (!groupNames.has(g)) {
        errors.push(
          `peer "${peerName}": group "${g}" does not match any group`,
        );
      }
    }
  }

  // 7. User auto_groups must reference existing groups
  for (const [userName, user] of Object.entries(state.users)) {
    for (const ag of user.auto_groups) {
      if (!groupNames.has(ag)) {
        errors.push(
          `user "${userName}": auto_group "${ag}" does not match any group`,
        );
      }
    }
  }

  // 8. Network resource groups must reference existing groups
  for (const [networkName, network] of Object.entries(state.networks)) {
    for (const resource of network.resources) {
      for (const g of resource.groups) {
        if (!groupNames.has(g)) {
          errors.push(
            `network "${networkName}": resource "${resource.name}" group "${g}" does not match any group`,
          );
        }
      }
    }
  }

  // 9. Policy source_posture_checks must reference existing posture checks
  for (const [policyName, policy] of Object.entries(state.policies)) {
    for (const pc of policy.source_posture_checks) {
      if (!postureCheckNames.has(pc)) {
        errors.push(
          `policy "${policyName}": source_posture_check "${pc}" does not match any posture check`,
        );
      }
    }
  }

  return errors;
}
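The check pattern in validateCrossReferences — precompute a `Set` of valid names, then push a readable error for every dangling reference — is easy to see on its own. The shapes below are trimmed-down stand-ins for the real DesiredState, covering just one of the nine checks:

```typescript
// Trimmed-down stand-in for DesiredState; only the fields this check needs.
interface MiniState {
  groups: Record<string, unknown>;
  setup_keys: Record<string, { auto_groups: string[] }>;
}

function checkAutoGroups(state: MiniState): string[] {
  const errors: string[] = [];
  // Set membership makes each reference check O(1).
  const groupNames = new Set(Object.keys(state.groups));
  for (const [keyName, key] of Object.entries(state.setup_keys)) {
    for (const ag of key.auto_groups) {
      if (!groupNames.has(ag)) {
        errors.push(
          `setup_key "${keyName}": auto_group "${ag}" does not match any group`,
        );
      }
    }
  }
  return errors;
}

const errors = checkAutoGroups({
  groups: { pilots: {} },
  setup_keys: { k1: { auto_groups: ["pilots", "phantom"] } },
});
console.log(errors);
// → ['setup_key "k1": auto_group "phantom" does not match any group']
```

Collecting all errors instead of throwing on the first one lets a single validation pass report every dangling reference in the state file at once.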
|
||||||
414
state/dev.json
Normal file
414
state/dev.json
Normal file
@@ -0,0 +1,414 @@
{
  "groups": {
    "dev-team": {
      "peers": []
    },
    "dev-services": {
      "peers": []
    },
    "fusion": {
      "peers": []
    },
    "test-gs": {
      "peers": []
    },
    "restricted": {
      "peers": []
    }
  },
  "setup_keys": {
    "public-site": {
      "type": "reusable",
      "expires_in": 604800,
      "usage_limit": 0,
      "auto_groups": [
        "dev-services"
      ],
      "enrolled": false
    },
    "docs vps": {
      "type": "reusable",
      "expires_in": 604800,
      "usage_limit": 0,
      "auto_groups": [
        "dev-services"
      ],
      "enrolled": false
    }
  },
  "policies": {
    "Dev to test gs": {
      "description": "",
      "enabled": true,
      "sources": [
        "dev-team"
      ],
      "destinations": [
        "All"
      ],
      "bidirectional": false,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": []
    },
    "Dev Access to Gitea": {
      "description": "",
      "enabled": true,
      "sources": [
        "dev-team"
      ],
      "destinations": [
        "dev-services"
      ],
      "bidirectional": false,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": []
    },
    "Fusion Access All": {
      "description": "",
      "enabled": true,
      "sources": [
        "fusion"
      ],
      "destinations": [
        "dev-team",
        "test-gs"
      ],
      "bidirectional": true,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": []
    },
    "only fusion": {
      "description": "",
      "enabled": false,
      "sources": [
        "restricted"
      ],
      "destinations": [
        "fusion"
      ],
      "bidirectional": true,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": []
    },
    "Ground Stations to Debian Repository": {
      "description": "",
      "enabled": true,
      "sources": [
        "test-gs"
      ],
      "destinations": [
        "dev-services"
      ],
      "bidirectional": true,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": []
    },
    "dev services can communicate": {
      "description": "",
      "enabled": true,
      "sources": [
        "dev-services"
      ],
      "destinations": [
        "dev-services"
      ],
      "bidirectional": true,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": []
    },
    "Everyone can access docs": {
      "description": "",
      "enabled": true,
      "sources": [
        "All"
      ],
      "destinations": [],
      "bidirectional": false,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": [],
      "destination_resource": {
        "id": "docs.blastpilot.achilles-rnd.cc",
        "type": "domain"
      }
    }
  },
  "posture_checks": {
    "10.112.*.* subnet access": {
      "description": "",
      "checks": {
        "peer_network_range_check": {
          "action": "allow",
          "ranges": [
            "10.112.0.0/16"
          ]
        }
      }
    }
  },
  "networks": {
    "Internal Services": {
      "description": "",
      "resources": [
        {
          "name": "docs.blastpilot.achilles-rnd.cc",
          "description": "docs.blastpilot.achilles-rnd.cc",
          "type": "domain",
          "address": "docs.blastpilot.achilles-rnd.cc",
          "enabled": true,
          "groups": [
            "All"
          ]
        }
      ],
      "routers": [
        {
          "metric": 9999,
          "masquerade": true,
          "enabled": true,
          "peer": "blast-fusion"
        }
      ]
    }
  },
  "peers": {
    "acarus": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": true,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "blast-fusion": {
      "groups": [
        "fusion"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "blastgs-fpv3": {
      "groups": [
        "test-gs"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "documentation-site": {
      "groups": [
        "dev-services"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "gitea-server": {
      "groups": [
        "dev-services"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "grc-1-3bat": {
      "groups": [
        "test-gs"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "grc-422-vlad.blast.local": {
      "groups": [
        "test-gs"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "ihor-rnd": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "ivan-rnd": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "multik-acer1": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "multik-ptt-test-gs": {
      "groups": [
        "dev-team",
        "fusion",
        "test-gs"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "oleksandr": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": true,
      "inactivity_expiration_enabled": true,
      "ssh_enabled": false
    },
    "prox": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "prox-orangepi": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "prox-pc": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "prox-ubuntu-vm": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "public-website-vps": {
      "groups": [
        "dev-services"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "remote-autel-1-rnd": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "rpitest2": {
      "groups": [
        "test-gs"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "seed-asus1": {
      "groups": [
        "dev-team",
        "fusion"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "seed-linux": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": true,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "seed-macbook1": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "testovyy-nrk-1-rnd-new-arch": {
      "groups": [
        "test-gs"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "ubuntu": {
      "groups": [],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    }
  },
  "users": {
    "admin@achilles.local": {
      "name": "admin",
      "role": "owner",
      "auto_groups": []
    },
    "seed@achilles.local": {
      "name": "seed",
      "role": "admin",
      "auto_groups": [
        "dev-team"
      ]
    },
    "keltir@achilles.local": {
      "name": "keltir",
      "role": "admin",
      "auto_groups": [
        "dev-team"
      ]
    },
    "eugene@achilles.local": {
      "name": "eugene",
      "role": "admin",
      "auto_groups": [
        "dev-team"
      ]
    },
    "sava@achilles.local": {
      "name": "sava",
      "role": "admin",
      "auto_groups": [
        "dev-team"
      ]
    }
  },
  "routes": {},
  "dns": {
    "nameserver_groups": {}
  }
}
1031
state/ext.json
Normal file
File diff suppressed because it is too large
627
state/prod.json
Normal file
@@ -0,0 +1,627 @@
{
  "groups": {
    "battalion-1-pilots": {
      "peers": []
    },
    "battalion-2-pilots": {
      "peers": []
    },
    "battalion-3-pilots": {
      "peers": []
    },
    "battalion-1-ground-stations": {
      "peers": []
    },
    "battalion-2-ground-stations": {
      "peers": []
    },
    "battalion-3-ground-stations": {
      "peers": []
    },
    "dev-team": {
      "peers": []
    },
    "fusion": {
      "peers": []
    },
    "exp-company-ground-stations": {
      "peers": []
    },
    "exp-company-pilots": {
      "peers": []
    }
  },
  "setup_keys": {
    "1bat-multik": {
      "type": "reusable",
      "expires_in": 604800,
      "usage_limit": 10,
      "auto_groups": [
        "battalion-1-ground-stations",
        "battalion-1-pilots"
      ],
      "enrolled": false
    },
    "boots-laptops": {
      "type": "reusable",
      "expires_in": 604800,
      "usage_limit": 5,
      "auto_groups": [
        "battalion-1-ground-stations",
        "battalion-1-pilots"
      ],
      "enrolled": false
    }
  },
  "policies": {
    "1st Battalion - Internal Access": {
      "description": "Allow 1st Battalion pilots to access their ground stations",
      "enabled": true,
      "sources": [
        "battalion-1-pilots",
        "fusion"
      ],
      "destinations": [
        "battalion-1-ground-stations",
        "fusion"
      ],
      "bidirectional": true,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": []
    },
    "2nd Battalion - Internal Access": {
      "description": "Allow 2nd Battalion pilots to access their ground stations",
      "enabled": true,
      "sources": [
        "battalion-2-pilots",
        "fusion"
      ],
      "destinations": [
        "battalion-2-ground-stations",
        "fusion"
      ],
      "bidirectional": true,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": []
    },
    "3rd Battalion - Internal Access": {
      "description": "Allow 3rd Battalion pilots to access their ground stations",
      "enabled": true,
      "sources": [
        "battalion-3-pilots",
        "fusion"
      ],
      "destinations": [
        "battalion-3-ground-stations",
        "fusion"
      ],
      "bidirectional": true,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": []
    },
    "Dev Team - Full Access": {
      "description": "Dev team can access all peers for troubleshooting",
      "enabled": true,
      "sources": [
        "dev-team"
      ],
      "destinations": [
        "All"
      ],
      "bidirectional": true,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": [
        "Restrict admins to Ukraine"
      ]
    },
    "Fusion Access All Pilots and Ground Stations": {
      "description": "",
      "enabled": true,
      "sources": [
        "fusion"
      ],
      "destinations": [
        "dev-team",
        "exp-company-ground-stations",
        "exp-company-pilots",
        "battalion-1-ground-stations",
        "battalion-2-ground-stations",
        "battalion-2-pilots",
        "battalion-3-ground-stations",
        "battalion-3-pilots",
        "battalion-1-pilots"
      ],
      "bidirectional": true,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": []
    },
    "exp-company-pilots2gs": {
      "description": "",
      "enabled": true,
      "sources": [
        "exp-company-pilots",
        "fusion"
      ],
      "destinations": [
        "exp-company-ground-stations",
        "fusion"
      ],
      "bidirectional": true,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": []
    },
    "Everyone can access docs": {
      "description": "Internal Services ",
      "enabled": false,
      "sources": [
        "All"
      ],
      "destinations": [],
      "bidirectional": false,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": [],
      "destination_resource": {
        "id": "docs.blastpilot.achilles-rnd.cc",
        "type": "domain"
      }
    }
  },
  "posture_checks": {
    "Restrict admins to Ukraine": {
      "description": "",
      "checks": {
        "geo_location_check": {
          "action": "allow",
          "locations": [
            {
              "country_code": "UA"
            },
            {
              "country_code": "PL"
            }
          ]
        }
      }
    }
  },
  "networks": {
    "Internal Services": {
      "description": "",
      "resources": [
        {
          "name": "docs.blastpilot.achilles-rnd.cc",
          "description": "docs.blastpilot.achilles-rnd.cc",
          "type": "domain",
          "address": "docs.blastpilot.achilles-rnd.cc",
          "enabled": true,
          "groups": [
            "All"
          ]
        }
      ],
      "routers": [
        {
          "metric": 9999,
          "masquerade": true,
          "enabled": true,
          "peer": "blast-fusion"
        }
      ]
    }
  },
  "peers": {
    "3bat-goggles-laptop": {
      "groups": [
        "battalion-3-pilots",
        "battalion-3-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "3bat-lin-win-laptop": {
      "groups": [
        "battalion-3-pilots",
        "battalion-3-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "3bat-linux-laptop": {
      "groups": [
        "battalion-3-pilots",
        "battalion-3-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "acarus": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": true,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "banya-slackware-laptop": {
      "groups": [
        "battalion-3-pilots",
        "battalion-3-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "banya1-laptop": {
      "groups": [
        "battalion-3-pilots",
        "battalion-3-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "banya2-laptop": {
      "groups": [
        "battalion-3-pilots",
        "battalion-3-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "banya3-laptop": {
      "groups": [
        "battalion-3-pilots",
        "battalion-3-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "banya4-laptop": {
      "groups": [
        "battalion-3-pilots",
        "battalion-3-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "bilozir1-laptop": {
      "groups": [
        "battalion-2-pilots",
        "battalion-2-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "blast-fusion": {
      "groups": [
        "fusion"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "blastgs-agent-dji-goggles1": {
      "groups": [
        "battalion-3-pilots",
        "battalion-3-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "boots1-laptop": {
      "groups": [
        "battalion-1-pilots",
        "battalion-1-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "boots2-laptop": {
      "groups": [
        "battalion-1-pilots",
        "battalion-1-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "exp-lenovo-laptop": {
      "groups": [
        "exp-company-ground-stations",
        "exp-company-pilots"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "ihor-rnd-laptop": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "ivan-rnd-laptop": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "kaban-1-laptop": {
      "groups": [
        "battalion-1-pilots",
        "battalion-1-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "kaban-2-laptop-1bat": {
      "groups": [
        "battalion-1-pilots",
        "battalion-1-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "kaban-3-laptop-1bat": {
      "groups": [
        "battalion-1-pilots",
        "battalion-1-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "mango-rexp1-laptop": {
      "groups": [
        "exp-company-ground-stations",
        "exp-company-pilots"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "mavic-rnd-laptop": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "multik-rnd-laptop": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "oleksandr-rnd-laptop": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "prox-rnd-laptop": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "remote-autel-1-1bat-1rrbpak": {
      "groups": [
        "battalion-1-pilots",
        "battalion-1-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "remote-autel-1-3bat-5rrbpak": {
      "groups": [
        "battalion-3-pilots",
        "battalion-3-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "remote-autel-1-rexp": {
      "groups": [
        "exp-company-ground-stations",
        "exp-company-pilots"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "remote-autel-2-1bat-1rrbpak": {
      "groups": [
        "battalion-1-pilots",
        "battalion-1-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "remote-autel-2-3bat-5rrbpak": {
      "groups": [
        "battalion-3-pilots",
        "battalion-3-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "remote-autel-2-rexp": {
      "groups": [
        "battalion-2-pilots",
        "battalion-2-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "remote-autel-3-1bat": {
      "groups": [
        "battalion-1-pilots",
        "battalion-1-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "remote-autel-3-2bat-3rrbpak": {
      "groups": [
        "battalion-2-pilots",
        "battalion-2-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "remote-autel-3-3bat-5rrbpak": {
      "groups": [
        "battalion-3-pilots",
        "battalion-3-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "remote-autel-4-1bat": {
      "groups": [
        "battalion-1-pilots",
        "battalion-1-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "remote-autel-4-2bat-3rrbpak": {
      "groups": [
        "battalion-2-pilots",
        "battalion-2-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "remote-matrice-1-3bat-5rrbpak": {
      "groups": [
        "battalion-3-pilots",
        "battalion-3-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "rexp-lenovo-laptop": {
      "groups": [
        "exp-company-ground-stations",
        "exp-company-pilots"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "seed-1-rnd-laptop": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "seed-asus1": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "seed-macbook1": {
      "groups": [
        "dev-team"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    },
    "ugv-1-1bat": {
      "groups": [
        "battalion-1-pilots",
        "battalion-1-ground-stations"
      ],
      "login_expiration_enabled": false,
      "inactivity_expiration_enabled": false,
      "ssh_enabled": false
    }
  },
  "users": {
    "vlad.stus@gmail.com": {
      "name": "admin",
      "role": "owner",
      "auto_groups": [
        "dev-team"
      ]
    },
    "": {
      "name": "Automation Service",
      "role": "admin",
      "auto_groups": []
    },
    "eugene@achilles.local": {
      "name": "eugene",
      "role": "admin",
      "auto_groups": [
        "dev-team"
      ]
    },
    "seed@achilles.local": {
      "name": "seed",
      "role": "admin",
      "auto_groups": [
        "dev-team"
      ]
    },
    "keltir@achilles.local": {
      "name": "Artem",
      "role": "admin",
      "auto_groups": [
        "dev-team"
      ]
    }
  },
  "routes": {},
  "dns": {
    "nameserver_groups": {}
  }
}
60
state/test.json
Normal file
@@ -0,0 +1,60 @@
{
  "groups": {
    "ground-stations": {
      "peers": []
    },
    "pilots": {
      "peers": []
    }
  },
  "setup_keys": {
    "GS-TestHawk-1": {
      "type": "one-off",
      "expires_in": 604800,
      "usage_limit": 1,
      "auto_groups": [
        "ground-stations"
      ],
      "enrolled": false
    },
    "Pilot-TestHawk-1": {
      "type": "one-off",
      "expires_in": 604800,
      "usage_limit": 1,
      "auto_groups": [
        "pilots"
      ],
      "enrolled": false
    }
  },
  "policies": {
    "pilots-to-gs": {
      "description": "",
      "enabled": true,
      "sources": [
        "pilots"
      ],
      "destinations": [
        "ground-stations"
      ],
      "bidirectional": true,
      "protocol": "all",
      "action": "accept",
      "source_posture_checks": []
    }
  },
  "posture_checks": {},
  "networks": {},
  "peers": {},
  "users": {
    "admin@example.com": {
      "name": "admin",
      "role": "owner",
      "auto_groups": []
    }
  },
  "routes": {},
  "dns": {
    "nameserver_groups": {}
  }
}
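The three state files share one schema, which can be captured as a hypothetical TypeScript type. The field names below come straight from the files; the interface names and the decision to model unchecked sections as `Record<string, unknown>` are illustrative assumptions, not definitions from the repository.

```typescript
// Hypothetical shape of a state/*.json file, inferred from the keys that
// appear in state/dev.json, state/prod.json, and state/test.json.
interface SetupKey {
  type: "reusable" | "one-off";
  expires_in: number;          // seconds until the key expires
  usage_limit: number;         // 0 appears to mean unlimited in these files
  auto_groups: string[];       // must name entries in State.groups
  enrolled: boolean;
}

interface Policy {
  description: string;
  enabled: boolean;
  sources: string[];
  destinations: string[];
  bidirectional: boolean;
  protocol: string;
  action: string;
  source_posture_checks: string[];   // must name entries in State.posture_checks
  destination_resource?: { id: string; type: string };
}

interface Peer {
  groups: string[];            // must name entries in State.groups
  login_expiration_enabled: boolean;
  inactivity_expiration_enabled: boolean;
  ssh_enabled: boolean;
}

interface User {
  name: string;
  role: string;
  auto_groups: string[];       // checked by validation rule 7 above
}

interface State {
  groups: Record<string, { peers: string[] }>;
  setup_keys: Record<string, SetupKey>;
  policies: Record<string, Policy>;
  posture_checks: Record<string, unknown>;
  networks: Record<string, unknown>;
  peers: Record<string, Peer>;
  users: Record<string, User>;
  routes: Record<string, unknown>;
  dns: { nameserver_groups: Record<string, unknown> };
}
```

Typing the files this way lets the referential-integrity checks shown earlier operate on a parsed object rather than raw JSON.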