-rw-r--r--  .gitignore  27
-rw-r--r--  AGENTS.md  304
-rw-r--r--  CHANGELOG.md  51
-rw-r--r--  CLAUDE.md  1
-rw-r--r--  Cargo.lock  2401
-rw-r--r--  Cargo.toml  191
-rw-r--r--  Justfile  168
-rw-r--r--  LICENSE  21
-rw-r--r--  README.md  150
-rw-r--r--  build.rs  150
-rw-r--r--  debian/postinst  54
-rw-r--r--  debian/postrm  19
-rw-r--r--  debian/witryna.service  62
-rw-r--r--  examples/caddy/Caddyfile  25
-rwxr-xr-x  examples/hooks/caddy-deploy.sh  118
-rw-r--r--  examples/nginx/witryna.conf  48
-rw-r--r--  examples/systemd/docker.conf  3
-rw-r--r--  examples/systemd/podman.conf  3
-rw-r--r--  examples/witryna.toml  63
-rw-r--r--  examples/witryna.yaml  14
-rw-r--r--  lefthook.yml  22
-rw-r--r--  man/witryna.toml.5  490
-rw-r--r--  scripts/witryna.service  100
-rw-r--r--  src/build.rs  843
-rw-r--r--  src/build_guard.rs  128
-rw-r--r--  src/cleanup.rs  467
-rw-r--r--  src/cli.rs  134
-rw-r--r--  src/config.rs  3041
-rw-r--r--  src/git.rs  1320
-rw-r--r--  src/hook.rs  499
-rw-r--r--  src/lib.rs  21
-rw-r--r--  src/logs.rs  919
-rw-r--r--  src/main.rs  422
-rw-r--r--  src/pipeline.rs  328
-rw-r--r--  src/polling.rs  242
-rw-r--r--  src/publish.rs  488
-rw-r--r--  src/repo_config.rs  523
-rw-r--r--  src/server.rs  1219
-rw-r--r--  src/test_support.rs  72
-rw-r--r--  tests/integration/auth.rs  58
-rw-r--r--  tests/integration/cache.rs  125
-rw-r--r--  tests/integration/cleanup.rs  92
-rw-r--r--  tests/integration/cli_run.rs  277
-rw-r--r--  tests/integration/cli_status.rs  313
-rw-r--r--  tests/integration/concurrent.rs  111
-rw-r--r--  tests/integration/deploy.rs  78
-rw-r--r--  tests/integration/edge_cases.rs  69
-rw-r--r--  tests/integration/env_vars.rs  162
-rw-r--r--  tests/integration/git_helpers.rs  275
-rw-r--r--  tests/integration/harness.rs  356
-rw-r--r--  tests/integration/health.rs  17
-rw-r--r--  tests/integration/hooks.rs  137
-rw-r--r--  tests/integration/logs.rs  73
-rw-r--r--  tests/integration/main.rs  31
-rw-r--r--  tests/integration/not_found.rs  17
-rw-r--r--  tests/integration/overrides.rs  59
-rw-r--r--  tests/integration/packaging.rs  49
-rw-r--r--  tests/integration/polling.rs  114
-rw-r--r--  tests/integration/rate_limit.rs  114
-rw-r--r--  tests/integration/runtime.rs  61
-rw-r--r--  tests/integration/secrets.rs  74
-rw-r--r--  tests/integration/sighup.rs  149
62 files changed, 17962 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..0abb5a0
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,27 @@
+# Build artifacts
+/target
+
+# IDE
+.idea/
+.vscode/
+.claude/settings.local.json
+*.swp
+*.swo
+
+# Environment
+.env
+.env.local
+
+# Config files with secrets
+/witryna.toml
+
+# OS
+.DS_Store
+Thumbs.db
+
+# Internal docs (AI task tracking, absorbed into AGENTS.md)
+SPRINT.md
+ARCHITECTURE.md
+
+# Temporary files
+/tmp/
diff --git a/AGENTS.md b/AGENTS.md
new file mode 100644
index 0000000..97b3aab
--- /dev/null
+++ b/AGENTS.md
@@ -0,0 +1,304 @@
+# CLAUDE.md
+
+## Project Overview
+
+Witryna is a minimalist Git-based static site deployment orchestrator. It listens for webhook triggers, pulls Git repositories, runs containerized build commands, and publishes static assets via atomic symlink switching, following the Unix philosophy: "Do one thing and do it well."
+
+## Design Philosophy
+
+This project follows a **minimal philosophy**: software should be simple, minimal, and frugal.
+
+- **No feature creep.** Only add functionality that serves the core mission. If a feature is "nice to have" but not essential, leave it out.
+- **Minimal dependencies.** Every external crate is a liability — it adds compile time, attack surface, and maintenance burden. Prefer the standard library and existing dependencies over pulling in new ones. Justify any new dependency before adding it.
+- **No over-engineering.** Write the simplest code that solves the problem. Avoid abstractions, indirection, and generalization unless there is a concrete, present need. Three similar lines are better than a premature abstraction.
+- **Small, auditable codebase.** The entire program should be understandable by a single person. Favour clarity and brevity over cleverness.
+- **Lean runtime.** Witryna delegates heavy lifting to external tools (Git, Podman/Docker, the OS). It does not reimplement functionality that already exists in well-tested programs.
+
+## Commands
+
+### CLI Subcommands
+
+```bash
+witryna serve # Start the deployment server
+witryna validate # Validate config and print summary
+witryna run <site> [-v] # One-off build (synchronous)
+witryna status [-s <site>] [--json] # Deployment status
+# Config discovery: ./witryna.toml → $XDG_CONFIG_HOME/witryna/witryna.toml → /etc/witryna/witryna.toml
+# Override with: witryna --config /path/to/witryna.toml <command>
+```
+
+### Development
+
+```bash
+# Development (just recipes)
+just fmt # Auto-format Rust code
+just lint # Run all lints (fmt check + clippy + yamllint + gitleaks)
+just test # Run unit tests
+just test-integration # Run integration tests (Tier 1 + Tier 2)
+just test-integration-serial # Integration tests with --test-threads=1 (for SIGHUP)
+just test-all # All lints + unit tests + integration tests
+just pre-commit # Mirrors lefthook pre-commit checks
+just man-1 # View witryna(1) man page (needs cargo build first)
+just man-5 # View witryna.toml(5) man page
+
+# Cargo (direct)
+cargo build # Build the project
+cargo run # Run the application
+cargo check # Type-check without building
+```
+
+## Architecture
+
+### Core Components
+
+1. **HTTP Server (axum)**: Listens on localhost, handles webhook POST requests
+2. **Site Manager**: Manages site configurations from `witryna.toml`
+3. **Build Executor**: Runs containerized builds via Podman/Docker
+4. **Asset Publisher**: Atomic symlink switching for zero-downtime deployments
+
+### Key Files
+
+- `witryna.toml` - Main configuration (listen address, sites, tokens)
+- `witryna.yaml` - Per-repository build configuration (image, command, public dir). Searched in order: `.witryna.yaml`, `.witryna.yml`, `witryna.yaml`, `witryna.yml`. Or set `config_file` in `witryna.toml` for a custom path.
+
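+The repository config lookup is a first-match search over the candidate filenames above; a minimal sketch of that order, using a hypothetical helper rather than the actual implementation:
+
+```rust
+use std::path::{Path, PathBuf};
+
+/// First existing candidate wins; a custom `config_file` in witryna.toml bypasses this search.
+fn find_repo_config(clone_dir: &Path) -> Option<PathBuf> {
+    [".witryna.yaml", ".witryna.yml", "witryna.yaml", "witryna.yml"]
+        .into_iter()
+        .map(|name| clone_dir.join(name))
+        .find(|path| path.is_file())
+}
+```
+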
+### Directory Structure
+
+```
+/var/lib/witryna/
+├── clones/{site-name}/          # Git repository clones
+├── builds/{site-name}/
+│   ├── {timestamp}/             # Timestamped build outputs
+│   └── current -> {latest}      # Symlink to current build
+└── cache/{site-name}/           # Persistent build caches
+
+/var/log/witryna/
+└── {site-name}/
+    └── {timestamp}.log          # Build logs
+```
+
+### API
+
+- `GET /health` - Health check (returns `200 OK`)
+- `POST /{site_name}` - Trigger deployment (`Authorization: Bearer <token>` required when `webhook_token` is configured)
+ - `202 Accepted` - Build triggered (immediate or queued)
+ - `401 Unauthorized` - Invalid token (only when `webhook_token` is configured; `{"error": "unauthorized"}`)
+ - `404 Not Found` - Unknown site (`{"error": "not_found"}`)
+ - `429 Too Many Requests` - Rate limit exceeded (`{"error": "rate_limit_exceeded"}`)
+
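+A deployment can be triggered with any HTTP client. A minimal sketch using `reqwest`; the address, site name, and token below are placeholders:
+
+```rust
+use reqwest::Client;
+
+#[tokio::main]
+async fn main() -> Result<(), reqwest::Error> {
+    let response = Client::new()
+        .post("http://127.0.0.1:8080/my-site")   // POST /{site_name}
+        .bearer_auth("example-token")            // only needed when webhook_token is configured
+        .send()
+        .await?;
+    // 202 Accepted means the build was triggered or queued.
+    println!("status: {}", response.status());
+    Ok(())
+}
+```
+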
+### System Diagram
+
+```
+ Internet
+ |
++-----------------------------------------------------------------------------+
+| User's Server (e.g., DigitalOcean) |
+| |
+| +--------------------------------+ +-----------------------------------+ |
+| | Web Server (VHOST 1: Public) | | Web Server (VHOST 2: Webhooks) | |
+| | | | | |
+| | my-cool-site.com | | witryna-endpoint.com/{site_name} | |
+| +--------------|-----------------+ +-----------------|-----------------+ |
+| | (serves files) | (reverse proxy) |
+| | | |
+| /var/www/my-site/ <------------------. +-------------------------------+ |
+| ^ `---| Witryna (Rust App) | |
+| | (symlink) | listening on | |
+| | | 127.0.0.1:8080/{site_name} | |
+| /var/lib/witryna/builds/.. +----------|--------------------+ |
+| ^ | (executes commands) |
+| | v |
+| `----------------------------------(uses)-------------------> Git & Container Runtime
+| | (e.g., Podman/Docker)
++-----------------------------------------------------------------------------+
+```
+
+### Deployment Workflow
+
+Upon receiving a valid webhook request, Witryna executes asynchronously:
+
+1. **Acquire Lock / Queue:** Per-site non-blocking lock. If a build is in progress, the request is queued (depth-1, latest-wins). Queued rebuilds run after the current build completes.
+2. **Determine Paths:** Construct clone/build paths from `base_dir` and `site_name`.
+3. **Fetch Source Code:** `git clone` if first time, `git pull` otherwise.
+3b. **Initialize Submodules:** If `.gitmodules` exists, run `git submodule sync --recursive` (pull only) then `git submodule update --init --recursive [--depth N]`.
+4. **Parse Repository Config:** Read build config (`.witryna.yaml` / `witryna.yaml` / custom `config_file`) or use `witryna.toml` overrides.
+5. **Execute Build:** Run container command, e.g.:
+ ```bash
+ # Podman (default --network=bridge, rootless with userns mapping);
+ # the working directory is /workspace, or /workspace/{container_workdir} when set:
+ podman run --rm --cap-drop=ALL --network=bridge --userns=keep-id \
+   -v /var/lib/witryna/clones/my-site:/workspace:Z \
+   -w /workspace \
+   node:20-alpine sh -c "npm install && npm run build"
+
+ # Docker (needs DAC_OVERRIDE for host-UID workspace access);
+ # the same working-directory rule applies:
+ docker run --rm --cap-drop=ALL --cap-add=DAC_OVERRIDE --network=bridge \
+   -v /var/lib/witryna/clones/my-site:/workspace \
+   -w /workspace \
+   node:20-alpine sh -c "npm install && npm run build"
+ ```
+6. **Publish Assets:** Copy built `public` dir to timestamped directory, atomically switch symlink via `ln -sfn` (see the sketch after this list).
+6b. **Post-Deploy Hook (Optional):** Run `post_deploy` command with `WITRYNA_SITE`, `WITRYNA_BUILD_DIR`, `WITRYNA_BUILD_TIMESTAMP` env vars. 30s timeout, non-fatal on failure.
+7. **Release Lock:** Release the per-site lock.
+8. **Log Outcome:** Log success or failure.
+
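+The atomic switch in step 6 can be pictured as "create the new link under a temporary name, then rename it over `current`"; a minimal sketch of that pattern, not necessarily Witryna's internal implementation:
+
+```rust
+use std::fs;
+use std::io;
+use std::os::unix::fs::symlink;
+use std::path::Path;
+
+/// Repoint builds/{site}/current at a new timestamped build directory.
+fn switch_current(builds_dir: &Path, timestamp: &str) -> io::Result<()> {
+    let target = builds_dir.join(timestamp);        // e.g. builds/my-site/{timestamp}
+    let tmp_link = builds_dir.join(".current.tmp"); // temporary link name
+    let _ = fs::remove_file(&tmp_link);             // ignore "not found"
+    symlink(&target, &tmp_link)?;                   // new symlink under the temporary name
+    fs::rename(&tmp_link, builds_dir.join("current")) // rename replaces atomically
+}
+```
+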
+## Testing
+
+### Unit Tests
+
+- Keep tests in the same files as implementation using `#[cfg(test)]` modules
+- Think TDD: identify the function's purpose, its expected outputs, and its failure modes — then write tests for those. Test *behaviour*, not implementation details.
+- Do not write dummy tests just for coverage (e.g., asserting a constructor returns an object, or that `Option` defaults to `None`). Every test must verify a meaningful property.
+- Test both happy paths and error conditions
+- Use descriptive test names: `<function>_<scenario>_<expected_result>`
+
+```rust
+#[cfg(test)]
+mod tests {
+ use super::*;
+
+ #[tokio::test]
+ async fn build_executor_valid_config_returns_success() {
+ // ...
+ }
+}
+```
+
+### Integration Tests
+
+Integration tests run locally via `cargo test --features integration`. Each test starts its own server on a random port with a temporary directory — no VMs, no containers for the test harness itself (container runtimes are only needed for tests that exercise the build pipeline).
+
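+The harness idea, sketched under the assumption of `tempfile` and `tokio` (this is not the actual `TestServer`):
+
+```rust
+use tempfile::TempDir;
+use tokio::net::TcpListener;
+
+/// Bind to an ephemeral port and create a throwaway base directory per test.
+async fn test_fixture() -> std::io::Result<(TcpListener, TempDir)> {
+    let listener = TcpListener::bind("127.0.0.1:0").await?; // port 0 = random free port
+    let temp_dir = TempDir::new()?;                         // removed when dropped
+    Ok((listener, temp_dir))
+}
+```
+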
+#### Running Integration Tests
+
+```bash
+# Run all integration tests
+cargo test --features integration
+
+# Run with single thread (required if running SIGHUP tests)
+cargo test --features integration -- --test-threads=1
+
+# Run specific test categories
+cargo test --features integration auth # Authentication tests
+cargo test --features integration deploy # Full deployment pipeline
+cargo test --features integration sighup # SIGHUP reload tests
+cargo test --features integration polling # Periodic polling tests
+cargo test --features integration edge # Edge case / security tests
+cargo test --features integration overrides # Build config override tests
+```
+
+#### Test Tiers
+
+- **Tier 1 (no container runtime needed):** health, auth (401), 404, concurrent build (409), rate limit (429), edge cases, SIGHUP
+- **Tier 2 (requires podman or docker):** deploy, logs, cleanup, overrides, polling
+
+Tests that require git or a container runtime automatically skip with an explicit message (e.g., `SKIPPED: no container runtime (podman/docker) found`) when the dependency is missing.
+
+#### SIGHUP Test Isolation
+
+SIGHUP tests send real signals to the test process. They use `#[serial]` from the `serial_test` crate to ensure they run one at a time. For full safety, run them with a single test thread:
+
+```bash
+cargo test --features integration sighup -- --test-threads=1
+```
+
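+In test code, the gate looks roughly like this (a sketch of the attribute combination, not a specific test from the suite):
+
+```rust
+use serial_test::serial;
+
+#[tokio::test]
+#[serial] // SIGHUP tests never overlap with each other
+async fn sighup_reload_picks_up_config_change() {
+    // send SIGHUP to the test process, then assert the new config is live
+}
+```
+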
+#### Test Structure
+
+```
+tests/integration/
+ main.rs # Feature gate + mod declarations
+ harness.rs # TestServer (async reqwest, TempDir, shutdown oneshot)
+ git_helpers.rs # Local bare repo creation + git detection
+ runtime.rs # Container runtime detection + skip macros
+ health.rs # GET /health → 200
+ auth.rs # 401: missing/invalid/malformed/empty bearer
+ not_found.rs # 404: unknown site
+ deploy.rs # Full build pipeline (Tier 2)
+ concurrent.rs # 409 via DashSet injection (Tier 1)
+ rate_limit.rs # 429 with isolated server (Tier 1)
+ logs.rs # Build log verification (Tier 2)
+ cleanup.rs # Old build cleanup (Tier 2)
+ sighup.rs # SIGHUP reload (#[serial], Tier 1)
+ overrides.rs # Build config overrides (Tier 2)
+ polling.rs # Periodic polling (#[serial], Tier 2)
+ edge_cases.rs # Path traversal, long names, etc. (Tier 1)
+ cache.rs # Cache directory persistence (Tier 2)
+ env_vars.rs # Environment variable passing (Tier 2)
+ cli_run.rs # witryna run command (Tier 2)
+ cli_status.rs # witryna status command (Tier 1)
+ hooks.rs # Post-deploy hooks (Tier 2)
+```
+
+#### Test Categories
+
+- **Core pipeline** — health, auth (401), 404, deployment (202), concurrent build rejection (409), rate limiting (429)
+- **FEAT-001** — SIGHUP config hot-reload
+- **FEAT-002** — build config overrides from `witryna.toml` (complete and partial)
+- **FEAT-003** — periodic repository polling, new commit detection
+- **OPS** — build log persistence, old build cleanup
+- **Edge cases** — path traversal, long site names, rapid SIGHUP, empty auth headers
+
+## Security
+
+### OWASP Guidelines for Endpoints
+
+Follow OWASP best practices for all HTTP endpoints:
+
+1. **Authentication & Authorization**
+ - Validate `Authorization: Bearer <token>` on every request when `webhook_token` is configured
+ - Use constant-time comparison for token validation to prevent timing attacks (sketched after this list)
+ - Reject requests with missing or malformed tokens with `401 Unauthorized`
+ - When `webhook_token` is omitted (empty), authentication is disabled for that site; a warning is logged at startup
+
+2. **Input Validation**
+ - Validate and sanitize `site_name` parameter (alphanumeric, hyphens only)
+ - Reject path traversal attempts (`../`, encoded variants)
+ - Limit request body size to prevent DoS
+
+3. **Rate Limiting**
+ - Implement rate limiting per token/IP to prevent abuse
+ - Return `429 Too Many Requests` when exceeded
+
+4. **Error Handling**
+ - Never expose internal error details in responses
+ - Log detailed errors server-side with `tracing`
+ - Return generic error messages to clients
+
+5. **Command Injection Prevention**
+ - Never interpolate user input into shell commands
+ - Use typed arguments when invoking Podman/Docker
+ - Validate repository URLs against allowlist
+
+6. **Container Security**
+ - Drop all capabilities not explicitly needed (`--cap-drop=ALL`)
+ - Default network mode is `bridge` (standard NAT networking); set to `none` for maximum isolation
+ - Configurable resource limits: `container_memory`, `container_cpus`, `container_pids_limit`
+ - Configurable working directory: `container_workdir` (relative path, no traversal)
+ - Podman: rootless via `--userns=keep-id`; Docker: `--cap-add=DAC_OVERRIDE` for workspace access
+
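+Two of the checks above (constant-time token comparison and site-name validation), sketched as standalone helpers rather than the actual handlers; the length check may branch, since it only reveals the token length:
+
+```rust
+/// Constant-time token comparison: inspect every byte, accumulate differences,
+/// and only branch on the final result.
+fn token_matches(provided: &str, expected: &str) -> bool {
+    let (a, b) = (provided.as_bytes(), expected.as_bytes());
+    if a.len() != b.len() {
+        return false;
+    }
+    a.iter().zip(b).fold(0u8, |acc, (x, y)| acc | (x ^ y)) == 0
+}
+
+/// Site names: ASCII alphanumerics and hyphens only, so no path traversal is possible.
+fn site_name_is_valid(name: &str) -> bool {
+    !name.is_empty() && name.chars().all(|c| c.is_ascii_alphanumeric() || c == '-')
+}
+```
+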
+## Conventions
+
+- Use `anyhow` for error handling with context
+- Use `tracing` macros for logging (`info!`, `debug!`, `error!`)
+- Async-first: prefer `tokio::fs` over `std::fs`
+- Use `DashSet` for concurrent build tracking
+- `SPRINT.md` is gitignored — update it after each task to track progress, but **never commit it**
+- Test functions: do **not** use the `test_` prefix — the `#[test]` attribute is sufficient
+- String conversion: use `.to_owned()` on `&str`, not `.to_string()` — reserve `.to_string()` for `Display` types
+
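+A small illustration of the error-handling, logging, and async conventions (a hypothetical function, not taken from the codebase):
+
+```rust
+use anyhow::{Context, Result};
+use tracing::info;
+
+async fn read_repo_config(path: &str) -> Result<String> {
+    let raw = tokio::fs::read_to_string(path)                // async-first: tokio::fs
+        .await
+        .with_context(|| format!("failed to read {path}"))?; // anyhow context
+    info!(bytes = raw.len(), "loaded repo config");          // tracing, not println!
+    Ok(raw)
+}
+```
+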
+## Branching
+
+- Implement each new feature or task on a **dedicated branch** named after the task ID (e.g., `cli-002-man-pages`, `pkg-001-cargo-deb`)
+- Branch from `main` before starting work: `git checkout -b <branch-name> main`
+- Keep the branch focused on a single task — do not mix unrelated changes
+- Merge back to `main` only after the task is complete and tests pass
+- Do not delete the branch until the merge is confirmed
+
+## Commit Rules
+
+**IMPORTANT:** Before completing any task, run `just test-all` to verify everything passes, then run `/commit-smart` to commit changes.
+
+- Only commit files modified in the current session
+- Use atomic commits with descriptive messages
+- Do not push unless explicitly asked
+- Always use Cargo for dependency management
+- **NEVER** touch the `.git` directory directly (no removing lock files, no manual index manipulation)
+- **NEVER** run `git reset --hard`, `git checkout .`, `git restore --staged`, or `git config`
+- Always use `git add` to stage files — do not use `git restore --staged :/` or other reset-style commands
diff --git a/CHANGELOG.md b/CHANGELOG.md
new file mode 100644
index 0000000..18cc413
--- /dev/null
+++ b/CHANGELOG.md
@@ -0,0 +1,51 @@
+# Changelog
+
+## 0.1.0 — 2026-02-10
+
+Initial release.
+
+Witryna is a minimalist Git-based static site deployment orchestrator.
+It listens for webhook triggers, pulls Git repositories, runs
+containerized build commands, and publishes static assets via atomic
+symlink switching.
+
+### Features
+
+- **HTTP webhook server** (axum) with bearer token auth, rate limiting,
+ and JSON error responses
+- **Git integration**: clone, fetch, shallow/full depth, automatic
+ submodule initialization, LFS support
+- **Containerized builds** via Podman or Docker with security hardening
+ (`--cap-drop=ALL`, `--network=none` default, resource limits)
+- **Atomic publishing** via timestamped directories and symlink switching
+- **Post-deploy hooks** with environment variables (`WITRYNA_SITE`,
+ `WITRYNA_BUILD_DIR`, `WITRYNA_PUBLIC_DIR`, `WITRYNA_BUILD_TIMESTAMP`)
+- **SIGHUP hot-reload** for adding/removing/reconfiguring sites without
+ restart
+- **Periodic polling** with configurable intervals and new-commit
+ detection
+- **Build queue** (depth-1, latest-wins) for concurrent webhook requests
+- **Per-site environment variables** passed to builds and hooks
+- **Build config overrides** in `witryna.toml` (image, command, public)
+- **Container working directory** (`container_workdir`) for monorepo
+ support
+- **Cache volumes** for persistent build caches across deploys
+- **Old build cleanup** with configurable retention
+ (`max_builds_to_keep`)
+- **Build and git timeouts** with configurable durations
+
+### CLI
+
+- `witryna serve` — start the deployment server
+- `witryna validate` — validate config and print summary
+- `witryna run <site>` — one-off synchronous build with `--verbose`
+- `witryna status` — deployment status with `--json` and `--site`
+
+### Packaging
+
+- Debian/Ubuntu `.deb` and Fedora/RHEL `.rpm` packages with systemd
+ service, man pages, and example configurations
+- Automatic container runtime detection in postinst (Docker group +
+ systemd override, or Podman subuids + lingering + override)
+- Static binary tarball for manual installs
+- Example reverse proxy configs for Caddy and nginx
diff --git a/CLAUDE.md b/CLAUDE.md
new file mode 100644
index 0000000..43c994c
--- /dev/null
+++ b/CLAUDE.md
@@ -0,0 +1 @@
+@AGENTS.md
diff --git a/Cargo.lock b/Cargo.lock
new file mode 100644
index 0000000..70e45f7
--- /dev/null
+++ b/Cargo.lock
@@ -0,0 +1,2401 @@
+# This file is automatically @generated by Cargo.
+# It is not intended for manual editing.
+version = 4
+
+[[package]]
+name = "aho-corasick"
+version = "1.1.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ddd31a130427c27518df266943a5308ed92d4b226cc639f5a8f1002816174301"
+dependencies = [
+ "memchr",
+]
+
+[[package]]
+name = "android_system_properties"
+version = "0.1.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "819e7219dbd41043ac279b19830f2efc897156490d7fd6ea916720117ee66311"
+dependencies = [
+ "libc",
+]
+
+[[package]]
+name = "anstream"
+version = "0.6.21"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "43d5b281e737544384e969a5ccad3f1cdd24b48086a0fc1b2a5262a26b8f4f4a"
+dependencies = [
+ "anstyle",
+ "anstyle-parse",
+ "anstyle-query",
+ "anstyle-wincon",
+ "colorchoice",
+ "is_terminal_polyfill",
+ "utf8parse",
+]
+
+[[package]]
+name = "anstyle"
+version = "1.0.13"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5192cca8006f1fd4f7237516f40fa183bb07f8fbdfedaa0036de5ea9b0b45e78"
+
+[[package]]
+name = "anstyle-parse"
+version = "0.2.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "4e7644824f0aa2c7b9384579234ef10eb7efb6a0deb83f9630a49594dd9c15c2"
+dependencies = [
+ "utf8parse",
+]
+
+[[package]]
+name = "anstyle-query"
+version = "1.1.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "40c48f72fd53cd289104fc64099abca73db4166ad86ea0b4341abe65af83dadc"
+dependencies = [
+ "windows-sys 0.61.2",
+]
+
+[[package]]
+name = "anstyle-wincon"
+version = "3.0.11"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "291e6a250ff86cd4a820112fb8898808a366d8f9f58ce16d1f538353ad55747d"
+dependencies = [
+ "anstyle",
+ "once_cell_polyfill",
+ "windows-sys 0.61.2",
+]
+
+[[package]]
+name = "anyhow"
+version = "1.0.100"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a23eb6b1614318a8071c9b2521f36b424b2c83db5eb3a0fead4a6c0809af6e61"
+
+[[package]]
+name = "atomic-waker"
+version = "1.1.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1505bd5d3d116872e7271a6d4e16d81d0c8570876c8de68093a09ac269d8aac0"
+
+[[package]]
+name = "autocfg"
+version = "1.5.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c08606f8c3cbf4ce6ec8e28fb0014a2c086708fe954eaa885384a6165172e7e8"
+
+[[package]]
+name = "axum"
+version = "0.8.8"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8b52af3cb4058c895d37317bb27508dccc8e5f2d39454016b297bf4a400597b8"
+dependencies = [
+ "axum-core",
+ "bytes",
+ "form_urlencoded",
+ "futures-util",
+ "http",
+ "http-body",
+ "http-body-util",
+ "hyper",
+ "hyper-util",
+ "itoa",
+ "matchit",
+ "memchr",
+ "mime",
+ "percent-encoding",
+ "pin-project-lite",
+ "serde_core",
+ "serde_json",
+ "serde_path_to_error",
+ "serde_urlencoded",
+ "sync_wrapper",
+ "tokio",
+ "tower",
+ "tower-layer",
+ "tower-service",
+ "tracing",
+]
+
+[[package]]
+name = "axum-core"
+version = "0.5.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "08c78f31d7b1291f7ee735c1c6780ccde7785daae9a9206026862dab7d8792d1"
+dependencies = [
+ "bytes",
+ "futures-core",
+ "http",
+ "http-body",
+ "http-body-util",
+ "mime",
+ "pin-project-lite",
+ "sync_wrapper",
+ "tower-layer",
+ "tower-service",
+ "tracing",
+]
+
+[[package]]
+name = "base64"
+version = "0.22.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6"
+
+[[package]]
+name = "bitflags"
+version = "2.10.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "812e12b5285cc515a9c72a5c1d3b6d46a19dac5acfef5265968c166106e31dd3"
+
+[[package]]
+name = "bumpalo"
+version = "3.19.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5dd9dc738b7a8311c7ade152424974d8115f2cdad61e8dab8dac9f2362298510"
+
+[[package]]
+name = "bytes"
+version = "1.11.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1e748733b7cbc798e1434b6ac524f0c1ff2ab456fe201501e6497c8417a4fc33"
+
+[[package]]
+name = "cc"
+version = "1.2.53"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "755d2fce177175ffca841e9a06afdb2c4ab0f593d53b4dee48147dfaade85932"
+dependencies = [
+ "find-msvc-tools",
+ "shlex",
+]
+
+[[package]]
+name = "cfg-if"
+version = "1.0.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9330f8b2ff13f34540b44e946ef35111825727b38d33286ef986142615121801"
+
+[[package]]
+name = "cfg_aliases"
+version = "0.2.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724"
+
+[[package]]
+name = "chrono"
+version = "0.4.43"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "fac4744fb15ae8337dc853fee7fb3f4e48c0fbaa23d0afe49c447b4fab126118"
+dependencies = [
+ "iana-time-zone",
+ "js-sys",
+ "num-traits",
+ "wasm-bindgen",
+ "windows-link",
+]
+
+[[package]]
+name = "clap"
+version = "4.5.56"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a75ca66430e33a14957acc24c5077b503e7d374151b2b4b3a10c83b4ceb4be0e"
+dependencies = [
+ "clap_builder",
+ "clap_derive",
+]
+
+[[package]]
+name = "clap_builder"
+version = "4.5.56"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "793207c7fa6300a0608d1080b858e5fdbe713cdc1c8db9fb17777d8a13e63df0"
+dependencies = [
+ "anstream",
+ "anstyle",
+ "clap_lex",
+ "strsim",
+]
+
+[[package]]
+name = "clap_derive"
+version = "4.5.55"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a92793da1a46a5f2a02a6f4c46c6496b28c43638adea8306fcb0caa1634f24e5"
+dependencies = [
+ "heck",
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
+[[package]]
+name = "clap_lex"
+version = "0.7.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c3e64b0cc0439b12df2fa678eae89a1c56a529fd067a9115f7827f1fffd22b32"
+
+[[package]]
+name = "clap_mangen"
+version = "0.2.31"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "439ea63a92086df93893164221ad4f24142086d535b3a0957b9b9bea2dc86301"
+dependencies = [
+ "clap",
+ "roff",
+]
+
+[[package]]
+name = "colorchoice"
+version = "1.0.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b05b61dc5112cbb17e4b6cd61790d9845d13888356391624cbe7e41efeac1e75"
+
+[[package]]
+name = "core-foundation"
+version = "0.9.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "91e195e091a93c46f7102ec7818a2aa394e1e1771c3ab4825963fa03e45afb8f"
+dependencies = [
+ "core-foundation-sys",
+ "libc",
+]
+
+[[package]]
+name = "core-foundation-sys"
+version = "0.8.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "773648b94d0e5d620f64f280777445740e61fe701025087ec8b57f45c791888b"
+
+[[package]]
+name = "crossbeam-utils"
+version = "0.8.21"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d0a5c400df2834b80a4c3327b3aad3a4c4cd4de0629063962b03235697506a28"
+
+[[package]]
+name = "dashmap"
+version = "6.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5041cc499144891f3790297212f32a74fb938e5136a14943f338ef9e0ae276cf"
+dependencies = [
+ "cfg-if",
+ "crossbeam-utils",
+ "hashbrown 0.14.5",
+ "lock_api",
+ "once_cell",
+ "parking_lot_core",
+]
+
+[[package]]
+name = "displaydoc"
+version = "0.2.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "97369cbbc041bc366949bc74d34658d6cda5621039731c6310521892a3a20ae0"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
+[[package]]
+name = "encoding_rs"
+version = "0.8.35"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "75030f3c4f45dafd7586dd6780965a8c7e8e285a5ecb86713e63a79c5b2766f3"
+dependencies = [
+ "cfg-if",
+]
+
+[[package]]
+name = "equivalent"
+version = "1.0.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "877a4ace8713b0bcf2a4e7eec82529c029f1d0619886d18145fea96c3ffe5c0f"
+
+[[package]]
+name = "errno"
+version = "0.3.14"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb"
+dependencies = [
+ "libc",
+ "windows-sys 0.61.2",
+]
+
+[[package]]
+name = "fastrand"
+version = "2.3.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"
+
+[[package]]
+name = "find-msvc-tools"
+version = "0.1.8"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8591b0bcc8a98a64310a2fae1bb3e9b8564dd10e381e6e28010fde8e8e8568db"
+
+[[package]]
+name = "fnv"
+version = "1.0.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3f9eec918d3f24069decb9af1554cad7c880e2da24a9afd88aca000531ab82c1"
+
+[[package]]
+name = "foreign-types"
+version = "0.3.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f6f339eb8adc052cd2ca78910fda869aefa38d22d5cb648e6485e4d3fc06f3b1"
+dependencies = [
+ "foreign-types-shared",
+]
+
+[[package]]
+name = "foreign-types-shared"
+version = "0.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "00b0228411908ca8685dba7fc2cdd70ec9990a6e753e89b6ac91a84c40fbaf4b"
+
+[[package]]
+name = "form_urlencoded"
+version = "1.2.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "cb4cb245038516f5f85277875cdaa4f7d2c9a0fa0468de06ed190163b1581fcf"
+dependencies = [
+ "percent-encoding",
+]
+
+[[package]]
+name = "futures-channel"
+version = "0.3.31"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2dff15bf788c671c1934e366d07e30c1814a8ef514e1af724a602e8a2fbe1b10"
+dependencies = [
+ "futures-core",
+]
+
+[[package]]
+name = "futures-core"
+version = "0.3.31"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "05f29059c0c2090612e8d742178b0580d2dc940c837851ad723096f87af6663e"
+
+[[package]]
+name = "futures-executor"
+version = "0.3.31"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1e28d1d997f585e54aebc3f97d39e72338912123a67330d723fdbb564d646c9f"
+dependencies = [
+ "futures-core",
+ "futures-task",
+ "futures-util",
+]
+
+[[package]]
+name = "futures-sink"
+version = "0.3.31"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e575fab7d1e0dcb8d0c7bcf9a63ee213816ab51902e6d244a95819acacf1d4f7"
+
+[[package]]
+name = "futures-task"
+version = "0.3.31"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f90f7dce0722e95104fcb095585910c0977252f286e354b5e3bd38902cd99988"
+
+[[package]]
+name = "futures-timer"
+version = "3.0.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f288b0a4f20f9a56b5d1da57e2227c661b7b16168e2f72365f57b63326e29b24"
+
+[[package]]
+name = "futures-util"
+version = "0.3.31"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9fa08315bb612088cc391249efdc3bc77536f16c91f6cf495e6fbe85b20a4a81"
+dependencies = [
+ "futures-core",
+ "futures-sink",
+ "futures-task",
+ "pin-project-lite",
+ "pin-utils",
+ "slab",
+]
+
+[[package]]
+name = "getrandom"
+version = "0.2.17"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ff2abc00be7fca6ebc474524697ae276ad847ad0a6b3faa4bcb027e9a4614ad0"
+dependencies = [
+ "cfg-if",
+ "libc",
+ "wasi",
+]
+
+[[package]]
+name = "getrandom"
+version = "0.3.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "899def5c37c4fd7b2664648c28120ecec138e4d395b459e5ca34f9cce2dd77fd"
+dependencies = [
+ "cfg-if",
+ "js-sys",
+ "libc",
+ "r-efi",
+ "wasip2",
+ "wasm-bindgen",
+]
+
+[[package]]
+name = "governor"
+version = "0.8.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "be93b4ec2e4710b04d9264c0c7350cdd62a8c20e5e4ac732552ebb8f0debe8eb"
+dependencies = [
+ "cfg-if",
+ "dashmap",
+ "futures-sink",
+ "futures-timer",
+ "futures-util",
+ "getrandom 0.3.4",
+ "no-std-compat",
+ "nonzero_ext",
+ "parking_lot",
+ "portable-atomic",
+ "quanta",
+ "rand",
+ "smallvec",
+ "spinning_top",
+ "web-time",
+]
+
+[[package]]
+name = "h2"
+version = "0.4.13"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2f44da3a8150a6703ed5d34e164b875fd14c2cdab9af1252a9a1020bde2bdc54"
+dependencies = [
+ "atomic-waker",
+ "bytes",
+ "fnv",
+ "futures-core",
+ "futures-sink",
+ "http",
+ "indexmap",
+ "slab",
+ "tokio",
+ "tokio-util",
+ "tracing",
+]
+
+[[package]]
+name = "hashbrown"
+version = "0.14.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e5274423e17b7c9fc20b6e7e208532f9b19825d82dfd615708b70edd83df41f1"
+
+[[package]]
+name = "hashbrown"
+version = "0.16.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "841d1cc9bed7f9236f321df977030373f4a4163ae1a7dbfe1a51a2c1a51d9100"
+
+[[package]]
+name = "heck"
+version = "0.5.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"
+
+[[package]]
+name = "http"
+version = "1.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e3ba2a386d7f85a81f119ad7498ebe444d2e22c2af0b86b069416ace48b3311a"
+dependencies = [
+ "bytes",
+ "itoa",
+]
+
+[[package]]
+name = "http-body"
+version = "1.0.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1efedce1fb8e6913f23e0c92de8e62cd5b772a67e7b3946df930a62566c93184"
+dependencies = [
+ "bytes",
+ "http",
+]
+
+[[package]]
+name = "http-body-util"
+version = "0.1.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b021d93e26becf5dc7e1b75b1bed1fd93124b374ceb73f43d4d4eafec896a64a"
+dependencies = [
+ "bytes",
+ "futures-core",
+ "http",
+ "http-body",
+ "pin-project-lite",
+]
+
+[[package]]
+name = "httparse"
+version = "1.10.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6dbf3de79e51f3d586ab4cb9d5c3e2c14aa28ed23d180cf89b4df0454a69cc87"
+
+[[package]]
+name = "httpdate"
+version = "1.0.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9"
+
+[[package]]
+name = "humantime"
+version = "2.3.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "135b12329e5e3ce057a9f972339ea52bc954fe1e9358ef27f95e89716fbc5424"
+
+[[package]]
+name = "hyper"
+version = "1.8.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2ab2d4f250c3d7b1c9fcdff1cece94ea4e2dfbec68614f7b87cb205f24ca9d11"
+dependencies = [
+ "atomic-waker",
+ "bytes",
+ "futures-channel",
+ "futures-core",
+ "h2",
+ "http",
+ "http-body",
+ "httparse",
+ "httpdate",
+ "itoa",
+ "pin-project-lite",
+ "pin-utils",
+ "smallvec",
+ "tokio",
+ "want",
+]
+
+[[package]]
+name = "hyper-rustls"
+version = "0.27.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e3c93eb611681b207e1fe55d5a71ecf91572ec8a6705cdb6857f7d8d5242cf58"
+dependencies = [
+ "http",
+ "hyper",
+ "hyper-util",
+ "rustls",
+ "rustls-pki-types",
+ "tokio",
+ "tokio-rustls",
+ "tower-service",
+]
+
+[[package]]
+name = "hyper-tls"
+version = "0.6.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "70206fc6890eaca9fde8a0bf71caa2ddfc9fe045ac9e5c70df101a7dbde866e0"
+dependencies = [
+ "bytes",
+ "http-body-util",
+ "hyper",
+ "hyper-util",
+ "native-tls",
+ "tokio",
+ "tokio-native-tls",
+ "tower-service",
+]
+
+[[package]]
+name = "hyper-util"
+version = "0.1.19"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "727805d60e7938b76b826a6ef209eb70eaa1812794f9424d4a4e2d740662df5f"
+dependencies = [
+ "base64",
+ "bytes",
+ "futures-channel",
+ "futures-core",
+ "futures-util",
+ "http",
+ "http-body",
+ "hyper",
+ "ipnet",
+ "libc",
+ "percent-encoding",
+ "pin-project-lite",
+ "socket2",
+ "system-configuration",
+ "tokio",
+ "tower-service",
+ "tracing",
+ "windows-registry",
+]
+
+[[package]]
+name = "iana-time-zone"
+version = "0.1.64"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "33e57f83510bb73707521ebaffa789ec8caf86f9657cad665b092b581d40e9fb"
+dependencies = [
+ "android_system_properties",
+ "core-foundation-sys",
+ "iana-time-zone-haiku",
+ "js-sys",
+ "log",
+ "wasm-bindgen",
+ "windows-core",
+]
+
+[[package]]
+name = "iana-time-zone-haiku"
+version = "0.1.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f31827a206f56af32e590ba56d5d2d085f558508192593743f16b2306495269f"
+dependencies = [
+ "cc",
+]
+
+[[package]]
+name = "icu_collections"
+version = "2.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "4c6b649701667bbe825c3b7e6388cb521c23d88644678e83c0c4d0a621a34b43"
+dependencies = [
+ "displaydoc",
+ "potential_utf",
+ "yoke",
+ "zerofrom",
+ "zerovec",
+]
+
+[[package]]
+name = "icu_locale_core"
+version = "2.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "edba7861004dd3714265b4db54a3c390e880ab658fec5f7db895fae2046b5bb6"
+dependencies = [
+ "displaydoc",
+ "litemap",
+ "tinystr",
+ "writeable",
+ "zerovec",
+]
+
+[[package]]
+name = "icu_normalizer"
+version = "2.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5f6c8828b67bf8908d82127b2054ea1b4427ff0230ee9141c54251934ab1b599"
+dependencies = [
+ "icu_collections",
+ "icu_normalizer_data",
+ "icu_properties",
+ "icu_provider",
+ "smallvec",
+ "zerovec",
+]
+
+[[package]]
+name = "icu_normalizer_data"
+version = "2.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7aedcccd01fc5fe81e6b489c15b247b8b0690feb23304303a9e560f37efc560a"
+
+[[package]]
+name = "icu_properties"
+version = "2.1.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "020bfc02fe870ec3a66d93e677ccca0562506e5872c650f893269e08615d74ec"
+dependencies = [
+ "icu_collections",
+ "icu_locale_core",
+ "icu_properties_data",
+ "icu_provider",
+ "zerotrie",
+ "zerovec",
+]
+
+[[package]]
+name = "icu_properties_data"
+version = "2.1.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "616c294cf8d725c6afcd8f55abc17c56464ef6211f9ed59cccffe534129c77af"
+
+[[package]]
+name = "icu_provider"
+version = "2.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "85962cf0ce02e1e0a629cc34e7ca3e373ce20dda4c4d7294bbd0bf1fdb59e614"
+dependencies = [
+ "displaydoc",
+ "icu_locale_core",
+ "writeable",
+ "yoke",
+ "zerofrom",
+ "zerotrie",
+ "zerovec",
+]
+
+[[package]]
+name = "idna"
+version = "1.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3b0875f23caa03898994f6ddc501886a45c7d3d62d04d2d90788d47be1b1e4de"
+dependencies = [
+ "idna_adapter",
+ "smallvec",
+ "utf8_iter",
+]
+
+[[package]]
+name = "idna_adapter"
+version = "1.2.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3acae9609540aa318d1bc588455225fb2085b9ed0c4f6bd0d9d5bcd86f1a0344"
+dependencies = [
+ "icu_normalizer",
+ "icu_properties",
+]
+
+[[package]]
+name = "indexmap"
+version = "2.13.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7714e70437a7dc3ac8eb7e6f8df75fd8eb422675fc7678aff7364301092b1017"
+dependencies = [
+ "equivalent",
+ "hashbrown 0.16.1",
+]
+
+[[package]]
+name = "ipnet"
+version = "2.11.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "469fb0b9cefa57e3ef31275ee7cacb78f2fdca44e4765491884a2b119d4eb130"
+
+[[package]]
+name = "iri-string"
+version = "0.7.10"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c91338f0783edbd6195decb37bae672fd3b165faffb89bf7b9e6942f8b1a731a"
+dependencies = [
+ "memchr",
+ "serde",
+]
+
+[[package]]
+name = "is_terminal_polyfill"
+version = "1.70.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a6cb138bb79a146c1bd460005623e142ef0181e3d0219cb493e02f7d08a35695"
+
+[[package]]
+name = "itoa"
+version = "1.0.17"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "92ecc6618181def0457392ccd0ee51198e065e016d1d527a7ac1b6dc7c1f09d2"
+
+[[package]]
+name = "js-sys"
+version = "0.3.85"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8c942ebf8e95485ca0d52d97da7c5a2c387d0e7f0ba4c35e93bfcaee045955b3"
+dependencies = [
+ "once_cell",
+ "wasm-bindgen",
+]
+
+[[package]]
+name = "lazy_static"
+version = "1.5.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe"
+
+[[package]]
+name = "libc"
+version = "0.2.180"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "bcc35a38544a891a5f7c865aca548a982ccb3b8650a5b06d0fd33a10283c56fc"
+
+[[package]]
+name = "linux-raw-sys"
+version = "0.11.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "df1d3c3b53da64cf5760482273a98e575c651a67eec7f77df96b5b642de8f039"
+
+[[package]]
+name = "litemap"
+version = "0.8.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6373607a59f0be73a39b6fe456b8192fcc3585f602af20751600e974dd455e77"
+
+[[package]]
+name = "lock_api"
+version = "0.4.14"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "224399e74b87b5f3557511d98dff8b14089b3dadafcab6bb93eab67d3aace965"
+dependencies = [
+ "scopeguard",
+]
+
+[[package]]
+name = "log"
+version = "0.4.29"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5e5032e24019045c762d3c0f28f5b6b8bbf38563a65908389bf7978758920897"
+
+[[package]]
+name = "matchers"
+version = "0.2.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d1525a2a28c7f4fa0fc98bb91ae755d1e2d1505079e05539e35bc876b5d65ae9"
+dependencies = [
+ "regex-automata",
+]
+
+[[package]]
+name = "matchit"
+version = "0.8.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "47e1ffaa40ddd1f3ed91f717a33c8c0ee23fff369e3aa8772b9605cc1d22f4c3"
+
+[[package]]
+name = "memchr"
+version = "2.7.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f52b00d39961fc5b2736ea853c9cc86238e165017a493d1d5c8eac6bdc4cc273"
+
+[[package]]
+name = "mime"
+version = "0.3.17"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6877bb514081ee2a7ff5ef9de3281f14a4dd4bceac4c09388074a6b5df8a139a"
+
+[[package]]
+name = "mio"
+version = "1.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a69bcab0ad47271a0234d9422b131806bf3968021e5dc9328caf2d4cd58557fc"
+dependencies = [
+ "libc",
+ "wasi",
+ "windows-sys 0.61.2",
+]
+
+[[package]]
+name = "native-tls"
+version = "0.2.14"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "87de3442987e9dbec73158d5c715e7ad9072fda936bb03d19d7fa10e00520f0e"
+dependencies = [
+ "libc",
+ "log",
+ "openssl",
+ "openssl-probe",
+ "openssl-sys",
+ "schannel",
+ "security-framework",
+ "security-framework-sys",
+ "tempfile",
+]
+
+[[package]]
+name = "nix"
+version = "0.29.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "71e2746dc3a24dd78b3cfcb7be93368c6de9963d30f43a6a73998a9cf4b17b46"
+dependencies = [
+ "bitflags",
+ "cfg-if",
+ "cfg_aliases",
+ "libc",
+]
+
+[[package]]
+name = "no-std-compat"
+version = "0.4.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b93853da6d84c2e3c7d730d6473e8817692dd89be387eb01b94d7f108ecb5b8c"
+
+[[package]]
+name = "nonzero_ext"
+version = "0.3.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "38bf9645c8b145698bb0b18a4637dcacbc421ea49bef2317e4fd8065a387cf21"
+
+[[package]]
+name = "nu-ansi-term"
+version = "0.50.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7957b9740744892f114936ab4a57b3f487491bbeafaf8083688b16841a4240e5"
+dependencies = [
+ "windows-sys 0.61.2",
+]
+
+[[package]]
+name = "num-traits"
+version = "0.2.19"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "071dfc062690e90b734c0b2273ce72ad0ffa95f0c74596bc250dcfd960262841"
+dependencies = [
+ "autocfg",
+]
+
+[[package]]
+name = "once_cell"
+version = "1.21.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d"
+
+[[package]]
+name = "once_cell_polyfill"
+version = "1.70.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "384b8ab6d37215f3c5301a95a4accb5d64aa607f1fcb26a11b5303878451b4fe"
+
+[[package]]
+name = "openssl"
+version = "0.10.75"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "08838db121398ad17ab8531ce9de97b244589089e290a384c900cb9ff7434328"
+dependencies = [
+ "bitflags",
+ "cfg-if",
+ "foreign-types",
+ "libc",
+ "once_cell",
+ "openssl-macros",
+ "openssl-sys",
+]
+
+[[package]]
+name = "openssl-macros"
+version = "0.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a948666b637a0f465e8564c73e89d4dde00d72d4d473cc972f390fc3dcee7d9c"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
+[[package]]
+name = "openssl-probe"
+version = "0.1.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d05e27ee213611ffe7d6348b942e8f942b37114c00cc03cec254295a4a17852e"
+
+[[package]]
+name = "openssl-sys"
+version = "0.9.111"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "82cab2d520aa75e3c58898289429321eb788c3106963d0dc886ec7a5f4adc321"
+dependencies = [
+ "cc",
+ "libc",
+ "pkg-config",
+ "vcpkg",
+]
+
+[[package]]
+name = "parking_lot"
+version = "0.12.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "93857453250e3077bd71ff98b6a65ea6621a19bb0f559a85248955ac12c45a1a"
+dependencies = [
+ "lock_api",
+ "parking_lot_core",
+]
+
+[[package]]
+name = "parking_lot_core"
+version = "0.9.12"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2621685985a2ebf1c516881c026032ac7deafcda1a2c9b7850dc81e3dfcb64c1"
+dependencies = [
+ "cfg-if",
+ "libc",
+ "redox_syscall",
+ "smallvec",
+ "windows-link",
+]
+
+[[package]]
+name = "percent-encoding"
+version = "2.3.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9b4f627cb1b25917193a259e49bdad08f671f8d9708acfd5fe0a8c1455d87220"
+
+[[package]]
+name = "pin-project-lite"
+version = "0.2.16"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3b3cff922bd51709b605d9ead9aa71031d81447142d828eb4a6eba76fe619f9b"
+
+[[package]]
+name = "pin-utils"
+version = "0.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184"
+
+[[package]]
+name = "pkg-config"
+version = "0.3.32"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7edddbd0b52d732b21ad9a5fab5c704c14cd949e5e9a1ec5929a24fded1b904c"
+
+[[package]]
+name = "portable-atomic"
+version = "1.13.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f89776e4d69bb58bc6993e99ffa1d11f228b839984854c7daeb5d37f87cbe950"
+
+[[package]]
+name = "potential_utf"
+version = "0.1.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b73949432f5e2a09657003c25bca5e19a0e9c84f8058ca374f49e0ebe605af77"
+dependencies = [
+ "zerovec",
+]
+
+[[package]]
+name = "ppv-lite86"
+version = "0.2.21"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "85eae3c4ed2f50dcfe72643da4befc30deadb458a9b590d720cde2f2b1e97da9"
+dependencies = [
+ "zerocopy",
+]
+
+[[package]]
+name = "proc-macro2"
+version = "1.0.106"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8fd00f0bb2e90d81d1044c2b32617f68fcb9fa3bb7640c23e9c748e53fb30934"
+dependencies = [
+ "unicode-ident",
+]
+
+[[package]]
+name = "quanta"
+version = "0.12.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f3ab5a9d756f0d97bdc89019bd2e4ea098cf9cde50ee7564dde6b81ccc8f06c7"
+dependencies = [
+ "crossbeam-utils",
+ "libc",
+ "once_cell",
+ "raw-cpuid",
+ "wasi",
+ "web-sys",
+ "winapi",
+]
+
+[[package]]
+name = "quote"
+version = "1.0.43"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "dc74d9a594b72ae6656596548f56f667211f8a97b3d4c3d467150794690dc40a"
+dependencies = [
+ "proc-macro2",
+]
+
+[[package]]
+name = "r-efi"
+version = "5.3.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "69cdb34c158ceb288df11e18b4bd39de994f6657d83847bdffdbd7f346754b0f"
+
+[[package]]
+name = "rand"
+version = "0.9.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6db2770f06117d490610c7488547d543617b21bfa07796d7a12f6f1bd53850d1"
+dependencies = [
+ "rand_chacha",
+ "rand_core",
+]
+
+[[package]]
+name = "rand_chacha"
+version = "0.9.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d3022b5f1df60f26e1ffddd6c66e8aa15de382ae63b3a0c1bfc0e4d3e3f325cb"
+dependencies = [
+ "ppv-lite86",
+ "rand_core",
+]
+
+[[package]]
+name = "rand_core"
+version = "0.9.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "76afc826de14238e6e8c374ddcc1fa19e374fd8dd986b0d2af0d02377261d83c"
+dependencies = [
+ "getrandom 0.3.4",
+]
+
+[[package]]
+name = "raw-cpuid"
+version = "11.6.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "498cd0dc59d73224351ee52a95fee0f1a617a2eae0e7d9d720cc622c73a54186"
+dependencies = [
+ "bitflags",
+]
+
+[[package]]
+name = "redox_syscall"
+version = "0.5.18"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ed2bf2547551a7053d6fdfafda3f938979645c44812fbfcda098faae3f1a362d"
+dependencies = [
+ "bitflags",
+]
+
+[[package]]
+name = "regex-automata"
+version = "0.4.13"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5276caf25ac86c8d810222b3dbb938e512c55c6831a10f3e6ed1c93b84041f1c"
+dependencies = [
+ "aho-corasick",
+ "memchr",
+ "regex-syntax",
+]
+
+[[package]]
+name = "regex-syntax"
+version = "0.8.8"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7a2d987857b319362043e95f5353c0535c1f58eec5336fdfcf626430af7def58"
+
+[[package]]
+name = "reqwest"
+version = "0.12.28"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "eddd3ca559203180a307f12d114c268abf583f59b03cb906fd0b3ff8646c1147"
+dependencies = [
+ "base64",
+ "bytes",
+ "encoding_rs",
+ "futures-core",
+ "h2",
+ "http",
+ "http-body",
+ "http-body-util",
+ "hyper",
+ "hyper-rustls",
+ "hyper-tls",
+ "hyper-util",
+ "js-sys",
+ "log",
+ "mime",
+ "native-tls",
+ "percent-encoding",
+ "pin-project-lite",
+ "rustls-pki-types",
+ "serde",
+ "serde_json",
+ "serde_urlencoded",
+ "sync_wrapper",
+ "tokio",
+ "tokio-native-tls",
+ "tower",
+ "tower-http",
+ "tower-service",
+ "url",
+ "wasm-bindgen",
+ "wasm-bindgen-futures",
+ "web-sys",
+]
+
+[[package]]
+name = "ring"
+version = "0.17.14"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a4689e6c2294d81e88dc6261c768b63bc4fcdb852be6d1352498b114f61383b7"
+dependencies = [
+ "cc",
+ "cfg-if",
+ "getrandom 0.2.17",
+ "libc",
+ "untrusted",
+ "windows-sys 0.52.0",
+]
+
+[[package]]
+name = "roff"
+version = "0.2.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "88f8660c1ff60292143c98d08fc6e2f654d722db50410e3f3797d40baaf9d8f3"
+
+[[package]]
+name = "rustix"
+version = "1.1.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "146c9e247ccc180c1f61615433868c99f3de3ae256a30a43b49f67c2d9171f34"
+dependencies = [
+ "bitflags",
+ "errno",
+ "libc",
+ "linux-raw-sys",
+ "windows-sys 0.61.2",
+]
+
+[[package]]
+name = "rustls"
+version = "0.23.36"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c665f33d38cea657d9614f766881e4d510e0eda4239891eea56b4cadcf01801b"
+dependencies = [
+ "once_cell",
+ "rustls-pki-types",
+ "rustls-webpki",
+ "subtle",
+ "zeroize",
+]
+
+[[package]]
+name = "rustls-pki-types"
+version = "1.14.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "be040f8b0a225e40375822a563fa9524378b9d63112f53e19ffff34df5d33fdd"
+dependencies = [
+ "zeroize",
+]
+
+[[package]]
+name = "rustls-webpki"
+version = "0.103.9"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d7df23109aa6c1567d1c575b9952556388da57401e4ace1d15f79eedad0d8f53"
+dependencies = [
+ "ring",
+ "rustls-pki-types",
+ "untrusted",
+]
+
+[[package]]
+name = "rustversion"
+version = "1.0.22"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b39cdef0fa800fc44525c84ccb54a029961a8215f9619753635a9c0d2538d46d"
+
+[[package]]
+name = "ryu"
+version = "1.0.22"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a50f4cf475b65d88e057964e0e9bb1f0aa9bbb2036dc65c64596b42932536984"
+
+[[package]]
+name = "scc"
+version = "2.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "46e6f046b7fef48e2660c57ed794263155d713de679057f2d0c169bfc6e756cc"
+dependencies = [
+ "sdd",
+]
+
+[[package]]
+name = "schannel"
+version = "0.1.28"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "891d81b926048e76efe18581bf793546b4c0eaf8448d72be8de2bbee5fd166e1"
+dependencies = [
+ "windows-sys 0.61.2",
+]
+
+[[package]]
+name = "scopeguard"
+version = "1.2.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
+
+[[package]]
+name = "sdd"
+version = "3.0.10"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "490dcfcbfef26be6800d11870ff2df8774fa6e86d047e3e8c8a76b25655e41ca"
+
+[[package]]
+name = "security-framework"
+version = "2.11.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "897b2245f0b511c87893af39b033e5ca9cce68824c4d7e7630b5a1d339658d02"
+dependencies = [
+ "bitflags",
+ "core-foundation",
+ "core-foundation-sys",
+ "libc",
+ "security-framework-sys",
+]
+
+[[package]]
+name = "security-framework-sys"
+version = "2.15.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "cc1f0cbffaac4852523ce30d8bd3c5cdc873501d96ff467ca09b6767bb8cd5c0"
+dependencies = [
+ "core-foundation-sys",
+ "libc",
+]
+
+[[package]]
+name = "serde"
+version = "1.0.228"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9a8e94ea7f378bd32cbbd37198a4a91436180c5bb472411e48b5ec2e2124ae9e"
+dependencies = [
+ "serde_core",
+ "serde_derive",
+]
+
+[[package]]
+name = "serde_core"
+version = "1.0.228"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "41d385c7d4ca58e59fc732af25c3983b67ac852c1a25000afe1175de458b67ad"
+dependencies = [
+ "serde_derive",
+]
+
+[[package]]
+name = "serde_derive"
+version = "1.0.228"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d540f220d3187173da220f885ab66608367b6574e925011a9353e4badda91d79"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
+[[package]]
+name = "serde_json"
+version = "1.0.149"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "83fc039473c5595ace860d8c4fafa220ff474b3fc6bfdb4293327f1a37e94d86"
+dependencies = [
+ "itoa",
+ "memchr",
+ "serde",
+ "serde_core",
+ "zmij",
+]
+
+[[package]]
+name = "serde_path_to_error"
+version = "0.1.20"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "10a9ff822e371bb5403e391ecd83e182e0e77ba7f6fe0160b795797109d1b457"
+dependencies = [
+ "itoa",
+ "serde",
+ "serde_core",
+]
+
+[[package]]
+name = "serde_spanned"
+version = "1.0.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f8bbf91e5a4d6315eee45e704372590b30e260ee83af6639d64557f51b067776"
+dependencies = [
+ "serde_core",
+]
+
+[[package]]
+name = "serde_urlencoded"
+version = "0.7.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d3491c14715ca2294c4d6a88f15e84739788c1d030eed8c110436aafdaa2f3fd"
+dependencies = [
+ "form_urlencoded",
+ "itoa",
+ "ryu",
+ "serde",
+]
+
+[[package]]
+name = "serde_yaml_ng"
+version = "0.10.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7b4db627b98b36d4203a7b458cf3573730f2bb591b28871d916dfa9efabfd41f"
+dependencies = [
+ "indexmap",
+ "itoa",
+ "ryu",
+ "serde",
+ "unsafe-libyaml",
+]
+
+[[package]]
+name = "serial_test"
+version = "3.3.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "0d0b343e184fc3b7bb44dff0705fffcf4b3756ba6aff420dddd8b24ca145e555"
+dependencies = [
+ "futures-executor",
+ "futures-util",
+ "log",
+ "once_cell",
+ "parking_lot",
+ "scc",
+ "serial_test_derive",
+]
+
+[[package]]
+name = "serial_test_derive"
+version = "3.3.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6f50427f258fb77356e4cd4aa0e87e2bd2c66dbcee41dc405282cae2bfc26c83"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
+[[package]]
+name = "sharded-slab"
+version = "0.1.7"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f40ca3c46823713e0d4209592e8d6e826aa57e928f09752619fc696c499637f6"
+dependencies = [
+ "lazy_static",
+]
+
+[[package]]
+name = "shlex"
+version = "1.3.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"
+
+[[package]]
+name = "signal-hook-registry"
+version = "1.4.8"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c4db69cba1110affc0e9f7bcd48bbf87b3f4fc7c61fc9155afd4c469eb3d6c1b"
+dependencies = [
+ "errno",
+ "libc",
+]
+
+[[package]]
+name = "slab"
+version = "0.4.11"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7a2ae44ef20feb57a68b23d846850f861394c2e02dc425a50098ae8c90267589"
+
+[[package]]
+name = "smallvec"
+version = "1.15.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "67b1b7a3b5fe4f1376887184045fcf45c69e92af734b7aaddc05fb777b6fbd03"
+
+[[package]]
+name = "socket2"
+version = "0.6.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "17129e116933cf371d018bb80ae557e889637989d8638274fb25622827b03881"
+dependencies = [
+ "libc",
+ "windows-sys 0.60.2",
+]
+
+[[package]]
+name = "spinning_top"
+version = "0.3.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d96d2d1d716fb500937168cc09353ffdc7a012be8475ac7308e1bdf0e3923300"
+dependencies = [
+ "lock_api",
+]
+
+[[package]]
+name = "stable_deref_trait"
+version = "1.2.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6ce2be8dc25455e1f91df71bfa12ad37d7af1092ae736f3a6cd0e37bc7810596"
+
+[[package]]
+name = "strsim"
+version = "0.11.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f"
+
+[[package]]
+name = "subtle"
+version = "2.6.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "13c2bddecc57b384dee18652358fb23172facb8a2c51ccc10d74c157bdea3292"
+
+[[package]]
+name = "syn"
+version = "2.0.114"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d4d107df263a3013ef9b1879b0df87d706ff80f65a86ea879bd9c31f9b307c2a"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "unicode-ident",
+]
+
+[[package]]
+name = "sync_wrapper"
+version = "1.0.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "0bf256ce5efdfa370213c1dabab5935a12e49f2c58d15e9eac2870d3b4f27263"
+dependencies = [
+ "futures-core",
+]
+
+[[package]]
+name = "synstructure"
+version = "0.13.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "728a70f3dbaf5bab7f0c4b1ac8d7ae5ea60a4b5549c8a5914361c99147a709d2"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
+[[package]]
+name = "system-configuration"
+version = "0.6.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3c879d448e9d986b661742763247d3693ed13609438cf3d006f51f5368a5ba6b"
+dependencies = [
+ "bitflags",
+ "core-foundation",
+ "system-configuration-sys",
+]
+
+[[package]]
+name = "system-configuration-sys"
+version = "0.6.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8e1d1b10ced5ca923a1fcb8d03e96b8d3268065d724548c0211415ff6ac6bac4"
+dependencies = [
+ "core-foundation-sys",
+ "libc",
+]
+
+[[package]]
+name = "tempfile"
+version = "3.24.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "655da9c7eb6305c55742045d5a8d2037996d61d8de95806335c7c86ce0f82e9c"
+dependencies = [
+ "fastrand",
+ "getrandom 0.3.4",
+ "once_cell",
+ "rustix",
+ "windows-sys 0.61.2",
+]
+
+[[package]]
+name = "thread_local"
+version = "1.1.9"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f60246a4944f24f6e018aa17cdeffb7818b76356965d03b07d6a9886e8962185"
+dependencies = [
+ "cfg-if",
+]
+
+[[package]]
+name = "tinystr"
+version = "0.8.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "42d3e9c45c09de15d06dd8acf5f4e0e399e85927b7f00711024eb7ae10fa4869"
+dependencies = [
+ "displaydoc",
+ "zerovec",
+]
+
+[[package]]
+name = "tokio"
+version = "1.49.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "72a2903cd7736441aac9df9d7688bd0ce48edccaadf181c3b90be801e81d3d86"
+dependencies = [
+ "bytes",
+ "libc",
+ "mio",
+ "pin-project-lite",
+ "signal-hook-registry",
+ "socket2",
+ "tokio-macros",
+ "windows-sys 0.61.2",
+]
+
+[[package]]
+name = "tokio-macros"
+version = "2.6.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "af407857209536a95c8e56f8231ef2c2e2aff839b22e07a1ffcbc617e9db9fa5"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
+[[package]]
+name = "tokio-native-tls"
+version = "0.3.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "bbae76ab933c85776efabc971569dd6119c580d8f5d448769dec1764bf796ef2"
+dependencies = [
+ "native-tls",
+ "tokio",
+]
+
+[[package]]
+name = "tokio-rustls"
+version = "0.26.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1729aa945f29d91ba541258c8df89027d5792d85a8841fb65e8bf0f4ede4ef61"
+dependencies = [
+ "rustls",
+ "tokio",
+]
+
+[[package]]
+name = "tokio-util"
+version = "0.7.18"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9ae9cec805b01e8fc3fd2fe289f89149a9b66dd16786abd8b19cfa7b48cb0098"
+dependencies = [
+ "bytes",
+ "futures-core",
+ "futures-sink",
+ "pin-project-lite",
+ "tokio",
+]
+
+[[package]]
+name = "toml"
+version = "0.9.11+spec-1.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f3afc9a848309fe1aaffaed6e1546a7a14de1f935dc9d89d32afd9a44bab7c46"
+dependencies = [
+ "indexmap",
+ "serde_core",
+ "serde_spanned",
+ "toml_datetime",
+ "toml_parser",
+ "toml_writer",
+ "winnow",
+]
+
+[[package]]
+name = "toml_datetime"
+version = "0.7.5+spec-1.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "92e1cfed4a3038bc5a127e35a2d360f145e1f4b971b551a2ba5fd7aedf7e1347"
+dependencies = [
+ "serde_core",
+]
+
+[[package]]
+name = "toml_parser"
+version = "1.0.6+spec-1.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a3198b4b0a8e11f09dd03e133c0280504d0801269e9afa46362ffde1cbeebf44"
+dependencies = [
+ "winnow",
+]
+
+[[package]]
+name = "toml_writer"
+version = "1.0.6+spec-1.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ab16f14aed21ee8bfd8ec22513f7287cd4a91aa92e44edfe2c17ddd004e92607"
+
+[[package]]
+name = "tower"
+version = "0.5.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ebe5ef63511595f1344e2d5cfa636d973292adc0eec1f0ad45fae9f0851ab1d4"
+dependencies = [
+ "futures-core",
+ "futures-util",
+ "pin-project-lite",
+ "sync_wrapper",
+ "tokio",
+ "tower-layer",
+ "tower-service",
+ "tracing",
+]
+
+[[package]]
+name = "tower-http"
+version = "0.6.8"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d4e6559d53cc268e5031cd8429d05415bc4cb4aefc4aa5d6cc35fbf5b924a1f8"
+dependencies = [
+ "bitflags",
+ "bytes",
+ "futures-util",
+ "http",
+ "http-body",
+ "iri-string",
+ "pin-project-lite",
+ "tower",
+ "tower-layer",
+ "tower-service",
+]
+
+[[package]]
+name = "tower-layer"
+version = "0.3.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "121c2a6cda46980bb0fcd1647ffaf6cd3fc79a013de288782836f6df9c48780e"
+
+[[package]]
+name = "tower-service"
+version = "0.3.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8df9b6e13f2d32c91b9bd719c00d1958837bc7dec474d94952798cc8e69eeec3"
+
+[[package]]
+name = "tracing"
+version = "0.1.44"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "63e71662fa4b2a2c3a26f570f037eb95bb1f85397f3cd8076caed2f026a6d100"
+dependencies = [
+ "log",
+ "pin-project-lite",
+ "tracing-attributes",
+ "tracing-core",
+]
+
+[[package]]
+name = "tracing-attributes"
+version = "0.1.31"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7490cfa5ec963746568740651ac6781f701c9c5ea257c58e057f3ba8cf69e8da"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
+[[package]]
+name = "tracing-core"
+version = "0.1.36"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "db97caf9d906fbde555dd62fa95ddba9eecfd14cb388e4f491a66d74cd5fb79a"
+dependencies = [
+ "once_cell",
+ "valuable",
+]
+
+[[package]]
+name = "tracing-log"
+version = "0.2.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ee855f1f400bd0e5c02d150ae5de3840039a3f54b025156404e34c23c03f47c3"
+dependencies = [
+ "log",
+ "once_cell",
+ "tracing-core",
+]
+
+[[package]]
+name = "tracing-subscriber"
+version = "0.3.22"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2f30143827ddab0d256fd843b7a66d164e9f271cfa0dde49142c5ca0ca291f1e"
+dependencies = [
+ "matchers",
+ "nu-ansi-term",
+ "once_cell",
+ "regex-automata",
+ "sharded-slab",
+ "smallvec",
+ "thread_local",
+ "tracing",
+ "tracing-core",
+ "tracing-log",
+]
+
+[[package]]
+name = "try-lock"
+version = "0.2.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "e421abadd41a4225275504ea4d6566923418b7f05506fbc9c0fe86ba7396114b"
+
+[[package]]
+name = "unicode-ident"
+version = "1.0.22"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9312f7c4f6ff9069b165498234ce8be658059c6728633667c526e27dc2cf1df5"
+
+[[package]]
+name = "unsafe-libyaml"
+version = "0.2.11"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "673aac59facbab8a9007c7f6108d11f63b603f7cabff99fabf650fea5c32b861"
+
+[[package]]
+name = "untrusted"
+version = "0.9.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8ecb6da28b8a351d773b68d5825ac39017e680750f980f3a1a85cd8dd28a47c1"
+
+[[package]]
+name = "url"
+version = "2.5.8"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ff67a8a4397373c3ef660812acab3268222035010ab8680ec4215f38ba3d0eed"
+dependencies = [
+ "form_urlencoded",
+ "idna",
+ "percent-encoding",
+ "serde",
+]
+
+[[package]]
+name = "utf8_iter"
+version = "1.0.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b6c140620e7ffbb22c2dee59cafe6084a59b5ffc27a8859a5f0d494b5d52b6be"
+
+[[package]]
+name = "utf8parse"
+version = "0.2.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821"
+
+[[package]]
+name = "valuable"
+version = "0.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ba73ea9cf16a25df0c8caa16c51acb937d5712a8429db78a3ee29d5dcacd3a65"
+
+[[package]]
+name = "vcpkg"
+version = "0.2.15"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "accd4ea62f7bb7a82fe23066fb0957d48ef677f6eeb8215f372f52e48bb32426"
+
+[[package]]
+name = "want"
+version = "0.3.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "bfa7760aed19e106de2c7c0b581b509f2f25d3dacaf737cb82ac61bc6d760b0e"
+dependencies = [
+ "try-lock",
+]
+
+[[package]]
+name = "wasi"
+version = "0.11.1+wasi-snapshot-preview1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ccf3ec651a847eb01de73ccad15eb7d99f80485de043efb2f370cd654f4ea44b"
+
+[[package]]
+name = "wasip2"
+version = "1.0.2+wasi-0.2.9"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9517f9239f02c069db75e65f174b3da828fe5f5b945c4dd26bd25d89c03ebcf5"
+dependencies = [
+ "wit-bindgen",
+]
+
+[[package]]
+name = "wasm-bindgen"
+version = "0.2.108"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "64024a30ec1e37399cf85a7ffefebdb72205ca1c972291c51512360d90bd8566"
+dependencies = [
+ "cfg-if",
+ "once_cell",
+ "rustversion",
+ "wasm-bindgen-macro",
+ "wasm-bindgen-shared",
+]
+
+[[package]]
+name = "wasm-bindgen-futures"
+version = "0.4.58"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "70a6e77fd0ae8029c9ea0063f87c46fde723e7d887703d74ad2616d792e51e6f"
+dependencies = [
+ "cfg-if",
+ "futures-util",
+ "js-sys",
+ "once_cell",
+ "wasm-bindgen",
+ "web-sys",
+]
+
+[[package]]
+name = "wasm-bindgen-macro"
+version = "0.2.108"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "008b239d9c740232e71bd39e8ef6429d27097518b6b30bdf9086833bd5b6d608"
+dependencies = [
+ "quote",
+ "wasm-bindgen-macro-support",
+]
+
+[[package]]
+name = "wasm-bindgen-macro-support"
+version = "0.2.108"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5256bae2d58f54820e6490f9839c49780dff84c65aeab9e772f15d5f0e913a55"
+dependencies = [
+ "bumpalo",
+ "proc-macro2",
+ "quote",
+ "syn",
+ "wasm-bindgen-shared",
+]
+
+[[package]]
+name = "wasm-bindgen-shared"
+version = "0.2.108"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1f01b580c9ac74c8d8f0c0e4afb04eeef2acf145458e52c03845ee9cd23e3d12"
+dependencies = [
+ "unicode-ident",
+]
+
+[[package]]
+name = "web-sys"
+version = "0.3.85"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "312e32e551d92129218ea9a2452120f4aabc03529ef03e4d0d82fb2780608598"
+dependencies = [
+ "js-sys",
+ "wasm-bindgen",
+]
+
+[[package]]
+name = "web-time"
+version = "1.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5a6580f308b1fad9207618087a65c04e7a10bc77e02c8e84e9b00dd4b12fa0bb"
+dependencies = [
+ "js-sys",
+ "wasm-bindgen",
+]
+
+[[package]]
+name = "winapi"
+version = "0.3.9"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419"
+dependencies = [
+ "winapi-i686-pc-windows-gnu",
+ "winapi-x86_64-pc-windows-gnu",
+]
+
+[[package]]
+name = "winapi-i686-pc-windows-gnu"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
+
+[[package]]
+name = "winapi-x86_64-pc-windows-gnu"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
+
+[[package]]
+name = "windows-core"
+version = "0.62.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b8e83a14d34d0623b51dce9581199302a221863196a1dde71a7663a4c2be9deb"
+dependencies = [
+ "windows-implement",
+ "windows-interface",
+ "windows-link",
+ "windows-result",
+ "windows-strings",
+]
+
+[[package]]
+name = "windows-implement"
+version = "0.60.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "053e2e040ab57b9dc951b72c264860db7eb3b0200ba345b4e4c3b14f67855ddf"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
+[[package]]
+name = "windows-interface"
+version = "0.59.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "3f316c4a2570ba26bbec722032c4099d8c8bc095efccdc15688708623367e358"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
+[[package]]
+name = "windows-link"
+version = "0.2.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f0805222e57f7521d6a62e36fa9163bc891acd422f971defe97d64e70d0a4fe5"
+
+[[package]]
+name = "windows-registry"
+version = "0.6.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "02752bf7fbdcce7f2a27a742f798510f3e5ad88dbe84871e5168e2120c3d5720"
+dependencies = [
+ "windows-link",
+ "windows-result",
+ "windows-strings",
+]
+
+[[package]]
+name = "windows-result"
+version = "0.4.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7781fa89eaf60850ac3d2da7af8e5242a5ea78d1a11c49bf2910bb5a73853eb5"
+dependencies = [
+ "windows-link",
+]
+
+[[package]]
+name = "windows-strings"
+version = "0.5.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "7837d08f69c77cf6b07689544538e017c1bfcf57e34b4c0ff58e6c2cd3b37091"
+dependencies = [
+ "windows-link",
+]
+
+[[package]]
+name = "windows-sys"
+version = "0.52.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "282be5f36a8ce781fad8c8ae18fa3f9beff57ec1b52cb3de0789201425d9a33d"
+dependencies = [
+ "windows-targets 0.52.6",
+]
+
+[[package]]
+name = "windows-sys"
+version = "0.60.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f2f500e4d28234f72040990ec9d39e3a6b950f9f22d3dba18416c35882612bcb"
+dependencies = [
+ "windows-targets 0.53.5",
+]
+
+[[package]]
+name = "windows-sys"
+version = "0.61.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ae137229bcbd6cdf0f7b80a31df61766145077ddf49416a728b02cb3921ff3fc"
+dependencies = [
+ "windows-link",
+]
+
+[[package]]
+name = "windows-targets"
+version = "0.52.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9b724f72796e036ab90c1021d4780d4d3d648aca59e491e6b98e725b84e99973"
+dependencies = [
+ "windows_aarch64_gnullvm 0.52.6",
+ "windows_aarch64_msvc 0.52.6",
+ "windows_i686_gnu 0.52.6",
+ "windows_i686_gnullvm 0.52.6",
+ "windows_i686_msvc 0.52.6",
+ "windows_x86_64_gnu 0.52.6",
+ "windows_x86_64_gnullvm 0.52.6",
+ "windows_x86_64_msvc 0.52.6",
+]
+
+[[package]]
+name = "windows-targets"
+version = "0.53.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "4945f9f551b88e0d65f3db0bc25c33b8acea4d9e41163edf90dcd0b19f9069f3"
+dependencies = [
+ "windows-link",
+ "windows_aarch64_gnullvm 0.53.1",
+ "windows_aarch64_msvc 0.53.1",
+ "windows_i686_gnu 0.53.1",
+ "windows_i686_gnullvm 0.53.1",
+ "windows_i686_msvc 0.53.1",
+ "windows_x86_64_gnu 0.53.1",
+ "windows_x86_64_gnullvm 0.53.1",
+ "windows_x86_64_msvc 0.53.1",
+]
+
+[[package]]
+name = "windows_aarch64_gnullvm"
+version = "0.52.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3"
+
+[[package]]
+name = "windows_aarch64_gnullvm"
+version = "0.53.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a9d8416fa8b42f5c947f8482c43e7d89e73a173cead56d044f6a56104a6d1b53"
+
+[[package]]
+name = "windows_aarch64_msvc"
+version = "0.52.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469"
+
+[[package]]
+name = "windows_aarch64_msvc"
+version = "0.53.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b9d782e804c2f632e395708e99a94275910eb9100b2114651e04744e9b125006"
+
+[[package]]
+name = "windows_i686_gnu"
+version = "0.52.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b"
+
+[[package]]
+name = "windows_i686_gnu"
+version = "0.53.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "960e6da069d81e09becb0ca57a65220ddff016ff2d6af6a223cf372a506593a3"
+
+[[package]]
+name = "windows_i686_gnullvm"
+version = "0.52.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66"
+
+[[package]]
+name = "windows_i686_gnullvm"
+version = "0.53.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "fa7359d10048f68ab8b09fa71c3daccfb0e9b559aed648a8f95469c27057180c"
+
+[[package]]
+name = "windows_i686_msvc"
+version = "0.52.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66"
+
+[[package]]
+name = "windows_i686_msvc"
+version = "0.53.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1e7ac75179f18232fe9c285163565a57ef8d3c89254a30685b57d83a38d326c2"
+
+[[package]]
+name = "windows_x86_64_gnu"
+version = "0.52.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78"
+
+[[package]]
+name = "windows_x86_64_gnu"
+version = "0.53.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9c3842cdd74a865a8066ab39c8a7a473c0778a3f29370b5fd6b4b9aa7df4a499"
+
+[[package]]
+name = "windows_x86_64_gnullvm"
+version = "0.52.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d"
+
+[[package]]
+name = "windows_x86_64_gnullvm"
+version = "0.53.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "0ffa179e2d07eee8ad8f57493436566c7cc30ac536a3379fdf008f47f6bb7ae1"
+
+[[package]]
+name = "windows_x86_64_msvc"
+version = "0.52.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec"
+
+[[package]]
+name = "windows_x86_64_msvc"
+version = "0.53.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d6bbff5f0aada427a1e5a6da5f1f98158182f26556f345ac9e04d36d0ebed650"
+
+[[package]]
+name = "winnow"
+version = "0.7.14"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "5a5364e9d77fcdeeaa6062ced926ee3381faa2ee02d3eb83a5c27a8825540829"
+
+[[package]]
+name = "wit-bindgen"
+version = "0.51.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d7249219f66ced02969388cf2bb044a09756a083d0fab1e566056b04d9fbcaa5"
+
+[[package]]
+name = "witryna"
+version = "0.1.0"
+dependencies = [
+ "anyhow",
+ "axum",
+ "chrono",
+ "clap",
+ "clap_mangen",
+ "dashmap",
+ "governor",
+ "humantime",
+ "nix",
+ "reqwest",
+ "serde",
+ "serde_json",
+ "serde_yaml_ng",
+ "serial_test",
+ "subtle",
+ "tempfile",
+ "tokio",
+ "tokio-util",
+ "toml",
+ "tower",
+ "tracing",
+ "tracing-subscriber",
+]
+
+[[package]]
+name = "writeable"
+version = "0.6.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9edde0db4769d2dc68579893f2306b26c6ecfbe0ef499b013d731b7b9247e0b9"
+
+[[package]]
+name = "yoke"
+version = "0.8.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "72d6e5c6afb84d73944e5cedb052c4680d5657337201555f9f2a16b7406d4954"
+dependencies = [
+ "stable_deref_trait",
+ "yoke-derive",
+ "zerofrom",
+]
+
+[[package]]
+name = "yoke-derive"
+version = "0.8.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b659052874eb698efe5b9e8cf382204678a0086ebf46982b79d6ca3182927e5d"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+ "synstructure",
+]
+
+[[package]]
+name = "zerocopy"
+version = "0.8.33"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "668f5168d10b9ee831de31933dc111a459c97ec93225beb307aed970d1372dfd"
+dependencies = [
+ "zerocopy-derive",
+]
+
+[[package]]
+name = "zerocopy-derive"
+version = "0.8.33"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2c7962b26b0a8685668b671ee4b54d007a67d4eaf05fda79ac0ecf41e32270f1"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
+[[package]]
+name = "zerofrom"
+version = "0.1.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "50cc42e0333e05660c3587f3bf9d0478688e15d870fab3346451ce7f8c9fbea5"
+dependencies = [
+ "zerofrom-derive",
+]
+
+[[package]]
+name = "zerofrom-derive"
+version = "0.1.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d71e5d6e06ab090c67b5e44993ec16b72dcbaabc526db883a360057678b48502"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+ "synstructure",
+]
+
+[[package]]
+name = "zeroize"
+version = "1.8.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b97154e67e32c85465826e8bcc1c59429aaaf107c1e4a9e53c8d8ccd5eff88d0"
+
+[[package]]
+name = "zerotrie"
+version = "0.2.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "2a59c17a5562d507e4b54960e8569ebee33bee890c70aa3fe7b97e85a9fd7851"
+dependencies = [
+ "displaydoc",
+ "yoke",
+ "zerofrom",
+]
+
+[[package]]
+name = "zerovec"
+version = "0.11.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "6c28719294829477f525be0186d13efa9a3c602f7ec202ca9e353d310fb9a002"
+dependencies = [
+ "yoke",
+ "zerofrom",
+ "zerovec-derive",
+]
+
+[[package]]
+name = "zerovec-derive"
+version = "0.11.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "eadce39539ca5cb3985590102671f2567e659fca9666581ad3411d59207951f3"
+dependencies = [
+ "proc-macro2",
+ "quote",
+ "syn",
+]
+
+[[package]]
+name = "zmij"
+version = "1.0.16"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "dfcd145825aace48cff44a8844de64bf75feec3080e0aa5cdbde72961ae51a65"
diff --git a/Cargo.toml b/Cargo.toml
new file mode 100644
index 0000000..fb8e20d
--- /dev/null
+++ b/Cargo.toml
@@ -0,0 +1,191 @@
+[package]
+name = "witryna"
+version = "0.1.0"
+edition = "2024"
+authors = ["Dawid Rycerz"]
+description = "Minimalist Git-based static site deployment orchestrator"
+homepage = "https://git.craftknight.com/dawid/witryna"
+repository = "https://git.craftknight.com/dawid/witryna.git"
+license = "MIT"
+
+[features]
+default = []
+integration = []
+
+[dependencies]
+anyhow = "1.0.100"
+clap = { version = "4", features = ["derive"] }
+subtle = "2.6"
+axum = "0.8.8"
+chrono = "0.4.43"
+dashmap = "6.1.0"
+governor = "0.8"
+serde = { version = "1.0.228", features = ["derive"] }
+serde_yaml_ng = "0.10"
+tokio = { version = "1.49.0", features = ["rt-multi-thread", "macros", "fs", "process", "net", "signal", "sync", "time", "io-util", "io-std"] }
+toml = "0.9.11"
+tracing = "0.1.44"
+tracing-subscriber = { version = "0.3.22", features = ["env-filter"] }
+humantime = "2.3.0"
+tokio-util = "0.7.18"
+serde_json = "1.0"
+
+[dev-dependencies]
+tower = "0.5"
+reqwest = { version = "0.12" }
+tempfile = "3"
+nix = { version = "0.29", features = ["signal"] }
+serial_test = "3"
+
+[build-dependencies]
+clap = { version = "4", features = ["derive"] }
+clap_mangen = "0.2.31"
+
+[profile.release]
+strip = true
+lto = true
+panic = "abort"
+
+[package.metadata.deb]
+maintainer = "Dawid Rycerz"
+copyright = "2026, Dawid Rycerz"
+extended-description = """\
+Witryna is a minimalist Git-based static site deployment orchestrator. \
+It listens for webhook triggers, pulls Git repositories, runs \
+containerized build commands, and publishes static assets via atomic \
+symlink switching."""
+section = "web"
+priority = "optional"
+depends = "$auto, adduser, systemd"
+recommends = "podman | docker.io"
+maintainer-scripts = "debian/"
+conf-files = ["/etc/witryna/witryna.toml"]
+assets = [
+ ["target/release/witryna", "usr/bin/", "755"],
+ ["target/man/witryna.1", "usr/share/man/man1/witryna.1", "644"],
+ ["man/witryna.toml.5", "usr/share/man/man5/witryna.toml.5", "644"],
+ ["examples/witryna.toml", "etc/witryna/witryna.toml", "644"],
+ ["README.md", "usr/share/doc/witryna/README.md", "644"],
+ ["examples/hooks/caddy-deploy.sh", "usr/share/doc/witryna/examples/hooks/caddy-deploy.sh", "755"],
+ ["examples/caddy/Caddyfile", "usr/share/doc/witryna/examples/caddy/Caddyfile", "644"],
+ ["examples/nginx/witryna.conf", "usr/share/doc/witryna/examples/nginx/witryna.conf", "644"],
+ ["examples/witryna.yaml", "usr/share/doc/witryna/examples/witryna.yaml", "644"],
+ ["examples/systemd/docker.conf", "usr/share/doc/witryna/examples/systemd/docker.conf", "644"],
+ ["examples/systemd/podman.conf", "usr/share/doc/witryna/examples/systemd/podman.conf", "644"],
+]
+
+[package.metadata.deb.systemd-units]
+unit-scripts = "debian/"
+enable = true
+start = false
+restart-after-upgrade = true
+
+[package.metadata.generate-rpm]
+summary = "Minimalist Git-based static site deployment orchestrator"
+description = """\
+Witryna is a minimalist Git-based static site deployment orchestrator. \
+It listens for webhook triggers, pulls Git repositories, runs \
+containerized build commands, and publishes static assets via atomic \
+symlink switching."""
+group = "Applications/Internet"
+release = "1"
+assets = [
+ { source = "target/release/witryna", dest = "/usr/bin/witryna", mode = "755" },
+ { source = "target/man/witryna.1", dest = "/usr/share/man/man1/witryna.1", mode = "644", doc = true },
+ { source = "man/witryna.toml.5", dest = "/usr/share/man/man5/witryna.toml.5", mode = "644", doc = true },
+ { source = "examples/witryna.toml", dest = "/etc/witryna/witryna.toml", mode = "644", config = "noreplace" },
+ { source = "debian/witryna.service", dest = "/usr/lib/systemd/system/witryna.service", mode = "644" },
+ { source = "README.md", dest = "/usr/share/doc/witryna/README.md", mode = "644", doc = true },
+ { source = "examples/hooks/caddy-deploy.sh", dest = "/usr/share/doc/witryna/examples/hooks/caddy-deploy.sh", mode = "755", doc = true },
+ { source = "examples/caddy/Caddyfile", dest = "/usr/share/doc/witryna/examples/caddy/Caddyfile", mode = "644", doc = true },
+ { source = "examples/nginx/witryna.conf", dest = "/usr/share/doc/witryna/examples/nginx/witryna.conf", mode = "644", doc = true },
+ { source = "examples/witryna.yaml", dest = "/usr/share/doc/witryna/examples/witryna.yaml", mode = "644", doc = true },
+ { source = "examples/systemd/docker.conf", dest = "/usr/share/doc/witryna/examples/systemd/docker.conf", mode = "644", doc = true },
+ { source = "examples/systemd/podman.conf", dest = "/usr/share/doc/witryna/examples/systemd/podman.conf", mode = "644", doc = true },
+]
+post_install_script = """\
+#!/bin/bash
+set -e
+# Create system user/group (idempotent)
+if ! getent group witryna >/dev/null 2>&1; then
+ groupadd --system witryna
+fi
+if ! getent passwd witryna >/dev/null 2>&1; then
+ useradd --system --gid witryna --no-create-home \
+ --home-dir /var/lib/witryna --shell /sbin/nologin witryna
+fi
+# Create data + log directories
+install -d -o witryna -g witryna -m 0755 /var/lib/witryna
+install -d -o witryna -g witryna -m 0755 /var/lib/witryna/clones
+install -d -o witryna -g witryna -m 0755 /var/lib/witryna/builds
+install -d -o witryna -g witryna -m 0755 /var/lib/witryna/cache
+install -d -o witryna -g witryna -m 0755 /var/log/witryna
+# Fix config permissions (root owns, witryna group can read)
+if [ -f /etc/witryna/witryna.toml ]; then
+ chown root:witryna /etc/witryna/witryna.toml
+ chmod 640 /etc/witryna/witryna.toml
+fi
+# Auto-detect and configure container runtime
+if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
+ if getent group docker >/dev/null; then
+ usermod -aG docker witryna || true
+ fi
+ mkdir -p /etc/systemd/system/witryna.service.d
+ cp /usr/share/doc/witryna/examples/systemd/docker.conf \
+ /etc/systemd/system/witryna.service.d/10-runtime.conf
+ chmod 644 /etc/systemd/system/witryna.service.d/10-runtime.conf
+ systemctl daemon-reload >/dev/null 2>&1 || true
+ echo "witryna: Docker detected and configured."
+elif command -v podman >/dev/null 2>&1 && podman info >/dev/null 2>&1; then
+ if ! grep -q "^witryna:" /etc/subuid 2>/dev/null; then
+ usermod --add-subuids 100000-165535 witryna || true
+ fi
+ if ! grep -q "^witryna:" /etc/subgid 2>/dev/null; then
+ usermod --add-subgids 100000-165535 witryna || true
+ fi
+ loginctl enable-linger witryna >/dev/null 2>&1 || true
+ mkdir -p /etc/systemd/system/witryna.service.d
+ cp /usr/share/doc/witryna/examples/systemd/podman.conf \
+ /etc/systemd/system/witryna.service.d/10-runtime.conf
+ chmod 644 /etc/systemd/system/witryna.service.d/10-runtime.conf
+ systemctl daemon-reload >/dev/null 2>&1 || true
+ echo "witryna: Podman detected and configured."
+else
+ echo "witryna: WARNING — no container runtime (docker/podman) detected."
+ echo " Install one, then reinstall this package or copy an override from"
+ echo " /usr/share/doc/witryna/examples/systemd/ manually."
+fi
+# Reload systemd
+systemctl daemon-reload >/dev/null 2>&1 || true
+# Enable (but don't start) on fresh install — matches deb behavior
+if [ "$1" -eq 1 ]; then
+ systemctl enable witryna.service >/dev/null 2>&1 || true
+fi"""
+pre_uninstall_script = """\
+#!/bin/bash
+if [ "$1" -eq 0 ]; then
+ systemctl stop witryna.service >/dev/null 2>&1 || true
+ systemctl disable witryna.service >/dev/null 2>&1 || true
+fi"""
+post_uninstall_script = """\
+#!/bin/bash
+if [ "$1" -eq 0 ]; then
+ # Remove user and group
+ userdel witryna >/dev/null 2>&1 || true
+ groupdel witryna >/dev/null 2>&1 || true
+ # Remove config directory and systemd overrides
+ rm -rf /etc/witryna
+ rm -rf /etc/systemd/system/witryna.service.d
+ loginctl disable-linger witryna >/dev/null 2>&1 || true
+ # /var/lib/witryna and /var/log/witryna left for manual cleanup
+ # Reload systemd
+ systemctl daemon-reload >/dev/null 2>&1 || true
+fi"""
+
+[package.metadata.generate-rpm.requires]
+systemd = "*"
+shadow-utils = "*"
+git = "*"
+
+[package.metadata.generate-rpm.recommends]
+podman = "*"
diff --git a/Justfile b/Justfile
new file mode 100644
index 0000000..fb24e36
--- /dev/null
+++ b/Justfile
@@ -0,0 +1,168 @@
+# Witryna development tasks
+
+# List available recipes
+default:
+ @just --list
+
+# --- Formatting ---
+
+# Auto-format Rust code
+fmt:
+ cargo fmt --all
+
+# --- Linting ---
+
+# Lint Rust (fmt check + clippy)
+lint-rust:
+ cargo fmt --all -- --check
+ cargo clippy --all-targets --all-features -- -D warnings
+
+# Lint YAML files (if any exist)
+lint-yaml:
+ @if find . -name '*.yml' -o -name '*.yaml' | grep -q .; then yamllint $(find . -name '*.yml' -o -name '*.yaml'); else echo "No YAML files, skipping"; fi
+
+# Scan for leaked secrets
+lint-secrets:
+ gitleaks protect --staged
+
+# Clippy pedantic + nursery (advisory, not enforced in CI)
+lint-pedantic:
+ cargo clippy --all-targets --all-features -- -W clippy::pedantic -W clippy::nursery
+
+# Clippy picky: pedantic + nursery + restriction subset (strict, advisory)
+lint-picky:
+ cargo clippy --all-targets --all-features -- -W clippy::pedantic -W clippy::nursery -W clippy::unwrap_used -W clippy::expect_used -W clippy::panic -W clippy::indexing_slicing -W clippy::clone_on_ref_ptr -W clippy::print_stdout -W clippy::print_stderr
+
+# Run all lints
+lint: lint-rust lint-yaml lint-secrets
+
+# --- Testing ---
+
+# Run unit tests
+test:
+ cargo test --all
+
+# Run integration tests (Tier 1 needs git, Tier 2 needs podman/docker)
+test-integration:
+ cargo test --features integration
+
+# Run integration tests on a single thread (required for SIGHUP tests)
+test-integration-serial:
+ cargo test --features integration -- --test-threads=1
+
+# Run all lints, unit tests, and integration tests
+test-all: lint test test-integration
+
+# --- Building ---
+
+# Build a release binary
+build-release:
+ cargo build --release
+
+# --- Man pages ---
+
+# View witryna(1) man page (run `cargo build` first)
+man-1:
+ man -l target/man/witryna.1
+
+# View witryna.toml(5) man page
+man-5:
+ man -l man/witryna.toml.5
+
+# --- Packaging ---
+
+# Build Debian package
+build-deb:
+ cargo deb
+
+# Show contents of built Debian package
+inspect-deb:
+ #!/usr/bin/env bash
+ set -euo pipefail
+ deb=$(find target/debian -name 'witryna_*.deb' 2>/dev/null | head -1)
+ if [[ -z "$deb" ]]; then echo "No .deb found — run 'just build-deb' first" >&2; exit 1; fi
+ echo "=== Info ===" && dpkg-deb --info "$deb"
+ echo "" && echo "=== Contents ===" && dpkg-deb --contents "$deb"
+
+# Build RPM package (requires: cargo install cargo-generate-rpm)
+build-rpm:
+ cargo build --release
+ cargo generate-rpm
+
+# Show contents of built RPM package
+inspect-rpm:
+ #!/usr/bin/env bash
+ set -euo pipefail
+ rpm=$(find target/generate-rpm -name 'witryna-*.rpm' 2>/dev/null | head -1)
+ if [[ -z "$rpm" ]]; then echo "No .rpm found — run 'just build-rpm' first" >&2; exit 1; fi
+ echo "=== Info ===" && rpm -qip "$rpm"
+ echo "" && echo "=== Contents ===" && rpm -qlp "$rpm"
+ echo "" && echo "=== Config files ===" && rpm -qcp "$rpm"
+ echo "" && echo "=== Scripts ===" && rpm -q --scripts -p "$rpm"
+
+# --- Release ---
+
+RELEASE_HOST := "git@sandcrawler"
+RELEASE_PATH := "/srv/git/release"
+
+# Build tarball with binary, config, service, man pages, examples
+build-tarball: build-release
+ #!/usr/bin/env bash
+ set -euo pipefail
+ version=$(cargo metadata --format-version=1 --no-deps | grep -o '"version":"[^"]*"' | head -1 | cut -d'"' -f4)
+ name="witryna-${version}-linux-amd64"
+ staging="target/tarball/${name}"
+ rm -rf "$staging"
+ mkdir -p "$staging/examples/hooks" "$staging/examples/caddy" "$staging/examples/nginx" "$staging/examples/systemd"
+ cp target/release/witryna "$staging/"
+ cp examples/witryna.toml "$staging/"
+ cp debian/witryna.service "$staging/"
+ cp target/man/witryna.1 "$staging/"
+ cp man/witryna.toml.5 "$staging/"
+ cp README.md "$staging/"
+ cp examples/hooks/caddy-deploy.sh "$staging/examples/hooks/"
+ cp examples/caddy/Caddyfile "$staging/examples/caddy/"
+ cp examples/nginx/witryna.conf "$staging/examples/nginx/"
+ cp examples/witryna.yaml "$staging/examples/"
+ cp examples/systemd/docker.conf "$staging/examples/systemd/"
+ cp examples/systemd/podman.conf "$staging/examples/systemd/"
+ tar -czf "target/tarball/${name}.tar.gz" -C target/tarball "$name"
+ echo "target/tarball/${name}.tar.gz"
+
+# Upload deb to release server
+release-deb: build-deb
+ #!/usr/bin/env bash
+ set -euo pipefail
+ f=$(find target/debian -name 'witryna_*.deb' | sort -V | tail -1)
+ if [[ -z "$f" ]]; then echo "No .deb found" >&2; exit 1; fi
+ sha256sum "$f" > "${f}.sha256"
+ scp "$f" "${f}.sha256" "{{RELEASE_HOST}}:{{RELEASE_PATH}}/"
+ echo "Done — https://release.craftknight.com/"
+
+# Upload rpm to release server
+release-rpm: build-rpm
+ #!/usr/bin/env bash
+ set -euo pipefail
+ f=$(find target/generate-rpm -name 'witryna-*.rpm' | sort -V | tail -1)
+ if [[ -z "$f" ]]; then echo "No .rpm found" >&2; exit 1; fi
+ sha256sum "$f" > "${f}.sha256"
+ scp "$f" "${f}.sha256" "{{RELEASE_HOST}}:{{RELEASE_PATH}}/"
+ echo "Done — https://release.craftknight.com/"
+
+# Upload tarball to release server
+release-tarball: build-tarball
+ #!/usr/bin/env bash
+ set -euo pipefail
+ f=$(find target/tarball -name 'witryna-*.tar.gz' | sort -V | tail -1)
+ if [[ -z "$f" ]]; then echo "No tarball found" >&2; exit 1; fi
+ sha256sum "$f" > "${f}.sha256"
+ scp "$f" "${f}.sha256" "{{RELEASE_HOST}}:{{RELEASE_PATH}}/"
+ echo "Done — https://release.craftknight.com/"
+
+# Build and upload all packages (deb + rpm + tarball)
+release: release-deb release-rpm release-tarball
+
+# --- Pre-commit ---
+
+# Run all pre-commit checks (mirrors lefthook)
+pre-commit: lint test
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..17766e8
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2026 Dawid Rycerz
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..f9ed799
--- /dev/null
+++ b/README.md
@@ -0,0 +1,150 @@
+# Witryna
+
+Witryna is a minimalist Git-based static site deployment orchestrator.
+It listens for webhook triggers, pulls Git repositories, runs
+containerized build commands, and publishes static assets via atomic
+symlink switching.
+
+## How it works
+
+A webhook POST triggers Witryna to pull a Git repository (with
+automatic submodule initialization and Git LFS fetch), run a build
+command inside a container, copy the output to a timestamped directory,
+and atomically switch a symlink to the new build.
+Your web server serves files from the symlink — zero-downtime deploys.
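+
+With `base_dir = "/var/lib/witryna"` (as in the Quickstart config below), the
+resulting on-disk layout looks roughly like this:
+
+    /var/lib/witryna/clones/<site>/              # Git clone
+    /var/lib/witryna/builds/<site>/<timestamp>/  # one directory per build
+    /var/lib/witryna/builds/<site>/current       # symlink to the latest successful build
+    /var/lib/witryna/cache/<site>/               # persistent build cache
+    /var/log/witryna/<site>/<timestamp>.log      # per-build log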
+
+## Install
+
+Pre-built packages are available at
+[release.craftknight.com](https://release.craftknight.com/).
+
+From a `.deb` package (Debian/Ubuntu):
+
+ curl -LO https://release.craftknight.com/witryna_0.1.0-1_amd64.deb
+ sudo dpkg -i witryna_0.1.0-1_amd64.deb
+
+From an `.rpm` package (Fedora/RHEL):
+
+ curl -LO https://release.craftknight.com/witryna-0.1.0-1.x86_64.rpm
+ sudo rpm -i witryna-0.1.0-1.x86_64.rpm
+
+From a tarball (any Linux):
+
+ curl -LO https://release.craftknight.com/witryna-0.1.0-linux-amd64.tar.gz
+ tar xzf witryna-0.1.0-linux-amd64.tar.gz
+ sudo cp witryna-0.1.0-linux-amd64/witryna /usr/local/bin/
+
+From source:
+
+ cargo install --path .
+
+## Post-install
+
+The deb/rpm packages automatically detect your container runtime:
+
+- **Docker**: `witryna` user added to `docker` group, systemd override installed
+- **Podman**: subuids/subgids allocated, lingering enabled, systemd override installed
+- **Neither found**: warning with instructions
+
+If you install a runtime later, either reinstall the package or manually copy
+the override from `/usr/share/doc/witryna/examples/systemd/` to
+`/etc/systemd/system/witryna.service.d/10-runtime.conf` and run
+`sudo systemctl daemon-reload`.
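+
+For example, to install the Podman override manually:
+
+    sudo mkdir -p /etc/systemd/system/witryna.service.d
+    sudo cp /usr/share/doc/witryna/examples/systemd/podman.conf \
+        /etc/systemd/system/witryna.service.d/10-runtime.conf
+    sudo systemctl daemon-reload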
+
+## Quickstart
+
+1. Create `/etc/witryna/witryna.toml`:
+
+ ```toml
+ listen_address = "127.0.0.1:8080"
+ container_runtime = "podman"
+ base_dir = "/var/lib/witryna"
+ log_level = "info"
+
+ [[sites]]
+ name = "my-site"
+ repo_url = "https://github.com/user/my-site.git"
+ branch = "main"
+ webhook_token = "YOUR_TOKEN" # omit to disable auth
+ ```
+
+2. Add a build config to the root of your Git repository (`.witryna.yaml`,
+ `.witryna.yml`, `witryna.yaml`, or `witryna.yml`):
+
+ ```yaml
+ image: node:20-alpine
+ command: "npm ci && npm run build"
+ public: dist
+ ```
+
+3. Start Witryna:
+
+ ```
+ witryna serve
+ ```
+
+4. Trigger a build:
+
+ ```
+ TOKEN=... # your webhook_token from witryna.toml
+ curl -X POST -H "Authorization: Bearer $TOKEN" http://127.0.0.1:8080/my-site
+ # If webhook_token is omitted, the -H header is not needed.
+ ```
+
+## CLI
+
+| Command | Description |
+|---------|-------------|
+| `witryna serve` | Start the deployment server |
+| `witryna validate` | Validate config and print summary |
+| `witryna run <site>` | Run a one-off build (synchronous) |
+| `witryna status` | Show deployment status |
+
+## Configuration
+
+Witryna searches for its config file automatically: `./witryna.toml`,
+`$XDG_CONFIG_HOME/witryna/witryna.toml`, `/etc/witryna/witryna.toml`.
+Use `--config` to specify an explicit path.
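+
+For instance, to run against the packaged config explicitly:
+
+    witryna serve --config /etc/witryna/witryna.toml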
+
+See `witryna.toml(5)` for the full configuration reference:
+
+ man witryna.toml
+
+The per-repository build config may be named `.witryna.yaml`, `.witryna.yml`,
+`witryna.yaml`, or `witryna.yml` (searched in that order). See `examples/` for
+annotated files.
+
+## Reverse proxy
+
+Point your web server's document root at
+`/var/lib/witryna/builds/<site>/current` and reverse-proxy webhook
+requests to Witryna. See `examples/caddy/` and `examples/nginx/` for
+ready-to-use configurations.
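+
+A minimal Caddy site block (similar to what the `caddy-deploy.sh` hook
+generates) looks like this:
+
+    my-site.example.com {
+        root * /var/lib/witryna/builds/my-site/current
+        file_server
+    }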
+
+## Building from source
+
+ cargo build --release
+
+Development uses [just](https://github.com/casey/just) as a command runner:
+
+ cargo install just # or: pacman -S just / brew install just
+
+Key recipes:
+
+ just test # unit tests
+ just test-all # lints + unit + integration tests
+ just lint # fmt check + clippy + yamllint + gitleaks
+
+To build distribution packages:
+
+ just build-deb # Debian .deb package
+ just build-rpm # RPM package
+
+## Dependencies
+
+- Rust 1.85+ (build only)
+- Git
+- Podman or Docker
+
+## License
+
+MIT — see LICENSE for details.
diff --git a/build.rs b/build.rs
new file mode 100644
index 0000000..287f7fb
--- /dev/null
+++ b/build.rs
@@ -0,0 +1,150 @@
+#[path = "src/cli.rs"]
+mod cli;
+
+use clap::CommandFactory as _;
+use std::io::Write as _;
+
+fn main() -> std::io::Result<()> {
+ println!("cargo:rerun-if-changed=src/cli.rs");
+
+ #[allow(clippy::expect_used)] // OUT_DIR is always set by cargo during build scripts
+ let out_dir = std::env::var("OUT_DIR").expect("OUT_DIR is set by cargo");
+ let out_path = std::path::Path::new(&out_dir).join("witryna.1");
+
+ let cmd = cli::Cli::command();
+ let man = clap_mangen::Man::new(cmd).date(build_date());
+
+ let mut buf: Vec<u8> = Vec::new();
+
+ // Standard clap-generated sections
+ man.render_title(&mut buf)?;
+ man.render_name_section(&mut buf)?;
+ man.render_synopsis_section(&mut buf)?;
+ man.render_description_section(&mut buf)?;
+ buf.write_all(SUBCOMMANDS)?;
+ man.render_options_section(&mut buf)?;
+
+ // Custom roff sections
+ buf.write_all(SIGNALS)?;
+ buf.write_all(EXIT_STATUS)?;
+ buf.write_all(INSTALLATION)?;
+ buf.write_all(FILES)?;
+ buf.write_all(SEE_ALSO)?;
+
+ man.render_version_section(&mut buf)?;
+ man.render_authors_section(&mut buf)?;
+
+ std::fs::write(&out_path, &buf)?;
+
+ // Copy to stable path so cargo-deb and Justfile always find it
+ let target_dir = std::path::Path::new("target/man");
+ std::fs::create_dir_all(target_dir)?;
+ std::fs::copy(&out_path, target_dir.join("witryna.1"))?;
+
+ Ok(())
+}
+
+const SUBCOMMANDS: &[u8] = br#".SH "SUBCOMMANDS"
+.TP
+\fBserve\fR
+Start the deployment server (foreground).
+.TP
+\fBvalidate\fR
+Validate configuration file and print summary.
+.TP
+\fBrun\fR \fIsite\fR
+Trigger a one\-off build for a site (synchronous, no server).
+.TP
+\fBstatus\fR
+Show deployment status for configured sites.
+"#;
+
+const SIGNALS: &[u8] = br#".SH "SIGNALS"
+.TP
+\fBSIGHUP\fR
+Reload configuration from \fIwitryna.toml\fR without restarting the server.
+Sites can be added, removed, or modified on the fly.
+Changes to \fBlisten_address\fR, \fBbase_dir\fR, \fBlog_dir\fR,
+and \fBlog_level\fR are detected but require a full restart to take effect;
+a warning is logged when these fields differ.
+.TP
+\fBSIGTERM\fR, \fBSIGINT\fR
+Initiate graceful shutdown.
+In\-progress builds are allowed to finish before the process exits.
+"#;
+
+const EXIT_STATUS: &[u8] = br#".SH "EXIT STATUS"
+.TP
+\fB0\fR
+Clean shutdown after SIGTERM/SIGINT (\fBserve\fR), configuration valid (\fBvalidate\fR),
+build succeeded (\fBrun\fR), or status displayed (\fBstatus\fR).
+Post\-deploy hook failure is non\-fatal and still exits 0.
+.TP
+\fB1\fR
+Startup failure, validation error, build failure, site not found, or configuration error.
+.TP
+\fB2\fR
+Command\-line usage error (unknown flag, missing subcommand, etc.).
+"#;
+
+const INSTALLATION: &[u8] = br#".SH "INSTALLATION"
+When installed via deb or rpm packages, the post\-install script automatically
+detects the available container runtime and configures system\-level access:
+.TP
+\fBDocker\fR
+The \fBwitryna\fR user is added to the \fBdocker\fR group and a systemd
+override is installed at
+\fI/etc/systemd/system/witryna.service.d/10\-runtime.conf\fR
+granting access to the Docker socket.
+.TP
+\fBPodman\fR
+Subordinate UID/GID ranges are allocated (100000\-165535), user lingering is
+enabled, and a systemd override is installed that disables
+\fBRestrictNamespaces\fR and sets \fBXDG_RUNTIME_DIR\fR.
+.PP
+If neither runtime is found at install time, a warning is printed.
+Install a runtime and reinstall the package, or manually copy the appropriate
+override template from \fI/usr/share/doc/witryna/examples/systemd/\fR to
+\fI/etc/systemd/system/witryna.service.d/10\-runtime.conf\fR.
+"#;
+
+const FILES: &[u8] = br#".SH "FILES"
+.TP
+\fI/etc/witryna/witryna.toml\fR
+Conventional configuration path for system\-wide installs.
+The shipped systemd unit passes \fB\-\-config /etc/witryna/witryna.toml\fR
+explicitly; without \fB\-\-config\fR the CLI defaults to \fIwitryna.toml\fR
+in the current working directory.
+.TP
+\fI/var/lib/witryna/clones/<site>/\fR
+Git repository clones.
+.TP
+\fI/var/lib/witryna/builds/<site>/<timestamp>/\fR
+Timestamped build outputs.
+.TP
+\fI/var/lib/witryna/builds/<site>/current\fR
+Symlink to the latest successful build.
+.TP
+\fI/var/lib/witryna/cache/<site>/\fR
+Persistent build cache volumes.
+.TP
+\fI/var/log/witryna/<site>/<timestamp>.log\fR
+Per\-build log files (site, timestamp, git commit, image, duration, status, stdout, stderr).
+.TP
+\fI/var/log/witryna/<site>/<timestamp>\-hook.log\fR
+Post\-deploy hook output (if configured).
+"#;
+
+const SEE_ALSO: &[u8] = br#".SH "SEE ALSO"
+\fBwitryna.toml\fR(5)
+"#;
+
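+// Date for the man page header: today's date from the system `date` command,
+// or an empty string if that fails.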
+fn build_date() -> String {
+ std::process::Command::new("date")
+ .arg("+%Y-%m-%d")
+ .output()
+ .ok()
+ .and_then(|o| String::from_utf8(o.stdout).ok())
+ .map(|s| s.trim().to_owned())
+ .unwrap_or_default()
+}
diff --git a/debian/postinst b/debian/postinst
new file mode 100644
index 0000000..f47ea01
--- /dev/null
+++ b/debian/postinst
@@ -0,0 +1,54 @@
+#!/bin/bash
+set -e
+case "$1" in
+ configure)
+ # Create system user/group
+ if ! getent passwd witryna >/dev/null; then
+ adduser --system --group --no-create-home --home /var/lib/witryna witryna
+ fi
+ # Create data + log directories
+ install -d -o witryna -g witryna -m 0755 /var/lib/witryna
+ install -d -o witryna -g witryna -m 0755 /var/lib/witryna/clones
+ install -d -o witryna -g witryna -m 0755 /var/lib/witryna/builds
+ install -d -o witryna -g witryna -m 0755 /var/lib/witryna/cache
+ install -d -o witryna -g witryna -m 0755 /var/log/witryna
+ # Config file is installed by dpkg from the asset.
+ # Fix ownership so the witryna service can read it (Group=witryna in unit).
+ chown root:witryna /etc/witryna/witryna.toml
+ chmod 640 /etc/witryna/witryna.toml
+ # Auto-detect and configure container runtime
+ if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
+ # Docker: add to docker group + install override
+ if getent group docker >/dev/null; then
+ usermod -aG docker witryna || true
+ fi
+ mkdir -p /etc/systemd/system/witryna.service.d
+ cp /usr/share/doc/witryna/examples/systemd/docker.conf \
+ /etc/systemd/system/witryna.service.d/10-runtime.conf
+ chmod 644 /etc/systemd/system/witryna.service.d/10-runtime.conf
+ systemctl daemon-reload >/dev/null 2>&1 || true
+ echo "witryna: Docker detected and configured."
+ elif command -v podman >/dev/null 2>&1 && podman info >/dev/null 2>&1; then
+ # Podman: subuids + lingering + override
+ if ! grep -q "^witryna:" /etc/subuid 2>/dev/null; then
+ usermod --add-subuids 100000-165535 witryna || true
+ fi
+ if ! grep -q "^witryna:" /etc/subgid 2>/dev/null; then
+ usermod --add-subgids 100000-165535 witryna || true
+ fi
+ loginctl enable-linger witryna >/dev/null 2>&1 || true
+ mkdir -p /etc/systemd/system/witryna.service.d
+ cp /usr/share/doc/witryna/examples/systemd/podman.conf \
+ /etc/systemd/system/witryna.service.d/10-runtime.conf
+ chmod 644 /etc/systemd/system/witryna.service.d/10-runtime.conf
+ systemctl daemon-reload >/dev/null 2>&1 || true
+ echo "witryna: Podman detected and configured."
+ else
+ echo "witryna: WARNING — no container runtime (docker/podman) detected."
+ echo " Install one, then reinstall this package or copy an override from"
+ echo " /usr/share/doc/witryna/examples/systemd/ manually."
+ fi
+ ;;
+esac
+#DEBHELPER#
+exit 0
diff --git a/debian/postrm b/debian/postrm
new file mode 100644
index 0000000..5a7f86a
--- /dev/null
+++ b/debian/postrm
@@ -0,0 +1,19 @@
+#!/bin/bash
+set -e
+case "$1" in
+ purge)
+ if getent passwd witryna >/dev/null; then
+ deluser --quiet --system witryna >/dev/null || true
+ fi
+ if getent group witryna >/dev/null; then
+ delgroup --quiet --system witryna >/dev/null || true
+ fi
+ rm -rf /etc/witryna
+ rm -rf /etc/systemd/system/witryna.service.d
+ systemctl daemon-reload >/dev/null 2>&1 || true
+ loginctl disable-linger witryna >/dev/null 2>&1 || true
+ # /var/lib/witryna and /var/log/witryna left for manual cleanup
+ ;;
+esac
+#DEBHELPER#
+exit 0
diff --git a/debian/witryna.service b/debian/witryna.service
new file mode 100644
index 0000000..d3e0713
--- /dev/null
+++ b/debian/witryna.service
@@ -0,0 +1,62 @@
+[Unit]
+Description=Witryna - Git-based static site deployment orchestrator
+Documentation=man:witryna(1) man:witryna.toml(5)
+After=network-online.target
+Wants=network-online.target
+
+[Service]
+Type=simple
+User=witryna
+Group=witryna
+
+# Start the deployment server
+ExecStart=/usr/bin/witryna serve --config /etc/witryna/witryna.toml
+ExecReload=/bin/kill -HUP $MAINPID
+
+# Environment
+Environment="RUST_LOG=info"
+
+# Restart policy
+Restart=on-failure
+RestartSec=5
+StartLimitBurst=3
+StartLimitIntervalSec=60
+
+# Security hardening
+NoNewPrivileges=yes
+PrivateTmp=yes
+ProtectSystem=strict
+ProtectKernelTunables=yes
+ProtectKernelModules=yes
+ProtectControlGroups=yes
+RestrictNamespaces=yes
+RestrictRealtime=yes
+RestrictSUIDSGID=yes
+LockPersonality=yes
+MemoryDenyWriteExecute=yes
+
+# Note: ProtectHome=yes is NOT set because it hides /run/user/<uid>,
+# which is required for rootless Podman. The witryna user's home is
+# /var/lib/witryna (covered by ReadWritePaths), not /home.
+
+# Allow read/write to witryna directories
+ReadWritePaths=/var/lib/witryna
+ReadWritePaths=/var/log/witryna
+
+# Allow access to container runtime directories
+# For Podman (rootless): needs /run/user/<uid> for XDG_RUNTIME_DIR
+ReadWritePaths=/run/user
+# For Docker:
+# SupplementaryGroups=docker
+# ReadWritePaths=/var/run/docker.sock
+
+# Capabilities (minimal for container runtime access)
+CapabilityBoundingSet=
+AmbientCapabilities=
+
+# Resource limits
+LimitNOFILE=65536
+LimitNPROC=4096
+
+[Install]
+WantedBy=multi-user.target
diff --git a/examples/caddy/Caddyfile b/examples/caddy/Caddyfile
new file mode 100644
index 0000000..b2285f6
--- /dev/null
+++ b/examples/caddy/Caddyfile
@@ -0,0 +1,25 @@
+# Caddyfile — Witryna with auto-managed site configs
+#
+# Site configs are generated by the caddy-deploy.sh hook script
+# and imported from /etc/caddy/sites.d/. See examples/hooks/caddy-deploy.sh.
+#
+# Caddy obtains and renews TLS certificates automatically via ACME.
+# See https://caddyserver.com/docs/ for full documentation.
+
+# Import auto-managed site configs
+import /etc/caddy/sites.d/*.caddy
+
+# Webhook endpoint — reverse proxy to Witryna
+witryna.example.com {
+ reverse_proxy 127.0.0.1:8080
+
+ # Restrict access to POST requests only
+ @not_post not method POST
+ respond @not_post 405
+
+ # Security headers
+ header {
+ X-Content-Type-Options "nosniff"
+ -Server
+ }
+}
diff --git a/examples/hooks/caddy-deploy.sh b/examples/hooks/caddy-deploy.sh
new file mode 100755
index 0000000..7f2173b
--- /dev/null
+++ b/examples/hooks/caddy-deploy.sh
@@ -0,0 +1,118 @@
+#!/bin/sh
+# caddy-deploy.sh — Post-deploy hook for Witryna + Caddy integration
+#
+# Generates a Caddyfile snippet for the deployed site and reloads Caddy.
+# Supports wildcard hosting domains and custom primary domains with redirects.
+#
+# Env vars from Witryna (automatic):
+# WITRYNA_SITE — site name
+# WITRYNA_PUBLIC_DIR — stable "current" symlink path (document root)
+#
+# Env vars from [sites.env] in witryna.toml:
+# BASE_DOMAIN — wildcard hosting domain (e.g. mywitrynahost.com)
+# PRIMARY_DOMAIN — (optional) custom primary domain
+# REDIRECT_DOMAINS — (optional) comma-separated additional redirect domains
+# CADDY_SITES_DIR — (optional) where to write configs (default: /etc/caddy/sites.d)
+#
+# Behavior matrix:
+# BASE_DOMAIN set, PRIMARY_DOMAIN not set:
+# Serving: {site}.{base}
+# Redirects: (none)
+#
+# BASE_DOMAIN set, PRIMARY_DOMAIN set:
+# Serving: PRIMARY_DOMAIN
+# Redirects: {site}.{base} + REDIRECT_DOMAINS → PRIMARY_DOMAIN
+#
+# BASE_DOMAIN not set, PRIMARY_DOMAIN set:
+# Serving: PRIMARY_DOMAIN
+# Redirects: REDIRECT_DOMAINS → PRIMARY_DOMAIN
+#
+# Neither set: error
+#
+# Usage in witryna.toml:
+# post_deploy = ["/etc/witryna/hooks/caddy-deploy.sh"]
+# [sites.env]
+# BASE_DOMAIN = "mywitrynahost.com"
+# PRIMARY_DOMAIN = "blog.example.com"
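+#
+# Example generated snippet (illustrative; assumes site "blog",
+# BASE_DOMAIN=mywitrynahost.com, PRIMARY_DOMAIN=blog.example.com):
+#
+#   blog.example.com {
+#       root * /var/lib/witryna/builds/blog/current
+#       file_server
+#       encode gzip
+#       header { ... }
+#   }
+#   blog.mywitrynahost.com {
+#       redir https://blog.example.com{uri} permanent
+#   }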
+
+set -eu
+
+SITES_DIR="${CADDY_SITES_DIR:-/etc/caddy/sites.d}"
+CADDY_CONFIG="${CADDY_CONFIG:-/etc/caddy/Caddyfile}"
+
+# Validate required env vars
+if [ -z "${WITRYNA_SITE:-}" ]; then
+ echo "ERROR: WITRYNA_SITE is not set" >&2
+ exit 1
+fi
+if [ -z "${WITRYNA_PUBLIC_DIR:-}" ]; then
+ echo "ERROR: WITRYNA_PUBLIC_DIR is not set" >&2
+ exit 1
+fi
+
+# Determine serving domain and redirect domains
+auto_domain=""
+if [ -n "${BASE_DOMAIN:-}" ]; then
+ auto_domain="${WITRYNA_SITE}.${BASE_DOMAIN}"
+fi
+
+serving_domain=""
+redirect_domains=""
+
+if [ -n "${PRIMARY_DOMAIN:-}" ]; then
+ serving_domain="$PRIMARY_DOMAIN"
+ # Auto-domain redirects to primary (if base is set)
+ if [ -n "$auto_domain" ]; then
+ redirect_domains="$auto_domain"
+ fi
+ # Append user-specified redirect domains
+ if [ -n "${REDIRECT_DOMAINS:-}" ]; then
+ if [ -n "$redirect_domains" ]; then
+ redirect_domains="${redirect_domains}, ${REDIRECT_DOMAINS}"
+ else
+ redirect_domains="$REDIRECT_DOMAINS"
+ fi
+ fi
+elif [ -n "$auto_domain" ]; then
+ serving_domain="$auto_domain"
+ # No primary → REDIRECT_DOMAINS still apply as redirects to auto_domain
+ if [ -n "${REDIRECT_DOMAINS:-}" ]; then
+ redirect_domains="$REDIRECT_DOMAINS"
+ fi
+else
+ echo "ERROR: at least one of BASE_DOMAIN or PRIMARY_DOMAIN must be set" >&2
+ exit 1
+fi
+
+# Ensure sites directory exists
+mkdir -p "$SITES_DIR"
+
+# Generate Caddyfile snippet
+config_file="${SITES_DIR}/${WITRYNA_SITE}.caddy"
+{
+ echo "# Managed by witryna caddy-deploy.sh — do not edit"
+ echo "${serving_domain} {"
+ echo " root * ${WITRYNA_PUBLIC_DIR}"
+ echo " file_server"
+ echo " encode gzip"
+ echo " header {"
+ echo " X-Frame-Options \"DENY\""
+ echo " X-Content-Type-Options \"nosniff\""
+ echo " Referrer-Policy \"strict-origin-when-cross-origin\""
+ echo " -Server"
+ echo " }"
+ echo "}"
+
+ if [ -n "$redirect_domains" ]; then
+ echo ""
+ echo "${redirect_domains} {"
+ echo " redir https://${serving_domain}{uri} permanent"
+ echo "}"
+ fi
+} > "$config_file"
+
+echo "Wrote Caddy config: $config_file"
+
+# Reload Caddy
+caddy reload --config "$CADDY_CONFIG"
+echo "Caddy reloaded"
diff --git a/examples/nginx/witryna.conf b/examples/nginx/witryna.conf
new file mode 100644
index 0000000..5f56ef2
--- /dev/null
+++ b/examples/nginx/witryna.conf
@@ -0,0 +1,48 @@
+# witryna.conf — Nginx reverse proxy configuration for Witryna
+#
+# Two server blocks:
+# 1. Public site — serves the built static assets
+# 2. Webhook endpoint — proxies deploy triggers to Witryna
+#
+# TLS is not configured here — use certbot or similar to add certificates:
+# sudo certbot --nginx -d my-site.example.com -d witryna.example.com
+
+# Public site — serves your built static files
+server {
+ listen 80;
+ server_name my-site.example.com;
+
+ root /var/lib/witryna/builds/my-site/current;
+ index index.html;
+
+ location / {
+ try_files $uri $uri/ =404;
+ }
+
+ # Security headers
+ add_header X-Frame-Options "DENY" always;
+ add_header X-Content-Type-Options "nosniff" always;
+ add_header Referrer-Policy "strict-origin-when-cross-origin" always;
+}
+
+# Webhook endpoint — reverse proxy to Witryna
+server {
+ listen 80;
+ server_name witryna.example.com;
+
+ # Only allow POST requests
+ location / {
+ limit_except POST {
+ deny all;
+ }
+
+ proxy_pass http://127.0.0.1:8080;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+ }
+
+ # Security headers
+ add_header X-Content-Type-Options "nosniff" always;
+}
diff --git a/examples/systemd/docker.conf b/examples/systemd/docker.conf
new file mode 100644
index 0000000..9ee2b2d
--- /dev/null
+++ b/examples/systemd/docker.conf
@@ -0,0 +1,3 @@
+[Service]
+SupplementaryGroups=docker
+ReadWritePaths=/var/run/docker.sock
diff --git a/examples/systemd/podman.conf b/examples/systemd/podman.conf
new file mode 100644
index 0000000..98502f8
--- /dev/null
+++ b/examples/systemd/podman.conf
@@ -0,0 +1,3 @@
+[Service]
+RestrictNamespaces=no
+Environment="XDG_RUNTIME_DIR=/run/user/%U"
diff --git a/examples/witryna.toml b/examples/witryna.toml
new file mode 100644
index 0000000..6256d63
--- /dev/null
+++ b/examples/witryna.toml
@@ -0,0 +1,63 @@
+# /etc/witryna/witryna.toml — Witryna configuration
+# See witryna.toml(5) for full documentation.
+
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_dir = "/var/log/witryna"
+log_level = "info"
+rate_limit_per_minute = 10
+max_builds_to_keep = 5
+# git_timeout = "2m" # default: 60s, range: 5s..1h
+
+# [[sites]]
+# name = "my-site"
+# repo_url = "https://github.com/user/my-site.git"
+# branch = "main"
+# webhook_token = "CHANGE-ME"
+# # Or use environment variable: webhook_token = "${WITRYNA_TOKEN}"
+# # Or use file (Docker/K8s secrets): webhook_token_file = "/run/secrets/token"
+# # Omit webhook_token to disable authentication (e.g., behind VPN)
+#
+# # Polling (default: disabled, webhook-only)
+# # poll_interval = "30m" # min: 60s
+#
+# # Build timeout (default: 10m, range: 10s..24h)
+# # build_timeout = "15m"
+#
+# # Git clone depth (default: 1 for shallow, 0 for full history)
+# # git_depth = 0
+#
+# # Container resource limits
+# # container_memory = "512m"
+# # container_cpus = 1.0
+# # container_pids_limit = 256
+# # container_network = "bridge" # bridge (default) | none | host | slirp4netns
+#
+# # Container working directory (for monorepos)
+# # container_workdir = "packages/frontend"
+#
+# # Custom repo config file (default: .witryna.yaml → .witryna.yml → witryna.yaml → witryna.yml)
+# # config_file = ".witryna.yaml"
+#
+# # Cache directories (container paths, persisted across builds)
+# # cache_dirs = ["/root/.npm"]
+#
+# # Build config overrides (when all three are set, witryna.yaml becomes optional)
+# # image = "node:20-alpine"
+# # command = "npm ci && npm run build"
+# # public = "dist"
+#
+# # Post-deploy hook (30s timeout, non-fatal)
+# # post_deploy = ["systemctl", "reload", "nginx"]
+#
+# # Caddy auto-configuration (see examples/hooks/caddy-deploy.sh)
+# # post_deploy = ["/etc/witryna/hooks/caddy-deploy.sh"]
+#
+# # Environment variables for builds and hooks
+# # [sites.env]
+# # NODE_ENV = "production"
+# # BASE_DOMAIN = "mywitrynahost.com" # for Caddy hook
+# # PRIMARY_DOMAIN = "my-site.example.com" # for Caddy hook
+# # REDIRECT_DOMAINS = "www.my-site.example.com" # for Caddy hook
+
diff --git a/examples/witryna.yaml b/examples/witryna.yaml
new file mode 100644
index 0000000..3d6a09f
--- /dev/null
+++ b/examples/witryna.yaml
@@ -0,0 +1,14 @@
+# witryna.yaml — per-repository build configuration
+# Place this file in the root of your Git repository.
+# Supported filenames: .witryna.yaml, .witryna.yml, witryna.yaml, witryna.yml
+# Or set config_file in witryna.toml for a custom path.
+# See witryna.toml(5) for overriding these values in the server config.
+
+# Container image for the build environment
+image: node:20-alpine
+
+# Build command (executed via sh -c inside the container)
+command: "npm ci && npm run build"
+
+# Directory containing built static assets (relative to repo root)
+public: dist
diff --git a/lefthook.yml b/lefthook.yml
new file mode 100644
index 0000000..fd6856d
--- /dev/null
+++ b/lefthook.yml
@@ -0,0 +1,22 @@
+---
+pre-commit:
+ parallel: true
+ commands:
+ hadolint:
+ glob: "**/*Dockerfile"
+ run: hadolint {staged_files}
+ woodpecker:
+ glob: ".woodpecker/*.{yml,yaml}"
+ run: woodpecker lint {staged_files}
+ yamllint:
+ glob: "**/*.{yml,yaml}"
+ run: yamllint {staged_files}
+ fmt:
+ run: cargo fmt --all -- --check
+ stage_fixed: true
+ clippy:
+ run: cargo clippy --all-targets --all-features -- -D warnings
+ test:
+ run: cargo test --all
+ gitleaks:
+ run: gitleaks protect --staged
diff --git a/man/witryna.toml.5 b/man/witryna.toml.5
new file mode 100644
index 0000000..29c0331
--- /dev/null
+++ b/man/witryna.toml.5
@@ -0,0 +1,490 @@
+.TH WITRYNA.TOML 5 "2026-02-10" "witryna 0.1.0" "Witryna Configuration"
+.SH NAME
+witryna.toml \- configuration file for \fBwitryna\fR(1)
+.SH DESCRIPTION
+\fBwitryna.toml\fR is a TOML file that configures the \fBwitryna\fR static site
+deployment orchestrator.
+It defines the HTTP listen address, container runtime, directory layout,
+logging, rate limiting, and zero or more site definitions with optional build
+overrides, polling intervals, cache volumes, and post\-deploy hooks.
+.PP
+The file is read at startup and can be reloaded at runtime by sending
+\fBSIGHUP\fR to the process (see \fBHOT RELOAD\fR below).
+.SH GLOBAL OPTIONS
+.TP
+\fBlisten_address\fR = "\fIip:port\fR" (required)
+Socket address the HTTP server binds to.
+Must be a valid \fIip:port\fR pair (e.g., "127.0.0.1:8080").
+.TP
+\fBcontainer_runtime\fR = "\fIname\fR" (required)
+Container runtime executable used to run build commands.
+Typically "podman" or "docker".
+Must not be empty or whitespace\-only.
+.TP
+\fBbase_dir\fR = "\fI/path\fR" (required)
+Root directory for clones, builds, and cache.
+Default layout:
+.RS
+.nf
+<base_dir>/clones/<site>/
+<base_dir>/builds/<site>/<timestamp>/
+<base_dir>/builds/<site>/current -> <timestamp>
+<base_dir>/cache/<site>/
+.fi
+.RE
+.TP
+\fBlog_dir\fR = "\fI/path\fR" (optional, default: "/var/log/witryna")
+Directory for per\-build log files.
+Layout: \fI<log_dir>/<site>/<timestamp>.log\fR
+.TP
+\fBlog_level\fR = "\fIlevel\fR" (required)
+Tracing verbosity.
+Valid values: "trace", "debug", "info", "warn", "error" (case\-insensitive).
+Can be overridden at runtime with the \fBRUST_LOG\fR environment variable.
+.TP
+\fBrate_limit_per_minute\fR = \fIn\fR (optional, default: 10)
+Maximum webhook requests per token per minute.
+Exceeding this limit returns HTTP 429.
+.PP
+\fBNote:\fR The server enforces a hard 1\ MB request body size limit.
+This is not configurable and applies to all endpoints.
+.TP
+\fBmax_builds_to_keep\fR = \fIn\fR (optional, default: 5)
+Number of timestamped build directories to retain per site.
+Older builds and their corresponding log files are removed after each
+successful publish.
+Set to 0 to disable cleanup (keep all builds).
+.TP
+\fBgit_timeout\fR = "\fIduration\fR" (optional, default: "1m")
+Maximum time allowed for each git operation (clone, fetch, reset, submodule update).
+Accepts a \fBhumantime\fR duration string (e.g., "30s", "2m", "5m").
+.RS
+Minimum is 5 seconds, maximum is 1 hour.
+Applies globally to all sites.
+Repositories with many or large submodules may need a longer timeout (e.g., "5m").
+.RE
+.SH SITE DEFINITIONS
+Sites are defined as TOML array\-of\-tables entries under \fB[[sites]]\fR.
+Each site represents a Git repository that witryna can build and publish.
+Site names must be unique.
+The list may be empty (\fBsites = []\fR); witryna will start but serve
+only the health\-check endpoint.
+.TP
+\fBname\fR = "\fIsite\-name\fR" (required)
+Unique identifier for the site.
+Used in the webhook URL path (\fIPOST /<name>\fR) and directory names.
+.RS
+Validation rules:
+.IP \(bu 2
+Alphanumeric characters, hyphens, and underscores only.
+.IP \(bu 2
+Cannot start or end with a hyphen or underscore.
+.IP \(bu 2
+Cannot contain consecutive hyphens or consecutive underscores.
+.IP \(bu 2
+No path traversal characters (\fI..\fR, \fI/\fR, \fI\\\fR).
+.RE
+.TP
+\fBrepo_url\fR = "\fIurl\fR" (required)
+Git repository URL to clone.
+Any URL that \fBgit clone\fR accepts (HTTPS, SSH, local path).
+.TP
+\fBbranch\fR = "\fIref\fR" (required)
+Git branch to track.
+This branch is checked out after clone and fetched on each build trigger.
+.TP
+\fBwebhook_token\fR = "\fItoken\fR" (optional)
+Bearer token for webhook endpoint authentication.
+If omitted or set to an empty string, webhook authentication is disabled for
+this site \(em all POST requests to the site endpoint will be accepted without
+token validation.
+Use this when the endpoint is protected by other means (reverse proxy, VPN,
+firewall).
+A warning is logged at startup for sites without authentication.
+.RS
+When set, the token is validated using constant\-time comparison to prevent
+timing attacks.
+Sent as: \fIAuthorization: Bearer <token>\fR.
+.PP
+The token can be provided in three ways:
+.IP \(bu 2
+\fBLiteral value:\fR \fBwebhook_token = "my\-secret"\fR
+.IP \(bu 2
+\fBEnvironment variable:\fR \fBwebhook_token = "${VAR_NAME}"\fR \- resolved from
+the process environment at config load time.
+The variable name must consist of ASCII uppercase letters, digits, and underscores,
+and must start with a letter or underscore.
+Only full\-value substitution is supported; partial interpolation
+(e.g., "prefix\-${VAR}") is treated as a literal token.
+.IP \(bu 2
+\fBFile:\fR Use \fBwebhook_token_file\fR (see below).
+.PP
+The \fB${VAR}\fR syntax and \fBwebhook_token_file\fR are mutually exclusive.
+If the referenced environment variable is not set or the file cannot be read,
+config loading fails with an error.
+.RE
+.TP
+\fBwebhook_token_file\fR = "\fI/path/to/file\fR" (optional)
+Path to a file containing the webhook token.
+The file contents are read and trimmed of leading/trailing whitespace.
+Compatible with Docker secrets (\fI/run/secrets/\fR) and Kubernetes secret volumes.
+.RS
+When set, \fBwebhook_token\fR should be omitted (it defaults to empty).
+Cannot be combined with the \fB${VAR}\fR substitution syntax.
+.PP
+\fBSecurity note:\fR Ensure the token file has restrictive permissions
+(e.g., 0400 or 0600) and is readable only by the witryna user.
+.RE
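+.PP
+For example (the token value, variable name, and path below are placeholders;
+use exactly one form per site):
+.PP
+.nf
+.RS 4
+webhook_token = "s3cret\-tok3n"
+# or resolved from the environment at load time:
+webhook_token = "${MY_SITE_TOKEN}"
+# or read from a secrets file:
+webhook_token_file = "/run/secrets/my\-site\-token"
+.fi
+.RE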
+.SH REPOSITORY CONFIG FILE
+.TP
+\fBconfig_file\fR = "\fIpath\fR" (optional)
+Path to a custom build config file in the repository, relative to the repo root
+(e.g., ".witryna.yaml", "build/config.yml").
+Must be a relative path with no path traversal (\fI..\fR).
+.PP
+If not set, witryna searches the repository root in order:
+\fI.witryna.yaml\fR, \fI.witryna.yml\fR, \fIwitryna.yaml\fR, \fIwitryna.yml\fR.
+The first file found is used.
+.SH BUILD OVERRIDES
+Build parameters can optionally be specified directly in the site definition,
+overriding values from the repository's build config file
+(\fI.witryna.yaml\fR / \fIwitryna.yaml\fR).
+When all three fields (\fBimage\fR, \fBcommand\fR, \fBpublic\fR) are set,
+the repository config file becomes optional.
+.TP
+\fBimage\fR = "\fIcontainer:tag\fR" (optional)
+Container image to use for the build (e.g., "node:20\-alpine").
+Required unless provided in \fIwitryna.yaml\fR.
+Must not be blank.
+.TP
+\fBcommand\fR = "\fIshell command\fR" (optional)
+Build command executed via \fBsh \-c\fR inside the container.
+Must not be blank.
+.TP
+\fBpublic\fR = "\fIrelative/path\fR" (optional)
+Directory containing built static assets, relative to the repository root.
+Must be a relative path with no path traversal (\fI..\fR).
+.SH RESOURCE LIMITS
+Optional resource limits for container builds.
+These flags are passed directly to the container runtime.
+.TP
+\fBcontainer_memory\fR = "\fIsize\fR" (optional)
+Memory limit for the build container (e.g., "512m", "2g", "1024k").
+Must be a number followed by a unit suffix: \fBk\fR, \fBm\fR, or \fBg\fR
+(case\-insensitive).
+Passed as \fB\-\-memory\fR to the container runtime.
+.TP
+\fBcontainer_cpus\fR = \fIn\fR (optional)
+CPU limit for the build container (e.g., 0.5, 2.0).
+Must be greater than 0.
+Passed as \fB\-\-cpus\fR to the container runtime.
+.TP
+\fBcontainer_pids_limit\fR = \fIn\fR (optional)
+Maximum number of PIDs inside the build container (e.g., 100).
+Must be greater than 0.
+Passed as \fB\-\-pids\-limit\fR to the container runtime.
+Helps prevent fork bombs during builds.
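+.PP
+For example (values are illustrative), the following site settings are passed
+to the runtime as \fB\-\-memory 512m \-\-cpus 1.5 \-\-pids\-limit 256\fR:
+.PP
+.nf
+.RS 4
+container_memory = "512m"
+container_cpus = 1.5
+container_pids_limit = 256
+.fi
+.RE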
+.SH NETWORK ISOLATION
+.TP
+\fBcontainer_network\fR = "\fImode\fR" (optional, default: "bridge")
+Network mode for the build container.
+Passed as \fB\-\-network=\fImode\fR to the container runtime.
+.RS
+Allowed values:
+.IP \(bu 2
+\fB"bridge"\fR (default) \- Standard container networking (NAT).
+Works out of the box for builds that download dependencies (e.g., \fBnpm install\fR).
+.IP \(bu 2
+\fB"none"\fR \- No network access.
+Most secure option; use for builds that don't need to download anything.
+.IP \(bu 2
+\fB"host"\fR \- Use the host network namespace directly.
+.IP \(bu 2
+\fB"slirp4netns"\fR \- User\-mode networking (Podman rootless).
+.PP
+Set \fBcontainer_network = "none"\fR for maximum isolation when your build
+does not require network access.
+.RE
+.SH CONTAINER WORKING DIRECTORY
+.TP
+\fBcontainer_workdir\fR = "\fIpath\fR" (optional, default: repo root)
+Working directory inside the build container, relative to the repository root.
+Useful for monorepo projects where the build runs from a subdirectory.
+.PP
+The value is a relative path (e.g., "packages/frontend") appended to the
+default \fB/workspace\fR mount point, resulting in
+\fB\-\-workdir /workspace/packages/frontend\fR.
+.PP
+Must be a relative path with no path traversal (\fI..\fR) and no leading slash.
+.SH POLLING
+.TP
+\fBpoll_interval\fR = "\fIduration\fR" (optional, default: disabled)
+If set, witryna periodically fetches the remote branch and triggers a build
+when new commits are detected.
+Accepts a \fBhumantime\fR duration string (e.g., "30m", "1h", "2h30m").
+.RS
+Minimum interval is 1 minute.
+Polling respects the concurrent build lock; if a build is already in progress,
+the poll cycle is skipped.
+Initial poll delays are staggered across sites to avoid a thundering herd.
+.RE
+.SH GIT CLONE DEPTH
+.TP
+\fBgit_depth\fR = \fIn\fR (optional, default: 1)
+Git clone depth for the repository.
+.RS
+.IP \(bu 2
+\fB1\fR (default) \- Shallow clone with only the latest commit.
+Fast and minimal disk usage.
+.IP \(bu 2
+\fB0\fR \- Full clone with complete history.
+Required for \fBgit describe\fR or monorepo change detection.
+.IP \(bu 2
+\fBn > 1\fR \- Shallow clone with \fIn\fR commits of history.
+.RE
+.PP
+Submodules are detected and initialized automatically regardless of clone depth.
+.SH BUILD TIMEOUT
+.TP
+\fBbuild_timeout\fR = "\fIduration\fR" (optional, default: "10m")
+Maximum time a build command is allowed to run before being killed.
+Accepts a \fBhumantime\fR duration string (e.g., "5m", "30m", "1h").
+.RS
+Minimum is 10 seconds, maximum is 24 hours.
+.RE
+.SH CACHE VOLUMES
+.TP
+\fBcache_dirs\fR = ["\fI/container/path\fR", ...] (optional)
+List of absolute container paths to persist as cache volumes across builds.
+Each path gets a dedicated host directory under \fI<base_dir>/cache/<site>/\fR.
+.RS
+Validation rules:
+.IP \(bu 2
+Each entry must be an absolute path.
+.IP \(bu 2
+No path traversal (\fI..\fR) allowed.
+.IP \(bu 2
+No duplicates after normalization.
+.IP \(bu 2
+Maximum 20 entries per site.
+.PP
+Common cache paths:
+.TS
+l l.
+Node.js /root/.npm
+Python pip /root/.cache/pip
+Go modules /root/.cache/go
+Rust cargo /usr/local/cargo/registry
+Maven /root/.m2/repository
+.TE
+.PP
+Cache directories are \fBnever cleaned automatically\fR by witryna.
+Administrators should monitor disk usage under
+\fI<base_dir>/cache/\fR and prune manually when needed.
+.RE
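+.PP
+For example, an npm\-based build that also runs pip might persist both caches
+(paths are illustrative):
+.PP
+.nf
+.RS 4
+cache_dirs = ["/root/.npm", "/root/.cache/pip"]
+.fi
+.RE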
+.SH ENVIRONMENT VARIABLES
+.TP
+\fB[sites.env]\fR (optional)
+TOML table of environment variables passed to builds and post\-deploy hooks.
+Each key\-value pair becomes a \fB\-\-env KEY=VALUE\fR flag for the container
+runtime.
+Variables are also passed to post\-deploy hooks.
+.RS
+Validation rules:
+.IP \(bu 2
+Keys must not be empty.
+.IP \(bu 2
+Keys must not contain \fB=\fR.
+.IP \(bu 2
+Neither keys nor values may contain null bytes.
+.IP \(bu 2
+Keys starting with \fBWITRYNA_\fR (case\-insensitive) are reserved and rejected.
+.IP \(bu 2
+Maximum 64 entries per site.
+.PP
+In post\-deploy hooks, user\-defined variables are set \fBbefore\fR the reserved
+variables (PATH, HOME, LANG, WITRYNA_*), so they cannot override system or
+Witryna\-internal values.
+.RE
+.SH POST-DEPLOY HOOKS
+.TP
+\fBpost_deploy\fR = ["\fIcmd\fR", "\fIarg\fR", ...] (optional)
+Command to execute after a successful symlink switch.
+Uses array form (no shell interpolation) for safety.
+.RS
+The hook receives context exclusively via environment variables:
+.TP
+\fBWITRYNA_SITE\fR
+The site name.
+.TP
+\fBWITRYNA_BUILD_DIR\fR
+Absolute path to the new build directory (also the working directory).
+.TP
+\fBWITRYNA_PUBLIC_DIR\fR
+Absolute path to the stable current symlink
+(e.g. /var/lib/witryna/builds/my\-site/current).
+Use this as the web server document root.
+.TP
+\fBWITRYNA_BUILD_TIMESTAMP\fR
+Build timestamp (YYYYmmdd-HHMMSS-ffffff).
+.PP
+The hook runs with a minimal environment: any user\-defined variables
+from \fB[sites.env]\fR, followed by PATH, HOME, LANG,
+and the WITRYNA_* variables above (which take precedence).
+Its working directory is the build output directory.
+It is subject to a 30\-second timeout and killed if exceeded.
+Output is streamed to disk and logged to
+\fI<log_dir>/<site>/<timestamp>\-hook.log\fR.
+.PP
+Hook failure is \fBnon\-fatal\fR: the deployment is already live,
+and a warning is logged.
+The exit code is recorded in the hook log.
+A log file is written for every hook invocation (success, failure,
+timeout, or spawn error).
+.PP
+Validation rules:
+.IP \(bu 2
+The array must not be empty.
+.IP \(bu 2
+The first element (executable) must not be empty or whitespace\-only.
+.IP \(bu 2
+No element may contain null bytes.
+.IP \(bu 2
+Maximum 64 elements.
+.RE
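+.PP
+A minimal hook sketch (illustrative only; the path and script are hypothetical)
+that records what was deployed \(em its output ends up in the hook log:
+.PP
+.nf
+.RS 4
+#!/bin/sh
+# /etc/witryna/hooks/notify.sh
+set \-eu
+echo "deployed ${WITRYNA_SITE} (${WITRYNA_BUILD_TIMESTAMP}) at ${WITRYNA_PUBLIC_DIR}"
+.fi
+.RE
+.PP
+Referenced from the site definition as
+\fBpost_deploy = ["/etc/witryna/hooks/notify.sh"]\fR.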
+.SH HOT RELOAD
+Sending \fBSIGHUP\fR to the witryna process causes it to re\-read
+\fIwitryna.toml\fR.
+Sites can be added, removed, or reconfigured without downtime.
+Polling tasks are stopped and restarted to reflect the new configuration.
+.PP
+The following fields are \fBnot reloadable\fR and require a full restart:
+.IP \(bu 2
+\fBlisten_address\fR
+.IP \(bu 2
+\fBbase_dir\fR
+.IP \(bu 2
+\fBlog_dir\fR
+.IP \(bu 2
+\fBlog_level\fR
+.PP
+If any of these fields differ after reload, a warning is logged but the new
+values are ignored until the process is restarted.
+.PP
+If the reloaded configuration is invalid, the existing configuration remains
+active and an error is logged.
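+.PP
+For example, with the packaged systemd unit (whose \fBExecReload\fR sends
+\fBSIGHUP\fR):
+.PP
+.nf
+.RS 4
+systemctl reload witryna
+# or, without systemd:
+kill \-HUP "$(pidof witryna)"
+.fi
+.RE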
+.SH EXAMPLES
+A complete annotated configuration:
+.PP
+.nf
+.RS 4
+# Network
+listen_address = "127.0.0.1:8080"
+
+# Container runtime ("podman" or "docker")
+container_runtime = "podman"
+
+# Data directory (clones, builds, cache)
+base_dir = "/var/lib/witryna"
+
+# Log directory (per\-build logs)
+log_dir = "/var/log/witryna"
+
+# Tracing verbosity
+log_level = "info"
+
+# Webhook rate limit (per token, per minute)
+rate_limit_per_minute = 10
+
+# Keep the 5 most recent builds per site
+max_builds_to_keep = 5
+
+# Git operation timeout (clone, fetch, reset)
+# git_timeout = "2m"
+
+# A site that relies on witryna.yaml in the repo
+[[sites]]
+name = "my\-blog"
+repo_url = "https://github.com/user/my\-blog.git"
+branch = "main"
+webhook_token = "s3cret\-tok3n"
+# Or from environment: webhook_token = "${MY_BLOG_TOKEN}"
+# Or from file: webhook_token_file = "/run/secrets/blog\-token"
+poll_interval = "1h"
+
+# A site with full build overrides (no witryna.yaml needed)
+# Custom build timeout (default: 10 minutes)
+[[sites]]
+name = "docs\-site"
+repo_url = "https://github.com/user/docs.git"
+branch = "main"
+webhook_token = "an0ther\-t0ken"
+image = "node:20\-alpine"
+command = "npm ci && npm run build"
+public = "dist"
+build_timeout = "30m"
+cache_dirs = ["/root/.npm"]
+post_deploy = ["curl", "\-sf", "https://status.example.com/deployed"]
+
+# Environment variables for builds and hooks
+[sites.env]
+DEPLOY_TOKEN = "abc123"
+NODE_ENV = "production"
+.fi
+.RE
+.SH SECURITY CONSIDERATIONS
+.TP
+\fBContainer isolation\fR
+Build commands run inside ephemeral containers with \fB\-\-cap\-drop=ALL\fR.
+When the runtime is Podman, \fB\-\-userns=keep\-id\fR is added so the
+container user maps to the host UID, avoiding permission issues without
+any extra capabilities.
+When the runtime is Docker, \fBDAC_OVERRIDE\fR is re\-added because
+Docker runs as root (UID\ 0) inside the container while the workspace is
+owned by the host UID; without this capability, the build cannot read or
+write files.
+The container filesystem is isolated; the build has no direct access to
+the host beyond the mounted workspace directory.
+.SH SYSTEMD OVERRIDES
+The deb and rpm packages install example systemd override templates to
+\fI/usr/share/doc/witryna/examples/systemd/\fR.
+The post\-install script copies the appropriate template to
+\fI/etc/systemd/system/witryna.service.d/10\-runtime.conf\fR based on the
+detected container runtime.
+.TP
+\fBdocker.conf\fR
+.nf
+.RS 4
+[Service]
+SupplementaryGroups=docker
+ReadWritePaths=/var/run/docker.sock
+.fi
+.RE
+.TP
+\fBpodman.conf\fR
+.nf
+.RS 4
+[Service]
+RestrictNamespaces=no
+Environment="XDG_RUNTIME_DIR=/run/user/%U"
+.fi
+.RE
+.PP
+\fB%U\fR resolves to the numeric UID of the service user at runtime.
+To add custom overrides, create a separate file (e.g.,
+\fI20\-custom.conf\fR) in the same directory; do not edit
+\fI10\-runtime.conf\fR as it will be overwritten on package reinstall.
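+.PP
+For example, to raise logging verbosity without touching the packaged drop\-in
+(the file name is arbitrary):
+.PP
+.nf
+.RS 4
+# /etc/systemd/system/witryna.service.d/20\-custom.conf
+[Service]
+Environment="RUST_LOG=debug"
+.fi
+.RE
+.PP
+Run \fBsystemctl daemon\-reload\fR and restart the service to apply it.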
+.SH FILES
+When no \fB\-\-config\fR flag is given, \fBwitryna\fR searches for the
+configuration file in the following order:
+.IP 1. 4
+\fI./witryna.toml\fR (current working directory)
+.IP 2. 4
+\fI$XDG_CONFIG_HOME/witryna/witryna.toml\fR (default: \fI~/.config/witryna/witryna.toml\fR)
+.IP 3. 4
+\fI/etc/witryna/witryna.toml\fR
+.PP
+The first file found is used.
+If none exists, an error is printed listing all searched paths.
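+.PP
+To bypass the search entirely, pass the path explicitly (as the packaged
+systemd unit does):
+.PP
+.nf
+.RS 4
+witryna serve \-\-config /etc/witryna/witryna.toml
+.fi
+.RE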
+.SH SEE ALSO
+\fBwitryna\fR(1)
diff --git a/scripts/witryna.service b/scripts/witryna.service
new file mode 100644
index 0000000..63d7c2f
--- /dev/null
+++ b/scripts/witryna.service
@@ -0,0 +1,100 @@
+# Witryna - Git-based static site deployment orchestrator
+#
+# NOTE: This file is for MANUAL installations only.
+# The Debian package ships its own unit in debian/witryna.service.
+#
+# Installation:
+# 1. Create system user:
+# sudo adduser --system --group --no-create-home --home /var/lib/witryna witryna
+# 2. Copy binary: sudo cp target/release/witryna /usr/local/bin/
+# 3. Create config dir: sudo mkdir -p /etc/witryna
+# 4. Copy config: sudo cp witryna.toml /etc/witryna/
+# 5. Create dirs: sudo mkdir -p /var/lib/witryna/{clones,builds,cache} /var/log/witryna
+# 6. Set ownership: sudo chown -R witryna:witryna /var/lib/witryna /var/log/witryna
+# 7. Install service: sudo cp scripts/witryna.service /etc/systemd/system/
+# 8. Reload systemd: sudo systemctl daemon-reload
+# 9. Enable service: sudo systemctl enable witryna
+# 10. Start service: sudo systemctl start witryna
+#
+# Podman rootless prerequisites (if using container_runtime = "podman"):
+# - Allocate sub-UIDs/GIDs for the witryna user:
+# sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 witryna
+# - Enable lingering so the user session persists:
+# sudo loginctl enable-linger witryna
+# - Allow user namespaces via a systemd drop-in:
+# sudo mkdir -p /etc/systemd/system/witryna.service.d
+# printf '[Service]\nRestrictNamespaces=no\n' | \
+# sudo tee /etc/systemd/system/witryna.service.d/namespaces.conf
+# - Set XDG_RUNTIME_DIR via a systemd drop-in:
+# printf '[Service]\nEnvironment="XDG_RUNTIME_DIR=/run/user/%%U"\n' | \
+# sudo tee /etc/systemd/system/witryna.service.d/xdg-runtime.conf
+# - Reload systemd: sudo systemctl daemon-reload
+#
+# Usage:
+# sudo systemctl status witryna # Check status
+# sudo systemctl restart witryna # Restart service
+# sudo kill -HUP $(pidof witryna) # Hot-reload config
+# sudo journalctl -u witryna -f # View logs
+
+[Unit]
+Description=Witryna - Git-based static site deployment orchestrator
+Documentation=https://github.com/knightdave/witryna
+After=network-online.target
+Wants=network-online.target
+
+[Service]
+Type=simple
+User=witryna
+Group=witryna
+
+# Start the deployment server
+ExecStart=/usr/local/bin/witryna serve --config /etc/witryna/witryna.toml
+ExecReload=/bin/kill -HUP $MAINPID
+
+# Environment
+Environment="RUST_LOG=info"
+
+# Restart policy
+Restart=on-failure
+RestartSec=5
+StartLimitBurst=3
+StartLimitIntervalSec=60
+
+# Security hardening
+NoNewPrivileges=yes
+PrivateTmp=yes
+ProtectSystem=strict
+ProtectKernelTunables=yes
+ProtectKernelModules=yes
+ProtectControlGroups=yes
+RestrictNamespaces=yes
+RestrictRealtime=yes
+RestrictSUIDSGID=yes
+LockPersonality=yes
+MemoryDenyWriteExecute=yes
+
+# Note: ProtectHome=yes is NOT set because it hides /run/user/<uid>,
+# which is required for rootless Podman. The witryna user's home is
+# /var/lib/witryna (covered by ReadWritePaths), not /home.
+
+# Allow read/write to witryna directories
+ReadWritePaths=/var/lib/witryna
+ReadWritePaths=/var/log/witryna
+
+# Allow access to container runtime directories
+# For Podman (rootless): needs /run/user/<uid> for XDG_RUNTIME_DIR
+ReadWritePaths=/run/user
+# For Docker:
+# SupplementaryGroups=docker
+# ReadWritePaths=/var/run/docker.sock
+
+# Capabilities (minimal for container runtime access)
+CapabilityBoundingSet=
+AmbientCapabilities=
+
+# Resource limits
+LimitNOFILE=65536
+LimitNPROC=4096
+
+[Install]
+WantedBy=multi-user.target
diff --git a/src/build.rs b/src/build.rs
new file mode 100644
index 0000000..e887f64
--- /dev/null
+++ b/src/build.rs
@@ -0,0 +1,843 @@
+use anyhow::{Context as _, Result};
+use std::collections::HashMap;
+use std::path::{Path, PathBuf};
+use std::process::Stdio;
+use std::time::{Duration, Instant};
+use tokio::io::{AsyncWrite, AsyncWriteExt as _, BufWriter};
+use tokio::process::Command;
+use tracing::{debug, info};
+
+use crate::repo_config::RepoConfig;
+
+/// Optional container resource limits and network mode.
+///
+/// Passed from `SiteConfig` to `execute()` to inject `--memory`, `--cpus`,
+/// `--pids-limit`, and `--network` flags into the container command.
+#[derive(Debug)]
+pub struct ContainerOptions {
+ pub memory: Option<String>,
+ pub cpus: Option<f64>,
+ pub pids_limit: Option<u32>,
+ pub network: String,
+ pub workdir: Option<String>,
+}
+
+impl Default for ContainerOptions {
+ fn default() -> Self {
+ Self {
+ memory: None,
+ cpus: None,
+ pids_limit: None,
+ network: "bridge".to_owned(),
+ workdir: None,
+ }
+ }
+}
+
+/// Default timeout for build operations.
+pub const BUILD_TIMEOUT_DEFAULT: Duration = Duration::from_secs(600); // 10 minutes
+
+/// Size of the in-memory tail buffer for stderr (last 1 KB).
+/// Used for `BuildFailure::Display` after streaming to disk.
+const STDERR_TAIL_SIZE: usize = 1024;
+
+/// Result of a build execution.
+///
+/// Stdout and stderr are streamed to temporary files on disk during the build.
+/// Callers should pass these paths to `logs::save_build_log()` for composition.
+#[derive(Debug)]
+pub struct BuildResult {
+ pub stdout_file: PathBuf,
+ pub stderr_file: PathBuf,
+ pub duration: Duration,
+}
+
+/// Error from a failed build command.
+///
+/// Carries structured exit code and file paths to captured output.
+/// `last_stderr` holds the last 1 KB of stderr for the `Display` impl.
+#[derive(Debug)]
+pub struct BuildFailure {
+ pub exit_code: i32,
+ pub stdout_file: PathBuf,
+ pub stderr_file: PathBuf,
+ pub last_stderr: String,
+ pub duration: Duration,
+}
+
+impl std::fmt::Display for BuildFailure {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ write!(
+ f,
+ "build failed with exit code {}: {}",
+ self.exit_code,
+ self.last_stderr.trim()
+ )
+ }
+}
+
+impl std::error::Error for BuildFailure {}
+
+/// Writer that duplicates all writes to both a primary and secondary writer.
+///
+/// Used for `--verbose` mode: streams build output to both a temp file (primary)
+/// and stderr (secondary) simultaneously.
+pub(crate) struct TeeWriter<W> {
+ primary: W,
+ secondary: tokio::io::Stderr,
+}
+
+impl<W: AsyncWrite + Unpin> TeeWriter<W> {
+ pub(crate) const fn new(primary: W, secondary: tokio::io::Stderr) -> Self {
+ Self { primary, secondary }
+ }
+}
+
+impl<W: AsyncWrite + Unpin> AsyncWrite for TeeWriter<W> {
+ fn poll_write(
+ mut self: std::pin::Pin<&mut Self>,
+ cx: &mut std::task::Context<'_>,
+ buf: &[u8],
+ ) -> std::task::Poll<std::io::Result<usize>> {
+ // Write to primary first
+ let poll = std::pin::Pin::new(&mut self.primary).poll_write(cx, buf);
+ if let std::task::Poll::Ready(Ok(n)) = &poll {
+ // Best-effort write to secondary (stderr) — same bytes
+ let _ = std::pin::Pin::new(&mut self.secondary).poll_write(cx, &buf[..*n]);
+ }
+ poll
+ }
+
+ fn poll_flush(
+ mut self: std::pin::Pin<&mut Self>,
+ cx: &mut std::task::Context<'_>,
+ ) -> std::task::Poll<std::io::Result<()>> {
+ let _ = std::pin::Pin::new(&mut self.secondary).poll_flush(cx);
+ std::pin::Pin::new(&mut self.primary).poll_flush(cx)
+ }
+
+ fn poll_shutdown(
+ mut self: std::pin::Pin<&mut Self>,
+ cx: &mut std::task::Context<'_>,
+ ) -> std::task::Poll<std::io::Result<()>> {
+ let _ = std::pin::Pin::new(&mut self.secondary).poll_shutdown(cx);
+ std::pin::Pin::new(&mut self.primary).poll_shutdown(cx)
+ }
+}
+
+/// Execute a containerized build for a site.
+///
+/// Stdout and stderr are streamed to the provided temporary files on disk
+/// instead of being buffered in memory, so memory usage stays bounded even
+/// when a container build produces large amounts of output.
+///
+/// # Arguments
+/// * `runtime` - Container runtime to use ("podman" or "docker")
+/// * `clone_dir` - Path to the cloned repository
+/// * `repo_config` - Build configuration from witryna.yaml
+/// * `cache_volumes` - Pairs of (`container_path`, `host_path`) for persistent cache mounts
+/// * `env` - User-defined environment variables to pass into the container via `--env`
+/// * `options` - Optional container resource limits and network mode
+/// * `stdout_file` - Temp file path for captured stdout
+/// * `stderr_file` - Temp file path for captured stderr
+/// * `timeout` - Maximum duration before killing the build
+/// * `verbose` - When true, also stream build output to stderr in real-time
+///
+/// # Errors
+///
+/// Returns an error if the container command times out, fails to execute,
+/// or exits with a non-zero status code (as a [`BuildFailure`]).
+///
+/// # Security
+/// - Uses typed arguments (no shell interpolation) per OWASP guidelines
+/// - Mounts clone directory as read-write (needed for build output)
+/// - Runs with minimal capabilities
+#[allow(clippy::implicit_hasher, clippy::too_many_arguments)]
+pub async fn execute(
+ runtime: &str,
+ clone_dir: &Path,
+ repo_config: &RepoConfig,
+ cache_volumes: &[(String, PathBuf)],
+ env: &HashMap<String, String>,
+ options: &ContainerOptions,
+ stdout_file: &Path,
+ stderr_file: &Path,
+ timeout: Duration,
+ verbose: bool,
+) -> Result<BuildResult> {
+ info!(
+ image = %repo_config.image,
+ command = %repo_config.command,
+ path = %clone_dir.display(),
+ "executing container build"
+ );
+
+ let start = Instant::now();
+
+ // Build args dynamically to support optional cache volumes
+ let mut args = vec![
+ "run".to_owned(),
+ "--rm".to_owned(),
+ "--volume".to_owned(),
+ format!("{}:/workspace:Z", clone_dir.display()),
+ ];
+
+ // Add cache volume mounts
+ for (container_path, host_path) in cache_volumes {
+ args.push("--volume".to_owned());
+ args.push(format!("{}:{}:Z", host_path.display(), container_path));
+ }
+
+ // Add user-defined environment variables
+ for (key, value) in env {
+ args.push("--env".to_owned());
+ args.push(format!("{key}={value}"));
+ }
+
+ let workdir = match &options.workdir {
+ Some(subdir) => format!("/workspace/{subdir}"),
+ None => "/workspace".to_owned(),
+ };
+ args.extend(["--workdir".to_owned(), workdir, "--cap-drop=ALL".to_owned()]);
+
+ if runtime == "podman" {
+ args.push("--userns=keep-id".to_owned());
+ } else {
+ // Docker: container runs as root but workspace is owned by host UID.
+ // DAC_OVERRIDE lets root bypass file permission checks.
+ // Podman doesn't need this because --userns=keep-id maps to the host UID.
+ args.push("--cap-add=DAC_OVERRIDE".to_owned());
+ }
+
+ // Resource limits
+ if let Some(memory) = &options.memory {
+ args.push("--memory".to_owned());
+ args.push(memory.clone());
+ }
+ if let Some(cpus) = options.cpus {
+ args.push("--cpus".to_owned());
+ args.push(cpus.to_string());
+ }
+ if let Some(pids) = options.pids_limit {
+ args.push("--pids-limit".to_owned());
+ args.push(pids.to_string());
+ }
+
+ // Network mode
+ args.push(format!("--network={}", options.network));
+
+ args.extend([
+ repo_config.image.clone(),
+ "sh".to_owned(),
+ "-c".to_owned(),
+ repo_config.command.clone(),
+ ]);
+
+ // Spawn with piped stdout/stderr for streaming (OWASP: no shell interpolation)
+ let mut child = Command::new(runtime)
+ .args(&args)
+ .kill_on_drop(true)
+ .stdout(Stdio::piped())
+ .stderr(Stdio::piped())
+ .spawn()
+ .context("failed to spawn container build")?;
+
+ let stdout_pipe = child
+ .stdout
+ .take()
+ .ok_or_else(|| anyhow::anyhow!("missing stdout pipe"))?;
+ let stderr_pipe = child
+ .stderr
+ .take()
+ .ok_or_else(|| anyhow::anyhow!("missing stderr pipe"))?;
+
+ let stdout_file_writer = BufWriter::new(
+ tokio::fs::File::create(stdout_file)
+ .await
+ .with_context(|| format!("failed to create {}", stdout_file.display()))?,
+ );
+ let stderr_file_writer = BufWriter::new(
+ tokio::fs::File::create(stderr_file)
+ .await
+ .with_context(|| format!("failed to create {}", stderr_file.display()))?,
+ );
+
+ if verbose {
+ let mut stdout_tee = TeeWriter::new(stdout_file_writer, tokio::io::stderr());
+ let mut stderr_tee = TeeWriter::new(stderr_file_writer, tokio::io::stderr());
+ run_build_process(
+ child,
+ stdout_pipe,
+ stderr_pipe,
+ &mut stdout_tee,
+ &mut stderr_tee,
+ start,
+ stdout_file,
+ stderr_file,
+ clone_dir,
+ "container",
+ timeout,
+ )
+ .await
+ } else {
+ let mut stdout_writer = stdout_file_writer;
+ let mut stderr_writer = stderr_file_writer;
+ run_build_process(
+ child,
+ stdout_pipe,
+ stderr_pipe,
+ &mut stdout_writer,
+ &mut stderr_writer,
+ start,
+ stdout_file,
+ stderr_file,
+ clone_dir,
+ "container",
+ timeout,
+ )
+ .await
+ }
+}
+
+/// Copy from reader to writer, keeping the last `tail_size` bytes in memory.
+/// Returns `(total_bytes_copied, tail_buffer)`.
+///
+/// When `tail_size` is 0, skips tail tracking entirely (used for stdout
+/// where we don't need a tail). The tail buffer is used to provide a
+/// meaningful error message in `BuildFailure::Display` without reading
+/// the entire stderr file back into memory.
+#[allow(clippy::indexing_slicing)] // buf[..n] bounded by read() return value
+pub(crate) async fn copy_with_tail<R, W>(
+ mut reader: R,
+ mut writer: W,
+ tail_size: usize,
+) -> std::io::Result<(u64, Vec<u8>)>
+where
+ R: tokio::io::AsyncRead + Unpin,
+ W: tokio::io::AsyncWrite + Unpin,
+{
+ use tokio::io::AsyncReadExt as _;
+
+ let mut buf = [0_u8; 8192];
+ let mut total: u64 = 0;
+ let mut tail: Vec<u8> = Vec::new();
+
+ loop {
+ let n = reader.read(&mut buf).await?;
+ if n == 0 {
+ break;
+ }
+ writer.write_all(&buf[..n]).await?;
+ total += n as u64;
+
+ if tail_size > 0 {
+ tail.extend_from_slice(&buf[..n]);
+ if tail.len() > tail_size {
+ let excess = tail.len() - tail_size;
+ tail.drain(..excess);
+ }
+ }
+ }
+
+ Ok((total, tail))
+}
+
+/// Shared build-process loop: stream stdout/stderr through writers, handle timeout and exit status.
+#[allow(clippy::too_many_arguments)]
+async fn run_build_process<W1, W2>(
+ mut child: tokio::process::Child,
+ stdout_pipe: tokio::process::ChildStdout,
+ stderr_pipe: tokio::process::ChildStderr,
+ stdout_writer: &mut W1,
+ stderr_writer: &mut W2,
+ start: Instant,
+ stdout_file: &Path,
+ stderr_file: &Path,
+ clone_dir: &Path,
+ label: &str,
+ timeout: Duration,
+) -> Result<BuildResult>
+where
+ W1: AsyncWrite + Unpin,
+ W2: AsyncWrite + Unpin,
+{
+ #[allow(clippy::large_futures)]
+ let Ok((stdout_res, stderr_res, wait_res)) = tokio::time::timeout(timeout, async {
+ let (stdout_res, stderr_res, wait_res) = tokio::join!(
+ copy_with_tail(stdout_pipe, &mut *stdout_writer, 0),
+ copy_with_tail(stderr_pipe, &mut *stderr_writer, STDERR_TAIL_SIZE),
+ child.wait(),
+ );
+ (stdout_res, stderr_res, wait_res)
+ })
+ .await
+ else {
+ let _ = child.kill().await;
+ anyhow::bail!("{label} build timed out after {}s", timeout.as_secs());
+ };
+
+ stdout_res.context("failed to stream stdout")?;
+ let (_, stderr_tail) = stderr_res.context("failed to stream stderr")?;
+ stdout_writer.flush().await?;
+ stderr_writer.flush().await?;
+
+ let status = wait_res.context(format!("{label} build I/O error"))?;
+ let last_stderr = String::from_utf8_lossy(&stderr_tail).into_owned();
+
+ if !status.success() {
+ let exit_code = status.code().unwrap_or(-1);
+ debug!(exit_code, "{label} build failed");
+ return Err(BuildFailure {
+ exit_code,
+ stdout_file: stdout_file.to_path_buf(),
+ stderr_file: stderr_file.to_path_buf(),
+ last_stderr,
+ duration: start.elapsed(),
+ }
+ .into());
+ }
+
+ let duration = start.elapsed();
+ debug!(path = %clone_dir.display(), ?duration, "{label} build completed");
+ Ok(BuildResult {
+ stdout_file: stdout_file.to_path_buf(),
+ stderr_file: stderr_file.to_path_buf(),
+ duration,
+ })
+}
+
+#[cfg(test)]
+#[allow(
+ clippy::unwrap_used,
+ clippy::indexing_slicing,
+ clippy::large_futures,
+ clippy::print_stderr
+)]
+mod tests {
+ use super::*;
+ use crate::test_support::{cleanup, temp_dir};
+ use tokio::fs;
+ use tokio::process::Command as TokioCommand;
+
+ /// Check if a container runtime is available and its daemon is running.
+ async fn container_runtime_available(runtime: &str) -> bool {
+ TokioCommand::new(runtime)
+ .args(["info"])
+ .stdout(Stdio::null())
+ .stderr(Stdio::null())
+ .status()
+ .await
+ .map(|s| s.success())
+ .unwrap_or(false)
+ }
+
+ /// Get the first available container runtime.
+ async fn get_runtime() -> Option<String> {
+ for runtime in &["podman", "docker"] {
+ if container_runtime_available(runtime).await {
+ return Some((*runtime).to_owned());
+ }
+ }
+ None
+ }
+
+ // --- copy_with_tail() unit tests ---
+
+ #[tokio::test]
+ async fn copy_with_tail_small_input() {
+ let input = b"hello";
+ let mut output = Vec::new();
+ let (total, tail) = copy_with_tail(&input[..], &mut output, 1024).await.unwrap();
+ assert_eq!(total, 5);
+ assert_eq!(tail, b"hello");
+ assert_eq!(output, b"hello");
+ }
+
+ #[tokio::test]
+ async fn copy_with_tail_large_input() {
+ // Input larger than tail_size — only last N bytes kept
+ let input: Vec<u8> = (0_u8..=255).cycle().take(2048).collect();
+ let mut output = Vec::new();
+ let (total, tail) = copy_with_tail(&input[..], &mut output, 512).await.unwrap();
+ assert_eq!(total, 2048);
+ assert_eq!(tail.len(), 512);
+ assert_eq!(&tail[..], &input[2048 - 512..]);
+ assert_eq!(output, input);
+ }
+
+ #[tokio::test]
+ async fn copy_with_tail_zero_tail() {
+ let input = b"data";
+ let mut output = Vec::new();
+ let (total, tail) = copy_with_tail(&input[..], &mut output, 0).await.unwrap();
+ assert_eq!(total, 4);
+ assert!(tail.is_empty());
+ assert_eq!(output, b"data");
+ }
+
+ // --- ContainerOptions workdir tests ---
+
+ #[tokio::test]
+ async fn execute_custom_workdir_runs_from_subdir() {
+ let Some(runtime) = get_runtime().await else {
+ eprintln!("Skipping test: no container runtime available");
+ return;
+ };
+
+ let temp = temp_dir("build-workdir-test").await;
+ let stdout_tmp = temp.join("stdout.tmp");
+ let stderr_tmp = temp.join("stderr.tmp");
+
+ // Create a subdirectory with a marker file
+ let subdir = temp.join("packages").join("frontend");
+ fs::create_dir_all(&subdir).await.unwrap();
+ fs::write(subdir.join("marker.txt"), "subdir-marker")
+ .await
+ .unwrap();
+
+ let repo_config = RepoConfig {
+ image: "alpine:latest".to_owned(),
+ command: "cat marker.txt".to_owned(),
+ public: "dist".to_owned(),
+ };
+
+ let options = ContainerOptions {
+ workdir: Some("packages/frontend".to_owned()),
+ ..ContainerOptions::default()
+ };
+
+ let result = execute(
+ &runtime,
+ &temp,
+ &repo_config,
+ &[],
+ &HashMap::new(),
+ &options,
+ &stdout_tmp,
+ &stderr_tmp,
+ BUILD_TIMEOUT_DEFAULT,
+ false,
+ )
+ .await;
+
+ assert!(result.is_ok(), "build should succeed: {result:?}");
+ let stdout = fs::read_to_string(&stdout_tmp).await.unwrap();
+ assert!(
+ stdout.contains("subdir-marker"),
+ "should read marker from subdir, got: {stdout}"
+ );
+
+ cleanup(&temp).await;
+ }
+
+ // --- execute() container tests (Tier 2) ---
+
+ #[tokio::test]
+ async fn execute_simple_command_success() {
+ let Some(runtime) = get_runtime().await else {
+ eprintln!("Skipping test: no container runtime available");
+ return;
+ };
+
+ let temp = temp_dir("build-test").await;
+ let stdout_tmp = temp.join("stdout.tmp");
+ let stderr_tmp = temp.join("stderr.tmp");
+
+ let repo_config = RepoConfig {
+ image: "alpine:latest".to_owned(),
+ command: "echo 'hello world'".to_owned(),
+ public: "dist".to_owned(),
+ };
+
+ let result = execute(
+ &runtime,
+ &temp,
+ &repo_config,
+ &[],
+ &HashMap::new(),
+ &ContainerOptions::default(),
+ &stdout_tmp,
+ &stderr_tmp,
+ BUILD_TIMEOUT_DEFAULT,
+ false,
+ )
+ .await;
+
+ assert!(result.is_ok(), "build should succeed: {result:?}");
+ let stdout = fs::read_to_string(&stdout_tmp).await.unwrap();
+ assert!(stdout.contains("hello world"));
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn execute_creates_output_files() {
+ let Some(runtime) = get_runtime().await else {
+ eprintln!("Skipping test: no container runtime available");
+ return;
+ };
+
+ let temp = temp_dir("build-test").await;
+ let stdout_tmp = temp.join("stdout.tmp");
+ let stderr_tmp = temp.join("stderr.tmp");
+
+ let repo_config = RepoConfig {
+ image: "alpine:latest".to_owned(),
+ command: "mkdir -p dist && echo 'content' > dist/index.html".to_owned(),
+ public: "dist".to_owned(),
+ };
+
+ let result = execute(
+ &runtime,
+ &temp,
+ &repo_config,
+ &[],
+ &HashMap::new(),
+ &ContainerOptions::default(),
+ &stdout_tmp,
+ &stderr_tmp,
+ BUILD_TIMEOUT_DEFAULT,
+ false,
+ )
+ .await;
+
+ assert!(result.is_ok(), "build should succeed: {result:?}");
+
+ // Verify output file was created
+ let output_file = temp.join("dist/index.html");
+ assert!(output_file.exists(), "output file should exist");
+
+ let content = fs::read_to_string(&output_file).await.unwrap();
+ assert!(content.contains("content"));
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn execute_failing_command_returns_error() {
+ let Some(runtime) = get_runtime().await else {
+ eprintln!("Skipping test: no container runtime available");
+ return;
+ };
+
+ let temp = temp_dir("build-test").await;
+ let stdout_tmp = temp.join("stdout.tmp");
+ let stderr_tmp = temp.join("stderr.tmp");
+
+ let repo_config = RepoConfig {
+ image: "alpine:latest".to_owned(),
+ command: "exit 1".to_owned(),
+ public: "dist".to_owned(),
+ };
+
+ let result = execute(
+ &runtime,
+ &temp,
+ &repo_config,
+ &[],
+ &HashMap::new(),
+ &ContainerOptions::default(),
+ &stdout_tmp,
+ &stderr_tmp,
+ BUILD_TIMEOUT_DEFAULT,
+ false,
+ )
+ .await;
+
+ assert!(result.is_err(), "build should fail");
+ let err = result.unwrap_err().to_string();
+ assert!(err.contains("exit code 1"));
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn execute_command_with_stderr() {
+ let Some(runtime) = get_runtime().await else {
+ eprintln!("Skipping test: no container runtime available");
+ return;
+ };
+
+ let temp = temp_dir("build-test").await;
+ let stdout_tmp = temp.join("stdout.tmp");
+ let stderr_tmp = temp.join("stderr.tmp");
+
+ let repo_config = RepoConfig {
+ image: "alpine:latest".to_owned(),
+ command: "echo 'error message' >&2 && exit 1".to_owned(),
+ public: "dist".to_owned(),
+ };
+
+ let result = execute(
+ &runtime,
+ &temp,
+ &repo_config,
+ &[],
+ &HashMap::new(),
+ &ContainerOptions::default(),
+ &stdout_tmp,
+ &stderr_tmp,
+ BUILD_TIMEOUT_DEFAULT,
+ false,
+ )
+ .await;
+
+ assert!(result.is_err(), "build should fail");
+ let err = result.unwrap_err().to_string();
+ assert!(err.contains("error message"));
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn execute_invalid_image_returns_error() {
+ let Some(runtime) = get_runtime().await else {
+ eprintln!("Skipping test: no container runtime available");
+ return;
+ };
+
+ let temp = temp_dir("build-test").await;
+ let stdout_tmp = temp.join("stdout.tmp");
+ let stderr_tmp = temp.join("stderr.tmp");
+
+ let repo_config = RepoConfig {
+ image: "nonexistent-image-xyz-12345:latest".to_owned(),
+ command: "echo hello".to_owned(),
+ public: "dist".to_owned(),
+ };
+
+ let result = execute(
+ &runtime,
+ &temp,
+ &repo_config,
+ &[],
+ &HashMap::new(),
+ &ContainerOptions::default(),
+ &stdout_tmp,
+ &stderr_tmp,
+ BUILD_TIMEOUT_DEFAULT,
+ false,
+ )
+ .await;
+
+ assert!(result.is_err(), "build should fail for invalid image");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn execute_workdir_is_correct() {
+ let Some(runtime) = get_runtime().await else {
+ eprintln!("Skipping test: no container runtime available");
+ return;
+ };
+
+ let temp = temp_dir("build-test").await;
+ let stdout_tmp = temp.join("stdout.tmp");
+ let stderr_tmp = temp.join("stderr.tmp");
+
+ // Create a file in the temp dir to verify we can see it
+ fs::write(temp.join("marker.txt"), "test-marker")
+ .await
+ .unwrap();
+
+ let repo_config = RepoConfig {
+ image: "alpine:latest".to_owned(),
+ command: "cat marker.txt".to_owned(),
+ public: "dist".to_owned(),
+ };
+
+ let result = execute(
+ &runtime,
+ &temp,
+ &repo_config,
+ &[],
+ &HashMap::new(),
+ &ContainerOptions::default(),
+ &stdout_tmp,
+ &stderr_tmp,
+ BUILD_TIMEOUT_DEFAULT,
+ false,
+ )
+ .await;
+
+ assert!(result.is_ok(), "build should succeed: {result:?}");
+ let stdout = fs::read_to_string(&stdout_tmp).await.unwrap();
+ assert!(stdout.contains("test-marker"));
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn execute_invalid_runtime_returns_error() {
+ let temp = temp_dir("build-test").await;
+ let stdout_tmp = temp.join("stdout.tmp");
+ let stderr_tmp = temp.join("stderr.tmp");
+
+ let repo_config = RepoConfig {
+ image: "alpine:latest".to_owned(),
+ command: "echo hello".to_owned(),
+ public: "dist".to_owned(),
+ };
+
+ let result = execute(
+ "nonexistent-runtime-xyz",
+ &temp,
+ &repo_config,
+ &[],
+ &HashMap::new(),
+ &ContainerOptions::default(),
+ &stdout_tmp,
+ &stderr_tmp,
+ BUILD_TIMEOUT_DEFAULT,
+ false,
+ )
+ .await;
+
+ assert!(result.is_err(), "build should fail for invalid runtime");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn execute_with_env_vars_passes_to_container() {
+ let Some(runtime) = get_runtime().await else {
+ eprintln!("Skipping test: no container runtime available");
+ return;
+ };
+
+ let temp = temp_dir("build-test").await;
+ let stdout_tmp = temp.join("stdout.tmp");
+ let stderr_tmp = temp.join("stderr.tmp");
+
+ let repo_config = RepoConfig {
+ image: "alpine:latest".to_owned(),
+ command: "printenv MY_VAR".to_owned(),
+ public: "dist".to_owned(),
+ };
+
+ let env = HashMap::from([("MY_VAR".to_owned(), "my_value".to_owned())]);
+ let result = execute(
+ &runtime,
+ &temp,
+ &repo_config,
+ &[],
+ &env,
+ &ContainerOptions::default(),
+ &stdout_tmp,
+ &stderr_tmp,
+ BUILD_TIMEOUT_DEFAULT,
+ false,
+ )
+ .await;
+
+ assert!(result.is_ok(), "build should succeed: {result:?}");
+ let stdout = fs::read_to_string(&stdout_tmp).await.unwrap();
+ assert!(
+ stdout.contains("my_value"),
+ "stdout should contain env var value, got: {stdout}",
+ );
+
+ cleanup(&temp).await;
+ }
+}
diff --git a/src/build_guard.rs b/src/build_guard.rs
new file mode 100644
index 0000000..0c7fed3
--- /dev/null
+++ b/src/build_guard.rs
@@ -0,0 +1,128 @@
+use dashmap::DashSet;
+use std::sync::Arc;
+
+/// Manages per-site build scheduling: immediate execution and a depth-1 queue.
+///
+/// When a build is already in progress, a single rebuild can be queued.
+/// Subsequent requests while a rebuild is already queued are collapsed (no-op).
+pub struct BuildScheduler {
+ pub in_progress: DashSet<String>,
+ pub queued: DashSet<String>,
+}
+
+impl BuildScheduler {
+ #[must_use]
+ pub fn new() -> Self {
+ Self {
+ in_progress: DashSet::new(),
+ queued: DashSet::new(),
+ }
+ }
+
+ /// Queue a rebuild for a site that is currently building.
+ /// Returns `true` if newly queued, `false` if already queued (collapse).
+ pub(crate) fn try_queue(&self, site_name: &str) -> bool {
+ self.queued.insert(site_name.to_owned())
+ }
+
+ /// Check and clear queued rebuild. Returns `true` if there was one.
+ pub(crate) fn take_queued(&self, site_name: &str) -> bool {
+ self.queued.remove(site_name).is_some()
+ }
+}
+
+impl Default for BuildScheduler {
+ fn default() -> Self {
+ Self::new()
+ }
+}
+
+/// RAII guard for per-site build exclusion.
+/// Inserting into the scheduler's `in_progress` set acquires the lock;
+/// dropping removes it.
+pub(crate) struct BuildGuard {
+ site_name: String,
+ scheduler: Arc<BuildScheduler>,
+}
+
+impl BuildGuard {
+ pub(crate) fn try_acquire(site_name: String, scheduler: &Arc<BuildScheduler>) -> Option<Self> {
+ if scheduler.in_progress.insert(site_name.clone()) {
+ Some(Self {
+ site_name,
+ scheduler: Arc::clone(scheduler),
+ })
+ } else {
+ None
+ }
+ }
+}
+
+impl Drop for BuildGuard {
+ fn drop(&mut self) {
+ self.scheduler.in_progress.remove(&self.site_name);
+ }
+}
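+
+// Illustrative usage sketch (hypothetical call site; the real call sites live
+// elsewhere in the crate):
+//
+//   if let Some(_guard) = BuildGuard::try_acquire(site.clone(), &scheduler) {
+//       // ... run the build while holding the guard ...
+//       if scheduler.take_queued(&site) {
+//           // a trigger arrived mid-build: schedule exactly one follow-up build
+//       }
+//   } else {
+//       // already building: collapse into a single queued rebuild
+//       scheduler.try_queue(&site);
+//   }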
+
+#[cfg(test)]
+#[allow(clippy::unwrap_used, clippy::indexing_slicing)]
+mod tests {
+ use super::*;
+
+ #[test]
+ fn build_guard_try_acquire_success() {
+ let scheduler = Arc::new(BuildScheduler::new());
+ let guard = BuildGuard::try_acquire("my-site".to_owned(), &scheduler);
+ assert!(guard.is_some());
+ assert!(scheduler.in_progress.contains("my-site"));
+ }
+
+ #[test]
+ fn build_guard_try_acquire_fails_when_held() {
+ let scheduler = Arc::new(BuildScheduler::new());
+ let _guard = BuildGuard::try_acquire("my-site".to_owned(), &scheduler);
+ let second = BuildGuard::try_acquire("my-site".to_owned(), &scheduler);
+ assert!(second.is_none());
+ }
+
+ #[test]
+ fn build_guard_drop_releases_lock() {
+ let scheduler = Arc::new(BuildScheduler::new());
+ {
+ let _guard = BuildGuard::try_acquire("my-site".to_owned(), &scheduler);
+ assert!(scheduler.in_progress.contains("my-site"));
+ }
+ // Guard dropped — lock released
+ assert!(!scheduler.in_progress.contains("my-site"));
+ let again = BuildGuard::try_acquire("my-site".to_owned(), &scheduler);
+ assert!(again.is_some());
+ }
+
+ #[test]
+ fn scheduler_try_queue_succeeds() {
+ let scheduler = BuildScheduler::new();
+ assert!(scheduler.try_queue("my-site"));
+ assert!(scheduler.queued.contains("my-site"));
+ }
+
+ #[test]
+ fn scheduler_try_queue_collapse() {
+ let scheduler = BuildScheduler::new();
+ assert!(scheduler.try_queue("my-site"));
+ assert!(!scheduler.try_queue("my-site"));
+ }
+
+ #[test]
+ fn scheduler_take_queued_clears_flag() {
+ let scheduler = BuildScheduler::new();
+ scheduler.try_queue("my-site");
+ assert!(scheduler.take_queued("my-site"));
+ assert!(!scheduler.queued.contains("my-site"));
+ }
+
+ #[test]
+ fn scheduler_take_queued_returns_false_when_empty() {
+ let scheduler = BuildScheduler::new();
+ assert!(!scheduler.take_queued("my-site"));
+ }
+}
diff --git a/src/cleanup.rs b/src/cleanup.rs
new file mode 100644
index 0000000..ced8320
--- /dev/null
+++ b/src/cleanup.rs
@@ -0,0 +1,467 @@
+use anyhow::{Context as _, Result};
+use std::path::Path;
+use tracing::{debug, info, warn};
+
+/// Result of a cleanup operation.
+#[derive(Debug, Default)]
+pub struct CleanupResult {
+ /// Number of build directories removed.
+ pub builds_removed: u32,
+ /// Number of log files removed.
+ pub logs_removed: u32,
+}
+
+/// Clean up old build directories and their corresponding log files.
+///
+/// Keeps the `max_to_keep` most recent builds and removes older ones.
+/// Also removes the corresponding log files for each removed build.
+///
+/// # Arguments
+/// * `base_dir` - Base witryna directory (e.g., /var/lib/witryna)
+/// * `log_dir` - Log directory (e.g., /var/log/witryna)
+/// * `site_name` - The site name
+/// * `max_to_keep` - Maximum number of builds to keep (0 = keep all)
+///
+/// # Errors
+///
+/// Returns an error if the builds directory cannot be listed. Individual
+/// removal failures are logged as warnings but do not cause the function
+/// to return an error.
+pub async fn cleanup_old_builds(
+ base_dir: &Path,
+ log_dir: &Path,
+ site_name: &str,
+ max_to_keep: u32,
+) -> Result<CleanupResult> {
+ // If max_to_keep is 0, keep all builds
+ if max_to_keep == 0 {
+ debug!(%site_name, "max_builds_to_keep is 0, skipping cleanup");
+ return Ok(CleanupResult::default());
+ }
+
+ let builds_dir = base_dir.join("builds").join(site_name);
+ let site_log_dir = log_dir.join(site_name);
+
+ // Check if builds directory exists
+ if !builds_dir.exists() {
+ debug!(%site_name, "builds directory does not exist, skipping cleanup");
+ return Ok(CleanupResult::default());
+ }
+
+ // List all build directories (excluding 'current' symlink)
+ let mut build_timestamps = list_build_timestamps(&builds_dir).await?;
+
+ // Sort in descending order (newest first)
+ build_timestamps.sort_by(|a, b| b.cmp(a));
+
+ let mut result = CleanupResult::default();
+
+ // Calculate how many to remove
+ let to_remove = build_timestamps.len().saturating_sub(max_to_keep as usize);
+ if to_remove == 0 {
+ debug!(%site_name, count = build_timestamps.len(), max = max_to_keep, "no builds to remove");
+ }
+
+ // Remove the oldest builds (they sit at the end of the descending sort)
+ for timestamp in build_timestamps.iter().skip(max_to_keep as usize) {
+ let build_path = builds_dir.join(timestamp);
+ let log_path = site_log_dir.join(format!("{timestamp}.log"));
+
+ // Remove build directory
+ match tokio::fs::remove_dir_all(&build_path).await {
+ Ok(()) => {
+ debug!(path = %build_path.display(), "removed old build");
+ result.builds_removed += 1;
+ }
+ Err(e) => {
+ warn!(path = %build_path.display(), error = %e, "failed to remove old build");
+ }
+ }
+
+ // Remove corresponding log file (if exists)
+ if log_path.exists() {
+ match tokio::fs::remove_file(&log_path).await {
+ Ok(()) => {
+ debug!(path = %log_path.display(), "removed old log");
+ result.logs_removed += 1;
+ }
+ Err(e) => {
+ warn!(path = %log_path.display(), error = %e, "failed to remove old log");
+ }
+ }
+ }
+
+ // Remove corresponding hook log file (if exists)
+ let hook_log_path = site_log_dir.join(format!("{timestamp}-hook.log"));
+ match tokio::fs::remove_file(&hook_log_path).await {
+ Ok(()) => {
+ debug!(path = %hook_log_path.display(), "removed old hook log");
+ result.logs_removed += 1;
+ }
+ Err(e) if e.kind() == std::io::ErrorKind::NotFound => {
+ // Not every build has a hook — silently skip
+ }
+ Err(e) => {
+ warn!(path = %hook_log_path.display(), error = %e, "failed to remove old hook log");
+ }
+ }
+ }
+
+ // Remove orphaned temp files (crash recovery)
+ if site_log_dir.exists()
+ && let Ok(mut entries) = tokio::fs::read_dir(&site_log_dir).await
+ {
+ while let Ok(Some(entry)) = entries.next_entry().await {
+ let name = entry.file_name();
+ if name.to_string_lossy().ends_with(".tmp") {
+ let path = entry.path();
+ match tokio::fs::remove_file(&path).await {
+ Ok(()) => {
+ debug!(path = %path.display(), "removed orphaned temp file");
+ }
+ Err(e) => {
+ warn!(path = %path.display(), error = %e, "failed to remove orphaned temp file");
+ }
+ }
+ }
+ }
+ }
+
+ if result.builds_removed > 0 || result.logs_removed > 0 {
+ info!(
+ %site_name,
+ builds_removed = result.builds_removed,
+ logs_removed = result.logs_removed,
+ "cleanup completed"
+ );
+ }
+
+ Ok(result)
+}
+
+/// List all build timestamps in a builds directory.
+///
+/// Returns directory names that look like timestamps, excluding 'current' symlink.
+async fn list_build_timestamps(builds_dir: &Path) -> Result<Vec<String>> {
+ let mut timestamps = Vec::new();
+
+ let mut entries = tokio::fs::read_dir(builds_dir)
+ .await
+ .with_context(|| format!("failed to read builds directory: {}", builds_dir.display()))?;
+
+ while let Some(entry) = entries.next_entry().await? {
+ let name = entry.file_name();
+ let name_str = name.to_string_lossy();
+
+ // Skip 'current' symlink and any other non-timestamp entries
+ if name_str == "current" {
+ continue;
+ }
+
+ // Verify it's a directory (not a file or broken symlink)
+ let file_type = entry.file_type().await?;
+ if !file_type.is_dir() {
+ continue;
+ }
+
+ // Basic timestamp format validation: YYYYMMDD-HHMMSS-...
+ if looks_like_timestamp(&name_str) {
+ timestamps.push(name_str.to_string());
+ }
+ }
+
+ Ok(timestamps)
+}
+
+/// Check if a string looks like a valid timestamp format.
+///
+/// Expected format: YYYYMMDD-HHMMSS-microseconds (e.g., 20260126-143000-123456)
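+///
+/// A few illustrative inputs, checked against the rules below:
+///
+/// ```ignore
+/// assert!(looks_like_timestamp("20260126-143000-123456"));
+/// assert!(!looks_like_timestamp("current"));
+/// assert!(!looks_like_timestamp("2026-01-26"));
+/// ```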
+fn looks_like_timestamp(s: &str) -> bool {
+ let parts: Vec<&str> = s.split('-').collect();
+ let [date, time, micros, ..] = parts.as_slice() else {
+ return false;
+ };
+
+ // First part should be 8 digits (YYYYMMDD)
+ if date.len() != 8 || !date.chars().all(|c| c.is_ascii_digit()) {
+ return false;
+ }
+
+ // Second part should be 6 digits (HHMMSS)
+ if time.len() != 6 || !time.chars().all(|c| c.is_ascii_digit()) {
+ return false;
+ }
+
+ // Third part should be microseconds (digits)
+ if micros.is_empty() || !micros.chars().all(|c| c.is_ascii_digit()) {
+ return false;
+ }
+
+ true
+}
+
+#[cfg(test)]
+#[allow(clippy::unwrap_used, clippy::indexing_slicing)]
+mod tests {
+ use super::*;
+ use crate::test_support::{cleanup, temp_dir};
+ use tokio::fs;
+
+ async fn create_build_and_log(base_dir: &Path, log_dir: &Path, site: &str, timestamp: &str) {
+ let build_dir = base_dir.join("builds").join(site).join(timestamp);
+ let site_log_dir = log_dir.join(site);
+ let log_file = site_log_dir.join(format!("{timestamp}.log"));
+
+ fs::create_dir_all(&build_dir).await.unwrap();
+ fs::create_dir_all(&site_log_dir).await.unwrap();
+ fs::write(&log_file, "test log content").await.unwrap();
+ fs::write(build_dir.join("index.html"), "<html></html>")
+ .await
+ .unwrap();
+ }
+
+ #[tokio::test]
+ async fn cleanup_removes_old_builds_and_logs() {
+ let base_dir = temp_dir("cleanup-test").await;
+ let log_dir = base_dir.join("logs");
+ let site = "test-site";
+
+ // Create 7 builds (keep 5, remove 2)
+ let timestamps = [
+ "20260126-100000-000001",
+ "20260126-100000-000002",
+ "20260126-100000-000003",
+ "20260126-100000-000004",
+ "20260126-100000-000005",
+ "20260126-100000-000006",
+ "20260126-100000-000007",
+ ];
+
+ for ts in &timestamps {
+ create_build_and_log(&base_dir, &log_dir, site, ts).await;
+ }
+
+ let result = cleanup_old_builds(&base_dir, &log_dir, site, 5).await;
+ assert!(result.is_ok(), "cleanup should succeed: {result:?}");
+ let result = result.unwrap();
+
+ assert_eq!(result.builds_removed, 2, "should remove 2 builds");
+ assert_eq!(result.logs_removed, 2, "should remove 2 logs");
+
+ // Verify oldest 2 are gone
+ let builds_dir = base_dir.join("builds").join(site);
+ assert!(!builds_dir.join("20260126-100000-000001").exists());
+ assert!(!builds_dir.join("20260126-100000-000002").exists());
+
+ // Verify newest 5 remain
+ assert!(builds_dir.join("20260126-100000-000003").exists());
+ assert!(builds_dir.join("20260126-100000-000007").exists());
+
+ // Verify log cleanup
+ let site_logs = log_dir.join(site);
+ assert!(!site_logs.join("20260126-100000-000001.log").exists());
+ assert!(site_logs.join("20260126-100000-000003.log").exists());
+
+ cleanup(&base_dir).await;
+ }
+
+ #[tokio::test]
+ async fn cleanup_with_fewer_builds_than_max() {
+ let base_dir = temp_dir("cleanup-test").await;
+ let log_dir = base_dir.join("logs");
+ let site = "test-site";
+
+ // Create only 3 builds (max is 5)
+ for ts in &[
+ "20260126-100000-000001",
+ "20260126-100000-000002",
+ "20260126-100000-000003",
+ ] {
+ create_build_and_log(&base_dir, &log_dir, site, ts).await;
+ }
+
+ let result = cleanup_old_builds(&base_dir, &log_dir, site, 5).await;
+ assert!(result.is_ok());
+ let result = result.unwrap();
+
+ assert_eq!(result.builds_removed, 0, "should not remove any builds");
+ assert_eq!(result.logs_removed, 0, "should not remove any logs");
+
+ // Verify all builds remain
+ let builds_dir = base_dir.join("builds").join(site);
+ assert!(builds_dir.join("20260126-100000-000001").exists());
+ assert!(builds_dir.join("20260126-100000-000002").exists());
+ assert!(builds_dir.join("20260126-100000-000003").exists());
+
+ cleanup(&base_dir).await;
+ }
+
+ #[tokio::test]
+ async fn cleanup_preserves_current_symlink() {
+ let base_dir = temp_dir("cleanup-test").await;
+ let log_dir = base_dir.join("logs");
+ let site = "test-site";
+
+ // Create builds
+ create_build_and_log(&base_dir, &log_dir, site, "20260126-100000-000001").await;
+ create_build_and_log(&base_dir, &log_dir, site, "20260126-100000-000002").await;
+ create_build_and_log(&base_dir, &log_dir, site, "20260126-100000-000003").await;
+
+ // Create 'current' symlink
+ let builds_dir = base_dir.join("builds").join(site);
+ let current = builds_dir.join("current");
+ let target = builds_dir.join("20260126-100000-000003");
+ tokio::fs::symlink(&target, &current).await.unwrap();
+
+ let result = cleanup_old_builds(&base_dir, &log_dir, site, 2).await;
+ assert!(result.is_ok());
+ let result = result.unwrap();
+
+ assert_eq!(result.builds_removed, 1, "should remove 1 build");
+
+ // Verify symlink still exists and points correctly
+ assert!(current.exists(), "current symlink should exist");
+ let link_target = fs::read_link(&current).await.unwrap();
+ assert_eq!(link_target, target);
+
+ cleanup(&base_dir).await;
+ }
+
+ #[tokio::test]
+ async fn cleanup_handles_missing_logs_gracefully() {
+ let base_dir = temp_dir("cleanup-test").await;
+ let log_dir = base_dir.join("logs");
+ let site = "test-site";
+
+ // Create builds but only some logs
+ let builds_dir = base_dir.join("builds").join(site);
+ fs::create_dir_all(builds_dir.join("20260126-100000-000001"))
+ .await
+ .unwrap();
+ fs::create_dir_all(builds_dir.join("20260126-100000-000002"))
+ .await
+ .unwrap();
+ fs::create_dir_all(builds_dir.join("20260126-100000-000003"))
+ .await
+ .unwrap();
+
+ // Only create log for one build
+ let site_logs = log_dir.join(site);
+ fs::create_dir_all(&site_logs).await.unwrap();
+ fs::write(site_logs.join("20260126-100000-000001.log"), "log")
+ .await
+ .unwrap();
+
+ let result = cleanup_old_builds(&base_dir, &log_dir, site, 2).await;
+ assert!(result.is_ok(), "should succeed even with missing logs");
+ let result = result.unwrap();
+
+ assert_eq!(result.builds_removed, 1, "should remove 1 build");
+ assert_eq!(result.logs_removed, 1, "should remove 1 log");
+
+ cleanup(&base_dir).await;
+ }
+
+ #[tokio::test]
+ async fn cleanup_with_max_zero_keeps_all() {
+ let base_dir = temp_dir("cleanup-test").await;
+ let log_dir = base_dir.join("logs");
+ let site = "test-site";
+
+ // Create builds
+ for ts in &[
+ "20260126-100000-000001",
+ "20260126-100000-000002",
+ "20260126-100000-000003",
+ ] {
+ create_build_and_log(&base_dir, &log_dir, site, ts).await;
+ }
+
+ let result = cleanup_old_builds(&base_dir, &log_dir, site, 0).await;
+ assert!(result.is_ok());
+ let result = result.unwrap();
+
+ assert_eq!(result.builds_removed, 0, "max 0 should keep all");
+ assert_eq!(result.logs_removed, 0);
+
+ // Verify all builds remain
+ let builds_dir = base_dir.join("builds").join(site);
+ assert!(builds_dir.join("20260126-100000-000001").exists());
+ assert!(builds_dir.join("20260126-100000-000002").exists());
+ assert!(builds_dir.join("20260126-100000-000003").exists());
+
+ cleanup(&base_dir).await;
+ }
+
+ #[tokio::test]
+ async fn cleanup_nonexistent_builds_dir() {
+ let base_dir = temp_dir("cleanup-test").await;
+ let site = "nonexistent-site";
+
+ let log_dir = base_dir.join("logs");
+ let result = cleanup_old_builds(&base_dir, &log_dir, site, 5).await;
+ assert!(result.is_ok(), "should succeed for nonexistent dir");
+ let result = result.unwrap();
+
+ assert_eq!(result.builds_removed, 0);
+ assert_eq!(result.logs_removed, 0);
+
+ cleanup(&base_dir).await;
+ }
+
+ #[tokio::test]
+ async fn cleanup_removes_orphaned_tmp_files() {
+ let base_dir = temp_dir("cleanup-test").await;
+ let log_dir = base_dir.join("logs");
+ let site = "test-site";
+
+ // Create a build so cleanup runs
+ create_build_and_log(&base_dir, &log_dir, site, "20260126-100000-000001").await;
+
+ // Create orphaned temp files in site log dir
+ let site_log_dir = log_dir.join(site);
+ fs::write(
+ site_log_dir.join("20260126-100000-000001-stdout.tmp"),
+ "orphan",
+ )
+ .await
+ .unwrap();
+ fs::write(
+ site_log_dir.join("20260126-100000-000001-stderr.tmp"),
+ "orphan",
+ )
+ .await
+ .unwrap();
+ fs::write(site_log_dir.join("random.tmp"), "orphan")
+ .await
+ .unwrap();
+
+ assert!(
+ site_log_dir
+ .join("20260126-100000-000001-stdout.tmp")
+ .exists()
+ );
+
+ // Run cleanup (max_to_keep=5 means no builds removed, but tmp files should go)
+ let result = cleanup_old_builds(&base_dir, &log_dir, site, 5).await;
+ assert!(result.is_ok());
+
+ // Temp files should be gone
+ assert!(
+ !site_log_dir
+ .join("20260126-100000-000001-stdout.tmp")
+ .exists()
+ );
+ assert!(
+ !site_log_dir
+ .join("20260126-100000-000001-stderr.tmp")
+ .exists()
+ );
+ assert!(!site_log_dir.join("random.tmp").exists());
+
+ // Log file should still exist
+ assert!(site_log_dir.join("20260126-100000-000001.log").exists());
+
+ cleanup(&base_dir).await;
+ }
+}
diff --git a/src/cli.rs b/src/cli.rs
new file mode 100644
index 0000000..ab191a4
--- /dev/null
+++ b/src/cli.rs
@@ -0,0 +1,134 @@
+use clap::{Parser, Subcommand};
+use std::path::PathBuf;
+
+/// Witryna - minimalist Git-based static site deployment orchestrator
+#[derive(Debug, Parser)]
+#[command(
+ name = "witryna",
+ version,
+ author,
+ about = "Minimalist Git-based static site deployment orchestrator",
+ long_about = "Minimalist Git-based static site deployment orchestrator.\n\n\
+ Witryna listens for webhook HTTP requests, pulls the corresponding Git \
+ repository (with automatic Git LFS fetch and submodule initialization), \
+ runs a user-defined build command inside an ephemeral container and \
+ publishes the resulting assets via atomic symlink switching.\n\n\
+ A health-check endpoint is available at GET /health (returns 200 OK).\n\n\
+ Witryna does not serve files, terminate TLS, or manage DNS. \
+ It is designed to sit behind a reverse proxy (Nginx, Caddy, etc.).",
+ subcommand_required = true,
+ arg_required_else_help = true
+)]
+pub struct Cli {
+ /// Path to the configuration file.
+ /// If not specified, searches: ./witryna.toml, $XDG_CONFIG_HOME/witryna/witryna.toml, /etc/witryna/witryna.toml
+ #[arg(long, global = true, value_name = "FILE")]
+ pub config: Option<PathBuf>,
+
+ #[command(subcommand)]
+ pub command: Command,
+}
+
+#[derive(Debug, Subcommand)]
+pub enum Command {
+ /// Start the deployment server (foreground)
+ Serve,
+ /// Validate configuration file and print summary
+ Validate,
+ /// Trigger a one-off build for a site (synchronous, no server)
+ Run {
+ /// Site name (as defined in witryna.toml)
+ site: String,
+ /// Stream full build output to stderr in real-time
+ #[arg(long, short)]
+ verbose: bool,
+ },
+ /// Show deployment status for configured sites
+ Status {
+ /// Show last 10 deployments for a single site
+ #[arg(long, short)]
+ site: Option<String>,
+ /// Output in JSON format
+ #[arg(long)]
+ json: bool,
+ },
+}
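+
+// Illustrative invocations of the subcommands above (flags as declared here):
+//
+//   witryna serve
+//   witryna validate --config /etc/witryna/witryna.toml
+//   witryna run my-site --verbose
+//   witryna status --site my-site --json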
+
+#[cfg(test)]
+#[allow(clippy::unwrap_used, clippy::indexing_slicing)]
+mod tests {
+ use super::*;
+
+ #[test]
+ fn run_parses_site_name() {
+ let cli = Cli::try_parse_from(["witryna", "run", "my-site"]).unwrap();
+ match cli.command {
+ Command::Run { site, verbose } => {
+ assert_eq!(site, "my-site");
+ assert!(!verbose);
+ }
+ _ => panic!("expected Run command"),
+ }
+ }
+
+ #[test]
+ fn run_parses_verbose_flag() {
+ let cli = Cli::try_parse_from(["witryna", "run", "my-site", "--verbose"]).unwrap();
+ match cli.command {
+ Command::Run { site, verbose } => {
+ assert_eq!(site, "my-site");
+ assert!(verbose);
+ }
+ _ => panic!("expected Run command"),
+ }
+ }
+
+ #[test]
+ fn status_parses_without_flags() {
+ let cli = Cli::try_parse_from(["witryna", "status"]).unwrap();
+ match cli.command {
+ Command::Status { site, json } => {
+ assert!(site.is_none());
+ assert!(!json);
+ }
+ _ => panic!("expected Status command"),
+ }
+ }
+
+ #[test]
+ fn status_parses_site_filter() {
+ let cli = Cli::try_parse_from(["witryna", "status", "--site", "my-site"]).unwrap();
+ match cli.command {
+ Command::Status { site, json } => {
+ assert_eq!(site.as_deref(), Some("my-site"));
+ assert!(!json);
+ }
+ _ => panic!("expected Status command"),
+ }
+ }
+
+ #[test]
+ fn status_parses_json_flag() {
+ let cli = Cli::try_parse_from(["witryna", "status", "--json"]).unwrap();
+ match cli.command {
+ Command::Status { site, json } => {
+ assert!(site.is_none());
+ assert!(json);
+ }
+ _ => panic!("expected Status command"),
+ }
+ }
+
+ #[test]
+ fn config_flag_is_optional() {
+ let cli = Cli::try_parse_from(["witryna", "status"]).unwrap();
+ assert!(cli.config.is_none());
+ }
+
+ #[test]
+ fn config_flag_explicit_path() {
+ let cli =
+ Cli::try_parse_from(["witryna", "--config", "/etc/witryna.toml", "status"]).unwrap();
+ assert_eq!(cli.config, Some(PathBuf::from("/etc/witryna.toml")));
+ }
+}
diff --git a/src/config.rs b/src/config.rs
new file mode 100644
index 0000000..63f3447
--- /dev/null
+++ b/src/config.rs
@@ -0,0 +1,3041 @@
+use crate::repo_config;
+use anyhow::{Context as _, Result, bail};
+use serde::{Deserialize, Deserializer};
+use std::collections::{HashMap, HashSet};
+use std::net::SocketAddr;
+use std::path::{Component, PathBuf};
+use std::time::Duration;
+use tracing::level_filters::LevelFilter;
+
+fn default_log_dir() -> PathBuf {
+ PathBuf::from("/var/log/witryna")
+}
+
+const fn default_rate_limit() -> u32 {
+ 10
+}
+
+const fn default_max_builds_to_keep() -> u32 {
+ 5
+}
+
+/// Minimum poll interval to prevent DoS.
+/// Lowered to 1 second under the `integration` feature so tests run quickly.
+#[cfg(not(feature = "integration"))]
+const MIN_POLL_INTERVAL: Duration = Duration::from_secs(60);
+#[cfg(feature = "integration")]
+const MIN_POLL_INTERVAL: Duration = Duration::from_secs(1);
+
+/// Custom deserializer for optional humantime durations (e.g. `poll_interval`, `build_timeout`).
+fn deserialize_optional_duration<'de, D>(deserializer: D) -> Result<Option<Duration>, D::Error>
+where
+ D: Deserializer<'de>,
+{
+ let opt: Option<String> = Option::deserialize(deserializer)?;
+ opt.map_or_else(
+ || Ok(None),
+ |s| {
+ humantime::parse_duration(&s)
+ .map(Some)
+ .map_err(serde::de::Error::custom)
+ },
+ )
+}
+
+#[derive(Debug, Deserialize)]
+pub struct Config {
+ pub listen_address: String,
+ pub container_runtime: String,
+ pub base_dir: PathBuf,
+ #[serde(default = "default_log_dir")]
+ pub log_dir: PathBuf,
+ pub log_level: String,
+ #[serde(default = "default_rate_limit")]
+ pub rate_limit_per_minute: u32,
+ #[serde(default = "default_max_builds_to_keep")]
+ pub max_builds_to_keep: u32,
+ /// Optional global git operation timeout (e.g., "2m", "5m").
+ /// If not set, defaults to 60 seconds.
+ #[serde(default, deserialize_with = "deserialize_optional_duration")]
+ pub git_timeout: Option<Duration>,
+ pub sites: Vec<SiteConfig>,
+}
+
+/// Optional build configuration overrides that can be specified in witryna.toml.
+/// These values take precedence over corresponding values in the repository's witryna.yaml.
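+///
+/// A sketch of a complete override in `witryna.toml` (values are illustrative;
+/// with all three set, the repository's witryna.yaml becomes optional):
+///
+/// ```toml
+/// [[sites]]
+/// name = "my-site"
+/// repo_url = "https://github.com/user/my-site.git"
+/// branch = "main"
+/// image = "node:20-alpine"
+/// command = "npm ci && npm run build"
+/// public = "dist"
+/// ```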
+#[derive(Debug, Clone, Default, Deserialize)]
+pub struct BuildOverrides {
+ /// Container image to use for building (overrides witryna.yaml)
+ pub image: Option<String>,
+ /// Command to execute inside the container (overrides witryna.yaml)
+ pub command: Option<String>,
+ /// Directory containing built static assets (overrides witryna.yaml)
+ pub public: Option<String>,
+}
+
+impl BuildOverrides {
+ /// Returns true if all three fields are specified.
+ /// When complete, witryna.yaml becomes optional.
+ #[must_use]
+ pub const fn is_complete(&self) -> bool {
+ self.image.is_some() && self.command.is_some() && self.public.is_some()
+ }
+}
+
+#[derive(Debug, Clone, Deserialize)]
+pub struct SiteConfig {
+ pub name: String,
+ pub repo_url: String,
+ pub branch: String,
+ #[serde(default)]
+ pub webhook_token: String,
+ /// Path to a file containing the webhook token (e.g., Docker/K8s secrets).
+ /// Mutually exclusive with `${VAR}` syntax in `webhook_token`.
+ #[serde(default)]
+ pub webhook_token_file: Option<PathBuf>,
+ /// Optional build configuration overrides from witryna.toml
+ #[serde(flatten)]
+ pub build_overrides: BuildOverrides,
+ /// Optional polling interval for automatic builds (e.g., "30m", "1h")
+ /// If not set, polling is disabled (webhook-only mode)
+ #[serde(default, deserialize_with = "deserialize_optional_duration")]
+ pub poll_interval: Option<Duration>,
+ /// Optional build timeout (e.g., "5m", "30m", "1h").
+ /// If not set, defaults to 10 minutes.
+ #[serde(default, deserialize_with = "deserialize_optional_duration")]
+ pub build_timeout: Option<Duration>,
+ /// Optional list of absolute container paths to persist as cache volumes across builds.
+ /// Each path gets a dedicated host directory under `{base_dir}/cache/{site_name}/`.
+ #[serde(default)]
+ pub cache_dirs: Option<Vec<String>>,
+ /// Optional post-deploy hook command (array form, no shell).
+ /// Runs after successful symlink switch. Non-fatal on failure.
+ #[serde(default)]
+ pub post_deploy: Option<Vec<String>>,
+ /// Optional environment variables passed to container builds and post-deploy hooks.
+ /// Keys must not use the reserved `WITRYNA_*` prefix (case-insensitive).
+ #[serde(default)]
+ pub env: Option<HashMap<String, String>>,
+ /// Container memory limit (e.g., "512m", "2g"). Passed as --memory to the container runtime.
+ #[serde(default)]
+ pub container_memory: Option<String>,
+ /// Container CPU limit (e.g., 0.5, 2.0). Passed as --cpus to the container runtime.
+ #[serde(default)]
+ pub container_cpus: Option<f64>,
+ /// Container PID limit (e.g., 100). Passed as --pids-limit to the container runtime.
+ #[serde(default)]
+ pub container_pids_limit: Option<u32>,
+ /// Container network mode. Defaults to "bridge" for compatibility.
+ /// Set to "none" for maximum isolation (builds that don't need network).
+ #[serde(default = "default_container_network")]
+ pub container_network: String,
+ /// Git clone depth. Default 1 (shallow). Set to 0 for full clone.
+ #[serde(default)]
+ pub git_depth: Option<u32>,
+ /// Container working directory relative to repo root (e.g., "packages/frontend").
+ /// Translates to --workdir /workspace/{path}. Defaults to repo root (/workspace).
+ #[serde(default)]
+ pub container_workdir: Option<String>,
+ /// Path to a custom build config file in the repository (e.g., ".witryna.yaml",
+ /// "build/config.yml"). Relative to repo root. If not set, witryna searches:
+ /// .witryna.yaml -> .witryna.yml -> witryna.yaml -> witryna.yml
+ #[serde(default)]
+ pub config_file: Option<String>,
+}
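+
+// Illustrative witryna.toml excerpt exercising the optional knobs above
+// (values are examples chosen to pass validation, not defaults):
+//
+//   [[sites]]
+//   name = "my-site"
+//   repo_url = "https://github.com/user/my-site.git"
+//   branch = "main"
+//   poll_interval = "30m"
+//   build_timeout = "5m"
+//   cache_dirs = ["/root/.npm"]
+//   container_memory = "512m"
+//   container_cpus = 1.0
+//   container_network = "none"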
+
+fn default_container_network() -> String {
+ "bridge".to_owned()
+}
+
+/// Check if a string is a valid environment variable name.
+/// Must start with ASCII uppercase letter or underscore, and contain only
+/// ASCII uppercase letters, digits, and underscores.
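+///
+/// A few examples, checked against the rules above:
+///
+/// ```ignore
+/// assert!(is_valid_env_var_name("NODE_ENV"));
+/// assert!(is_valid_env_var_name("_PRIVATE"));
+/// assert!(!is_valid_env_var_name("node_env"));
+/// assert!(!is_valid_env_var_name("1VAR"));
+/// ```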
+fn is_valid_env_var_name(s: &str) -> bool {
+ !s.is_empty()
+ && s.bytes()
+ .next()
+ .is_some_and(|b| b.is_ascii_uppercase() || b == b'_')
+ && s.bytes()
+ .all(|b| b.is_ascii_uppercase() || b.is_ascii_digit() || b == b'_')
+}
+
+/// Discover the configuration file path.
+///
+/// Search order:
+/// 1. Explicit `--config` path (if provided)
+/// 2. `./witryna.toml` (current directory)
+/// 3. `$XDG_CONFIG_HOME/witryna/witryna.toml` (default: `~/.config/witryna/witryna.toml`)
+/// 4. `/etc/witryna/witryna.toml`
+///
+/// Returns the first path that exists, or an error with all searched locations.
+///
+/// # Errors
+///
+/// Returns an error if no configuration file is found in any of the searched
+/// locations, or if an explicit path is provided but does not exist.
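+///
+/// # Example
+///
+/// A minimal sketch of the usual wiring (the `cli` value is hypothetical):
+///
+/// ```ignore
+/// let path = discover_config(cli.config.as_deref())?;
+/// let config = Config::load(&path).await?;
+/// ```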
+pub fn discover_config(explicit: Option<&std::path::Path>) -> Result<PathBuf> {
+ if let Some(path) = explicit {
+ if path.exists() {
+ return Ok(path.to_owned());
+ }
+ bail!("config file not found: {}", path.display());
+ }
+
+ let mut candidates: Vec<PathBuf> = vec![PathBuf::from("witryna.toml")];
+ candidates.push(xdg_config_path());
+ candidates.push(PathBuf::from("/etc/witryna/witryna.toml"));
+
+ for path in &candidates {
+ if path.exists() {
+ return Ok(path.clone());
+ }
+ }
+
+ bail!(
+ "no configuration file found\n searched: {}",
+ candidates
+ .iter()
+ .map(|p| p.display().to_string())
+ .collect::<Vec<_>>()
+ .join(", ")
+ );
+}
+
+fn xdg_config_path() -> PathBuf {
+ if let Ok(xdg) = std::env::var("XDG_CONFIG_HOME") {
+ return PathBuf::from(xdg).join("witryna/witryna.toml");
+ }
+ if let Ok(home) = std::env::var("HOME") {
+ return PathBuf::from(home).join(".config/witryna/witryna.toml");
+ }
+ // No HOME set — fall back to /etc path (will be checked by the caller anyway)
+ PathBuf::from("/etc/witryna/witryna.toml")
+}
+
+impl Config {
+ /// # Errors
+ ///
+ /// Returns an error if the config file cannot be read, parsed, or fails
+ /// validation.
+ pub async fn load(path: &std::path::Path) -> Result<Self> {
+ let content = tokio::fs::read_to_string(path)
+ .await
+ .with_context(|| format!("failed to read config file: {}", path.display()))?;
+
+ let mut config: Self =
+ toml::from_str(&content).with_context(|| "failed to parse config file")?;
+
+ config.resolve_secrets().await?;
+ config.validate()?;
+
+ Ok(config)
+ }
+
+ /// Resolve secret references in webhook tokens.
+ ///
+ /// Supports two mechanisms:
+ /// - `webhook_token = "${VAR_NAME}"` — resolved from environment
+ /// - `webhook_token_file = "/path/to/file"` — read from file
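+ ///
+ /// Both forms in `witryna.toml`, with illustrative values:
+ ///
+ /// ```toml
+ /// webhook_token = "${MY_SITE_TOKEN}"            # resolved from the environment
+ /// # or, mutually exclusive with the above:
+ /// webhook_token_file = "/run/secrets/my-site-token"
+ /// ```
+ ///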
+ /// # Errors
+ ///
+ /// Returns an error if an environment variable is missing or a token file
+ /// cannot be read.
+ pub async fn resolve_secrets(&mut self) -> Result<()> {
+ for site in &mut self.sites {
+ let is_env_ref = site
+ .webhook_token
+ .strip_prefix("${")
+ .and_then(|s| s.strip_suffix('}'))
+ .filter(|name| is_valid_env_var_name(name));
+
+ if is_env_ref.is_some() && site.webhook_token_file.is_some() {
+ bail!(
+ "site '{}': webhook_token uses ${{VAR}} syntax and webhook_token_file \
+ are mutually exclusive",
+ site.name
+ );
+ }
+
+ if let Some(var_name) = is_env_ref {
+ let var_name = var_name.to_owned();
+ site.webhook_token = std::env::var(&var_name).with_context(|| {
+ format!(
+ "site '{}': environment variable '{}' not set",
+ site.name, var_name
+ )
+ })?;
+ } else if let Some(path) = &site.webhook_token_file {
+ site.webhook_token = tokio::fs::read_to_string(path)
+ .await
+ .with_context(|| {
+ format!(
+ "site '{}': failed to read webhook_token_file '{}'",
+ site.name,
+ path.display()
+ )
+ })?
+ .trim()
+ .to_owned();
+ }
+ }
+ Ok(())
+ }
+
+ fn validate(&self) -> Result<()> {
+ self.validate_listen_address()?;
+ self.validate_log_level()?;
+ self.validate_rate_limit()?;
+ self.validate_git_timeout()?;
+ self.validate_container_runtime()?;
+ self.validate_sites()?;
+ Ok(())
+ }
+
+ fn validate_git_timeout(&self) -> Result<()> {
+ if let Some(timeout) = self.git_timeout {
+ if timeout < MIN_GIT_TIMEOUT {
+ bail!(
+ "git_timeout is too short ({:?}): minimum is {}s",
+ timeout,
+ MIN_GIT_TIMEOUT.as_secs()
+ );
+ }
+ if timeout > MAX_GIT_TIMEOUT {
+ bail!("git_timeout is too long ({:?}): maximum is 1h", timeout,);
+ }
+ }
+ Ok(())
+ }
+
+ fn validate_container_runtime(&self) -> Result<()> {
+ if self.container_runtime.trim().is_empty() {
+ bail!("container_runtime must not be empty");
+ }
+ Ok(())
+ }
+
+ fn validate_listen_address(&self) -> Result<()> {
+ self.listen_address
+ .parse::<SocketAddr>()
+ .with_context(|| format!("invalid listen_address: {}", self.listen_address))?;
+ Ok(())
+ }
+
+ fn validate_log_level(&self) -> Result<()> {
+ const VALID_LEVELS: &[&str] = &["trace", "debug", "info", "warn", "error"];
+ if !VALID_LEVELS.contains(&self.log_level.to_lowercase().as_str()) {
+ bail!(
+ "invalid log_level '{}': must be one of {:?}",
+ self.log_level,
+ VALID_LEVELS
+ );
+ }
+ Ok(())
+ }
+
+ fn validate_rate_limit(&self) -> Result<()> {
+ if self.rate_limit_per_minute == 0 {
+ bail!("rate_limit_per_minute must be greater than 0");
+ }
+ Ok(())
+ }
+
+ fn validate_sites(&self) -> Result<()> {
+ let mut seen_names = HashSet::new();
+
+ for site in &self.sites {
+ site.validate()?;
+
+ if !seen_names.insert(&site.name) {
+ bail!("duplicate site name: {}", site.name);
+ }
+ }
+
+ Ok(())
+ }
+
+ #[must_use]
+ /// # Panics
+ ///
+ /// Panics if `listen_address` is not a valid socket address. This is
+ /// unreachable after successful validation.
+ #[allow(clippy::expect_used)] // value validated by validate_listen_address()
+ pub fn parsed_listen_address(&self) -> SocketAddr {
+ self.listen_address
+ .parse()
+ .expect("listen_address already validated")
+ }
+
+ #[must_use]
+ pub fn log_level_filter(&self) -> LevelFilter {
+ match self.log_level.to_lowercase().as_str() {
+ "trace" => LevelFilter::TRACE,
+ "debug" => LevelFilter::DEBUG,
+ "warn" => LevelFilter::WARN,
+ "error" => LevelFilter::ERROR,
+ // Catch-all: covers "info" and the unreachable default after validation.
+ _ => LevelFilter::INFO,
+ }
+ }
+
+ /// Find a site configuration by name.
+ #[must_use]
+ pub fn find_site(&self, name: &str) -> Option<&SiteConfig> {
+ self.sites.iter().find(|s| s.name == name)
+ }
+}
+
+/// Sanitize a container path for use as a host directory name.
+///
+/// Percent-encodes `_` → `%5F`, then replaces `/` → `%2F`, and strips the leading `%2F`.
+/// This prevents distinct container paths from mapping to the same host directory.
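+///
+/// Examples (mirroring the unit tests below):
+///
+/// ```ignore
+/// assert_eq!(sanitize_cache_dir_name("/root/.cache/pip"), "root%2F.cache%2Fpip");
+/// assert_eq!(sanitize_cache_dir_name("/home/user_name/.cache"), "home%2Fuser%5Fname%2F.cache");
+/// ```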
+#[must_use]
+pub fn sanitize_cache_dir_name(container_path: &str) -> String {
+ let encoded = container_path.replace('_', "%5F").replace('/', "%2F");
+ encoded.strip_prefix("%2F").unwrap_or(&encoded).to_owned()
+}
+
+/// Minimum allowed git timeout.
+const MIN_GIT_TIMEOUT: Duration = Duration::from_secs(5);
+
+/// Maximum allowed git timeout (1 hour).
+const MAX_GIT_TIMEOUT: Duration = Duration::from_secs(3600);
+
+/// Minimum allowed build timeout.
+const MIN_BUILD_TIMEOUT: Duration = Duration::from_secs(10);
+
+/// Maximum allowed build timeout (24 hours).
+const MAX_BUILD_TIMEOUT: Duration = Duration::from_secs(24 * 60 * 60);
+
+/// Maximum number of `cache_dirs` entries per site.
+const MAX_CACHE_DIRS: usize = 20;
+
+/// Maximum number of elements in a `post_deploy` command array.
+const MAX_POST_DEPLOY_ARGS: usize = 64;
+
+/// Maximum number of environment variables per site.
+const MAX_ENV_VARS: usize = 64;
+
+impl SiteConfig {
+ fn validate(&self) -> Result<()> {
+ self.validate_name()?;
+ self.validate_webhook_token()?;
+ self.validate_build_overrides()?;
+ self.validate_poll_interval()?;
+ self.validate_build_timeout()?;
+ self.validate_cache_dirs()?;
+ self.validate_post_deploy()?;
+ self.validate_env()?;
+ self.validate_resource_limits()?;
+ self.validate_container_network()?;
+ self.validate_container_workdir()?;
+ self.validate_config_file()?;
+ Ok(())
+ }
+
+ fn validate_poll_interval(&self) -> Result<()> {
+ if let Some(interval) = self.poll_interval
+ && interval < MIN_POLL_INTERVAL
+ {
+ bail!(
+ "poll_interval for site '{}' is too short ({:?}): minimum is 1 minute",
+ self.name,
+ interval
+ );
+ }
+ Ok(())
+ }
+
+ fn validate_build_timeout(&self) -> Result<()> {
+ if let Some(timeout) = self.build_timeout {
+ if timeout < MIN_BUILD_TIMEOUT {
+ bail!(
+ "build_timeout for site '{}' is too short ({:?}): minimum is {}s",
+ self.name,
+ timeout,
+ MIN_BUILD_TIMEOUT.as_secs()
+ );
+ }
+ if timeout > MAX_BUILD_TIMEOUT {
+ bail!(
+ "build_timeout for site '{}' is too long ({:?}): maximum is 24h",
+ self.name,
+ timeout,
+ );
+ }
+ }
+ Ok(())
+ }
+
+ fn validate_build_overrides(&self) -> Result<()> {
+ if let Some(image) = &self.build_overrides.image {
+ repo_config::validate_image(image)
+ .with_context(|| format!("site '{}': invalid image override", self.name))?;
+ }
+ if let Some(command) = &self.build_overrides.command {
+ repo_config::validate_command(command)
+ .with_context(|| format!("site '{}': invalid command override", self.name))?;
+ }
+ if let Some(public) = &self.build_overrides.public {
+ repo_config::validate_public(public)
+ .with_context(|| format!("site '{}': invalid public override", self.name))?;
+ }
+ Ok(())
+ }
+
+ fn validate_name(&self) -> Result<()> {
+ if self.name.is_empty() {
+ bail!("site name cannot be empty");
+ }
+
+ // OWASP: Validate site names contain only safe characters
+ // Allows alphanumeric characters, hyphens, and underscores
+ let is_valid = self
+ .name
+ .chars()
+ .all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_')
+ && !self.name.starts_with('-')
+ && !self.name.ends_with('-')
+ && !self.name.contains("--")
+ && !self.name.starts_with('_')
+ && !self.name.ends_with('_')
+ && !self.name.contains("__");
+
+ if !is_valid {
+ bail!(
+ "invalid site name '{}': must contain only alphanumeric characters, hyphens, and underscores, \
+ cannot start/end with hyphen or underscore, or contain consecutive hyphens or underscores",
+ self.name
+ );
+ }
+
+ // OWASP: Reject path traversal attempts
+ if self.name.contains("..") || self.name.contains('/') {
+ bail!(
+ "invalid site name '{}': path traversal characters not allowed",
+ self.name
+ );
+ }
+
+ Ok(())
+ }
+
+ fn validate_webhook_token(&self) -> Result<()> {
+ if !self.webhook_token.is_empty() {
+ if self.webhook_token.trim().is_empty() {
+ bail!(
+ "site '{}': webhook_token is whitespace-only \
+ (omit it entirely to disable authentication)",
+ self.name
+ );
+ }
+ if self.webhook_token.contains('\0') {
+ bail!("site '{}': webhook_token contains null byte", self.name);
+ }
+ }
+ Ok(())
+ }
+
+ fn validate_cache_dirs(&self) -> Result<()> {
+ let Some(dirs) = &self.cache_dirs else {
+ return Ok(());
+ };
+
+ if dirs.len() > MAX_CACHE_DIRS {
+ bail!(
+ "site '{}': too many cache_dirs ({}, max {})",
+ self.name,
+ dirs.len(),
+ MAX_CACHE_DIRS
+ );
+ }
+
+ let mut seen = HashSet::new();
+
+ for (i, raw) in dirs.iter().enumerate() {
+ if raw.is_empty() {
+ bail!("site '{}': cache_dirs[{}] is empty", self.name, i);
+ }
+
+ // Normalize through std::path::Path to resolve //, /./, trailing slashes
+ let path = std::path::Path::new(raw);
+ let normalized: PathBuf = path.components().collect();
+ let normalized_str = normalized.to_string_lossy().to_string();
+
+ // Must be absolute
+ if !normalized.is_absolute() {
+ bail!(
+ "site '{}': cache_dirs[{}] ('{}') must be an absolute path",
+ self.name,
+ i,
+ raw
+ );
+ }
+
+ // No parent directory components (path traversal)
+ if normalized.components().any(|c| c == Component::ParentDir) {
+ bail!(
+ "site '{}': cache_dirs[{}] ('{}') contains path traversal (..)",
+ self.name,
+ i,
+ raw
+ );
+ }
+
+ // No duplicates after normalization
+ if !seen.insert(normalized_str.clone()) {
+ bail!(
+ "site '{}': duplicate cache_dirs entry '{}'",
+ self.name,
+ normalized_str
+ );
+ }
+ }
+
+ Ok(())
+ }
+
+ fn validate_post_deploy(&self) -> Result<()> {
+ let Some(cmd) = &self.post_deploy else {
+ return Ok(());
+ };
+
+ if cmd.is_empty() {
+ bail!(
+ "site '{}': post_deploy must not be an empty array",
+ self.name
+ );
+ }
+
+ if cmd.len() > MAX_POST_DEPLOY_ARGS {
+ bail!(
+ "site '{}': post_deploy has too many elements ({}, max {})",
+ self.name,
+ cmd.len(),
+ MAX_POST_DEPLOY_ARGS
+ );
+ }
+
+ // First element is the executable
+ let Some(exe) = cmd.first() else {
+ // Already checked cmd.is_empty() above
+ unreachable!()
+ };
+ if exe.trim().is_empty() {
+ bail!(
+ "site '{}': post_deploy executable must not be empty",
+ self.name
+ );
+ }
+
+ // No element may contain null bytes
+ for (i, arg) in cmd.iter().enumerate() {
+ if arg.contains('\0') {
+ bail!(
+ "site '{}': post_deploy[{}] contains null byte",
+ self.name,
+ i
+ );
+ }
+ }
+
+ Ok(())
+ }
+
+ fn validate_resource_limits(&self) -> Result<()> {
+ if let Some(memory) = &self.container_memory
+ && (!memory
+ .as_bytes()
+ .last()
+ .is_some_and(|c| matches!(c, b'k' | b'm' | b'g' | b'K' | b'M' | b'G'))
+ || memory.len() < 2
+ || !memory[..memory.len() - 1]
+ .chars()
+ .all(|c| c.is_ascii_digit()))
+ {
+ bail!(
+ "site '{}': invalid container_memory '{}': must be digits followed by k, m, or g",
+ self.name,
+ memory
+ );
+ }
+
+ if let Some(cpus) = self.container_cpus
+ && cpus <= 0.0
+ {
+ bail!(
+ "site '{}': container_cpus must be greater than 0.0",
+ self.name
+ );
+ }
+
+ if let Some(pids) = self.container_pids_limit
+ && pids == 0
+ {
+ bail!(
+ "site '{}': container_pids_limit must be greater than 0",
+ self.name
+ );
+ }
+
+ Ok(())
+ }
+
+ fn validate_container_network(&self) -> Result<()> {
+ const ALLOWED: &[&str] = &["none", "bridge", "host", "slirp4netns"];
+ if !ALLOWED.contains(&self.container_network.as_str()) {
+ bail!(
+ "site '{}': invalid container_network '{}': must be one of {:?}",
+ self.name,
+ self.container_network,
+ ALLOWED
+ );
+ }
+ Ok(())
+ }
+
+ fn validate_container_workdir(&self) -> Result<()> {
+ let Some(workdir) = &self.container_workdir else {
+ return Ok(());
+ };
+ if workdir.trim().is_empty() {
+ bail!("site '{}': container_workdir cannot be empty", self.name);
+ }
+ if workdir.starts_with('/') {
+ bail!(
+ "site '{}': container_workdir '{}' must be a relative path",
+ self.name,
+ workdir
+ );
+ }
+ if std::path::Path::new(workdir)
+ .components()
+ .any(|c| c == Component::ParentDir)
+ {
+ bail!(
+ "site '{}': container_workdir '{}': path traversal not allowed",
+ self.name,
+ workdir
+ );
+ }
+ Ok(())
+ }
+
+ fn validate_config_file(&self) -> Result<()> {
+ let Some(cf) = &self.config_file else {
+ return Ok(());
+ };
+ if cf.trim().is_empty() {
+ bail!("site '{}': config_file cannot be empty", self.name);
+ }
+ if cf.starts_with('/') {
+ bail!(
+ "site '{}': config_file '{}' must be a relative path",
+ self.name,
+ cf
+ );
+ }
+ if std::path::Path::new(cf)
+ .components()
+ .any(|c| c == Component::ParentDir)
+ {
+ bail!(
+ "site '{}': config_file '{}': path traversal not allowed",
+ self.name,
+ cf
+ );
+ }
+ Ok(())
+ }
+
+ fn validate_env(&self) -> Result<()> {
+ let Some(env_vars) = &self.env else {
+ return Ok(());
+ };
+
+ if env_vars.len() > MAX_ENV_VARS {
+ bail!(
+ "site '{}': too many env vars ({}, max {})",
+ self.name,
+ env_vars.len(),
+ MAX_ENV_VARS
+ );
+ }
+
+ for (key, value) in env_vars {
+ if key.is_empty() {
+ bail!("site '{}': env var key cannot be empty", self.name);
+ }
+
+ if key.contains('=') {
+ bail!(
+ "site '{}': env var key '{}' contains '=' character",
+ self.name,
+ key
+ );
+ }
+
+ // Case-insensitive check: block witryna_, Witryna_, WITRYNA_, etc.
+ if key.to_ascii_uppercase().starts_with("WITRYNA_") {
+ bail!(
+ "site '{}': env var key '{}' uses reserved prefix 'WITRYNA_'",
+ self.name,
+ key
+ );
+ }
+
+ if key.contains('\0') {
+ bail!(
+ "site '{}': env var key '{}' contains null byte",
+ self.name,
+ key
+ );
+ }
+
+ if value.contains('\0') {
+ bail!(
+ "site '{}': env var value for '{}' contains null byte",
+ self.name,
+ key
+ );
+ }
+ }
+
+ Ok(())
+ }
+}
+
+#[cfg(test)]
+#[allow(clippy::unwrap_used, clippy::indexing_slicing, clippy::expect_used)]
+mod tests {
+ use super::*;
+
+ fn valid_config_toml() -> &'static str {
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/my-site.git"
+branch = "main"
+webhook_token = "secret-token-123"
+"#
+ }
+
+ #[test]
+ fn parse_valid_config() {
+ let config: Config = toml::from_str(valid_config_toml()).unwrap();
+
+ assert_eq!(config.listen_address, "127.0.0.1:8080");
+ assert_eq!(config.sites.len(), 1);
+ assert_eq!(config.sites[0].name, "my-site");
+ }
+
+ #[test]
+ fn parse_multiple_sites() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "site-one"
+repo_url = "https://github.com/user/site-one.git"
+branch = "main"
+webhook_token = "token-1"
+
+[[sites]]
+name = "site-two"
+repo_url = "https://github.com/user/site-two.git"
+branch = "develop"
+webhook_token = "token-2"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(config.sites.len(), 2);
+ }
+
+ #[test]
+ fn missing_required_field() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+"#;
+ let result: Result<Config, _> = toml::from_str(toml);
+ assert!(result.is_err());
+ }
+
+ #[test]
+ fn invalid_listen_address() {
+ let toml = r#"
+listen_address = "not-a-valid-address"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+sites = []
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("listen_address"));
+ }
+
+ #[test]
+ fn invalid_log_level() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "invalid"
+sites = []
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("log_level"));
+ }
+
+ #[test]
+ fn valid_log_levels() {
+ for level in &["trace", "debug", "info", "warn", "error", "INFO", "Debug"] {
+ let toml = format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "{level}"
+sites = []
+"#
+ );
+ let config: Config = toml::from_str(&toml).unwrap();
+ assert!(
+ config.validate().is_ok(),
+ "log_level '{level}' should be valid"
+ );
+ }
+ }
+
+ #[test]
+ fn zero_rate_limit_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+rate_limit_per_minute = 0
+sites = []
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(
+ result
+ .unwrap_err()
+ .to_string()
+ .contains("rate_limit_per_minute")
+ );
+ }
+
+ #[test]
+ fn duplicate_site_names() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "duplicate"
+repo_url = "https://github.com/user/site1.git"
+branch = "main"
+webhook_token = "token-1"
+
+[[sites]]
+name = "duplicate"
+repo_url = "https://github.com/user/site2.git"
+branch = "main"
+webhook_token = "token-2"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("duplicate"));
+ }
+
+ #[test]
+ fn invalid_site_name_with_path_traversal() {
+ let invalid_names = vec!["../etc", "foo/../bar", "..site", "site..", "foo/bar"];
+
+ for name in invalid_names {
+ let toml = format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "{name}"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+"#
+ );
+ let config: Config = toml::from_str(&toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err(), "site name '{name}' should be invalid");
+ }
+ }
+
+ #[test]
+ fn invalid_site_name_special_chars() {
+ let invalid_names = vec![
+ "site@name",
+ "site name",
+ "-start",
+ "end-",
+ "a--b",
+ "_start",
+ "end_",
+ "a__b",
+ ];
+
+ for name in invalid_names {
+ let toml = format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "{name}"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+"#
+ );
+ let config: Config = toml::from_str(&toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err(), "site name '{name}' should be invalid");
+ }
+ }
+
+ #[test]
+ fn valid_site_names() {
+ let valid_names = vec![
+ "site",
+ "my-site",
+ "site123",
+ "123site",
+ "a-b-c",
+ "site-1-test",
+ "site_name",
+ "my_site",
+ "my-site_v2",
+ "a_b-c",
+ ];
+
+ for name in valid_names {
+ let toml = format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "{name}"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+"#
+ );
+ let config: Config = toml::from_str(&toml).unwrap();
+ assert!(
+ config.validate().is_ok(),
+ "site name '{name}' should be valid"
+ );
+ }
+ }
+
+ #[test]
+ fn empty_site_name() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = ""
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("empty"));
+ }
+
+ // BuildOverrides tests
+
+ #[test]
+ fn parse_site_with_image_override() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/my-site.git"
+branch = "main"
+webhook_token = "token"
+image = "node:20-alpine"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(
+ config.sites[0].build_overrides.image,
+ Some("node:20-alpine".to_owned())
+ );
+ }
+
+ #[test]
+ fn parse_site_with_all_overrides() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/my-site.git"
+branch = "main"
+webhook_token = "token"
+image = "node:20-alpine"
+command = "npm ci && npm run build"
+public = "dist"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert!(config.sites[0].build_overrides.is_complete());
+ assert_eq!(
+ config.sites[0].build_overrides.image,
+ Some("node:20-alpine".to_owned())
+ );
+ assert_eq!(
+ config.sites[0].build_overrides.command,
+ Some("npm ci && npm run build".to_owned())
+ );
+ assert_eq!(
+ config.sites[0].build_overrides.public,
+ Some("dist".to_owned())
+ );
+ }
+
+ #[test]
+ fn invalid_image_override_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/my-site.git"
+branch = "main"
+webhook_token = "token"
+image = " "
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("image"));
+ }
+
+ #[test]
+ fn invalid_command_override_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/my-site.git"
+branch = "main"
+webhook_token = "token"
+command = " "
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("command"));
+ }
+
+ #[test]
+ fn invalid_public_override_path_traversal() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/my-site.git"
+branch = "main"
+webhook_token = "token"
+public = "../etc"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ let err = result.unwrap_err();
+ // Use alternate format {:#} to get full error chain
+ let err_str = format!("{err:#}");
+ assert!(
+ err_str.contains("path traversal"),
+ "Expected 'path traversal' in error: {err_str}"
+ );
+ }
+
+ #[test]
+ fn invalid_public_override_absolute_path() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/my-site.git"
+branch = "main"
+webhook_token = "token"
+public = "/var/www/html"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ let err = result.unwrap_err();
+ // Use alternate format {:#} to get full error chain
+ let err_str = format!("{err:#}");
+ assert!(
+ err_str.contains("relative path"),
+ "Expected 'relative path' in error: {err_str}"
+ );
+ }
+
+ // Poll interval tests
+
+ #[test]
+ fn parse_site_with_poll_interval() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "polled-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+poll_interval = "30m"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(
+ config.sites[0].poll_interval,
+ Some(Duration::from_secs(30 * 60))
+ );
+ }
+
+ #[test]
+ #[cfg(not(feature = "integration"))]
+ fn poll_interval_too_short_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "test-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+poll_interval = "30s"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("too short"));
+ }
+
+ #[test]
+ fn poll_interval_invalid_format_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "test-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+poll_interval = "invalid"
+"#;
+ let result: Result<Config, _> = toml::from_str(toml);
+ assert!(result.is_err());
+ }
+
+ // Git timeout tests
+
+ #[test]
+ fn parse_config_with_git_timeout() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+git_timeout = "2m"
+sites = []
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(config.git_timeout, Some(Duration::from_secs(120)));
+ }
+
+ #[test]
+ fn git_timeout_not_set_defaults_to_none() {
+ let config: Config = toml::from_str(valid_config_toml()).unwrap();
+ config.validate().unwrap();
+ assert!(config.git_timeout.is_none());
+ }
+
+ #[test]
+ fn git_timeout_too_short_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+git_timeout = "3s"
+sites = []
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("too short"));
+ }
+
+ #[test]
+ fn git_timeout_too_long_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+git_timeout = "2h"
+sites = []
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("too long"));
+ }
+
+ #[test]
+ fn git_timeout_boundary_values_accepted() {
+ // 5 seconds (minimum)
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+git_timeout = "5s"
+sites = []
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(config.git_timeout, Some(Duration::from_secs(5)));
+
+ // 1 hour (maximum)
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+git_timeout = "1h"
+sites = []
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(config.git_timeout, Some(Duration::from_secs(3600)));
+ }
+
+ #[test]
+ fn git_timeout_invalid_format_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+git_timeout = "invalid"
+sites = []
+"#;
+ let result: Result<Config, _> = toml::from_str(toml);
+ assert!(result.is_err());
+ }
+
+ // Build timeout tests
+
+ #[test]
+ fn parse_site_with_build_timeout() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+build_timeout = "5m"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(
+ config.sites[0].build_timeout,
+ Some(Duration::from_secs(300))
+ );
+ }
+
+ #[test]
+ fn build_timeout_not_set_defaults_to_none() {
+ let config: Config = toml::from_str(valid_config_toml()).unwrap();
+ config.validate().unwrap();
+ assert!(config.sites[0].build_timeout.is_none());
+ }
+
+ #[test]
+ fn build_timeout_too_short_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+build_timeout = "5s"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("too short"));
+ }
+
+ #[test]
+ fn build_timeout_too_long_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+build_timeout = "25h"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("too long"));
+ }
+
+ #[test]
+ fn build_timeout_invalid_format_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+build_timeout = "invalid"
+"#;
+ let result: Result<Config, _> = toml::from_str(toml);
+ assert!(result.is_err());
+ }
+
+ #[test]
+ fn build_timeout_boundary_values_accepted() {
+ // 10 seconds (minimum)
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+build_timeout = "10s"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(config.sites[0].build_timeout, Some(Duration::from_secs(10)));
+
+ // 24 hours (maximum)
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+build_timeout = "24h"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(
+ config.sites[0].build_timeout,
+ Some(Duration::from_secs(24 * 60 * 60))
+ );
+ }
+
+ // Cache dirs tests
+
+ #[test]
+ fn parse_site_with_cache_dirs() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+cache_dirs = ["/root/.npm", "/root/.cache/pip"]
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(
+ config.sites[0].cache_dirs,
+ Some(vec!["/root/.npm".to_owned(), "/root/.cache/pip".to_owned()])
+ );
+ }
+
+ #[test]
+ fn cache_dirs_relative_path_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+cache_dirs = ["relative/path"]
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("absolute path"));
+ }
+
+ #[test]
+ fn cache_dirs_path_traversal_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+cache_dirs = ["/root/../etc/passwd"]
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("path traversal"));
+ }
+
+ #[test]
+ fn cache_dirs_empty_path_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+cache_dirs = [""]
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("empty"));
+ }
+
+ #[test]
+ fn cache_dirs_normalized_paths_accepted() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+cache_dirs = ["/root//.npm", "/root/./cache", "/root/pip/"]
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ }
+
+ #[test]
+ fn cache_dirs_duplicate_after_normalization_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+cache_dirs = ["/root/.npm", "/root/.npm/"]
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("duplicate"));
+ }
+
+ #[test]
+ fn sanitize_cache_dir_name_no_collisions() {
+ // /a_b/c and /a/b_c must produce different host directory names
+ let a = sanitize_cache_dir_name("/a_b/c");
+ let b = sanitize_cache_dir_name("/a/b_c");
+ assert_ne!(a, b, "sanitized names must not collide");
+ }
+
+ #[test]
+ fn sanitize_cache_dir_name_examples() {
+ assert_eq!(sanitize_cache_dir_name("/root/.npm"), "root%2F.npm");
+ assert_eq!(
+ sanitize_cache_dir_name("/root/.cache/pip"),
+ "root%2F.cache%2Fpip"
+ );
+ }
+
+ #[test]
+ fn sanitize_cache_dir_name_with_underscores() {
+ assert_eq!(
+ sanitize_cache_dir_name("/home/user_name/.cache"),
+ "home%2Fuser%5Fname%2F.cache"
+ );
+ }
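+
+ // Illustrative sketch of the encoding the three tests above pin down
+ // (an assumption for illustration, independent of the real
+ // sanitize_cache_dir_name): strip the leading '/' and percent-encode
+ // '_' as %5F and '/' as %2F so distinct container paths map to
+ // distinct host directory names.
+ #[test]
+ fn sanitize_cache_dir_name_encoding_sketch() {
+ fn sketch(path: &str) -> String {
+ path.trim_start_matches('/')
+ .replace('_', "%5F")
+ .replace('/', "%2F")
+ }
+ assert_eq!(sketch("/root/.npm"), "root%2F.npm");
+ assert_eq!(
+ sketch("/home/user_name/.cache"),
+ "home%2Fuser%5Fname%2F.cache"
+ );
+ }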
+
+ // Post-deploy hook tests
+
+ #[test]
+ fn post_deploy_valid() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+post_deploy = ["cmd", "arg1", "arg2"]
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(
+ config.sites[0].post_deploy,
+ Some(vec!["cmd".to_owned(), "arg1".to_owned(), "arg2".to_owned()])
+ );
+ }
+
+ #[test]
+ fn post_deploy_empty_array_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+post_deploy = []
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("empty array"));
+ }
+
+ #[test]
+ fn post_deploy_empty_executable_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+post_deploy = [""]
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("executable"));
+ }
+
+ #[test]
+ fn post_deploy_whitespace_executable_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+post_deploy = [" "]
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("executable"));
+ }
+
+ #[test]
+ fn empty_webhook_token_disables_auth() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://example.com/repo.git"
+branch = "main"
+webhook_token = ""
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ assert!(config.validate().is_ok());
+ assert!(config.sites[0].webhook_token.is_empty());
+ }
+
+ #[test]
+ fn absent_webhook_token_disables_auth() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://example.com/repo.git"
+branch = "main"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ assert!(config.validate().is_ok());
+ assert!(config.sites[0].webhook_token.is_empty());
+ }
+
+ #[test]
+ fn whitespace_only_webhook_token_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://example.com/repo.git"
+branch = "main"
+webhook_token = " "
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("whitespace-only"));
+ }
+
+ #[test]
+ fn webhook_token_with_null_byte_rejected() {
+ let site = SiteConfig {
+ name: "my-site".to_owned(),
+ repo_url: "https://example.com/repo.git".to_owned(),
+ branch: "main".to_owned(),
+ webhook_token: "tok\0en".to_owned(),
+ webhook_token_file: None,
+ build_overrides: BuildOverrides {
+ image: None,
+ command: None,
+ public: None,
+ },
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: None,
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: default_container_network(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ };
+ let result = site.validate_webhook_token();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("null byte"));
+ }
+
+ #[test]
+ fn valid_webhook_token_accepted() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://example.com/repo.git"
+branch = "main"
+webhook_token = "secret-token"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ assert!(config.validate().is_ok());
+ }
+
+ #[test]
+ fn whitespace_padded_webhook_token_accepted() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://example.com/repo.git"
+branch = "main"
+webhook_token = " secret-token "
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ assert!(config.validate().is_ok());
+ }
+
+ // Env var tests
+
+ #[test]
+ fn env_vars_valid() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+
+[sites.env]
+DEPLOY_TOKEN = "abc123"
+NODE_ENV = "production"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ let env = config.sites[0].env.as_ref().unwrap();
+ assert_eq!(env.get("DEPLOY_TOKEN").unwrap(), "abc123");
+ assert_eq!(env.get("NODE_ENV").unwrap(), "production");
+ }
+
+ #[test]
+ fn env_vars_none_accepted() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert!(config.sites[0].env.is_none());
+ }
+
+ #[test]
+ fn env_vars_empty_key_rejected() {
+ let site = SiteConfig {
+ name: "my-site".to_owned(),
+ repo_url: "https://example.com/repo.git".to_owned(),
+ branch: "main".to_owned(),
+ webhook_token: "token".to_owned(),
+ webhook_token_file: None,
+ build_overrides: BuildOverrides::default(),
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: Some(HashMap::from([(String::new(), "val".to_owned())])),
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: default_container_network(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ };
+ let result = site.validate();
+ assert!(result.is_err());
+ assert!(
+ result
+ .unwrap_err()
+ .to_string()
+ .contains("key cannot be empty")
+ );
+ }
+
+ #[test]
+ fn env_vars_equals_in_key_rejected() {
+ let site = SiteConfig {
+ name: "my-site".to_owned(),
+ repo_url: "https://example.com/repo.git".to_owned(),
+ branch: "main".to_owned(),
+ webhook_token: "token".to_owned(),
+ webhook_token_file: None,
+ build_overrides: BuildOverrides::default(),
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: Some(HashMap::from([("FOO=BAR".to_owned(), "val".to_owned())])),
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: default_container_network(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ };
+ let result = site.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("contains '='"));
+ }
+
+ #[test]
+ fn env_vars_null_byte_in_key_rejected() {
+ let site = SiteConfig {
+ name: "my-site".to_owned(),
+ repo_url: "https://example.com/repo.git".to_owned(),
+ branch: "main".to_owned(),
+ webhook_token: "token".to_owned(),
+ webhook_token_file: None,
+ build_overrides: BuildOverrides::default(),
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: Some(HashMap::from([("FOO\0".to_owned(), "val".to_owned())])),
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: default_container_network(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ };
+ let result = site.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("null byte"));
+ }
+
+ #[test]
+ fn env_vars_null_byte_in_value_rejected() {
+ let site = SiteConfig {
+ name: "my-site".to_owned(),
+ repo_url: "https://example.com/repo.git".to_owned(),
+ branch: "main".to_owned(),
+ webhook_token: "token".to_owned(),
+ webhook_token_file: None,
+ build_overrides: BuildOverrides::default(),
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: Some(HashMap::from([("FOO".to_owned(), "val\0ue".to_owned())])),
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: default_container_network(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ };
+ let result = site.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("null byte"));
+ }
+
+ #[test]
+ fn env_vars_reserved_prefix_rejected() {
+ // Test case-insensitive prefix blocking
+ for key in &["WITRYNA_SITE", "witryna_foo", "Witryna_Bar"] {
+ let site = SiteConfig {
+ name: "my-site".to_owned(),
+ repo_url: "https://example.com/repo.git".to_owned(),
+ branch: "main".to_owned(),
+ webhook_token: "token".to_owned(),
+ webhook_token_file: None,
+ build_overrides: BuildOverrides::default(),
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: Some(HashMap::from([((*key).to_owned(), "val".to_owned())])),
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: default_container_network(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ };
+ let result = site.validate();
+ assert!(result.is_err(), "key '{key}' should be rejected");
+ assert!(
+ result.unwrap_err().to_string().contains("reserved prefix"),
+ "key '{key}' error should mention reserved prefix"
+ );
+ }
+ }
+
+ #[test]
+ fn env_vars_too_many_rejected() {
+ let mut env = HashMap::new();
+ for i in 0..65 {
+ env.insert(format!("VAR{i}"), format!("val{i}"));
+ }
+ let site = SiteConfig {
+ name: "my-site".to_owned(),
+ repo_url: "https://example.com/repo.git".to_owned(),
+ branch: "main".to_owned(),
+ webhook_token: "token".to_owned(),
+ webhook_token_file: None,
+ build_overrides: BuildOverrides::default(),
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: Some(env),
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: default_container_network(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ };
+ let result = site.validate();
+ assert!(result.is_err());
+ assert!(
+ result
+ .unwrap_err()
+ .to_string()
+ .contains("too many env vars")
+ );
+ }
+
+ // BuildOverrides.is_complete tests
+
+ #[test]
+ fn build_overrides_complete_requires_all_three() {
+ let full = BuildOverrides {
+ image: Some("node:20".to_owned()),
+ command: Some("npm run build".to_owned()),
+ public: Some("dist".to_owned()),
+ };
+ assert!(full.is_complete());
+
+ let no_image = BuildOverrides {
+ image: None,
+ command: Some("npm run build".to_owned()),
+ public: Some("dist".to_owned()),
+ };
+ assert!(!no_image.is_complete());
+ }
+
+ // container_runtime validation tests
+
+ #[test]
+ fn container_runtime_empty_string_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = " "
+base_dir = "/var/lib/witryna"
+log_level = "info"
+sites = []
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(
+ result
+ .unwrap_err()
+ .to_string()
+ .contains("container_runtime must not be empty")
+ );
+ }
+
+ #[test]
+ fn container_runtime_accepted() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ }
+
+ #[test]
+ fn cache_dirs_with_site_accepted() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+cache_dirs = ["/root/.npm"]
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ }
+
+ // Resource limits tests
+
+ #[test]
+ fn container_memory_valid_values() {
+ for val in &["512m", "2g", "1024k", "512M", "2G", "1024K"] {
+ let toml = format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+container_memory = "{val}"
+"#
+ );
+ let config: Config = toml::from_str(&toml).unwrap();
+ assert!(
+ config.validate().is_ok(),
+ "container_memory '{val}' should be valid"
+ );
+ }
+ }
+
+ #[test]
+ fn container_memory_invalid_values() {
+ for val in &["512", "abc", "m", "512mb", "2 g", ""] {
+ let toml = format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+container_memory = "{val}"
+"#
+ );
+ let config: Config = toml::from_str(&toml).unwrap();
+ let result = config.validate();
+ assert!(
+ result.is_err(),
+ "container_memory '{val}' should be invalid"
+ );
+ assert!(
+ result
+ .unwrap_err()
+ .to_string()
+ .contains("invalid container_memory"),
+ "error for '{val}' should mention container_memory"
+ );
+ }
+ }
+
+ #[test]
+ fn container_cpus_valid() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+container_cpus = 0.5
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(config.sites[0].container_cpus, Some(0.5));
+ }
+
+ #[test]
+ fn container_cpus_zero_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+container_cpus = 0.0
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(
+ result
+ .unwrap_err()
+ .to_string()
+ .contains("container_cpus must be greater than 0.0")
+ );
+ }
+
+ #[test]
+ fn container_cpus_negative_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+container_cpus = -1.0
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(
+ result
+ .unwrap_err()
+ .to_string()
+ .contains("container_cpus must be greater than 0.0")
+ );
+ }
+
+ #[test]
+ fn container_pids_limit_valid() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+container_pids_limit = 100
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(config.sites[0].container_pids_limit, Some(100));
+ }
+
+ #[test]
+ fn container_pids_limit_zero_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+container_pids_limit = 0
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(
+ result
+ .unwrap_err()
+ .to_string()
+ .contains("container_pids_limit must be greater than 0")
+ );
+ }
+
+ #[test]
+ fn resource_limits_not_set_by_default() {
+ let config: Config = toml::from_str(valid_config_toml()).unwrap();
+ config.validate().unwrap();
+ assert!(config.sites[0].container_memory.is_none());
+ assert!(config.sites[0].container_cpus.is_none());
+ assert!(config.sites[0].container_pids_limit.is_none());
+ }
+
+ // Container network tests
+
+ #[test]
+ fn container_network_defaults_to_bridge() {
+ let config: Config = toml::from_str(valid_config_toml()).unwrap();
+ config.validate().unwrap();
+ assert_eq!(config.sites[0].container_network, "bridge");
+ }
+
+ #[test]
+ fn container_network_valid_values() {
+ for val in &["none", "bridge", "host", "slirp4netns"] {
+ let toml = format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+container_network = "{val}"
+"#
+ );
+ let config: Config = toml::from_str(&toml).unwrap();
+ assert!(
+ config.validate().is_ok(),
+ "container_network '{val}' should be valid"
+ );
+ }
+ }
+
+ #[test]
+ fn container_network_invalid_rejected() {
+ for val in &["custom", "vpn", "", "pasta"] {
+ let toml = format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+container_network = "{val}"
+"#
+ );
+ let config: Config = toml::from_str(&toml).unwrap();
+ let result = config.validate();
+ assert!(
+ result.is_err(),
+ "container_network '{val}' should be invalid"
+ );
+ assert!(
+ result
+ .unwrap_err()
+ .to_string()
+ .contains("invalid container_network"),
+ "error for '{val}' should mention container_network"
+ );
+ }
+ }
+
+ // container_workdir tests
+
+ #[test]
+ fn container_workdir_valid_relative_paths() {
+ for path in &["packages/frontend", "apps/web", "src"] {
+ let toml = format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+container_workdir = "{path}"
+"#
+ );
+ let config: Config = toml::from_str(&toml).unwrap();
+ assert!(
+ config.validate().is_ok(),
+ "container_workdir '{path}' should be valid"
+ );
+ }
+ }
+
+ #[test]
+ fn container_workdir_defaults_to_none() {
+ let config: Config = toml::from_str(valid_config_toml()).unwrap();
+ config.validate().unwrap();
+ assert!(config.sites[0].container_workdir.is_none());
+ }
+
+ #[test]
+ fn container_workdir_absolute_path_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+container_workdir = "/packages/frontend"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("relative path"));
+ }
+
+ #[test]
+ fn container_workdir_path_traversal_rejected() {
+ for path in &["../packages", "packages/../etc"] {
+ let toml = format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+container_workdir = "{path}"
+"#
+ );
+ let config: Config = toml::from_str(&toml).unwrap();
+ let result = config.validate();
+ assert!(
+ result.is_err(),
+ "container_workdir '{path}' should be rejected"
+ );
+ assert!(
+ result.unwrap_err().to_string().contains("path traversal"),
+ "error for '{path}' should mention path traversal"
+ );
+ }
+ }
+
+ #[test]
+ fn container_workdir_empty_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+container_workdir = ""
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("cannot be empty"));
+ }
+
+ // git_depth tests
+
+ #[test]
+ fn parse_config_with_git_depth() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+git_depth = 10
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(config.sites[0].git_depth, Some(10));
+ }
+
+ #[test]
+ fn git_depth_not_set_defaults_to_none() {
+ let config: Config = toml::from_str(valid_config_toml()).unwrap();
+ config.validate().unwrap();
+ assert!(config.sites[0].git_depth.is_none());
+ }
+
+ #[test]
+ fn git_depth_zero_accepted() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+git_depth = 0
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(config.sites[0].git_depth, Some(0));
+ }
+
+ #[test]
+ fn git_depth_large_value_accepted() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+git_depth = 1000
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ config.validate().unwrap();
+ assert_eq!(config.sites[0].git_depth, Some(1000));
+ }
+
+ // is_valid_env_var_name tests
+
+ #[test]
+ fn valid_env_var_names() {
+ assert!(is_valid_env_var_name("FOO"));
+ assert!(is_valid_env_var_name("_FOO"));
+ assert!(is_valid_env_var_name("FOO_BAR"));
+ assert!(is_valid_env_var_name("WITRYNA_TOKEN"));
+ assert!(is_valid_env_var_name("A1"));
+ }
+
+ #[test]
+ fn invalid_env_var_names() {
+ assert!(!is_valid_env_var_name(""));
+ assert!(!is_valid_env_var_name("foo"));
+ assert!(!is_valid_env_var_name("1FOO"));
+ assert!(!is_valid_env_var_name("FOO-BAR"));
+ assert!(!is_valid_env_var_name("FOO BAR"));
+ assert!(!is_valid_env_var_name("foo_bar"));
+ }
+
+ // resolve_secrets tests
+
+ #[tokio::test]
+ async fn resolve_secrets_env_var() {
+ // Use a unique env var name to avoid test races
+ let var_name = "WITRYNA_TEST_TOKEN_RESOLVE_01";
+ // SAFETY: test-only, single-threaded tokio test
+ unsafe { std::env::set_var(var_name, "resolved-secret") };
+
+ let mut config: Config = toml::from_str(&format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://example.com/repo.git"
+branch = "main"
+webhook_token = "${{{var_name}}}"
+"#
+ ))
+ .unwrap();
+
+ config.resolve_secrets().await.unwrap();
+ assert_eq!(config.sites[0].webhook_token, "resolved-secret");
+
+ // SAFETY: test-only cleanup
+ unsafe { std::env::remove_var(var_name) };
+ }
+
+ #[tokio::test]
+ async fn resolve_secrets_env_var_missing() {
+ let var_name = "WITRYNA_TEST_TOKEN_RESOLVE_02_MISSING";
+ // SAFETY: test-only, single-threaded tokio test
+ unsafe { std::env::remove_var(var_name) };
+
+ let mut config: Config = toml::from_str(&format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://example.com/repo.git"
+branch = "main"
+webhook_token = "${{{var_name}}}"
+"#
+ ))
+ .unwrap();
+
+ let result = config.resolve_secrets().await;
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("not set"));
+ }
+
+ #[tokio::test]
+ async fn resolve_secrets_file() {
+ let dir = tempfile::tempdir().unwrap();
+ let token_path = dir.path().join("token");
+ std::fs::write(&token_path, " file-secret\n ").unwrap();
+
+ let mut config: Config = toml::from_str(&format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://example.com/repo.git"
+branch = "main"
+webhook_token_file = "{}"
+"#,
+ token_path.display()
+ ))
+ .unwrap();
+
+ config.resolve_secrets().await.unwrap();
+ assert_eq!(config.sites[0].webhook_token, "file-secret");
+ }
+
+ #[tokio::test]
+ async fn resolve_secrets_file_missing() {
+ let mut config: Config = toml::from_str(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://example.com/repo.git"
+branch = "main"
+webhook_token_file = "/nonexistent/path/token"
+"#,
+ )
+ .unwrap();
+
+ let result = config.resolve_secrets().await;
+ assert!(result.is_err());
+ assert!(
+ result
+ .unwrap_err()
+ .to_string()
+ .contains("failed to read webhook_token_file")
+ );
+ }
+
+ #[tokio::test]
+ async fn resolve_secrets_mutual_exclusivity() {
+ let var_name = "WITRYNA_TEST_TOKEN_RESOLVE_03";
+ // SAFETY: test-only, single-threaded tokio test
+ unsafe { std::env::set_var(var_name, "val") };
+
+ let mut config: Config = toml::from_str(&format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://example.com/repo.git"
+branch = "main"
+webhook_token = "${{{var_name}}}"
+webhook_token_file = "/run/secrets/token"
+"#
+ ))
+ .unwrap();
+
+ let result = config.resolve_secrets().await;
+ assert!(result.is_err());
+ assert!(
+ result
+ .unwrap_err()
+ .to_string()
+ .contains("mutually exclusive")
+ );
+
+ // SAFETY: test-only cleanup
+ unsafe { std::env::remove_var(var_name) };
+ }
+
+ #[test]
+ fn webhook_token_file_only_passes_validation() {
+ // When webhook_token_file is set and webhook_token is empty (default),
+ // validation should not complain about empty webhook_token
+ let site = SiteConfig {
+ name: "my-site".to_owned(),
+ repo_url: "https://example.com/repo.git".to_owned(),
+ branch: "main".to_owned(),
+ webhook_token: String::new(),
+ webhook_token_file: Some(PathBuf::from("/run/secrets/token")),
+ build_overrides: BuildOverrides::default(),
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: None,
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: default_container_network(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ };
+ // validate_webhook_token should pass because webhook_token_file is set
+ site.validate_webhook_token().unwrap();
+ }
+
+ #[test]
+ fn literal_dollar_brace_not_treated_as_env_ref() {
+ // Invalid env var names should be treated as literal tokens
+ let toml_str = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://example.com/repo.git"
+branch = "main"
+webhook_token = "${lowercase}"
+"#;
+ let config: Config = toml::from_str(toml_str).unwrap();
+ // "lowercase" is not a valid env var name (not uppercase), so treated as literal
+ assert_eq!(config.sites[0].webhook_token, "${lowercase}");
+ config.validate().unwrap();
+ }
+
+ #[test]
+ fn partial_interpolation_treated_as_literal() {
+ let toml_str = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://example.com/repo.git"
+branch = "main"
+webhook_token = "prefix-${VAR}"
+"#;
+ let config: Config = toml::from_str(toml_str).unwrap();
+ // Not a full-value ${VAR}, so treated as literal
+ assert_eq!(config.sites[0].webhook_token, "prefix-${VAR}");
+ config.validate().unwrap();
+ }
+
+ #[test]
+ fn empty_braces_treated_as_literal() {
+ let toml_str = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://example.com/repo.git"
+branch = "main"
+webhook_token = "${}"
+"#;
+ let config: Config = toml::from_str(toml_str).unwrap();
+ // "${}" has empty var name — not valid, treated as literal
+ assert_eq!(config.sites[0].webhook_token, "${}");
+ // Validation accepts it as an ordinary (if odd-looking) literal token
+ config.validate().unwrap();
+ }
+
+ // config_file validation tests
+
+ #[test]
+ fn config_file_path_traversal_rejected() {
+ for path in &["../config.yaml", "build/../etc/passwd"] {
+ let toml = format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+config_file = "{path}"
+"#
+ );
+ let config: Config = toml::from_str(&toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err(), "config_file '{path}' should be rejected");
+ assert!(result.unwrap_err().to_string().contains("path traversal"));
+ }
+ }
+
+ #[test]
+ fn config_file_absolute_path_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+config_file = "/etc/witryna.yaml"
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("relative path"));
+ }
+
+ #[test]
+ fn config_file_empty_rejected() {
+ let toml = r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+config_file = ""
+"#;
+ let config: Config = toml::from_str(toml).unwrap();
+ let result = config.validate();
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("config_file"));
+ }
+
+ #[test]
+ fn config_file_valid_paths_accepted() {
+ for path in &[".witryna.yaml", "build/config.yml", "ci/witryna.yaml"] {
+ let toml = format!(
+ r#"
+listen_address = "127.0.0.1:8080"
+container_runtime = "podman"
+base_dir = "/var/lib/witryna"
+log_level = "info"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://github.com/user/site.git"
+branch = "main"
+webhook_token = "token"
+config_file = "{path}"
+"#
+ );
+ let config: Config = toml::from_str(&toml).unwrap();
+ assert!(
+ config.validate().is_ok(),
+ "config_file '{path}' should be valid"
+ );
+ }
+ }
+
+ // discover_config tests
+
+ #[test]
+ fn discover_config_explicit_path_found() {
+ let dir = tempfile::tempdir().unwrap();
+ let path = dir.path().join("witryna.toml");
+ std::fs::write(&path, "").unwrap();
+ let result = super::discover_config(Some(&path));
+ assert!(result.is_ok());
+ assert_eq!(result.unwrap(), path);
+ }
+
+ #[test]
+ fn discover_config_explicit_path_not_found_errors() {
+ let result = super::discover_config(Some(std::path::Path::new(
+ "/tmp/nonexistent-witryna-test-12345/witryna.toml",
+ )));
+ assert!(result.is_err());
+ assert!(
+ result
+ .unwrap_err()
+ .to_string()
+ .contains("config file not found")
+ );
+ }
+
+ #[test]
+ fn xdg_config_path_returns_valid_path() {
+ // xdg_config_path uses env vars internally; just verify it returns a
+ // path ending with the expected suffix regardless of environment.
+ let result = super::xdg_config_path();
+ assert!(
+ result.ends_with("witryna/witryna.toml"),
+ "expected path ending with witryna/witryna.toml, got: {}",
+ result.display()
+ );
+ }
+}
diff --git a/src/git.rs b/src/git.rs
new file mode 100644
index 0000000..2193add
--- /dev/null
+++ b/src/git.rs
@@ -0,0 +1,1320 @@
+use anyhow::{Context as _, Result, bail};
+use std::path::Path;
+use std::time::Duration;
+use tokio::process::Command;
+use tracing::{debug, error, info, warn};
+
+/// Default timeout for git operations (used when not configured).
+pub const GIT_TIMEOUT_DEFAULT: Duration = Duration::from_secs(60);
+
+/// Default git clone depth (shallow clone with 1 commit).
+pub const GIT_DEPTH_DEFAULT: u32 = 1;
+
+/// Timeout for LFS operations (longer due to large file downloads)
+const LFS_TIMEOUT: Duration = Duration::from_secs(300);
+
+/// LFS pointer file signature (per Git LFS spec)
+const LFS_POINTER_SIGNATURE: &str = "version https://git-lfs.github.com/spec/v1";
+
+/// Maximum size for a valid LFS pointer file (per spec)
+const LFS_POINTER_MAX_SIZE: u64 = 1024;
+
+/// Create a git Command with clean environment isolation.
+///
+/// Strips `GIT_DIR`, `GIT_WORK_TREE`, and `GIT_INDEX_FILE` so that git
+/// discovers the repository from the working directory set via
+/// `.current_dir()`, not from inherited environment variables.
+///
+/// This is defensive: in production these vars are never set, but it
+/// prevents failures when tests run inside git hooks (e.g., a pre-commit
+/// hook that invokes `cargo test`).
+fn git_command() -> Command {
+ let mut cmd = Command::new("git");
+ cmd.env_remove("GIT_DIR")
+ .env_remove("GIT_WORK_TREE")
+ .env_remove("GIT_INDEX_FILE");
+ cmd
+}
+
+/// Create a git Command that allows the file:// protocol.
+///
+/// Git ≥ 2.38.1 disables file:// by default (CVE-2022-39253), but the
+/// restriction targets local-clone hardlink attacks, not file:// transport.
+/// Submodule URLs come from the trusted config, so this is safe.
+/// Used only for submodule operations whose internal clones may use file://.
+fn git_command_allow_file_transport() -> Command {
+ let mut cmd = git_command();
+ cmd.env("GIT_CONFIG_COUNT", "1")
+ .env("GIT_CONFIG_KEY_0", "protocol.file.allow")
+ .env("GIT_CONFIG_VALUE_0", "always");
+ cmd
+}
+
+/// Run a git command with timeout and standard error handling.
+///
+/// Builds a `git` `Command`, optionally sets the working directory,
+/// enforces a timeout, and converts non-zero exit into an `anyhow` error.
+async fn run_git(args: &[&str], dir: Option<&Path>, timeout: Duration, op: &str) -> Result<()> {
+ run_git_cmd(git_command(), args, dir, timeout, op).await
+}
+
+/// Like [`run_git`] but uses a pre-built `Command` (e.g. one that allows
+/// the file:// protocol for submodule clones).
+async fn run_git_cmd(
+ mut cmd: Command,
+ args: &[&str],
+ dir: Option<&Path>,
+ timeout: Duration,
+ op: &str,
+) -> Result<()> {
+ cmd.args(args);
+ if let Some(d) = dir {
+ cmd.current_dir(d);
+ }
+
+ let output = tokio::time::timeout(timeout, cmd.output())
+ .await
+ .with_context(|| format!("{op} timed out"))?
+ .with_context(|| format!("failed to execute {op}"))?;
+
+ if !output.status.success() {
+ let stderr = String::from_utf8_lossy(&output.stderr);
+ bail!("{op} failed: {}", stderr.trim());
+ }
+
+ Ok(())
+}
+
+/// Synchronize a Git repository: clone if not exists, pull if exists.
+/// Automatically initializes submodules and fetches LFS objects if needed.
+///
+/// # Errors
+///
+/// Returns an error if the clone, pull, submodule init, or LFS fetch fails.
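+///
+/// # Examples
+///
+/// A minimal, illustrative sketch (the URL and destination path are
+/// hypothetical; real callers pass per-site values from the config):
+///
+/// ```ignore
+/// sync_repo(
+///     "https://example.com/site.git",
+///     "main",
+///     Path::new("/var/lib/witryna/sites/my-site/repo"),
+///     GIT_TIMEOUT_DEFAULT,
+///     GIT_DEPTH_DEFAULT,
+/// )
+/// .await?;
+/// ```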
+pub async fn sync_repo(
+ repo_url: &str,
+ branch: &str,
+ clone_dir: &Path,
+ timeout: Duration,
+ depth: u32,
+) -> Result<()> {
+ let is_pull = clone_dir.exists();
+
+ if is_pull {
+ pull(clone_dir, branch, timeout, depth).await?;
+ } else if let Err(e) = clone(repo_url, branch, clone_dir, timeout, depth).await {
+ if clone_dir.exists() {
+ warn!(path = %clone_dir.display(), "cleaning up partial clone after failure");
+ if let Err(cleanup_err) = tokio::fs::remove_dir_all(clone_dir).await {
+ error!(path = %clone_dir.display(), error = %cleanup_err,
+ "failed to clean up partial clone");
+ }
+ }
+ return Err(e);
+ }
+
+ // Initialize submodules before LFS (submodule files may contain LFS pointers)
+ maybe_init_submodules(clone_dir, timeout, depth, is_pull).await?;
+
+ // Handle LFS after clone/pull + submodules
+ maybe_fetch_lfs(clone_dir).await?;
+
+ Ok(())
+}
+
+/// Check if the remote branch has new commits compared to local HEAD.
+/// Returns `Ok(true)` if new commits are available, `Ok(false)` if up-to-date.
+///
+/// This function:
+/// 1. Returns true if `clone_dir` doesn't exist (needs initial clone)
+/// 2. Runs `git fetch` to update remote refs (with `--depth` if depth > 0)
+/// 3. Compares local HEAD with `origin/{branch}`
+/// 4. Does NOT modify the working directory (no reset/checkout)
+///
+/// # Errors
+///
+/// Returns an error if git fetch or rev-parse fails.
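+///
+/// # Examples
+///
+/// An illustrative polling sketch (assumed usage, not lifted from the
+/// polling module): rebuild only when the remote branch has moved.
+///
+/// ```ignore
+/// if has_remote_changes(&clone_dir, "main", GIT_TIMEOUT_DEFAULT, GIT_DEPTH_DEFAULT).await? {
+///     sync_repo(&repo_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, GIT_DEPTH_DEFAULT).await?;
+/// }
+/// ```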
+pub async fn has_remote_changes(
+ clone_dir: &Path,
+ branch: &str,
+ timeout: Duration,
+ depth: u32,
+) -> Result<bool> {
+ // If clone directory doesn't exist, treat as "needs update"
+ if !clone_dir.exists() {
+ debug!(path = %clone_dir.display(), "clone directory does not exist, needs initial clone");
+ return Ok(true);
+ }
+
+ // Fetch from remote (update refs only, no working tree changes)
+ debug!(path = %clone_dir.display(), branch, "fetching remote refs");
+ let depth_str = depth.to_string();
+ let mut fetch_args = vec!["fetch"];
+ if depth > 0 {
+ fetch_args.push("--depth");
+ fetch_args.push(&depth_str);
+ }
+ fetch_args.extend_from_slice(&["origin", branch]);
+ run_git(&fetch_args, Some(clone_dir), timeout, "git fetch").await?;
+
+ // Get local HEAD commit
+ let local_head = get_commit_hash(clone_dir, "HEAD").await?;
+
+ // Get remote branch commit
+ let remote_ref = format!("origin/{branch}");
+ let remote_head = get_commit_hash(clone_dir, &remote_ref).await?;
+
+ debug!(
+ path = %clone_dir.display(),
+ local = %local_head,
+ remote = %remote_head,
+ "comparing commits"
+ );
+
+ Ok(local_head != remote_head)
+}
+
+/// Get the full commit hash for a ref (HEAD, branch name, etc.)
+async fn get_commit_hash(clone_dir: &Path, ref_name: &str) -> Result<String> {
+ let output = git_command()
+ .args(["rev-parse", ref_name])
+ .current_dir(clone_dir)
+ .output()
+ .await
+ .context("failed to execute git rev-parse")?;
+
+ if !output.status.success() {
+ let stderr = String::from_utf8_lossy(&output.stderr);
+ bail!("git rev-parse {} failed: {}", ref_name, stderr.trim());
+ }
+
+ Ok(String::from_utf8_lossy(&output.stdout).trim().to_owned())
+}
+
+async fn clone(
+ repo_url: &str,
+ branch: &str,
+ clone_dir: &Path,
+ timeout: Duration,
+ depth: u32,
+) -> Result<()> {
+ info!(repo_url, branch, path = %clone_dir.display(), "cloning repository");
+
+ // Create parent directory if needed
+ if let Some(parent) = clone_dir.parent() {
+ tokio::fs::create_dir_all(parent)
+ .await
+ .with_context(|| format!("failed to create parent directory: {}", parent.display()))?;
+ }
+
+ let clone_dir_str = clone_dir.display().to_string();
+ let depth_str = depth.to_string();
+ let mut args = vec!["clone", "--branch", branch, "--single-branch"];
+ if depth > 0 {
+ args.push("--depth");
+ args.push(&depth_str);
+ }
+ args.push(repo_url);
+ args.push(clone_dir_str.as_str());
+ run_git(&args, None, timeout, "git clone").await?;
+
+ debug!(path = %clone_dir.display(), "clone completed");
+ Ok(())
+}
+
+async fn pull(clone_dir: &Path, branch: &str, timeout: Duration, depth: u32) -> Result<()> {
+ info!(branch, path = %clone_dir.display(), "pulling latest changes");
+
+ // Fetch from origin (shallow or full depending on depth)
+ let depth_str = depth.to_string();
+ let mut fetch_args = vec!["fetch"];
+ if depth > 0 {
+ fetch_args.push("--depth");
+ fetch_args.push(&depth_str);
+ }
+ fetch_args.extend_from_slice(&["origin", branch]);
+ run_git(&fetch_args, Some(clone_dir), timeout, "git fetch").await?;
+
+ // Reset to origin/branch to discard any local changes
+ let reset_ref = format!("origin/{branch}");
+ run_git(
+ &["reset", "--hard", &reset_ref],
+ Some(clone_dir),
+ timeout,
+ "git reset",
+ )
+ .await?;
+
+ debug!(path = %clone_dir.display(), "pull completed");
+ Ok(())
+}
+
+/// Check if the repository has LFS configured via .gitattributes.
+async fn has_lfs_configured(clone_dir: &Path) -> bool {
+ let gitattributes = clone_dir.join(".gitattributes");
+
+ tokio::fs::read_to_string(&gitattributes)
+ .await
+ .is_ok_and(|content| content.contains("filter=lfs"))
+}
+
+/// Scan repository for LFS pointer files.
+/// Returns true if any tracked file matches the LFS pointer signature.
+async fn has_lfs_pointers(clone_dir: &Path) -> Result<bool> {
+ // Use git ls-files to get tracked files
+ let output = git_command()
+ .args(["ls-files", "-z"]) // -z for null-separated output
+ .current_dir(clone_dir)
+ .output()
+ .await
+ .context("failed to list git files")?;
+
+ if !output.status.success() {
+ // If ls-files fails, assume pointers might exist (conservative)
+ return Ok(true);
+ }
+
+ let files_str = String::from_utf8_lossy(&output.stdout);
+
+ for file_path in files_str.split('\0').filter(|s| !s.is_empty()) {
+ let full_path = clone_dir.join(file_path);
+
+ // Check file size first (pointer files are < 1024 bytes)
+ let Ok(metadata) = tokio::fs::metadata(&full_path).await else {
+ continue;
+ };
+ if metadata.len() >= LFS_POINTER_MAX_SIZE || !metadata.is_file() {
+ continue;
+ }
+
+ // Read and check for LFS signature
+ let Ok(content) = tokio::fs::read_to_string(&full_path).await else {
+ continue;
+ };
+ if content.starts_with(LFS_POINTER_SIGNATURE) {
+ debug!(file = %file_path, "found LFS pointer");
+ return Ok(true);
+ }
+ }
+
+ Ok(false)
+}
+
+async fn is_lfs_available() -> bool {
+ git_command()
+ .args(["lfs", "version"])
+ .output()
+ .await
+ .map(|o| o.status.success())
+ .unwrap_or(false)
+}
+
+async fn lfs_pull(clone_dir: &Path) -> Result<()> {
+ info!(path = %clone_dir.display(), "fetching LFS objects");
+
+ run_git(
+ &["lfs", "pull"],
+ Some(clone_dir),
+ LFS_TIMEOUT,
+ "git lfs pull",
+ )
+ .await?;
+
+ debug!(path = %clone_dir.display(), "LFS pull completed");
+ Ok(())
+}
+
+/// Detect and fetch LFS objects if needed.
+///
+/// Detection strategy:
+/// 1. Check .gitattributes for `filter=lfs`
+/// 2. If configured, scan for actual pointer files
+/// 3. If pointers exist, verify git-lfs is available
+/// 4. Run `git lfs pull` to fetch objects
+async fn maybe_fetch_lfs(clone_dir: &Path) -> Result<()> {
+ // Step 1: Quick check for LFS configuration
+ if !has_lfs_configured(clone_dir).await {
+ debug!(path = %clone_dir.display(), "no LFS configuration found");
+ return Ok(());
+ }
+
+ info!(path = %clone_dir.display(), "LFS configured, checking for pointers");
+
+ // Step 2: Scan for actual pointer files
+ match has_lfs_pointers(clone_dir).await {
+ Ok(true) => {
+ // Pointers found, need to fetch
+ }
+ Ok(false) => {
+ debug!(path = %clone_dir.display(), "no LFS pointers found");
+ return Ok(());
+ }
+ Err(e) => {
+ // If scan fails, try to fetch anyway (conservative approach)
+ debug!(error = %e, "LFS pointer scan failed, attempting fetch");
+ }
+ }
+
+ // Step 3: Verify git-lfs is available
+ if !is_lfs_available().await {
+ bail!("repository requires git-lfs but git-lfs is not installed");
+ }
+
+ // Step 4: Fetch LFS objects
+ lfs_pull(clone_dir).await
+}
+
+/// Check if the repository has submodules configured via .gitmodules.
+async fn has_submodules(clone_dir: &Path) -> bool {
+ let gitmodules = clone_dir.join(".gitmodules");
+ tokio::fs::read_to_string(&gitmodules)
+ .await
+ .is_ok_and(|content| !content.trim().is_empty())
+}
+
+/// Detect and initialize submodules if needed.
+///
+/// Detection: checks for `.gitmodules` (single stat call when absent).
+/// On pull: runs `git submodule sync --recursive` first to handle URL changes.
+/// Then: `git submodule update --init --recursive [--depth 1]`.
+async fn maybe_init_submodules(
+ clone_dir: &Path,
+ timeout: Duration,
+ depth: u32,
+ is_pull: bool,
+) -> Result<()> {
+ if !has_submodules(clone_dir).await {
+ debug!(path = %clone_dir.display(), "no submodules configured");
+ return Ok(());
+ }
+
+ info!(path = %clone_dir.display(), "submodules detected, initializing");
+
+ // On pull, sync URLs first (handles upstream submodule URL changes)
+ if is_pull {
+ run_git(
+ &["submodule", "sync", "--recursive"],
+ Some(clone_dir),
+ timeout,
+ "git submodule sync",
+ )
+ .await?;
+ }
+
+ // Initialize and update submodules.
+ // Uses file-transport-allowing command because `git submodule update`
+ // internally clones each submodule, and URLs may use the file:// scheme.
+ let depth_str = depth.to_string();
+ let mut args = vec!["submodule", "update", "--init", "--recursive"];
+ if depth > 0 {
+ args.push("--depth");
+ args.push(&depth_str);
+ }
+ run_git_cmd(
+ git_command_allow_file_transport(),
+ &args,
+ Some(clone_dir),
+ timeout,
+ "git submodule update",
+ )
+ .await?;
+
+ debug!(path = %clone_dir.display(), "submodule initialization completed");
+ Ok(())
+}
+
+#[cfg(test)]
+#[allow(clippy::unwrap_used, clippy::indexing_slicing, clippy::expect_used)]
+mod tests {
+ use super::*;
+ use crate::test_support::{cleanup, temp_dir};
+ use tokio::fs;
+ use tokio::process::Command;
+
+ /// Alias for `git_command_allow_file_transport()` — tests use file://
+ /// URLs for bare repos, so the file protocol must be allowed.
+ fn git_cmd() -> Command {
+ git_command_allow_file_transport()
+ }
+
+ async fn configure_test_git_user(dir: &Path) {
+ git_cmd()
+ .args(["config", "user.email", "test@test.com"])
+ .current_dir(dir)
+ .output()
+ .await
+ .unwrap();
+ git_cmd()
+ .args(["config", "user.name", "Test"])
+ .current_dir(dir)
+ .output()
+ .await
+ .unwrap();
+ }
+
+ /// Create a local bare git repository with an initial commit on the specified branch.
+ /// Returns a file:// URL that works with git clone --depth 1.
+ async fn create_local_repo(temp: &Path, branch: &str) -> String {
+ let bare_repo = temp.join("origin.git");
+ fs::create_dir_all(&bare_repo).await.unwrap();
+
+ // Initialize bare repo with explicit initial branch
+ let output = git_cmd()
+ .args(["init", "--bare", "--initial-branch", branch])
+ .current_dir(&bare_repo)
+ .output()
+ .await
+ .unwrap();
+ assert!(output.status.success(), "git init failed");
+
+ // Create a working copy to make initial commit
+ let work_dir = temp.join("work");
+ let output = git_cmd()
+ .args([
+ "clone",
+ bare_repo.to_str().unwrap(),
+ work_dir.to_str().unwrap(),
+ ])
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "git clone failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+
+ // Configure git user for commit
+ configure_test_git_user(&work_dir).await;
+
+ // Checkout the target branch (in case clone defaulted to something else)
+ let output = git_cmd()
+ .args(["checkout", "-B", branch])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "git checkout failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+
+ // Create initial commit
+ fs::write(work_dir.join("README.md"), "# Test Repo")
+ .await
+ .unwrap();
+ let output = git_cmd()
+ .args(["add", "README.md"])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ assert!(output.status.success(), "git add failed");
+
+ let output = git_cmd()
+ .args(["commit", "-m", "Initial commit"])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "git commit failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+
+ // Push to origin
+ let output = git_cmd()
+ .args(["push", "-u", "origin", branch])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "git push failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+
+ // Clean up working copy
+ let _ = fs::remove_dir_all(&work_dir).await;
+
+ // Return file:// URL so --depth works correctly
+ format!("file://{}", bare_repo.to_str().unwrap())
+ }
+
+ #[tokio::test]
+ async fn clone_creates_directory_and_clones_repo() {
+ let temp = temp_dir("git-test").await;
+ let repo_url = create_local_repo(&temp, "main").await;
+ let clone_dir = temp.join("test-repo");
+
+ let result = clone(&repo_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, 1).await;
+
+ assert!(result.is_ok(), "clone should succeed: {result:?}");
+ assert!(clone_dir.exists(), "clone directory should exist");
+ assert!(
+ clone_dir.join(".git").exists(),
+ ".git directory should exist"
+ );
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn clone_invalid_url_returns_error() {
+ let temp = temp_dir("git-test").await;
+ let clone_dir = temp.join("invalid-repo");
+
+ let result = clone(
+ "/nonexistent/path/to/repo.git",
+ "main",
+ &clone_dir,
+ GIT_TIMEOUT_DEFAULT,
+ 1,
+ )
+ .await;
+
+ assert!(result.is_err(), "clone should fail for invalid URL");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn clone_invalid_branch_returns_error() {
+ let temp = temp_dir("git-test").await;
+ let repo_url = create_local_repo(&temp, "main").await;
+ let clone_dir = temp.join("invalid-branch");
+
+ let result = clone(
+ &repo_url,
+ "nonexistent-branch-xyz",
+ &clone_dir,
+ GIT_TIMEOUT_DEFAULT,
+ 1,
+ )
+ .await;
+
+ assert!(result.is_err(), "clone should fail for invalid branch");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn pull_updates_existing_repo() {
+ let temp = temp_dir("git-test").await;
+ let repo_url = create_local_repo(&temp, "main").await;
+ let clone_dir = temp.join("pull-test");
+
+ // First clone
+ clone(&repo_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, 1)
+ .await
+ .expect("initial clone should succeed");
+
+ // Push a new commit to origin
+ let work_dir = temp.join("work-pull");
+ push_new_commit(&repo_url, &work_dir, "pulled.txt", "pulled content").await;
+
+ // Pull should fetch the new commit
+ pull(&clone_dir, "main", GIT_TIMEOUT_DEFAULT, 1)
+ .await
+ .expect("pull should succeed");
+
+ // Verify the new file appeared in the working copy
+ let pulled_file = clone_dir.join("pulled.txt");
+ assert!(pulled_file.exists(), "pulled file should exist after pull");
+ let content = fs::read_to_string(&pulled_file).await.unwrap();
+ assert_eq!(content, "pulled content");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn pull_invalid_branch_returns_error() {
+ let temp = temp_dir("git-test").await;
+ let repo_url = create_local_repo(&temp, "main").await;
+ let clone_dir = temp.join("pull-invalid-branch");
+
+ // First clone
+ clone(&repo_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, 1)
+ .await
+ .expect("initial clone should succeed");
+
+ // Pull with invalid branch
+ let result = pull(&clone_dir, "nonexistent-branch-xyz", GIT_TIMEOUT_DEFAULT, 1).await;
+
+ assert!(result.is_err(), "pull should fail for invalid branch");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn sync_repo_clones_when_not_exists() {
+ let temp = temp_dir("git-test").await;
+ let repo_url = create_local_repo(&temp, "main").await;
+ let clone_dir = temp.join("sync-clone");
+
+ let result = sync_repo(&repo_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, 1).await;
+
+ assert!(result.is_ok(), "sync should succeed: {result:?}");
+ assert!(clone_dir.exists(), "clone directory should exist");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn sync_repo_pulls_when_exists() {
+ let temp = temp_dir("git-test").await;
+ let repo_url = create_local_repo(&temp, "main").await;
+ let clone_dir = temp.join("sync-pull");
+
+ // First sync (clone)
+ sync_repo(&repo_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, 1)
+ .await
+ .expect("initial sync should succeed");
+
+ // Push a new commit to origin
+ let work_dir = temp.join("work-sync");
+ push_new_commit(&repo_url, &work_dir, "synced.txt", "synced content").await;
+
+ // Second sync should pull the new commit
+ sync_repo(&repo_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, 1)
+ .await
+ .expect("second sync should succeed");
+
+ // Verify the new file appeared
+ let synced_file = clone_dir.join("synced.txt");
+ assert!(synced_file.exists(), "synced file should exist after pull");
+ let content = fs::read_to_string(&synced_file).await.unwrap();
+ assert_eq!(content, "synced content");
+
+ cleanup(&temp).await;
+ }
+
+ // LFS tests
+
+ #[tokio::test]
+ async fn has_lfs_configured_with_lfs() {
+ let temp = temp_dir("git-test").await;
+ fs::write(
+ temp.join(".gitattributes"),
+ "*.bin filter=lfs diff=lfs merge=lfs -text\n",
+ )
+ .await
+ .unwrap();
+
+ assert!(has_lfs_configured(&temp).await);
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn has_lfs_configured_without_lfs() {
+ let temp = temp_dir("git-test").await;
+ fs::write(temp.join(".gitattributes"), "*.txt text\n")
+ .await
+ .unwrap();
+
+ assert!(!has_lfs_configured(&temp).await);
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn has_lfs_configured_no_file() {
+ let temp = temp_dir("git-test").await;
+ // No .gitattributes file
+
+ assert!(!has_lfs_configured(&temp).await);
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn has_lfs_pointers_detects_pointer() {
+ let temp = temp_dir("git-test").await;
+
+ // Initialize git repo
+ init_git_repo(&temp).await;
+
+ // Create LFS pointer file
+ let pointer_content = "version https://git-lfs.github.com/spec/v1\n\
+ oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393\n\
+ size 12345\n";
+ fs::write(temp.join("large.bin"), pointer_content)
+ .await
+ .unwrap();
+
+ // Stage the file
+ stage_file(&temp, "large.bin").await;
+
+ let result = has_lfs_pointers(&temp).await;
+ assert!(result.is_ok());
+ assert!(result.unwrap());
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn has_lfs_pointers_ignores_non_pointers() {
+ let temp = temp_dir("git-test").await;
+
+ // Initialize git repo
+ init_git_repo(&temp).await;
+
+ // Create normal small file
+ fs::write(temp.join("readme.txt"), "Hello World")
+ .await
+ .unwrap();
+ stage_file(&temp, "readme.txt").await;
+
+ let result = has_lfs_pointers(&temp).await;
+ assert!(result.is_ok());
+ assert!(!result.unwrap());
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn has_lfs_pointers_ignores_large_files() {
+ let temp = temp_dir("git-test").await;
+
+ init_git_repo(&temp).await;
+
+ // Create large file that starts with LFS signature (edge case)
+ let mut content = String::from("version https://git-lfs.github.com/spec/v1\n");
+ content.push_str(&"x".repeat(2000)); // > 1024 bytes
+ fs::write(temp.join("large.txt"), &content).await.unwrap();
+ stage_file(&temp, "large.txt").await;
+
+ let result = has_lfs_pointers(&temp).await;
+ assert!(result.is_ok());
+ assert!(!result.unwrap()); // Should be ignored due to size
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn maybe_fetch_lfs_no_config() {
+ let temp = temp_dir("git-test").await;
+ init_git_repo(&temp).await;
+
+ // No .gitattributes = no LFS
+ let result = maybe_fetch_lfs(&temp).await;
+ assert!(result.is_ok());
+
+ cleanup(&temp).await;
+ }
+
+ // Helper functions for LFS tests
+
+ async fn init_git_repo(dir: &Path) {
+ git_cmd()
+ .args(["init"])
+ .current_dir(dir)
+ .output()
+ .await
+ .unwrap();
+ configure_test_git_user(dir).await;
+ }
+
+ async fn stage_file(dir: &Path, filename: &str) {
+ git_cmd()
+ .args(["add", filename])
+ .current_dir(dir)
+ .output()
+ .await
+ .unwrap();
+ }
+
+ /// Clone a bare repo into `work_dir`, commit a new file, and push it.
+ async fn push_new_commit(repo_url: &str, work_dir: &Path, filename: &str, content: &str) {
+ git_cmd()
+ .args(["clone", repo_url, work_dir.to_str().unwrap()])
+ .output()
+ .await
+ .unwrap();
+ configure_test_git_user(work_dir).await;
+
+ fs::write(work_dir.join(filename), content).await.unwrap();
+
+ git_cmd()
+ .args(["add", filename])
+ .current_dir(work_dir)
+ .output()
+ .await
+ .unwrap();
+
+ git_cmd()
+ .args(["commit", "-m", "New commit"])
+ .current_dir(work_dir)
+ .output()
+ .await
+ .unwrap();
+
+ git_cmd()
+ .args(["push"])
+ .current_dir(work_dir)
+ .output()
+ .await
+ .unwrap();
+ }
+
+ // has_remote_changes tests
+
+ #[tokio::test]
+ async fn has_remote_changes_nonexistent_dir_returns_true() {
+ let temp = temp_dir("git-test").await;
+ let nonexistent = temp.join("does-not-exist");
+
+ let result = has_remote_changes(&nonexistent, "main", GIT_TIMEOUT_DEFAULT, 1).await;
+ assert!(result.is_ok());
+ assert!(result.unwrap(), "nonexistent directory should return true");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn has_remote_changes_up_to_date_returns_false() {
+ let temp = temp_dir("git-test").await;
+ let repo_url = create_local_repo(&temp, "main").await;
+ let clone_dir = temp.join("test-clone");
+
+ // Clone the repo
+ clone(&repo_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, 1)
+ .await
+ .unwrap();
+
+ // Check for changes - should be false since we just cloned
+ let result = has_remote_changes(&clone_dir, "main", GIT_TIMEOUT_DEFAULT, 1).await;
+ assert!(result.is_ok(), "has_remote_changes failed: {result:?}");
+ assert!(!result.unwrap(), "freshly cloned repo should be up-to-date");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn has_remote_changes_detects_new_commits() {
+ let temp = temp_dir("git-test").await;
+ let repo_url = create_local_repo(&temp, "main").await;
+ let clone_dir = temp.join("test-clone");
+
+ // Clone the repo
+ clone(&repo_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, 1)
+ .await
+ .unwrap();
+
+ // Push a new commit to the origin
+ let work_dir = temp.join("work2");
+ push_new_commit(&repo_url, &work_dir, "new-file.txt", "new content").await;
+
+ // Now check for changes - should detect the new commit
+ let result = has_remote_changes(&clone_dir, "main", GIT_TIMEOUT_DEFAULT, 1).await;
+ assert!(result.is_ok(), "has_remote_changes failed: {result:?}");
+ assert!(result.unwrap(), "should detect new commits on remote");
+
+ cleanup(&temp).await;
+ }
+
+ // git_depth tests
+
+ #[tokio::test]
+ async fn clone_full_depth_creates_complete_history() {
+ let temp = temp_dir("git-test").await;
+ let repo_url = create_local_repo(&temp, "main").await;
+
+ // Push a second commit so we have more than 1 commit in history
+ let work_dir = temp.join("work-depth");
+ push_new_commit(&repo_url, &work_dir, "second.txt", "second commit").await;
+
+ let clone_dir = temp.join("full-clone");
+
+ // Clone with depth=0 (full clone)
+ clone(&repo_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, 0)
+ .await
+ .expect("full clone should succeed");
+
+ // Verify we have more than 1 commit (full history)
+ let output = git_cmd()
+ .args(["rev-list", "--count", "HEAD"])
+ .current_dir(&clone_dir)
+ .output()
+ .await
+ .unwrap();
+ let count: u32 = String::from_utf8_lossy(&output.stdout)
+ .trim()
+ .parse()
+ .unwrap();
+ assert!(
+ count > 1,
+ "full clone should have multiple commits, got {count}"
+ );
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn sync_repo_full_depth_preserves_history() {
+ let temp = temp_dir("git-test").await;
+ let repo_url = create_local_repo(&temp, "main").await;
+
+ // Push a second commit
+ let work_dir = temp.join("work-depth2");
+ push_new_commit(&repo_url, &work_dir, "second.txt", "second").await;
+
+ let clone_dir = temp.join("sync-full");
+
+ // sync_repo with depth=0 should do a full clone
+ sync_repo(&repo_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, 0)
+ .await
+ .expect("sync with full depth should succeed");
+
+ let output = git_cmd()
+ .args(["rev-list", "--count", "HEAD"])
+ .current_dir(&clone_dir)
+ .output()
+ .await
+ .unwrap();
+ let count: u32 = String::from_utf8_lossy(&output.stdout)
+ .trim()
+ .parse()
+ .unwrap();
+ assert!(
+ count > 1,
+ "full sync should have multiple commits, got {count}"
+ );
+
+ cleanup(&temp).await;
+ }
+
+ // Submodule tests
+
+ #[tokio::test]
+ async fn has_submodules_with_gitmodules_file() {
+ let temp = temp_dir("git-test").await;
+ fs::write(
+ temp.join(".gitmodules"),
+ "[submodule \"lib\"]\n\tpath = lib\n\turl = ../lib.git\n",
+ )
+ .await
+ .unwrap();
+
+ assert!(has_submodules(&temp).await);
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn has_submodules_without_gitmodules() {
+ let temp = temp_dir("git-test").await;
+
+ assert!(!has_submodules(&temp).await);
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn has_submodules_empty_gitmodules() {
+ let temp = temp_dir("git-test").await;
+ fs::write(temp.join(".gitmodules"), "").await.unwrap();
+
+ assert!(!has_submodules(&temp).await);
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn maybe_init_submodules_no_submodules_is_noop() {
+ let temp = temp_dir("git-test").await;
+ let repo_url = create_local_repo(&temp, "main").await;
+ let clone_dir = temp.join("no-submodules");
+
+ clone(&repo_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, 1)
+ .await
+ .expect("clone should succeed");
+
+ // No .gitmodules → should be a no-op
+ let result = maybe_init_submodules(&clone_dir, GIT_TIMEOUT_DEFAULT, 1, false).await;
+ assert!(
+ result.is_ok(),
+ "noop submodule init should succeed: {result:?}"
+ );
+
+ cleanup(&temp).await;
+ }
+
+ /// Create a parent repo with a submodule wired up.
+ /// Returns (parent_url, submodule_url).
+ async fn create_repo_with_submodule(temp: &Path, branch: &str) -> (String, String) {
+ // 1. Create bare submodule repo with a file
+ let sub_bare = temp.join("sub.git");
+ fs::create_dir_all(&sub_bare).await.unwrap();
+ git_cmd()
+ .args(["init", "--bare", "--initial-branch", branch])
+ .current_dir(&sub_bare)
+ .output()
+ .await
+ .unwrap();
+
+ let sub_work = temp.join("sub-work");
+ git_cmd()
+ .args([
+ "clone",
+ sub_bare.to_str().unwrap(),
+ sub_work.to_str().unwrap(),
+ ])
+ .output()
+ .await
+ .unwrap();
+ configure_test_git_user(&sub_work).await;
+ git_cmd()
+ .args(["checkout", "-B", branch])
+ .current_dir(&sub_work)
+ .output()
+ .await
+ .unwrap();
+ fs::write(sub_work.join("sub-file.txt"), "submodule content")
+ .await
+ .unwrap();
+ git_cmd()
+ .args(["add", "sub-file.txt"])
+ .current_dir(&sub_work)
+ .output()
+ .await
+ .unwrap();
+ let output = git_cmd()
+ .args(["commit", "-m", "sub initial"])
+ .current_dir(&sub_work)
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "sub commit failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+ let output = git_cmd()
+ .args(["push", "-u", "origin", branch])
+ .current_dir(&sub_work)
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "sub push failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+
+ // 2. Create bare parent repo with a submodule reference
+ let parent_bare = temp.join("parent.git");
+ fs::create_dir_all(&parent_bare).await.unwrap();
+ git_cmd()
+ .args(["init", "--bare", "--initial-branch", branch])
+ .current_dir(&parent_bare)
+ .output()
+ .await
+ .unwrap();
+
+ let parent_work = temp.join("parent-work");
+ git_cmd()
+ .args([
+ "clone",
+ parent_bare.to_str().unwrap(),
+ parent_work.to_str().unwrap(),
+ ])
+ .output()
+ .await
+ .unwrap();
+ configure_test_git_user(&parent_work).await;
+ git_cmd()
+ .args(["checkout", "-B", branch])
+ .current_dir(&parent_work)
+ .output()
+ .await
+ .unwrap();
+ fs::write(parent_work.join("README.md"), "# Parent")
+ .await
+ .unwrap();
+ git_cmd()
+ .args(["add", "README.md"])
+ .current_dir(&parent_work)
+ .output()
+ .await
+ .unwrap();
+
+ // Add submodule using file:// URL
+ let sub_url = format!("file://{}", sub_bare.to_str().unwrap());
+ let output = git_cmd()
+ .args(["submodule", "add", &sub_url, "lib"])
+ .current_dir(&parent_work)
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "git submodule add failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+
+ git_cmd()
+ .args(["commit", "-m", "add submodule"])
+ .current_dir(&parent_work)
+ .output()
+ .await
+ .unwrap();
+ let output = git_cmd()
+ .args(["push", "-u", "origin", branch])
+ .current_dir(&parent_work)
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "git push failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+
+ let _ = fs::remove_dir_all(&sub_work).await;
+ let _ = fs::remove_dir_all(&parent_work).await;
+
+ let parent_url = format!("file://{}", parent_bare.to_str().unwrap());
+ (parent_url, sub_url)
+ }
+
+ #[tokio::test]
+ async fn sync_repo_initializes_submodules() {
+ let temp = temp_dir("git-test").await;
+ let (parent_url, _sub_url) = create_repo_with_submodule(&temp, "main").await;
+ let clone_dir = temp.join("clone-with-sub");
+
+ sync_repo(&parent_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, 1)
+ .await
+ .expect("sync should succeed");
+
+ // Verify submodule content is present
+ let sub_file = clone_dir.join("lib").join("sub-file.txt");
+ assert!(sub_file.exists(), "submodule file should exist after sync");
+ let content = fs::read_to_string(&sub_file).await.unwrap();
+ assert_eq!(content, "submodule content");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn sync_repo_updates_submodules_on_pull() {
+ let temp = temp_dir("git-test").await;
+ let (parent_url, sub_url) = create_repo_with_submodule(&temp, "main").await;
+ let clone_dir = temp.join("pull-sub");
+
+ // First sync (clone + submodule init)
+ sync_repo(&parent_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, 1)
+ .await
+ .expect("initial sync should succeed");
+
+ // Push a new commit to the submodule
+ let sub_work = temp.join("sub-update");
+ git_cmd()
+ .args(["clone", &sub_url, sub_work.to_str().unwrap()])
+ .output()
+ .await
+ .unwrap();
+ configure_test_git_user(&sub_work).await;
+ fs::write(sub_work.join("new-sub-file.txt"), "updated submodule")
+ .await
+ .unwrap();
+ git_cmd()
+ .args(["add", "new-sub-file.txt"])
+ .current_dir(&sub_work)
+ .output()
+ .await
+ .unwrap();
+ let output = git_cmd()
+ .args(["commit", "-m", "update sub"])
+ .current_dir(&sub_work)
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "sub commit failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+ let output = git_cmd()
+ .args(["push"])
+ .current_dir(&sub_work)
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "sub push failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+
+ // Update parent to point to new submodule commit
+ let parent_work = temp.join("parent-update");
+ let parent_bare = temp.join("parent.git");
+ git_cmd()
+ .args([
+ "clone",
+ parent_bare.to_str().unwrap(),
+ parent_work.to_str().unwrap(),
+ ])
+ .output()
+ .await
+ .unwrap();
+ configure_test_git_user(&parent_work).await;
+ // Init submodule in parent work copy, then update to latest
+ git_cmd()
+ .args(["submodule", "update", "--init", "--remote", "lib"])
+ .current_dir(&parent_work)
+ .output()
+ .await
+ .unwrap();
+ git_cmd()
+ .args(["add", "lib"])
+ .current_dir(&parent_work)
+ .output()
+ .await
+ .unwrap();
+ let output = git_cmd()
+ .args(["commit", "-m", "bump submodule"])
+ .current_dir(&parent_work)
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "parent bump commit failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+ let output = git_cmd()
+ .args(["push"])
+ .current_dir(&parent_work)
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "parent push failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+
+ // Second sync (pull + submodule update)
+ sync_repo(&parent_url, "main", &clone_dir, GIT_TIMEOUT_DEFAULT, 1)
+ .await
+ .expect("second sync should succeed");
+
+ // Verify the new submodule content is present
+ let new_sub_file = clone_dir.join("lib").join("new-sub-file.txt");
+ assert!(
+ new_sub_file.exists(),
+ "updated submodule file should exist after pull"
+ );
+ let content = fs::read_to_string(&new_sub_file).await.unwrap();
+ assert_eq!(content, "updated submodule");
+
+ cleanup(&temp).await;
+ }
+}
diff --git a/src/hook.rs b/src/hook.rs
new file mode 100644
index 0000000..53e1e18
--- /dev/null
+++ b/src/hook.rs
@@ -0,0 +1,499 @@
+use crate::build::copy_with_tail;
+use std::collections::HashMap;
+use std::path::{Path, PathBuf};
+use std::process::Stdio;
+use std::time::{Duration, Instant};
+use tokio::io::AsyncWriteExt as _;
+use tokio::io::BufWriter;
+use tokio::process::Command;
+use tracing::debug;
+
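+// Wall-clock limit for running a post-deploy hook. The limit is much shorter
+// under `cfg(test)` so the `hook_timeout` test below completes quickly instead
+// of waiting out the full production timeout.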
+#[cfg(not(test))]
+const HOOK_TIMEOUT: Duration = Duration::from_secs(30);
+#[cfg(test)]
+const HOOK_TIMEOUT: Duration = Duration::from_secs(2);
+
+/// Size of the in-memory tail buffer for stderr (last 256 bytes).
+/// Used for error context in `HookResult` without reading the full file.
+const STDERR_TAIL_SIZE: usize = 256;
+
+/// Result of a post-deploy hook execution.
+///
+/// Stdout and stderr are streamed to temporary files on disk during execution.
+/// Callers should pass these paths to `logs::save_hook_log()` for composition.
+pub struct HookResult {
+ pub command: Vec<String>,
+ pub stdout_file: PathBuf,
+ pub stderr_file: PathBuf,
+ pub last_stderr: String,
+ pub exit_code: Option<i32>,
+ pub duration: Duration,
+ pub success: bool,
+}
+
+/// Execute a post-deploy hook command.
+///
+/// Runs the command directly (no shell), with a minimal environment and a timeout.
+/// Stdout and stderr are streamed to the provided temporary files.
+/// Always returns a `HookResult` — never an `Err` — so callers can always log the outcome.
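+///
+/// Illustrative call shape (the site name, command, and temp-file paths below
+/// are placeholders, not taken from the deploy pipeline):
+///
+/// ```text
+/// let result = run_post_deploy_hook(
+///     &hook_cmd, "blog", &build_dir, &public_dir,
+///     "20260126-143000-123456", &env, &stdout_tmp, &stderr_tmp,
+/// ).await;
+/// logs::save_hook_log(&log_dir, "blog", "20260126-143000-123456", &result).await?;
+/// ```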
+#[allow(
+ clippy::implicit_hasher,
+ clippy::large_futures,
+ clippy::too_many_arguments
+)]
+pub async fn run_post_deploy_hook(
+ command: &[String],
+ site_name: &str,
+ build_dir: &Path,
+ public_dir: &Path,
+ timestamp: &str,
+ env: &HashMap<String, String>,
+ stdout_file: &Path,
+ stderr_file: &Path,
+) -> HookResult {
+ let start = Instant::now();
+
+ let Some(executable) = command.first() else {
+ let _ = tokio::fs::File::create(stdout_file).await;
+ let _ = tokio::fs::File::create(stderr_file).await;
+ return HookResult {
+ command: command.to_vec(),
+ stdout_file: stdout_file.to_path_buf(),
+ stderr_file: stderr_file.to_path_buf(),
+ last_stderr: "empty command".to_owned(),
+ exit_code: None,
+ duration: start.elapsed(),
+ success: false,
+ };
+ };
+ let args = command.get(1..).unwrap_or_default();
+
+ let path_env = std::env::var("PATH").unwrap_or_else(|_| "/usr/bin:/bin".to_owned());
+ let home_env = std::env::var("HOME").unwrap_or_else(|_| "/nonexistent".to_owned());
+
+ let child = Command::new(executable)
+ .args(args)
+ .current_dir(build_dir)
+ .kill_on_drop(true)
+ .env_clear()
+ .envs(env)
+ .env("PATH", &path_env)
+ .env("HOME", &home_env)
+ .env("LANG", "C.UTF-8")
+ .env("WITRYNA_SITE", site_name)
+ .env("WITRYNA_BUILD_DIR", build_dir.as_os_str())
+ .env("WITRYNA_PUBLIC_DIR", public_dir.as_os_str())
+ .env("WITRYNA_BUILD_TIMESTAMP", timestamp)
+ .stdout(Stdio::piped())
+ .stderr(Stdio::piped())
+ .spawn();
+
+ let mut child = match child {
+ Ok(c) => c,
+ Err(e) => {
+ // Spawn failure — create empty temp files so log composition works
+ let _ = tokio::fs::File::create(stdout_file).await;
+ let _ = tokio::fs::File::create(stderr_file).await;
+ return HookResult {
+ command: command.to_vec(),
+ stdout_file: stdout_file.to_path_buf(),
+ stderr_file: stderr_file.to_path_buf(),
+ last_stderr: format!("failed to spawn hook: {e}"),
+ exit_code: None,
+ duration: start.elapsed(),
+ success: false,
+ };
+ }
+ };
+
+ debug!(cmd = ?command, "hook process spawned");
+
+ let (last_stderr, exit_code, success) =
+ stream_hook_output(&mut child, stdout_file, stderr_file).await;
+
+ HookResult {
+ command: command.to_vec(),
+ stdout_file: stdout_file.to_path_buf(),
+ stderr_file: stderr_file.to_path_buf(),
+ last_stderr,
+ exit_code,
+ duration: start.elapsed(),
+ success,
+ }
+}
+
+/// Stream hook stdout/stderr to disk and wait for completion with timeout.
+///
+/// Returns `(last_stderr, exit_code, success)`. On any setup or I/O failure,
+/// returns an error description in `last_stderr` with `success = false`.
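+///
+/// Both pipes are drained concurrently with `child.wait()` inside a single
+/// `tokio::join!`, so a hook that fills either pipe's OS buffer cannot deadlock;
+/// the `hook_large_stdout_no_deadlock` and `hook_large_stderr_no_deadlock`
+/// tests exercise this.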
+#[allow(clippy::large_futures)]
+async fn stream_hook_output(
+ child: &mut tokio::process::Child,
+ stdout_file: &Path,
+ stderr_file: &Path,
+) -> (String, Option<i32>, bool) {
+ let Some(stdout_pipe) = child.stdout.take() else {
+ let _ = tokio::fs::File::create(stdout_file).await;
+ let _ = tokio::fs::File::create(stderr_file).await;
+ return ("missing stdout pipe".to_owned(), None, false);
+ };
+ let Some(stderr_pipe) = child.stderr.take() else {
+ let _ = tokio::fs::File::create(stdout_file).await;
+ let _ = tokio::fs::File::create(stderr_file).await;
+ return ("missing stderr pipe".to_owned(), None, false);
+ };
+
+ let mut stdout_writer = match tokio::fs::File::create(stdout_file).await {
+ Ok(f) => BufWriter::new(f),
+ Err(e) => {
+ let _ = tokio::fs::File::create(stderr_file).await;
+ return (
+ format!("failed to create stdout temp file: {e}"),
+ None,
+ false,
+ );
+ }
+ };
+ let mut stderr_writer = match tokio::fs::File::create(stderr_file).await {
+ Ok(f) => BufWriter::new(f),
+ Err(e) => {
+ return (
+ format!("failed to create stderr temp file: {e}"),
+ None,
+ false,
+ );
+ }
+ };
+
+ #[allow(clippy::large_futures)]
+ match tokio::time::timeout(HOOK_TIMEOUT, async {
+ let (stdout_res, stderr_res, wait_result) = tokio::join!(
+ copy_with_tail(stdout_pipe, &mut stdout_writer, 0),
+ copy_with_tail(stderr_pipe, &mut stderr_writer, STDERR_TAIL_SIZE),
+ child.wait(),
+ );
+ (stdout_res, stderr_res, wait_result)
+ })
+ .await
+ {
+ Ok((stdout_res, stderr_res, Ok(status))) => {
+ let _ = stdout_writer.flush().await;
+ let _ = stderr_writer.flush().await;
+
+ let last_stderr = match stderr_res {
+ Ok((_, tail)) => String::from_utf8_lossy(&tail).into_owned(),
+ Err(_) => String::new(),
+ };
+ // stdout_res error is non-fatal for hook result
+ let _ = stdout_res;
+
+ (last_stderr, status.code(), status.success())
+ }
+ Ok((_, _, Err(e))) => {
+ let _ = stdout_writer.flush().await;
+ let _ = stderr_writer.flush().await;
+ (format!("hook I/O error: {e}"), None, false)
+ }
+ Err(_) => {
+ // Timeout — kill the child
+ let _ = child.kill().await;
+ let _ = stdout_writer.flush().await;
+ let _ = stderr_writer.flush().await;
+ (String::new(), None, false)
+ }
+ }
+}
+
+#[cfg(test)]
+#[allow(clippy::unwrap_used, clippy::indexing_slicing, clippy::large_futures)]
+mod tests {
+ use super::*;
+ use std::collections::HashMap;
+ use tempfile::TempDir;
+ use tokio::fs;
+
+ fn cmd(args: &[&str]) -> Vec<String> {
+ args.iter().map(std::string::ToString::to_string).collect()
+ }
+
+ #[tokio::test]
+ async fn hook_success() {
+ let tmp = TempDir::new().unwrap();
+ let stdout_tmp = tmp.path().join("stdout.tmp");
+ let stderr_tmp = tmp.path().join("stderr.tmp");
+
+ let result = run_post_deploy_hook(
+ &cmd(&["echo", "hello"]),
+ "test-site",
+ tmp.path(),
+ &tmp.path().join("current"),
+ "ts",
+ &HashMap::new(),
+ &stdout_tmp,
+ &stderr_tmp,
+ )
+ .await;
+
+ assert!(result.success);
+ assert_eq!(result.exit_code, Some(0));
+ let stdout = fs::read_to_string(&stdout_tmp).await.unwrap();
+ assert!(stdout.contains("hello"));
+ }
+
+ #[tokio::test]
+ async fn hook_failure_exit_code() {
+ let tmp = TempDir::new().unwrap();
+ let stdout_tmp = tmp.path().join("stdout.tmp");
+ let stderr_tmp = tmp.path().join("stderr.tmp");
+
+ let result = run_post_deploy_hook(
+ &cmd(&["false"]),
+ "test-site",
+ tmp.path(),
+ &tmp.path().join("current"),
+ "ts",
+ &HashMap::new(),
+ &stdout_tmp,
+ &stderr_tmp,
+ )
+ .await;
+
+ assert!(!result.success);
+ assert_eq!(result.exit_code, Some(1));
+ }
+
+ #[tokio::test]
+ async fn hook_timeout() {
+ let tmp = TempDir::new().unwrap();
+ let stdout_tmp = tmp.path().join("stdout.tmp");
+ let stderr_tmp = tmp.path().join("stderr.tmp");
+
+ let result = run_post_deploy_hook(
+ &cmd(&["sleep", "10"]),
+ "test-site",
+ tmp.path(),
+ &tmp.path().join("current"),
+ "ts",
+ &HashMap::new(),
+ &stdout_tmp,
+ &stderr_tmp,
+ )
+ .await;
+
+ assert!(!result.success);
+ // Timeout path sets last_stderr to empty string — error context is in the log
+ assert!(result.last_stderr.is_empty());
+ }
+
+ #[tokio::test]
+ async fn hook_env_vars() {
+ let tmp = TempDir::new().unwrap();
+ let stdout_tmp = tmp.path().join("stdout.tmp");
+ let stderr_tmp = tmp.path().join("stderr.tmp");
+
+ let env = HashMap::from([
+ ("MY_VAR".to_owned(), "my_value".to_owned()),
+ ("DEPLOY_TARGET".to_owned(), "staging".to_owned()),
+ ]);
+ let public_dir = tmp.path().join("current");
+ let result = run_post_deploy_hook(
+ &cmd(&["env"]),
+ "my-site",
+ tmp.path(),
+ &public_dir,
+ "20260202-120000-000000",
+ &env,
+ &stdout_tmp,
+ &stderr_tmp,
+ )
+ .await;
+
+ assert!(result.success);
+ let stdout = fs::read_to_string(&stdout_tmp).await.unwrap();
+ assert!(stdout.contains("WITRYNA_SITE=my-site"));
+ assert!(stdout.contains("WITRYNA_BUILD_TIMESTAMP=20260202-120000-000000"));
+ assert!(stdout.contains("WITRYNA_BUILD_DIR="));
+ assert!(stdout.contains("WITRYNA_PUBLIC_DIR="));
+ assert!(stdout.contains("PATH="));
+ assert!(stdout.contains("HOME="));
+ assert!(stdout.contains("LANG=C.UTF-8"));
+ assert!(stdout.contains("MY_VAR=my_value"));
+ assert!(stdout.contains("DEPLOY_TARGET=staging"));
+
+ // Verify no unexpected env vars leak through
+ let lines: Vec<&str> = stdout.lines().collect();
+ for line in &lines {
+ let key = line.split('=').next().unwrap_or("");
+ assert!(
+ [
+ "PATH",
+ "HOME",
+ "LANG",
+ "WITRYNA_SITE",
+ "WITRYNA_BUILD_DIR",
+ "WITRYNA_PUBLIC_DIR",
+ "WITRYNA_BUILD_TIMESTAMP",
+ "MY_VAR",
+ "DEPLOY_TARGET",
+ ]
+ .contains(&key),
+ "unexpected env var: {line}"
+ );
+ }
+ }
+
+ #[tokio::test]
+ async fn hook_nonexistent_command() {
+ let tmp = TempDir::new().unwrap();
+ let stdout_tmp = tmp.path().join("stdout.tmp");
+ let stderr_tmp = tmp.path().join("stderr.tmp");
+
+ let result = run_post_deploy_hook(
+ &cmd(&["/nonexistent-binary-xyz"]),
+ "test-site",
+ tmp.path(),
+ &tmp.path().join("current"),
+ "ts",
+ &HashMap::new(),
+ &stdout_tmp,
+ &stderr_tmp,
+ )
+ .await;
+
+ assert!(!result.success);
+ assert!(result.last_stderr.contains("failed to spawn hook"));
+ }
+
+ #[tokio::test]
+ async fn hook_large_output_streams_to_disk() {
+ let tmp = TempDir::new().unwrap();
+ let stdout_tmp = tmp.path().join("stdout.tmp");
+ let stderr_tmp = tmp.path().join("stderr.tmp");
+
+ // Generate output larger than old MAX_OUTPUT_BYTES (256 KB) — now unbounded to disk
+ let result = run_post_deploy_hook(
+ &cmd(&["sh", "-c", "yes | head -c 300000"]),
+ "test-site",
+ tmp.path(),
+ &tmp.path().join("current"),
+ "ts",
+ &HashMap::new(),
+ &stdout_tmp,
+ &stderr_tmp,
+ )
+ .await;
+
+ assert!(result.success);
+ // All 300000 bytes should be on disk (no truncation)
+ let stdout_len = fs::metadata(&stdout_tmp).await.unwrap().len();
+ assert_eq!(stdout_len, 300_000);
+ }
+
+ #[tokio::test]
+ async fn hook_current_dir() {
+ let tmp = TempDir::new().unwrap();
+ let stdout_tmp = tmp.path().join("stdout.tmp");
+ let stderr_tmp = tmp.path().join("stderr.tmp");
+
+ let result = run_post_deploy_hook(
+ &cmd(&["pwd"]),
+ "test-site",
+ tmp.path(),
+ &tmp.path().join("current"),
+ "ts",
+ &HashMap::new(),
+ &stdout_tmp,
+ &stderr_tmp,
+ )
+ .await;
+
+ assert!(result.success);
+ let stdout = fs::read_to_string(&stdout_tmp).await.unwrap();
+ // Canonicalize to handle /tmp -> /private/tmp on macOS
+ let expected = std::fs::canonicalize(tmp.path()).unwrap();
+ let actual = stdout.trim();
+ let actual_canonical = std::fs::canonicalize(actual).unwrap_or_default();
+ assert_eq!(actual_canonical, expected);
+ }
+
+ #[tokio::test]
+ async fn hook_large_stdout_no_deadlock() {
+ // Writes 128 KB to stdout, exceeding the ~64 KB OS pipe buffer.
+ let tmp = TempDir::new().unwrap();
+ let stdout_tmp = tmp.path().join("stdout.tmp");
+ let stderr_tmp = tmp.path().join("stderr.tmp");
+
+ let result = run_post_deploy_hook(
+ &cmd(&["sh", "-c", "dd if=/dev/zero bs=1024 count=128 2>/dev/null"]),
+ "test-site",
+ tmp.path(),
+ &tmp.path().join("current"),
+ "ts",
+ &HashMap::new(),
+ &stdout_tmp,
+ &stderr_tmp,
+ )
+ .await;
+
+ assert!(result.success);
+ let stdout_len = fs::metadata(&stdout_tmp).await.unwrap().len();
+ assert_eq!(stdout_len, 128 * 1024);
+ }
+
+ #[tokio::test]
+ async fn hook_large_stderr_no_deadlock() {
+ // Writes 128 KB to stderr, covering the other pipe.
+ let tmp = TempDir::new().unwrap();
+ let stdout_tmp = tmp.path().join("stdout.tmp");
+ let stderr_tmp = tmp.path().join("stderr.tmp");
+
+ let result = run_post_deploy_hook(
+ &cmd(&[
+ "sh",
+ "-c",
+ "dd if=/dev/zero bs=1024 count=128 >&2 2>/dev/null",
+ ]),
+ "test-site",
+ tmp.path(),
+ &tmp.path().join("current"),
+ "ts",
+ &HashMap::new(),
+ &stdout_tmp,
+ &stderr_tmp,
+ )
+ .await;
+
+ assert!(result.success);
+ let stderr_len = fs::metadata(&stderr_tmp).await.unwrap().len();
+ assert_eq!(stderr_len, 128 * 1024);
+ }
+
+ #[tokio::test]
+ async fn hook_user_env_does_not_override_reserved() {
+ let tmp = TempDir::new().unwrap();
+ let stdout_tmp = tmp.path().join("stdout.tmp");
+ let stderr_tmp = tmp.path().join("stderr.tmp");
+
+ let env = HashMap::from([("PATH".to_owned(), "/should-not-appear".to_owned())]);
+ let result = run_post_deploy_hook(
+ &cmd(&["env"]),
+ "test-site",
+ tmp.path(),
+ &tmp.path().join("current"),
+ "ts",
+ &env,
+ &stdout_tmp,
+ &stderr_tmp,
+ )
+ .await;
+
+ assert!(result.success);
+ let stdout = fs::read_to_string(&stdout_tmp).await.unwrap();
+ // PATH should be the system value, not the user override
+ assert!(!stdout.contains("PATH=/should-not-appear"));
+ assert!(stdout.contains("PATH="));
+ }
+}
diff --git a/src/lib.rs b/src/lib.rs
new file mode 100644
index 0000000..a80b591
--- /dev/null
+++ b/src/lib.rs
@@ -0,0 +1,21 @@
+//! Internal library crate for witryna.
+//!
+//! This crate exposes modules for use by the binary and integration tests.
+//! It is not intended for external consumption and has no stability guarantees.
+
+pub mod build;
+pub mod build_guard;
+pub mod cleanup;
+pub mod cli;
+pub mod config;
+pub mod git;
+pub mod hook;
+pub mod logs;
+pub mod pipeline;
+pub mod polling;
+pub mod publish;
+pub mod repo_config;
+pub mod server;
+
+#[cfg(any(test, feature = "integration"))]
+pub mod test_support;
diff --git a/src/logs.rs b/src/logs.rs
new file mode 100644
index 0000000..bddcc9d
--- /dev/null
+++ b/src/logs.rs
@@ -0,0 +1,919 @@
+use anyhow::{Context as _, Result};
+use std::path::{Path, PathBuf};
+use std::time::Duration;
+use tokio::io::AsyncWriteExt as _;
+use tokio::process::Command;
+use tracing::{debug, warn};
+
+use crate::hook::HookResult;
+
+/// Exit status of a build operation.
+#[derive(Debug)]
+pub enum BuildExitStatus {
+ Success,
+ Failed {
+ exit_code: Option<i32>,
+ error: String,
+ },
+}
+
+impl std::fmt::Display for BuildExitStatus {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ match self {
+ Self::Success => write!(f, "success"),
+ Self::Failed { exit_code, error } => {
+ if let Some(code) = exit_code {
+ write!(f, "failed (exit code: {code}): {error}")
+ } else {
+ write!(f, "failed: {error}")
+ }
+ }
+ }
+ }
+}
+
+/// Metadata about a build for logging purposes.
+#[derive(Debug)]
+pub struct BuildLogMeta {
+ pub site_name: String,
+ pub timestamp: String,
+ pub git_commit: Option<String>,
+ pub container_image: String,
+ pub duration: Duration,
+ pub exit_status: BuildExitStatus,
+}
+
+/// Save build log to disk via streaming composition.
+///
+/// Writes the metadata header to the log file, then streams stdout and stderr
+/// content from temporary files via `tokio::io::copy` (O(1) memory).
+/// Deletes the temporary files after successful composition.
+///
+/// Creates a log file at `{log_dir}/{site_name}/{timestamp}.log`.
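+///
+/// The composed file is the metadata header followed by the two output
+/// sections, in this order:
+///
+/// ```text
+/// === BUILD LOG ===
+/// <Site / Timestamp / Git Commit / Image / Duration / Status fields>
+///
+/// === STDOUT ===
+/// <contents streamed from stdout_file>
+///
+/// === STDERR ===
+/// <contents streamed from stderr_file>
+/// ```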
+///
+/// # Errors
+///
+/// Returns an error if the log directory cannot be created, the log file
+/// cannot be written, or the temp files cannot be read.
+pub async fn save_build_log(
+ log_dir: &Path,
+ meta: &BuildLogMeta,
+ stdout_file: &Path,
+ stderr_file: &Path,
+) -> Result<PathBuf> {
+ let site_log_dir = log_dir.join(&meta.site_name);
+ let log_file = site_log_dir.join(format!("{}.log", meta.timestamp));
+
+ // Create logs directory if it doesn't exist
+ tokio::fs::create_dir_all(&site_log_dir)
+ .await
+ .with_context(|| {
+ format!(
+ "failed to create logs directory: {}",
+ site_log_dir.display()
+ )
+ })?;
+
+ // Write header + stream content from temp files
+ let mut log_writer = tokio::io::BufWriter::new(
+ tokio::fs::File::create(&log_file)
+ .await
+ .with_context(|| format!("failed to create log file: {}", log_file.display()))?,
+ );
+
+ let header = format_log_header(meta);
+ log_writer.write_all(header.as_bytes()).await?;
+
+ // Append stdout section
+ log_writer.write_all(b"\n=== STDOUT ===\n").await?;
+ let mut stdout_reader = tokio::fs::File::open(stdout_file)
+ .await
+ .with_context(|| format!("failed to open {}", stdout_file.display()))?;
+ tokio::io::copy(&mut stdout_reader, &mut log_writer).await?;
+
+ // Append stderr section
+ log_writer.write_all(b"\n\n=== STDERR ===\n").await?;
+ let mut stderr_reader = tokio::fs::File::open(stderr_file)
+ .await
+ .with_context(|| format!("failed to open {}", stderr_file.display()))?;
+ tokio::io::copy(&mut stderr_reader, &mut log_writer).await?;
+ log_writer.write_all(b"\n").await?;
+
+ log_writer.flush().await?;
+ drop(log_writer);
+
+ // Delete temp files (best-effort)
+ let _ = tokio::fs::remove_file(stdout_file).await;
+ let _ = tokio::fs::remove_file(stderr_file).await;
+
+ debug!(
+ path = %log_file.display(),
+ site = %meta.site_name,
+ "build log saved"
+ );
+
+ Ok(log_file)
+}
+
+/// Format a duration as a human-readable string (e.g., "45s" or "2m 30s").
+#[must_use]
+pub fn format_duration(d: Duration) -> String {
+ let secs = d.as_secs();
+ if secs >= 60 {
+ format!("{}m {}s", secs / 60, secs % 60)
+ } else {
+ format!("{secs}s")
+ }
+}
+
+/// Format the metadata header for a build log (without output sections).
+fn format_log_header(meta: &BuildLogMeta) -> String {
+ let git_commit = meta.git_commit.as_deref().unwrap_or("unknown");
+ let duration_str = format_duration(meta.duration);
+
+ format!(
+ "=== BUILD LOG ===\n\
+ Site: {}\n\
+ Timestamp: {}\n\
+ Git Commit: {}\n\
+ Image: {}\n\
+ Duration: {}\n\
+ Status: {}",
+ meta.site_name,
+ meta.timestamp,
+ git_commit,
+ meta.container_image,
+ duration_str,
+ meta.exit_status,
+ )
+}
+
+/// Get the current git commit hash from a repository.
+///
+/// Returns the abbreviated commit hash (at least 7 characters; git may use more
+/// to keep it unambiguous), or `None` if the directory is not a valid git
+/// repository or the command fails.
+pub async fn get_git_commit(clone_dir: &Path) -> Option<String> {
+ let mut cmd = Command::new("git");
+ cmd.env_remove("GIT_DIR")
+ .env_remove("GIT_WORK_TREE")
+ .env_remove("GIT_INDEX_FILE");
+ let output = cmd
+ .args(["rev-parse", "--short", "HEAD"])
+ .current_dir(clone_dir)
+ .output()
+ .await
+ .ok()?;
+
+ if !output.status.success() {
+ return None;
+ }
+
+ let commit = String::from_utf8_lossy(&output.stdout).trim().to_owned();
+
+ if commit.is_empty() {
+ None
+ } else {
+ Some(commit)
+ }
+}
+
+/// Save hook log to disk via streaming composition.
+///
+/// Writes the metadata header to the log file, then streams stdout and stderr
+/// content from temporary files via `tokio::io::copy` (O(1) memory).
+/// Deletes the temporary files after successful composition.
+///
+/// Creates a log file at `{log_dir}/{site_name}/{timestamp}-hook.log`.
+/// A log is written for every hook invocation regardless of outcome.
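+/// The layout mirrors `save_build_log`: a `=== HOOK LOG ===` header followed by
+/// the `=== STDOUT ===` and `=== STDERR ===` sections streamed from the
+/// temporary files recorded in the `HookResult`.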
+///
+/// # Errors
+///
+/// Returns an error if the log directory cannot be created or the log file
+/// cannot be written.
+pub async fn save_hook_log(
+ log_dir: &Path,
+ site_name: &str,
+ timestamp: &str,
+ hook_result: &HookResult,
+) -> Result<PathBuf> {
+ let site_log_dir = log_dir.join(site_name);
+ let log_file = site_log_dir.join(format!("{timestamp}-hook.log"));
+
+ tokio::fs::create_dir_all(&site_log_dir)
+ .await
+ .with_context(|| {
+ format!(
+ "failed to create logs directory: {}",
+ site_log_dir.display()
+ )
+ })?;
+
+ let mut log_writer = tokio::io::BufWriter::new(
+ tokio::fs::File::create(&log_file)
+ .await
+ .with_context(|| format!("failed to create hook log file: {}", log_file.display()))?,
+ );
+
+ let header = format_hook_log_header(site_name, timestamp, hook_result);
+ log_writer.write_all(header.as_bytes()).await?;
+
+ // Append stdout section
+ log_writer.write_all(b"\n=== STDOUT ===\n").await?;
+ let mut stdout_reader = tokio::fs::File::open(&hook_result.stdout_file)
+ .await
+ .with_context(|| format!("failed to open {}", hook_result.stdout_file.display()))?;
+ tokio::io::copy(&mut stdout_reader, &mut log_writer).await?;
+
+ // Append stderr section
+ log_writer.write_all(b"\n\n=== STDERR ===\n").await?;
+ let mut stderr_reader = tokio::fs::File::open(&hook_result.stderr_file)
+ .await
+ .with_context(|| format!("failed to open {}", hook_result.stderr_file.display()))?;
+ tokio::io::copy(&mut stderr_reader, &mut log_writer).await?;
+ log_writer.write_all(b"\n").await?;
+
+ log_writer.flush().await?;
+ drop(log_writer);
+
+ // Delete temp files (best-effort)
+ let _ = tokio::fs::remove_file(&hook_result.stdout_file).await;
+ let _ = tokio::fs::remove_file(&hook_result.stderr_file).await;
+
+ debug!(
+ path = %log_file.display(),
+ site = %site_name,
+ "hook log saved"
+ );
+
+ Ok(log_file)
+}
+
+/// Format the metadata header for a hook log (without output sections).
+fn format_hook_log_header(site_name: &str, timestamp: &str, result: &HookResult) -> String {
+ let command_str = result.command.join(" ");
+ let duration_str = format_duration(result.duration);
+
+ let status_str = if result.success {
+ "success".to_owned()
+ } else if let Some(code) = result.exit_code {
+ format!("failed (exit code {code})")
+ } else {
+ "failed (signal)".to_owned()
+ };
+
+ format!(
+ "=== HOOK LOG ===\n\
+ Site: {site_name}\n\
+ Timestamp: {timestamp}\n\
+ Command: {command_str}\n\
+ Duration: {duration_str}\n\
+ Status: {status_str}"
+ )
+}
+
+/// Parsed header from a build log file.
+#[derive(Debug, Clone, serde::Serialize)]
+pub struct ParsedLogHeader {
+ pub site_name: String,
+ pub timestamp: String,
+ pub git_commit: String,
+ pub image: String,
+ pub duration: String,
+ pub status: String,
+}
+
+/// Combined deployment status (build + optional hook).
+#[derive(Debug, Clone, serde::Serialize)]
+pub struct DeploymentStatus {
+ pub site_name: String,
+ pub timestamp: String,
+ pub git_commit: String,
+ pub duration: String,
+ pub status: String,
+ pub log: String,
+}
+
+/// Parse the header section of a build log file.
+///
+/// Expects lines like:
+/// ```text
+/// === BUILD LOG ===
+/// Site: my-site
+/// Timestamp: 20260126-143000-123456
+/// Git Commit: abc123d
+/// Image: node:20-alpine
+/// Duration: 45s
+/// Status: success
+/// ```
+///
+/// Returns `None` if the header is malformed.
+#[must_use]
+pub fn parse_log_header(content: &str) -> Option<ParsedLogHeader> {
+ let mut site_name = None;
+ let mut timestamp = None;
+ let mut git_commit = None;
+ let mut image = None;
+ let mut duration = None;
+ let mut status = None;
+
+ for line in content.lines().take(10) {
+ if let Some(val) = line.strip_prefix("Site: ") {
+ site_name = Some(val.to_owned());
+ } else if let Some(val) = line.strip_prefix("Timestamp: ") {
+ timestamp = Some(val.to_owned());
+ } else if let Some(val) = line.strip_prefix("Git Commit: ") {
+ git_commit = Some(val.to_owned());
+ } else if let Some(val) = line.strip_prefix("Image: ") {
+ image = Some(val.to_owned());
+ } else if let Some(val) = line.strip_prefix("Duration: ") {
+ duration = Some(val.to_owned());
+ } else if let Some(val) = line.strip_prefix("Status: ") {
+ status = Some(val.to_owned());
+ }
+ }
+
+ Some(ParsedLogHeader {
+ site_name: site_name?,
+ timestamp: timestamp?,
+ git_commit: git_commit.unwrap_or_else(|| "unknown".to_owned()),
+ image: image.unwrap_or_else(|| "unknown".to_owned()),
+ duration: duration?,
+ status: status?,
+ })
+}
+
+/// Parse the status line from a hook log.
+///
+/// Returns `Some(true)` for success, `Some(false)` for failure,
+/// `None` if the content cannot be parsed.
+#[must_use]
+pub fn parse_hook_status(content: &str) -> Option<bool> {
+ for line in content.lines().take(10) {
+ if let Some(val) = line.strip_prefix("Status: ") {
+ return Some(val == "success");
+ }
+ }
+ None
+}
+
+/// List build log files for a site, sorted newest-first.
+///
+/// Returns `(timestamp, path)` pairs. Excludes `*-hook.log` and `*.tmp` files.
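+///
+/// For a site named `my-site`, entries look like
+/// `("20260127-090000-000000", "<log_dir>/my-site/20260127-090000-000000.log")`,
+/// newest first (the timestamp and path here are illustrative).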
+///
+/// # Errors
+///
+/// Returns an error if the directory cannot be read (except for not-found,
+/// which returns an empty list).
+pub async fn list_site_logs(log_dir: &Path, site_name: &str) -> Result<Vec<(String, PathBuf)>> {
+ let site_log_dir = log_dir.join(site_name);
+
+ if !site_log_dir.is_dir() {
+ return Ok(Vec::new());
+ }
+
+ let mut entries = tokio::fs::read_dir(&site_log_dir)
+ .await
+ .with_context(|| format!("failed to read log directory: {}", site_log_dir.display()))?;
+
+ let mut logs = Vec::new();
+
+ while let Some(entry) = entries.next_entry().await? {
+ let name = entry.file_name();
+ let name_str = name.to_string_lossy();
+
+ // Skip hook logs and temp files
+ if name_str.ends_with("-hook.log") || name_str.ends_with(".tmp") {
+ continue;
+ }
+
+ if let Some(timestamp) = name_str.strip_suffix(".log") {
+ logs.push((timestamp.to_owned(), entry.path()));
+ }
+ }
+
+ // Sort descending (newest first) — timestamps are lexicographically sortable
+ logs.sort_by(|a, b| b.0.cmp(&a.0));
+
+ Ok(logs)
+}
+
+/// Get the deployment status for a single build log.
+///
+/// Reads the build log header and checks for an accompanying hook log
+/// to determine overall deployment status.
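+///
+/// Precedence when combining the two: a failed build status always wins; a
+/// failing hook downgrades an otherwise successful build to "hook failed"; and
+/// a missing or unparseable hook log leaves the build status unchanged.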
+///
+/// # Errors
+///
+/// Returns an error if the build log cannot be read.
+pub async fn get_deployment_status(
+ log_dir: &Path,
+ site_name: &str,
+ timestamp: &str,
+ log_path: &Path,
+) -> Result<DeploymentStatus> {
+ let content = tokio::fs::read_to_string(log_path)
+ .await
+ .with_context(|| format!("failed to read build log: {}", log_path.display()))?;
+
+ let header = parse_log_header(&content);
+
+ let (git_commit, duration, build_status) = match &header {
+ Some(h) => (h.git_commit.clone(), h.duration.clone(), h.status.clone()),
+ None => {
+ warn!(path = %log_path.display(), "malformed build log header");
+ (
+ "unknown".to_owned(),
+ "-".to_owned(),
+ "(parse error)".to_owned(),
+ )
+ }
+ };
+
+ // Check for accompanying hook log
+ let hook_log_path = log_dir
+ .join(site_name)
+ .join(format!("{timestamp}-hook.log"));
+
+ let status = if hook_log_path.is_file() {
+ match tokio::fs::read_to_string(&hook_log_path).await {
+ Ok(hook_content) => match parse_hook_status(&hook_content) {
+ Some(true) => {
+ if build_status.starts_with("failed") {
+ build_status
+ } else {
+ "success".to_owned()
+ }
+ }
+ Some(false) => {
+ if build_status.starts_with("failed") {
+ build_status
+ } else {
+ "hook failed".to_owned()
+ }
+ }
+ None => build_status,
+ },
+ Err(_) => build_status,
+ }
+ } else {
+ build_status
+ };
+
+ Ok(DeploymentStatus {
+ site_name: site_name.to_owned(),
+ timestamp: timestamp.to_owned(),
+ git_commit,
+ duration,
+ status,
+ log: log_path.to_string_lossy().to_string(),
+ })
+}
+
+#[cfg(test)]
+#[allow(clippy::unwrap_used, clippy::indexing_slicing)]
+mod tests {
+ use super::*;
+ use crate::test_support::{cleanup, temp_dir};
+ use tokio::fs;
+
+ /// Create a git Command isolated from parent git environment.
+ /// Prevents interference when tests run inside git hooks
+ /// (e.g., pre-commit hook running `cargo test`).
+ fn git_cmd() -> Command {
+ let mut cmd = Command::new("git");
+ cmd.env_remove("GIT_DIR")
+ .env_remove("GIT_WORK_TREE")
+ .env_remove("GIT_INDEX_FILE");
+ cmd
+ }
+
+ #[tokio::test]
+ async fn save_build_log_creates_file_with_correct_content() {
+ let base_dir = temp_dir("logs-test").await;
+ let log_dir = base_dir.join("logs");
+
+ let meta = BuildLogMeta {
+ site_name: "test-site".to_owned(),
+ timestamp: "20260126-143000-123456".to_owned(),
+ git_commit: Some("abc123d".to_owned()),
+ container_image: "node:20-alpine".to_owned(),
+ duration: Duration::from_secs(45),
+ exit_status: BuildExitStatus::Success,
+ };
+
+ // Create temp files with content
+ let stdout_tmp = base_dir.join("stdout.tmp");
+ let stderr_tmp = base_dir.join("stderr.tmp");
+ fs::write(&stdout_tmp, "build output").await.unwrap();
+ fs::write(&stderr_tmp, "warning message").await.unwrap();
+
+ let result = save_build_log(&log_dir, &meta, &stdout_tmp, &stderr_tmp).await;
+
+ assert!(result.is_ok(), "save_build_log should succeed: {result:?}");
+ let log_path = result.unwrap();
+
+ // Verify file exists at expected path
+ assert_eq!(
+ log_path,
+ log_dir.join("test-site/20260126-143000-123456.log")
+ );
+ assert!(log_path.exists(), "log file should exist");
+
+ // Verify content
+ let content = fs::read_to_string(&log_path).await.unwrap();
+ assert!(content.contains("=== BUILD LOG ==="));
+ assert!(content.contains("Site: test-site"));
+ assert!(content.contains("Timestamp: 20260126-143000-123456"));
+ assert!(content.contains("Git Commit: abc123d"));
+ assert!(content.contains("Image: node:20-alpine"));
+ assert!(content.contains("Duration: 45s"));
+ assert!(content.contains("Status: success"));
+ assert!(content.contains("=== STDOUT ==="));
+ assert!(content.contains("build output"));
+ assert!(content.contains("=== STDERR ==="));
+ assert!(content.contains("warning message"));
+
+ // Verify temp files were deleted
+ assert!(!stdout_tmp.exists(), "stdout temp file should be deleted");
+ assert!(!stderr_tmp.exists(), "stderr temp file should be deleted");
+
+ cleanup(&base_dir).await;
+ }
+
+ #[tokio::test]
+ async fn save_build_log_handles_empty_output() {
+ let base_dir = temp_dir("logs-test").await;
+ let log_dir = base_dir.join("logs");
+
+ let meta = BuildLogMeta {
+ site_name: "empty-site".to_owned(),
+ timestamp: "20260126-150000-000000".to_owned(),
+ git_commit: None,
+ container_image: "alpine:latest".to_owned(),
+ duration: Duration::from_secs(5),
+ exit_status: BuildExitStatus::Success,
+ };
+
+ let stdout_tmp = base_dir.join("stdout.tmp");
+ let stderr_tmp = base_dir.join("stderr.tmp");
+ fs::write(&stdout_tmp, "").await.unwrap();
+ fs::write(&stderr_tmp, "").await.unwrap();
+
+ let result = save_build_log(&log_dir, &meta, &stdout_tmp, &stderr_tmp).await;
+
+ assert!(result.is_ok(), "save_build_log should succeed: {result:?}");
+ let log_path = result.unwrap();
+
+ let content = fs::read_to_string(&log_path).await.unwrap();
+ assert!(content.contains("Git Commit: unknown"));
+ assert!(content.contains("=== STDOUT ===\n\n"));
+ assert!(content.contains("=== STDERR ===\n\n"));
+
+ cleanup(&base_dir).await;
+ }
+
+ #[tokio::test]
+ async fn save_build_log_failed_status() {
+ let base_dir = temp_dir("logs-test").await;
+ let log_dir = base_dir.join("logs");
+
+ let meta = BuildLogMeta {
+ site_name: "failed-site".to_owned(),
+ timestamp: "20260126-160000-000000".to_owned(),
+ git_commit: Some("def456".to_owned()),
+ container_image: "node:18".to_owned(),
+ duration: Duration::from_secs(120),
+ exit_status: BuildExitStatus::Failed {
+ exit_code: Some(1),
+ error: "npm install failed".to_owned(),
+ },
+ };
+
+ let stdout_tmp = base_dir.join("stdout.tmp");
+ let stderr_tmp = base_dir.join("stderr.tmp");
+ fs::write(&stdout_tmp, "").await.unwrap();
+ fs::write(&stderr_tmp, "Error: ENOENT").await.unwrap();
+
+ let result = save_build_log(&log_dir, &meta, &stdout_tmp, &stderr_tmp).await;
+
+ assert!(result.is_ok());
+ let log_path = result.unwrap();
+
+ let content = fs::read_to_string(&log_path).await.unwrap();
+ assert!(content.contains("Duration: 2m 0s"));
+ assert!(content.contains("Status: failed (exit code: 1): npm install failed"));
+
+ cleanup(&base_dir).await;
+ }
+
+ #[tokio::test]
+ async fn save_build_log_deletes_temp_files() {
+ let base_dir = temp_dir("logs-test").await;
+ let log_dir = base_dir.join("logs");
+
+ let meta = BuildLogMeta {
+ site_name: "temp-test".to_owned(),
+ timestamp: "20260126-170000-000000".to_owned(),
+ git_commit: None,
+ container_image: "alpine:latest".to_owned(),
+ duration: Duration::from_secs(1),
+ exit_status: BuildExitStatus::Success,
+ };
+
+ let stdout_tmp = base_dir.join("stdout.tmp");
+ let stderr_tmp = base_dir.join("stderr.tmp");
+ fs::write(&stdout_tmp, "some output").await.unwrap();
+ fs::write(&stderr_tmp, "some errors").await.unwrap();
+
+ assert!(stdout_tmp.exists());
+ assert!(stderr_tmp.exists());
+
+ let result = save_build_log(&log_dir, &meta, &stdout_tmp, &stderr_tmp).await;
+ assert!(result.is_ok());
+
+ // Temp files must be gone
+ assert!(!stdout_tmp.exists(), "stdout temp file should be deleted");
+ assert!(!stderr_tmp.exists(), "stderr temp file should be deleted");
+
+ cleanup(&base_dir).await;
+ }
+
+ #[tokio::test]
+ async fn get_git_commit_returns_short_hash() {
+ let temp = temp_dir("logs-test").await;
+
+ // Initialize a git repo
+ git_cmd()
+ .args(["init"])
+ .current_dir(&temp)
+ .output()
+ .await
+ .unwrap();
+
+ // Configure git user for commit
+ git_cmd()
+ .args(["config", "user.email", "test@test.com"])
+ .current_dir(&temp)
+ .output()
+ .await
+ .unwrap();
+ git_cmd()
+ .args(["config", "user.name", "Test"])
+ .current_dir(&temp)
+ .output()
+ .await
+ .unwrap();
+
+ // Create a file and commit
+ fs::write(temp.join("file.txt"), "content").await.unwrap();
+ git_cmd()
+ .args(["add", "."])
+ .current_dir(&temp)
+ .output()
+ .await
+ .unwrap();
+ git_cmd()
+ .args(["commit", "-m", "initial"])
+ .current_dir(&temp)
+ .output()
+ .await
+ .unwrap();
+
+ let commit = get_git_commit(&temp).await;
+
+ assert!(commit.is_some(), "should return commit hash");
+ let hash = commit.unwrap();
+ assert!(!hash.is_empty(), "hash should not be empty");
+ assert!(hash.len() >= 7, "short hash should be at least 7 chars");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn get_git_commit_returns_none_for_non_repo() {
+ let temp = temp_dir("logs-test").await;
+
+ // No git init - just an empty directory
+ let commit = get_git_commit(&temp).await;
+
+ assert!(commit.is_none(), "should return None for non-git directory");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn save_hook_log_creates_file_with_correct_content() {
+ let base_dir = temp_dir("logs-test").await;
+ let log_dir = base_dir.join("logs");
+
+ let stdout_tmp = base_dir.join("hook-stdout.tmp");
+ let stderr_tmp = base_dir.join("hook-stderr.tmp");
+ fs::write(&stdout_tmp, "hook output").await.unwrap();
+ fs::write(&stderr_tmp, "").await.unwrap();
+
+ let hook_result = HookResult {
+ command: vec!["touch".to_owned(), "marker".to_owned()],
+ stdout_file: stdout_tmp.clone(),
+ stderr_file: stderr_tmp.clone(),
+ last_stderr: String::new(),
+ exit_code: Some(0),
+ duration: Duration::from_secs(1),
+ success: true,
+ };
+
+ let result = save_hook_log(
+ &log_dir,
+ "test-site",
+ "20260202-120000-000000",
+ &hook_result,
+ )
+ .await;
+ assert!(result.is_ok());
+ let log_path = result.unwrap();
+
+ assert_eq!(
+ log_path,
+ log_dir.join("test-site/20260202-120000-000000-hook.log")
+ );
+ assert!(log_path.exists());
+
+ let content = fs::read_to_string(&log_path).await.unwrap();
+ assert!(content.contains("=== HOOK LOG ==="));
+ assert!(content.contains("Site: test-site"));
+ assert!(content.contains("Command: touch marker"));
+ assert!(content.contains("Status: success"));
+ assert!(content.contains("=== STDOUT ==="));
+ assert!(content.contains("hook output"));
+
+ // Temp files should be deleted
+ assert!(!stdout_tmp.exists());
+ assert!(!stderr_tmp.exists());
+
+ cleanup(&base_dir).await;
+ }
+
+ #[tokio::test]
+ async fn save_hook_log_failure_status() {
+ let base_dir = temp_dir("logs-test").await;
+ let log_dir = base_dir.join("logs");
+
+ let stdout_tmp = base_dir.join("hook-stdout.tmp");
+ let stderr_tmp = base_dir.join("hook-stderr.tmp");
+ fs::write(&stdout_tmp, "").await.unwrap();
+ fs::write(&stderr_tmp, "error output").await.unwrap();
+
+ let hook_result = HookResult {
+ command: vec!["false".to_owned()],
+ stdout_file: stdout_tmp,
+ stderr_file: stderr_tmp,
+ last_stderr: "error output".to_owned(),
+ exit_code: Some(1),
+ duration: Duration::from_secs(0),
+ success: false,
+ };
+
+ let result = save_hook_log(
+ &log_dir,
+ "test-site",
+ "20260202-120000-000000",
+ &hook_result,
+ )
+ .await;
+ assert!(result.is_ok());
+ let log_path = result.unwrap();
+
+ let content = fs::read_to_string(&log_path).await.unwrap();
+ assert!(content.contains("Status: failed (exit code 1)"));
+ assert!(content.contains("error output"));
+
+ cleanup(&base_dir).await;
+ }
+
+ #[tokio::test]
+ async fn save_hook_log_signal_status() {
+ let base_dir = temp_dir("logs-test").await;
+ let log_dir = base_dir.join("logs");
+
+ let stdout_tmp = base_dir.join("hook-stdout.tmp");
+ let stderr_tmp = base_dir.join("hook-stderr.tmp");
+ fs::write(&stdout_tmp, "").await.unwrap();
+ fs::write(&stderr_tmp, "post-deploy hook timed out after 30s")
+ .await
+ .unwrap();
+
+ let hook_result = HookResult {
+ command: vec!["sleep".to_owned(), "100".to_owned()],
+ stdout_file: stdout_tmp,
+ stderr_file: stderr_tmp,
+ last_stderr: String::new(),
+ exit_code: None,
+ duration: Duration::from_secs(30),
+ success: false,
+ };
+
+ let result = save_hook_log(
+ &log_dir,
+ "test-site",
+ "20260202-120000-000000",
+ &hook_result,
+ )
+ .await;
+ assert!(result.is_ok());
+ let log_path = result.unwrap();
+
+ let content = fs::read_to_string(&log_path).await.unwrap();
+ assert!(content.contains("Status: failed (signal)"));
+ assert!(content.contains("timed out"));
+
+ cleanup(&base_dir).await;
+ }
+
+ // --- parse_log_header tests ---
+
+ #[test]
+ fn parse_log_header_success() {
+ let content = "\
+=== BUILD LOG ===
+Site: my-site
+Timestamp: 20260126-143000-123456
+Git Commit: abc123d
+Image: node:20-alpine
+Duration: 45s
+Status: success
+
+=== STDOUT ===
+build output
+";
+ let header = parse_log_header(content).unwrap();
+ assert_eq!(header.site_name, "my-site");
+ assert_eq!(header.timestamp, "20260126-143000-123456");
+ assert_eq!(header.git_commit, "abc123d");
+ assert_eq!(header.image, "node:20-alpine");
+ assert_eq!(header.duration, "45s");
+ assert_eq!(header.status, "success");
+ }
+
+ #[test]
+ fn parse_log_header_failed_build() {
+ let content = "\
+=== BUILD LOG ===
+Site: fail-site
+Timestamp: 20260126-160000-000000
+Git Commit: def456
+Image: node:18
+Duration: 2m 0s
+Status: failed (exit code: 42): build error
+";
+ let header = parse_log_header(content).unwrap();
+ assert_eq!(header.status, "failed (exit code: 42): build error");
+ assert_eq!(header.duration, "2m 0s");
+ }
+
+ #[test]
+ fn parse_log_header_unknown_commit() {
+ let content = "\
+=== BUILD LOG ===
+Site: test-site
+Timestamp: 20260126-150000-000000
+Git Commit: unknown
+Image: alpine:latest
+Duration: 5s
+Status: success
+";
+ let header = parse_log_header(content).unwrap();
+ assert_eq!(header.git_commit, "unknown");
+ }
+
+ #[test]
+ fn parse_log_header_malformed() {
+ let content = "This is not a valid log file\nSome random text\n";
+ let header = parse_log_header(content);
+ assert!(header.is_none());
+ }
+
+ #[test]
+ fn parse_hook_status_success() {
+ let content = "\
+=== HOOK LOG ===
+Site: test-site
+Timestamp: 20260202-120000-000000
+Command: touch marker
+Duration: 1s
+Status: success
+";
+ assert_eq!(parse_hook_status(content), Some(true));
+ }
+
+ #[test]
+ fn parse_hook_status_failed() {
+ let content = "\
+=== HOOK LOG ===
+Site: test-site
+Timestamp: 20260202-120000-000000
+Command: false
+Duration: 0s
+Status: failed (exit code 1)
+";
+ assert_eq!(parse_hook_status(content), Some(false));
+ }
+}
diff --git a/src/main.rs b/src/main.rs
new file mode 100644
index 0000000..b153297
--- /dev/null
+++ b/src/main.rs
@@ -0,0 +1,422 @@
+use anyhow::{Context as _, Result, bail};
+use clap::Parser as _;
+use tracing::{info, warn};
+use tracing_subscriber::EnvFilter;
+use witryna::cli::{Cli, Command};
+use witryna::config;
+use witryna::logs::{self, DeploymentStatus};
+use witryna::{pipeline, server};
+
+#[tokio::main]
+async fn main() -> Result<()> {
+ let cli = Cli::parse();
+ let config_path = config::discover_config(cli.config.as_deref())?;
+
+ match cli.command {
+ Command::Serve => run_serve(config_path).await,
+ Command::Validate => run_validate(config_path).await,
+ Command::Run { site, verbose } => run_run(config_path, site, verbose).await,
+ Command::Status { site, json } => run_status(config_path, site, json).await,
+ }
+}
+
+async fn run_serve(config_path: std::path::PathBuf) -> Result<()> {
+ let config = config::Config::load(&config_path).await?;
+
+ // Initialize tracing with the configured log level.
+ // The RUST_LOG env var takes precedence if set.
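+ // For example, `RUST_LOG=debug` (or a target filter such as
+ // `RUST_LOG=witryna=debug`) overrides the config file's `log_level`.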
+ let filter = EnvFilter::try_from_default_env()
+ .unwrap_or_else(|_| EnvFilter::new(config.log_level_filter().to_string()));
+ tracing_subscriber::fmt().with_env_filter(filter).init();
+
+ info!(
+ listen_address = %config.listen_address,
+ container_runtime = %config.container_runtime,
+ base_dir = %config.base_dir.display(),
+ log_dir = %config.log_dir.display(),
+ log_level = %config.log_level,
+ sites_count = config.sites.len(),
+ "loaded configuration"
+ );
+
+ for site in &config.sites {
+ if site.webhook_token.is_empty() {
+ warn!(
+ name = %site.name,
+ "webhook authentication disabled (no token configured)"
+ );
+ }
+ if let Some(interval) = site.poll_interval {
+ info!(
+ name = %site.name,
+ repo_url = %site.repo_url,
+ branch = %site.branch,
+ poll_interval_secs = interval.as_secs(),
+ "configured site with polling"
+ );
+ } else {
+ info!(
+ name = %site.name,
+ repo_url = %site.repo_url,
+ branch = %site.branch,
+ "configured site (webhook-only)"
+ );
+ }
+ }
+
+ server::run(config, config_path).await
+}
+
+#[allow(clippy::print_stderr)] // CLI validation output goes to stderr
+async fn run_validate(config_path: std::path::PathBuf) -> Result<()> {
+ let config = config::Config::load(&config_path).await?;
+ eprintln!("{}", format_validate_summary(&config, &config_path));
+ Ok(())
+}
+
+#[allow(clippy::print_stderr)] // CLI output goes to stderr
+async fn run_run(config_path: std::path::PathBuf, site_name: String, verbose: bool) -> Result<()> {
+ let config = config::Config::load(&config_path).await?;
+
+ let site = config
+ .find_site(&site_name)
+ .with_context(|| {
+ format!(
+ "site '{}' not found in {}",
+ site_name,
+ config_path.display()
+ )
+ })?
+ .clone();
+
+ // Initialize tracing: compact stderr, DEBUG when verbose
+ let level = if verbose { "debug" } else { "info" };
+ let filter = EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new(level));
+ tracing_subscriber::fmt()
+ .with_env_filter(filter)
+ .with_writer(std::io::stderr)
+ .init();
+
+ eprintln!(
+ "Building site: {} (repo: {}, branch: {})",
+ site_name, site.repo_url, site.branch
+ );
+
+ let git_timeout = config
+ .git_timeout
+ .unwrap_or(witryna::git::GIT_TIMEOUT_DEFAULT);
+
+ let result = pipeline::run_build(
+ &site_name,
+ &site,
+ &config.base_dir,
+ &config.log_dir,
+ &config.container_runtime,
+ config.max_builds_to_keep,
+ git_timeout,
+ verbose,
+ )
+ .await?;
+
+ eprintln!(
+ "Build succeeded in {} — {}",
+ logs::format_duration(result.duration),
+ result.build_dir.display(),
+ );
+ eprintln!("Log: {}", result.log_file.display());
+
+ Ok(())
+}
+
+#[allow(clippy::print_stdout)] // CLI status output goes to stdout (pipeable)
+async fn run_status(
+ config_path: std::path::PathBuf,
+ site_filter: Option<String>,
+ json: bool,
+) -> Result<()> {
+ let config = config::Config::load(&config_path).await?;
+
+ if let Some(name) = &site_filter
+ && config.find_site(name).is_none()
+ {
+ bail!("site '{}' not found in {}", name, config_path.display());
+ }
+
+ let mut statuses: Vec<DeploymentStatus> = Vec::new();
+
+ match &site_filter {
+ Some(name) => {
+ // Show last 10 deployments for a single site
+ let site_logs = logs::list_site_logs(&config.log_dir, name).await?;
+ for (ts, path) in site_logs.into_iter().take(10) {
+ let ds = logs::get_deployment_status(&config.log_dir, name, &ts, &path).await?;
+ statuses.push(ds);
+ }
+ }
+ None => {
+ // Show latest deployment for each site
+ for site in &config.sites {
+ let site_logs = logs::list_site_logs(&config.log_dir, &site.name).await?;
+ if let Some((ts, path)) = site_logs.into_iter().next() {
+ let ds = logs::get_deployment_status(&config.log_dir, &site.name, &ts, &path)
+ .await?;
+ statuses.push(ds);
+ } else {
+ statuses.push(DeploymentStatus {
+ site_name: site.name.clone(),
+ timestamp: "-".to_owned(),
+ git_commit: "-".to_owned(),
+ duration: "-".to_owned(),
+ status: "-".to_owned(),
+ log: "(no builds)".to_owned(),
+ });
+ }
+ }
+ }
+ }
+
+ if json {
+ #[allow(clippy::expect_used)] // DeploymentStatus serialization cannot fail
+ let output = serde_json::to_string_pretty(&statuses)
+ .expect("DeploymentStatus serialization cannot fail");
+ println!("{output}");
+ } else {
+ print!("{}", format_status_table(&statuses));
+ }
+
+ Ok(())
+}
+
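+/// Render deployment statuses as a fixed-width table.
+///
+/// Illustrative output (values taken from the tests below; column widths are
+/// approximate):
+///
+/// ```text
+/// SITE    STATUS      COMMIT  DURATION TIMESTAMP                LOG
+/// my-site success     abc123d 45s      20260126-143000-123456  /var/log/witryna/my-site/20260126-143000-123456.log
+/// ```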
+fn format_status_table(statuses: &[DeploymentStatus]) -> String {
+ use std::fmt::Write as _;
+
+ let site_width = statuses
+ .iter()
+ .map(|s| s.site_name.len())
+ .max()
+ .unwrap_or(4)
+ .max(4);
+
+ let mut out = String::new();
+ let _ = writeln!(
+ out,
+ "{:<site_width$} {:<11} {:<7} {:<8} {:<24} LOG",
+ "SITE", "STATUS", "COMMIT", "DURATION", "TIMESTAMP"
+ );
+
+ for s in statuses {
+ let _ = writeln!(
+ out,
+ "{:<site_width$} {:<11} {:<7} {:<8} {:<24} {}",
+ s.site_name, s.status, s.git_commit, s.duration, s.timestamp, s.log
+ );
+ }
+
+ out
+}
+
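+/// Render the `validate` summary for a loaded configuration.
+///
+/// Illustrative output (shape asserted by the tests below):
+///
+/// ```text
+/// Configuration valid: witryna.toml
+///   Listen: 127.0.0.1:8080
+///   Runtime: podman
+///   Sites: 1
+///     - my-site (https://github.com/user/my-site.git, branch: main)
+/// ```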
+fn format_validate_summary(config: &config::Config, path: &std::path::Path) -> String {
+ use std::fmt::Write as _;
+ let mut out = String::new();
+ let _ = writeln!(out, "Configuration valid: {}", path.display());
+ let _ = writeln!(out, " Listen: {}", config.listen_address);
+ let _ = writeln!(out, " Runtime: {}", config.container_runtime);
+ let _ = write!(out, " Sites: {}", config.sites.len());
+ for site in &config.sites {
+ let _ = write!(
+ out,
+ "\n - {} ({}, branch: {})",
+ site.name, site.repo_url, site.branch
+ );
+ }
+ out
+}
+
+#[cfg(test)]
+#[allow(clippy::unwrap_used, clippy::indexing_slicing)]
+mod tests {
+ use super::*;
+ use std::path::PathBuf;
+ use witryna::config::{BuildOverrides, Config, SiteConfig};
+ use witryna::logs::DeploymentStatus;
+
+ fn test_config(sites: Vec<SiteConfig>) -> Config {
+ Config {
+ listen_address: "127.0.0.1:8080".to_owned(),
+ container_runtime: "podman".to_owned(),
+ base_dir: PathBuf::from("/var/lib/witryna"),
+ log_dir: PathBuf::from("/var/log/witryna"),
+ log_level: "info".to_owned(),
+ rate_limit_per_minute: 10,
+ max_builds_to_keep: 5,
+ git_timeout: None,
+ sites,
+ }
+ }
+
+ fn test_site(name: &str, repo_url: &str, branch: &str) -> SiteConfig {
+ SiteConfig {
+ name: name.to_owned(),
+ repo_url: repo_url.to_owned(),
+ branch: branch.to_owned(),
+ webhook_token: "token".to_owned(),
+ webhook_token_file: None,
+ build_overrides: BuildOverrides::default(),
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: None,
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: "none".to_owned(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ }
+ }
+
+ #[test]
+ fn validate_summary_single_site() {
+ let config = test_config(vec![test_site(
+ "my-site",
+ "https://github.com/user/my-site.git",
+ "main",
+ )]);
+ let output = format_validate_summary(&config, &PathBuf::from("witryna.toml"));
+ assert!(output.contains("Configuration valid: witryna.toml"));
+ assert!(output.contains("Listen: 127.0.0.1:8080"));
+ assert!(output.contains("Runtime: podman"));
+ assert!(output.contains("Sites: 1"));
+ assert!(output.contains("my-site (https://github.com/user/my-site.git, branch: main)"));
+ }
+
+ #[test]
+ fn validate_summary_multiple_sites() {
+ let config = test_config(vec![
+ test_site("site-one", "https://github.com/user/site-one.git", "main"),
+ test_site(
+ "site-two",
+ "https://github.com/user/site-two.git",
+ "develop",
+ ),
+ ]);
+ let output = format_validate_summary(&config, &PathBuf::from("/etc/witryna.toml"));
+ assert!(output.contains("Sites: 2"));
+ assert!(output.contains("site-one (https://github.com/user/site-one.git, branch: main)"));
+ assert!(
+ output.contains("site-two (https://github.com/user/site-two.git, branch: develop)")
+ );
+ }
+
+ #[test]
+ fn validate_summary_no_sites() {
+ let config = test_config(vec![]);
+ let output = format_validate_summary(&config, &PathBuf::from("witryna.toml"));
+ assert!(output.contains("Sites: 0"));
+ assert!(!output.contains(" -"));
+ }
+
+ #[test]
+ fn validate_summary_runtime_shows_value() {
+ let config = test_config(vec![]);
+ let output = format_validate_summary(&config, &PathBuf::from("witryna.toml"));
+ assert!(output.contains("Runtime: podman"));
+ }
+
+ // --- format_status_table tests ---
+
+ fn test_deployment(
+ site_name: &str,
+ status: &str,
+ commit: &str,
+ duration: &str,
+ timestamp: &str,
+ log: &str,
+ ) -> DeploymentStatus {
+ DeploymentStatus {
+ site_name: site_name.to_owned(),
+ timestamp: timestamp.to_owned(),
+ git_commit: commit.to_owned(),
+ duration: duration.to_owned(),
+ status: status.to_owned(),
+ log: log.to_owned(),
+ }
+ }
+
+ #[test]
+ fn format_status_table_single_site_success() {
+ let statuses = vec![test_deployment(
+ "my-site",
+ "success",
+ "abc123d",
+ "45s",
+ "20260126-143000-123456",
+ "/var/log/witryna/my-site/20260126-143000-123456.log",
+ )];
+ let output = format_status_table(&statuses);
+ assert!(output.contains("SITE"));
+ assert!(output.contains("STATUS"));
+ assert!(output.contains("my-site"));
+ assert!(output.contains("success"));
+ assert!(output.contains("abc123d"));
+ assert!(output.contains("45s"));
+ }
+
+ #[test]
+ fn format_status_table_no_builds() {
+ let statuses = vec![test_deployment(
+ "empty-site",
+ "-",
+ "-",
+ "-",
+ "-",
+ "(no builds)",
+ )];
+ let output = format_status_table(&statuses);
+ assert!(output.contains("empty-site"));
+ assert!(output.contains("(no builds)"));
+ }
+
+ #[test]
+ fn format_status_table_multiple_sites() {
+ let statuses = vec![
+ test_deployment(
+ "site-one",
+ "success",
+ "abc123d",
+ "45s",
+ "20260126-143000-123456",
+ "/logs/site-one/20260126-143000-123456.log",
+ ),
+ test_deployment(
+ "site-two",
+ "failed",
+ "def456",
+ "2m 0s",
+ "20260126-160000-000000",
+ "/logs/site-two/20260126-160000-000000.log",
+ ),
+ ];
+ let output = format_status_table(&statuses);
+ assert!(output.contains("site-one"));
+ assert!(output.contains("site-two"));
+ assert!(output.contains("success"));
+ assert!(output.contains("failed"));
+ }
+
+ #[test]
+ fn format_status_table_hook_failed() {
+ let statuses = vec![test_deployment(
+ "hook-site",
+ "hook failed",
+ "abc123d",
+ "12s",
+ "20260126-143000-123456",
+ "/logs/hook-site/20260126-143000-123456.log",
+ )];
+ let output = format_status_table(&statuses);
+ assert!(output.contains("hook failed"));
+ }
+}
diff --git a/src/pipeline.rs b/src/pipeline.rs
new file mode 100644
index 0000000..5827ad7
--- /dev/null
+++ b/src/pipeline.rs
@@ -0,0 +1,328 @@
+use crate::config::SiteConfig;
+use crate::logs::{BuildExitStatus, BuildLogMeta};
+use crate::{build, cleanup, git, hook, logs, publish, repo_config};
+use anyhow::Result;
+use chrono::Utc;
+use std::path::{Path, PathBuf};
+use std::time::{Duration, Instant};
+use tracing::{error, info, warn};
+
+/// Result of a successful pipeline run.
+pub struct PipelineResult {
+ pub build_dir: PathBuf,
+ pub log_file: PathBuf,
+ pub timestamp: String,
+ pub duration: Duration,
+}
+
+/// Run the complete build pipeline: git sync → build → publish.
+///
+/// This is the core pipeline logic shared by both the HTTP server and the CLI
+/// `run` command. The server wraps this with `BuildGuard` for concurrency
+/// control; the CLI calls it directly.
+///
+/// # Errors
+///
+/// Returns an error on git sync failure, config load failure, build failure,
+/// or publish failure. Post-deploy hook and cleanup failures are non-fatal
+/// (logged as warnings).
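+///
+/// # Example
+///
+/// A minimal sketch of a direct call, as the CLI `run` command does (the paths
+/// and the `site` value are placeholders):
+///
+/// ```rust,ignore
+/// let result = run_build(
+///     "my-site", &site,
+///     Path::new("/var/lib/witryna"), Path::new("/var/log/witryna"),
+///     "podman", 5, git::GIT_TIMEOUT_DEFAULT, /* verbose */ false,
+/// ).await?;
+/// println!("published {}", result.build_dir.display());
+/// ```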
+#[allow(clippy::too_many_arguments, clippy::too_many_lines)]
+pub async fn run_build(
+ site_name: &str,
+ site: &SiteConfig,
+ base_dir: &Path,
+ log_dir: &Path,
+ container_runtime: &str,
+ max_builds_to_keep: u32,
+ git_timeout: Duration,
+ verbose: bool,
+) -> Result<PipelineResult> {
+ let timestamp = Utc::now().format("%Y%m%d-%H%M%S-%f").to_string();
+ let start_time = Instant::now();
+
+ let clone_dir = base_dir.join("clones").join(site_name);
+
+ // 1. Sync git repository
+ info!(%site_name, "syncing repository");
+ if let Err(e) = git::sync_repo(
+ &site.repo_url,
+ &site.branch,
+ &clone_dir,
+ git_timeout,
+ site.git_depth.unwrap_or(git::GIT_DEPTH_DEFAULT),
+ )
+ .await
+ {
+ error!(%site_name, error = %e, "git sync failed");
+ save_build_log_for_error(
+ log_dir,
+ site_name,
+ &timestamp,
+ start_time,
+ None,
+ "git-sync",
+ &e.to_string(),
+ )
+ .await;
+ return Err(e.context("git sync failed"));
+ }
+
+ // Get git commit hash for logging
+ let git_commit = logs::get_git_commit(&clone_dir).await;
+
+ // 2. Load repo config (witryna.yaml) with overrides from witryna.toml
+ let repo_config = match repo_config::RepoConfig::load_with_overrides(
+ &clone_dir,
+ &site.build_overrides,
+ site.config_file.as_deref(),
+ )
+ .await
+ {
+ Ok(config) => config,
+ Err(e) => {
+ error!(%site_name, error = %e, "failed to load repo config");
+ save_build_log_for_error(
+ log_dir,
+ site_name,
+ &timestamp,
+ start_time,
+ git_commit,
+ "config-load",
+ &e.to_string(),
+ )
+ .await;
+ return Err(e.context("failed to load repo config"));
+ }
+ };
+
+ // 3. Prepare cache volumes
+ let cache_volumes = match &site.cache_dirs {
+ Some(dirs) if !dirs.is_empty() => {
+ let mut volumes = Vec::with_capacity(dirs.len());
+ for dir in dirs {
+ let sanitized = crate::config::sanitize_cache_dir_name(dir);
+ let host_path = base_dir.join("cache").join(site_name).join(&sanitized);
+ if let Err(e) = tokio::fs::create_dir_all(&host_path).await {
+ error!(%site_name, path = %host_path.display(), error = %e, "failed to create cache directory");
+ anyhow::bail!("failed to create cache directory: {e}");
+ }
+ volumes.push((dir.clone(), host_path));
+ }
+ let mount_list: Vec<_> = volumes
+ .iter()
+ .map(|(c, h)| format!("{}:{}", h.display(), c))
+ .collect();
+ info!(%site_name, mounts = ?mount_list, "mounting cache volumes");
+ volumes
+ }
+ _ => Vec::new(),
+ };
+
+ // 4. Execute build — stream output to temp files
+ let site_log_dir = log_dir.join(site_name);
+ if let Err(e) = tokio::fs::create_dir_all(&site_log_dir).await {
+ error!(%site_name, error = %e, "failed to create log directory");
+ anyhow::bail!("failed to create log directory: {e}");
+ }
+ let stdout_tmp = site_log_dir.join(format!("{timestamp}-stdout.tmp"));
+ let stderr_tmp = site_log_dir.join(format!("{timestamp}-stderr.tmp"));
+
+ let env = site.env.clone().unwrap_or_default();
+ let timeout = site.build_timeout.unwrap_or(build::BUILD_TIMEOUT_DEFAULT);
+ let options = build::ContainerOptions {
+ memory: site.container_memory.clone(),
+ cpus: site.container_cpus,
+ pids_limit: site.container_pids_limit,
+ network: site.container_network.clone(),
+ workdir: site.container_workdir.clone(),
+ };
+ info!(%site_name, image = %repo_config.image, "running container build");
+ let build_result = build::execute(
+ container_runtime,
+ &clone_dir,
+ &repo_config,
+ &cache_volumes,
+ &env,
+ &options,
+ &stdout_tmp,
+ &stderr_tmp,
+ timeout,
+ verbose,
+ )
+ .await;
+
+ // Determine exit status and extract temp file paths
+ let (exit_status, build_stdout_file, build_stderr_file, build_duration) = match &build_result {
+ Ok(result) => (
+ BuildExitStatus::Success,
+ result.stdout_file.clone(),
+ result.stderr_file.clone(),
+ result.duration,
+ ),
+ Err(e) => e.downcast_ref::<build::BuildFailure>().map_or_else(
+ || {
+ (
+ BuildExitStatus::Failed {
+ exit_code: None,
+ error: e.to_string(),
+ },
+ stdout_tmp.clone(),
+ stderr_tmp.clone(),
+ start_time.elapsed(),
+ )
+ },
+ |failure| {
+ (
+ BuildExitStatus::Failed {
+ exit_code: Some(failure.exit_code),
+ error: failure.to_string(),
+ },
+ failure.stdout_file.clone(),
+ failure.stderr_file.clone(),
+ failure.duration,
+ )
+ },
+ ),
+ };
+
+ // Ensure temp files exist for save_build_log (spawn errors may not create them)
+ if !build_stdout_file.exists() {
+ let _ = tokio::fs::File::create(&build_stdout_file).await;
+ }
+ if !build_stderr_file.exists() {
+ let _ = tokio::fs::File::create(&build_stderr_file).await;
+ }
+
+ // Save build log (always, success or failure) — streams from temp files
+ let meta = BuildLogMeta {
+ site_name: site_name.to_owned(),
+ timestamp: timestamp.clone(),
+ git_commit: git_commit.clone(),
+ container_image: repo_config.image.clone(),
+ duration: build_duration,
+ exit_status,
+ };
+
+ let log_file =
+ match logs::save_build_log(log_dir, &meta, &build_stdout_file, &build_stderr_file).await {
+ Ok(path) => path,
+ Err(e) => {
+ error!(%site_name, error = %e, "failed to save build log");
+ let _ = tokio::fs::remove_file(&build_stdout_file).await;
+ let _ = tokio::fs::remove_file(&build_stderr_file).await;
+ // Non-fatal for log save — continue if build succeeded
+ log_dir.join(site_name).join(format!("{timestamp}.log"))
+ }
+ };
+
+ // If build failed, return error
+ if let Err(e) = build_result {
+ error!(%site_name, "build failed");
+ return Err(e);
+ }
+
+ // 5. Publish assets (with same timestamp as log)
+ info!(%site_name, public = %repo_config.public, "publishing assets");
+ let publish_result = publish::publish(
+ base_dir,
+ site_name,
+ &clone_dir,
+ &repo_config.public,
+ &timestamp,
+ )
+ .await?;
+
+ info!(
+ %site_name,
+ build_dir = %publish_result.build_dir.display(),
+ timestamp = %publish_result.timestamp,
+ "deployment completed successfully"
+ );
+
+ // 6. Run post-deploy hook (non-fatal)
+ if let Some(hook_cmd) = &site.post_deploy {
+ info!(%site_name, "running post-deploy hook");
+ let hook_stdout_tmp = site_log_dir.join(format!("{timestamp}-hook-stdout.tmp"));
+ let hook_stderr_tmp = site_log_dir.join(format!("{timestamp}-hook-stderr.tmp"));
+ let public_dir = base_dir.join("builds").join(site_name).join("current");
+
+ let hook_result = hook::run_post_deploy_hook(
+ hook_cmd,
+ site_name,
+ &publish_result.build_dir,
+ &public_dir,
+ &timestamp,
+ &env,
+ &hook_stdout_tmp,
+ &hook_stderr_tmp,
+ )
+ .await;
+
+ if let Err(e) = logs::save_hook_log(log_dir, site_name, &timestamp, &hook_result).await {
+ error!(%site_name, error = %e, "failed to save hook log");
+ let _ = tokio::fs::remove_file(&hook_stdout_tmp).await;
+ let _ = tokio::fs::remove_file(&hook_stderr_tmp).await;
+ }
+
+ if hook_result.success {
+ info!(%site_name, "post-deploy hook completed");
+ } else {
+ warn!(
+ %site_name,
+ exit_code = ?hook_result.exit_code,
+ "post-deploy hook failed (non-fatal)"
+ );
+ }
+ }
+
+ // 7. Cleanup old builds (non-fatal if it fails)
+ if let Err(e) =
+ cleanup::cleanup_old_builds(base_dir, log_dir, site_name, max_builds_to_keep).await
+ {
+ warn!(%site_name, error = %e, "cleanup failed (non-fatal)");
+ }
+
+ let duration = start_time.elapsed();
+ Ok(PipelineResult {
+ build_dir: publish_result.build_dir,
+ log_file,
+ timestamp,
+ duration,
+ })
+}
+
+/// Save a build log for errors that occur before the build starts.
+async fn save_build_log_for_error(
+ log_dir: &Path,
+ site_name: &str,
+ timestamp: &str,
+ start_time: Instant,
+ git_commit: Option<String>,
+ phase: &str,
+ error_msg: &str,
+) {
+ let meta = BuildLogMeta {
+ site_name: site_name.to_owned(),
+ timestamp: timestamp.to_owned(),
+ git_commit,
+ container_image: format!("(failed at {phase})"),
+ duration: start_time.elapsed(),
+ exit_status: BuildExitStatus::Failed {
+ exit_code: None,
+ error: error_msg.to_owned(),
+ },
+ };
+
+ let site_log_dir = log_dir.join(site_name);
+ let _ = tokio::fs::create_dir_all(&site_log_dir).await;
+ let stdout_tmp = site_log_dir.join(format!("{timestamp}-stdout.tmp"));
+ let stderr_tmp = site_log_dir.join(format!("{timestamp}-stderr.tmp"));
+ let _ = tokio::fs::File::create(&stdout_tmp).await;
+ let _ = tokio::fs::File::create(&stderr_tmp).await;
+
+ if let Err(e) = logs::save_build_log(log_dir, &meta, &stdout_tmp, &stderr_tmp).await {
+ error!(%site_name, error = %e, "failed to save build log");
+ let _ = tokio::fs::remove_file(&stdout_tmp).await;
+ let _ = tokio::fs::remove_file(&stderr_tmp).await;
+ }
+}
diff --git a/src/polling.rs b/src/polling.rs
new file mode 100644
index 0000000..6c25326
--- /dev/null
+++ b/src/polling.rs
@@ -0,0 +1,242 @@
+//! Polling manager for periodic repository change detection.
+//!
+//! Spawns background tasks for sites with `poll_interval` configured.
+//! Integrates with SIGHUP reload to restart polling tasks on config change.
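+//!
+//! Illustrative use (a sketch; `state` is the server's shared `AppState`):
+//!
+//! ```rust,ignore
+//! let manager = PollingManager::new();
+//! manager.start_polling(state.clone()).await; // on startup
+//! // on SIGHUP: stop the old tasks, then restart with the reloaded config
+//! manager.stop_all().await;
+//! manager.start_polling(state).await;
+//! ```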
+
+use crate::build_guard::BuildGuard;
+use crate::config::SiteConfig;
+use crate::git;
+use crate::server::AppState;
+use std::collections::HashMap;
+use std::hash::{Hash as _, Hasher as _};
+use std::sync::Arc;
+use std::time::Duration;
+use tokio::sync::RwLock;
+use tokio_util::sync::CancellationToken;
+use tracing::{debug, error, info};
+
+/// Manages polling tasks for all sites.
+pub struct PollingManager {
+ /// Map of `site_name` -> cancellation token for active polling tasks
+ tasks: Arc<RwLock<HashMap<String, CancellationToken>>>,
+}
+
+impl PollingManager {
+ #[must_use]
+ pub fn new() -> Self {
+ Self {
+ tasks: Arc::new(RwLock::new(HashMap::new())),
+ }
+ }
+
+ /// Start polling tasks for sites with `poll_interval` configured.
+ /// Call this on startup and after SIGHUP reload.
+ pub async fn start_polling(&self, state: AppState) {
+ let config = state.config.read().await;
+
+ for site in &config.sites {
+ if let Some(interval) = site.poll_interval {
+ self.spawn_poll_task(state.clone(), site.clone(), interval)
+ .await;
+ }
+ }
+ }
+
+ /// Stop all currently running polling tasks.
+ /// Call this before starting new tasks on SIGHUP.
+ pub async fn stop_all(&self) {
+ let mut tasks = self.tasks.write().await;
+
+ for (site_name, token) in tasks.drain() {
+ info!(site = %site_name, "stopping polling task");
+ token.cancel();
+ }
+ }
+
+ /// Spawn a single polling task for a site.
+ async fn spawn_poll_task(&self, state: AppState, site: SiteConfig, interval: Duration) {
+ let site_name = site.name.clone();
+ let token = CancellationToken::new();
+
+ // Store the cancellation token
+ {
+ let mut tasks = self.tasks.write().await;
+ tasks.insert(site_name.clone(), token.clone());
+ }
+
+ info!(
+ site = %site_name,
+ interval_secs = interval.as_secs(),
+ "starting polling task"
+ );
+
+ // Spawn the polling loop
+ let tasks = Arc::clone(&self.tasks);
+ tokio::spawn(async move {
+ #[allow(clippy::large_futures)]
+ poll_loop(state, site, interval, token.clone()).await;
+
+ // Remove from active tasks when done
+ tasks.write().await.remove(&site_name);
+ debug!(site = %site_name, "polling task ended");
+ });
+ }
+}
+
+impl Default for PollingManager {
+ fn default() -> Self {
+ Self::new()
+ }
+}
+
+/// The main polling loop for a single site.
+async fn poll_loop(
+ state: AppState,
+ site: SiteConfig,
+ interval: Duration,
+ cancel_token: CancellationToken,
+) {
+ let site_name = &site.name;
+
+ // Initial delay before first poll (avoid thundering herd on startup)
+ let initial_delay = calculate_initial_delay(site_name, interval);
+ debug!(site = %site_name, delay_secs = initial_delay.as_secs(), "initial poll delay");
+
+ tokio::select! {
+ () = tokio::time::sleep(initial_delay) => {}
+ () = cancel_token.cancelled() => return,
+ }
+
+ loop {
+ debug!(site = %site_name, "polling for changes");
+
+ // 1. Acquire build lock before any git operation
+ let Some(guard) = BuildGuard::try_acquire(site_name.clone(), &state.build_scheduler) else {
+ debug!(site = %site_name, "build in progress, skipping poll cycle");
+ tokio::select! {
+ () = tokio::time::sleep(interval) => {}
+ () = cancel_token.cancelled() => {
+ info!(site = %site_name, "polling cancelled");
+ return;
+ }
+ }
+ continue;
+ };
+
+ // Get current config (might have changed via SIGHUP)
+ let (base_dir, git_timeout) = {
+ let config = state.config.read().await;
+ (
+ config.base_dir.clone(),
+ config.git_timeout.unwrap_or(git::GIT_TIMEOUT_DEFAULT),
+ )
+ };
+ let clone_dir = base_dir.join("clones").join(site_name);
+
+ // 2. Check for changes (guard held — no concurrent git ops possible)
+ let has_changes = match git::has_remote_changes(
+ &clone_dir,
+ &site.branch,
+ git_timeout,
+ site.git_depth.unwrap_or(git::GIT_DEPTH_DEFAULT),
+ )
+ .await
+ {
+ Ok(changed) => changed,
+ Err(e) => {
+ error!(site = %site_name, error = %e, "failed to check for changes");
+ false
+ }
+ };
+
+ if has_changes {
+ // 3a. Keep guard alive — move into build pipeline
+ info!(site = %site_name, "new commits detected, triggering build");
+ #[allow(clippy::large_futures)]
+ crate::server::run_build_pipeline(
+ state.clone(),
+ site_name.clone(),
+ site.clone(),
+ guard,
+ )
+ .await;
+ } else {
+ // 3b. Explicit drop BEFORE sleep — release lock immediately
+ drop(guard);
+ }
+
+ // 4. Sleep (lock is NOT held here in either branch)
+ tokio::select! {
+ () = tokio::time::sleep(interval) => {}
+ () = cancel_token.cancelled() => {
+ info!(site = %site_name, "polling cancelled");
+ return;
+ }
+ }
+ }
+}
+
+/// Calculate staggered initial delay to avoid all sites polling at once.
+/// Uses a simple hash of the site name to distribute start times.
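+///
+/// For example, with a 600 s interval the delay is deterministic for a given
+/// site name and always falls in `[0, 300)` seconds.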
+fn calculate_initial_delay(site_name: &str, interval: Duration) -> Duration {
+ use std::collections::hash_map::DefaultHasher;
+
+ let mut hasher = DefaultHasher::new();
+ site_name.hash(&mut hasher);
+ let hash = hasher.finish();
+
+ // Spread across 0 to interval/2
+ let max_delay_secs = interval.as_secs() / 2;
+ let delay_secs = if max_delay_secs > 0 {
+ hash % max_delay_secs
+ } else {
+ 0
+ };
+
+ Duration::from_secs(delay_secs)
+}
+
+#[cfg(test)]
+#[allow(clippy::unwrap_used)]
+mod tests {
+ use super::*;
+
+ #[test]
+ fn initial_delay_zero_interval() {
+ // interval=0 → max_delay_secs=0 → delay=0
+ let delay = calculate_initial_delay("site", Duration::from_secs(0));
+ assert_eq!(delay, Duration::from_secs(0));
+ }
+
+ #[test]
+ fn initial_delay_one_second_interval() {
+ // interval=1s → max_delay_secs=0 → delay=0
+ let delay = calculate_initial_delay("site", Duration::from_secs(1));
+ assert_eq!(delay, Duration::from_secs(0));
+ }
+
+ #[test]
+ fn initial_delay_within_half_interval() {
+ let interval = Duration::from_secs(600); // 10 min
+ let delay = calculate_initial_delay("my-site", interval);
+ // Must be < interval/2 (300s)
+ assert!(delay < Duration::from_secs(300));
+ }
+
+ #[test]
+ fn initial_delay_deterministic() {
+ let interval = Duration::from_secs(600);
+ let d1 = calculate_initial_delay("my-site", interval);
+ let d2 = calculate_initial_delay("my-site", interval);
+ assert_eq!(d1, d2);
+ }
+
+ #[test]
+ fn initial_delay_different_sites_differ() {
+ let interval = Duration::from_secs(3600);
+ let d1 = calculate_initial_delay("site-alpha", interval);
+ let d2 = calculate_initial_delay("site-beta", interval);
+ // Different names should (almost certainly) produce different delays
+ assert_ne!(d1, d2);
+ }
+}
diff --git a/src/publish.rs b/src/publish.rs
new file mode 100644
index 0000000..338a136
--- /dev/null
+++ b/src/publish.rs
@@ -0,0 +1,488 @@
+use anyhow::{Context as _, Result, bail};
+use std::path::{Path, PathBuf};
+use tracing::{debug, info};
+
+/// Result of a successful publish operation.
+#[derive(Debug)]
+pub struct PublishResult {
+ /// Path to the timestamped build directory containing the published assets.
+ pub build_dir: PathBuf,
+ /// Timestamp used for the build directory name.
+ pub timestamp: String,
+}
+
+/// Publish built assets with atomic symlink switching.
+///
+/// # Arguments
+/// * `base_dir` - Base witryna directory (e.g., /var/lib/witryna)
+/// * `site_name` - The site name (already validated)
+/// * `clone_dir` - Path to the cloned repository
+/// * `public` - Relative path to built assets within `clone_dir` (e.g., "dist")
+/// * `timestamp` - Timestamp string for the build directory (format: %Y%m%d-%H%M%S-%f)
+///
+/// # Errors
+///
+/// Returns an error if the source directory doesn't exist, the asset copy
+/// fails, or the atomic symlink switch fails.
+///
+/// # Workflow
+/// 1. Validate source directory exists
+/// 2. Create timestamped build directory: `{base_dir}/builds/{site_name}/{timestamp}`
+/// 3. Copy assets from `{clone_dir}/{public}/` to the timestamped directory
+/// 4. Atomic symlink switch: update `{base_dir}/builds/{site_name}/current`
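+///
+/// # Example
+///
+/// A minimal sketch of a direct call (paths and timestamp are placeholders):
+///
+/// ```rust,ignore
+/// let result = publish(
+///     Path::new("/var/lib/witryna"), "my-site",
+///     Path::new("/var/lib/witryna/clones/my-site"), "dist",
+///     "20260126-143000-123456",
+/// ).await?;
+/// // {base_dir}/builds/my-site/current now points at result.build_dir
+/// ```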
+pub async fn publish(
+ base_dir: &Path,
+ site_name: &str,
+ clone_dir: &Path,
+ public: &str,
+ timestamp: &str,
+) -> Result<PublishResult> {
+ // 1. Construct source path and validate it exists
+ let source_dir = clone_dir.join(public);
+ if !source_dir.exists() {
+ bail!("public directory does not exist");
+ }
+ if !source_dir.is_dir() {
+ bail!("public path is not a directory");
+ }
+
+ // 2. Create build directory with provided timestamp
+ let site_builds_dir = base_dir.join("builds").join(site_name);
+ let build_dir = site_builds_dir.join(timestamp);
+ let current_link = site_builds_dir.join("current");
+
+ info!(
+ source = %source_dir.display(),
+ destination = %build_dir.display(),
+ "publishing assets"
+ );
+
+ // 3. Create builds directory structure
+ tokio::fs::create_dir_all(&site_builds_dir)
+ .await
+ .with_context(|| {
+ format!(
+ "failed to create builds directory: {}",
+ site_builds_dir.display()
+ )
+ })?;
+
+ // 4. Copy assets recursively
+ copy_dir_contents(&source_dir, &build_dir)
+ .await
+ .context("failed to copy assets")?;
+
+ // 5. Atomic symlink switch
+ atomic_symlink_update(&build_dir, &current_link).await?;
+
+ debug!(
+ build_dir = %build_dir.display(),
+ symlink = %current_link.display(),
+ "publish completed"
+ );
+
+ Ok(PublishResult {
+ build_dir,
+ timestamp: timestamp.to_owned(),
+ })
+}
+
+async fn copy_dir_contents(src: &Path, dst: &Path) -> Result<()> {
+ tokio::fs::create_dir_all(dst)
+ .await
+ .with_context(|| format!("failed to create directory: {}", dst.display()))?;
+
+ // Preserve source directory permissions
+ let dir_metadata = tokio::fs::symlink_metadata(src).await?;
+ tokio::fs::set_permissions(dst, dir_metadata.permissions())
+ .await
+ .with_context(|| format!("failed to set permissions on {}", dst.display()))?;
+
+ let mut entries = tokio::fs::read_dir(src)
+ .await
+ .with_context(|| format!("failed to read directory: {}", src.display()))?;
+
+ while let Some(entry) = entries.next_entry().await? {
+ let entry_path = entry.path();
+ let dest_path = dst.join(entry.file_name());
+
+ // SEC-002: reject symlinks in build output to prevent symlink attacks
+ let metadata = tokio::fs::symlink_metadata(&entry_path).await?;
+ if metadata.file_type().is_symlink() {
+ tracing::warn!(path = %entry_path.display(), "skipping symlink in build output");
+ continue;
+ }
+
+ let file_type = entry.file_type().await?;
+
+ if file_type.is_dir() {
+ Box::pin(copy_dir_contents(&entry_path, &dest_path)).await?;
+ } else {
+ tokio::fs::copy(&entry_path, &dest_path)
+ .await
+ .with_context(|| {
+ format!(
+ "failed to copy {} to {}",
+ entry_path.display(),
+ dest_path.display()
+ )
+ })?;
+ // Preserve source file permissions
+ tokio::fs::set_permissions(&dest_path, metadata.permissions())
+ .await
+ .with_context(|| format!("failed to set permissions on {}", dest_path.display()))?;
+ }
+ }
+
+ Ok(())
+}
+
+/// Atomically update a symlink to point to a new target.
+///
+/// Uses the temp-symlink + rename pattern for atomicity:
+/// 1. Create temp symlink: `{link_path}.tmp` -> target
+/// 2. Rename temp to final: `{link_path}.tmp` -> `{link_path}`
+///
+/// The rename operation is atomic on POSIX filesystems.
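+///
+/// Roughly the shell idiom `ln -s <target> current.tmp && mv -T current.tmp current`:
+/// readers of the `current` link never observe a missing or half-updated link.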
+async fn atomic_symlink_update(target: &Path, link_path: &Path) -> Result<()> {
+ let temp_link = link_path.with_extension("tmp");
+
+ // Remove any stale temp symlink from previous failed attempts
+ let _ = tokio::fs::remove_file(&temp_link).await;
+
+ // Create temporary symlink pointing to target
+ tokio::fs::symlink(target, &temp_link)
+ .await
+ .with_context(|| "failed to create temporary symlink")?;
+
+ // Atomically rename temp symlink to final location
+ tokio::fs::rename(&temp_link, link_path)
+ .await
+ .with_context(|| "failed to atomically update symlink")?;
+
+ Ok(())
+}
+
+#[cfg(test)]
+#[allow(clippy::unwrap_used, clippy::indexing_slicing)]
+mod tests {
+ use super::*;
+ use crate::test_support::{cleanup, temp_dir};
+ use chrono::Utc;
+ use tokio::fs;
+
+ fn test_timestamp() -> String {
+ Utc::now().format("%Y%m%d-%H%M%S-%f").to_string()
+ }
+
+ #[tokio::test]
+ async fn publish_copies_assets_to_timestamped_directory() {
+ let base_dir = temp_dir("publish-test").await;
+ let clone_dir = temp_dir("publish-test").await;
+
+ // Create source assets
+ let source = clone_dir.join("dist");
+ fs::create_dir_all(&source).await.unwrap();
+ fs::write(source.join("index.html"), "<html>hello</html>")
+ .await
+ .unwrap();
+
+ let timestamp = test_timestamp();
+ let result = publish(&base_dir, "my-site", &clone_dir, "dist", &timestamp).await;
+
+ assert!(result.is_ok(), "publish should succeed: {result:?}");
+ let publish_result = result.unwrap();
+
+ // Verify timestamp is used for build directory
+ assert_eq!(publish_result.timestamp, timestamp);
+
+ // Verify assets were copied
+ let copied_file = publish_result.build_dir.join("index.html");
+ assert!(copied_file.exists(), "copied file should exist");
+ let content = fs::read_to_string(&copied_file).await.unwrap();
+ assert_eq!(content, "<html>hello</html>");
+
+ cleanup(&base_dir).await;
+ cleanup(&clone_dir).await;
+ }
+
+ #[tokio::test]
+ async fn publish_creates_current_symlink() {
+ let base_dir = temp_dir("publish-test").await;
+ let clone_dir = temp_dir("publish-test").await;
+
+ // Create source assets
+ let source = clone_dir.join("public");
+ fs::create_dir_all(&source).await.unwrap();
+ fs::write(source.join("file.txt"), "content").await.unwrap();
+
+ let timestamp = test_timestamp();
+ let result = publish(&base_dir, "test-site", &clone_dir, "public", &timestamp).await;
+
+ assert!(result.is_ok(), "publish should succeed: {result:?}");
+ let publish_result = result.unwrap();
+
+ // Verify current symlink exists and points to build dir
+ let current_link = base_dir.join("builds/test-site/current");
+ assert!(current_link.exists(), "current symlink should exist");
+
+ let link_target = fs::read_link(&current_link).await.unwrap();
+ assert_eq!(link_target, publish_result.build_dir);
+
+ cleanup(&base_dir).await;
+ cleanup(&clone_dir).await;
+ }
+
+ #[tokio::test]
+ async fn publish_symlink_updated_on_second_publish() {
+ let base_dir = temp_dir("publish-test").await;
+ let clone_dir = temp_dir("publish-test").await;
+
+ // Create source assets
+ let source = clone_dir.join("dist");
+ fs::create_dir_all(&source).await.unwrap();
+ fs::write(source.join("file.txt"), "v1").await.unwrap();
+
+ // First publish
+ let timestamp1 = "20260126-100000-000001".to_owned();
+ let result1 = publish(&base_dir, "my-site", &clone_dir, "dist", &timestamp1).await;
+ assert!(result1.is_ok());
+ let publish1 = result1.unwrap();
+
+ // Update source and publish again with different timestamp
+ fs::write(source.join("file.txt"), "v2").await.unwrap();
+
+ let timestamp2 = "20260126-100000-000002".to_owned();
+ let result2 = publish(&base_dir, "my-site", &clone_dir, "dist", &timestamp2).await;
+ assert!(result2.is_ok());
+ let publish2 = result2.unwrap();
+
+ // Verify symlink points to second build
+ let current_link = base_dir.join("builds/my-site/current");
+ let link_target = fs::read_link(&current_link).await.unwrap();
+ assert_eq!(link_target, publish2.build_dir);
+
+ // Verify both build directories still exist
+ assert!(
+ publish1.build_dir.exists(),
+ "first build should still exist"
+ );
+ assert!(publish2.build_dir.exists(), "second build should exist");
+
+ // Verify content is correct
+ let content = fs::read_to_string(publish2.build_dir.join("file.txt"))
+ .await
+ .unwrap();
+ assert_eq!(content, "v2");
+
+ cleanup(&base_dir).await;
+ cleanup(&clone_dir).await;
+ }
+
+ #[tokio::test]
+ async fn publish_missing_source_returns_error() {
+ let base_dir = temp_dir("publish-test").await;
+ let clone_dir = temp_dir("publish-test").await;
+
+ // Don't create source directory
+
+ let timestamp = test_timestamp();
+ let result = publish(&base_dir, "my-site", &clone_dir, "nonexistent", &timestamp).await;
+
+ assert!(result.is_err(), "publish should fail");
+ let err = result.unwrap_err().to_string();
+ assert!(err.contains("public directory does not exist"));
+
+ cleanup(&base_dir).await;
+ cleanup(&clone_dir).await;
+ }
+
+ #[tokio::test]
+ async fn publish_source_is_file_returns_error() {
+ let base_dir = temp_dir("publish-test").await;
+ let clone_dir = temp_dir("publish-test").await;
+
+ // Create a file instead of a directory
+ fs::write(clone_dir.join("dist"), "not a directory")
+ .await
+ .unwrap();
+
+ let timestamp = test_timestamp();
+ let result = publish(&base_dir, "my-site", &clone_dir, "dist", &timestamp).await;
+
+ assert!(result.is_err(), "publish should fail");
+ let err = result.unwrap_err().to_string();
+ assert!(err.contains("public path is not a directory"));
+
+ cleanup(&base_dir).await;
+ cleanup(&clone_dir).await;
+ }
+
+ #[tokio::test]
+ async fn publish_nested_public_directory() {
+ let base_dir = temp_dir("publish-test").await;
+ let clone_dir = temp_dir("publish-test").await;
+
+ // Create nested source directory
+ let source = clone_dir.join("build/output/dist");
+ fs::create_dir_all(&source).await.unwrap();
+ fs::write(source.join("app.js"), "console.log('hello')")
+ .await
+ .unwrap();
+
+ let timestamp = test_timestamp();
+ let result = publish(
+ &base_dir,
+ "my-site",
+ &clone_dir,
+ "build/output/dist",
+ &timestamp,
+ )
+ .await;
+
+ assert!(result.is_ok(), "publish should succeed: {result:?}");
+ let publish_result = result.unwrap();
+
+ // Verify file was copied
+ let copied_file = publish_result.build_dir.join("app.js");
+ assert!(copied_file.exists(), "copied file should exist");
+
+ cleanup(&base_dir).await;
+ cleanup(&clone_dir).await;
+ }
+
+ #[tokio::test]
+ async fn publish_preserves_directory_structure() {
+ let base_dir = temp_dir("publish-test").await;
+ let clone_dir = temp_dir("publish-test").await;
+
+ // Create source with subdirectories
+ let source = clone_dir.join("public");
+ fs::create_dir_all(source.join("css")).await.unwrap();
+ fs::create_dir_all(source.join("js")).await.unwrap();
+ fs::write(source.join("index.html"), "<html></html>")
+ .await
+ .unwrap();
+ fs::write(source.join("css/style.css"), "body {}")
+ .await
+ .unwrap();
+ fs::write(source.join("js/app.js"), "// app").await.unwrap();
+
+ let timestamp = test_timestamp();
+ let result = publish(&base_dir, "my-site", &clone_dir, "public", &timestamp).await;
+
+ assert!(result.is_ok(), "publish should succeed: {result:?}");
+ let publish_result = result.unwrap();
+
+ // Verify structure preserved
+ assert!(publish_result.build_dir.join("index.html").exists());
+ assert!(publish_result.build_dir.join("css/style.css").exists());
+ assert!(publish_result.build_dir.join("js/app.js").exists());
+
+ cleanup(&base_dir).await;
+ cleanup(&clone_dir).await;
+ }
+
+ #[tokio::test]
+ async fn atomic_symlink_update_replaces_existing() {
+ let temp = temp_dir("publish-test").await;
+
+ // Create two target directories
+ let target1 = temp.join("build-1");
+ let target2 = temp.join("build-2");
+ fs::create_dir_all(&target1).await.unwrap();
+ fs::create_dir_all(&target2).await.unwrap();
+
+ let link_path = temp.join("current");
+
+ // Create initial symlink
+ atomic_symlink_update(&target1, &link_path).await.unwrap();
+ let link1 = fs::read_link(&link_path).await.unwrap();
+ assert_eq!(link1, target1);
+
+ // Update symlink
+ atomic_symlink_update(&target2, &link_path).await.unwrap();
+ let link2 = fs::read_link(&link_path).await.unwrap();
+ assert_eq!(link2, target2);
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn copy_dir_contents_skips_symlinks() {
+ let src = temp_dir("publish-test").await;
+ let dst = temp_dir("publish-test").await;
+
+ // Create a normal file
+ fs::write(src.join("real.txt"), "hello").await.unwrap();
+
+ // Create a symlink pointing outside the directory
+ let outside = temp_dir("publish-test").await;
+ fs::write(outside.join("secret.txt"), "secret")
+ .await
+ .unwrap();
+ tokio::fs::symlink(outside.join("secret.txt"), src.join("link.txt"))
+ .await
+ .unwrap();
+
+ // Run copy
+ let dest = dst.join("output");
+ copy_dir_contents(&src, &dest).await.unwrap();
+
+ // Normal file should be copied
+ assert!(dest.join("real.txt").exists(), "real file should be copied");
+ let content = fs::read_to_string(dest.join("real.txt")).await.unwrap();
+ assert_eq!(content, "hello");
+
+ // Symlink should NOT be copied
+ assert!(
+ !dest.join("link.txt").exists(),
+ "symlink should not be copied"
+ );
+
+ cleanup(&src).await;
+ cleanup(&dst).await;
+ cleanup(&outside).await;
+ }
+
+ #[tokio::test]
+ async fn copy_dir_contents_preserves_permissions() {
+ use std::os::unix::fs::PermissionsExt;
+
+ let src = temp_dir("publish-test").await;
+ let dst = temp_dir("publish-test").await;
+
+ // Create a file with executable permissions (0o755)
+ fs::write(src.join("script.sh"), "#!/bin/sh\necho hi")
+ .await
+ .unwrap();
+ let mut perms = fs::metadata(src.join("script.sh"))
+ .await
+ .unwrap()
+ .permissions();
+ perms.set_mode(0o755);
+ fs::set_permissions(src.join("script.sh"), perms)
+ .await
+ .unwrap();
+
+ // Create a file with restrictive permissions (0o644)
+ // Create a second, non-executable file (left with default permissions)
+
+ let dest = dst.join("output");
+ copy_dir_contents(&src, &dest).await.unwrap();
+
+ // Verify executable permission preserved
+ let copied_perms = fs::metadata(dest.join("script.sh"))
+ .await
+ .unwrap()
+ .permissions();
+ assert_eq!(
+ copied_perms.mode() & 0o777,
+ 0o755,
+ "executable permissions should be preserved"
+ );
+
+ cleanup(&src).await;
+ cleanup(&dst).await;
+ }
+}
diff --git a/src/repo_config.rs b/src/repo_config.rs
new file mode 100644
index 0000000..46c74b5
--- /dev/null
+++ b/src/repo_config.rs
@@ -0,0 +1,523 @@
+use crate::config::BuildOverrides;
+use anyhow::{Context as _, Result, bail};
+use serde::Deserialize;
+use std::path::{Component, Path};
+
+/// Configuration for building a site, read from the repository's build config
+/// (`.witryna.yaml`, `.witryna.yml`, `witryna.yaml`, or `witryna.yml` in the repo root, or a per-site custom path).
+#[derive(Debug, Deserialize)]
+pub struct RepoConfig {
+ /// Container image to use for building (e.g., "node:20-alpine").
+ pub image: String,
+ /// Command to execute inside the container (e.g., "npm install && npm run build")
+ pub command: String,
+ /// Directory containing built static assets, relative to repo root (e.g., "dist")
+ pub public: String,
+}
+
+/// Validate container image name.
+///
+/// # Errors
+///
+/// Returns an error if the image name is empty or whitespace-only.
+pub fn validate_image(image: &str) -> Result<()> {
+ if image.trim().is_empty() {
+ bail!("image cannot be empty");
+ }
+ Ok(())
+}
+
+/// Validate build command.
+///
+/// # Errors
+///
+/// Returns an error if the command is empty or whitespace-only.
+pub fn validate_command(command: &str) -> Result<()> {
+ if command.trim().is_empty() {
+ bail!("command cannot be empty");
+ }
+ Ok(())
+}
+
+/// Validate public directory path.
+///
+/// # Errors
+///
+/// Returns an error if the path is empty, contains path traversal segments,
+/// or is an absolute path.
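+///
+/// For example (mirroring the tests below):
+///
+/// ```rust,ignore
+/// assert!(validate_public("build/dist").is_ok());
+/// assert!(validate_public("dist..v2").is_ok());        // ".." inside a name is fine
+/// assert!(validate_public("../dist").is_err());        // traversal
+/// assert!(validate_public("/var/www/dist").is_err());  // absolute path
+/// ```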
+pub fn validate_public(public: &str) -> Result<()> {
+ if public.trim().is_empty() {
+ bail!("public directory cannot be empty");
+ }
+
+ // OWASP: Reject absolute paths
+ if public.starts_with('/') {
+ bail!("invalid public directory '{public}': must be a relative path");
+ }
+
+ // OWASP: Reject real path traversal (Component::ParentDir)
+ // Allows names like "dist..v2" which contain ".." but are not traversal
+ if Path::new(public)
+ .components()
+ .any(|c| c == Component::ParentDir)
+ {
+ bail!("invalid public directory '{public}': path traversal not allowed");
+ }
+
+ Ok(())
+}
+
+impl RepoConfig {
+ /// Load repo configuration from the given repository directory.
+ ///
+ /// If `config_file` is `Some`, reads that specific path (relative to repo root).
+ /// Otherwise searches: `.witryna.yaml` -> `.witryna.yml` -> `witryna.yaml` -> `witryna.yml`.
+ ///
+ /// # Errors
+ ///
+ /// Returns an error if no config file is found, the file cannot be read
+ /// or parsed, or validation fails.
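+ ///
+ /// Illustrative calls (`clone_dir` is the checked-out repository):
+ ///
+ /// ```rust,ignore
+ /// let cfg = RepoConfig::load(&clone_dir, None).await?;                     // discovery chain
+ /// let cfg = RepoConfig::load(&clone_dir, Some("build/config.yml")).await?; // custom path
+ /// ```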
+ pub async fn load(repo_dir: &Path, config_file: Option<&str>) -> Result<Self> {
+ if let Some(custom) = config_file {
+ let path = repo_dir.join(custom);
+ let content = tokio::fs::read_to_string(&path)
+ .await
+ .with_context(|| format!("failed to read {}", path.display()))?;
+ let config: Self = serde_yaml_ng::from_str(&content)
+ .with_context(|| format!("failed to parse {}", path.display()))?;
+ config.validate()?;
+ return Ok(config);
+ }
+
+ let candidates = [
+ ".witryna.yaml",
+ ".witryna.yml",
+ "witryna.yaml",
+ "witryna.yml",
+ ];
+ for name in candidates {
+ let path = repo_dir.join(name);
+ if path.exists() {
+ let content = tokio::fs::read_to_string(&path)
+ .await
+ .with_context(|| format!("failed to read {}", path.display()))?;
+ let config: Self = serde_yaml_ng::from_str(&content)
+ .with_context(|| format!("failed to parse {}", path.display()))?;
+ config.validate()?;
+ return Ok(config);
+ }
+ }
+ bail!(
+ "no build config found in {} (tried: {})",
+ repo_dir.display(),
+ candidates.join(", ")
+ );
+ }
+
+ fn validate(&self) -> Result<()> {
+ validate_image(&self.image)?;
+ validate_command(&self.command)?;
+ validate_public(&self.public)?;
+ Ok(())
+ }
+
+ /// Load repo configuration, applying overrides from witryna.toml.
+ ///
+ /// If all three override fields are specified, witryna.yaml is not loaded.
+ /// Otherwise, loads witryna.yaml and applies any partial overrides.
+ ///
+ /// # Errors
+ ///
+ /// Returns an error if the base config cannot be loaded (when overrides
+ /// are incomplete) or validation fails.
+ ///
+ /// # Panics
+ ///
+ /// Panics if `is_complete()` returns true but a required override
+ /// field is `None`. This is unreachable because `is_complete()`
+ /// checks all required fields.
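+ ///
+ /// Illustrative call mirroring the partial-override test below:
+ ///
+ /// ```rust,ignore
+ /// let overrides = BuildOverrides {
+ ///     image: Some("node:20-alpine".to_owned()),
+ ///     ..BuildOverrides::default()
+ /// };
+ /// let cfg = RepoConfig::load_with_overrides(&clone_dir, &overrides, None).await?;
+ /// // cfg.image is overridden; command and public come from witryna.yaml
+ /// ```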
+ #[allow(clippy::expect_used)] // fields verified by is_complete()
+ pub async fn load_with_overrides(
+ repo_dir: &Path,
+ overrides: &BuildOverrides,
+ config_file: Option<&str>,
+ ) -> Result<Self> {
+ // If all overrides are specified, skip loading witryna.yaml
+ if overrides.is_complete() {
+ let config = Self {
+ image: overrides.image.clone().expect("verified by is_complete"),
+ command: overrides.command.clone().expect("verified by is_complete"),
+ public: overrides.public.clone().expect("verified by is_complete"),
+ };
+ // Validation already done in SiteConfig::validate(), but validate again for safety
+ config.validate()?;
+ return Ok(config);
+ }
+
+ // Load base config from repo
+ let mut config = Self::load(repo_dir, config_file).await?;
+
+ // Apply overrides (already validated in SiteConfig)
+ if let Some(image) = &overrides.image {
+ config.image.clone_from(image);
+ }
+ if let Some(command) = &overrides.command {
+ config.command.clone_from(command);
+ }
+ if let Some(public) = &overrides.public {
+ config.public.clone_from(public);
+ }
+
+ Ok(config)
+ }
+}
+
+#[cfg(test)]
+#[allow(clippy::unwrap_used, clippy::indexing_slicing)]
+mod tests {
+ use super::*;
+ use crate::test_support::{cleanup, temp_dir};
+
+ fn parse_yaml(yaml: &str) -> Result<RepoConfig> {
+ let config: RepoConfig = serde_yaml_ng::from_str(yaml)?;
+ config.validate()?;
+ Ok(config)
+ }
+
+ #[test]
+ fn parse_valid_repo_config() {
+ let yaml = r#"
+image: "node:20-alpine"
+command: "npm install && npm run build"
+public: "dist"
+"#;
+ let config = parse_yaml(yaml).unwrap();
+ assert_eq!(config.image, "node:20-alpine");
+ assert_eq!(config.command, "npm install && npm run build");
+ assert_eq!(config.public, "dist");
+ }
+
+ #[test]
+ fn missing_required_field() {
+ let yaml = r#"
+image: "node:20-alpine"
+command: "npm run build"
+"#;
+ let result: Result<RepoConfig, _> = serde_yaml_ng::from_str(yaml);
+ assert!(result.is_err());
+ }
+
+ #[test]
+ fn empty_or_whitespace_image_rejected() {
+ for image in ["", " "] {
+ let yaml =
+ format!("image: \"{image}\"\ncommand: \"npm run build\"\npublic: \"dist\"\n");
+ let result = parse_yaml(&yaml);
+ assert!(result.is_err(), "image '{image}' should be rejected");
+ assert!(result.unwrap_err().to_string().contains("image"));
+ }
+ }
+
+ #[test]
+ fn empty_command() {
+ let yaml = r#"
+image: "node:20-alpine"
+command: ""
+public: "dist"
+"#;
+ let result = parse_yaml(yaml);
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("command"));
+ }
+
+ #[test]
+ fn empty_public() {
+ let yaml = r#"
+image: "node:20-alpine"
+command: "npm run build"
+public: ""
+"#;
+ let result = parse_yaml(yaml);
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("public"));
+ }
+
+ #[test]
+ fn public_path_traversal() {
+ let invalid_paths = vec!["../dist", "build/../dist", "dist/..", ".."];
+
+ for path in invalid_paths {
+ let yaml = format!(
+ r#"
+image: "node:20-alpine"
+command: "npm run build"
+public: "{path}"
+"#
+ );
+ let result = parse_yaml(&yaml);
+ assert!(result.is_err(), "public path '{path}' should be rejected");
+ assert!(result.unwrap_err().to_string().contains("path traversal"));
+ }
+ }
+
+ #[test]
+ fn public_absolute_path_unix() {
+ let invalid_paths = vec!["/dist", "/var/www/dist"];
+
+ for path in invalid_paths {
+ let yaml = format!(
+ r#"
+image: "node:20-alpine"
+command: "npm run build"
+public: "{path}"
+"#
+ );
+ let result = parse_yaml(&yaml);
+ assert!(result.is_err(), "public path '{path}' should be rejected");
+ assert!(result.unwrap_err().to_string().contains("relative path"));
+ }
+ }
+
+ #[test]
+ fn valid_nested_public() {
+ let valid_paths = vec![
+ "dist",
+ "build/dist",
+ "out/static",
+ "_site",
+ ".next/out",
+ "dist..v2",
+ "assets/..hidden",
+ "foo..bar/dist",
+ ];
+
+ for path in valid_paths {
+ let yaml = format!(
+ r#"
+image: "node:20-alpine"
+command: "npm run build"
+public: "{path}"
+"#
+ );
+ let result = parse_yaml(&yaml);
+ assert!(result.is_ok(), "public path '{path}' should be valid");
+ }
+ }
+
+ #[test]
+ fn public_dot_segments_accepted() {
+ let valid = vec![".", "./dist", "dist/.", "dist//assets"];
+ for path in valid {
+ assert!(
+ validate_public(path).is_ok(),
+ "path '{path}' should be valid"
+ );
+ }
+ }
+
+ // load_with_overrides tests
+
+ #[tokio::test]
+ async fn load_with_overrides_complete_skips_file() {
+ // No need to create witryna.yaml since all overrides are provided
+ let temp = temp_dir("repo-config-test").await;
+
+ let overrides = BuildOverrides {
+ image: Some("alpine:latest".to_owned()),
+ command: Some("echo hello".to_owned()),
+ public: Some("out".to_owned()),
+ };
+
+ let result = RepoConfig::load_with_overrides(&temp, &overrides, None).await;
+
+ assert!(result.is_ok());
+ let config = result.unwrap();
+ assert_eq!(config.image, "alpine:latest");
+ assert_eq!(config.command, "echo hello");
+ assert_eq!(config.public, "out");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn load_with_overrides_partial_merges() {
+ let temp = temp_dir("repo-config-test").await;
+
+ // Create witryna.yaml with base config
+ let yaml = r#"
+image: "node:18"
+command: "npm run build"
+public: "dist"
+"#;
+ tokio::fs::write(temp.join("witryna.yaml"), yaml)
+ .await
+ .unwrap();
+
+ // Override only the image
+ let overrides = BuildOverrides {
+ image: Some("node:20-alpine".to_owned()),
+ command: None,
+ public: None,
+ };
+
+ let result = RepoConfig::load_with_overrides(&temp, &overrides, None).await;
+
+ assert!(result.is_ok());
+ let config = result.unwrap();
+ assert_eq!(config.image, "node:20-alpine"); // Overridden
+ assert_eq!(config.command, "npm run build"); // From yaml
+ assert_eq!(config.public, "dist"); // From yaml
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn load_with_overrides_none_loads_yaml() {
+ let temp = temp_dir("repo-config-test").await;
+
+ // Create witryna.yaml
+ let yaml = r#"
+image: "node:18"
+command: "npm run build"
+public: "dist"
+"#;
+ tokio::fs::write(temp.join("witryna.yaml"), yaml)
+ .await
+ .unwrap();
+
+ let overrides = BuildOverrides::default();
+
+ let result = RepoConfig::load_with_overrides(&temp, &overrides, None).await;
+
+ assert!(result.is_ok());
+ let config = result.unwrap();
+ assert_eq!(config.image, "node:18");
+ assert_eq!(config.command, "npm run build");
+ assert_eq!(config.public, "dist");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn load_with_overrides_missing_yaml_partial_fails() {
+ let temp = temp_dir("repo-config-test").await;
+
+ // No witryna.yaml, partial overrides
+ let overrides = BuildOverrides {
+ image: Some("node:20-alpine".to_owned()),
+ command: None, // Missing, needs yaml
+ public: None,
+ };
+
+ let result = RepoConfig::load_with_overrides(&temp, &overrides, None).await;
+
+ assert!(result.is_err());
+ assert!(
+ result
+ .unwrap_err()
+ .to_string()
+ .contains("no build config found")
+ );
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn load_with_overrides_incomplete_needs_yaml() {
+ let temp = temp_dir("repo-config-test").await;
+
+ // Only command+public — incomplete (no image), no yaml file
+ let overrides = BuildOverrides {
+ image: None,
+ command: Some("npm run build".to_owned()),
+ public: Some("dist".to_owned()),
+ };
+
+ let result = RepoConfig::load_with_overrides(&temp, &overrides, None).await;
+
+ assert!(result.is_err());
+ assert!(
+ result
+ .unwrap_err()
+ .to_string()
+ .contains("no build config found")
+ );
+
+ cleanup(&temp).await;
+ }
+
+ // Discovery chain tests
+
+ const VALID_YAML: &str = "image: \"node:20\"\ncommand: \"npm run build\"\npublic: \"dist\"\n";
+
+ #[tokio::test]
+ async fn load_finds_dot_witryna_yaml() {
+ let temp = temp_dir("repo-config-test").await;
+ tokio::fs::write(temp.join(".witryna.yaml"), VALID_YAML)
+ .await
+ .unwrap();
+
+ let config = RepoConfig::load(&temp, None).await.unwrap();
+ assert_eq!(config.image, "node:20");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn load_finds_dot_witryna_yml() {
+ let temp = temp_dir("repo-config-test").await;
+ tokio::fs::write(temp.join(".witryna.yml"), VALID_YAML)
+ .await
+ .unwrap();
+
+ let config = RepoConfig::load(&temp, None).await.unwrap();
+ assert_eq!(config.image, "node:20");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn load_precedence_dot_over_plain() {
+ let temp = temp_dir("repo-config-test").await;
+ let dot_yaml = "image: \"dot-image\"\ncommand: \"build\"\npublic: \"out\"\n";
+ let plain_yaml = "image: \"plain-image\"\ncommand: \"build\"\npublic: \"out\"\n";
+ tokio::fs::write(temp.join(".witryna.yaml"), dot_yaml)
+ .await
+ .unwrap();
+ tokio::fs::write(temp.join("witryna.yaml"), plain_yaml)
+ .await
+ .unwrap();
+
+ let config = RepoConfig::load(&temp, None).await.unwrap();
+ assert_eq!(config.image, "dot-image");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn load_custom_config_file() {
+ let temp = temp_dir("repo-config-test").await;
+ let subdir = temp.join("build");
+ tokio::fs::create_dir_all(&subdir).await.unwrap();
+ tokio::fs::write(subdir.join("config.yml"), VALID_YAML)
+ .await
+ .unwrap();
+
+ let config = RepoConfig::load(&temp, Some("build/config.yml"))
+ .await
+ .unwrap();
+ assert_eq!(config.image, "node:20");
+
+ cleanup(&temp).await;
+ }
+
+ #[tokio::test]
+ async fn load_custom_config_file_not_found_errors() {
+ let temp = temp_dir("repo-config-test").await;
+
+ let result = RepoConfig::load(&temp, Some("nonexistent.yaml")).await;
+ assert!(result.is_err());
+ assert!(result.unwrap_err().to_string().contains("failed to read"));
+
+ cleanup(&temp).await;
+ }
+}
diff --git a/src/server.rs b/src/server.rs
new file mode 100644
index 0000000..e31a1e4
--- /dev/null
+++ b/src/server.rs
@@ -0,0 +1,1219 @@
+use crate::build_guard::{BuildGuard, BuildScheduler};
+use crate::config::{Config, SiteConfig};
+use crate::polling::PollingManager;
+use anyhow::Result;
+use axum::{
+ Json, Router,
+ extract::{DefaultBodyLimit, Path, State},
+ http::{HeaderMap, StatusCode},
+ response::IntoResponse,
+ routing::{get, post},
+};
+use governor::clock::DefaultClock;
+use governor::state::keyed::DashMapStateStore;
+use governor::{Quota, RateLimiter};
+use std::num::NonZeroU32;
+use std::path::PathBuf;
+use std::sync::Arc;
+use subtle::ConstantTimeEq as _;
+use tokio::net::TcpListener;
+use tokio::signal::unix::{SignalKind, signal};
+use tokio::sync::RwLock;
+use tracing::{error, info, warn};
+
+#[derive(serde::Serialize)]
+struct ErrorResponse {
+ error: &'static str,
+}
+
+#[derive(serde::Serialize)]
+struct QueuedResponse {
+ status: &'static str,
+}
+
+#[derive(serde::Serialize)]
+struct HealthResponse {
+ status: &'static str,
+}
+
+fn error_response(status: StatusCode, error: &'static str) -> impl IntoResponse {
+ (status, Json(ErrorResponse { error }))
+}
+
+type TokenRateLimiter = RateLimiter<String, DashMapStateStore<String>, DefaultClock>;
+
+#[derive(Clone)]
+pub struct AppState {
+ pub config: Arc<RwLock<Config>>,
+ pub config_path: Arc<PathBuf>,
+ pub build_scheduler: Arc<BuildScheduler>,
+ pub rate_limiter: Arc<TokenRateLimiter>,
+ pub polling_manager: Arc<PollingManager>,
+}
+
+pub fn create_router(state: AppState) -> Router {
+ Router::new()
+ .route("/health", get(health_handler))
+ .route("/{site_name}", post(deploy_handler))
+ .layer(DefaultBodyLimit::max(1024 * 1024)) // 1MB limit
+ .with_state(state)
+}
+
+async fn health_handler() -> impl IntoResponse {
+ Json(HealthResponse { status: "ok" })
+}
+
+/// Extract Bearer token from Authorization header.
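+///
+/// For example, `Authorization: Bearer abc123` yields `Some("abc123")`; a
+/// missing header or a non-Bearer scheme yields `None`.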
+fn extract_bearer_token(headers: &HeaderMap) -> Option<&str> {
+ headers
+ .get("authorization")
+ .and_then(|v| v.to_str().ok())
+ .and_then(|v| v.strip_prefix("Bearer "))
+}
+
+fn validate_token(provided: &str, expected: &str) -> bool {
+ let provided_bytes = provided.as_bytes();
+ let expected_bytes = expected.as_bytes();
+
+ // Constant-time comparison - OWASP requirement
+ provided_bytes.ct_eq(expected_bytes).into()
+}
+
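+/// Handle `POST /{site_name}`: look up the site, validate the Bearer token (or
+/// rate-limit by site name when auth is disabled for the site), then either
+/// start a build immediately or queue a single follow-up rebuild.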
+async fn deploy_handler(
+ State(state): State<AppState>,
+ Path(site_name): Path<String>,
+ headers: HeaderMap,
+) -> impl IntoResponse {
+ info!(%site_name, "deployment request received");
+
+ // Find the site first to avoid information leakage
+ let site = {
+ let config = state.config.read().await;
+ if let Some(site) = config.find_site(&site_name) {
+ site.clone()
+ } else {
+ info!(%site_name, "site not found");
+ return error_response(StatusCode::NOT_FOUND, "not_found").into_response();
+ }
+ };
+
+ // Validate Bearer token (skip if auth disabled for this site)
+ if site.webhook_token.is_empty() {
+ // Auth disabled — rate limit by site name instead
+ if state.rate_limiter.check_key(&site_name).is_err() {
+ info!(%site_name, "rate limit exceeded");
+ return error_response(StatusCode::TOO_MANY_REQUESTS, "rate_limit_exceeded")
+ .into_response();
+ }
+ } else {
+ let Some(token) = extract_bearer_token(&headers) else {
+ info!(%site_name, "missing or malformed authorization header");
+ return error_response(StatusCode::UNAUTHORIZED, "unauthorized").into_response();
+ };
+
+ if !validate_token(token, &site.webhook_token) {
+ info!(%site_name, "invalid token");
+ return error_response(StatusCode::UNAUTHORIZED, "unauthorized").into_response();
+ }
+
+ // Rate limit check (per token)
+ if state.rate_limiter.check_key(&token.to_owned()).is_err() {
+ info!(%site_name, "rate limit exceeded");
+ return error_response(StatusCode::TOO_MANY_REQUESTS, "rate_limit_exceeded")
+ .into_response();
+ }
+ }
+
+ // Try immediate build
+ let Some(guard) = BuildGuard::try_acquire(site_name.clone(), &state.build_scheduler) else {
+ // Build in progress — try to queue
+ if state.build_scheduler.try_queue(&site_name) {
+ info!(%site_name, "build queued");
+ return (
+ StatusCode::ACCEPTED,
+ Json(QueuedResponse { status: "queued" }),
+ )
+ .into_response();
+ }
+ // Already queued — collapse
+ info!(%site_name, "build already queued, collapsing");
+ return StatusCode::ACCEPTED.into_response();
+ };
+
+ info!(%site_name, "deployment accepted");
+
+ // Spawn async build pipeline with queue drain loop
+ tokio::spawn(async move {
+ let mut current_site = site;
+ let mut current_guard = guard;
+ loop {
+ #[allow(clippy::large_futures)]
+ run_build_pipeline(
+ state.clone(),
+ site_name.clone(),
+ current_site.clone(),
+ current_guard,
+ )
+ .await;
+ // Guard dropped here — build lock released
+
+ if !state.build_scheduler.take_queued(&site_name) {
+ break;
+ }
+ info!(%site_name, "processing queued rebuild");
+ let Some(new_site) = state.config.read().await.find_site(&site_name).cloned() else {
+ warn!(%site_name, "site removed from config, skipping queued rebuild");
+ break;
+ };
+ let Some(new_guard) =
+ BuildGuard::try_acquire(site_name.clone(), &state.build_scheduler)
+ else {
+ break; // someone else grabbed it
+ };
+ current_site = new_site;
+ current_guard = new_guard;
+ }
+ });
+
+ StatusCode::ACCEPTED.into_response()
+}
+
+/// Run the complete build pipeline: git sync → build → publish.
+#[allow(clippy::large_futures)]
+pub(crate) async fn run_build_pipeline(
+ state: AppState,
+ site_name: String,
+ site: SiteConfig,
+ _guard: BuildGuard,
+) {
+ let (base_dir, log_dir, container_runtime, max_builds_to_keep, git_timeout) = {
+ let config = state.config.read().await;
+ (
+ config.base_dir.clone(),
+ config.log_dir.clone(),
+ config.container_runtime.clone(),
+ config.max_builds_to_keep,
+ config
+ .git_timeout
+ .unwrap_or(crate::git::GIT_TIMEOUT_DEFAULT),
+ )
+ };
+
+ match crate::pipeline::run_build(
+ &site_name,
+ &site,
+ &base_dir,
+ &log_dir,
+ &container_runtime,
+ max_builds_to_keep,
+ git_timeout,
+ false,
+ )
+ .await
+ {
+ Ok(result) => {
+ info!(
+ %site_name,
+ build_dir = %result.build_dir.display(),
+ duration_secs = result.duration.as_secs(),
+ "pipeline completed"
+ );
+ }
+ Err(e) => {
+ error!(%site_name, error = %e, "pipeline failed");
+ }
+ }
+}
+
+/// Set up the SIGHUP signal handler for configuration hot-reload.
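+///
+/// A reload can be triggered manually, e.g. with `kill -HUP <pid>`. Only
+/// reloadable fields (sites, container_runtime, rate limits, ...) are applied;
+/// the non-reloadable ones are preserved, as the body below shows.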
+pub(crate) fn setup_sighup_handler(state: AppState) {
+ tokio::spawn(async move {
+ #[allow(clippy::expect_used)] // fatal: cannot proceed without signal handler
+ let mut sighup =
+ signal(SignalKind::hangup()).expect("failed to setup SIGHUP signal handler");
+
+ loop {
+ sighup.recv().await;
+ info!("SIGHUP received, reloading configuration");
+
+ let config_path = state.config_path.as_ref();
+ match Config::load(config_path).await {
+ Ok(new_config) => {
+ let old_sites_count = state.config.read().await.sites.len();
+ let new_sites_count = new_config.sites.len();
+
+ // Check for non-reloadable changes and capture old values
+ let (old_listen, old_base, old_log_dir, old_log_level) = {
+ let old_config = state.config.read().await;
+ if old_config.listen_address != new_config.listen_address {
+ warn!(
+ old = %old_config.listen_address,
+ new = %new_config.listen_address,
+ "listen_address changed but cannot be reloaded (restart required)"
+ );
+ }
+ if old_config.base_dir != new_config.base_dir {
+ warn!(
+ old = %old_config.base_dir.display(),
+ new = %new_config.base_dir.display(),
+ "base_dir changed but cannot be reloaded (restart required)"
+ );
+ }
+ if old_config.log_dir != new_config.log_dir {
+ warn!(
+ old = %old_config.log_dir.display(),
+ new = %new_config.log_dir.display(),
+ "log_dir changed but cannot be reloaded (restart required)"
+ );
+ }
+ if old_config.log_level != new_config.log_level {
+ warn!(
+ old = %old_config.log_level,
+ new = %new_config.log_level,
+ "log_level changed but cannot be reloaded (restart required)"
+ );
+ }
+ (
+ old_config.listen_address.clone(),
+ old_config.base_dir.clone(),
+ old_config.log_dir.clone(),
+ old_config.log_level.clone(),
+ )
+ };
+
+ // Preserve non-reloadable fields from the running config
+ let mut final_config = new_config;
+ final_config.listen_address = old_listen;
+ final_config.base_dir = old_base;
+ final_config.log_dir = old_log_dir;
+ final_config.log_level = old_log_level;
+
+ // Apply the merged configuration
+ *state.config.write().await = final_config;
+
+ // Restart polling tasks with new configuration
+ info!("restarting polling tasks");
+ state.polling_manager.stop_all().await;
+ state.polling_manager.start_polling(state.clone()).await;
+
+ info!(
+ old_sites_count,
+ new_sites_count, "configuration reloaded successfully"
+ );
+ }
+ Err(e) => {
+ error!(error = %e, "failed to reload configuration, keeping current config");
+ }
+ }
+ }
+ });
+}
+
+/// Start the server in production mode.
+///
+/// # Errors
+///
+/// Returns an error if the TCP listener cannot bind or the server encounters
+/// a fatal I/O error.
+///
+/// # Panics
+///
+/// Panics if `rate_limit_per_minute` is zero. This is unreachable after
+/// successful config validation.
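+///
+/// Once listening, a deploy for a configured site can be triggered with a plain
+/// HTTP POST (placeholders for the address, site name, and token):
+///
+/// ```text
+/// curl -X POST -H "Authorization: Bearer <token>" http://<listen_address>/<site_name>
+/// ```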
+pub async fn run(config: Config, config_path: PathBuf) -> Result<()> {
+ let addr = config.parsed_listen_address();
+
+ #[allow(clippy::expect_used)] // validated by Config::validate_rate_limit()
+ let quota = Quota::per_minute(
+ NonZeroU32::new(config.rate_limit_per_minute)
+ .expect("rate_limit_per_minute must be greater than 0"),
+ );
+ let rate_limiter = Arc::new(RateLimiter::dashmap(quota));
+ let polling_manager = Arc::new(PollingManager::new());
+
+ let state = AppState {
+ config: Arc::new(RwLock::new(config)),
+ config_path: Arc::new(config_path),
+ build_scheduler: Arc::new(BuildScheduler::new()),
+ rate_limiter,
+ polling_manager,
+ };
+
+ // Setup SIGHUP handler for configuration hot-reload
+ setup_sighup_handler(state.clone());
+
+ // Start polling tasks for sites with poll_interval configured
+ state.polling_manager.start_polling(state.clone()).await;
+
+ let listener = TcpListener::bind(addr).await?;
+ info!(%addr, "server listening");
+
+ run_with_listener(state, listener, async {
+ let mut sigterm = signal(SignalKind::terminate()).expect("failed to setup SIGTERM handler");
+ let mut sigint = signal(SignalKind::interrupt()).expect("failed to setup SIGINT handler");
+ tokio::select! {
+ _ = sigterm.recv() => info!("received SIGTERM, shutting down"),
+ _ = sigint.recv() => info!("received SIGINT, shutting down"),
+ }
+ })
+ .await
+}
+
+/// Run the server on an already-bound listener with a custom shutdown signal.
+///
+/// This is the core server loop used by both production (`run`) and integration tests.
+/// Production delegates here after binding the listener and setting up SIGHUP handlers.
+/// Tests call this via `test_support::run_server` with their own listener and shutdown channel.
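+///
+/// Minimal usage sketch (illustrative; not compiled as a doctest):
+///
+/// ```ignore
+/// let listener = TcpListener::bind("127.0.0.1:0").await?;
+/// let addr = listener.local_addr()?;
+/// let (shutdown_tx, shutdown_rx) = tokio::sync::oneshot::channel::<()>();
+/// tokio::spawn(run_with_listener(state, listener, async {
+///     let _ = shutdown_rx.await;
+/// }));
+/// // ... drive requests against `addr` ...
+/// let _ = shutdown_tx.send(());
+/// ```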
+pub(crate) async fn run_with_listener(
+ state: AppState,
+ listener: TcpListener,
+ shutdown_signal: impl std::future::Future<Output = ()> + Send + 'static,
+) -> Result<()> {
+ let router = create_router(state);
+
+ axum::serve(listener, router)
+ .with_graceful_shutdown(shutdown_signal)
+ .await?;
+
+ Ok(())
+}
+
+#[cfg(test)]
+#[allow(clippy::unwrap_used, clippy::indexing_slicing, clippy::expect_used)]
+mod tests {
+ use super::*;
+ use crate::config::{BuildOverrides, SiteConfig};
+ use axum::body::Body;
+ use axum::http::{Request, StatusCode};
+ use axum::response::Response;
+ use std::path::PathBuf;
+ use tower::ServiceExt as _;
+
+ fn test_state(config: Config) -> AppState {
+ test_state_with_rate_limit(config, 1000) // High limit for most tests
+ }
+
+ fn test_state_with_rate_limit(config: Config, rate_limit: u32) -> AppState {
+ let quota = Quota::per_minute(NonZeroU32::new(rate_limit).unwrap());
+ AppState {
+ config: Arc::new(RwLock::new(config)),
+ config_path: Arc::new(PathBuf::from("witryna.toml")),
+ build_scheduler: Arc::new(BuildScheduler::new()),
+ rate_limiter: Arc::new(RateLimiter::dashmap(quota)),
+ polling_manager: Arc::new(PollingManager::new()),
+ }
+ }
+
+ fn test_config() -> Config {
+ Config {
+ listen_address: "127.0.0.1:8080".to_owned(),
+ container_runtime: "podman".to_owned(),
+ base_dir: PathBuf::from("/var/lib/witryna"),
+ log_dir: PathBuf::from("/var/log/witryna"),
+ log_level: "info".to_owned(),
+ rate_limit_per_minute: 10,
+ max_builds_to_keep: 5,
+ git_timeout: None,
+ sites: vec![],
+ }
+ }
+
+ fn test_config_with_sites() -> Config {
+ Config {
+ sites: vec![SiteConfig {
+ name: "my-site".to_owned(),
+ repo_url: "https://github.com/user/my-site.git".to_owned(),
+ branch: "main".to_owned(),
+ webhook_token: "secret-token".to_owned(),
+ webhook_token_file: None,
+
+ build_overrides: BuildOverrides::default(),
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: None,
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: "none".to_owned(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ }],
+ ..test_config()
+ }
+ }
+
+ #[tokio::test]
+ async fn health_endpoint_returns_ok() {
+ let state = test_state(test_config_with_sites());
+ let router = create_router(state);
+
+ let response: Response = router
+ .oneshot(
+ Request::builder()
+ .uri("/health")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+
+ assert_eq!(response.status(), StatusCode::OK);
+ let body = axum::body::to_bytes(response.into_body(), 1024)
+ .await
+ .unwrap();
+ let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
+ assert_eq!(json["status"], "ok");
+ }
+
+ #[tokio::test]
+ async fn unknown_site_post_returns_not_found() {
+ let state = test_state(test_config());
+ let router = create_router(state);
+
+ let response: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/nonexistent")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+
+ assert_eq!(response.status(), StatusCode::NOT_FOUND);
+ let body = axum::body::to_bytes(response.into_body(), 1024)
+ .await
+ .unwrap();
+ let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
+ assert_eq!(json["error"], "not_found");
+ }
+
+ #[tokio::test]
+ async fn deploy_known_site_with_valid_token_returns_accepted() {
+ let state = test_state(test_config_with_sites());
+ let router = create_router(state);
+
+ let response: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/my-site")
+ .header("Authorization", "Bearer secret-token")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+
+ assert_eq!(response.status(), StatusCode::ACCEPTED);
+ let body = axum::body::to_bytes(response.into_body(), 1024)
+ .await
+ .unwrap();
+ assert!(body.is_empty());
+ }
+
+ #[tokio::test]
+ async fn deploy_missing_auth_header_returns_unauthorized() {
+ let state = test_state(test_config_with_sites());
+ let router = create_router(state);
+
+ let response: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/my-site")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+
+ assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
+ let body = axum::body::to_bytes(response.into_body(), 1024)
+ .await
+ .unwrap();
+ let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
+ assert_eq!(json["error"], "unauthorized");
+ }
+
+ #[tokio::test]
+ async fn deploy_invalid_token_returns_unauthorized() {
+ let state = test_state(test_config_with_sites());
+ let router = create_router(state);
+
+ let response: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/my-site")
+ .header("Authorization", "Bearer wrong-token")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+
+ assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
+ let body = axum::body::to_bytes(response.into_body(), 1024)
+ .await
+ .unwrap();
+ let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
+ assert_eq!(json["error"], "unauthorized");
+ }
+
+ #[tokio::test]
+ async fn deploy_malformed_auth_header_returns_unauthorized() {
+ let state = test_state(test_config_with_sites());
+ let router = create_router(state);
+
+ // Test without "Bearer " prefix
+ let response: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/my-site")
+ .header("Authorization", "secret-token")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+
+ assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
+ let body = axum::body::to_bytes(response.into_body(), 1024)
+ .await
+ .unwrap();
+ let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
+ assert_eq!(json["error"], "unauthorized");
+ }
+
+ #[tokio::test]
+ async fn deploy_basic_auth_returns_unauthorized() {
+ let state = test_state(test_config_with_sites());
+ let router = create_router(state);
+
+ // Test Basic auth instead of Bearer
+ let response: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/my-site")
+ .header("Authorization", "Basic dXNlcjpwYXNz")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+
+ assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
+ let body = axum::body::to_bytes(response.into_body(), 1024)
+ .await
+ .unwrap();
+ let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
+ assert_eq!(json["error"], "unauthorized");
+ }
+
+ #[tokio::test]
+ async fn deploy_get_method_not_allowed() {
+ let state = test_state(test_config_with_sites());
+ let router = create_router(state);
+
+ let response: Response = router
+ .oneshot(
+ Request::builder()
+ .method("GET")
+ .uri("/my-site")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+
+ assert_eq!(response.status(), StatusCode::METHOD_NOT_ALLOWED);
+ }
+
+ #[tokio::test]
+ async fn deploy_unknown_site_with_token_returns_not_found() {
+ let state = test_state(test_config_with_sites());
+ let router = create_router(state);
+
+ let response: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/unknown-site")
+ .header("Authorization", "Bearer any-token")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+
+ // Returns 404 before checking token (site lookup first)
+ assert_eq!(response.status(), StatusCode::NOT_FOUND);
+ let body = axum::body::to_bytes(response.into_body(), 1024)
+ .await
+ .unwrap();
+ let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
+ assert_eq!(json["error"], "not_found");
+ }
+
+ fn test_config_with_two_sites() -> Config {
+ Config {
+ listen_address: "127.0.0.1:8080".to_owned(),
+ container_runtime: "podman".to_owned(),
+ base_dir: PathBuf::from("/var/lib/witryna"),
+ log_dir: PathBuf::from("/var/log/witryna"),
+ log_level: "info".to_owned(),
+ rate_limit_per_minute: 10,
+ max_builds_to_keep: 5,
+ git_timeout: None,
+ sites: vec![
+ SiteConfig {
+ name: "site-one".to_owned(),
+ repo_url: "https://github.com/user/site-one.git".to_owned(),
+ branch: "main".to_owned(),
+ webhook_token: "token-one".to_owned(),
+ webhook_token_file: None,
+
+ build_overrides: BuildOverrides::default(),
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: None,
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: "none".to_owned(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ },
+ SiteConfig {
+ name: "site-two".to_owned(),
+ repo_url: "https://github.com/user/site-two.git".to_owned(),
+ branch: "main".to_owned(),
+ webhook_token: "token-two".to_owned(),
+ webhook_token_file: None,
+
+ build_overrides: BuildOverrides::default(),
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: None,
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: "none".to_owned(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ },
+ ],
+ }
+ }
+
+ #[tokio::test]
+ async fn deploy_concurrent_same_site_gets_queued() {
+ let state = test_state(test_config_with_sites());
+ let router = create_router(state.clone());
+
+ // First request should succeed (immediate build)
+ let response1: Response = router
+ .clone()
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/my-site")
+ .header("Authorization", "Bearer secret-token")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response1.status(), StatusCode::ACCEPTED);
+ let body1 = axum::body::to_bytes(response1.into_body(), 1024)
+ .await
+ .unwrap();
+ assert!(body1.is_empty());
+
+ // Second request to same site should be queued (202 with body)
+ let response2: Response = router
+ .clone()
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/my-site")
+ .header("Authorization", "Bearer secret-token")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response2.status(), StatusCode::ACCEPTED);
+ let body2 = axum::body::to_bytes(response2.into_body(), 1024)
+ .await
+ .unwrap();
+ let json: serde_json::Value = serde_json::from_slice(&body2).unwrap();
+ assert_eq!(json["status"], "queued");
+
+ // Third request should be collapsed (202, no body)
+ let response3: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/my-site")
+ .header("Authorization", "Bearer secret-token")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response3.status(), StatusCode::ACCEPTED);
+ let body3 = axum::body::to_bytes(response3.into_body(), 1024)
+ .await
+ .unwrap();
+ assert!(body3.is_empty());
+ }
+
+ #[tokio::test]
+ async fn deploy_concurrent_different_sites_both_succeed() {
+ let state = test_state(test_config_with_two_sites());
+ let router = create_router(state.clone());
+
+ // First site deployment
+ let response1: Response = router
+ .clone()
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/site-one")
+ .header("Authorization", "Bearer token-one")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response1.status(), StatusCode::ACCEPTED);
+
+ // Second site deployment should also succeed
+ let response2: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/site-two")
+ .header("Authorization", "Bearer token-two")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response2.status(), StatusCode::ACCEPTED);
+ }
+
+ #[tokio::test]
+ async fn deploy_site_in_progress_checked_after_auth() {
+ let state = test_state(test_config_with_sites());
+
+ // Pre-mark site as building
+ state
+ .build_scheduler
+ .in_progress
+ .insert("my-site".to_owned());
+
+ let router = create_router(state);
+
+ // Request with wrong token should return 401 (auth checked before build status)
+ let response: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/my-site")
+ .header("Authorization", "Bearer wrong-token")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response.status(), StatusCode::UNAUTHORIZED);
+ let body = axum::body::to_bytes(response.into_body(), 1024)
+ .await
+ .unwrap();
+ let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
+ assert_eq!(json["error"], "unauthorized");
+ }
+
+ #[tokio::test]
+ async fn rate_limit_exceeded_returns_429() {
+ // Create state with rate limit of 2 per minute
+ let state = test_state_with_rate_limit(test_config_with_sites(), 2);
+ let router = create_router(state);
+
+ // First request should succeed
+ let response1: Response = router
+ .clone()
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/my-site")
+ .header("Authorization", "Bearer secret-token")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response1.status(), StatusCode::ACCEPTED);
+
+        // Second request also passes the rate limit; the first build still holds the lock, so it gets queued
+ let response2: Response = router
+ .clone()
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/my-site")
+ .header("Authorization", "Bearer secret-token")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+        // The handler only ever answers 202 here: the build lock from the first
+        // request is still held, so this one is queued (or collapsed)
+        assert_eq!(response2.status(), StatusCode::ACCEPTED);
+
+ // Third request should hit rate limit
+ let response3: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/my-site")
+ .header("Authorization", "Bearer secret-token")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response3.status(), StatusCode::TOO_MANY_REQUESTS);
+ let body = axum::body::to_bytes(response3.into_body(), 1024)
+ .await
+ .unwrap();
+ let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
+ assert_eq!(json["error"], "rate_limit_exceeded");
+ }
+
+ #[tokio::test]
+ async fn rate_limit_different_tokens_independent() {
+ // Create state with rate limit of 1 per minute
+ let state = test_state_with_rate_limit(test_config_with_two_sites(), 1);
+ let router = create_router(state);
+
+ // First request with token-one should succeed
+ let response1: Response = router
+ .clone()
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/site-one")
+ .header("Authorization", "Bearer token-one")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response1.status(), StatusCode::ACCEPTED);
+
+ // Second request with token-one should hit rate limit
+ let response2: Response = router
+ .clone()
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/site-one")
+ .header("Authorization", "Bearer token-one")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response2.status(), StatusCode::TOO_MANY_REQUESTS);
+ let body = axum::body::to_bytes(response2.into_body(), 1024)
+ .await
+ .unwrap();
+ let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
+ assert_eq!(json["error"], "rate_limit_exceeded");
+
+ // Request with different token should still succeed
+ let response3: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/site-two")
+ .header("Authorization", "Bearer token-two")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response3.status(), StatusCode::ACCEPTED);
+ }
+
+ #[tokio::test]
+ async fn rate_limit_checked_after_auth() {
+ // Create state with rate limit of 1 per minute
+ let state = test_state_with_rate_limit(test_config_with_sites(), 1);
+ let router = create_router(state);
+
+ // First valid request exhausts rate limit
+ let response1: Response = router
+ .clone()
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/my-site")
+ .header("Authorization", "Bearer secret-token")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response1.status(), StatusCode::ACCEPTED);
+
+ // Request with invalid token should return 401, not 429
+ // (auth is checked before rate limit)
+ let response2: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/my-site")
+ .header("Authorization", "Bearer wrong-token")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response2.status(), StatusCode::UNAUTHORIZED);
+ let body = axum::body::to_bytes(response2.into_body(), 1024)
+ .await
+ .unwrap();
+ let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
+ assert_eq!(json["error"], "unauthorized");
+ }
+
+ #[tokio::test]
+ async fn sighup_preserves_non_reloadable_fields() {
+ // Original config with specific non-reloadable values
+ let original = Config {
+ listen_address: "127.0.0.1:8080".to_owned(),
+ container_runtime: "podman".to_owned(),
+ base_dir: PathBuf::from("/var/lib/witryna"),
+ log_dir: PathBuf::from("/var/log/witryna"),
+ log_level: "info".to_owned(),
+ rate_limit_per_minute: 10,
+ max_builds_to_keep: 5,
+ git_timeout: None,
+ sites: vec![SiteConfig {
+ name: "old-site".to_owned(),
+ repo_url: "https://example.com/old.git".to_owned(),
+ branch: "main".to_owned(),
+ webhook_token: "old-token".to_owned(),
+ webhook_token_file: None,
+
+ build_overrides: BuildOverrides::default(),
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: None,
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: "none".to_owned(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ }],
+ };
+
+ let state = test_state(original);
+
+ // Simulate a new config loaded from disk with changed non-reloadable
+ // AND reloadable fields
+ let new_config = Config {
+ listen_address: "0.0.0.0:9999".to_owned(),
+ container_runtime: "docker".to_owned(),
+ base_dir: PathBuf::from("/tmp/new-base"),
+ log_dir: PathBuf::from("/tmp/new-logs"),
+ log_level: "debug".to_owned(),
+ rate_limit_per_minute: 20,
+ max_builds_to_keep: 10,
+ git_timeout: None,
+ sites: vec![SiteConfig {
+ name: "new-site".to_owned(),
+ repo_url: "https://example.com/new.git".to_owned(),
+ branch: "develop".to_owned(),
+ webhook_token: "new-token".to_owned(),
+ webhook_token_file: None,
+
+ build_overrides: BuildOverrides::default(),
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: None,
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: "none".to_owned(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ }],
+ };
+
+ // Apply the same merge logic used in setup_sighup_handler
+ let (old_listen, old_base, old_log_dir, old_log_level) = {
+ let old_config = state.config.read().await;
+ (
+ old_config.listen_address.clone(),
+ old_config.base_dir.clone(),
+ old_config.log_dir.clone(),
+ old_config.log_level.clone(),
+ )
+ };
+
+ let mut final_config = new_config;
+ final_config.listen_address = old_listen;
+ final_config.base_dir = old_base;
+ final_config.log_dir = old_log_dir;
+ final_config.log_level = old_log_level;
+
+ *state.config.write().await = final_config;
+
+ // Verify non-reloadable fields are preserved
+ let config = state.config.read().await;
+ assert_eq!(config.listen_address, "127.0.0.1:8080");
+ assert_eq!(config.base_dir, PathBuf::from("/var/lib/witryna"));
+ assert_eq!(config.log_dir, PathBuf::from("/var/log/witryna"));
+ assert_eq!(config.log_level, "info");
+
+ // Verify reloadable fields are updated
+ assert_eq!(config.container_runtime, "docker");
+ assert_eq!(config.rate_limit_per_minute, 20);
+ assert_eq!(config.max_builds_to_keep, 10);
+ assert_eq!(config.sites.len(), 1);
+ assert_eq!(config.sites[0].name, "new-site");
+ }
+
+ fn test_config_with_disabled_auth() -> Config {
+ Config {
+ sites: vec![SiteConfig {
+ name: "open-site".to_owned(),
+ repo_url: "https://github.com/user/open-site.git".to_owned(),
+ branch: "main".to_owned(),
+ webhook_token: String::new(),
+ webhook_token_file: None,
+ build_overrides: BuildOverrides::default(),
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: None,
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: "none".to_owned(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ }],
+ ..test_config()
+ }
+ }
+
+ #[tokio::test]
+ async fn deploy_disabled_auth_returns_accepted() {
+ let state = test_state(test_config_with_disabled_auth());
+ let router = create_router(state);
+
+ // Request without Authorization header should succeed
+ let response: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/open-site")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+
+ assert_eq!(response.status(), StatusCode::ACCEPTED);
+ }
+
+ #[tokio::test]
+ async fn deploy_disabled_auth_ignores_token() {
+ let state = test_state(test_config_with_disabled_auth());
+ let router = create_router(state);
+
+ // Request WITH a Bearer token should also succeed (token ignored)
+ let response: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/open-site")
+ .header("Authorization", "Bearer any-token")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+
+ assert_eq!(response.status(), StatusCode::ACCEPTED);
+ }
+
+ #[tokio::test]
+ async fn deploy_disabled_auth_rate_limited_by_site_name() {
+ let state = test_state_with_rate_limit(test_config_with_disabled_auth(), 1);
+ let router = create_router(state);
+
+ // First request should succeed
+ let response1: Response = router
+ .clone()
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/open-site")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response1.status(), StatusCode::ACCEPTED);
+
+ // Second request should hit rate limit (keyed by site name)
+ let response2: Response = router
+ .oneshot(
+ Request::builder()
+ .method("POST")
+ .uri("/open-site")
+ .body(Body::empty())
+ .unwrap(),
+ )
+ .await
+ .unwrap();
+ assert_eq!(response2.status(), StatusCode::TOO_MANY_REQUESTS);
+ let body = axum::body::to_bytes(response2.into_body(), 1024)
+ .await
+ .unwrap();
+ let json: serde_json::Value = serde_json::from_slice(&body).unwrap();
+ assert_eq!(json["error"], "rate_limit_exceeded");
+ }
+}
diff --git a/src/test_support.rs b/src/test_support.rs
new file mode 100644
index 0000000..8f2d2bf
--- /dev/null
+++ b/src/test_support.rs
@@ -0,0 +1,72 @@
+//! Test support utilities shared between unit and integration tests.
+//!
+//! Gated behind `cfg(any(test, feature = "integration"))`.
+//! Provides thin wrappers around `pub(crate)` server internals so integration
+//! tests can start a real server on a random port without exposing internal APIs,
+//! plus common helpers (temp dirs, cleanup) used across unit test modules.
+
+#![allow(clippy::unwrap_used, clippy::expect_used)]
+
+use crate::server::{AppState, run_with_listener};
+use anyhow::Result;
+use std::path::{Path, PathBuf};
+use tokio::net::TcpListener;
+
+/// Start the HTTP server on the given listener, shutting down when `shutdown` resolves.
+///
+/// The server behaves identically to production — same middleware, same handlers.
+///
+/// # Errors
+///
+/// Returns an error if the server encounters a fatal I/O error.
+pub async fn run_server(
+ state: AppState,
+ listener: TcpListener,
+ shutdown: impl std::future::Future<Output = ()> + Send + 'static,
+) -> Result<()> {
+ run_with_listener(state, listener, shutdown).await
+}
+
+/// Install the SIGHUP configuration-reload handler for `state`.
+///
+/// Call this before sending SIGHUP in tests that exercise hot-reload.
+/// It replaces the default signal disposition (terminate) with the production
+/// reload handler, so the process stays alive after receiving the signal.
+pub fn setup_sighup_handler(state: &AppState) {
+ crate::server::setup_sighup_handler(state.clone());
+}
+
+/// Generate a unique ID for test isolation (timestamp + counter).
+///
+/// # Panics
+///
+/// Panics if the system clock is before the Unix epoch.
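+///
+/// The result looks like `"<secs>-<subsec_nanos>-<counter>"`, e.g.
+/// `"1764543210-123456789-0"` (the numbers here are made up).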
+pub fn uuid() -> String {
+ use std::sync::atomic::{AtomicU64, Ordering};
+ use std::time::{SystemTime, UNIX_EPOCH};
+ static COUNTER: AtomicU64 = AtomicU64::new(0);
+ let duration = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
+ let count = COUNTER.fetch_add(1, Ordering::SeqCst);
+ format!(
+ "{}-{}-{}",
+ duration.as_secs(),
+ duration.subsec_nanos(),
+ count
+ )
+}
+
+/// Create a unique temporary directory for a test.
+///
+/// # Panics
+///
+/// Panics if the directory cannot be created.
+pub async fn temp_dir(prefix: &str) -> PathBuf {
+ let dir = std::env::temp_dir().join(format!("witryna-{}-{}", prefix, uuid()));
+ tokio::fs::create_dir_all(&dir).await.unwrap();
+ dir
+}
+
+/// Remove a temporary directory (ignores errors).
+pub async fn cleanup(dir: &Path) {
+ let _ = tokio::fs::remove_dir_all(dir).await;
+}
diff --git a/tests/integration/auth.rs b/tests/integration/auth.rs
new file mode 100644
index 0000000..78984d8
--- /dev/null
+++ b/tests/integration/auth.rs
@@ -0,0 +1,58 @@
+use crate::harness::{SiteBuilder, TestServer, server_with_site, test_config_with_site};
+
+#[tokio::test]
+async fn invalid_auth_returns_401() {
+ let server = server_with_site().await;
+
+ let cases: Vec<(&str, Option<&str>)> = vec![
+ ("no header", None),
+ ("wrong token", Some("Bearer wrong-token")),
+ ("wrong scheme", Some("Basic dXNlcjpwYXNz")),
+ ("empty header", Some("")),
+ ("bearer without token", Some("Bearer ")),
+ ];
+
+ for (label, header_value) in &cases {
+ let mut req = TestServer::client().post(server.url("/my-site"));
+ if let Some(value) = header_value {
+ req = req.header("Authorization", *value);
+ }
+
+ let resp = req.send().await.unwrap();
+ assert_eq!(
+ resp.status().as_u16(),
+ 401,
+ "expected 401 for case: {label}"
+ );
+ let body = resp.text().await.unwrap();
+ let json: serde_json::Value = serde_json::from_str(&body).unwrap();
+ assert_eq!(
+ json["error"], "unauthorized",
+ "expected JSON error for case: {label}"
+ );
+ }
+}
+
+#[tokio::test]
+async fn disabled_auth_allows_unauthenticated_requests() {
+ let dir = tempfile::tempdir().unwrap().keep();
+ let site = SiteBuilder::new("open-site", "https://example.com/repo.git", "").build();
+ let server = TestServer::start(test_config_with_site(dir, site)).await;
+
+ // POST without Authorization header → 202
+ let resp = TestServer::client()
+ .post(server.url("/open-site"))
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 202);
+
+ // POST with arbitrary Authorization header → 202 (token ignored)
+ let resp = TestServer::client()
+ .post(server.url("/open-site"))
+ .header("Authorization", "Bearer anything")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 202);
+}
diff --git a/tests/integration/cache.rs b/tests/integration/cache.rs
new file mode 100644
index 0000000..42d2a15
--- /dev/null
+++ b/tests/integration/cache.rs
@@ -0,0 +1,125 @@
+use crate::git_helpers::create_local_repo;
+use crate::harness::{SiteBuilder, TestServer, test_config_with_site};
+use crate::runtime::{skip_without_git, skip_without_runtime};
+use std::time::Duration;
+use witryna::config::sanitize_cache_dir_name;
+
+#[tokio::test]
+async fn cache_dir_persists_across_builds() {
+ skip_without_git!();
+ skip_without_runtime!();
+
+ let tempdir = tempfile::tempdir().unwrap();
+ let base_dir = tempdir.path().to_path_buf();
+
+ let repo_dir = tempdir.path().join("repos");
+ tokio::fs::create_dir_all(&repo_dir).await.unwrap();
+ let repo_url = create_local_repo(&repo_dir, "main").await;
+
+ let site = SiteBuilder::new("cache-site", &repo_url, "cache-token")
+ .overrides(
+ "alpine:latest",
+ "mkdir -p /tmp/test-cache && echo 'cached' > /tmp/test-cache/marker && mkdir -p out && cp /tmp/test-cache/marker out/marker",
+ "out",
+ )
+ .cache_dirs(vec!["/tmp/test-cache".to_owned()])
+ .build();
+
+ let server = TestServer::start(test_config_with_site(base_dir.clone(), site)).await;
+
+ // --- Build 1: create marker in cache ---
+ let resp = TestServer::client()
+ .post(server.url("/cache-site"))
+ .header("Authorization", "Bearer cache-token")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 202);
+
+ // Wait for build to complete
+ let builds_dir = base_dir.join("builds/cache-site");
+ let max_wait = Duration::from_secs(120);
+ let start = std::time::Instant::now();
+
+ loop {
+ assert!(
+ start.elapsed() <= max_wait,
+ "build 1 timed out after {max_wait:?}"
+ );
+ if builds_dir.join("current").is_symlink() {
+ break;
+ }
+ tokio::time::sleep(Duration::from_millis(500)).await;
+ }
+
+ // Verify built output
+ let target = tokio::fs::read_link(builds_dir.join("current"))
+ .await
+ .unwrap();
+ assert!(
+ target.join("marker").exists(),
+ "marker should exist in build output"
+ );
+
+ // Host-side verification: cache directory should exist with marker
+ let sanitized = sanitize_cache_dir_name("/tmp/test-cache");
+ let host_cache_dir = base_dir.join("cache/cache-site").join(&sanitized);
+ assert!(
+ host_cache_dir.join("marker").exists(),
+ "marker should exist in host cache dir: {}",
+ host_cache_dir.display()
+ );
+
+ // --- Build 2: verify marker was already there (cache persisted) ---
+ // Wait for build lock to release
+ let start = std::time::Instant::now();
+ loop {
+ if start.elapsed() > Duration::from_secs(10) {
+ break;
+ }
+ if !server
+ .state
+ .build_scheduler
+ .in_progress
+ .contains("cache-site")
+ {
+ break;
+ }
+ tokio::time::sleep(Duration::from_millis(200)).await;
+ }
+
+ let resp = TestServer::client()
+ .post(server.url("/cache-site"))
+ .header("Authorization", "Bearer cache-token")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 202);
+
+ // Wait for second build to complete (symlink target changes)
+ let first_target = target;
+ let start = std::time::Instant::now();
+ loop {
+ assert!(
+ start.elapsed() <= max_wait,
+ "build 2 timed out after {max_wait:?}"
+ );
+ if let Ok(new_target) = tokio::fs::read_link(builds_dir.join("current")).await
+ && new_target != first_target
+ {
+ // Verify marker still in output
+ assert!(
+ new_target.join("marker").exists(),
+ "marker should exist in second build output (cache persisted)"
+ );
+ break;
+ }
+ tokio::time::sleep(Duration::from_millis(500)).await;
+ }
+
+ // Host-side: cache dir still has marker
+ assert!(
+ host_cache_dir.join("marker").exists(),
+ "marker should still exist in host cache dir after build 2"
+ );
+}
diff --git a/tests/integration/cleanup.rs b/tests/integration/cleanup.rs
new file mode 100644
index 0000000..e0cc902
--- /dev/null
+++ b/tests/integration/cleanup.rs
@@ -0,0 +1,92 @@
+use crate::git_helpers::create_local_repo;
+use crate::harness::{SiteBuilder, TestServer};
+use crate::runtime::{skip_without_git, skip_without_runtime};
+use std::time::Duration;
+use witryna::config::Config;
+
+#[tokio::test]
+async fn old_builds_cleaned_up() {
+ skip_without_git!();
+ skip_without_runtime!();
+
+ let tempdir = tempfile::tempdir().unwrap();
+ let base_dir = tempdir.path().to_path_buf();
+
+ let repo_dir = tempdir.path().join("repos");
+ tokio::fs::create_dir_all(&repo_dir).await.unwrap();
+ let repo_url = create_local_repo(&repo_dir, "main").await;
+
+ let site = SiteBuilder::new("cleanup-site", &repo_url, "test-token")
+ .overrides(
+ "alpine:latest",
+ "mkdir -p out && echo '<h1>test</h1>' > out/index.html",
+ "out",
+ )
+ .build();
+
+ // Keep only 2 builds
+ let config = Config {
+ listen_address: "127.0.0.1:0".to_owned(),
+ container_runtime: crate::harness::test_config(base_dir.clone()).container_runtime,
+ base_dir: base_dir.clone(),
+ log_dir: base_dir.join("logs"),
+ log_level: "debug".to_owned(),
+ rate_limit_per_minute: 100,
+ max_builds_to_keep: 2,
+ git_timeout: None,
+ sites: vec![site],
+ };
+
+ let server = TestServer::start(config).await;
+ let builds_dir = base_dir.join("builds/cleanup-site");
+
+ // Run 3 builds sequentially
+ for i in 0..3 {
+ let resp = TestServer::client()
+ .post(server.url("/cleanup-site"))
+ .header("Authorization", "Bearer test-token")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 202, "build {i} should be accepted");
+
+ // Wait for build to complete
+ let max_wait = Duration::from_secs(120);
+ let start = std::time::Instant::now();
+
+ loop {
+ assert!(start.elapsed() <= max_wait, "build {i} timed out");
+
+ // Check that the site is no longer building
+ if !server
+ .state
+ .build_scheduler
+ .in_progress
+ .contains("cleanup-site")
+ {
+ break;
+ }
+ tokio::time::sleep(Duration::from_millis(200)).await;
+ }
+
+ // Small delay between builds to ensure different timestamps
+ tokio::time::sleep(Duration::from_millis(100)).await;
+ }
+
+ // Count timestamped build directories (excluding "current" symlink)
+ let mut count = 0;
+ if builds_dir.is_dir() {
+ let mut entries = tokio::fs::read_dir(&builds_dir).await.unwrap();
+ while let Some(entry) = entries.next_entry().await.unwrap() {
+ let name = entry.file_name();
+ if name != "current" && name != "current.tmp" {
+ count += 1;
+ }
+ }
+ }
+
+ assert!(
+ count <= 2,
+ "should have at most 2 builds after cleanup, got {count}"
+ );
+}
diff --git a/tests/integration/cli_run.rs b/tests/integration/cli_run.rs
new file mode 100644
index 0000000..0ea8d20
--- /dev/null
+++ b/tests/integration/cli_run.rs
@@ -0,0 +1,277 @@
+use crate::git_helpers::create_bare_repo;
+use crate::runtime::{detect_container_runtime, skip_without_git, skip_without_runtime};
+use std::process::Stdio;
+use tempfile::TempDir;
+use tokio::process::Command;
+
+/// Build the binary path for the witryna executable.
+fn witryna_bin() -> std::path::PathBuf {
+ // cargo test sets CARGO_BIN_EXE_witryna when the binary exists,
+ // but for integration tests we use the debug build path directly.
+    // Cargo sets CARGO_BIN_EXE_witryna at compile time for integration tests;
+    // fall back to target/debug/witryna if that path no longer exists on disk.
+ // Fallback to target/debug/witryna
+ path = std::path::PathBuf::from("target/debug/witryna");
+ }
+ path
+}
+
+/// Write a minimal witryna.toml config file.
+async fn write_config(
+ dir: &std::path::Path,
+ site_name: &str,
+ repo_url: &str,
+ base_dir: &std::path::Path,
+ log_dir: &std::path::Path,
+ command: &str,
+ public: &str,
+) -> std::path::PathBuf {
+ let config_path = dir.join("witryna.toml");
+ let runtime = detect_container_runtime();
+ let config = format!(
+ r#"listen_address = "127.0.0.1:0"
+container_runtime = "{runtime}"
+base_dir = "{base_dir}"
+log_dir = "{log_dir}"
+log_level = "debug"
+
+[[sites]]
+name = "{site_name}"
+repo_url = "{repo_url}"
+branch = "main"
+webhook_token = "unused"
+image = "alpine:latest"
+command = "{command}"
+public = "{public}"
+"#,
+ base_dir = base_dir.display(),
+ log_dir = log_dir.display(),
+ );
+ tokio::fs::write(&config_path, config).await.unwrap();
+ config_path
+}
+
+// ---------------------------------------------------------------------------
+// Tier 1: no container runtime needed
+// ---------------------------------------------------------------------------
+
+#[tokio::test]
+async fn cli_run_site_not_found_exits_nonzero() {
+ let tempdir = TempDir::new().unwrap();
+ let base_dir = tempdir.path().join("data");
+ let log_dir = tempdir.path().join("logs");
+ tokio::fs::create_dir_all(&base_dir).await.unwrap();
+ tokio::fs::create_dir_all(&log_dir).await.unwrap();
+
+ // Write config with no sites matching "nonexistent"
+ let config_path = tempdir.path().join("witryna.toml");
+ let config = format!(
+ r#"listen_address = "127.0.0.1:0"
+container_runtime = "podman"
+base_dir = "{}"
+log_dir = "{}"
+log_level = "info"
+sites = []
+"#,
+ base_dir.display(),
+ log_dir.display(),
+ );
+ tokio::fs::write(&config_path, config).await.unwrap();
+
+ let output = Command::new(witryna_bin())
+ .args([
+ "--config",
+ config_path.to_str().unwrap(),
+ "run",
+ "nonexistent",
+ ])
+ .stdout(Stdio::piped())
+ .stderr(Stdio::piped())
+ .output()
+ .await
+ .unwrap();
+
+ assert!(
+ !output.status.success(),
+ "should exit non-zero for unknown site"
+ );
+ let stderr = String::from_utf8_lossy(&output.stderr);
+ assert!(
+ stderr.contains("not found"),
+ "stderr should mention site not found, got: {stderr}"
+ );
+}
+
+#[tokio::test]
+async fn cli_run_build_failure_exits_nonzero() {
+ skip_without_git!();
+ skip_without_runtime!();
+
+ let tempdir = TempDir::new().unwrap();
+ let base_dir = tempdir.path().join("data");
+ let log_dir = tempdir.path().join("logs");
+ tokio::fs::create_dir_all(&base_dir).await.unwrap();
+ tokio::fs::create_dir_all(&log_dir).await.unwrap();
+
+ let repo_dir = tempdir.path().join("repos");
+ tokio::fs::create_dir_all(&repo_dir).await.unwrap();
+ let repo_url = create_bare_repo(&repo_dir, "main").await;
+
+ let config_path = write_config(
+ tempdir.path(),
+ "fail-site",
+ &repo_url,
+ &base_dir,
+ &log_dir,
+ "exit 42",
+ "dist",
+ )
+ .await;
+
+ let output = Command::new(witryna_bin())
+ .args([
+ "--config",
+ config_path.to_str().unwrap(),
+ "run",
+ "fail-site",
+ ])
+ .stdout(Stdio::piped())
+ .stderr(Stdio::piped())
+ .output()
+ .await
+ .unwrap();
+
+ assert!(
+ !output.status.success(),
+ "should exit non-zero on build failure"
+ );
+
+ // Verify a log file was created
+ let logs_dir = log_dir.join("fail-site");
+ if logs_dir.is_dir() {
+ let mut entries = tokio::fs::read_dir(&logs_dir).await.unwrap();
+ let mut found_log = false;
+ while let Some(entry) = entries.next_entry().await.unwrap() {
+ if entry.file_name().to_string_lossy().ends_with(".log") {
+ found_log = true;
+ break;
+ }
+ }
+ assert!(found_log, "should have a .log file after failed build");
+ }
+}
+
+// ---------------------------------------------------------------------------
+// Tier 2: requires git + container runtime
+// ---------------------------------------------------------------------------
+
+#[tokio::test]
+async fn cli_run_builds_site_successfully() {
+ skip_without_git!();
+ skip_without_runtime!();
+
+ let tempdir = TempDir::new().unwrap();
+ let base_dir = tempdir.path().join("data");
+ let log_dir = tempdir.path().join("logs");
+ tokio::fs::create_dir_all(&base_dir).await.unwrap();
+ tokio::fs::create_dir_all(&log_dir).await.unwrap();
+
+ let repo_dir = tempdir.path().join("repos");
+ tokio::fs::create_dir_all(&repo_dir).await.unwrap();
+ let repo_url = create_bare_repo(&repo_dir, "main").await;
+
+ let config_path = write_config(
+ tempdir.path(),
+ "test-site",
+ &repo_url,
+ &base_dir,
+ &log_dir,
+ "mkdir -p out && echo hello > out/index.html",
+ "out",
+ )
+ .await;
+
+ let output = Command::new(witryna_bin())
+ .args([
+ "--config",
+ config_path.to_str().unwrap(),
+ "run",
+ "test-site",
+ ])
+ .stdout(Stdio::piped())
+ .stderr(Stdio::piped())
+ .output()
+ .await
+ .unwrap();
+
+ let stderr = String::from_utf8_lossy(&output.stderr);
+ assert!(
+ output.status.success(),
+ "should exit 0 on success, stderr: {stderr}"
+ );
+
+ // Verify symlink exists
+ let current = base_dir.join("builds/test-site/current");
+ assert!(current.is_symlink(), "current symlink should exist");
+
+ // Verify published content
+ let target = tokio::fs::read_link(&current).await.unwrap();
+ let content = tokio::fs::read_to_string(target.join("index.html"))
+ .await
+ .unwrap();
+ assert!(content.contains("hello"), "published content should match");
+
+ // Verify log file exists
+ let logs_dir = log_dir.join("test-site");
+ assert!(logs_dir.is_dir(), "logs directory should exist");
+}
+
+#[tokio::test]
+async fn cli_run_verbose_shows_build_output() {
+ skip_without_git!();
+ skip_without_runtime!();
+
+ let tempdir = TempDir::new().unwrap();
+ let base_dir = tempdir.path().join("data");
+ let log_dir = tempdir.path().join("logs");
+ tokio::fs::create_dir_all(&base_dir).await.unwrap();
+ tokio::fs::create_dir_all(&log_dir).await.unwrap();
+
+ let repo_dir = tempdir.path().join("repos");
+ tokio::fs::create_dir_all(&repo_dir).await.unwrap();
+ let repo_url = create_bare_repo(&repo_dir, "main").await;
+
+ let config_path = write_config(
+ tempdir.path(),
+ "verbose-site",
+ &repo_url,
+ &base_dir,
+ &log_dir,
+ "echo VERBOSE_MARKER && mkdir -p out && echo ok > out/index.html",
+ "out",
+ )
+ .await;
+
+ let output = Command::new(witryna_bin())
+ .args([
+ "--config",
+ config_path.to_str().unwrap(),
+ "run",
+ "verbose-site",
+ "--verbose",
+ ])
+ .stdout(Stdio::piped())
+ .stderr(Stdio::piped())
+ .output()
+ .await
+ .unwrap();
+
+ let stderr = String::from_utf8_lossy(&output.stderr);
+ assert!(output.status.success(), "should exit 0, stderr: {stderr}");
+
+ // In verbose mode, build output should appear in stderr
+ assert!(
+ stderr.contains("VERBOSE_MARKER"),
+ "stderr should contain build output in verbose mode, got: {stderr}"
+ );
+}
diff --git a/tests/integration/cli_status.rs b/tests/integration/cli_status.rs
new file mode 100644
index 0000000..25135fb
--- /dev/null
+++ b/tests/integration/cli_status.rs
@@ -0,0 +1,313 @@
+use std::process::Stdio;
+use tempfile::TempDir;
+use tokio::process::Command;
+
+/// Build the binary path for the witryna executable.
+fn witryna_bin() -> std::path::PathBuf {
+ let mut path = std::path::PathBuf::from(env!("CARGO_BIN_EXE_witryna"));
+ if !path.exists() {
+ path = std::path::PathBuf::from("target/debug/witryna");
+ }
+ path
+}
+
+/// Write a minimal witryna.toml config for status tests.
+async fn write_status_config(
+ dir: &std::path::Path,
+ sites: &[&str],
+ log_dir: &std::path::Path,
+) -> std::path::PathBuf {
+ let base_dir = dir.join("data");
+ tokio::fs::create_dir_all(&base_dir).await.unwrap();
+
+ let mut sites_toml = String::new();
+ for name in sites {
+ sites_toml.push_str(&format!(
+ r#"
+[[sites]]
+name = "{name}"
+repo_url = "https://example.com/{name}.git"
+branch = "main"
+webhook_token = "unused"
+"#
+ ));
+ }
+
+ let config_path = dir.join("witryna.toml");
+ let config = format!(
+ r#"listen_address = "127.0.0.1:0"
+container_runtime = "podman"
+base_dir = "{base_dir}"
+log_dir = "{log_dir}"
+log_level = "info"
+{sites_toml}"#,
+ base_dir = base_dir.display(),
+ log_dir = log_dir.display(),
+ );
+ tokio::fs::write(&config_path, config).await.unwrap();
+ config_path
+}
+
+/// Write a fake build log with a valid header.
+async fn write_test_build_log(
+ log_dir: &std::path::Path,
+ site_name: &str,
+ timestamp: &str,
+ status: &str,
+ commit: &str,
+ image: &str,
+ duration: &str,
+) {
+ let site_log_dir = log_dir.join(site_name);
+ tokio::fs::create_dir_all(&site_log_dir).await.unwrap();
+
+ let content = format!(
+ "=== BUILD LOG ===\n\
+ Site: {site_name}\n\
+ Timestamp: {timestamp}\n\
+ Git Commit: {commit}\n\
+ Image: {image}\n\
+ Duration: {duration}\n\
+ Status: {status}\n\
+ \n\
+ === STDOUT ===\n\
+ build output\n\
+ \n\
+ === STDERR ===\n"
+ );
+
+ let log_file = site_log_dir.join(format!("{timestamp}.log"));
+ tokio::fs::write(&log_file, content).await.unwrap();
+}
+
+/// Write a fake hook log with a valid header.
+async fn write_test_hook_log(
+ log_dir: &std::path::Path,
+ site_name: &str,
+ timestamp: &str,
+ status: &str,
+) {
+ let site_log_dir = log_dir.join(site_name);
+ tokio::fs::create_dir_all(&site_log_dir).await.unwrap();
+
+ let content = format!(
+ "=== HOOK LOG ===\n\
+ Site: {site_name}\n\
+ Timestamp: {timestamp}\n\
+ Command: hook-cmd\n\
+ Duration: 1s\n\
+ Status: {status}\n\
+ \n\
+ === STDOUT ===\n\
+ \n\
+ === STDERR ===\n"
+ );
+
+ let log_file = site_log_dir.join(format!("{timestamp}-hook.log"));
+ tokio::fs::write(&log_file, content).await.unwrap();
+}
+
+// ---------------------------------------------------------------------------
+// Tier 1: no container runtime / git needed
+// ---------------------------------------------------------------------------
+
+#[tokio::test]
+async fn cli_status_no_builds() {
+ let tempdir = TempDir::new().unwrap();
+ let log_dir = tempdir.path().join("logs");
+ tokio::fs::create_dir_all(&log_dir).await.unwrap();
+
+ let config_path = write_status_config(tempdir.path(), &["empty-site"], &log_dir).await;
+
+ let output = Command::new(witryna_bin())
+ .args(["--config", config_path.to_str().unwrap(), "status"])
+ .stdout(Stdio::piped())
+ .stderr(Stdio::piped())
+ .output()
+ .await
+ .unwrap();
+
+ assert!(output.status.success(), "should exit 0");
+ let stdout = String::from_utf8_lossy(&output.stdout);
+ assert!(stdout.contains("SITE"), "should have table header");
+ assert!(
+ stdout.contains("(no builds)"),
+ "should show (no builds), got: {stdout}"
+ );
+}
+
+#[tokio::test]
+async fn cli_status_single_build() {
+ let tempdir = TempDir::new().unwrap();
+ let log_dir = tempdir.path().join("logs");
+ tokio::fs::create_dir_all(&log_dir).await.unwrap();
+
+ write_test_build_log(
+ &log_dir,
+ "my-site",
+ "20260126-143000-123456",
+ "success",
+ "abc123d",
+ "node:20-alpine",
+ "45s",
+ )
+ .await;
+
+ let config_path = write_status_config(tempdir.path(), &["my-site"], &log_dir).await;
+
+ let output = Command::new(witryna_bin())
+ .args(["--config", config_path.to_str().unwrap(), "status"])
+ .stdout(Stdio::piped())
+ .stderr(Stdio::piped())
+ .output()
+ .await
+ .unwrap();
+
+ assert!(output.status.success(), "should exit 0");
+ let stdout = String::from_utf8_lossy(&output.stdout);
+ assert!(stdout.contains("my-site"), "should show site name");
+ assert!(stdout.contains("success"), "should show status");
+ assert!(stdout.contains("abc123d"), "should show commit");
+ assert!(stdout.contains("45s"), "should show duration");
+}
+
+#[tokio::test]
+async fn cli_status_json_output() {
+ let tempdir = TempDir::new().unwrap();
+ let log_dir = tempdir.path().join("logs");
+ tokio::fs::create_dir_all(&log_dir).await.unwrap();
+
+ write_test_build_log(
+ &log_dir,
+ "json-site",
+ "20260126-143000-123456",
+ "success",
+ "abc123d",
+ "node:20-alpine",
+ "45s",
+ )
+ .await;
+
+ let config_path = write_status_config(tempdir.path(), &["json-site"], &log_dir).await;
+
+ let output = Command::new(witryna_bin())
+ .args([
+ "--config",
+ config_path.to_str().unwrap(),
+ "status",
+ "--json",
+ ])
+ .stdout(Stdio::piped())
+ .stderr(Stdio::piped())
+ .output()
+ .await
+ .unwrap();
+
+ assert!(output.status.success(), "should exit 0");
+ let stdout = String::from_utf8_lossy(&output.stdout);
+ let parsed: serde_json::Value = serde_json::from_str(&stdout).unwrap();
+ let arr = parsed.as_array().unwrap();
+ assert_eq!(arr.len(), 1);
+ assert_eq!(arr[0]["site_name"], "json-site");
+ assert_eq!(arr[0]["status"], "success");
+ assert_eq!(arr[0]["git_commit"], "abc123d");
+ assert_eq!(arr[0]["duration"], "45s");
+}
+
+#[tokio::test]
+async fn cli_status_site_filter() {
+ let tempdir = TempDir::new().unwrap();
+ let log_dir = tempdir.path().join("logs");
+ tokio::fs::create_dir_all(&log_dir).await.unwrap();
+
+ // Create logs for two sites
+ write_test_build_log(
+ &log_dir,
+ "site-a",
+ "20260126-143000-000000",
+ "success",
+ "aaa1111",
+ "alpine:latest",
+ "10s",
+ )
+ .await;
+
+ write_test_build_log(
+ &log_dir,
+ "site-b",
+ "20260126-150000-000000",
+ "success",
+ "bbb2222",
+ "alpine:latest",
+ "20s",
+ )
+ .await;
+
+ let config_path = write_status_config(tempdir.path(), &["site-a", "site-b"], &log_dir).await;
+
+ let output = Command::new(witryna_bin())
+ .args([
+ "--config",
+ config_path.to_str().unwrap(),
+ "status",
+ "--site",
+ "site-a",
+ ])
+ .stdout(Stdio::piped())
+ .stderr(Stdio::piped())
+ .output()
+ .await
+ .unwrap();
+
+ assert!(output.status.success(), "should exit 0");
+ let stdout = String::from_utf8_lossy(&output.stdout);
+ assert!(stdout.contains("site-a"), "should show filtered site");
+ assert!(
+ !stdout.contains("site-b"),
+ "should NOT show other site, got: {stdout}"
+ );
+}
+
+#[tokio::test]
+async fn cli_status_hook_failed() {
+ let tempdir = TempDir::new().unwrap();
+ let log_dir = tempdir.path().join("logs");
+ tokio::fs::create_dir_all(&log_dir).await.unwrap();
+
+ // Build succeeded, but hook failed
+ write_test_build_log(
+ &log_dir,
+ "hook-site",
+ "20260126-143000-123456",
+ "success",
+ "abc123d",
+ "alpine:latest",
+ "12s",
+ )
+ .await;
+
+ write_test_hook_log(
+ &log_dir,
+ "hook-site",
+ "20260126-143000-123456",
+ "failed (exit code 1)",
+ )
+ .await;
+
+ let config_path = write_status_config(tempdir.path(), &["hook-site"], &log_dir).await;
+
+ let output = Command::new(witryna_bin())
+ .args(["--config", config_path.to_str().unwrap(), "status"])
+ .stdout(Stdio::piped())
+ .stderr(Stdio::piped())
+ .output()
+ .await
+ .unwrap();
+
+ assert!(output.status.success(), "should exit 0");
+ let stdout = String::from_utf8_lossy(&output.stdout);
+ assert!(
+ stdout.contains("hook failed"),
+ "should show 'hook failed', got: {stdout}"
+ );
+}
diff --git a/tests/integration/concurrent.rs b/tests/integration/concurrent.rs
new file mode 100644
index 0000000..e7f2b64
--- /dev/null
+++ b/tests/integration/concurrent.rs
@@ -0,0 +1,111 @@
+use crate::harness::{SiteBuilder, TestServer, test_config_with_site};
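+
+// These tests poke the scheduler state directly through `AppState` rather
+// than racing real builds: a trigger while a build is in progress gets
+// queued (202 with a JSON body), and a further trigger while a rebuild is
+// already queued collapses into it (202 with an empty body).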
+
+#[tokio::test]
+async fn concurrent_build_gets_queued() {
+ let dir = tempfile::tempdir().unwrap().keep();
+ let site = SiteBuilder::new("my-site", "https://example.com/repo.git", "secret-token").build();
+ let server = TestServer::start(test_config_with_site(dir, site)).await;
+
+ // Pre-inject a build in progress via AppState
+ server
+ .state
+ .build_scheduler
+ .in_progress
+ .insert("my-site".to_owned());
+
+ let resp = TestServer::client()
+ .post(server.url("/my-site"))
+ .header("Authorization", "Bearer secret-token")
+ .send()
+ .await
+ .unwrap();
+
+ assert_eq!(resp.status().as_u16(), 202);
+ let body = resp.text().await.unwrap();
+ let json: serde_json::Value = serde_json::from_str(&body).unwrap();
+ assert_eq!(json["status"], "queued");
+}
+
+#[tokio::test]
+async fn concurrent_build_queue_collapse() {
+ let dir = tempfile::tempdir().unwrap().keep();
+ let site = SiteBuilder::new("my-site", "https://example.com/repo.git", "secret-token").build();
+ let server = TestServer::start(test_config_with_site(dir, site)).await;
+
+ // Pre-inject a build in progress and a queued rebuild
+ server
+ .state
+ .build_scheduler
+ .in_progress
+ .insert("my-site".to_owned());
+ server
+ .state
+ .build_scheduler
+ .queued
+ .insert("my-site".to_owned());
+
+ // With a build in progress and a rebuild already queued, this request collapses (202, empty body)
+ let resp = TestServer::client()
+ .post(server.url("/my-site"))
+ .header("Authorization", "Bearer secret-token")
+ .send()
+ .await
+ .unwrap();
+
+ assert_eq!(resp.status().as_u16(), 202);
+ let body = resp.text().await.unwrap();
+ assert!(body.is_empty());
+}
+
+#[tokio::test]
+async fn concurrent_different_sites_both_accepted() {
+ let dir = tempfile::tempdir().unwrap().keep();
+ let sites = vec![
+ SiteBuilder::new("site-one", "https://example.com/one.git", "token-one").build(),
+ SiteBuilder::new("site-two", "https://example.com/two.git", "token-two").build(),
+ ];
+ let config = crate::harness::test_config_with_sites(dir, sites);
+ let server = TestServer::start(config).await;
+
+ // First site — accepted
+ let resp1 = TestServer::client()
+ .post(server.url("/site-one"))
+ .header("Authorization", "Bearer token-one")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp1.status().as_u16(), 202);
+
+ // Second site — also accepted (different build lock)
+ let resp2 = TestServer::client()
+ .post(server.url("/site-two"))
+ .header("Authorization", "Bearer token-two")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp2.status().as_u16(), 202);
+}
+
+#[tokio::test]
+async fn build_in_progress_checked_after_auth() {
+ let dir = tempfile::tempdir().unwrap().keep();
+ let site = SiteBuilder::new("my-site", "https://example.com/repo.git", "secret-token").build();
+ let server = TestServer::start(test_config_with_site(dir, site)).await;
+
+ // Pre-mark site as building
+ server
+ .state
+ .build_scheduler
+ .in_progress
+ .insert("my-site".to_owned());
+
+ // Request with wrong token should return 401 (auth checked before build status)
+ let resp = TestServer::client()
+ .post(server.url("/my-site"))
+ .header("Authorization", "Bearer wrong-token")
+ .send()
+ .await
+ .unwrap();
+
+ assert_eq!(resp.status().as_u16(), 401);
+}
diff --git a/tests/integration/deploy.rs b/tests/integration/deploy.rs
new file mode 100644
index 0000000..b74dbe6
--- /dev/null
+++ b/tests/integration/deploy.rs
@@ -0,0 +1,78 @@
+use crate::git_helpers::create_local_repo;
+use crate::harness::{SiteBuilder, TestServer, test_config_with_site};
+use crate::runtime::{skip_without_git, skip_without_runtime};
+use std::time::Duration;
+
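+// A deployment counts as published once `builds/<site>/current` points at a
+// build directory; the webhook only acknowledges with 202 and the build runs
+// asynchronously, so the test polls for that symlink rather than parsing logs.
+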
+#[tokio::test]
+async fn valid_deployment_returns_202_and_builds() {
+ skip_without_git!();
+ skip_without_runtime!();
+
+ let tempdir = tempfile::tempdir().unwrap();
+ let base_dir = tempdir.path().to_path_buf();
+
+ // Create a local git repo with witryna.yaml
+ let repo_dir = tempdir.path().join("repos");
+ tokio::fs::create_dir_all(&repo_dir).await.unwrap();
+ let repo_url = create_local_repo(&repo_dir, "main").await;
+
+ let site = SiteBuilder::new("test-site", &repo_url, "test-token")
+ .overrides(
+ "alpine:latest",
+ "mkdir -p out && echo '<h1>test</h1>' > out/index.html",
+ "out",
+ )
+ .build();
+
+ let server = TestServer::start(test_config_with_site(base_dir.clone(), site)).await;
+
+ // Trigger deployment
+ let resp = TestServer::client()
+ .post(server.url("/test-site"))
+ .header("Authorization", "Bearer test-token")
+ .send()
+ .await
+ .unwrap();
+
+ assert_eq!(resp.status().as_u16(), 202);
+
+ // Wait for build to complete (check for current symlink)
+ let builds_dir = base_dir.join("builds/test-site");
+ let max_wait = Duration::from_secs(120);
+ let start = std::time::Instant::now();
+
+ loop {
+ assert!(
+ start.elapsed() <= max_wait,
+ "build timed out after {max_wait:?}"
+ );
+
+ let current = builds_dir.join("current");
+ if current.is_symlink() {
+ break;
+ }
+
+ tokio::time::sleep(Duration::from_millis(500)).await;
+ }
+
+ // Verify clone directory
+ assert!(
+ base_dir.join("clones/test-site/.git").is_dir(),
+ ".git directory should exist"
+ );
+
+ // Verify current symlink points to a real directory
+ let symlink_target = tokio::fs::read_link(builds_dir.join("current"))
+ .await
+ .expect("failed to read symlink");
+ assert!(
+ symlink_target.is_dir(),
+ "symlink target should be a directory"
+ );
+
+ // Verify built assets
+ assert!(
+ symlink_target.join("index.html").exists(),
+ "built index.html should exist"
+ );
+}
diff --git a/tests/integration/edge_cases.rs b/tests/integration/edge_cases.rs
new file mode 100644
index 0000000..248c36f
--- /dev/null
+++ b/tests/integration/edge_cases.rs
@@ -0,0 +1,69 @@
+use crate::harness::{TestServer, test_config};
+
+#[tokio::test]
+async fn path_traversal_rejected() {
+ let server = TestServer::start(test_config(tempfile::tempdir().unwrap().keep())).await;
+
+ let traversal_attempts = [
+ "../etc/passwd",
+ "..%2F..%2Fetc%2Fpasswd",
+ "valid-site/../other",
+ ];
+
+ for attempt in &traversal_attempts {
+ let resp = TestServer::client()
+ .post(server.url(attempt))
+ .header("Authorization", "Bearer test-token")
+ .send()
+ .await;
+
+ if let Ok(resp) = resp {
+ let status = resp.status().as_u16();
+ assert!(
+ status == 400 || status == 404,
+ "path traversal '{attempt}' should be rejected, got {status}"
+ );
+ }
+ }
+}
+
+#[tokio::test]
+async fn very_long_site_name_rejected() {
+ let server = TestServer::start(test_config(tempfile::tempdir().unwrap().keep())).await;
+
+ let long_name = "a".repeat(1000);
+ let resp = TestServer::client()
+ .post(server.url(&long_name))
+ .header("Authorization", "Bearer test-token")
+ .send()
+ .await;
+
+ if let Ok(resp) = resp {
+ let status = resp.status().as_u16();
+ assert!(
+ status == 400 || status == 404 || status == 414,
+ "long site name should be rejected gracefully, got {status}"
+ );
+ }
+}
+
+#[tokio::test]
+async fn service_healthy_after_errors() {
+ let server = TestServer::start(test_config(tempfile::tempdir().unwrap().keep())).await;
+
+ // Make requests to non-existent sites (causes 404s in the app)
+ for _ in 0..5 {
+ let _ = TestServer::client()
+ .post(server.url("/nonexistent"))
+ .send()
+ .await;
+ }
+
+ // Server should still be healthy
+ let resp = TestServer::client()
+ .get(server.url("/health"))
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 200);
+}
diff --git a/tests/integration/env_vars.rs b/tests/integration/env_vars.rs
new file mode 100644
index 0000000..44f74fa
--- /dev/null
+++ b/tests/integration/env_vars.rs
@@ -0,0 +1,162 @@
+use crate::git_helpers::create_local_repo;
+use crate::harness::{SiteBuilder, TestServer, test_config_with_site};
+use crate::runtime::{skip_without_git, skip_without_runtime};
+use std::collections::HashMap;
+use std::time::Duration;
+
+// ---------------------------------------------------------------------------
+// Tier 2 (requires container runtime + git)
+// ---------------------------------------------------------------------------
+
+#[tokio::test]
+async fn env_vars_passed_to_container_build() {
+ skip_without_git!();
+ skip_without_runtime!();
+
+ let tempdir = tempfile::tempdir().unwrap();
+ let base_dir = tempdir.path().to_path_buf();
+
+ let repo_dir = tempdir.path().join("repos");
+ tokio::fs::create_dir_all(&repo_dir).await.unwrap();
+ let repo_url = create_local_repo(&repo_dir, "main").await;
+
+ let env_vars = HashMap::from([("MY_TEST_VAR".to_owned(), "test_value_123".to_owned())]);
+
+ let site = SiteBuilder::new("env-test", &repo_url, "test-token")
+ .overrides(
+ "alpine:latest",
+ "sh -c \"mkdir -p out && echo $MY_TEST_VAR > out/env.txt\"",
+ "out",
+ )
+ .env(env_vars)
+ .build();
+
+ let server = TestServer::start(test_config_with_site(base_dir.clone(), site)).await;
+
+ let resp = TestServer::client()
+ .post(server.url("/env-test"))
+ .header("Authorization", "Bearer test-token")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 202);
+
+ // Wait for build to complete
+ let builds_dir = base_dir.join("builds/env-test");
+ let max_wait = Duration::from_secs(120);
+ let start = std::time::Instant::now();
+
+ loop {
+ assert!(start.elapsed() <= max_wait, "build timed out");
+ if builds_dir.join("current").is_symlink() {
+ break;
+ }
+ tokio::time::sleep(Duration::from_millis(500)).await;
+ }
+
+ // Verify the env var was available in the container
+ let current_target = tokio::fs::read_link(builds_dir.join("current"))
+ .await
+ .expect("current symlink should exist");
+ let content = tokio::fs::read_to_string(current_target.join("env.txt"))
+ .await
+ .expect("env.txt should exist");
+ assert_eq!(
+ content.trim(),
+ "test_value_123",
+ "env var should be passed to container build"
+ );
+}
+
+#[tokio::test]
+async fn env_vars_passed_to_post_deploy_hook() {
+ skip_without_git!();
+ skip_without_runtime!();
+
+ let tempdir = tempfile::tempdir().unwrap();
+ let base_dir = tempdir.path().to_path_buf();
+
+ let repo_dir = tempdir.path().join("repos");
+ tokio::fs::create_dir_all(&repo_dir).await.unwrap();
+ let repo_url = create_local_repo(&repo_dir, "main").await;
+
+ let env_vars = HashMap::from([
+ ("HOOK_VAR".to_owned(), "hook_value_456".to_owned()),
+ ("DEPLOY_ENV".to_owned(), "production".to_owned()),
+ ]);
+
+ let site = SiteBuilder::new("hook-env-test", &repo_url, "test-token")
+ .overrides(
+ "alpine:latest",
+ "mkdir -p out && echo test > out/index.html",
+ "out",
+ )
+ .env(env_vars)
+ .post_deploy(vec![
+ "sh".to_owned(),
+ "-c".to_owned(),
+ "env > \"$WITRYNA_BUILD_DIR/env-dump.txt\"".to_owned(),
+ ])
+ .build();
+
+ let server = TestServer::start(test_config_with_site(base_dir.clone(), site)).await;
+
+ let resp = TestServer::client()
+ .post(server.url("/hook-env-test"))
+ .header("Authorization", "Bearer test-token")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 202);
+
+ // Wait for build + hook to complete (poll for env-dump.txt)
+ let builds_dir = base_dir.join("builds/hook-env-test");
+ let max_wait = Duration::from_secs(120);
+ let start = std::time::Instant::now();
+
+ let env_dump_path = loop {
+ assert!(start.elapsed() <= max_wait, "build timed out");
+ if builds_dir.join("current").is_symlink() {
+ let target = tokio::fs::read_link(builds_dir.join("current"))
+ .await
+ .expect("current symlink should exist");
+ let dump = target.join("env-dump.txt");
+ if dump.exists() {
+ break dump;
+ }
+ }
+ tokio::time::sleep(Duration::from_millis(500)).await;
+ };
+
+ let content = tokio::fs::read_to_string(&env_dump_path)
+ .await
+ .expect("env-dump.txt should be readable");
+
+ // Verify custom env vars
+ assert!(
+ content.contains("HOOK_VAR=hook_value_456"),
+ "HOOK_VAR should be in hook environment"
+ );
+ assert!(
+ content.contains("DEPLOY_ENV=production"),
+ "DEPLOY_ENV should be in hook environment"
+ );
+
+ // Verify standard witryna env vars are also present
+ assert!(
+ content.contains("WITRYNA_SITE=hook-env-test"),
+ "WITRYNA_SITE should be set"
+ );
+ assert!(
+ content.contains("WITRYNA_BUILD_DIR="),
+ "WITRYNA_BUILD_DIR should be set"
+ );
+ assert!(
+ content.contains("WITRYNA_PUBLIC_DIR="),
+ "WITRYNA_PUBLIC_DIR should be set"
+ );
+ assert!(
+ content.contains("WITRYNA_BUILD_TIMESTAMP="),
+ "WITRYNA_BUILD_TIMESTAMP should be set"
+ );
+}
diff --git a/tests/integration/git_helpers.rs b/tests/integration/git_helpers.rs
new file mode 100644
index 0000000..578806a
--- /dev/null
+++ b/tests/integration/git_helpers.rs
@@ -0,0 +1,275 @@
+use std::path::Path;
+use tokio::process::Command;
+
+/// Create a git Command isolated from parent git environment.
+/// Prevents interference when tests run inside git hooks
+/// (e.g., pre-commit hook running `cargo test`).
+fn git_cmd() -> Command {
+ let mut cmd = Command::new("git");
+ cmd.env_remove("GIT_DIR")
+ .env_remove("GIT_WORK_TREE")
+ .env_remove("GIT_INDEX_FILE");
+ cmd
+}
+
+/// Check if git is available on this system.
+pub fn is_git_available() -> bool {
+ std::process::Command::new("git")
+ .arg("--version")
+ .stdout(std::process::Stdio::null())
+ .stderr(std::process::Stdio::null())
+ .status()
+ .map(|s| s.success())
+ .unwrap_or(false)
+}
+
+/// Create a local bare git repository with an initial commit.
+/// Returns a `file://` URL usable by `git clone --depth 1`.
+pub async fn create_local_repo(parent_dir: &Path, branch: &str) -> String {
+ let bare_repo = parent_dir.join("origin.git");
+ tokio::fs::create_dir_all(&bare_repo).await.unwrap();
+
+ // Init bare repo
+ let output = git_cmd()
+ .args(["init", "--bare", "--initial-branch", branch])
+ .current_dir(&bare_repo)
+ .output()
+ .await
+ .unwrap();
+ assert!(output.status.success(), "git init --bare failed");
+
+ // Create working copy for initial commit
+ let work_dir = parent_dir.join("work");
+ let output = git_cmd()
+ .args([
+ "clone",
+ bare_repo.to_str().unwrap(),
+ work_dir.to_str().unwrap(),
+ ])
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "git clone failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+
+ // Configure git user
+ for args in [
+ &["config", "user.email", "test@test.local"][..],
+ &["config", "user.name", "Test"],
+ ] {
+ let out = git_cmd()
+ .args(args)
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ assert!(out.status.success());
+ }
+
+ // Checkout target branch
+ let output = git_cmd()
+ .args(["checkout", "-B", branch])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ assert!(output.status.success(), "git checkout failed");
+
+ // Create witryna.yaml + initial content
+ tokio::fs::write(
+ work_dir.join("witryna.yaml"),
+ "image: alpine:latest\ncommand: \"mkdir -p out && echo '<h1>test</h1>' > out/index.html\"\npublic: out\n",
+ )
+ .await
+ .unwrap();
+
+ tokio::fs::create_dir_all(work_dir.join("out"))
+ .await
+ .unwrap();
+ tokio::fs::write(work_dir.join("out/index.html"), "<h1>initial</h1>")
+ .await
+ .unwrap();
+
+ // Stage and commit
+ let output = git_cmd()
+ .args(["add", "-A"])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ assert!(output.status.success(), "git add failed");
+
+ let output = git_cmd()
+ .args(["commit", "-m", "Initial commit"])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "git commit failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+
+ // Push
+ let output = git_cmd()
+ .args(["push", "-u", "origin", branch])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ assert!(
+ output.status.success(),
+ "git push failed: {}",
+ String::from_utf8_lossy(&output.stderr)
+ );
+
+ // Cleanup working copy
+ let _ = tokio::fs::remove_dir_all(&work_dir).await;
+
+ format!("file://{}", bare_repo.to_str().unwrap())
+}
+
+/// Create a local bare repo without a witryna.yaml (for override-only tests).
+pub async fn create_bare_repo(parent_dir: &Path, branch: &str) -> String {
+ let bare_repo = parent_dir.join("bare-origin.git");
+ tokio::fs::create_dir_all(&bare_repo).await.unwrap();
+
+ let output = git_cmd()
+ .args(["init", "--bare", "--initial-branch", branch])
+ .current_dir(&bare_repo)
+ .output()
+ .await
+ .unwrap();
+ assert!(output.status.success());
+
+ let work_dir = parent_dir.join("bare-work");
+ let output = git_cmd()
+ .args([
+ "clone",
+ bare_repo.to_str().unwrap(),
+ work_dir.to_str().unwrap(),
+ ])
+ .output()
+ .await
+ .unwrap();
+ assert!(output.status.success());
+
+ for args in [
+ &["config", "user.email", "test@test.local"][..],
+ &["config", "user.name", "Test"],
+ ] {
+ git_cmd()
+ .args(args)
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ }
+
+ let output = git_cmd()
+ .args(["checkout", "-B", branch])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ assert!(output.status.success());
+
+ tokio::fs::write(work_dir.join("README.md"), "# Test\n")
+ .await
+ .unwrap();
+
+ git_cmd()
+ .args(["add", "-A"])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+
+ let output = git_cmd()
+ .args(["commit", "-m", "Initial commit"])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ assert!(output.status.success());
+
+ let output = git_cmd()
+ .args(["push", "-u", "origin", branch])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ assert!(output.status.success());
+
+ let _ = tokio::fs::remove_dir_all(&work_dir).await;
+
+ format!("file://{}", bare_repo.to_str().unwrap())
+}
+
+/// Push a new commit to a bare repo (clone, commit, push).
+pub async fn push_new_commit(bare_repo_url: &str, parent_dir: &Path, branch: &str) {
+ let work_dir = parent_dir.join("push-work");
+ let _ = tokio::fs::remove_dir_all(&work_dir).await;
+
+ let output = git_cmd()
+ .args([
+ "clone",
+ "--branch",
+ branch,
+ bare_repo_url,
+ work_dir.to_str().unwrap(),
+ ])
+ .output()
+ .await
+ .unwrap();
+ assert!(output.status.success(), "clone for push failed");
+
+ for args in [
+ &["config", "user.email", "test@test.local"][..],
+ &["config", "user.name", "Test"],
+ ] {
+ git_cmd()
+ .args(args)
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ }
+
+ let timestamp = std::time::SystemTime::now()
+ .duration_since(std::time::UNIX_EPOCH)
+ .unwrap()
+ .as_nanos();
+ tokio::fs::write(work_dir.join("update.txt"), format!("update-{timestamp}"))
+ .await
+ .unwrap();
+
+ git_cmd()
+ .args(["add", "-A"])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+
+ let output = git_cmd()
+ .args(["commit", "-m", "Test update"])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ assert!(output.status.success(), "commit failed");
+
+ let output = git_cmd()
+ .args(["push", "origin", branch])
+ .current_dir(&work_dir)
+ .output()
+ .await
+ .unwrap();
+ assert!(output.status.success(), "push failed");
+
+ let _ = tokio::fs::remove_dir_all(&work_dir).await;
+}
diff --git a/tests/integration/harness.rs b/tests/integration/harness.rs
new file mode 100644
index 0000000..c015fa8
--- /dev/null
+++ b/tests/integration/harness.rs
@@ -0,0 +1,356 @@
+use governor::{Quota, RateLimiter};
+use std::collections::HashMap;
+use std::num::NonZeroU32;
+use std::path::PathBuf;
+use std::sync::Arc;
+use tempfile::TempDir;
+use tokio::net::TcpListener;
+use tokio::sync::{RwLock, oneshot};
+use witryna::build_guard::BuildScheduler;
+use witryna::config::{BuildOverrides, Config, SiteConfig};
+use witryna::polling::PollingManager;
+use witryna::server::AppState;
+
+/// A running test server with its own temp directory and shutdown handle.
+pub struct TestServer {
+ pub base_url: String,
+ pub state: AppState,
+ /// Kept alive for RAII cleanup of the config file written during startup.
+ #[allow(dead_code)]
+ pub tempdir: TempDir,
+ shutdown_tx: Option<oneshot::Sender<()>>,
+}
+
+impl TestServer {
+ /// Start a new test server with the given config.
+ /// Binds to `127.0.0.1:0` (OS-assigned port).
+ pub async fn start(config: Config) -> Self {
+ Self::start_with_rate_limit(config, 1000).await
+ }
+
+ /// Start a new test server with a specific rate limit.
+ pub async fn start_with_rate_limit(mut config: Config, rate_limit: u32) -> Self {
+ let tempdir = TempDir::new().expect("failed to create temp dir");
+ let config_path = tempdir.path().join("witryna.toml");
+
+ // Write a minimal config file so SIGHUP reload has something to read
+ let config_toml = build_config_toml(&config);
+ tokio::fs::write(&config_path, &config_toml)
+ .await
+ .expect("failed to write test config");
+
+ config
+ .resolve_secrets()
+ .await
+ .expect("failed to resolve secrets");
+
+ let quota = Quota::per_minute(NonZeroU32::new(rate_limit).expect("rate limit must be > 0"));
+
+ let state = AppState {
+ config: Arc::new(RwLock::new(config)),
+ config_path: Arc::new(config_path),
+ build_scheduler: Arc::new(BuildScheduler::new()),
+ rate_limiter: Arc::new(RateLimiter::dashmap(quota)),
+ polling_manager: Arc::new(PollingManager::new()),
+ };
+
+ let listener = TcpListener::bind("127.0.0.1:0")
+ .await
+ .expect("failed to bind to random port");
+ let port = listener.local_addr().unwrap().port();
+ let base_url = format!("http://127.0.0.1:{port}");
+
+ let (shutdown_tx, shutdown_rx) = oneshot::channel::<()>();
+
+ let server_state = state.clone();
+ tokio::spawn(async move {
+ witryna::test_support::run_server(server_state, listener, async {
+ let _ = shutdown_rx.await;
+ })
+ .await
+ .expect("server failed");
+ });
+
+ Self {
+ base_url,
+ state,
+ tempdir,
+ shutdown_tx: Some(shutdown_tx),
+ }
+ }
+
+ /// Get an async reqwest client.
+ pub fn client() -> reqwest::Client {
+ reqwest::Client::new()
+ }
+
+ /// Build a URL for the given path.
+ pub fn url(&self, path: &str) -> String {
+ format!("{}/{}", self.base_url, path.trim_start_matches('/'))
+ }
+
+ /// Shut down the server gracefully.
+ pub fn shutdown(&mut self) {
+ if let Some(tx) = self.shutdown_tx.take() {
+ let _ = tx.send(());
+ }
+ }
+}
+
+impl Drop for TestServer {
+ fn drop(&mut self) {
+ self.shutdown();
+ }
+}
+
+/// Build a default test config pointing to the given base dir.
+pub fn test_config(base_dir: PathBuf) -> Config {
+ let log_dir = base_dir.join("logs");
+ Config {
+ listen_address: "127.0.0.1:0".to_owned(),
+ container_runtime: "podman".to_owned(),
+ base_dir,
+ log_dir,
+ log_level: "debug".to_owned(),
+ rate_limit_per_minute: 10,
+ max_builds_to_keep: 5,
+ git_timeout: None,
+ sites: vec![],
+ }
+}
+
+/// Build a test config with a single site.
+pub fn test_config_with_site(base_dir: PathBuf, site: SiteConfig) -> Config {
+ let log_dir = base_dir.join("logs");
+ Config {
+ listen_address: "127.0.0.1:0".to_owned(),
+ container_runtime: detect_container_runtime(),
+ base_dir,
+ log_dir,
+ log_level: "debug".to_owned(),
+ rate_limit_per_minute: 10,
+ max_builds_to_keep: 5,
+ git_timeout: None,
+ sites: vec![site],
+ }
+}
+
+/// Build a test config with multiple sites.
+pub fn test_config_with_sites(base_dir: PathBuf, sites: Vec<SiteConfig>) -> Config {
+ let log_dir = base_dir.join("logs");
+ Config {
+ listen_address: "127.0.0.1:0".to_owned(),
+ container_runtime: detect_container_runtime(),
+ base_dir,
+ log_dir,
+ log_level: "debug".to_owned(),
+ rate_limit_per_minute: 10,
+ max_builds_to_keep: 5,
+ git_timeout: None,
+ sites,
+ }
+}
+
+/// Builder for test `SiteConfig` instances.
+///
+/// Replaces `simple_site`, `site_with_overrides`, `site_with_hook`, and
+/// `site_with_cache` with a single fluent API.
+pub struct SiteBuilder {
+ name: String,
+ repo_url: String,
+ token: String,
+ webhook_token_file: Option<PathBuf>,
+ image: Option<String>,
+ command: Option<String>,
+ public: Option<String>,
+ cache_dirs: Option<Vec<String>>,
+ post_deploy: Option<Vec<String>>,
+ env: Option<HashMap<String, String>>,
+ container_workdir: Option<String>,
+}
+
+impl SiteBuilder {
+ pub fn new(name: &str, repo_url: &str, token: &str) -> Self {
+ Self {
+ name: name.to_owned(),
+ repo_url: repo_url.to_owned(),
+ token: token.to_owned(),
+ webhook_token_file: None,
+ image: None,
+ command: None,
+ public: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: None,
+ container_workdir: None,
+ }
+ }
+
+ /// Set complete build overrides (image, command, public dir).
+ pub fn overrides(mut self, image: &str, command: &str, public: &str) -> Self {
+ self.image = Some(image.to_owned());
+ self.command = Some(command.to_owned());
+ self.public = Some(public.to_owned());
+ self
+ }
+
+ pub fn webhook_token_file(mut self, path: PathBuf) -> Self {
+ self.webhook_token_file = Some(path);
+ self
+ }
+
+ pub fn post_deploy(mut self, hook: Vec<String>) -> Self {
+ self.post_deploy = Some(hook);
+ self
+ }
+
+ pub fn env(mut self, env_vars: HashMap<String, String>) -> Self {
+ self.env = Some(env_vars);
+ self
+ }
+
+ pub fn cache_dirs(mut self, dirs: Vec<String>) -> Self {
+ self.cache_dirs = Some(dirs);
+ self
+ }
+
+ #[allow(dead_code)]
+ pub fn container_workdir(mut self, path: &str) -> Self {
+ self.container_workdir = Some(path.to_owned());
+ self
+ }
+
+ pub fn build(self) -> SiteConfig {
+ SiteConfig {
+ name: self.name,
+ repo_url: self.repo_url,
+ branch: "main".to_owned(),
+ webhook_token: self.token,
+ webhook_token_file: self.webhook_token_file,
+ build_overrides: BuildOverrides {
+ image: self.image,
+ command: self.command,
+ public: self.public,
+ },
+ poll_interval: None,
+ build_timeout: None,
+ cache_dirs: self.cache_dirs,
+ post_deploy: self.post_deploy,
+ env: self.env,
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: "none".to_owned(),
+ git_depth: None,
+ container_workdir: self.container_workdir,
+ config_file: None,
+ }
+ }
+}
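+
+// Illustrative only: a fuller SiteBuilder chain of the kind the Tier 2 tests
+// use, assuming a local repo URL from `git_helpers::create_local_repo`.
+//
+//     let site = SiteBuilder::new("demo", &repo_url, "demo-token")
+//         .overrides("alpine:latest", "mkdir -p out && echo hi > out/index.html", "out")
+//         .cache_dirs(vec!["node_modules".to_owned()])
+//         .post_deploy(vec!["true".to_owned()])
+//         .build();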
+
+/// Start a server with a single pre-configured site for simple tests.
+///
+/// Uses `my-site` with token `secret-token` — suitable for auth, 404, and basic endpoint tests.
+pub async fn server_with_site() -> TestServer {
+ let dir = tempfile::tempdir().unwrap().keep();
+ let site = SiteBuilder::new("my-site", "https://example.com/repo.git", "secret-token").build();
+ TestServer::start(test_config_with_site(dir, site)).await
+}
+
+/// Detect the first available container runtime.
+fn detect_container_runtime() -> String {
+ for runtime in &["podman", "docker"] {
+ if std::process::Command::new(runtime)
+ .args(["info"])
+ .stdout(std::process::Stdio::null())
+ .stderr(std::process::Stdio::null())
+ .status()
+ .map(|s| s.success())
+ .unwrap_or(false)
+ {
+ return (*runtime).to_owned();
+ }
+ }
+ // Fallback — tests that need a runtime will skip themselves
+ "podman".to_owned()
+}
+
+/// Serialize a Config into a minimal TOML string for writing to disk.
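+///
+/// For a single-site config the output looks roughly like this (sketch,
+/// values illustrative; optional fields appear only when set):
+///
+///   listen_address = "127.0.0.1:0"
+///   container_runtime = "podman"
+///   base_dir = "/tmp/witryna-test"
+///   log_dir = "/tmp/witryna-test/logs"
+///   log_level = "debug"
+///   rate_limit_per_minute = 10
+///   max_builds_to_keep = 5
+///
+///   [[sites]]
+///   name = "my-site"
+///   repo_url = "https://example.com/repo.git"
+///   branch = "main"
+///   webhook_token = "secret-token"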
+fn build_config_toml(config: &Config) -> String {
+ use std::fmt::Write as _;
+
+ let runtime_line = format!("container_runtime = \"{}\"\n", config.container_runtime);
+
+ let mut toml = format!(
+ r#"listen_address = "{}"
+{}base_dir = "{}"
+log_dir = "{}"
+log_level = "{}"
+rate_limit_per_minute = {}
+max_builds_to_keep = {}
+"#,
+ config.listen_address,
+ runtime_line,
+ config.base_dir.display(),
+ config.log_dir.display(),
+ config.log_level,
+ config.rate_limit_per_minute,
+ config.max_builds_to_keep,
+ );
+
+ if let Some(timeout) = config.git_timeout {
+ let _ = writeln!(toml, "git_timeout = \"{}s\"", timeout.as_secs());
+ }
+
+ for site in &config.sites {
+ let _ = writeln!(toml, "\n[[sites]]");
+ let _ = writeln!(toml, "name = \"{}\"", site.name);
+ let _ = writeln!(toml, "repo_url = \"{}\"", site.repo_url);
+ let _ = writeln!(toml, "branch = \"{}\"", site.branch);
+ if !site.webhook_token.is_empty() {
+ let _ = writeln!(toml, "webhook_token = \"{}\"", site.webhook_token);
+ }
+ if let Some(path) = &site.webhook_token_file {
+ let _ = writeln!(toml, "webhook_token_file = \"{}\"", path.display());
+ }
+
+ if let Some(image) = &site.build_overrides.image {
+ let _ = writeln!(toml, "image = \"{image}\"");
+ }
+ if let Some(command) = &site.build_overrides.command {
+ let _ = writeln!(toml, "command = \"{command}\"");
+ }
+ if let Some(public) = &site.build_overrides.public {
+ let _ = writeln!(toml, "public = \"{public}\"");
+ }
+ if let Some(interval) = site.poll_interval {
+ let _ = writeln!(toml, "poll_interval = \"{}s\"", interval.as_secs());
+ }
+ if let Some(timeout) = site.build_timeout {
+ let _ = writeln!(toml, "build_timeout = \"{}s\"", timeout.as_secs());
+ }
+ if let Some(depth) = site.git_depth {
+ let _ = writeln!(toml, "git_depth = {depth}");
+ }
+ if let Some(workdir) = &site.container_workdir {
+ let _ = writeln!(toml, "container_workdir = \"{workdir}\"");
+ }
+ if let Some(dirs) = &site.cache_dirs {
+ let quoted: Vec<_> = dirs.iter().map(|d| format!("\"{d}\"")).collect();
+ let _ = writeln!(toml, "cache_dirs = [{}]", quoted.join(", "));
+ }
+ if let Some(hook) = &site.post_deploy {
+ let quoted: Vec<_> = hook.iter().map(|a| format!("\"{a}\"")).collect();
+ let _ = writeln!(toml, "post_deploy = [{}]", quoted.join(", "));
+ }
+ if let Some(env_vars) = &site.env {
+ let _ = writeln!(toml, "\n[sites.env]");
+ for (key, value) in env_vars {
+ let escaped = value.replace('\\', "\\\\").replace('"', "\\\"");
+ let _ = writeln!(toml, "{key} = \"{escaped}\"");
+ }
+ }
+ }
+
+ toml
+}
diff --git a/tests/integration/health.rs b/tests/integration/health.rs
new file mode 100644
index 0000000..c8895c1
--- /dev/null
+++ b/tests/integration/health.rs
@@ -0,0 +1,17 @@
+use crate::harness::{TestServer, test_config};
+
+#[tokio::test]
+async fn health_endpoint_returns_200() {
+ let server = TestServer::start(test_config(tempfile::tempdir().unwrap().keep())).await;
+
+ let resp = TestServer::client()
+ .get(server.url("/health"))
+ .send()
+ .await
+ .expect("request failed");
+
+ assert_eq!(resp.status().as_u16(), 200);
+ let body = resp.text().await.expect("failed to read body");
+ let json: serde_json::Value = serde_json::from_str(&body).expect("invalid JSON");
+ assert_eq!(json["status"], "ok");
+}
diff --git a/tests/integration/hooks.rs b/tests/integration/hooks.rs
new file mode 100644
index 0000000..86684cc
--- /dev/null
+++ b/tests/integration/hooks.rs
@@ -0,0 +1,137 @@
+use crate::git_helpers::create_local_repo;
+use crate::harness::{SiteBuilder, TestServer, test_config_with_site};
+use crate::runtime::{skip_without_git, skip_without_runtime};
+use std::time::Duration;
+
+// ---------------------------------------------------------------------------
+// Tier 2 (requires container runtime + git)
+// ---------------------------------------------------------------------------
+
+#[tokio::test]
+async fn post_deploy_hook_runs_after_build() {
+ skip_without_git!();
+ skip_without_runtime!();
+
+ let tempdir = tempfile::tempdir().unwrap();
+ let base_dir = tempdir.path().to_path_buf();
+
+ let repo_dir = tempdir.path().join("repos");
+ tokio::fs::create_dir_all(&repo_dir).await.unwrap();
+ let repo_url = create_local_repo(&repo_dir, "main").await;
+
+ // The hook creates a "hook-ran" marker file in the build output directory
+ let site = SiteBuilder::new("hook-test", &repo_url, "test-token")
+ .overrides(
+ "alpine:latest",
+ "mkdir -p out && echo '<h1>hook</h1>' > out/index.html",
+ "out",
+ )
+ .post_deploy(vec!["touch".to_owned(), "hook-ran".to_owned()])
+ .build();
+
+ let server = TestServer::start(test_config_with_site(base_dir.clone(), site)).await;
+
+ let resp = TestServer::client()
+ .post(server.url("/hook-test"))
+ .header("Authorization", "Bearer test-token")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 202);
+
+ // Wait for build + hook to complete
+ let builds_dir = base_dir.join("builds/hook-test");
+ let max_wait = Duration::from_secs(120);
+ let start = std::time::Instant::now();
+
+ loop {
+ assert!(start.elapsed() <= max_wait, "build timed out");
+ if builds_dir.join("current").is_symlink() {
+ // Give the hook a moment to finish after symlink switch
+ tokio::time::sleep(Duration::from_secs(3)).await;
+ break;
+ }
+ tokio::time::sleep(Duration::from_millis(500)).await;
+ }
+
+ // Verify the hook ran — marker file should exist in the build directory
+ let current_target = tokio::fs::read_link(builds_dir.join("current"))
+ .await
+ .expect("current symlink should exist");
+ assert!(
+ current_target.join("hook-ran").exists(),
+ "hook marker file should exist in build directory"
+ );
+}
+
+#[tokio::test]
+async fn post_deploy_hook_failure_nonfatal() {
+ skip_without_git!();
+ skip_without_runtime!();
+
+ let tempdir = tempfile::tempdir().unwrap();
+ let base_dir = tempdir.path().to_path_buf();
+
+ let repo_dir = tempdir.path().join("repos");
+ tokio::fs::create_dir_all(&repo_dir).await.unwrap();
+ let repo_url = create_local_repo(&repo_dir, "main").await;
+
+ // The hook will fail (exit 1), but the deploy should still succeed
+ let site = SiteBuilder::new("hook-fail", &repo_url, "test-token")
+ .overrides(
+ "alpine:latest",
+ "mkdir -p out && echo '<h1>ok</h1>' > out/index.html",
+ "out",
+ )
+ .post_deploy(vec!["false".to_owned()])
+ .build();
+
+ let server = TestServer::start(test_config_with_site(base_dir.clone(), site)).await;
+
+ let resp = TestServer::client()
+ .post(server.url("/hook-fail"))
+ .header("Authorization", "Bearer test-token")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 202);
+
+ // Wait for build to complete
+ let builds_dir = base_dir.join("builds/hook-fail");
+ let max_wait = Duration::from_secs(120);
+ let start = std::time::Instant::now();
+
+ loop {
+ assert!(start.elapsed() <= max_wait, "build timed out");
+ if builds_dir.join("current").is_symlink() {
+ tokio::time::sleep(Duration::from_secs(3)).await;
+ break;
+ }
+ tokio::time::sleep(Duration::from_millis(500)).await;
+ }
+
+ // Deploy succeeded despite hook failure
+ let current_target = tokio::fs::read_link(builds_dir.join("current"))
+ .await
+ .expect("current symlink should exist");
+ assert!(
+ current_target.join("index.html").exists(),
+ "built assets should exist despite hook failure"
+ );
+
+ // Hook log should have been written with failure status
+ let logs_dir = base_dir.join("logs/hook-fail");
+ let mut found_hook_log = false;
+ let mut entries = tokio::fs::read_dir(&logs_dir).await.unwrap();
+ while let Some(entry) = entries.next_entry().await.unwrap() {
+ let name = entry.file_name();
+ if name.to_string_lossy().ends_with("-hook.log") {
+ found_hook_log = true;
+ let content = tokio::fs::read_to_string(entry.path()).await.unwrap();
+ assert!(content.contains("=== HOOK LOG ==="));
+ assert!(content.contains("Status: failed"));
+ break;
+ }
+ }
+ assert!(found_hook_log, "hook log should exist for failed hook");
+}
diff --git a/tests/integration/logs.rs b/tests/integration/logs.rs
new file mode 100644
index 0000000..4ecdb87
--- /dev/null
+++ b/tests/integration/logs.rs
@@ -0,0 +1,73 @@
+use crate::git_helpers::create_local_repo;
+use crate::harness::{SiteBuilder, TestServer, test_config_with_site};
+use crate::runtime::{skip_without_git, skip_without_runtime};
+use std::time::Duration;
+
+#[tokio::test]
+async fn build_log_created_after_deployment() {
+ skip_without_git!();
+ skip_without_runtime!();
+
+ let tempdir = tempfile::tempdir().unwrap();
+ let base_dir = tempdir.path().to_path_buf();
+
+ let repo_dir = tempdir.path().join("repos");
+ tokio::fs::create_dir_all(&repo_dir).await.unwrap();
+ let repo_url = create_local_repo(&repo_dir, "main").await;
+
+ let site = SiteBuilder::new("log-site", &repo_url, "test-token")
+ .overrides(
+ "alpine:latest",
+ "mkdir -p out && echo '<h1>test</h1>' > out/index.html",
+ "out",
+ )
+ .build();
+
+ let server = TestServer::start(test_config_with_site(base_dir.clone(), site)).await;
+
+ // Trigger deployment
+ let resp = TestServer::client()
+ .post(server.url("/log-site"))
+ .header("Authorization", "Bearer test-token")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 202);
+
+ // Wait for build to complete
+ let builds_dir = base_dir.join("builds/log-site");
+ let max_wait = Duration::from_secs(120);
+ let start = std::time::Instant::now();
+
+ loop {
+ assert!(start.elapsed() <= max_wait, "build timed out");
+ if builds_dir.join("current").is_symlink() {
+ break;
+ }
+ tokio::time::sleep(Duration::from_millis(500)).await;
+ }
+
+ // Verify logs directory and log file
+ let logs_dir = base_dir.join("logs/log-site");
+ assert!(logs_dir.is_dir(), "logs directory should exist");
+
+ let mut entries = tokio::fs::read_dir(&logs_dir).await.unwrap();
+ let mut found_log = false;
+ while let Some(entry) = entries.next_entry().await.unwrap() {
+ let name = entry.file_name();
+ if name.to_string_lossy().ends_with(".log") {
+ found_log = true;
+ let content = tokio::fs::read_to_string(entry.path()).await.unwrap();
+ assert!(
+ content.contains("=== BUILD LOG ==="),
+ "log should have header"
+ );
+ assert!(
+ content.contains("Site: log-site"),
+ "log should contain site name"
+ );
+ break;
+ }
+ }
+ assert!(found_log, "should have at least one .log file");
+}
diff --git a/tests/integration/main.rs b/tests/integration/main.rs
new file mode 100644
index 0000000..7ee422e
--- /dev/null
+++ b/tests/integration/main.rs
@@ -0,0 +1,31 @@
+#![cfg(feature = "integration")]
+#![allow(
+ clippy::unwrap_used,
+ clippy::indexing_slicing,
+ clippy::expect_used,
+ clippy::print_stderr
+)]
+
+mod git_helpers;
+mod harness;
+mod runtime;
+
+mod auth;
+mod cache;
+mod cleanup;
+mod cli_run;
+mod cli_status;
+mod concurrent;
+mod deploy;
+mod edge_cases;
+mod env_vars;
+mod health;
+mod hooks;
+mod logs;
+mod not_found;
+mod overrides;
+mod packaging;
+mod polling;
+mod rate_limit;
+mod secrets;
+mod sighup;
diff --git a/tests/integration/not_found.rs b/tests/integration/not_found.rs
new file mode 100644
index 0000000..a86d570
--- /dev/null
+++ b/tests/integration/not_found.rs
@@ -0,0 +1,17 @@
+use crate::harness::{TestServer, test_config};
+
+#[tokio::test]
+async fn unknown_site_returns_404() {
+ let server = TestServer::start(test_config(tempfile::tempdir().unwrap().keep())).await;
+
+ let resp = TestServer::client()
+ .post(server.url("/nonexistent"))
+ .send()
+ .await
+ .unwrap();
+
+ assert_eq!(resp.status().as_u16(), 404);
+ let body = resp.text().await.unwrap();
+ let json: serde_json::Value = serde_json::from_str(&body).unwrap();
+ assert_eq!(json["error"], "not_found");
+}
diff --git a/tests/integration/overrides.rs b/tests/integration/overrides.rs
new file mode 100644
index 0000000..f34bf9c
--- /dev/null
+++ b/tests/integration/overrides.rs
@@ -0,0 +1,59 @@
+use crate::git_helpers::create_bare_repo;
+use crate::harness::{SiteBuilder, TestServer, test_config_with_site};
+use crate::runtime::{skip_without_git, skip_without_runtime};
+use std::time::Duration;
+
+#[tokio::test]
+async fn complete_override_builds_without_witryna_yaml() {
+ skip_without_git!();
+ skip_without_runtime!();
+
+ let tempdir = tempfile::tempdir().unwrap();
+ let base_dir = tempdir.path().to_path_buf();
+
+ // Create a repo without witryna.yaml
+ let repo_dir = tempdir.path().join("repos");
+ tokio::fs::create_dir_all(&repo_dir).await.unwrap();
+ let repo_url = create_bare_repo(&repo_dir, "main").await;
+
+ // Complete overrides — witryna.yaml not needed
+ let site = SiteBuilder::new("override-site", &repo_url, "test-token")
+ .overrides(
+ "alpine:latest",
+ "mkdir -p out && echo '<h1>override</h1>' > out/index.html",
+ "out",
+ )
+ .build();
+
+ let server = TestServer::start(test_config_with_site(base_dir.clone(), site)).await;
+
+ let resp = TestServer::client()
+ .post(server.url("/override-site"))
+ .header("Authorization", "Bearer test-token")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 202);
+
+ // Wait for build
+ let builds_dir = base_dir.join("builds/override-site");
+ let max_wait = Duration::from_secs(120);
+ let start = std::time::Instant::now();
+
+ loop {
+ assert!(start.elapsed() <= max_wait, "build timed out");
+ if builds_dir.join("current").is_symlink() {
+ break;
+ }
+ tokio::time::sleep(Duration::from_millis(500)).await;
+ }
+
+ // Verify output
+ let target = tokio::fs::read_link(builds_dir.join("current"))
+ .await
+ .unwrap();
+ let content = tokio::fs::read_to_string(target.join("index.html"))
+ .await
+ .unwrap();
+ assert!(content.contains("<h1>override</h1>"));
+}
diff --git a/tests/integration/packaging.rs b/tests/integration/packaging.rs
new file mode 100644
index 0000000..6a86bc5
--- /dev/null
+++ b/tests/integration/packaging.rs
@@ -0,0 +1,49 @@
+use std::path::Path;
+
+#[test]
+fn docker_override_exists_and_valid() {
+ let path = Path::new(env!("CARGO_MANIFEST_DIR")).join("examples/systemd/docker.conf");
+ assert!(path.exists(), "docker.conf template missing");
+ let content = std::fs::read_to_string(&path).unwrap();
+ assert!(
+ content.contains("SupplementaryGroups=docker"),
+ "docker.conf must grant docker group"
+ );
+ assert!(
+ content.contains("ReadWritePaths=/var/run/docker.sock"),
+ "docker.conf must allow docker socket access"
+ );
+ assert!(
+ content.contains("[Service]"),
+ "docker.conf must be a systemd unit override"
+ );
+}
+
+#[test]
+fn podman_override_exists_and_valid() {
+ let path = Path::new(env!("CARGO_MANIFEST_DIR")).join("examples/systemd/podman.conf");
+ assert!(path.exists(), "podman.conf template missing");
+ let content = std::fs::read_to_string(&path).unwrap();
+ assert!(
+ content.contains("RestrictNamespaces=no"),
+ "podman.conf must disable RestrictNamespaces"
+ );
+ assert!(
+ content.contains("XDG_RUNTIME_DIR=/run/user/%U"),
+ "podman.conf must set XDG_RUNTIME_DIR with %U"
+ );
+ assert!(
+ content.contains("[Service]"),
+ "podman.conf must be a systemd unit override"
+ );
+}
+
+#[test]
+fn override_templates_are_not_empty() {
+ let dir = Path::new(env!("CARGO_MANIFEST_DIR")).join("examples/systemd");
+ for name in ["docker.conf", "podman.conf"] {
+ let path = dir.join(name);
+ let meta = std::fs::metadata(&path).unwrap();
+ assert!(meta.len() > 0, "{name} must not be empty");
+ }
+}
diff --git a/tests/integration/polling.rs b/tests/integration/polling.rs
new file mode 100644
index 0000000..a4447cc
--- /dev/null
+++ b/tests/integration/polling.rs
@@ -0,0 +1,114 @@
+use crate::git_helpers::{create_local_repo, push_new_commit};
+use crate::harness::TestServer;
+use crate::runtime::{skip_without_git, skip_without_runtime};
+use serial_test::serial;
+use std::time::Duration;
+use witryna::config::{BuildOverrides, Config, SiteConfig};
+
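+// Built by hand rather than via `harness::SiteBuilder` because the builder
+// does not expose `poll_interval`, which is the field under test here.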
+fn polling_site(name: &str, repo_url: &str) -> SiteConfig {
+ SiteConfig {
+ name: name.to_owned(),
+ repo_url: repo_url.to_owned(),
+ branch: "main".to_owned(),
+ webhook_token: "poll-token".to_owned(),
+ webhook_token_file: None,
+ build_overrides: BuildOverrides {
+ image: Some("alpine:latest".to_owned()),
+ command: Some("mkdir -p out && echo '<h1>polled</h1>' > out/index.html".to_owned()),
+ public: Some("out".to_owned()),
+ },
+ poll_interval: Some(Duration::from_secs(2)),
+ build_timeout: None,
+ cache_dirs: None,
+ post_deploy: None,
+ env: None,
+ container_memory: None,
+ container_cpus: None,
+ container_pids_limit: None,
+ container_network: "none".to_owned(),
+ git_depth: None,
+ container_workdir: None,
+ config_file: None,
+ }
+}
+
+#[tokio::test]
+#[serial]
+async fn polling_triggers_build_on_new_commits() {
+ skip_without_git!();
+ skip_without_runtime!();
+
+ let tempdir = tempfile::tempdir().unwrap();
+ let base_dir = tempdir.path().to_path_buf();
+
+ let repo_dir = tempdir.path().join("repos");
+ tokio::fs::create_dir_all(&repo_dir).await.unwrap();
+ let repo_url = create_local_repo(&repo_dir, "main").await;
+
+ let site = polling_site("poll-site", &repo_url);
+
+ let config = Config {
+ listen_address: "127.0.0.1:0".to_owned(),
+ container_runtime: crate::runtime::detect_container_runtime().to_owned(),
+ base_dir: base_dir.clone(),
+ log_dir: base_dir.join("logs"),
+ log_level: "debug".to_owned(),
+ rate_limit_per_minute: 100,
+ max_builds_to_keep: 5,
+ git_timeout: None,
+ sites: vec![site],
+ };
+
+ let server = TestServer::start(config).await;
+
+ // Start polling
+ server
+ .state
+ .polling_manager
+ .start_polling(server.state.clone())
+ .await;
+
+ // Wait for the initial poll cycle to trigger a build
+ let builds_dir = base_dir.join("builds/poll-site");
+ let max_wait = Duration::from_secs(30);
+ let start = std::time::Instant::now();
+
+ loop {
+ if start.elapsed() > max_wait {
+ // Polling may not have triggered yet — acceptable in CI
+ eprintln!("SOFT FAIL: polling did not trigger build within {max_wait:?}");
+ return;
+ }
+ if builds_dir.join("current").is_symlink() {
+ break;
+ }
+ tokio::time::sleep(Duration::from_millis(500)).await;
+ }
+
+ let first_target = tokio::fs::read_link(builds_dir.join("current"))
+ .await
+ .unwrap();
+
+ // Push a new commit
+ push_new_commit(&repo_url, &tempdir.path().join("push"), "main").await;
+
+ // Wait for polling to detect and rebuild
+ let max_wait = Duration::from_secs(30);
+ let start = std::time::Instant::now();
+
+ loop {
+ if start.elapsed() > max_wait {
+ eprintln!("SOFT FAIL: polling did not detect new commit within {max_wait:?}");
+ return;
+ }
+
+ if let Ok(target) = tokio::fs::read_link(builds_dir.join("current")).await
+ && target != first_target
+ {
+ // New build detected
+ return;
+ }
+
+ tokio::time::sleep(Duration::from_millis(500)).await;
+ }
+}
diff --git a/tests/integration/rate_limit.rs b/tests/integration/rate_limit.rs
new file mode 100644
index 0000000..81378a2
--- /dev/null
+++ b/tests/integration/rate_limit.rs
@@ -0,0 +1,114 @@
+use crate::harness::{SiteBuilder, TestServer, test_config_with_site, test_config_with_sites};
+
+#[tokio::test]
+async fn rate_limit_exceeded_returns_429() {
+ let dir = tempfile::tempdir().unwrap().keep();
+ let site = SiteBuilder::new("my-site", "https://example.com/repo.git", "secret-token").build();
+ let config = test_config_with_site(dir, site);
+
+ // Rate limit of 2 per minute
+ let server = TestServer::start_with_rate_limit(config, 2).await;
+
+ // First request: should not be rate limited yet (202 or 409)
+ let resp1 = TestServer::client()
+ .post(server.url("/my-site"))
+ .header("Authorization", "Bearer secret-token")
+ .send()
+ .await
+ .unwrap();
+ let status1 = resp1.status().as_u16();
+ assert!(
+ status1 == 202 || status1 == 409,
+ "expected 202 or 409, got {status1}"
+ );
+
+ // Second request
+ let resp2 = TestServer::client()
+ .post(server.url("/my-site"))
+ .header("Authorization", "Bearer secret-token")
+ .send()
+ .await
+ .unwrap();
+ let status2 = resp2.status().as_u16();
+ assert!(
+ status2 == 202 || status2 == 409,
+ "expected 202 or 409, got {status2}"
+ );
+
+ // Third request should hit rate limit
+ let resp3 = TestServer::client()
+ .post(server.url("/my-site"))
+ .header("Authorization", "Bearer secret-token")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp3.status().as_u16(), 429);
+ let body = resp3.text().await.unwrap();
+ let json: serde_json::Value = serde_json::from_str(&body).unwrap();
+ assert_eq!(json["error"], "rate_limit_exceeded");
+}
+
+#[tokio::test]
+async fn rate_limit_different_tokens_independent() {
+ let dir = tempfile::tempdir().unwrap().keep();
+ let sites = vec![
+ SiteBuilder::new("site-one", "https://example.com/one.git", "token-one").build(),
+ SiteBuilder::new("site-two", "https://example.com/two.git", "token-two").build(),
+ ];
+ let config = test_config_with_sites(dir, sites);
+
+ // Rate limit of 1 per minute
+ let server = TestServer::start_with_rate_limit(config, 1).await;
+
+ // token-one: first request succeeds
+ let resp1 = TestServer::client()
+ .post(server.url("/site-one"))
+ .header("Authorization", "Bearer token-one")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp1.status().as_u16(), 202);
+
+ // token-one: second request hits rate limit
+ let resp2 = TestServer::client()
+ .post(server.url("/site-one"))
+ .header("Authorization", "Bearer token-one")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp2.status().as_u16(), 429);
+
+ // token-two: still has its own budget
+ let resp3 = TestServer::client()
+ .post(server.url("/site-two"))
+ .header("Authorization", "Bearer token-two")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp3.status().as_u16(), 202);
+}
+
+#[tokio::test]
+async fn rate_limit_checked_after_auth() {
+ let dir = tempfile::tempdir().unwrap().keep();
+ let site = SiteBuilder::new("my-site", "https://example.com/repo.git", "secret-token").build();
+ let config = test_config_with_site(dir, site);
+ let server = TestServer::start_with_rate_limit(config, 1).await;
+
+ // Exhaust rate limit
+ let _ = TestServer::client()
+ .post(server.url("/my-site"))
+ .header("Authorization", "Bearer secret-token")
+ .send()
+ .await
+ .unwrap();
+
+ // Wrong token should get 401, not 429
+ let resp = TestServer::client()
+ .post(server.url("/my-site"))
+ .header("Authorization", "Bearer wrong-token")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 401);
+}
diff --git a/tests/integration/runtime.rs b/tests/integration/runtime.rs
new file mode 100644
index 0000000..d5a9635
--- /dev/null
+++ b/tests/integration/runtime.rs
@@ -0,0 +1,61 @@
+/// Check if a container runtime (podman or docker) is available and responsive.
+pub fn is_container_runtime_available() -> bool {
+ for runtime in &["podman", "docker"] {
+ if std::process::Command::new(runtime)
+ .args(["info"])
+ .stdout(std::process::Stdio::null())
+ .stderr(std::process::Stdio::null())
+ .status()
+ .map(|s| s.success())
+ .unwrap_or(false)
+ {
+ return true;
+ }
+ }
+ false
+}
+
+/// Macro that skips the current test with an explicit message when
+/// no container runtime is available.
+///
+/// Usage: `skip_without_runtime!();`
+macro_rules! skip_without_runtime {
+ () => {
+ if !crate::runtime::is_container_runtime_available() {
+ eprintln!("SKIPPED: no container runtime (podman/docker) found");
+ return;
+ }
+ };
+}
+
+/// Macro that skips the current test with an explicit message when
+/// git is not available.
+macro_rules! skip_without_git {
+ () => {
+ if !crate::git_helpers::is_git_available() {
+ eprintln!("SKIPPED: git not found");
+ return;
+ }
+ };
+}
+
+/// Return the name of an available container runtime ("podman" or "docker"),
+/// falling back to "podman" when neither is responsive.
+pub fn detect_container_runtime() -> &'static str {
+ for runtime in &["podman", "docker"] {
+ if std::process::Command::new(runtime)
+ .args(["info"])
+ .stdout(std::process::Stdio::null())
+ .stderr(std::process::Stdio::null())
+ .status()
+ .map(|s| s.success())
+ .unwrap_or(false)
+ {
+ return runtime;
+ }
+ }
+ "podman"
+}
+
+pub(crate) use skip_without_git;
+pub(crate) use skip_without_runtime;
diff --git a/tests/integration/secrets.rs b/tests/integration/secrets.rs
new file mode 100644
index 0000000..f07c2a0
--- /dev/null
+++ b/tests/integration/secrets.rs
@@ -0,0 +1,74 @@
+use crate::harness::{self, SiteBuilder, TestServer};
+
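+// Two secret-resolution paths are exercised: `${ENV_VAR}` placeholders inside
+// `webhook_token`, and `webhook_token_file` contents (read from disk and
+// trimmed of surrounding whitespace before comparison).
+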
+/// Tier 1: env-var token resolves and auth works
+#[tokio::test]
+async fn env_var_token_auth() {
+ let var_name = "WITRYNA_INTEG_SECRET_01";
+ let token_value = "env-resolved-secret-token";
+ // SAFETY: test-only, called before spawning server
+ unsafe { std::env::set_var(var_name, token_value) };
+
+ let dir = tempfile::tempdir().unwrap().keep();
+ let site = SiteBuilder::new(
+ "secret-site",
+ "https://example.com/repo.git",
+ &format!("${{{var_name}}}"),
+ )
+ .build();
+ let config = harness::test_config_with_site(dir, site);
+ let server = TestServer::start(config).await;
+
+ // Valid token → 202 (accepted; the fake repo only matters once the build actually runs)
+ let resp = TestServer::client()
+ .post(server.url("secret-site"))
+ .header("Authorization", format!("Bearer {token_value}"))
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status(), 202);
+
+ // Wrong token → 401
+ let resp = TestServer::client()
+ .post(server.url("secret-site"))
+ .header("Authorization", "Bearer wrong-token")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status(), 401);
+
+ // SAFETY: test-only cleanup
+ unsafe { std::env::remove_var(var_name) };
+}
+
+/// Tier 1: file-based token resolves and auth works
+#[tokio::test]
+async fn file_token_auth() {
+ let token_value = "file-resolved-secret-token";
+ let dir = tempfile::tempdir().unwrap().keep();
+ let token_path = dir.join("webhook_token");
+ std::fs::write(&token_path, format!(" {token_value} \n")).unwrap();
+
+ let site = SiteBuilder::new("file-site", "https://example.com/repo.git", "")
+ .webhook_token_file(token_path)
+ .build();
+ let config = harness::test_config_with_site(dir, site);
+ let server = TestServer::start(config).await;
+
+ // Valid token → 202
+ let resp = TestServer::client()
+ .post(server.url("file-site"))
+ .header("Authorization", format!("Bearer {token_value}"))
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status(), 202);
+
+ // Wrong token → 401
+ let resp = TestServer::client()
+ .post(server.url("file-site"))
+ .header("Authorization", "Bearer wrong-token")
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status(), 401);
+}
diff --git a/tests/integration/sighup.rs b/tests/integration/sighup.rs
new file mode 100644
index 0000000..23c0dfd
--- /dev/null
+++ b/tests/integration/sighup.rs
@@ -0,0 +1,149 @@
+use crate::harness::{SiteBuilder, TestServer, test_config_with_site};
+use serial_test::serial;
+use std::time::Duration;
+
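+// Reload contract exercised below: SIGHUP re-reads the config file at
+// `state.config_path`; the `sites` list is reloadable, while `listen_address`
+// stays whatever the process started with.
+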
+/// Send SIGHUP to the current process.
+fn send_sighup_to_self() {
+ use nix::sys::signal::{Signal, kill};
+ use nix::unistd::Pid;
+
+ kill(Pid::this(), Signal::SIGHUP).expect("failed to send SIGHUP");
+}
+
+/// Install the SIGHUP handler and wait for it to be registered.
+async fn install_sighup_handler(server: &TestServer) {
+ witryna::test_support::setup_sighup_handler(&server.state);
+ // Yield to allow the spawned handler task to register the signal listener
+ tokio::task::yield_now().await;
+ tokio::time::sleep(std::time::Duration::from_millis(50)).await;
+}
+
+#[tokio::test]
+#[serial]
+async fn sighup_reload_keeps_server_healthy() {
+ let dir = tempfile::tempdir().unwrap().keep();
+ let site = SiteBuilder::new("my-site", "https://example.com/repo.git", "test-token").build();
+ let server = TestServer::start(test_config_with_site(dir, site)).await;
+
+ install_sighup_handler(&server).await;
+
+ // Verify server is healthy before SIGHUP
+ let resp = TestServer::client()
+ .get(server.url("/health"))
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 200);
+
+ // Send SIGHUP (reload config)
+ send_sighup_to_self();
+
+ // Give the handler time to process
+ tokio::time::sleep(std::time::Duration::from_millis(500)).await;
+
+ // Server should still be healthy
+ let resp = TestServer::client()
+ .get(server.url("/health"))
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 200);
+}
+
+#[tokio::test]
+#[serial]
+async fn rapid_sighup_does_not_crash() {
+ let dir = tempfile::tempdir().unwrap().keep();
+ let site = SiteBuilder::new("my-site", "https://example.com/repo.git", "test-token").build();
+ let server = TestServer::start(test_config_with_site(dir, site)).await;
+
+ install_sighup_handler(&server).await;
+
+ // Send multiple SIGHUPs in quick succession
+ for _ in 0..3 {
+ send_sighup_to_self();
+ tokio::time::sleep(std::time::Duration::from_millis(50)).await;
+ }
+
+ // Wait for stabilization
+ tokio::time::sleep(std::time::Duration::from_millis(500)).await;
+
+ // Server should survive
+ let resp = TestServer::client()
+ .get(server.url("/health"))
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 200);
+}
+
+#[tokio::test]
+#[serial]
+async fn sighup_preserves_listen_address() {
+ let dir = tempfile::tempdir().unwrap().keep();
+ let site = SiteBuilder::new("my-site", "https://example.com/repo.git", "test-token").build();
+ let server = TestServer::start(test_config_with_site(dir, site)).await;
+
+ install_sighup_handler(&server).await;
+
+ // Verify server is healthy before SIGHUP
+ let resp = TestServer::client()
+ .get(server.url("/health"))
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 200);
+
+ // Rewrite the on-disk config with a different listen_address (unreachable port)
+ // and an additional site to verify reloadable fields are updated
+ let config_path = server.state.config_path.as_ref();
+ let new_toml = format!(
+ r#"listen_address = "127.0.0.1:19999"
+container_runtime = "podman"
+base_dir = "{}"
+log_dir = "{}"
+log_level = "debug"
+
+[[sites]]
+name = "my-site"
+repo_url = "https://example.com/repo.git"
+branch = "main"
+webhook_token = "test-token"
+
+[[sites]]
+name = "new-site"
+repo_url = "https://example.com/new.git"
+branch = "main"
+webhook_token = "new-token"
+"#,
+ server.state.config.read().await.base_dir.display(),
+ server.state.config.read().await.log_dir.display(),
+ );
+ tokio::fs::write(config_path, &new_toml).await.unwrap();
+
+ // Send SIGHUP to reload
+ send_sighup_to_self();
+ tokio::time::sleep(Duration::from_millis(500)).await;
+
+ // Server should still respond on the original port (listen_address preserved)
+ let resp = TestServer::client()
+ .get(server.url("/health"))
+ .send()
+ .await
+ .unwrap();
+ assert_eq!(resp.status().as_u16(), 200);
+
+ // Verify the reloadable field (sites) was updated
+ let config = server.state.config.read().await;
+ assert_eq!(config.sites.len(), 2, "sites should have been reloaded");
+ assert!(
+ config.find_site("new-site").is_some(),
+ "new-site should exist after reload"
+ );
+
+ // Verify non-reloadable field was preserved (not overwritten with "127.0.0.1:19999")
+ assert_ne!(
+ config.listen_address, "127.0.0.1:19999",
+ "listen_address should be preserved from original config"
+ );
+}