FrogNet · Open Source · Self-Forming Mesh

Architecture

How FrogNet works.
From first contact to last byte.

FrogNet is a behavioral system: a set of rules that govern how ordinary machines observe their environment, form temporary agreements about reality, and continue operating even as those agreements fall apart.

Node architecture Four processes. One node. Complete autonomy.

Every FrogNet node runs four cooperating processes. Traffic from external clients hits port 80; the proxy decides how to handle it. This separates deciding what to send (the proxy) from executing what arrives (the daemon).

PROXY (:80): intercept, template, SAME/DIFF path decision, encoding.
DAEMON (:9009): execute, decode, compress; 256-worker thread pool.
APACHE (:8080): LAMP, PHP, api.php; dashboards and sensors.
MYSQL (:3306): templates, cache; transient database.

Local destination → Apache direct. Fast link → FAST path. Slow link → SEMANTIC path. Inter-node traffic is FNW1 binary frames only: up to 128 parallel requests, sequence-tagged, with out-of-order delivery.
Discovery Finding the network — without flooding it.

Full-stack echo, not ping. The only proof a node is reachable is a successful frognet_echo through the full semantic stack: proxy → daemon → Apache → identity response. If any link is broken, the echo fails. Discovery never produces phantom routes.

BFS with eight seed sources, among them DHCP leases, gateway addresses, transit peers, the discovery cache (MySQL), WireGuard state files, and kernel route scans. Each candidate is probed, validated, and either routed or logged with a reason for failure. Max BFS depth: 2.
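A minimal sketch of this depth-limited discovery loop, in Python. The `probe` callback is hypothetical and stands in for the full-stack frognet_echo; the queue discipline, the depth-2 cap, and the reason-logged failures follow the rules above.

```python
from collections import deque

MAX_DEPTH = 2  # FrogNet caps BFS at depth 2

def discover(seeds, probe, max_depth=MAX_DEPTH):
    """Breadth-first discovery: every candidate is probed through the full
    stack; failures are logged with a reason, never installed as routes.
    probe(addr) -> (ok, neighbors, reason) is a hypothetical callback."""
    routed, failed = [], []
    seen = set(seeds)
    queue = deque((addr, 0) for addr in seeds)
    while queue:
        addr, depth = queue.popleft()
        ok, neighbors, reason = probe(addr)
        if not ok:
            failed.append((addr, reason))  # logged, not masked
            continue
        routed.append(addr)
        if depth < max_depth:              # stop expanding at depth 2
            for n in neighbors:
                if n not in seen:
                    seen.add(n)
                    queue.append((n, depth + 1))
    return routed, failed
```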

BLDC-1 compression on discovery. Discovery queries go through the semantic proxy, so they benefit from SAME/DIFF automatically. Subsequent discovery cycles cost near-zero bytes. 128 requests fire in parallel. Message coalescing combines responses to the same destination.

Event-driven. Discovery fires only on qualifying network events — interface up, DHCP lease, tunnel change. The merge controller acquires an exclusive lock, flushes the cache, rebuilds from scratch, installs only routes justified by current evidence. No "probably still OK" state.

Route policy: Non-transit (local LAN) always beats transit (WireGuard tunnel), regardless of RTT. Within same class, RTT determines winner.
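The policy reduces to a two-level comparison, sketched here in Python (the route tuples are illustrative, not FrogNet's actual data structures):

```python
def better_route(a, b):
    """Pick between two candidate routes per FrogNet policy: non-transit
    (local LAN) always beats transit (WireGuard tunnel) regardless of RTT;
    within the same class, lower RTT wins.
    Routes are (is_transit: bool, rtt_ms: float, addr: str) tuples."""
    # Sort key: transit flag first (False sorts before True), then RTT.
    return min(a, b, key=lambda r: (r[0], r[1]))
```

A local route with 80 ms RTT therefore still beats a 12 ms tunnel, because the class comparison happens before RTT is ever consulted.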

Request flow A message crosses a 4,800-baud narrowband link in 70ms instead of 17 seconds.
1. Interception. Proxy receives HTTP POST, parses the Host header to determine the destination.
2. Path decision. EWMA-smoothed RTT. Above 250ms → SEMANTIC. Below 200ms → FAST. 50ms hysteresis, 30s hold-down.
3. Template lookup. Templates learned automatically from real traffic. Keyed by (method, semantic_path).
4. Semantic encoding. Extract dynamic values, diff-encode against the previous request. 10KB POST → ~50-byte FNW1 frame.
5. Wire transit. ~80ms instead of ~17s. No HTTP on the wire — no headers, no status codes, no chunked encoding.
6. Daemon execution. Reconstruct HTTP from templates, execute against local Apache on :8080. The application sees standard HTTP.
7. Response compression. Hash comparison. Unchanged → RESP_SAME (21 bytes). Changed → RESP_DIFF with only the changed fields.
8. Reconstruction. Proxy rebuilds the full HTTP response. The client has no idea a radio link was involved.
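Step 2 is the subtle one: the thresholds, hysteresis band, and hold-down interact. A sketch of that decision logic, with names, the smoothing factor, and the starting path chosen as assumptions (only the 250ms/200ms thresholds, 50ms gap, and 30s hold-down come from the text):

```python
import time

SEMANTIC, FAST = "SEMANTIC", "FAST"

class PathDecider:
    """EWMA-smoothed RTT drives the path choice: above 250 ms -> SEMANTIC,
    below 200 ms -> FAST. The 50 ms gap is the hysteresis band, and a 30 s
    hold-down prevents the path from flapping on noisy links."""
    def __init__(self, alpha=0.2, hold_down_s=30.0, clock=time.monotonic):
        self.alpha = alpha              # EWMA smoothing factor (assumed)
        self.hold_down_s = hold_down_s
        self.clock = clock
        self.ewma = None
        self.path = FAST                # starting path (assumed)
        self.last_switch = -float("inf")

    def observe(self, rtt_ms):
        self.ewma = rtt_ms if self.ewma is None else (
            self.alpha * rtt_ms + (1 - self.alpha) * self.ewma)
        now = self.clock()
        if now - self.last_switch < self.hold_down_s:
            return self.path            # hold-down: keep the current path
        if self.ewma > 250 and self.path != SEMANTIC:
            self.path, self.last_switch = SEMANTIC, now
        elif self.ewma < 200 and self.path != FAST:
            self.path, self.last_switch = FAST, now
        return self.path
```

RTT samples between 200 ms and 250 ms change nothing: that dead band is the hysteresis, and the hold-down adds a time floor on top of it.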
FNW1 protocol Dedicated channels. Minimum overhead. Enhanced security.

Binary frame protocol. Format: [frame_len:4][FNW1:4][op:1][payload...]

Dir   Type         Op     Cost     Purpose
P→D   REQ_FULL     0x01   ~280B    First request — full semantic packet
P→D   REQ_REPEAT   0x02   21B      Identical — hash only
P→D   REQ_DIFF     0x04   ~50B     Changed fields only
D→P   RESP_SAME    0x13   21B      Unchanged — use cached
D→P   RESP_DIFF    0x11   ~85B     Changed fields only
D→P   RESP_RAW     0x14   varies   Uncompressible — raw
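The frame layout above can be packed and parsed in a few lines. This sketch assumes big-endian integers and a length field that counts only the bytes after it; the spec above fixes the field order but not those details.

```python
import struct

MAGIC = b"FNW1"

def pack_frame(op: int, payload: bytes) -> bytes:
    """Build [frame_len:4][FNW1:4][op:1][payload...].
    Byte order and length semantics are assumptions, not from the spec."""
    body = MAGIC + struct.pack("!B", op) + payload
    return struct.pack("!I", len(body)) + body

def unpack_frame(buf: bytes):
    """Parse a frame, rejecting anything that does not speak FNW1."""
    (length,) = struct.unpack_from("!I", buf, 0)
    body = buf[4:4 + length]
    if body[:4] != MAGIC:
        raise ValueError("not an FNW1 frame")
    return body[4], body[5:]            # (op, payload)
```

Rejecting on the magic bytes mirrors the design note below: peers that do not speak FNW1 are dropped before any payload is interpreted.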

Purpose-built socket pairs between known peers. HELLO handshake authenticates. Rejects anything that doesn't speak FNW1. The attack surface is the protocol itself — minimal by design.

SAME/DIFF The single most impactful optimization — 42 bytes for a complete round-trip.
Scenario                 Raw HTTP    FNW1    Savings   @ 4800 bps
Sensor POST (first)      ~10,200B    ~280B   97.3%     17s → 0.5s
Sensor POST (changed)    ~10,200B    ~85B    99.2%     17s → 0.14s
Sensor POST (same)       ~10,200B    42B     99.6%     17s → 70ms
Dashboard (unchanged)    ~45,000B    42B     99.9%     75s → 70ms

SAME_ID is a 16-byte token derived from HMAC-SHA256, serving as both content address and authenticator. It cannot be forged without knowing both the response content and the shared secret, which prevents cache poisoning across nodes.
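A sketch of the token derivation, assuming the 16-byte size comes from truncating the 32-byte HMAC-SHA256 output (the text states the size and the primitive; the exact derivation is an assumption):

```python
import hmac
import hashlib

def same_id(response_body: bytes, shared_secret: bytes) -> bytes:
    """SAME_ID sketch: truncate HMAC-SHA256(secret, content) to 16 bytes.
    The token addresses the cached content and authenticates it at once:
    without both the content and the secret it cannot be forged, so one
    node cannot poison another node's cache with a guessed token."""
    return hmac.new(shared_secret, response_body, hashlib.sha256).digest()[:16]
```

Because the token is keyed, two nodes holding the same secret independently derive the same SAME_ID for the same content, which is what lets RESP_SAME stay at 21 bytes.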

The merge The machine stops trusting what it believed before.

Triggered by network events — interface up, DHCP lease, remote host announcement. The machine rebuilds its understanding from scratch. Probes neighbors, verifies reachability, installs only routes justified by current evidence, discards everything else.

A new /etc/hosts is built in staging and compared to the active one. Database leadership is computed deterministically: highest-IP active node wins. No elections. No heartbeats. No coordinator. Every machine reaches the same conclusion independently because every machine applies the same rules to the same observations.
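The leadership rule is small enough to state as code. A sketch, comparing addresses numerically rather than as strings (the function name is illustrative):

```python
import ipaddress

def elect_leader(active_nodes):
    """Deterministic database leadership: the active node with the
    numerically highest IP wins. No election protocol, no heartbeats;
    every node applies this rule to the same membership list and
    independently reaches the same answer."""
    return max(active_nodes, key=lambda ip: int(ipaddress.ip_address(ip)))
```

The numeric conversion matters: a lexicographic comparison would rank "10.0.0.9" above "10.0.0.30", and two nodes sorting differently would disagree on the leader.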

Design invariants Rules that must never be violated.

Network completeness > Speed > Brevity. A correct 50-byte response is always preferred over a fast but wrong response.

Never mask failure. No 2>/dev/null. No silent timeouts. Errors propagate with full context.

No distributed consensus. Nodes converge through deterministic template IDs and SAME/DIFF, not synchronization.

Fail closed on slow links. No template + slow link = refuse the request. Better to fail visibly than saturate the link for 17 seconds.
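The fail-closed rule as a guard clause, with names and the slow-link threshold chosen as illustrative assumptions (the 250 ms figure reuses the SEMANTIC-path threshold from the request flow above):

```python
SLOW_RTT_MS = 250  # threshold above which a link counts as slow (assumed)

class NoTemplateOnSlowLink(Exception):
    """Raised instead of forwarding raw HTTP over a narrowband link."""

def admit(method, semantic_path, link_rtt_ms, templates):
    """Fail-closed sketch: with no learned template and a slow link, the
    proxy refuses the request outright rather than ship ~10KB of raw HTTP
    and saturate the link for ~17 seconds. On fast links the request is
    admitted so a template can be learned from the real traffic."""
    key = (method, semantic_path)
    if key not in templates and link_rtt_ms > SLOW_RTT_MS:
        raise NoTemplateOnSlowLink(f"no template for {key}; refusing on slow link")
    return key
```

The exception is the point: per the "never mask failure" invariant, the refusal propagates with full context instead of degrading into a silent 17-second stall.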

The papers Four papers. One architectural narrative.

I. How the Cloud Broke the Internet — The problem statement. Centralization traded the ARPANET's design for convenience.

II. The Architecture — FrogNet's answer. Sovereign nodes, self-forming mesh, transport agnosticism.

III. The Semantic Layer — BLDC-1 and FNW1 in depth. How 93.8% compression works.

IV. Finishing the Thought — Sensor platform, AI host, transient database, defensive posture.