Plug into your existing switch. Dedicated local AI. Doesn't touch your router. Works in 10 minutes.
You already have the network. You have UniFi, or OPNsense, or whatever you've built. We're not touching any of it. We're just adding the local AI inference node your stack doesn't have yet: dedicated ARM AI silicon, standard Docker, SSH always open. Plug it into your existing switch and walk away.
If you don't have a network — ISP router, nothing else, paying Plex $7/month — we're building that device too. One box, one cable into the wall. RigidNode Complete — router, switch, WiFi, NAS, and local AI. Coming soon.
| Component | Spec | Notes |
|---|---|---|
| AI Compute | Jetson Orin NX 16GB — 157 TOPS | Dedicated AI silicon. Not a GPU bolted onto a PC. |
| Unified RAM | 16GB LPDDR5 | Ceiling of the 260-pin family — Orin NX 16GB is the top. |
| AI Model Class | Llama 13B / Qwen 14B (Q4) | Genuinely useful. Not GPT-4. Not an 8B toy either. |
| Inference Speed | ~30–40 tok/sec (13B Q4) | Conversational. 2 concurrent users comfortable. |
| OS Storage | 1TB NVMe (NV3 M.2) | OS + AI models + Docker. Slot 2 open for expansion. |
| NAS Storage | 6TB HDD (CMR 3.5") | Internal. ~$26.67/TB sweet spot. |
| Router | ER605 V2 — dedicated silicon | Hardware-isolated. Jetson reboots don't drop the network. |
| Switch | 8-port Gigabit (TL-SG108) | 6 open ports for APs and wired devices. |
| WiFi | 2× EAP610 WiFi 6 APs | Wall-mounted. Omada SDN. Whole-home coverage. |
| Power | 12V 10A single-rail PSU (120W) | Peak load ~53W. 2× headroom. |
| Enclosure | Custom — KI Designed & Manufactured | Aluminum. Shared form factor across all three tiers. |
| v1 Price | $1,999 estimated MSRP target | Off-the-shelf parts. Margin improves on v2 custom carrier. |
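The storage and power figures above are simple arithmetic, and they check out. A quick sketch to verify them (the ~$160 drive price is inferred from the $/TB figure, not quoted anywhere in the table):

```python
# Sanity-check the spec-table arithmetic.
HDD_PRICE_USD = 160.0    # inferred from the ~$26.67/TB figure; not an official price
HDD_CAPACITY_TB = 6      # 6TB CMR 3.5"
PSU_WATTS = 120          # 12V 10A single rail
PEAK_LOAD_WATTS = 53     # stated peak draw of the whole box

cost_per_tb = HDD_PRICE_USD / HDD_CAPACITY_TB
headroom = PSU_WATTS / PEAK_LOAD_WATTS

print(f"${cost_per_tb:.2f}/TB")           # ~$26.67/TB, matching the table
print(f"{headroom:.1f}x power headroom")  # a bit over the claimed 2x
```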
| Module | Connector | RAM | AI Compute | Max Model | ChatGPT / Claude | RigidNode Tier |
|---|---|---|---|---|---|---|
| Standard Family — 260-pin SO-DIMM — One carrier board, all modules accepted | | | | | | |
| Orin Nano Super | 260-pin | 8GB | 67 TOPS | Llama 8B | Free / Free | Entry / RigidHearth |
| Orin NX 8GB | 260-pin | 8GB | 70 TOPS | Llama 8B | Free / Free | Standard Base |
| Orin NX 16GB — ceiling of this family | 260-pin | 16GB | 157 TOPS | Llama 13B | Plus / Pro | v1 Target |
| Pro Family — 699-pin Mezzanine (AGX Orin) — Separate carrier board | | | | | | |
| AGX Orin 32GB | 699-pin | 32GB | 200 TOPS | Llama 34B | Pro / Pro Max | Pro Entry |
| AGX Orin 64GB | 699-pin | 64GB | 275 TOPS | Llama 70B | GPT-4 / Claude Full | Pro Max |
| Ultra Family — Jetson Thor connector · Blackwell GPU · Aug 2025 — Third carrier board | | | | | | |
| Jetson T4000 | Thor | 64GB | 1,200 TFLOPS FP4 | 70B+ concurrent | Beyond subscription | Ultra Entry |
| Jetson T5000 — Blackwell | Thor | 128GB | 2,070 TFLOPS FP4 | Multiple 70B concurrent | Beyond subscription | Ultra Max |
* ChatGPT / Claude column shows approximate cloud subscription tier with equivalent capability. Local inference — your data never leaves the hardware.
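The RAM-to-model-class pairings in the table follow a simple sizing rule. As a rough sketch (the 0.6 GB-per-billion-parameters figure and the 3GB reserve are assumptions, not NVIDIA guidance), Q4-quantized weights plus a reserve for the OS, runtime, and KV cache must fit in unified RAM:

```python
# Crude sizing rule (assumed, not official): Q4 weights take roughly
# 0.6 GB per billion parameters, and the OS, inference runtime, and
# KV cache need a few GB of unified RAM on top of the weights.
GB_PER_BILLION_Q4 = 0.6
RESERVE_GB = 3.0  # assumed headroom for OS + runtime + KV cache

def fits(params_billion: float, ram_gb: float) -> bool:
    """True if a Q4 model of this size plausibly fits in unified RAM."""
    return params_billion * GB_PER_BILLION_Q4 + RESERVE_GB <= ram_gb

modules = {"Orin Nano 8GB": 8, "Orin NX 16GB": 16,
           "AGX Orin 32GB": 32, "AGX Orin 64GB": 64}
models = [("Llama 8B", 8), ("Llama 13B", 13),
          ("Llama 34B", 34), ("Llama 70B", 70)]  # ascending by size

for module, ram in modules.items():
    fitting = [name for name, p in models if fits(p, ram)]
    print(f"{module}: largest Q4 fit = {fitting[-1] if fitting else 'none'}")
```

Under those assumptions the rule reproduces the table: 8GB tops out at 8B-class, 16GB at 13B, 32GB at 34B, and 64GB at 70B.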
| Module | Connector | RAM (VRAM) | AI Compute | Max Model | ChatGPT / Claude | RigidNode Tier Equivalent |
|---|---|---|---|---|---|---|
| RTX 40 Series — Ada Lovelace — PCIe x16 | | | | | | |
| RTX 4060 | PCIe x16 | 8GB VRAM | ~136 TOPS | Llama 8B | Free / Free | Entry |
| RTX 4070 | PCIe x16 | 12GB VRAM | ~165 TOPS | Llama 13B | Plus / Pro | Standard |
| RTX 4080 | PCIe x16 | 16GB VRAM | ~780 TOPS | Llama 13B | Plus / Pro | Standard |
| RTX 4090 | PCIe x16 | 24GB VRAM | ~1,321 TOPS | Llama 34B | Pro / Pro Max | Pro Entry |
| RTX 50 Series — Blackwell — PCIe x16 | | | | | | |
| RTX 5060 | PCIe x16 | 8GB VRAM | ~612 TOPS (FP4) | Llama 8B | Free / Free | Entry |
| RTX 5070 | PCIe x16 | 12GB VRAM | ~838 TOPS (FP4) | Llama 13B | Plus / Pro | Standard |
| RTX 5070 Ti | PCIe x16 | 16GB VRAM | ~1,024 TOPS (FP4) | Llama 13B | Plus / Pro | Standard |
| RTX 5080 | PCIe x16 | 16GB VRAM | ~1,421 TOPS (FP4) | Llama 13B | Plus / Pro | Standard |
| RTX 5090 | PCIe x16 | 32GB VRAM | ~1,792 TOPS (FP4) | Llama 70B | GPT-4 / Claude Full | Pro Max |
* Discrete GPU cards require a full desktop PC — CPU, motherboard, OS, and power supply not included. VRAM is the primary limit for local model size. RTX 50-series TOPS figures use FP4 precision and are not directly comparable to RTX 40-series INT8 TOPS, nor to Jetson figures, which assume a unified memory architecture. Host PC OS telemetry and cloud connectivity mean discrete GPU inference is not sovereign by default.
I’m a solo dad. Three kids at home — Liam builds and scans, Kathryn makes digital art, Emily runs operations. Connor manages the archive remotely. Their photos, their creative work, four generations of family heritage, and the AI that makes sense of it all — it lives on hardware we own, governed by Three Laws I designed, in a building we control.
My great-grandfather Thomas came to America, got a mechanical engineering degree from the University of Detroit in 1931, built The Wood Shop, and watched industrial scale take it. That was the founding wound. I built KI so nothing we make gets taken.
Layer 0 costs nothing beyond the hardware. Everything local, everything sovereign, everything governed. Here’s what the ecosystem looks like when you’re ready to expand.
30 years of mechanical systems engineering — AutoCAD at 9, first professional CAD job in 1996, GM Global Technical Operations through 2025. Five patent filings. I am not a software person who decided to do hardware. I am a mechanical systems engineer who got tired of watching good hardware get buried under subscription paywalls.
I'm Q1 now. I have the UniFi stack, the NAS, the Jetsons, the whole thing. I've been Q2 more times than I can count — garbage ISP router, paying Plex $7/month, photos on Google's servers because there was no real answer. Nobody built the Q2 device. So I'm building it.
RigidNode Home Node is for Q1. Plugs into your existing stack. Doesn't touch your router. Dedicated local AI inference, sovereign storage, runs whatever Docker containers you bring.
RigidNode Complete is for Q2. One box, one cable into the wall. Everything included. The device I needed every time I moved into a new place with nothing and no good answer.
We're not here to pitch you. If you're Q1 — you already have a network, you're running UniFi or OPNsense or whatever you've built, you just want a dedicated local AI node to complete your stack — that's the Home Node. Plugs into your switch. Doesn't touch your router. If you're Q2 — ISP router, nothing built yet, want to replace everything in one box — that's RigidNode Complete, coming soon. Either way: tell us what we got wrong. Tell us if the Orin NX 16GB is the wrong module for the plug-in tier. Tell us what's missing from the Docker stack. We're listening before we're selling.
Show and tell incoming — actual hardware photos, enclosure design, power architecture. The plug-in AI node that doesn't touch your UniFi setup. Hard questions welcome.
Docker Compose stack going public on GitHub. See exactly what runs. Bring your own containers. Tell us what's missing. No black box.
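The stack isn't public yet, so purely as a hypothetical sketch of what one slice of a Compose file on this hardware could look like (the service name, image, and paths here are assumptions, not the published stack):

```yaml
# Hypothetical sketch only — the published Compose stack may differ.
services:
  ollama:                                # assumed local-LLM service
    image: ollama/ollama:latest
    ports:
      - "11434:11434"                    # Ollama's default API port
    volumes:
      - /mnt/nvme/models:/root/.ollama   # assumed model path on the 1TB NVMe
    restart: unless-stopped
```

Bring-your-own-container means anything else you run at home — Jellyfin, Home Assistant, Immich — slots in as another service in the same file.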
Orin NX 16GB vs AGX Orin 32GB vs Mac Mini M4 benchmark numbers coming. Real tok/sec on real models. No cherry-picking.
Kavanagh Industries · Always on