Kavanagh Industries LLC — Clinton Township, MI

RigidNode
Home Node

Plug into your existing switch. Dedicated local AI. Doesn't touch your router. Works in 10 minutes.

You already have the network. You have UniFi, or OPNsense, or whatever you've built. We're not touching any of it. We're just adding the dedicated local AI inference node your stack doesn't have yet — dedicated ARM AI silicon, runs standard Docker, SSH always open, plug it into your existing switch and walk away.

If you don't have a network — ISP router, nothing else, paying Plex $7/month — we're building that device too. One box, one cable into the wall. RigidNode Complete — router, switch, WiFi, NAS, and local AI. Coming soon.

user@rigidnode:~$ docker ps | grep -E "ollama|nextcloud|jellyfin|adguard|wireguard|suricata|samba|immich"
ollama · nextcloud · jellyfin · adguardhome · wireguard · suricata · samba · immich
✓ Plugs Into Existing Network · ✓ Standard Docker · ✓ Doesn't Touch Your Router · v1 — Apr 2026 — Jetson Orin NX 16GB · RigidNode Complete — All-in-One — Coming Soon · Beta — Real Limitations Below
01
Three Connector Families. One Platform.
Each board family is a distinct PCB design. Same software stack, different compute ceiling. Designed in parallel.
Standard — v1 Shipping
RigidNode Standard
260-pin SO-DIMM · 69.6mm × 45mm
  • Orin Nano Super — 67 TOPS / 8GB
  • Orin NX 8GB — 70 TOPS / 8GB
  • Orin NX 16GB ← v1 — 157 TOPS / 16GB
One carrier board design accepts all three modules. Upgrade from Nano → NX 16GB by swapping the module — no new board, no new enclosure.
From $1,999
v1 hardware in development — KI custom carrier board (v2) in PCB design
Pro — Parallel Development
Pro tier
699-pin Mezzanine · AGX Orin form factor
  • AGX Orin 32GB ← entry — 200 TOPS / 32GB
  • AGX Orin 64GB — 275 TOPS / 64GB
  • AGX Orin Industrial — 275 TOPS / 64GB
Different carrier board — same enclosure family, same software stack. 32GB makes 3–4 concurrent AI users real. 64GB is GPT-4 class locally. This is the true family-of-4 tier.
Contact for Pricing
Carrier board in parallel PCB design — AGX Orin reference design confirmed
Ultra — Parallel Development
RigidNode Ultra
Jetson Thor connector · Blackwell GPU · Aug 2025
  • T4000 ← entry — 1,200 TFLOPS / 64GB
  • T5000 — 2,070 TFLOPS / 128GB
Third connector family — third carrier board. Blackwell GPU, 7.5× the compute of AGX Orin. Module alone costs $2,000–$3,000. This is the defense, enterprise, and advanced robotics tier. Not a home product.
Contact for Pricing
Architecture defined — carrier board design follows Pro completion
Why three in parallel: The carrier boards share a design methodology and an enclosure family. Designing all three now costs almost nothing extra vs designing one. The alternative is redesigning the enclosure three separate times over three years. Same enclosure footprint. Different boards. Different modules. One manufacturing line.
02
What Actually Runs Inside
Every service as a Docker container on the Jetson host. Identical across all three tiers — only the model class changes with RAM.
AI Inference
Ollama + Open WebUI
docker: ollama · open-webui
Local LLM inference on dedicated NVIDIA AI silicon. No API calls. No data leaving the device. Model class scales with RAM — 13B on Standard, 34B on Pro, 70B+ on Ultra.
Standard: 13B max. Pro: 34B. Ultra: 70B+ runs comfortably.
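Those RAM ceilings follow from a sizing rule of thumb (an assumption, not a measured figure): a 4-bit quantized model needs roughly 0.5 bytes per parameter for weights, plus about 20% headroom for KV cache and runtime. A quick sketch:

```shell
# Rough Q4 sizing rule of thumb (assumption: ~0.5 bytes/param for
# 4-bit weights, plus ~20% for KV cache and runtime overhead).
for params_b in 8 13 34 70; do
  gb=$(awk -v p="$params_b" 'BEGIN { printf "%.1f", p * 0.5 * 1.2 }')
  echo "${params_b}B Q4 needs ~${gb} GB"
done
```

By this estimate, 13B lands near 8GB, which is why 16GB is the comfortable Standard ceiling; 34B (~20GB) needs the 32GB Pro entry; 70B (~42GB) wants 64GB Pro Max or Ultra.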
Private Cloud Storage
Nextcloud + Samba
docker: nextcloud · samba
Family file sync and photo backup. Replaces iCloud and Google Drive. SMB shares to Windows and Mac. Storage tier scales per edition.
v1: 6TB WD Blue internal. Pro/Ultra: NAS expansion configurable.
Media Server
Jellyfin
docker: jellyfin
Open-source, no Plex Pass, no remote access paywall. Hardware transcoding via NVDEC on all Jetson platforms. Streams to every device on your network.
Jellyfin app availability varies by TV brand — known limitation.
Network Security
AdGuard Home
docker: adguardhome
DNS-level ad and tracker blocking for every device on the network. No client installs. Xbox, smart TV, and phone all benefit automatically.
Covers the whole house, not just the browser.
Remote Access VPN
WireGuard
docker: wireguard
WireGuard VPN server for secure remote access. Access Nextcloud, Jellyfin, and the AI from anywhere without exposing ports.
Self-hosted — no Tailscale dependency, no $160M-VC enshittification risk.
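A sketch of what a client (peer) config for that WireGuard container might look like. Every key, IP range, and the endpoint hostname below is a placeholder, not a value shipped with the device:

```shell
# Sketch of a WireGuard client (peer) config. All keys, addresses, and
# the endpoint hostname are placeholders -- substitute your own values.
cat > wg0-client.conf <<'EOF'
[Interface]
PrivateKey = <client-private-key>
Address = 10.13.13.2/32
DNS = 10.13.13.1                 # AdGuard Home on the node

[Peer]
PublicKey = <server-public-key>
Endpoint = home.example.net:51820
AllowedIPs = 10.13.13.0/24       # split tunnel: route only the home subnet
PersistentKeepalive = 25
EOF
echo "config stanzas: $(grep -c '^\[' wg0-client.conf)"
```

The split-tunnel AllowedIPs keeps general phone traffic on its normal connection while Nextcloud, Jellyfin, and the AI stay reachable from anywhere.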
Intrusion Detection
Suricata IDS/IPS
docker: suricata
Network intrusion detection and prevention. Monitors traffic, alerts on threats, blocks known malicious patterns. Runs quietly in the background.
Requires tuning for low false positives on first run.
Photo Management
Immich
docker: immich
Self-hosted Google Photos replacement with ML-powered face recognition running locally on the Jetson NPU. Your family photos don't train anyone's model.
Fastest-growing self-hosted app for a reason.
Monitoring
Uptime Kuma + Glances
docker: uptime-kuma · glances
Service health monitoring and system metrics. Get alerted when something goes wrong. View CPU, memory, network, and disk from any browser.
Add your own Prometheus/Grafana stack if you prefer.
Add Your Own
Standard Docker Compose
docker compose up -d
Standard JetPack Linux OS. If a container runs on ARM64, it runs on RigidNode. We don't lock the app store. Bring your existing compose files.
Portainer available if you prefer a GUI. SSH access always open.
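As a concrete sketch of "bring your own container": the snippet below stages a compose file for one extra service. The uptime-kuma image is a real multi-arch image; the directory layout is just an example, and the actual deploy step is shown as a comment since it needs the node itself.

```shell
# Stage a compose file for one extra ARM64 service (example layout).
stack="$HOME/stacks/uptime-kuma"   # example path -- use your own layout
mkdir -p "$stack"
cat > "$stack/docker-compose.yml" <<'EOF'
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1   # multi-arch image, runs on ARM64
    ports:
      - "3001:3001"
    volumes:
      - ./data:/app/data
    restart: unless-stopped
EOF
# On the RigidNode, deploy with:
#   cd "$stack" && docker compose up -d
echo "staged $stack/docker-compose.yml"
```

Same pattern for any ARM64 container: one directory per stack, one compose file per directory, `docker compose up -d` to run it.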
03
Standard Tier — v1 Hardware Spec
Real parts, real prices. Off-shelf v1. Custom KI carrier board in v2. Pro and Ultra boards in parallel design.
AI Compute — Jetson Orin NX 16GB, 157 TOPS — Dedicated AI silicon. Not a GPU bolted onto a PC.
Unified RAM — 16GB LPDDR5 — Ceiling of the 260-pin family; Orin NX 16GB is the top.
AI Model Class — Llama 13B / Qwen 14B (Q4) — Genuinely useful. Not GPT-4. Not an 8B toy either.
Inference Speed — ~30–40 tok/sec (13B Q4) — Conversational. 2 concurrent users comfortable.
OS Storage — 1TB NVMe (NV3 M.2) — OS + AI models + Docker. Slot 2 open for expansion.
NAS Storage — 6TB HDD (CMR 3.5") — Internal. ~$26.67/TB sweet spot.
Router — ER605 V2, dedicated silicon — Hardware-isolated. Jetson reboots don't drop the network.
Switch — 8-port Gigabit (TL-SG108) — 6 open ports for APs and wired devices.
WiFi — 2× EAP610 WiFi 6 APs — Wall-mounted. Omada SDN. Whole-home coverage.
Power — 12V 10A single-rail PSU (120W) — Peak load ~53W. 2× headroom.
Enclosure — Custom, KI designed & manufactured — Aluminum. Shared form factor across all three tiers.
v1 Price — $1,999 estimated MSRP target — Off-shelf. Margin improves on v2 custom carrier.
v2 Standard carrier: RTL8370N switch chip + native SATA + clean 12V distribution on one board. Eliminates Waveshare + TL-SG108 + USB-SATA adapter. Designed in SolidWorks PCB, manufactured by JLCPCB. Pro and Ultra carrier boards in parallel design using the same AI toolchain.
04
Honest Limitations — Standard Tier
These are the Standard (260-pin) tier limits. Pro and Ultra resolve most of them at higher cost.
✓ What it does well
Local AI for a family — 13B runs privately, no API calls
Replace iCloud, Google Photos, Dropbox with your own hardware
Network-level ad blocking — every device, no client install
Jellyfin replaces Plex — no remote access paywall
Hardware-isolated router — AI crashes don't take the network down
Full function with zero cloud subscriptions
Standard Docker — add any ARM64 container
WireGuard VPN — self-hosted, no Tailscale
SSH always available — you own the hardware
✗ Standard tier limits (Pro/Ultra resolve)
70B models — 16GB RAM is the 260-pin ceiling. → Pro (32–64GB) resolves
4+ concurrent AI users — 2 comfortable, 4 queues → Pro resolves at 32GB
AI training/fine-tuning — inference only → Ultra (Thor) resolves
Self-hosted email — don't. Nobody wins that fight. Not resolved by any tier.
10GbE — GbE throughout. Enough for family NAS. → Pro/Ultra adds 25GbE
RAID redundancy — single drive on v1. Backup is your responsibility.
App Store abstraction — Docker Compose, not one-click Umbrel
First-gen hardware — v1.1 will follow. Early adopters accept this.
05
Complete Platform — All Three Board Families
Every module across every connector family. This is the full product map being designed in parallel.
Module | Connector | RAM | AI Compute | Max Model | ChatGPT / Claude | RigidNode Tier
Standard Family — 260-pin SO-DIMM — One carrier board, all modules accepted
Orin Nano Super | 260-pin | 8GB | 67 TOPS | Llama 8B | Free / Free | Entry / RigidHearth
Orin NX 8GB | 260-pin | 8GB | 70 TOPS | Llama 8B | Free / Free | Standard Base
Orin NX 16GB | 260-pin | 16GB | 157 TOPS (ceiling of this family) | Llama 13B | Plus / Pro | v1 Target
Pro Family — 699-pin Mezzanine (AGX Orin) — Separate carrier board
AGX Orin 32GB | 699-pin | 32GB | 200 TOPS | Llama 34B | Pro / Pro Max | Pro Entry
AGX Orin 64GB | 699-pin | 64GB | 275 TOPS | Llama 70B | GPT-4 / Claude Full | Pro Max
Ultra Family — Jetson Thor connector · Blackwell GPU · Aug 2025 — Third carrier board
Jetson T4000 | Thor | 64GB | 1,200 TFLOPS (FP4) | 70B+ concurrent | Beyond subscription | Ultra Entry
Jetson T5000 | Thor | 128GB | 2,070 TFLOPS (FP4) | Multiple 70B concurrent | Beyond subscription | Ultra Max

* ChatGPT / Claude column shows approximate cloud subscription tier with equivalent capability. Local inference — your data never leaves the hardware.

Connector compatibility note: The 260-pin family accepts Orin Nano and Orin NX modules interchangeably — module swap, no new board. The 699-pin family accepts AGX Orin 32GB, 64GB, and Industrial — module swap, no new board. Thor modules are exclusive to the Thor connector family. Crossing families requires a new carrier board. This is the correct architecture — each tier is a clean product SKU with a defined upgrade cliff.
GPU
PC Discrete GPU — Local AI Comparison
Same columns. Different architecture. Requires a full desktop PC host — not a standalone device.
Module | Connector | RAM (VRAM) | AI Compute | Max Model | ChatGPT / Claude | RigidNode Tier Equivalent
RTX 40 Series — Ada Lovelace — PCIe x16
RTX 4060 | PCIe x16 | 8GB VRAM | ~136 TOPS | Llama 8B | Free / Free | Entry
RTX 4070 | PCIe x16 | 12GB VRAM | ~165 TOPS | Llama 13B | Plus / Pro | Standard
RTX 4080 | PCIe x16 | 16GB VRAM | ~780 TOPS | Llama 13B | Plus / Pro | Standard
RTX 4090 | PCIe x16 | 24GB VRAM | ~1,321 TOPS | Llama 34B | Pro / Pro Max | Pro Entry
RTX 50 Series — Blackwell — PCIe x16
RTX 5060 | PCIe x16 | 8GB VRAM | ~612 TOPS (FP4) | Llama 8B | Free / Free | Entry
RTX 5070 | PCIe x16 | 12GB VRAM | ~838 TOPS (FP4) | Llama 13B | Plus / Pro | Standard
RTX 5070 Ti | PCIe x16 | 16GB VRAM | ~1,024 TOPS (FP4) | Llama 13B | Plus / Pro | Standard
RTX 5080 | PCIe x16 | 16GB VRAM | ~1,421 TOPS (FP4) | Llama 13B | Plus / Pro | Standard
RTX 5090 | PCIe x16 | 32GB VRAM | ~1,792 TOPS (FP4) | Llama 70B | GPT-4 / Claude Full | Pro Max

* Discrete GPU cards require a full desktop PC — CPU, motherboard, OS, and power supply not included. VRAM is the primary limit for local model size. RTX 50-series TOPS figures use FP4 precision and are not directly comparable to RTX 40-series INT8 TOPS, or to Jetson unified-memory-architecture TOPS. Host PC OS telemetry and cloud connectivity mean discrete GPU inference is not sovereign by default.

06
The KI Ecosystem
RigidNode is one pillar. Here's what surrounds it.
RigidNode™ Platform
Three-Board Sovereign Compute Platform
Standard / Pro / Ultra. AI + NAS + Router + Firewall + Switch + WiFi. Custom-manufactured enclosure shared across all three tiers. v1 Standard shipping, Pro and Ultra in parallel design.
v1 Standard in dev — Pro/Ultra carrier boards in parallel design
RigidVault™
Sovereign Data Storage
Private encrypted backup tier. Explorer tier subscribers get off-site backup to RigidVault cloud. Air-gapped options available. Your data, encrypted before it leaves your box.
Service live — kavanaghind.com/rigidvault
RigidAI™
AI Module Ecosystem
76 discrete AI modules deployable across all RigidNode tiers. Industry-specific inference packages, automation agents, specialized models. The model that runs on Standard also runs on Ultra.
Module store live — 76 modules catalogued
RigidHearth™
Home AI Device — 260-pin / Nano Tier
Simpler device at the entry of the Standard family. Orin Nano Super. AI inference and local storage — no full network stack. For families who want local AI without the full RigidNode stack.
Prototype stage — Jetson NVMe boot pending
RigidEngineering™
Manufacturing Technology Consulting
The commercial engine behind KI. SolidWorks Design (commercially licensed). NX/UG. CAD consulting for manufacturing clients. The same engineering discipline designing the carrier boards.
Billable now — see solidworks-consulting
RigidPulse™
Shop Floor AI — CNC & Motion
The RigidNode for your CNC machine. The same Jetson platform, different software stack — motion intelligence, tool wear, spindle health. RigidNode runs your home. RigidPulse runs your shop.
Phase 0 — Docker stack on Jetson Orin Nano Super
07
Built for Families First. Opened to Everyone.
RigidNode is the hardware foundation. Every KI capability runs on it. Subscriptions are the optional cloud and AI module layers — your hardware runs fully local without them.

I’m a solo dad. Three kids at home — Liam builds and scans, Kathryn makes digital art, Emily runs operations. Connor manages the archive remotely. Their photos, their creative work, four generations of family heritage, and the AI that makes sense of it all — it lives on hardware we own, governed by Three Laws I designed, in a building we control.

My great-grandfather Thomas came to America, got a mechanical engineering degree from the University of Detroit in 1931, built The Wood Shop, and watched industrial scale take it. That was the founding wound. I built KI so nothing we make gets taken.

Layer 0 costs nothing beyond the hardware. Everything local, everything sovereign, everything governed. Here’s what the ecosystem looks like when you’re ready to expand.

— Shaun Kavanagh, Founder
Layer 0 — Local Only
Free
Hardware purchase only. No cloud dependency.
  • Full local AI inference — 13B models at real-time speed
  • Local NAS, Nextcloud, Jellyfin, Immich
  • Docker stack — bring your own containers
  • RigidTrust Three Laws governance on every inference
  • SSH always open, full Linux access
  • No subscription required. Ever.
Layer 1 — Explorer
$5 /mo
Add the sovereign cloud layer.
  • Everything in Layer 0
  • RigidVault Cloud — 100GB Michigan sovereign storage
  • RigidMarket ecosystem access — 140 modules
  • Off-site encrypted backup via RigidVault tunnel
  • Heritage assets protected — never auto-deleted on cancellation
  • Cancel anytime. Hardware keeps running.
Start at Explorer →
Heritage assets are never deleted on cancellation. That protection is permanent across all tiers. We make money on subscriptions chosen freely — not on paywalling hardware you already own. KI was built for three kids in Clinton Township, Michigan. It works for yours too.
Who's building this
Shaun Kavanagh
Founder & CEO — Kavanagh Industries LLC

30 years of mechanical systems engineering — AutoCAD at 9, first professional CAD job in 1996, GM Global Technical Operations through 2025. Five patent filings. I am not a software person who decided to do hardware. I am a mechanical systems engineer who got tired of watching good hardware get buried under subscription paywalls.

I'm Q1 now. I have the UniFi stack, the NAS, the Jetsons, the whole thing. I've been Q2 more times than I can count — garbage ISP router, paying Plex $7/month, photos on Google's servers because there was no real answer. Nobody built the Q2 device. So I'm building it.

RigidNode Home Node is for Q1. Plugs into your existing stack. Doesn't touch your router. Dedicated local AI inference, sovereign storage, runs whatever Docker containers you bring. RigidNode Complete is for Q2. One box, one cable into the wall. Everything included. The device I needed every time I moved into a new place with nothing and no good answer.

30
Years CAD/Engineering
5
Patent Filings
3
Boards in Parallel
v1
Apr 2026
08
We're Building This in Public
The homelab community will tell us where we're wrong faster than any QA team.

Where to Find Us

We're not here to pitch you. If you're Q1 — you already have a network, you're running UniFi or OPNsense or whatever you've built, you just want a dedicated local AI node to complete your stack — that's the Home Node. Plugs into your switch. Doesn't touch your router. If you're Q2 — ISP router, nothing built yet, want to replace everything in one box — that's RigidNode Complete, coming soon. Either way: tell us what we got wrong. Tell us if the Orin NX 16GB is the wrong module for the plug-in tier. Tell us what's missing from the Docker stack. We're listening before we're selling.

r/homelab

Show and tell incoming — actual hardware photos, enclosure design, power architecture. The plug-in AI node that doesn't touch your UniFi setup. Hard questions welcome.

r/selfhosted

Docker Compose stack going public on GitHub. See exactly what runs. Bring your own containers. Tell us what's missing. No black box.

r/LocalLLaMA

Orin NX 16GB vs AGX Orin 32GB vs Mac Mini M4 benchmark numbers coming. Real tok/sec on real models. No cherry-picking.
