The Hardware

More dedicated compute than most colleges. $12K invested. $60K+ retail value.

18+      GPUs
228GB+   Total VRAM
512GB    POWER8 RAM
3-5x     Return on Investment

Acquisition Strategy

Pawn shops for consumer hardware. eBay datacenter decomm for enterprise GPUs. Parts cascade from upgraded machines to lab infrastructure. Nothing wasted.

Examples: Ryzen 9 7950X tower for $600 (retail $1,500+). HP Victus laptop for $617 (retail $1,700). V100 32GB for ~$500 (retail $3,000+).
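The multiples behind those examples can be checked with quick arithmetic. A sketch (the "ledger" below contains only the three deals named above; everything else follows from division):

```python
# Hypothetical ledger built from the three examples above: (paid, retail).
deals = {
    "Ryzen 9 7950X tower": (600, 1500),
    "HP Victus laptop":    (617, 1700),
    "V100 32GB":           (500, 3000),
}

for name, (paid, retail) in deals.items():
    print(f"{name}: {retail / paid:.1f}x retail per dollar spent")

total_paid = sum(paid for paid, _ in deals.values())
total_retail = sum(retail for _, retail in deals.values())
print(f"Blended: {total_retail / total_paid:.1f}x")  # 3.6x on these three
```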

GPU Fleet

Card           VRAM    Qty   Total    Location
─────────────────────────────────────────────────
V100 32GB      32GB    2     64GB     C4130, Ryzen build
V100 16GB      16GB    3     48GB     C4130 #2, builds
RTX 5070       12GB    2     24GB     7950X tower, Ryzen 5
RTX 4070        8GB    1      8GB     HP Victus laptop
RTX 3060       12GB    2     24GB     Dual 3060 Ryzen
Tesla M40      12GB    2     24GB     C4130 #1
─────────────────────────────────────────────────
ACTIVE TOTAL           12    192GB
+ Bench/Reserve        6+    36GB+
═════════════════════════════════════════════════
FULL FLEET             18+   228GB+

Plus: Hailo-8 TPU + 2x Alveo U30 FPGA
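The active-fleet totals follow directly from the table rows. A small sanity check (the dict below just transcribes the table above):

```python
# Active fleet rows from the table above: card -> (vram_gb, qty)
fleet = {
    "V100 32GB": (32, 2),
    "V100 16GB": (16, 3),
    "RTX 5070":  (12, 2),
    "RTX 4070":  (8,  1),
    "RTX 3060":  (12, 2),
    "Tesla M40": (12, 2),
}

gpus = sum(qty for _, qty in fleet.values())
vram = sum(vram_gb * qty for vram_gb, qty in fleet.values())
print(f"{gpus} GPUs, {vram}GB VRAM")  # 12 GPUs, 192GB VRAM
```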

IBM POWER8 S824 - "Cathedral of Voltage"

The crown jewel. Server-class PowerPC for LLM inference.

CPU:      Dual 8-core POWER8 = 16 cores, 128 threads (SMT8)
RAM:      512 GB DDR3 (2 NUMA nodes)
Storage:  1.8 TB SAS
GPU:      40GbE link to C4130 for matmul offload
Peak:     147.54 tokens/sec (TinyLlama 1.1B Q4_K)

Running vec_perm non-bijective collapse - a technique impossible on x86/ARM/CUDA. The POWER8's vec_perm instruction does 5 ops where a GPU needs 80.
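For readers unfamiliar with the instruction: vec_perm selects each of 16 output bytes from the 32-byte concatenation of two input vectors via an index vector, and nothing forces the index pattern to be one-to-one (many inputs can collapse onto one output). A minimal Python model of the byte-level semantics, not of the collapse technique itself:

```python
def vec_perm(a: bytes, b: bytes, c: bytes) -> bytes:
    """Byte-level model of AltiVec/VSX vec_perm: output byte i is
    (a ++ b)[c[i] & 0x1F]. A single instruction on POWER8."""
    assert len(a) == len(b) == len(c) == 16
    table = a + b                          # 32-byte lookup table
    return bytes(table[i & 0x1F] for i in c)

a = bytes(range(16))         # 00 01 ... 0f
b = bytes(range(16, 32))     # 10 11 ... 1f
reverse = bytes(range(15, -1, -1))
collapse = bytes([3] * 16)   # many-to-one: a non-bijective pattern

print(vec_perm(a, b, reverse).hex())   # 0f0e0d...0100
print(vec_perm(a, b, collapse).hex())  # sixteen copies of byte 03
```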

Featured: RAM Coffers on Grokipedia · GitHub

PowerPC Mac Fleet (RustChain Miners)

Vintage Macs mining RTC with antiquity bonuses:

Machine              CPU          Multiplier   Status
─────────────────────────────────────────────────────
PowerBook G4 #1      G4 7450      2.5x        Mining
PowerBook G4 #2      G4 7447      2.5x        Mining
PowerBook G4 #3      G4 7455      2.5x        Mining
Power Mac G4 MDD     Dual G4      2.5x        Mining
Power Mac G5 #1      Dual 2.0 G5  2.0x        Mining
Power Mac G5 #2      Dual 2.0 G5  2.0x        Node.js target
Mac Mini M2          Apple M2     1.2x        Inference
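The antiquity multipliers imply a reward split across the active miners. A sketch under an assumed model (RustChain's actual formula isn't shown here): each machine listed as "Mining" contributes equal base work, scaled linearly by its multiplier.

```python
# Illustrative only: assumes rewards scale linearly with the antiquity
# multiplier for equal base work. Only "Mining"-status machines included.
miners = {
    "PowerBook G4 #1":  2.5,
    "PowerBook G4 #2":  2.5,
    "PowerBook G4 #3":  2.5,
    "Power Mac G4 MDD": 2.5,
    "Power Mac G5 #1":  2.0,
}

def reward_share(machine: str) -> float:
    """Fraction of pooled rewards under the assumed linear model."""
    return miners[machine] / sum(miners.values())

print(f"{reward_share('Power Mac G5 #1'):.1%}")  # 16.7%
```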

GPU Offload Architecture

The POWER8 and C4130 are linked via 40GbE (0.15ms latency):

POWER8 S824 (512GB RAM)        C4130 (V100 + M40)
┌─────────────────────┐        ┌──────────────────┐
│ Model loaded in RAM │─40GbE─▶│ CUDA matmul only │
│ PSE vec_perm active │◀───────│ 31+ TFLOPS       │
│ 128 threads         │ 0.15ms │ Q4_K dequant GPU │
└─────────────────────┘        └──────────────────┘

Model stays on POWER8. Only the math goes to GPU. Supports any model that fits in 500GB RAM.
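A back-of-envelope check on why this split pays off: the link figures (40GbE, 0.15ms, 31 TFLOPS) come from the diagram, while the matrix shape, POWER8 throughput, and bytes moved per call are assumptions for illustration only.

```python
# Assumptions: matrix shape, CPU throughput, and payload per call are
# illustrative; the 40GbE / 0.15ms / 31 TFLOPS figures are from the diagram.
LINK_BPS = 40e9          # 40GbE
RTT_S = 2 * 0.15e-3      # request + response latency
GPU_FLOPS = 31e12        # "31+ TFLOPS" on the C4130
CPU_FLOPS = 0.5e12       # assumed POWER8 SIMD matmul throughput

def offload_time(work_flops: float, bytes_moved: float) -> float:
    return RTT_S + bytes_moved * 8 / LINK_BPS + work_flops / GPU_FLOPS

def local_time(work_flops: float) -> float:
    return work_flops / CPU_FLOPS

n = 4096
work = 2 * n**3              # FLOPs for an n x n matmul
moved = 2 * (n * n * 2)      # assumed: fp16 activations out, result back

t_off = offload_time(work, moved)
t_loc = local_time(work)
print(f"offload {t_off * 1e3:.1f} ms vs local {t_loc * 1e3:.1f} ms")
```

Under these numbers the wire cost (latency plus transfer) stays far below the CPU-side compute time, so shipping only the math wins despite the network hop.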