Director, Cloud & Data Engineering with 24 years of enterprise experience. Building production AI systems that ship — from multi-agent orchestration to memory infrastructure to LLM observability.
Every project is production-hardened, running on real infrastructure, serving real users. Not demos. Not tutorials. Shipped products.
Family AI operating system with 10 specialized agents, 4-layer memory architecture, and multi-channel communication (WhatsApp, Telegram, web). Local-first with Ollama, cloud fallback chain.
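A local-first setup with a cloud fallback chain typically tries an ordered list of providers until one answers. A minimal TypeScript sketch of that pattern — the `Provider` shape and `chat()` signature are illustrative assumptions, not Lumen's actual code:

```typescript
// A provider is anything that can answer a prompt; the real chain would
// wrap Ollama's HTTP API locally and hosted models as cloud fallbacks.
type Provider = {
  name: string;
  chat: (prompt: string) => Promise<string>;
};

// Try providers in order: local model first, cloud only if it fails.
async function chatWithFallback(providers: Provider[], prompt: string): Promise<string> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      return await p.chat(prompt); // first provider to answer wins
    } catch (err) {
      lastError = err;             // local model down? fall through to cloud
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```

The chain keeps traffic on-device by default and only leaves the network when the local model is unavailable.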
Memory-as-a-Service API for AI agents. Temporal Memory Scoring with Ebbinghaus decay curves. 3-call integration. MCP server on npm. Half the price of Mem0 Pro.
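The product's exact scoring is its IP, but the Ebbinghaus retention curve it builds on is standard: R = e^(−t/S), where t is elapsed time and S is a stability factor. A minimal TypeScript sketch — function names, the similarity weighting, and the 72-hour default are illustrative assumptions, not the API:

```typescript
// Ebbinghaus retention: R = e^(-t/S). Larger S = slower forgetting.
function retention(elapsedHours: number, stabilityHours: number): number {
  return Math.exp(-elapsedHours / stabilityHours);
}

// Decayed relevance: base similarity weighted by retention, so stale
// memories rank below fresher ones of equal similarity.
function temporalScore(similarity: number, elapsedHours: number, stabilityHours = 72): number {
  return similarity * retention(elapsedHours, stabilityHours);
}

// At equal similarity, a 1-day-old memory outranks a 10-day-old one.
const fresh = temporalScore(0.9, 24);   // ≈ 0.645
const stale = temporalScore(0.9, 240);  // ≈ 0.032
```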
AI-powered project estimation platform. Upload a BRD/SOW → 9 specialized AI agents run a visual drag-and-drop pipeline → complete project estimate in under 2 minutes. Built for enterprise innovation competitions.
Local-first LLM observability. PostgreSQL-only alternative to Langfuse (which needs ClickHouse + Redis + MinIO). Single service, ~200MB RAM, deploys in 30 seconds via `npx traces-dev`.
AI-powered bedtime story production pipeline for children. Full screenplay format (Pixar/Ghibli quality), 14 TTS voices, 8-dimension quality scoring, visual drag-and-drop pipeline IDE, automated cron generation.
Postman meets Datadog for MCP servers. Compliance testing, continuous monitoring, tool-call debugging, latency tracking. 50+ compliance checks. SaaS model targeting indie devs and small teams.
Lumen's memory stack is the core IP. Four layers working together, each solving a different retrieval problem.
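The four layers themselves are the IP and aren't named here, but "each layer solving a different retrieval problem" usually means a cascade: cheaper, more targeted lookups first, broader search last. A hypothetical TypeScript sketch — the `MemoryLayer` interface and any layer names are invented for illustration:

```typescript
// Hypothetical layer interface; the real stack's layers are not public.
interface MemoryLayer {
  name: string;
  lookup: (query: string) => string[]; // returns matching memories
}

// Query layers in order until one yields hits; each layer can be tuned
// for a different retrieval problem (recency, exact facts, semantics...).
function retrieve(layers: MemoryLayer[], query: string): string[] {
  for (const layer of layers) {
    const hits = layer.lookup(query);
    if (hits.length > 0) return hits;
  }
  return [];
}
```

The design choice a cascade encodes: don't pay for broad semantic search when a cheap layer already answers the query.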
Not technologies I've read about. Technologies running in production right now, serving real workloads.
Every system I build starts as a personal pain point, gets hardened to production, then becomes proof of architectural thinking.
Build AI systems that solve real problems in your own life. If it survives your family, it'll survive the enterprise.