I'm a full-stack developer who ships production software. Not side projects — actual products with real users, production databases, and paying stakeholders. My technical identity sits at the intersection of full-stack engineering and LLM-powered application development.

Here's the honest version: I'm strong at building things quickly with TypeScript and Next.js, I understand multi-agent system architecture well enough to win international hackathons with it, and I'm one of the few people actively building with Google Gemini's Live API for real-time voice AI. I'm not a machine learning researcher — I don't train models from scratch, and listing TensorFlow to pad a resume isn't something I do.


Full-Stack Developer

Everything started with the frontend. The discipline of building something that renders fast, feels right, and handles edge cases gracefully is what got me into engineering — and it still shapes how I think about the whole stack.

I gravitated toward component-driven architecture early. React and Next.js became the lens through which I see every web problem. Over time I moved further down the stack: REST and streaming APIs, PostgreSQL under production load, NestJS for structured backend services, and CI/CD for actually shipping things reliably.

TypeScript

TypeScript is the language I think in. I've been writing it across every project for 3+ years — frontend, backend, agent pipelines, API routes. The type safety is especially critical in multi-agent systems where data contracts between components cannot be ambiguous.
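The kind of contract I mean can be sketched as a discriminated union plus a runtime guard (the message shapes here are hypothetical, for illustration only):

```typescript
// Hypothetical inter-agent message contract: a discriminated union plus
// a runtime guard, so malformed payloads fail at the boundary instead of
// deep inside an agent.
type AgentMessage =
  | { kind: "research"; query: string }
  | { kind: "summarize"; sourceText: string; maxWords: number };

function isAgentMessage(value: unknown): value is AgentMessage {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  if (v.kind === "research") return typeof v.query === "string";
  if (v.kind === "summarize")
    return typeof v.sourceText === "string" && typeof v.maxWords === "number";
  return false;
}

// A payload coming from an LLM is `unknown` until the guard passes.
const raw: unknown = JSON.parse('{"kind":"research","query":"pgvector vs Pinecone"}');
if (isAgentMessage(raw) && raw.kind === "research") {
  // raw.query is now typed as string
  console.log(raw.query.toUpperCase());
}
```

The point is that the compiler enforces the contract on both sides of the handoff, and the guard catches anything the model got wrong at runtime.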

React & Next.js

My primary tool for every web product. I've built AI-powered learning platforms, streaming chatbot UIs, enterprise dashboards, and marketing sites with Next.js. I'm comfortable with App Router, Pages Router, edge functions, server components, and the tradeoffs between them.

Node.js · Express.js · NestJS

I use Express for lightweight services and NestJS for production-grade backends where you actually want dependency injection, guards, interceptors, and proper module boundaries. I've run both in production.

Tailwind CSS

I rarely touch raw CSS anymore. The constraint-based system produces more consistent UIs at higher velocity. For complex motion work I layer in Framer Motion.


LLM Application Engineer

This is the area I've invested most heavily in over the past year. I build AI systems that go beyond wrapping a chat API — multi-agent pipelines, voice-to-voice interfaces, tool-augmented reasoning systems, and RAG architectures that hold up against real data.

The mental model that unlocked this for me: LLMs are reasoning engines, not text generators. Design around that — structured outputs, deterministic tool execution, observable state, measurable evals — and you build systems that are actually reliable.
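"Deterministic tool execution" can be sketched like this (tool names are made up for the example): the model only proposes a call as structured data, and a registry decides what actually runs.

```typescript
// Sketch of deterministic tool execution: the model proposes a call as
// structured data; the registry executes it, so every side effect is
// explicit and observable. Tool names here are hypothetical.
type ToolCall = { name: string; args: Record<string, unknown> };

const tools: Record<string, (args: Record<string, unknown>) => string> = {
  get_time: () => new Date().toISOString(),
  add: (args) => String(Number(args.a) + Number(args.b)),
};

function executeToolCall(call: ToolCall): string {
  const tool = tools[call.name];
  if (!tool) throw new Error(`Unknown tool: ${call.name}`); // fail loudly, never guess
  return tool(call.args);
}

console.log(executeToolCall({ name: "add", args: { a: 2, b: 40 } })); // "42"
```

The model never executes anything itself; it emits data, and the code path from proposal to execution is fully testable.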

Google Gemini SDK · Live API

This is where I'm doing the most novel work right now. The Gemini Live API enables real-time bidirectional voice conversation — full voice-to-voice communication with sub-500ms latency, streaming audio in and out simultaneously over WebSocket. I've built voice interfaces with tool calling baked in: the model can interrupt itself to execute a function, receive the result, and continue speaking without breaking the conversation flow. This is genuinely different from the text-then-TTS pattern most people use.

Beyond voice: I use Gemini Pro for multi-turn chat, generateContent with tool declarations for structured agent tasks, and function calling for integrating LLM reasoning with real APIs.
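A function declaration for Gemini tool calling is essentially a JSON-schema-style description of the function. A simplified sketch (the weather tool is hypothetical, and I'm using plain strings where the SDK offers a Type enum):

```typescript
// A Gemini-style function declaration, simplified: the schema types are
// plain strings rather than the SDK's enum, and the tool itself is
// hypothetical. Only the declaration shape matters here.
const getWeatherDeclaration = {
  name: "get_weather",
  description: "Look up current weather for a city.",
  parameters: {
    type: "object",
    properties: {
      city: { type: "string", description: "City name, e.g. Seoul" },
      unit: { type: "string", enum: ["celsius", "fahrenheit"] },
    },
    required: ["city"],
  },
};

// In a real request this declaration would be passed alongside the prompt
// so the model can respond with a structured function call instead of text.
console.log(getWeatherDeclaration.parameters.required);
```

The model reads the declaration, decides when to call the function, and returns name plus arguments as structured data; my code executes the call and feeds the result back.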

Vercel AI SDK

The cleanest abstraction for AI-powered web UIs. streamText, generateObject, useChat, useCompletion — all of it in production. The streaming primitives make chatbot UIs feel instant. The provider-agnostic design means I can swap models without rewriting UI code.
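The streaming idea, stripped down to a toy model (this is not the SDK's actual types; the async generator stands in for a text stream, and the example assumes an ESM context with top-level await):

```typescript
// Toy model of a token stream: an async generator standing in for a
// streaming text response, consumed delta by delta the way a chat UI
// would append tokens as they arrive.
async function* fakeTextStream(): AsyncGenerator<string> {
  for (const chunk of ["Hel", "lo, ", "world", "!"]) yield chunk;
}

async function collect(stream: AsyncGenerator<string>): Promise<string> {
  let text = "";
  for await (const delta of stream) text += delta; // a UI would render each delta
  return text;
}

const full = await collect(fakeTextStream()); // top-level await (ESM)
console.log(full); // "Hello, world!"
```

The real SDK hides the transport, but the consumption pattern is the same: iterate deltas, render incrementally, and the UI feels instant because the first token arrives long before the last.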

LangChain · LangGraph

LangChain for document ingestion pipelines and RAG systems. LangGraph for stateful multi-agent workflows where agents need to loop, branch, and hand off work with persistent state between steps. The graph-based model for agent orchestration is significantly more debuggable than ad-hoc agent loops.
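Why the graph model is more debuggable can be shown without the library at all. A minimal sketch (node names and state shape are made up): nodes transform shared state and name the next node, so every transition is inspectable.

```typescript
// Minimal illustration of graph-based agent orchestration (no LangGraph
// here): each node transforms shared state and names its successor;
// "END" terminates. Node names and state shape are illustrative.
type State = { draft: string; revisions: number };
type GraphNode = (s: State) => { state: State; next: string };

const graph: Record<string, GraphNode> = {
  write: (s) => ({ state: { ...s, draft: s.draft + "." }, next: "review" }),
  review: (s) =>
    s.revisions < 2
      ? { state: { ...s, revisions: s.revisions + 1 }, next: "write" } // loop back
      : { state: s, next: "END" },
};

function run(entry: string, state: State): State {
  let node = entry;
  while (node !== "END") {
    const result = graph[node](state);
    state = result.state;
    node = result.next; // every transition is observable right here
  }
  return state;
}

const out = run("write", { draft: "Hi", revisions: 0 });
console.log(out); // { draft: "Hi...", revisions: 2 }
```

Contrast with an ad-hoc agent loop where the "next step" decision is buried in prompt text: here the loop, the branch, and the state at every step are plain data you can log and replay.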

Multi-Agent Architecture

Designing systems where multiple specialized LLM agents collaborate — orchestrator patterns, fan-out/fan-in pipelines, Zod-validated inter-agent contracts, structured handoffs with no free-form text passing between agents. The 100agentdev win (1st place, 600+ teams) validated this approach under competitive pressure.
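The fan-out/fan-in pattern, sketched with stub agents (the real ones call an LLM; agent names are illustrative, and the example assumes an ESM context with top-level await):

```typescript
// Fan-out/fan-in sketch: the orchestrator fans a task out to specialist
// agents in parallel, then merges their structured results. The agents
// here are stubs standing in for LLM calls.
type Finding = { agent: string; summary: string };

const specialists: Array<(task: string) => Promise<Finding>> = [
  async (task) => ({ agent: "security", summary: `security view of ${task}` }),
  async (task) => ({ agent: "performance", summary: `performance view of ${task}` }),
];

async function orchestrate(task: string): Promise<Finding[]> {
  const findings = await Promise.all(specialists.map((run) => run(task))); // fan-out
  return findings; // fan-in: one structured array, no free-form text handoff
}

const findings = await orchestrate("auth flow");
console.log(findings.map((f) => f.agent)); // ["security", "performance"]
```

Because the merge step receives typed objects rather than prose, the orchestrator can validate, sort, and deduplicate results deterministically.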

Vector Databases · RAG

Building retrieval systems that actually work: embedding pipelines, similarity search with pgvector or Pinecone, chunking strategies, and reranking. The naive RAG implementation works in demos; production RAG needs chunk overlap tuning, metadata filtering, and hybrid search.
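Chunk overlap is easy to show concretely. A naive fixed-size chunker (sizes here are characters for brevity; production chunkers count tokens and respect sentence boundaries):

```typescript
// Naive fixed-size chunker with overlap. Overlap keeps context that
// straddles a chunk boundary retrievable from both neighbors.
function chunkText(text: string, size: number, overlap: number): string[] {
  if (overlap >= size) throw new Error("overlap must be smaller than chunk size");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached
  }
  return chunks;
}

console.log(chunkText("abcdefghij", 4, 2)); // ["abcd", "cdef", "efgh", "ghij"]
```

Tuning that overlap against retrieval quality, plus metadata filtering and hybrid search on top, is most of the gap between a demo and production RAG.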

Prompt Engineering · LLM Evals

System prompt architecture, few-shot examples, chain-of-thought structuring, and output validation. I use structured generation (Zod + generateObject) wherever possible instead of hoping the model formats its response correctly. Evaluating LLM outputs with deterministic test cases before shipping.
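What a deterministic eval looks like in miniature (the checker and cases are illustrative, not a real eval framework; the stub stands in for an LLM call):

```typescript
// Deterministic eval sketch: run model outputs against rule-based test
// cases before shipping a prompt change, and gate on the pass rate.
type EvalCase = { input: string; check: (output: string) => boolean };

function runEvals(model: (input: string) => string, cases: EvalCase[]): number {
  let passed = 0;
  for (const c of cases) if (c.check(model(c.input))) passed++;
  return passed / cases.length; // pass rate; deploy only above a threshold
}

// Stub "model" that echoes uppercase; real code would call the LLM.
const stubModel = (input: string) => input.toUpperCase();
const rate = runEvals(stubModel, [
  { input: "hello", check: (o) => o === "HELLO" },
  { input: "42", check: (o) => o.includes("42") },
]);
console.log(rate); // 1
```

Rule-based checks like these catch regressions cheaply; LLM-as-judge evals come later, once the deterministic layer passes.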


Backend & Infrastructure

Production is where assumptions die. I've run database operations under live traffic, wired CI/CD for multi-service deployments, and learned — the hard way, at 3 AM — what it means to have a bug in a production system handling real user data.

PostgreSQL · Supabase

Postgres is my default database. I understand indexes, query planning, row-level security, and VACUUM. I've used Supabase for real-time subscriptions in dashboards, and run batched PII deletions on a live production database (90K records, zero downtime).
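The shape of that batched deletion, sketched with a stubbed DB call (the real version deleted per batch via something like DELETE ... WHERE id IN (SELECT ... LIMIT n), pausing between batches to keep the live database responsive; the example assumes an ESM context with top-level await):

```typescript
// Batched-delete loop: delete a bounded number of rows per statement
// until nothing matches, so no single transaction locks the table.
// The DB call is a stub here.
async function deleteInBatches(
  deleteBatch: (limit: number) => Promise<number>, // returns rows deleted
  batchSize: number
): Promise<number> {
  let total = 0;
  while (true) {
    const deleted = await deleteBatch(batchSize);
    total += deleted;
    if (deleted < batchSize) break; // a short batch means the table is drained
  }
  return total;
}

// Stub standing in for the SQL call: 9 matching rows, batches of 4.
let remaining = 9;
const stub = async (limit: number) => {
  const n = Math.min(limit, remaining);
  remaining -= n;
  return n;
};
const total = await deleteInBatches(stub, 4);
console.log(total); // 9
```

Small batches keep lock durations short and give autovacuum a chance to keep up, which is what "zero downtime" actually depends on.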

MongoDB

Document storage for flexible schemas — conversation history, org config, unstructured event data. Mongoose for schema validation and query composition in Node.js services.

Redis

Session caching, rate limiting, pub/sub for real-time features, and BullMQ for background job queues in NestJS. Not just get/set — I understand eviction policies and when Redis is the wrong tool.
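The rate-limiting pattern, sketched in memory (in production I'd back the same fixed-window logic with Redis INCR plus EXPIRE on a per-window key; the key format here is illustrative):

```typescript
// In-memory sketch of a fixed-window rate limiter: one counter per key
// per window, reset when the window expires. Redis would hold the
// counters in a multi-process deployment.
function makeLimiter(limit: number, windowMs: number) {
  const counts = new Map<string, { count: number; resetAt: number }>();
  return (key: string, now: number): boolean => {
    const entry = counts.get(key);
    if (!entry || now >= entry.resetAt) {
      counts.set(key, { count: 1, resetAt: now + windowMs }); // fresh window
      return true;
    }
    entry.count++;
    return entry.count <= limit;
  };
}

const allow = makeLimiter(2, 1000); // 2 requests per second per key
console.log(allow("user:1", 0), allow("user:1", 10), allow("user:1", 20)); // true true false
```

The in-memory version fails the moment you run two processes, which is exactly the point at which Redis stops being optional.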

GitHub Actions · CI/CD

I build pipelines from scratch: lint → type-check → test → build → deploy. Configured for Next.js → Vercel and Node.js backends → AWS EC2. I've also configured GitLab CI for containerized deployments.
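That lint → type-check → test → build sequence maps onto a workflow file roughly like this (a sketch; the action versions, Node version, and script names assume a standard npm setup):

```yaml
# Minimal shape of the pipeline described above; deploy steps vary per target.
name: ci
on: [push]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint
      - run: npx tsc --noEmit   # type-check
      - run: npm test
      - run: npm run build
      # a deploy step (Vercel or EC2) would follow, gated on the main branch
```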

AWS · Vercel

Vercel for frontend (zero-config, edge functions, preview deployments). AWS for backend: EC2, RDS (Postgres), S3, CloudWatch monitoring. I'm comfortable here at the operational level — not an infrastructure engineer, but I can own a deployment without hand-holding.

Firebase

Auth (OAuth + email/password), Firestore, and Realtime Database for the AI learning platform at LastMinuteEngineering. Firebase Auth for session management across web clients.


Detail and Summary

Every skill below is shown as a label to make it easy to scan; the legend indicates how often I use each item:

Legend
Frequently Used · Occasionally

Programming Languages

Primary
TypeScript · JavaScript · Python · SQL
Others
HTML5 · CSS3 · C · C++

AI & LLM Stack

Google AI
Gemini SDK · Gemini Live API · Gemini Tool Calling
Core AI
Vercel AI SDK · LangChain · LangGraph · Hugging Face
RAG & Search
pgvector · Pinecone · RAG Pipelines
Practice
Prompt Engineering · LLM Evals · Multi-Agent Design

Frontend

Core
Next.js · React · Tailwind CSS · Framer Motion
UI Libraries
Shadcn UI · Aceternity UI · Chakra UI · Material UI
State & Data
Redux · Zustand · React Query · SWR

Backend

Frameworks
Node.js · Express.js · NestJS · FastAPI
Real-time
Supabase Realtime · Socket.io · Firebase RTDB · WebSocket / SSE

Databases

Relational
PostgreSQL · pgvector · MySQL
NoSQL & Cache
MongoDB · Redis · Supabase · Firebase Firestore

DevOps & Cloud

CI/CD
GitHub Actions · GitLab CI · Docker
Cloud
Vercel · AWS · Firebase · Netlify

Tools & Platforms

Dev Tools
VS Code · Git · GitHub · Postman · Swagger · Figma
Observability
Sentry · AWS CloudWatch
Other
Claude Code · Prisma ORM · Stripe API