// Anirudh Purohit · Fullstack Developer · 2026

Anirudh
Purohit.

Fullstack developer experienced in product ideation, architecture, implementation, deployment, networking, and maintenance. Capable of building infrastructure from scratch and working within existing cloud environments.

01 · Selected Work

Things I've shipped.

/01

Tasker

An NLP-powered task manager. Type a task in plain English; it parses intent, priority, color, and timing. 30+ users worldwide, 8,000+ tasks processed.

560/s sustained req/sec · 1.1s LLM p95 / 1.5s p99 · 8k+ tasks processed
Infra & scaling
  • Stateless Flask, horizontal scaling via containerized replicas
  • ~560 req/sec sustained across a 2-node cluster (~280/sec per node)
LLM integration
  • Latency-aware LLM gateway with live vendor health probing and adaptive failover
  • Cold-start cut from 10s to 1.1s p95 / 1.5s p99 via TCP pooling and custom keepalive
  • Parses NL into structured tasks; aggregates anonymized data for fine-tuning
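A minimal sketch of what latency-aware failover like this can look like, assuming a rolling-latency health score per vendor. Class names, the window size, and the failure threshold are illustrative stand-ins, not the production values:

```python
import time
from collections import deque

class VendorHealth:
    """Tracks a rolling window of response latencies for one LLM vendor."""
    def __init__(self, name, window=20):
        self.name = name
        self.latencies = deque(maxlen=window)
        self.failures = 0  # consecutive failures

    def record(self, seconds, ok=True):
        self.latencies.append(seconds)
        self.failures = 0 if ok else self.failures + 1

    def score(self):
        # Lower is better; untried vendors get a neutral score, and
        # repeated failures push a vendor to the back of the line.
        if self.failures >= 3:
            return float("inf")
        if not self.latencies:
            return 1.0
        return sum(self.latencies) / len(self.latencies)

class LLMGateway:
    """Routes each request to the vendor with the best recent latency,
    falling over to the next-ranked vendor on error."""
    def __init__(self, vendors):
        # vendors: dict of name -> callable(prompt) -> str
        self.vendors = vendors
        self.health = {name: VendorHealth(name) for name in vendors}

    def complete(self, prompt):
        ranked = sorted(self.health.values(), key=lambda h: h.score())
        for h in ranked:
            start = time.monotonic()
            try:
                result = self.vendors[h.name](prompt)
            except Exception:
                h.record(time.monotonic() - start, ok=False)
                continue  # try the next-healthiest vendor
            h.record(time.monotonic() - start)
            return h.name, result
        raise RuntimeError("all vendors unhealthy")
```

In production you would probe vendors on a timer rather than only scoring live traffic, but the ranking-plus-failover loop is the core of the idea.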
Surface
  • Desktop and mobile PWA, 6 Jinja2 templates across 3 themes
  • Multi-platform push notifications, timezone-aware scheduling
  • Vanilla HTML/CSS/JS for maximum portability, no frontend framework
Flask PostgreSQL Docker Cloudflare Python Linux

Infrastructure

Two nodes: a late-2012 Mac Mini and a 2017 laptop with the battery yanked, both running Linux. They sit behind Cloudflare Tunnel and host Tasker (prod and staging), the SMC inventory system, and a couple of Minecraft servers for friends.

Live since February 2025 with effectively 100% uptime, give or take whatever the Texas grid is doing that week. The old-hardware thing isn't aesthetic. It keeps cost and waste down, and owning the whole stack (networking, kernel, deploys) teaches you what the cloud quietly handles for you.

  • Custom CI/CD and provisioning scripts
  • Container-orchestrated via Docker
  • Horizontally scaled services across nodes
  • Zero-downtime rolling deploys
  • Health checks with auto-restart
  • Cloudflare edge: caching, rate limiting, WAF
  • Centralized logging
  • Multi-channel alerting
  • Automated DB backups with tested restore
  • Per-service status pages
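The zero-downtime deploy in that list reduces to a small loop: bring up the new replica, wait for it to pass health checks, then retire the old one. A rough sketch, with `start_new`, `is_healthy`, and `stop_old` standing in for the actual Docker commands and health endpoints:

```python
import time

def rolling_deploy(start_new, is_healthy, stop_old, timeout=30.0, poll=0.5):
    """Bring up the new replica, wait until it reports healthy, then
    retire the old one. If the new replica never becomes healthy, the
    old replica keeps serving and the deploy reports failure."""
    start_new()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_healthy():
            stop_old()  # traffic already flows to the healthy replica
            return True
        time.sleep(poll)
    return False  # leave the old replica running; nothing was torn down
```

The key property is ordering: the old replica is only stopped after the new one has proven itself, so a bad build fails the deploy instead of the site.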
/02

Sentience

HackAI UTD 2026 · 24hr

Market intelligence platform forecasting public sentiment toward AI labs and tracing supply-chain dependencies to predict downstream stock movement.

Python PyTorch FinBERT LSTM MongoDB Express React
Sentience dashboard

Custom parallel Python scraper ingests thousands of Reddit posts into MongoDB. ML pipeline: FinBERT scores sentiment (70% post + 30% VADER comment weighting), extracts 7 behavioral signals per post (churn, advocacy, trust erosion, competitor mentions, emotion intensity, engagement weight, comment alignment), aggregates into daily buckets, and trains a 2-layer LSTM to forecast composite sentiment 14 days out.
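The 70/30 blend and daily bucketing described above reduce to a few lines. Function names and the fallback for comment-less posts are assumptions, not the project's actual schema:

```python
def composite_sentiment(post_score, comment_scores, post_weight=0.7):
    """Blend a FinBERT post score (70%) with the mean VADER comment
    score (30%); a post with no comments falls back to its own score."""
    if comment_scores:
        comment_mean = sum(comment_scores) / len(comment_scores)
    else:
        comment_mean = post_score
    return post_weight * post_score + (1 - post_weight) * comment_mean

def daily_bucket(scored_posts):
    """Aggregate per-post composite scores into daily means.
    scored_posts: iterable of (date_string, score) pairs."""
    buckets = {}
    for date, score in scored_posts:
        buckets.setdefault(date, []).append(score)
    return {d: sum(v) / len(v) for d, v in buckets.items()}
```

The daily means are what feed the LSTM as its time series; the seven behavioral signals would be stacked alongside them as extra input features per day.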

The differentiator: an interdependency network mapping AI companies to hardware suppliers like NVIDIA, TSMC, and AMD, so sentiment shifts at an AI lab get traced downstream to flag potential market movement before it happens.
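A toy version of that downstream tracing, assuming a simple lab-to-supplier adjacency map and a flat one-hop damping factor (the real model's edge weights are not public):

```python
def propagate(graph, shocks, damping=0.5):
    """Push a sentiment shock at each AI lab one hop downstream to
    its hardware suppliers, attenuated by a damping factor.
    graph: dict of lab -> list of supplier tickers."""
    impact = dict(shocks)
    for lab, delta in shocks.items():
        for supplier in graph.get(lab, []):
            impact[supplier] = impact.get(supplier, 0.0) + damping * delta
    return impact
```

A negative sentiment swing at a lab therefore surfaces as a smaller negative flag on each of its suppliers, which is the signal used to anticipate downstream movement.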

My role: parallel scraper pipeline plus Express/Node REST API serving sentiment scores, forecasts, and supply-chain data to the React frontend.

/03

SMC IMS

Client project · private

Full-stack inventory management for a medical equipment company. 185+ items, $1.3M+ inventory value.

Built primarily as an exploration of ETL pipeline design: ingestion, transformation, and loading for inventory records including images, pricing, and manufacturer metadata. On top of that sit role-based account permissions, audit logging, and both direct and semantic search across item names, models, manufacturers, PI/CI references, and remarks.
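The direct-then-semantic search can be sketched like this, with a token-overlap fallback standing in for the actual embedding-based semantic pass (field names are illustrative):

```python
def search(items, query, fields=("name", "model", "manufacturer", "remarks")):
    """Exact substring match across the given fields first; if nothing
    hits, fall back to ranking items by query-token overlap as a
    stand-in for an embedding-similarity pass."""
    q = query.lower()
    direct = [it for it in items
              if any(q in str(it.get(f, "")).lower() for f in fields)]
    if direct:
        return direct
    q_tokens = set(q.split())
    scored = []
    for it in items:
        text = " ".join(str(it.get(f, "")).lower() for f in fields)
        overlap = len(q_tokens & set(text.split()))
        if overlap:
            scored.append((overlap, it))
    return [it for _, it in sorted(scored, key=lambda s: -s[0])]
```

The two-pass shape matters for an inventory UI: exact matches (a model number, a PI reference) should never be diluted by fuzzy results, but a vague query should still return something useful.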

My role: solo build. Private deployment.

Flask PostgreSQL Docker Vanilla JS ETL
02 · About

Let's build something.

I like working on the parts of a system most people skip: the concurrency knobs, the cold-start path, the ETL glue, the "what happens when the third-party API takes 9 seconds" question. If you're building something where the infrastructure has to actually hold up, I'd love to talk.