Backend Systems
Node.js
Event-driven backend runtime
Express
Backend framework for APIs
Flask
Lightweight Python API framework
Redis
In-memory caching and queue system
Databases
PostgreSQL
Relational DB with strong consistency
MongoDB
NoSQL document database
DevOps & Infrastructure
Docker
Containerization for deployments
Kubernetes
Container orchestration
AWS
Cloud infrastructure platform
Google Cloud
Cloud computing services
Nginx
Reverse proxy and load balancer
Linux
System-level operations and scripting
AI & Agentic Systems
Python
Primary language for AI and backend systems
LLM Systems
Prompt engineering and AI systems integration
LangChain
LLM orchestration and agent pipelines
Core Programming
JavaScript
Core language for web and event-driven systems
TypeScript
Typed JavaScript for scalable applications
Tools
Git
Version control system
GitHub
Code collaboration platform
NPM
Package manager for Node.js
Prettier
Code formatting tool
Vim
Efficient terminal-based editor
Agentic AI systems
Exploring how LLMs can move beyond static responses into structured workflows using tools, memory, and decision loops.
Distributed systems fundamentals
Trying to better understand failure modes, consistency tradeoffs, and how systems behave under real-world constraints.
System design thinking
Practicing breaking down vague problems into components, constraints, and tradeoffs instead of jumping into implementation.
Mathematical foundations
Continuing to strengthen intuition in optimization, probability, and linear algebra for better reasoning in ML and systems.
Observability & SRE mindset
Thinking more about monitoring, debugging, and how to design systems that explain their own failures.
This list changes as I learn, unlearn, and revisit ideas.
Systems & AI Engineering Projects
- Designed and deployed an agentic AI system enabling natural language-driven workflows for movie discovery, planning, and tracking.
- Built LLM-powered backend with intent parsing and function calling, mapping user queries to structured database operations.
- Engineered a semantic recommendation engine over 4800+ items using feature engineering and similarity computation.
- Developed stateful backend systems supporting users, watchlists, reviews, and temporal planning workflows.
- Implemented production-grade pipelines including containerization (Docker), automated DB initialization, and dependency orchestration.
- Built SRE-grade monitoring system using Prometheus + Grafana with alert lifecycle validation and root cause analysis workflows.
- Instrumented backend services for observability (latency, request rate) and implemented alerting for anomaly detection.
- Developed full-stack systems with Flask + React + MongoDB + Redis, including CI/CD pipelines (Docker + Jenkins) and rate-limited APIs.
- Built low-level systems including a real-time 2.5D raycasting engine in C with custom rendering pipeline and trigonometric computations.
- Applied algorithmic optimization (DP, segment trees) in a game-theoretic auction system for resource allocation problems.
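The semantic recommendation bullets above can be sketched in miniature: score catalog items against a query by cosine similarity over their feature vectors, then rank the top matches. This is an illustrative sketch only; the function names, the `catalog` shape, and the plain-list vectors are assumptions, not the project's actual data model.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0  # treat an all-zero vector as unrelated to everything
    return dot / (norm_a * norm_b)

def top_k_similar(query_vec, catalog, k=5):
    """Rank catalog items (id -> feature vector) by similarity to a query."""
    scored = [(item_id, cosine_similarity(query_vec, vec))
              for item_id, vec in catalog.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]
```

At real scale (the 4800+ items mentioned above), the same idea is usually done as one vectorized matrix operation rather than a Python loop, but the ranking logic is identical.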
Deep Learning Intern
- Worked on the backend and data pipeline for a GAN-based system generating realistic flower renderings.
- Handled data preprocessing and integration for the training pipelines.
- Strengthened practical skills in SQL, BigQuery, and Flask.
- Collaborated under academic supervision on applied ML systems.
Research Intern
- Authored 2 research papers on game theory in edge computing and distributed systems.
- Implemented algorithmic models and conducted system-level simulations.
- Applied concepts from networking, distributed systems, and optimization.
- Bridged theoretical models with practical system design insights.
I think in systems, not features
Whenever I build something, I try to understand how data flows, where it breaks, and what happens under stress—not just whether it works.
I like constraints
Rate limits, latency, memory, failure cases—constraints make systems interesting. They force better design decisions.
I break things on purpose
Some of my best learning came from intentionally pushing systems until they failed. That’s usually where the real understanding begins.
I care about why something works
Not just using tools, but understanding the tradeoffs behind them—why Redis over Postgres, why queues, why eventual consistency.
I enjoy connecting ideas
Game theory, distributed systems, AI—different domains often solve similar problems. I like exploring those overlaps.
Still figuring things out
There’s a lot I don’t know yet. But I’ve learned how to learn fast, debug deeply, and stay curious without getting overwhelmed.
This section will probably evolve as I keep learning.
Teaching myself how to think mathematically
I wasn’t good at math in school—not because I disliked abstraction, but because I struggled with application and generalization. Inspired by thinkers like Cal Newport and Scott Young, I rebuilt my foundations from scratch: logic, calculus, linear algebra, and programming. After a year of consistent effort, things started to click. Today, I’ve worked on research in distributed systems and machine learning that relies heavily on mathematical reasoning.
Understanding comes not from exposure, but from reconstruction.
Unlearning OOP to actually understand design
In my OOP course, I could solve problems and pass tests, but I didn’t understand why systems were designed a certain way. So I took a different route—I abandoned the top-down teaching approach and rebuilt my understanding bottom-up. I experimented with hundreds of designs, analyzed why they failed, and gradually internalized good design principles. I didn’t top the course, but I learned something far more useful.
Not all metrics are signals. Not everything that matters is measurable.
Leading under constraint: 8 hours, 11 strangers
At the Lumen Hackathon, I led a team of 11 randomly assigned participants. We had 8 hours to design, build, and present a full-stack system. I focused on coordination, accountability, and clear communication, while contributing to backend and CI/CD. We shipped just minutes before the deadline and secured 3rd place—something we didn’t expect.
Leadership is less about control and more about clarity under pressure.
Challenging assumptions in research
I proposed an ML research idea that was initially dismissed as impractical. Instead of arguing, I built a working prototype in a week and followed it up with a structured explanation backed by literature. It took effort, but it changed the conversation. The idea was eventually accepted.
If an idea seems unreasonable, make it concrete.
I expect this section to keep evolving as I encounter better problems.
Contact Form
Please contact me directly at dasrupesh2124(at)gmail.com or drop your info here.