Professional Experience
Software Development Intern
Weav.AI Inc
Remote, United States
June 2024 – Present
TL;DR:
- Backend engineer driving automation, scalability, and performance across distributed systems and internal tooling, with strong ownership in a fast-paced, small-team environment.
- Led the development of high-impact backend systems (notifications, config automation, Airflow pipelines) used by 10,000+ users, collaborating with cross-functional teams to improve SLA compliance, throughput, and reliability.
- Proven ability to independently build production-grade tools, optimize databases, and enable rapid prototyping, cutting dev cycles by 40%, boosting system speed by 27%, and achieving 100% test coverage.
In-Depth:
- Architected and maintained a suite of reusable Python utilities and scripts that enhanced and standardized interactions with external APIs, simplifying integration and forming the foundation of the internal Weav.AI Developer Library.
- Developed internal tools using FastAPI, MongoDB, and Streamlit to support rapid experimentation and feature delivery, reducing prototyping cycles by 40% and streamlining workflows for the Data Science team.
- Designed and implemented scalable Apache Airflow DAGs to automate high-volume document processing pipelines (ingestion, classification, tagging), improving SLA compliance and boosting cross-service throughput by 27%.
- Built and deployed a robust notification system supporting over 10,000 agents, featuring customizable delivery rules, role-based access control, and seamless integration with identity/messaging infrastructure.
- Instituted a strong test-driven development culture by introducing Robot Framework-based end-to-end tests, integrated with GitHub Actions, achieving 100% test coverage and reducing deployment regressions.
- Collaborated with the core platform team to design modular APIs and backend services, delivering a new service abstraction layer that enabled faster onboarding and streamlined feature rollouts.
- Led performance tuning of MongoDB for high-throughput services such as notifications, reducing average query response time by 41% and improving system scalability under load.
- Engineered a Python-based configuration automation framework to replicate complex environment-specific artifacts across deployment tiers (dev → test/prod), featuring modular export/import workflows, structured logging, robust error handling, and scalable cross-environment setup.
- Introduced a user impersonation feature to give admins secure, scoped access for debugging and support, improving operational efficiency and reducing turnaround time for user issues.
- Built internal tooling for the Data Science team to compare models, edit and test prompts, and run batch jobs via CSV uploads or UI input, reducing manual processing time by 40% and enabling scalable, parallel workflows. Designed the front end in Streamlit and the back end with FastAPI, implementing async REST APIs, response caching, and state optimization to boost performance by 34% and eliminate redundant API calls.
Senior Software Engineer
Konverge.AI
Pune, India
June 2021 – June 2023
TL;DR:
- Proven expertise in designing scalable backend systems and ML infrastructure from scratch using modern Python frameworks.
- Strong ownership of production-grade AI systems that still power automation pipelines today.
- Delivered consistent performance improvements: API response times (+38%), model training speed (+41%), deployment cycle time (−53%).
- Hands-on with async programming, Airflow DAG orchestration, and cloud-native deployments on Azure & Kubernetes.
- Known for delivering impact in small, high-ownership teams and recognized with the Emerging Player Award for division-wide contributions.
In-Depth:
- Designed and owned an LLM-powered document processing platform (Flask, LangChain, Pinecone), powering enterprise automation for 4+ years.
- Led architecture for multi-tiered, multi-tenant microservices using Django, Flask, and FastAPI, improving API latency by 25% via asynchronous logic and database optimizations.
- Built resilient ML systems using Snowflake UDFs, dbt, and Apache Airflow to boost training speeds by 41%, optimize computational efficiency by 27%, and reduce query time by 55%.
- Developed a distributed Risk/Loss Analysis system using FastAPI, Bayesian Networks, RabbitMQ, and Kubernetes, stress-tested with JMeter to handle 2,000+ QPS under concurrent load.
- Enhanced data ingestion and memory usage by integrating Redis caching, fixing leaks, and designing scalable connectors with Airbyte.
- Automated workflows and test pipelines across both startups using CI/CD in Azure DevOps, Robot Framework, and Dockerized test environments, reducing deployment times by 50% and achieving 100% test coverage.