Hello, I'm
AI and Full-Stack Developer | Building Reliable, Ethical & Scalable Tech
Get To Know More
3+ years
AI Systems & Full Stack Development
B.Eng. Computer Engineering, Software Engineering Major
Toronto Metropolitan University
I’m a Software and AI Engineer designing scalable tools where technology meets social impact. My work spans LLM prompt pipelines, distributed data systems, and full-stack applications for education, policy, and agriculture startups, all built with clarity, accessibility, and long-term reliability and scalability in mind.
From Python prompt-evaluation pipelines and SQL-driven ETL systems to WCAG-compliant React platforms, I build from a standpoint of backend engineering, AI safety, and human-focused design. I care deeply about building products that feel as intentional as they are intelligent: the kind of systems teams can trust, scale, and grow with as their bottlenecks shift.
Explore My
I build intelligent systems that combine retrieval, reasoning, and reliability. Using LangChain and LlamaIndex, I’ve designed retrieval-augmented and multi-agent pipelines for onboarding and document intelligence. I’ve integrated OpenAI and Hugging Face APIs with caching, batching, and scoring to make GPT systems faster and more accurate. My prompt evaluation frameworks process over a thousand model outputs a day, reducing hallucinations by more than 60%. I also work with vector databases like Qdrant and pgvector to embed, index, and version knowledge with clear traceability.
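The caching layer mentioned above can be sketched in a few lines of Python. This is a minimal, illustrative version: `call_model` is a stand-in for whichever API client (OpenAI, Hugging Face) actually serves the request, and the exact-match key is the simplest possible cache policy.

```python
import hashlib
from typing import Callable, Dict


def cached_completion(call_model: Callable[[str], str],
                      cache: Dict[str, str],
                      prompt: str) -> str:
    """Return a cached response when the exact prompt was seen before.

    `call_model` is a hypothetical stand-in for a real API call; the
    cache is keyed by a hash of the prompt so repeated requests skip
    the network entirely.
    """
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in cache:
        cache[key] = call_model(prompt)  # only hit the API on a cache miss
    return cache[key]
```

In practice the dictionary would be replaced by Redis or a TTL cache, but the miss-then-store shape stays the same.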
My backend experience blends structure and scalability. I build secure REST APIs and automation services using FastAPI, Flask, and Node.js, often integrating them with Slack bots or data-driven workflows. I design relational and NoSQL schemas in PostgreSQL and MongoDB for simulations, dashboards, and lab automation. My ETL pipelines clean, chunk, and embed data while caching frequent requests for speed. I also implement authentication and rate-limiting with OAuth, JWT, and scoped roles to keep every endpoint fast and secure.
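The "chunk" step of an ETL pipeline like the one above can be sketched as a small overlapping-window function; the sizes here are illustrative defaults, not the values used in any particular pipeline.

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks ready for embedding.

    Overlap preserves context across chunk boundaries so a sentence cut
    in half by one chunk is still intact in the next.
    """
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```

Real pipelines usually chunk on token or sentence boundaries rather than raw characters, but the sliding-window logic is the same.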
On the frontend, I focus on clarity and accessibility — building responsive dashboards and admin tools in React and Tailwind that balance function and simplicity. I manage CI/CD pipelines through GitHub Actions to automate testing, linting, and deployments. My builds are containerized with Docker and deployed on Vercel or Hugging Face Hub for reproducibility and monitoring. With Figma and WCAG 2.1 design principles, I make sure each interface is thoughtful, intuitive, and human-centered.
Browse My Recent
A containerized FastAPI inference service integrating Hugging Face Transformers for real-time text sentiment prediction. Designed with Uvicorn for async concurrency and validated via Jupyter-based parallel request testing.
Built with Docker and the distilbert-base-uncased-finetuned-sst-2-english model for optimized low-latency inference.
Features async I/O, batching, and scaling via Gunicorn workers, with planned additions including NGINX load balancing, structured logging, and Prometheus metrics.
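The batching behaviour described above can be approximated with a small asyncio micro-batcher. This is a sketch, not the service's actual code: `predict_batch` stands in for a real batched Transformers pipeline call, and the class name and timing defaults are assumptions.

```python
import asyncio
from typing import Callable, List


class MicroBatcher:
    """Collect concurrent requests and run them through the model in one batch.

    Requests that arrive within `max_wait` seconds of each other are
    grouped, up to `max_batch`, amortising per-call model overhead.
    """

    def __init__(self, predict_batch: Callable[[List[str]], List[str]],
                 max_batch: int = 8, max_wait: float = 0.01):
        self.predict_batch = predict_batch
        self.max_batch = max_batch
        self.max_wait = max_wait
        self.queue: asyncio.Queue = asyncio.Queue()
        self._worker = None

    async def predict(self, text: str) -> str:
        if self._worker is None:  # lazily start the batching loop
            self._worker = asyncio.create_task(self._run())
        fut = asyncio.get_running_loop().create_future()
        await self.queue.put((text, fut))
        return await fut

    async def _run(self):
        while True:
            batch = [await self.queue.get()]  # block until one request exists
            loop = asyncio.get_running_loop()
            deadline = loop.time() + self.max_wait
            while len(batch) < self.max_batch:
                timeout = deadline - loop.time()
                if timeout <= 0:
                    break
                try:  # gather more requests until the deadline passes
                    batch.append(await asyncio.wait_for(self.queue.get(), timeout))
                except asyncio.TimeoutError:
                    break
            results = self.predict_batch([text for text, _ in batch])
            for (_, fut), result in zip(batch, results):
                fut.set_result(result)
```

A production version would add error propagation to the waiting futures and a shutdown path, but this shows the queue-and-deadline core of micro-batching.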
A FastAPI microservice demonstrating end-to-end observability through Grafana, Prometheus, Tempo, and Loki. Features OpenTelemetry instrumentation for trace-to-log correlation and metric exemplars, deployed via Docker Compose.
Implements trace injection across three FastAPI services using the OpenTelemetry SDK with Prometheus metrics, Tempo tracing, and Loki log pipelines.
Load tested via Locust and k6 to validate distributed tracing and resource efficiency, with metrics visualized in Grafana dashboards for real-time debugging and performance monitoring.
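The trace-to-log correlation described above can be sketched with the standard library alone. This is a simplified stand-in: in the real service the OpenTelemetry SDK supplies the trace id, whereas here a `contextvars` variable and a logging filter play that role, and all names are illustrative.

```python
import contextvars
import logging
import uuid

# Stand-in for the OpenTelemetry trace context: each request gets an id,
# and a logging filter stamps it onto every record emitted while handling it.
trace_id_var = contextvars.ContextVar("trace_id", default="-")


class TraceIdFilter(logging.Filter):
    """Copy the current trace id onto each log record."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.trace_id = trace_id_var.get()
        return True


def handle_request(logger: logging.Logger) -> str:
    """Simulate one request: assign a trace id, then log under it."""
    trace_id = uuid.uuid4().hex
    trace_id_var.set(trace_id)
    logger.info("handling request")  # this line now carries the trace id
    return trace_id
```

With a formatter such as `"%(trace_id)s %(message)s"`, every log line can be joined back to its trace in Tempo, which is exactly the correlation Loki queries rely on.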
A Streamlit-based analytics app for visualizing Bluecoins expense data using Python, SQL, and Docker. Designed to support intuitive budget tracking and category insights with a responsive, minimal interface.
Developed an ETL pipeline from CSV → SQL → Streamlit UI using pandas and SQLAlchemy, containerized via Docker Compose for reproducibility. Provides dynamic charts on income trends, category flow, and net balance with future integrations planned for OAuth login and hosted deployment.
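The CSV → SQL leg of that pipeline can be sketched with the standard library; the `category`/`amount` columns here are assumed for illustration and do not match the real Bluecoins export schema.

```python
import csv
import io
import sqlite3


def load_expenses(conn: sqlite3.Connection, csv_text: str) -> None:
    """Load a hypothetical expense export into SQLite.

    Assumes columns named `category` and `amount`; a real export would
    need its own column mapping and type cleaning.
    """
    conn.execute("CREATE TABLE IF NOT EXISTS expenses (category TEXT, amount REAL)")
    rows = [(r["category"], float(r["amount"]))
            for r in csv.DictReader(io.StringIO(csv_text))]
    conn.executemany("INSERT INTO expenses VALUES (?, ?)", rows)


def totals_by_category(conn: sqlite3.Connection) -> dict:
    """Aggregate spending per category, as a dashboard chart would consume it."""
    cur = conn.execute("SELECT category, SUM(amount) FROM expenses GROUP BY category")
    return dict(cur.fetchall())
```

The deployed app uses pandas and SQLAlchemy instead of raw `sqlite3`, but the extract-load-aggregate shape is the same.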
A serverless financial chatbot that guides users through retirement investment planning. Built with AWS Lex for NLP dialogue management and AWS Lambda for real-time portfolio recommendations based on user risk profiles.
Implements Lambda-backed validation for user intents and secure financial calculations with algorithm-driven portfolio allocation based on risk tolerance and horizon. Built with serverless architecture for scalability and low-cost deployment, featuring an integrated conversational flow designed for accuracy, responsiveness, and ease of use.
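A risk-and-horizon allocation rule of the kind the Lambda applies can be sketched as follows. The weights and the horizon adjustment are invented for illustration; the deployed logic is not reproduced here.

```python
def allocate_portfolio(risk_tolerance: str, years_to_retirement: int) -> dict:
    """Hypothetical rule-based split between equities and bonds.

    Higher risk tolerance raises the equity share; a horizon shorter
    than 20 years shifts the mix toward bonds, one point per year.
    """
    base_equity = {"low": 30, "medium": 50, "high": 70}[risk_tolerance]
    equities = max(0, min(100, base_equity + min(years_to_retirement - 20, 0)))
    return {"equities": equities, "bonds": 100 - equities}
```

For example, a medium-risk user 10 years from retirement would land at a 40/60 split under these made-up rules, while the same user with 30 years ahead keeps the full 50/50 base mix.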
Get in Touch
I'm always interested in discussing new opportunities, collaborations, or just having a chat about technology and innovation.