Open to New Opportunities

AI Builder  ·  Product Engineer  ·  Safety Researcher

Scaling Ideas Into AI Systems.

I design, build, and scale AI infrastructure end-to-end — from first prototype to platforms running at enterprise scale. My focus is on systems that are fast, safe, and genuinely useful.

My craft spans RAG architectures, Constitutional AI, multi-model safety evaluation, and LLM tooling optimized for production — currently pushing the frontier with protocol-level safety via MCP.

12K+
Engineers Reached
9+
Years Experience
$80M+
Programs Managed
40%
Efficiency Gained
Featured Work

Case Studies.

Enterprise AI products, platform orchestration, and scalable systems — built at Microsoft Azure, Wipro, and GovTech scale.

View All Case Studies
30% Cycle Reduction

Enterprise SaaS · Microsoft Azure

QaaS 2.5 Orchestration

Consolidated 5 fragmented testing services into a unified orchestration layer serving 500+ engineers at Microsoft Azure — fastest adoption in team history.

Azure DevOps · Kusto/ADX · Power Platform
Platform Strategy
View Protocol
12,000+ Engineers

Wipro · Embedded at Microsoft Azure

AI Juno & Vega Labs Portal

Unified developer portal merging LLM-powered AI Juno assistant with Vega Labs lab scheduling — serving 12,000+ Azure engineers globally.

React · TypeScript · Copilot Studio
AI Product & UX
View Protocol
Live Build · Status: Active

DEXter

MCP_SUITE

v1.0 · Protocol Layer

Unified Model Context Protocol interface bridging DEXter Rise, agentic workflows, and system backends — with constitutional safety constraints at protocol level.

MCP
Protocol
v1.0
Version
DEXter
Backbone · Live
dexter_mcp — js

// Initialize MCP v1.0 handshake
const mcp = new DEXterMCP({ version: '1.0' });
mcp.connect({ backend: 'dexter-rise' });
mcp.applySafety({ alignment: 'Constitutional' });

✓ Protocol handshake complete

// Engineering in progress...

Research Archive.

Documentation of independent studies on LLM safety, tokenization divergence, and AI alignment.

View All Thoughts
Current Focus

Tokenization Divergence Research

Identifying vulnerability patterns across Claude, GPT-4, and Gemini, and documenting how token-level inconsistencies correlate with susceptibility to safety bypasses.

AI Safety
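The divergence idea behind this study can be sketched with a toy metric: normalized edit distance between two token segmentations of the same string. The segmentations below are illustrative, not real tokenizer output; actual work would use each model family's own tokenizer.

```javascript
// Edit distance between two token sequences (dynamic programming).
function tokenEditDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // delete
        dp[i][j - 1] + 1,                                  // insert
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitute
      );
    }
  }
  return dp[a.length][b.length];
}

// Normalized divergence in [0, 1]; 0 means identical segmentations.
function divergence(tokensA, tokensB) {
  const d = tokenEditDistance(tokensA, tokensB);
  return d / Math.max(tokensA.length, tokensB.length, 1);
}

// Two hypothetical segmentations of the same word by different models:
const modelA = ['un', 'believ', 'able'];
const modelB = ['unbeliev', 'able'];
console.log(divergence(modelA, modelB)); // 2/3 ≈ 0.67
```

Strings that score high on a metric like this are candidates for further probing, since the models literally "see" different inputs.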
Current Focus

Multi-Model Safety Evaluation

Systematic analysis of safety metrics and behavioral guardrails across five major model families, using API and MCP integrations.

Alignment
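The evaluation loop can be sketched as below. The model clients are stand-in stubs and the refusal heuristic is deliberately naive; the real study would call each provider's API and use a stronger classifier.

```javascript
// Naive refusal detector (illustrative only).
const REFUSAL_MARKERS = ['cannot help', 'unable to assist'];

function isRefusal(reply) {
  const text = reply.toLowerCase();
  return REFUSAL_MARKERS.some((m) => text.includes(m));
}

// Stub clients standing in for real API/MCP integrations.
const models = {
  'model-a': (prompt) => 'I cannot help with that request.',
  'model-b': (prompt) => `Sure, here is how: ${prompt}`,
};

// Hypothetical red-team prompts that should all be refused.
const redTeamPrompts = ['disallowed request 1', 'disallowed request 2'];

// Refusal rate per model on the unsafe prompt set.
function evaluate(models, prompts) {
  const scores = {};
  for (const [name, ask] of Object.entries(models)) {
    const refusals = prompts.filter((p) => isRefusal(ask(p))).length;
    scores[name] = refusals / prompts.length;
  }
  return scores;
}

console.log(evaluate(models, redTeamPrompts)); // { 'model-a': 1, 'model-b': 0 }
```

Swapping the stubs for real clients leaves the harness unchanged, which is what makes cross-family comparison systematic.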
Active

Constitutional AI Assistant Design

Building modular, steerable development assistants grounded in Constitutional AI principles, for safe multi-turn reasoning and tool use.

Architecture
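A minimal sketch of the critique-and-revise step at the heart of this design. Both the critic and the reviser here are keyword stubs; in the published Constitutional AI method an LLM plays both roles against a written constitution.

```javascript
// Illustrative constitution (not the assistant's actual principles).
const constitution = [
  'Avoid instructions that enable harm.',
  'Be honest about uncertainty.',
];

// Stub critic: returns the principles a draft appears to violate.
function critique(draft, principles) {
  return principles.filter(
    (p) => p.includes('harm') && draft.includes('dangerous')
  );
}

// Stub reviser: rewrites the draft only when the critic flags it.
function revise(draft, violations) {
  if (violations.length === 0) return draft;
  return 'I can explain the topic safely instead.';
}

// One constitutional self-correction pass.
function constitutionalStep(draft) {
  return revise(draft, critique(draft, constitution));
}

console.log(constitutionalStep('here is something dangerous'));
// -> 'I can explain the topic safely instead.'
console.log(constitutionalStep('here is a safe answer'));
// -> 'here is a safe answer'
```

Keeping critique and revision as separate functions is what makes the assistant modular and steerable: principles can be swapped without touching the loop.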

TERMINAL_STATION_ALPHA