Building Governed AI with Enterprise Compliance for Wealth Management
Product Execution

Building trust through responsible AI governance

In wealth management, AI isn't just about intelligence—it's about trustworthy intelligence. I led the design and deployment of a multi-agent AI system on Google Cloud Platform that doesn't just deliver insights; it does so with enterprise-grade governance, regulatory compliance, and ethical guardrails built in from day one.


The Challenge & Solution

The Challenge

Financial services face a unique AI paradox: clients demand sophisticated AI-driven insights, but regulators demand explainability, fairness, and human oversight. How do you build a system that's both powerful and compliant?

The Solution

A governance-first approach to AI product development, embedding compliance controls throughout the entire AI lifecycle—from data strategy to deployment monitoring.


🤖 The System: Multi-Agent AI for Comprehensive Wealth Advisory

Five specialized agents working in harmony:

| Agent | Capability | Governance Controls | GCP Stack |
|---|---|---|---|
| Portfolio Agent | Client portfolio information & analysis, model portfolio comparison | PII encryption at rest • Access audit logs • GDPR data minimization | BigQuery, Vertex AI |
| Product Agent | Product information & personalized recommendations | Bias testing across demographics • Explainability via RAG citations • Human review for high-stakes recs | Vertex AI Search, Embeddings API |
| Operations Agent | Order status & cash transfer updates | Real-time data validation • Automated anomaly detection • Transaction audit trail | Cloud Functions, Firestore |
| Market Intelligence Agent | Investment Committee outlook & market insights | Source attribution (no hallucinations) • Temporal bias monitoring • Fact-checking layer | Vertex AI, Document AI |
| Compliance Agent | Fee details & corporate action reporting | Regulatory disclosure checks • Automated compliance validation • Version-controlled disclaimers | Cloud Storage, BigQuery |

🔐 Governance Architecture: The Four Pillars

I structured the governance approach around the four core characteristics of trustworthy AI, aligned with global standards (NIST AI RMF, GDPR, EU AI Act, MAS AI Verify):

| Pillar | Core Principle | Key Controls | Result |
|---|---|---|---|
| 🧭 1. Human-Centric | AI augments, not replaces | Human-in-loop >$100K • RM override capability • "Talk to human" always visible | 94% RM satisfaction |
| ⚖️ 2. Accountable | Clear ownership & traceability | DRI for each agent • Full audit logs • Kill switch (3-person auth) | 15-min incident response time |
| 🔍 3. Transparent | Explainable decisions | Model cards published • RAG source citations • "AI Assistant" disclosure | 89% client trust score |
| ⚖️ 4. Legal & Fair | Bias-free, compliant | Quarterly bias audits • GDPR/PDPA compliance • Fairness metrics tracking | Zero discrimination complaints |

Foundation: NIST AI RMF + GDPR + EU AI Act + MAS Guidelines


1. Human-Centric Design

Implementation:

Human-in-the-loop for all portfolio recommendations above $100K
Relationship managers can override AI suggestions with audit trail
"Talk to human advisor" button always visible
AI augments advisors, never replaces them

Outcome: 94% RM satisfaction; zero escalations due to AI forcing decisions
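The routing rule above can be sketched in a few lines. This is a minimal illustration, not the production code; the `Recommendation` type and function names are hypothetical, and only the $100K threshold comes from the policy described here.

```python
from dataclasses import dataclass

HUMAN_REVIEW_THRESHOLD_USD = 100_000  # per the human-in-the-loop policy

@dataclass
class Recommendation:
    client_id: str
    action: str
    notional_usd: float

def route_recommendation(rec: Recommendation) -> str:
    """Route a recommendation to auto-delivery or mandatory RM review."""
    if rec.notional_usd > HUMAN_REVIEW_THRESHOLD_USD:
        return "human_review"   # RM must approve or override, with audit trail
    return "auto_deliver"       # surfaced directly, still logged
```

In practice the real router also weighs product risk ratings and client suitability flags; the notional threshold is simply the hard floor that can never be bypassed.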

2. Accountable & Traceable

Implementation:

Defined Provider/Deployer responsibilities (EU AI Act compliance)
Every AI decision logged with model version, input data, timestamp
Clear DRI (Directly Responsible Individual) for each agent
Kill switch protocol with 3-person authorization
Incident response plan with 6-stage process (Prep → ID → Contain → Eradicate → Recover → Learn)

Outcome: Full audit trail for regulatory reviews; 15-minute incident response time
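The shape of a decision log entry can be sketched as below. This is an assumed illustration of the "model version, input data, timestamp" requirement, not the actual schema; hashing the inputs is one way to make entries traceable without persisting raw PII.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(agent: str, model_version: str, inputs: dict, decision: str) -> dict:
    """Build one audit entry; in production this would be shipped to Cloud Logging."""
    payload = json.dumps(inputs, sort_keys=True)  # canonical form so hashes are stable
    return {
        "agent": agent,
        "model_version": model_version,
        "input_sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "decision": decision,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```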

3. Transparent & Explainable

Implementation:

Model cards for each agent documenting capabilities, limitations, bias testing results
"You're chatting with AI" disclosure on every interaction
RAG (Retrieval-Augmented Generation) provides source citations for all market insights
Plain-language privacy notice (not legal jargon)
Dashboard showing which data feeds each agent

Outcome: 89% client trust score; zero "AI is a black box" complaints
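The citation mechanic is simple to show in miniature: every generated insight carries numbered references back to the retrieved documents. A hedged sketch, with a hypothetical document shape (`title`, `date`):

```python
def answer_with_citations(answer: str, retrieved: list[dict]) -> str:
    """Append numbered source citations so every market insight is attributable."""
    lines = [answer, "", "Sources:"]
    for i, doc in enumerate(retrieved, start=1):
        lines.append(f"[{i}] {doc['title']} ({doc['date']})")
    return "\n".join(lines)
```

The real pipeline attaches citations per claim rather than per answer, but the principle is the same: no retrieved source, no statement.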

4. Legal & Fair

Implementation:

Bias audits: Quarterly testing across client demographics (age, gender, nationality, portfolio size)
Fairness metrics: Demographic parity analysis; confusion matrix tracking (targeting >95% recall, >80% precision)
Regulatory compliance: GDPR (7 principles), MAS Fair Dealing, PDPA, EU AI Act (Limited Risk classification)
Data governance: 8 Fair Information Practices (FIPs) embedded in data pipeline
Privacy by Design: Pseudonymization by default; data minimization; purpose limitation

Outcome: Passed MAS Technology Risk audit; zero discrimination complaints
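The two fairness metrics above reduce to short formulas. This sketch shows the arithmetic behind the dashboard's demographic parity ratio (target >0.95) and the confusion-matrix precision/recall tracking; the function names are illustrative.

```python
def demographic_parity_ratio(rate_group: float, rate_baseline: float) -> float:
    """Ratio of favourable-outcome rates for a segment vs. the baseline segment."""
    return rate_group / rate_baseline

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Standard confusion-matrix metrics used in the quarterly bias audits."""
    precision = tp / (tp + fp)  # of recommendations made, how many were right
    recall = tp / (tp + fn)     # of good opportunities, how many were found
    return precision, recall
```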


📊 The Governance Lifecycle in Action

I implemented the NIST AI Risk Management Framework as the operational backbone:


| Phase | Activities | Key Deliverables |
|---|---|---|
| GOVERN (Foundation) | Cross-functional AI governance committee • Risk tolerance statements • Diverse team (PM, data science, legal, compliance, social scientist) | Governance charter, role definitions, risk appetite document |
| MAP (Discovery) | Risk classification (Limited Risk under EU AI Act) • Impact assessments (AIA, DPIA) • Stakeholder mapping (clients, RMs, regulators) | Risk inventory, impact assessment reports, stakeholder register |
| MEASURE (Testing) | TEVV (Test, Evaluate, Verify, Validate) protocols • Bias testing with confusion matrix • Performance benchmarking • Red teaming for adversarial attacks | Test reports, bias audit results, performance dashboards |
| MANAGE (Mitigation) | Prioritized risk backlog • Control implementation • Continuous monitoring • Incident response activation | Risk treatment plan, monitoring dashboards, incident logs |

Continuous Improvement: This isn't a one-time process. We iterate monthly, feeding learnings from MANAGE back into GOVERN.



📂 Sample Governance Artifacts

To demonstrate the rigor of our governance approach, here are representative examples across the NIST AI RMF phases:

GOVERN Phase: AI Governance Charter

AI System Governance Charter - Multi-Agent Advisory Platform

Version: 2.1 | Effective Date: March 2025 | Next Review: September 2025

Purpose & Scope

This charter establishes governance for the Multi-Agent AI Advisory Platform serving wealth management clients. The system is classified as Limited Risk under EU AI Act Article 52 (transparency obligations) but treated as High Risk for internal governance given financial impact.

Roles & Responsibilities

AI Product Owner: Vishi Rajvanshi (DRI for product decisions, risk prioritization)
AI Ethics Lead: [Compliance Officer] (bias audits, ethical review)
Data Protection Officer: [DPO] (GDPR/PDPA compliance, data subject rights)
Technical Lead: [Engineering Manager] (model performance, infrastructure security)
Legal Counsel: [Legal] (regulatory compliance, contract review)

Decision Authority Matrix

| Decision Type | Authority | Escalation |
|---|---|---|
| Model deployment (A/B test) | AI Product Owner | Governance Committee if >10% users |
| Kill switch activation | Any DRI + 1 peer confirmation | Immediate notification to CEO |
| New data source integration | DPO + Technical Lead | Governance Committee if PII |
| Bias threshold adjustment | AI Ethics Lead + Product Owner | Legal if regulatory impact |

Risk Appetite Statement

Bias: Demographic parity within 5% (target: 3%)
Accuracy: >90% for portfolio recommendations (target: 95%)
Privacy: Zero tolerance for unauthorized PII access
Explainability: All high-stakes decisions (>$50K) must have human-interpretable rationale

MAP Phase: Risk Inventory

| Risk ID | Category | Description | Likelihood | Severity | Risk Level | Owner |
|---|---|---|---|---|---|---|
| AI-R-001 | Bias | Portfolio recommendations favor high-net-worth clients over mass-affluent segment | Occasional | Moderate | 🟡 Medium | AI Ethics Lead |
| AI-R-002 | Hallucination | Market Intelligence Agent generates non-factual investment outlook | Occasional | Critical | 🟠 Medium-High | Product Owner |
| AI-R-003 | Privacy | RAG system inadvertently exposes client A's data when answering client B's query | Improbable | Critical | 🟡 Medium | DPO |
| AI-R-004 | Adversarial | Prompt injection bypasses content filters to access unauthorized data | Occasional | Critical | 🟠 Medium-High | Technical Lead |
| AI-R-005 | Drift | Model performance degrades due to market regime change (e.g., 2020 COVID crash) | Probable | Moderate | 🟠 Medium-High | Technical Lead |

Key Risk Treatments:

AI-R-002 (Hallucination): Implemented RAG with source citations + human review for all market outlook content
AI-R-004 (Adversarial): Deployed meta-prompts, input sanitization, rate limiting, and quarterly red team exercises
AI-R-005 (Drift): Weekly performance monitoring, automated alerts, 6-month retraining cycle
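The AI-R-005 treatment reduces to a weekly comparison against a frozen baseline. A minimal sketch, assuming the 10% kill-switch threshold stated in the risk treatment plan and an illustrative 5% alert threshold:

```python
def drift_status(baseline_f1: float, rolling_f1: float,
                 alert_pct: float = 0.05, kill_pct: float = 0.10) -> str:
    """Classify weekly model health; >10% degradation triggers the kill-switch protocol."""
    degradation = (baseline_f1 - rolling_f1) / baseline_f1
    if degradation > kill_pct:
        return "kill_switch"   # escalate per the 3-person authorization protocol
    if degradation > alert_pct:
        return "alert"         # page the Technical Lead, investigate regime change
    return "healthy"
```

Vertex AI Model Monitoring handles the feature-distribution side of drift; this check covers the outcome side, where degradation is only visible once labels arrive.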

MEASURE Phase

Model Card: Portfolio Recommendation Agent v2.3

Last Updated: September 2025 | Owner: Product Team

Model Details

Model Type: Multi-objective optimization engine + LLM-based explanation layer
Training Data: 15,000 historical client portfolios (2018-2024), 8,500 fund performance records
Foundation Model: Vertex AI PaLM 2 (fine-tuned on financial advisory corpus)
Deployment: Google Cloud Run (auto-scaling), Vertex AI Prediction

Intended Use

Generate portfolio rebalancing recommendations for relationship managers serving accredited investors. Recommendations consider risk profile, existing holdings, market outlook, and diversification constraints.

Performance Metrics (as of Sept 2025)

Recommendation Acceptance Rate: 78% (RMs accept AI suggestion without modification)
Sharpe Ratio Improvement: +0.18 average improvement vs. pre-AI portfolios (12-month trailing)
Precision: 92% (recommended portfolios meet risk/return targets)
Recall: 89% (identifies 89% of beneficial rebalancing opportunities)

Bias Testing Results

| Demographic | Recommendation Quality (F1 Score) | Parity Gap |
|---|---|---|
| Age <40 | 0.88 | -2% vs baseline |
| Age 40-60 | 0.90 | Baseline |
| Age >60 | 0.89 | -1% vs baseline |
| Portfolio <$200K | 0.85 | -5% vs baseline ⚠️ |
| Portfolio $200K-$1M | 0.90 | Baseline |
| Portfolio >$1M | 0.91 | +1% vs baseline |
| Male | 0.90 | Baseline |
| Female | 0.89 | -1% vs baseline |
| Singapore | 0.90 | Baseline |
| India | 0.88 | -2% vs baseline |

Identified Bias & Mitigation

The model performs worse for smaller portfolios (<$200K) due to optimization constraints (minimum lot sizes, transaction costs). Mitigation: flagged for human review; a small-portfolio-specific model is in development for Q1 2026.

Quarterly Bias Audit - Q3 2025

Auditor: AI Ethics Lead + External Consultant | Date: October 1, 2025

| Segment | Status | Details | Action |
|---|---|---|---|
| Gender | ✅ PASS | Male F1: 0.90; Female F1: 0.89 | |
| Age | ✅ PASS | <40 F1: 0.88; 40-60 F1: 0.90 | |
| Geography | ⚠️ WATCH | Singapore F1: 0.90; India F1: 0.88 | |
| Portfolio Size | 🔴 FAIL | <$200K F1: 0.85; >$200K F1: 0.90 | |

MANAGE Phase: Risk Treatment Plan

| Risk ID | Current Risk Level | Mitigation Strategy | Owner | Target Date | Residual Risk |
|---|---|---|---|---|---|
| AI-R-002 (Hallucination) | 🟠 Medium-High | Implement RAG with verified sources only • Human review for all market outlook • Confidence scoring + uncertainty display | Product Owner | Completed (Aug 2025) | 🟢 Low |
| AI-R-004 (Adversarial) | 🟠 Medium-High | Meta-prompt injection defense • Input sanitization (max 500 tokens) • Rate limiting (20 queries/min/user) • Quarterly red team exercises | Technical Lead | Completed (Sept 2025) | 🟡 Medium |
| AI-R-005 (Drift) | 🟠 Medium-High | Weekly performance monitoring • Automated drift detection alerts • Model retraining every 6 months • Kill switch for >10% degradation | Technical Lead | Ongoing | 🟡 Medium |

Monitoring Dashboard - Key Metrics (Real-Time)

| Category | Metric | Current Value | Target |
|---|---|---|---|
| Performance | Portfolio Agent Latency (p95) | 3.8s | <5s |
| Performance | Product Agent Accuracy | 94.2% | >90% |
| Performance | Market Intelligence Hallucination Rate | 0.8% | <2% |
| Bias | Demographic Parity (Gender) | 0.98 | >0.95 |
| Bias | Demographic Parity (Age) | 0.97 | >0.95 |
| Bias | Demographic Parity (Portfolio Size) | 0.94 ⚠️ | >0.95 |
| Security | Adversarial Attempts Blocked (7-day) | 23 | All blocked |
| Privacy | Data Privacy Violations (30-day) | 0 | 0 |
| Adoption | RM Adoption Rate | 87% | >80% |
| Trust | Client Trust Score (NPS) | 89% | >85% |

🛠️ Technical Implementation: GCP-Native Governance Stack

Why Google Cloud Platform?

🏗️ Three-Layer Architecture

Layer 3: Operational Governance (Control & Monitoring)

Cloud Logging → Centralized audit trails for all AI decisions

Cloud Monitoring → Real-time alerting on anomalies

IAM → Role-based access control (least privilege)

Org Policy → Automated guardrails enforcement

Security Command Center → Unified compliance dashboard

Layer 2: AI/ML Governance (Model Management)

Vertex AI → Model versioning, explainability, deployment

Vertex AI Model Monitoring → Drift detection, performance tracking

Vertex AI Feature Store → Centralized feature management + access control

AI Platform Pipelines → Reproducible, auditable ML workflows

Layer 1: Data Governance (Foundation)

BigQuery → Data warehouse with column-level security + audit logs

Dataplex → Automated data quality checks + lineage tracking

Data Loss Prevention API → Automatic PII detection & redaction

Cloud KMS → Encryption key management (CMEK)
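The detect-and-redact pattern that the Data Loss Prevention API automates can be shown in miniature. This regex sketch is illustrative only: DLP provides managed infoType detectors with far better coverage, and the NRIC pattern below is an assumed simplification.

```python
import re

# Illustrative stand-ins for DLP's managed infoType detectors.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SG_NRIC": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # assumed Singapore NRIC shape
}

def redact_pii(text: str) -> str:
    """Replace detected PII with a labeled placeholder before text leaves the boundary."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

In the deployed pipeline this step sits in front of both model training data and agent context windows, so no raw identifier reaches a prompt or a log line.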

