What companies think Gen AI looks like
What it actually is
Governance & Oversight
Policy & Standards
Use Case Approval
Risk Classification
Audit Logging
Version Control
Human Ownership
Escalation Paths
Vendor Review
Use Case Definition
Data Sources
Knowledge Base
People
Process
Data Quality Assessment
⇒
Data Ingestion
Data Cleaning / Normalization
Metadata / Lineage
Permissions / Access Controls
PII / Sensitive Data Handling
Content Freshness / Re-indexing
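The ingestion steps above can be sketched as a minimal pipeline. This is an illustrative stand-in, not a specific library: `normalize`, `redact_pii`, and `ingest` are hypothetical names, and the email regex is only one example of sensitive-data handling.

```python
import re

# Hypothetical normalization step: strip control characters, collapse whitespace.
def normalize(text: str) -> str:
    text = re.sub(r"[\x00-\x08\x0b-\x1f]", "", text)
    return re.sub(r"\s+", " ", text).strip()

# Hypothetical PII handling: redact email addresses before indexing.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def redact_pii(text: str) -> str:
    return EMAIL.sub("[REDACTED_EMAIL]", text)

def ingest(raw: str, metadata: dict) -> dict:
    """Clean a document and attach lineage metadata for later re-indexing."""
    cleaned = redact_pii(normalize(raw))
    return {"text": cleaned, "lineage": metadata}
```

A real pipeline would add more redaction patterns (phone numbers, IDs) and enforce permissions before anything reaches the index.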
⇒
Vendor Choice
Model Size
Context Window
Base vs. Fine-Tuned
System Prompt
Few-Shot Examples
Reasoning Scaffolds
Prompt Versioning
Chunking Strategy
Embedding Model
Vector Store
Retrieval Evaluation
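The chunking / embedding / vector-store steps above fit together roughly as below. This is a toy sketch: the bag-of-words `embed` stands in for a real embedding model, fixed-size `chunk` stands in for a real chunking strategy, and all class and function names are illustrative.

```python
import math
from collections import Counter

# Toy stand-in for an embedding model: bag-of-words term counts.
# A real pipeline would call an embedding API here.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(text: str, size: int = 8) -> list[str]:
    """Fixed-size word chunking; real systems often chunk by document structure."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

class VectorStore:
    def __init__(self):
        self.items: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        for c in chunk(text):
            self.items.append((c, embed(c)))

    def search(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

Retrieval evaluation then means scoring `search` against labeled query/chunk pairs, not just eyeballing outputs.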
⇒
Fine-Tuning
Continued Pre-Training
Synthetic Data Generation
RLHF / RLAIF
Dataset Curation
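Dataset curation for fine-tuning includes steps like deduplication. A minimal sketch, assuming near-duplicates are defined by case- and whitespace-insensitive equality (a real curation pass would use fuzzier matching and quality filters):

```python
# Hypothetical curation step: drop exact and near-duplicate training examples
# before fine-tuning. Normalization here is just lowercasing + whitespace collapse.
def curate(examples: list[str]) -> list[str]:
    seen: set[str] = set()
    kept: list[str] = []
    for ex in examples:
        key = " ".join(ex.lower().split())
        if key not in seen:
            seen.add(key)
            kept.append(ex)
    return kept
```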
⇒
Tool Calling
Multi-Step Reasoning
Memory / State
Guardrails
Iteration
Quality Gates
Retry Logic
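The agentic loop above (tool calling, state, quality gates, retries) can be sketched as follows. `model`, `tools`, and `passes_gate` are stand-ins: a real system would call an LLM API and apply much stronger guardrails than a length check.

```python
# Sketch of an orchestration loop: call a model, run any requested tool,
# retry until a quality gate passes or attempts are exhausted.
def passes_gate(answer: str) -> bool:
    """Guardrail stand-in: reject empty or suspiciously short answers."""
    return len(answer.strip()) >= 3

def run_agent(model, tools: dict, task: str, max_retries: int = 3):
    context = task
    for _attempt in range(max_retries):
        step = model(context)  # returns {"tool": ..., "args": ...} or {"answer": ...}
        if "tool" in step:
            result = tools[step["tool"]](*step["args"])    # tool calling
            context = f"{context}\nTOOL RESULT: {result}"  # memory / state
        elif passes_gate(step["answer"]):                  # quality gate
            return step["answer"]
    raise RuntimeError("quality gate never passed")        # escalation path
```

The exception at the end is the escalation path: when retries run out, a human owner gets the case rather than the user getting a bad answer.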
⇒
API Gateway
Model Routing
Fallback Models
Caching
Rate Limits / Timeouts
Observability / Tracing
Circuit Breakers
Kill Switch / Rollback
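The serving-layer ideas above (routing, fallbacks, circuit breakers) combine roughly like this. The `Router` class and its threshold are illustrative; production gateways track failures per model over a time window and half-open the circuit to probe recovery.

```python
# Sketch of a serving layer: try a primary model, fall back on failure,
# and open a circuit after repeated errors so a failing backend is skipped.
class Router:
    def __init__(self, primary, fallback, failure_threshold: int = 3):
        self.primary, self.fallback = primary, fallback
        self.failures = 0
        self.threshold = failure_threshold

    def call(self, prompt: str) -> str:
        if self.failures >= self.threshold:  # circuit open: skip the primary
            return self.fallback(prompt)
        try:
            out = self.primary(prompt)
            self.failures = 0                # reset the breaker on success
            return out
        except Exception:
            self.failures += 1               # count toward the circuit breaker
            return self.fallback(prompt)     # fallback model
```

A kill switch is the manual version of the same idea: force `failures` past the threshold for every route at once.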
⇒
Eval Suites
Red Team
Silent Failure Detection
Output Validation
Regression Testing
Human-in-the-Loop
Latency Benchmarks
Online Monitoring
Drift Detection
A/B Testing
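A regression-eval harness in its simplest form: run the model over a fixed case set and flag a regression if the pass rate drops below a baseline. The substring check and the 0.9 baseline are illustrative; real eval suites use graded rubrics, exact-match scorers, or model-based judges.

```python
# Minimal regression-eval sketch: score a model on fixed prompt/expected pairs.
def run_evals(model, cases: list[tuple[str, str]], baseline: float = 0.9) -> dict:
    passed = sum(1 for prompt, expected in cases if expected in model(prompt))
    rate = passed / len(cases)
    return {"pass_rate": rate, "regression": rate < baseline}
```

Running this on every prompt or model change is what turns "eval suites" and "regression testing" from diagram boxes into a gate in the release process.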
⇒
Deployment
Monitoring
Cost Management
Feedback Loop
Workflow Integration
People Training
Change Management
KPI Tracking
ROI Measurement
Optimization
◀ Continual Improvement ◀
Cross-Cutting Risks & Controls
Hallucination Risk: Output Validation
Legal: IP Exposure, Data Residency
Ethical Use: Transparency, Explainability
Security: Prompt Injection, Access Control
Bias: Model Provenance, Historical Distortion
by Bradley W. Petersen & Orbis Scientia